Monday, September 21, 2009

Reasons and Invariability

The biggest problem with my argument is that I have assumed that the reasons an ideal agent will have when confronted with a society of non-ideal agents are the same as those it would have if confronted with a society of ideal agents. That is, all I really demonstrated is that, with regard to other ideal agents, our actions should be non-maleficent.

Part of what it means to be rational is a sensitivity to situations. Different situations provide different reasons in virtue of what those situations are. It is also not impossible that non-ideal agents would present a different set of reasons than ideal agents do.

One way this would occur is if non-ideal agents behaved in obviously irrational ways, i.e. in ways which could not truly be justified by reasons. In such cases, it seems that we have not as yet derived any principle for acting, except to act in ways consistent with the end of promoting our own happiness.

However, people could also act in ways that, while not necessarily motivated by reason, accord with it. Given that we are unable to look into people's motivations, we cannot, in these situations, distinguish such people from agents perfectly motivated by reasons (i.e. ideal agents). As such, the exact same reasons that apply in a society of ideal agents also apply in such situations.

Given that there are many possible ends and reasons that are prima facie consonant with happiness, the mere fact that other people's actions would not be consonant with our ends if we performed them is not evidence that their actions are in fact irrational.

However, someone who displays maleficence towards those whom we can regard as having behaved in not unjustified ways has violated the precepts of reason. This is sufficient evidence that those who are so maleficent are not ideal agents. As such, the duties which we have towards ideal agents are not necessarily owed to them. They may be, but that will have to be further demonstrated.

However, for now, the preceding paragraphs have been sufficient to establish the Non-Aggression Principle (NAP), with one little caveat: maleficence towards presumptive ideal agents is not the only indicator of irrationality. There is possibly some set of actions that is non-maleficent and yet irrational. Selling oneself into slavery is one, as is any violation of the various duties that an ideal agent has towards himself, as well as those that he has towards presumptive ideal agents. There might also be some set of actions that can never, in any possible circumstance, be consonant with one's own happiness.

A future post will try to explicate what types of reasons govern our behaviour towards demonstrably non-ideal agents.


[UPDATE]
As a poster has asked, here is a guide for those who are lost about what I am talking about now.

You can start with Ideal Agent Approach for a quick look at my original thesis. You can also look at the linked essay.

Then you can move on to Concessions, where I try to restate my case while dropping a lot of unnecessary metaphysical baggage. This kind of stands alone, as it repeats a lot of what was said in the Ideal Agents post as well as adds a whole lot more. To give credit where it is due, I've drawn quite a bit from Richard Chappell, a Princeton postdoc.

Then read Mutuality for some elaborations, clarifications and moves I've made in response to criticisms made by various friends.

The current post you are reading now is in response to a criticism from a friend, Wee Kien. Though he didn't state the problem in precisely that way, that is the closest approximation to the question he asked that poses a genuine problem to my system.

As it stands now, my system is vastly different from what I started off with, but I've still managed to arrive at at least some rational duties that bear a family resemblance to our moral intuitions.

Wednesday, September 16, 2009

Mutuality, parasitism, independent standing and happiness

In my previous post, I made the assertion that


All agents desire the satisfaction of at least their higher order desires*. (This we can call happiness, if for no other reason than lack of a better term. Some argument may be needed to satisfactorily conclude that this is indeed what we commonly mean when we talk about happiness. However successful that argument, desire satisfaction is what I mean when I talk about happiness from now on.)
I would like to reword this. Generally speaking, we are happy when our desires are satisfied. But not all desires: I would be less happy if my desire for a chocolate-heavy, exercise-free lifestyle were satisfied than if it were frustrated, because I have a second-order desire to suppress this first-order desire. So I think I should amend it to: I would be happy if I managed to satisfy my first-order desires as they would be if they were correctly ordered by my second-order desires. Note that this is yet again different from welfare, which is what is desirable for my own sake. It would be somewhat false to say that we all happen to desire what is actually desirable for our own sake. It is not necessarily the case that I desire what is good for me; for example, smokers desire what is bad for them.
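
(For the programmatically inclined, here is a minimal sketch of this amended definition in Python. It is purely illustrative: everything in it, from the `Desire` class to the endorse/suppress rule, is my own invented scaffolding, not part of the argument.)

```python
from dataclasses import dataclass

@dataclass
class Desire:
    content: str       # e.g. "a chocolate-heavy, exercise-free lifestyle"
    satisfied: bool    # whether the world currently satisfies it

@dataclass
class SecondOrderDesire:
    target: str        # the first-order desire it is about
    endorse: bool      # True = endorse it, False = suppress it

def corrected_desires(first_order, second_order):
    """First-order desires as they would be once correctly ordered
    (here: endorsed or suppressed) by second-order desires."""
    verdicts = {s.target: s.endorse for s in second_order}
    return [d for d in first_order if verdicts.get(d.content, True)]

def happy(first_order, second_order):
    """Happiness on the amended definition: all corrected
    first-order desires are satisfied."""
    return all(d.satisfied for d in corrected_desires(first_order, second_order))

# The chocolate example: the first-order desire is frustrated, but a
# second-order desire suppresses it, so its frustration does not
# count against happiness.
fo = [Desire("a chocolate-heavy, exercise-free lifestyle", satisfied=False)]
so = [SecondOrderDesire("a chocolate-heavy, exercise-free lifestyle", endorse=False)]
print(happy(fo, so))  # True
```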

It is with happiness in this sense that it becomes an analytic truth that all agents necessarily have their own happiness as an end.

Another issue that came up is the following:


However, it is not necessarily the case that all these candidates are actually reasons to act, i.e. those considerations are not necessarily sufficient to determine what right action is. It is even possible that one particular set of reasons may be the only game on the table, though it is not certain that this is the case. But even if it were to be the only available set, being a candidate for REAL reasons to act would require that it be possible that everyone adopt those reasons.


This is false. The possibility that one set of reasons could be the only possible set of reasons only sets up the possibility that it may have to be universalisable. That approach does not yield a necessary universalisation requirement.

Let us look at what we have. We do know that a society of ideal agents is possible. However, we cannot simply sneak in conditions that actually force the conclusion that all these ideal agents act from the same reasons. That is the conclusion that we are trying to show: that there are indeed common reasons that apply to all agents. Therefore we must accept that there will be a heterogeneity among these ideal agents.

Before continuing on, it should be noted that unpacking the concept of ideal agency illuminates a concept of efficacy. I previously noted that an action is fitting with respect to an agent insofar as it is expressive of his/her agency. But what do we mean by "agent"? An agent is an autonomous actor. With regard to the issue of the slave, we addressed the issue of fittingness with respect to agent qua autonomy. An additional criterion of fittingness would also have to be with respect to agent qua actor. A person is an agent insofar as he is an actor, i.e. insofar as he is potent. An impotent actor is a contradiction. Our ideal agents have to be maximally potent, i.e. they must be maximally successful in pursuing their ends and responding appropriately to reason.

We know that all agents hold their own happiness as an end. This doesn't, in itself, entail much about the content of the reasons and ends that are consistent with their own happiness. We could have "nasty" ends as well as "nice" or "neutral" ones. The terms are used in this context as evaluatively neutral: no judgements are made as to whether these are good or bad (at least not yet; calling something nasty is just a way to label reasons and ends that are destructive, enslaving, etc.). Considered from the agent's point of view, there doesn't seem to be anything wrong with me killing and raping if this is what is consonant with my own pleasure. The fact that it reduces someone else's happiness does not give the nasty agent a reason to desist. However, if we consider the society as a whole, we see a problem. Fulfilling the nasty person's happiness requires diminishing the happiness of the victim (we can even specify that the victim does not share the same type of reasons and ends as Nasty). Fulfilling the happiness of the putative victim requires frustrating the happiness and other ends of Nasty. I.e. whatever the actual status of these sets of reasons, both cannot be real reasons for acting (either one, the other or neither is). If both are present, not all the agents in the society can be described as ideal. Some are non-ideal, as they are incapable of maximally securing their happiness.

There is also the possibility that the "victim's" happiness is served by being preyed upon by Nasty. Let's call this one Sucker. Sucker, however, is an impossibility. We have already decided that it is contrary to what it means to be autonomous for agents to desire to do things that they do not desire. (This is not some strange screed against BDSM. BDSM is not real slavery, as it is just a game. Both parties desire what is happening to them, and safe-words etc. ensure that they never cross the line. Remember, kids: keep it Safe, Sane and Consensual.)

Now, given that Nasty's reasons and its victim's cannot both be real reasons, because they each interfere with the other's happiness, we should try them out individually. We could separate all the nasty types from the non-nasty types. If among the nasty types there are still some that are victims, we can iteratively separate them out until there are no conflicts of interest or there is only one set left. Consider that nastiness is such that it aims to have victims. If there are no victims left, because all appropriate targets are gone, then their happiness is frustrated and the agents cannot be said to be ideal. If there are no restrictions on who gets to be a victim, they target each other, and their happiness is frustrated in doing so. This is not an ideal society either.

On the face of it, this rules out all maleficent goals, i.e. we have a duty of non-maleficence.

Note: the process whereby I iteratively separated out the nasties and checked to see if the society they formed was ideal simply is a check of whether their reasons are universalisable, i.e. the categorical imperative.
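
(The separation procedure lends itself to a toy implementation. The sketch below is my own simplification: each agent is reduced to a flag saying whether its ends require victims, and the case analysis in `is_ideal` just transcribes the reasoning above rather than proving anything independently.)

```python
def is_ideal(group):
    """An ideal society: no member's happiness is frustrated within it."""
    nasties = sum(1 for a in group if a["nasty"])
    others = len(group) - nasties
    if nasties and others:
        return False  # predation frustrates the victims' happiness
    if nasties >= 2:
        return False  # unrestricted nasties prey on each other
    if nasties == 1:
        return False  # a lone nasty has no victims left: ends frustrated
    return True       # only non-maleficent agents remain

def separate(group):
    """One round of the iterative separation: nasties vs. the rest."""
    nasty = [a for a in group if a["nasty"]]
    rest = [a for a in group if not a["nasty"]]
    return [g for g in (nasty, rest) if g]

society = [{"name": "Nasty1", "nasty": True},
           {"name": "Nasty2", "nasty": True},
           {"name": "Victim", "nasty": False}]

print(is_ideal(society))                         # False: mixed society has conflicts
print([is_ideal(g) for g in separate(society)])  # [False, True]
# Nasty ends fail in every partition, so they are not universalisable;
# only the non-maleficent subgroup forms an ideal society.
```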

Wednesday, September 9, 2009

Concessions and recapitulations

Referencing my earlier post on ideal agents, and my first posts on philosophy, here are a few concessions.

1. My earlier criticism of free-standing value stands. It is a very queer object. I ought not to rely on such in order to build my ethical theories.
2. Consequentialism is not totally nonsense. Given that X is valuable, it is irrational (everything else being equal) to choose a situation B which has less of X over a situation A which has more.

That said, of course everything else is not equal, and there may be other types of value/reasons. So, the most general statement that we can make is that there are reasons to act (P1). It is almost tautological to say that these reasons are such that if a person properly motivated by reason were aware of them, everything else being equal, they would be motivated to act appropriately. But this merely highlights the fact that not everybody is motivated by reason. Sometimes (in fact, quite often), people are motivated by biases, half-formed feelings, etc. This can lead them to perform actions which they in fact have sufficient reason to perform, or actions which they do not. Hence, most of us are imperfect agents.

Can these reasons conflict? Consider weight loss. I have reasons to stick to my diet and go for morning jogs (I have reasons to be healthy, to keep to the fitness standards demanded by the military, I don't want to go for remedial training, etc.). I also have reasons to eat lots of ice cream and not exercise (the ice cream is really nice, and running is exhausting and unpleasant). There is no reason why, in general, these considerations do not qualify as reasons. Yet the answer is not indeterminate. We generally presume that there is a right answer to at least some of these questions: perhaps I, all things considered, have better reason to exercise more and eat more healthily, because it makes me healthier and allows me to avoid remedial training by the army, and the displeasure from not exercising probably outweighs the displeasure from exercising, such that I should prefer exercising. However, not all dilemmas revolve around the same type of reason (pleasure in both cases). In fact, many dilemmas involve different sorts of reasons. Should I return money lent to me by a friend, or should I donate the money to charity, where it will make more people happy? This dilemma involves promoting happiness on the one hand, and promise keeping on the other. Any way in which we resolved the dilemma would involve limitations being placed on the scope of these reasons, i.e. particular reasons may apply only under certain conditions, certain reasons may be overriding, etc.

And then, of course, certain issues are indeed indeterminate. Whether I buy chocolate or vanilla ice cream certainly depends on which flavour I prefer, but it is inconceivable that there be any overarching reason to prefer one flavour over the other. Another way in which things could be indeterminate is if there is no good reason why certain reasons should be given prior consideration over others. And of course things are also indeterminate when the same types of reasons happen to weigh equally on both sides of the issue.

How would we actually evaluate reasons?

The first consideration is whether an action is appropriate to the agent, or to the reason (or any other possible combination). For example, there is no reason that would motivate a rational agent to contract himself into slavery. Slavery involves the negation of the agential capacity; the act of consigning oneself to slavery is invalidated the instant it is performed. This suggests one criterion of fittingness: actions and reasons are fitting to the extent that they increase agential capacity/activity (P2). A schema of reasons where everything was indeterminate would not be very fitting, as agents would then lack decisive reason to do anything. This would presumably invalidate what it means to be a rational agent who has reasons to act (refer to P1). Therefore, all things being equal, we should prefer schemata which are tidy and properly defined over those which are poorly defined. Let's call this the coherency principle (P3).

This leads us to the second consideration. The reasons, and the principles that regulate them, should be as coherent as possible, i.e. maximally coherent. If there were strange entities that provided these reasons, coherency would naturally be a requirement; but having eschewed saying anything definite about queer objects, the previous paragraphs, I think, give some reason as to why we should prefer coherent and neatly defined sets of reasons. It is also worth noting that there seem to be some things that all agents will desire. All agents desire the satisfaction of at least their higher order desires*. (This we can call happiness, if for no other reason than lack of a better term. Some argument may be needed to satisfactorily conclude that this is indeed what we commonly mean when we talk about happiness. However successful that argument, desire satisfaction is what I mean when I talk about happiness from now on.) Therefore, internal coherency requires not just some arbitrary set, but must include all goals/ends that agents a priori have (P3 restated).

Now we arrive at the third consideration. I have previously rehearsed the more generalised and raw form of the argument at Chappell's blog, Philosophy etc. I will reproduce it below:

My argument for such would be along the lines of the following.

A1. Reasons for acting are such that an idealised agent would be aware of them and act on them.

A2. From A1, any and all idealised agents could possibly act on such reasons.

A3. A society of idealised agents is conceivable.

A4. From A3, such a society is logically possible (even and especially (from A2) when all agents are acting on those reasons).

C. From A4, genuine reasons to act should be such that all agents in this idealised society could respond to them, i.e. such reasons should be universalisable.
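
(For those who like it in symbols, here is one possible regimentation, entirely my own: read R(r) as "r is a genuine reason to act", I(a) as "a is an idealised agent", Act(a, r) as "a acts on r", and the diamond as logical possibility.)

```latex
% One possible regimentation of A1-C (my own, not from the original comment).
\begin{align*}
\text{A1: }   & \forall r\,[R(r) \rightarrow \forall a\,(I(a) \rightarrow \mathit{Act}(a,r))] \\
\text{A2: }   & \forall r\,[R(r) \rightarrow \forall a\,(I(a) \rightarrow \Diamond\,\mathit{Act}(a,r))] \\
\text{A3/A4: } & \Diamond\,\forall a\, I(a) \\
\text{C: }    & \forall r\,[R(r) \rightarrow \Diamond\,\forall a\,\mathit{Act}(a,r)]
\end{align*}
```

Laying it out this way makes the worry raised in the next paragraph visible: C needs the possibility to hold for all agents acting jointly, not merely for each agent taken one at a time.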


The above argument is unsatisfactory. A4 especially sneaks in the premise that there are very determinate ways in which these reasons play out. But that is cheating if I want to say that there are categorical reasons (moral reasons) that demand certain things of certain people, and that these moral reasons are rationally required. In order to give a more complete justification, I will have to justify my move at A3. Now, it is the case that A3 is true, but I haven't adequately explained why I made the move and whether or not it is too stringent.

One reason to consider a society of agents is that reasons for acting often involve reasons about how we treat other agents. How we treat non-agents is also important, and that question may suffer from being ignored amid all these agent-centred considerations, but this does not change the fact that agent-centred considerations are an important aspect of morality. Consider also that Jesus was crucified, Krishna was shot (accidentally) and Rama was exiled by his step-mother, Kaikeyi. Bad things often happen to good guys. While rationality at least in part involves using reason to survive adverse conditions, it is not any guarantee of survival. However, a minimum requirement of rationality is that rational agents should be able to co-exist with other rational agents. We, often with our multitude of irrationalities, manage to co-exist; it shouldn't be a barrier to fully and ideally rational agents. This would be true even if the reasons for acting were thoroughly heterogeneous.

Now, to justify A4: considerations P2 and P3 narrowed down the list of possible reasons for acting. However, it is not necessarily the case that all these candidates are actually reasons to act, i.e. those considerations are not necessarily sufficient to determine what right action is. It is even possible that one particular set of reasons may be the only game on the table, though it is not certain that this is the case. But even if it were to be the only available set, being a candidate for REAL reasons to act would require that it be possible that everyone adopt those reasons (P4). (This post is getting long, and talking about parasitic, mutualistic and independently universalisable schemata would make it even longer.)

It should be noted that P3 and P4 yield the categorical imperative: act only on the maxim that you can will to be universal law. P3 requires that we be able to will it (not just conceive it), and P4 requires that we be able to conceive that it become universal law.
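
(Schematically, and again as my own shorthand: writing U(m) for the universalised form of a maxim m, C for "conceivable" and W for "willable", the two-pronged test reads:)

```latex
% Schematic only; U(m): m universalised, C: conceivable, W: willable.
\mathrm{Permissible}(m) \;\leftrightarrow\;
  \underbrace{C(U(m))}_{\text{P4}} \;\wedge\; \underbrace{W(U(m))}_{\text{P3}}
```

In Kant's terms, a maxim failing the first conjunct involves a contradiction in conception, and one failing the second a contradiction in the will.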

Just some more on how this principle works. Here, I quote from my response further down in Chappell's post:

Let's try egoism for a start. The egoist's maxim is "do what is in your own self-interest" (even if it involves sacrificing the interests of other agents).

Under universalising conditions, that would contradict the egoist's ends (which are to promote his own self-interest), as there would be many other agents who would sacrifice his interests for theirs, i.e. each agent in the ideal polity would have difficulty satisfying his own ends.

Therefore, the egoist has to modify his maxim to "do what is in your own self-interest, but only so far as it allows others to pursue their own interests similarly", i.e. in addition to a duty to himself, the egoist has also added a duty of non-maleficence to his list. But once he has done that, he has ceased to be an egoist.

The CI, I think, is not so strong as to yield things like "never lie" or "never kill" (as Kant envisioned), but may yield something like the weaker Rossian prima facie duties (which include things like promoting people's well-being, duties of fidelity, gratitude, etc.). Deontological intuitions (as well as our consequentialist ones) can be explained adequately by reference to the categorical imperative. I believe that you see our deontological inclinations as springing from the decision procedures we commonly use to promote the good (but I'm not very comfortable with that)…

…If you notice, the CI doesn't actually provide the reason why an egoist should embrace non-maleficence or why we should abandon value monism, only that we should (the CI seems to give a criterion of the fittingness of reasons, not the reasons themselves). But we can actually bootstrap these reasons in, in order to comply with the CI.
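
(To make the universalisation failure in the egoism example concrete, here is a toy simulation. It is entirely my own construction, with invented payoff numbers; nothing in it comes from the original exchange. It illustrates just the point above: universalised predation leaves each egoist worse at his own ends than universalised constrained egoism.)

```python
# Toy model: each agent starts with some interest-satisfaction; predation
# transfers a little to the predator but destroys more of the victim's,
# so universal predation leaves every agent worse at its own ends.

def run_society(n_agents, predatory, rounds=10, start=100.0):
    satisfaction = [start] * n_agents
    for _ in range(rounds):
        if predatory:
            # Everyone preys on everyone: each agent gains a little from
            # each victim but loses more to each other predator.
            gain = 2.0 * (n_agents - 1)
            loss = 5.0 * (n_agents - 1)
            satisfaction = [s + gain - loss for s in satisfaction]
        else:
            # Constrained egoists: each pursues its own ends unmolested.
            satisfaction = [s + 1.0 for s in satisfaction]
    return satisfaction

print(run_society(5, predatory=True)[0])   # 100 + 10*(8 - 20) = -20.0
print(run_society(5, predatory=False)[0])  # 110.0
# Universalised pure egoism is self-defeating by the egoist's own
# standard; the constrained maxim is not.
```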


The interesting point is the last part of the quoted response, which I seem to be exploring, but may discard at a later date. It seems to me that the duties derived from the categorical imperative do not so much give reasons for moral facts as give the shape of what moral facts look like.

Also note that the categorical imperative, without further assumptions, gives us duties to one's own happiness which are limited by duties of at least non-maleficence towards others. Furthermore, if you notice, I arrived at this conclusion without having to posit the definite existence of categorical reasons, only their logical possibility. This, I think, demonstrates that morality, i.e. categorical, universal, authoritative reasons, is rationally required.


*This may or may not seem tautological, and while according to Kant holy wills lack sensuous aspects and so cannot desire, I take desire to mean to seek as an end. Agents who have no ends to seek are not really agents, after all. This leads to an unrelated discussion about God: namely, that an omniscient and omnipotent being which lacks for nothing is not very agent-like at all. All that leaves (as per Spinoza) is that God is Being, or simply Is, and bears only a tenuous resemblance to agents.