Monday, September 21, 2009

Reasons and Invariability

The biggest problem with my argument is that I have assumed that the reasons an ideal agent will have when confronted with a society of non-ideal agents are the same as those when confronted with a society of ideal agents. That is, all I have really demonstrated is that, with regard to other ideal agents, our actions should be non-maleficent.

Part of what it means to be rational is a sensitivity to situations. Different situations provide different reasons in virtue of what those situations are. It is also not impossible that non-ideal agents present a different set of reasons than ideal agents do.

One way this could occur is if non-ideal agents behaved in obviously irrational ways, i.e. in ways that could not be truly justified by the reasons. In such cases, it seems that we have not yet drawn any principle for acting except to act in ways consistent with the end of promoting one's own happiness.

However, people could also act in ways that, while not necessarily motivated by reason, accord with it. Given that we are unable to look into people's motivations, in these situations we cannot distinguish them from agents who are perfectly motivated by reasons (i.e. ideal agents). As such, the exact same reasons that apply in a society of ideal agents also apply in such situations.

Given that there are many possible ends and reasons that are prima facie consonant with happiness, the mere fact that other people's actions would not be consonant with our ends if we performed them is not evidence that their actions are in fact irrational.

However, someone who displays maleficence towards those whom we can regard as having behaved in not unjustified ways has violated the precepts of reason. This is sufficient evidence that those who are so maleficent are not ideal agents. As such, the duties which we have towards ideal agents are not necessarily owed to them. They may be, but that will have to be further demonstrated.

However, for now, the preceding paragraphs have been sufficient to establish the Non-aggression Principle (NAP), with one little caveat: maleficence towards presumptive ideal agents is not the only indicator of irrationality. There is possibly some set of actions that are non-maleficent and yet irrational. Selling oneself into slavery is one, as is any violation of the various duties that an ideal agent has towards himself, as well as those that he has towards presumptive ideal agents. There might also be some set of actions that can never, in any possible circumstance, be consonant with one's own happiness.

A future post will try to explicate what types of reasons govern our behaviour towards demonstrably non-ideal agents.


[UPDATE]
As a poster has asked, here is a guide for those who are lost as to what I am talking about now.

You can start with Ideal Agent Approach for a quick look at my original thesis. You can also look at the linked essay.

Then you can move on to Concessions, where I try to restate my case while dropping a lot of unnecessary metaphysical baggage. This kind of stands alone, as it repeats a lot of what was said in the Ideal Agents post as well as adding a whole lot more. To give credit where it is due, I've drawn quite a bit from Richard Chappell, a Princeton postdoc.

Then read Mutuality for some elaborations, clarifications and moves I've made in response to criticisms made by various friends.

The post you are reading now is in response to a criticism from a friend, Wee Kien. Though he didn't state the problem in precisely that way, that is the closest approximation to the question he asked that poses a genuine problem for my system.

As it stands now, my system is vastly different from what I started off with, but I've still managed to arrive at at least some rational duties that bear a family resemblance to our moral intuitions.

3 comments:

  1. Could I have any hint on where to start reading rather than backtracking through posts?

  2. Hi

    You can start with Ideal Agent Approach for a quick look at my original thesis.

    Then you can move on to Concessions, where I try to restate my case while dropping a lot of unnecessary metaphysical baggage. This kind of stands alone, as it repeats a lot of what was said in the Ideal Agents post as well as adding a whole lot more. To give credit where it is due, I've drawn quite a bit from Richard Chappell, a Princeton postdoc.

    Then read Mutuality for some elaborations, clarifications and moves I've made in response to criticisms made by various friends.

    This post is in response to a criticism from a friend, Wee Kien. Though he didn't state the problem in precisely that way, that is the closest approximation to the question he asked that poses a genuine problem for my system.

    As it stands now, my system is vastly different from what I started off with, but I've still managed to arrive at at least some rational duties that bear a family resemblance to our moral intuitions.

    The above will be added to the original post as an update.

  3. Ok. I am here. I'm deep in the middle of something else, so I'll take a while to respond. But I've seen it. And I'm still not convinced by the way hahaha.

    Brian
