Ok, basically, people have been telling me that I've got way too many questionable assumptions. Also, people are not sure what I'm driving at, so I will try to do better.
1. What am I trying to show/achieve?
Given that, conceptually, we ought to do what ideal agents would do, the question arises of what exactly ideal agents would do. I'm trying to demonstrate that simply by considering the possibility of a society of ideal agents, we can rule out certain types of actions and ends. These considerations may or may not amount to applying the categorical imperative; I'm willing to let the cards fall where they may. Any duties that I extract from these considerations may well have narrow applications, but that can be dealt with later. Eventually, my project should yield some version of Ross's prima facie duties, or some Kantian wide duties, etc.
2. In order to reject certain actions or ends, I must conclude either that such actions are not conceivable, or that even if they are conceivable, a society where those ends are pursued is not ideal. To do this, I stipulated that my ideal agents had to be maximally happy/successful in the pursuit of their ends (happiness and successful pursuit of ends are taken to be interchangeable). Then, if these ends were somehow frustrated, a society would not be ideal. This could yield a duty of non-maleficence, since maleficent actions are simply those that frustrate others' ends. (The actual argument is more subtle, and its success may in fact be questionable, but that is irrelevant for the moment.)
3. In order to yield the condition that everybody be maximally happy, I posited that ideal agents were omnipotent. This was a questionable move, as the status of two conflicting omnipotent beings is quite indeterminate. Moreover, it would have been impossible to derive any duty of beneficence. That alone shouldn't make me reject the approach, but the impossibility of obtaining such a duty arises merely from the strange fact that, left to themselves, omnipotent agents would be able to achieve all their own ends and would not need any beneficence from others. This does not answer the question of whether we owe a duty of beneficence to people who may actually need it.
4. So now I propose to drop the omnipotence requirement and merely consider ideal evaluators, i.e. perfectly knowledgeable and rational beings. But lacking the omnipotence feature, how do I propose to reject societies which are not maximally happy? One way presents itself. Given perfect means-ends rationality (a bare minimum) and perfect knowledge, ideal evaluators still face a maximum-ends-fulfillment requirement. Only now, instead of supposing that all ends are fulfilled, it is that as many ends as possible are fulfilled: I can reject a world if choices other than the ones taken would have fulfilled more ends, to a greater extent. The task now is to plausibly derive duties of beneficence and non-maleficence (at least towards other ideal evaluators).
5. Means-ends rationality is taken as a given, i.e. it is rational to take the actions that will best satisfy your ends.
6. But this is also an exercise in analysing which ends are appropriate for pursuit and which are not. This means that I must have some way of rejecting certain ends. One way I can go about this is by simply saying that there are possible reasons to reject certain ends, and that ideal evaluators, in virtue of being perfectly rational and knowledgeable, would be aware of these reasons if they existed.
7. However, I may be begging the question if I simply stipulate that ends which are impossible to achieve are irrational to hold, and I may also be digging myself into a hole by doing so: it may be the case that few or no ends are truly completely realisable.
8. From 4, I will restate the society of ideal evaluators requirement.
An ideal evaluator is one who is perfectly rational and perfectly knowledgeable. He achieves his ends maximally (as distinct from completely): there are no actions he could take (given standard human potency) which would achieve more of his ends. A society of ideal evaluators is one where everyone maximally achieves their ends.
9. There are basically three types of interactions between people: conflicting interactions, where the two parties' ends cannot all be satisfied; cooperative interactions, where both parties' ends succeed or fail together; and independent interactions, where each party's ends succeed or fail on their own.
10. Whatever else we learn from economics and game theory, we know that people's ends are best satisfied when cooperative interactions are maximised and conflicting ones are minimised, i.e. the society of ideal evaluators is one with maximal cooperation.
11. Here follows an analysis of conflict. Some ends are conflict-prone: those whose satisfaction would preclude the satisfaction of ends that all agents (or maybe all humans) necessarily have. For example, everybody necessarily has their own happiness as an end. Actions which necessarily impinge on the target's happiness we call maleficent. Since it is in fact impossible for any clear-thinking being to accede to maleficence, and since conflict is minimal or absent in a society of ideal evaluators, maleficence is minimal or absent in a society of ideal evaluators.
12. Here follows an analysis of cooperation. Occupying similar logical space at the other end of the spectrum is beneficence. Some ends are necessarily cooperation-prone: the end of another person's happiness always coincides with that person's own end, namely his own happiness. Promoting another's happiness is beneficence, and it will also be found extensively in a society of ideal evaluators.
13. Cooperative endeavours require trust, and trust requires fidelity; fidelity will therefore be found in a society of ideal evaluators.
14. Given 11-13, and given that we ought to do what ideal evaluators would do, we have duties of non-maleficence, beneficence and fidelity.
15. Granted, these duties only seem to apply to other ideal evaluators, but that is the subject of another post.
16. One might ask why I ought to do what ideal evaluators would do, especially since they seem so different from us, with their perfect rationality and perfect knowledge. However, we take it as evident that what we ought to do is what we have most reason to do. We usually fail to do what we have most reason to do because we lack knowledge about the world and the particular situation, and because we fail to reason properly. While it is the case that we are ignorant, irrational creatures, it does not follow that we ought to be this way. We really ought to do what we have most reason to do, and by definition, that is simply what an ideal evaluator does.
17. We should also note that ideal evaluators are very consistent in their reasons. (This, I believe, is quite uncontroversial, and would be endorsed even by particularists.)
18. An interesting question is: who makes up the members of the hypothetical society of ideal evaluators? Is it idealised versions of ourselves and our countrymen? Any generic set of people? A set of people whose range of desires occupies the full logical space? Can we simply exclude utility monsters?
19. One should be very careful about the structure of my theory. I have not, as of yet, made any claim as to what the good is. All I have claimed is that perfectly rational and knowledgeable agents will maximally achieve their ends, and I have described the conditions in a society under which ends can be maximally realised, i.e. maximum cooperation.
20. One should note that things like the prisoner's dilemma and the tragedy of the commons seem to posit a conflict between individual rationality and group rationality. The payoff to any individual for defecting is always greater than that for cooperating, yet mass cooperation yields a larger total payoff. It seems the simplest way for cooperation to be individually rational as well as globally rational is for each person to internalise the "externalities", namely, for the payoff that the "opponent" receives to also be reflected back on the person. E.g. if by defecting I receive a payoff of +10 and my opponent -10, I must internalise everything by incorporating his payoff into mine. Then mutual cooperation, where each of us receives a payoff of +5, would be more rational: by internalising, I get a net payoff of +10 instead of 0.
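The arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not part of the argument: the +10/-10 and +5/+5 figures come from my example, while the mutual-defection payoff of (0, 0) is an assumed filler value just to complete the matrix.

```python
# Payoffs (my_payoff, opponent_payoff) indexed by (my_move, their_move).
PAYOFFS = {
    ("defect", "cooperate"): (10, -10),   # I exploit a cooperator
    ("cooperate", "defect"): (-10, 10),   # a defector exploits me
    ("cooperate", "cooperate"): (5, 5),   # mutual cooperation
    ("defect", "defect"): (0, 0),         # assumed value, for illustration only
}

def selfish_value(my_move, their_move):
    """Value before internalising: only my own payoff counts."""
    return PAYOFFS[(my_move, their_move)][0]

def internalised_value(my_move, their_move):
    """Value after internalising: the opponent's payoff is added to mine."""
    mine, theirs = PAYOFFS[(my_move, their_move)]
    return mine + theirs

# Against a cooperator, defecting is selfishly best (+10 beats +5)...
assert selfish_value("defect", "cooperate") > selfish_value("cooperate", "cooperate")
# ...but once the externality is internalised, cooperating wins (+10 beats 0).
assert internalised_value("cooperate", "cooperate") > internalised_value("defect", "cooperate")
```

The point the sketch makes concrete: internalising doesn't change the game's payoffs, only whose payoffs count in my evaluation, and that alone flips which move is rational.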
21. There are a variety of ways in which we could interpret this requirement to internalise the costs. One is by supposing that other people's payoffs really do matter. To be very clear, I am not assuming some notion of the good. Instead, I am concluding that some notion of the good is necessary in order to resolve the paradoxes inherent in prisoner's-dilemma situations.
22. An interesting question is whether the requirement to align individual rationality with global rationality can be restated as a formal requirement to act only on the maxim that you can will to be a universal law.
23. Are there any other substantive principles which would produce the same effect?
24. Just in case anybody is really confused, let me explicitly state my premises.
a) We ought to be rational i.e. we ought to do what we have most reason to do.
b) By rational, I mean means-ends rationality.
c) All agents necessarily desire their own happiness.
d) The reasons people act on should be consistent with one another.
e) An ideal evaluator is one who is perfectly rational and knowledgeable.
f) It follows that we ought to do what ideal evaluators would do
g) A society of ideal evaluators is conceivable, i.e. logically possible
The rest of it is just what follows logically from my premises.