BROOME REVIEWS SKORUPSKI

Review:

Posted Wednesday, November 15, 1995

"Agent-neutrality, consequentialism, utilitarianism . . .: a terminological note" Utilitas, 7 (1995) pp. 49-54, by John Skorupski (St. Andrews)
Reviewed by John Broome (University of Bristol) ([email protected])

(This review is forthcoming as `Skorupski on agent-neutrality', Utilitas, 7 (1995), pp. 315-17. Preprinted here by permission.)

According to John Skorupski, a reason for action is agent-neutral if it can be expressed by means of an agent-neutral reason-predicate.[1] He defines a predicate P to be a reason-predicate if

(1) (x)(y)(Py -> there is a reason for x to do y),

where x ranges over agents and y ranges over acts open to x. He defines a reason-predicate to be agent-neutral if it contains no free occurrence of the agent-variable x.

Consider this reason for doing an act: that the act gives an expected benefit to whoever does it. This reason can be expressed by the reason-predicate `gives an expected benefit to an agent who does y' in

(2) (x)(y)(y gives an expected benefit to an agent who does y -> there is a reason for x to do y).

This predicate contains no free occurrence of x, so it is agent-neutral according to Skorupski's definition. Yet it expresses an egoistic reason for action, and egoistic reasons should definitely not be counted as agent-neutral. Skorupski's definition is too broad, then.
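
To make the point explicit (the formalization is mine, and rests on one natural reading of the predicate): the agent embedded in `gives an expected benefit to an agent who does y' is bound by its own quantifier, so (2) can be spelt out as

\[ \forall x\,\forall y\,\bigl(\forall z\,(z \text{ does } y \rightarrow y \text{ gives an expected benefit to } z) \;\rightarrow\; \text{there is a reason for } x \text{ to do } y\bigr). \]

Nowhere does x occur free in the antecedent, so Skorupski's test is satisfied even though the reason is egoistic.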

Thomas Nagel defines agent-neutrality differently.[2] He does not deal directly with reasons to do an act, but with reasons to promote an event, and he takes events to include acts. He defines a predicate P to be a reason-predicate if

(3) (x)(y)(Py -> there is a reason for x to promote y),

where x ranges over agents and y ranges over events. He defines a reason-predicate P to be agent-neutral if it contains no free occurrence of the variable x; otherwise he defines it to be agent-relative. Consider this reason for promoting an act: that the act gives an expected benefit to whoever does it. The statement that embeds this reason,

(4) (x)(y)(y gives an expected benefit to an agent who does y -> there is a reason for x to promote y),

says that if an act gives an expected benefit to whoever does it, then anyone has a reason to promote this act, not merely a person who can gain an expected benefit for herself by actually doing it. As a reason to promote rather than a reason to do, then, this is not an egoistic reason. Nagel would define it as agent-neutral, and would be right to do so.

Skorupski compares a pair of principles like these: that one ought to keep promises, and that one ought to maximize the keeping of promises.[3] Nagel would formulate them, respectively:

(5) (x)(y)(y is the keeping of a promise by x -> x has a reason to promote y)

(6) (x)(y)(y is the keeping of a promise -> x has a reason to promote y).

(When y is an act of x's, x promotes y by doing y.) On Nagel's definition, these express respectively an agent-relative and an agent-neutral reason. Most authors have accepted this classification. They have agreed that the principle that one ought to keep promises is agent-relative. I followed the crowd.[4] But Skorupski disagrees with me. He would formulate this principle:

(7) (x)(y)(y is the keeping of a promise -> x has a reason to do y).

Because the reason-predicate `is the keeping of a promise' contains no free occurrence of x, Skorupski claims the principle is agent-neutral. Of a parallel principle,[5] he says it `lacks nothing in impartiality'. But this principle certainly lacks something in agent-neutrality. When someone might or might not keep a promise, it gives a reason only to one person, the person whose promise it is: it gives her a reason not to break her promise. On the other hand, the principle that one ought to maximize the keeping of promises gives a reason to everyone; it gives us all a reason to try and make sure this promise is kept. Nagel's formulation picks out this difference accurately. This is another case where Skorupski counts as agent-neutral a reason that ought not to be counted as agent-neutral.
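
For concreteness (the instance is mine): suppose Ann has made a promise, and let k be the act of Ann's keeping it, an act open only to Ann. Remembering that in Skorupski's schema y ranges over acts open to x, the two principles instantiate roughly as

\[ (7)\colon\ \text{Ann has a reason to do } k \text{, and no one else does, since } k \text{ is open to no one else;} \]

\[ (6)\colon\ \text{every agent } x \text{ has a reason to promote } k. \]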

Nagel has good reason to deal with reasons to promote, rather than reasons to do.[6] Intuitively, an agent-neutral reason is one that applies to any agent, rather than just to one agent or a restricted group. Provided P contains no free occurrence of x, Nagel's statement (3) can be written:

(8) (y)(Py -> (x)(there is a reason for x to promote y)),

which makes it clear that an agent-neutral reason applies to anyone. Skorupski's statement (1) cannot be transformed in the same way. Whereas anyone can have a reason to promote a particular event (or so Nagel assumes), only some people can have a reason to do a particular act: those who can do it. Therefore, Skorupski correctly restricts the range of the quantifier (y) in (1) to acts that are open to x. Formally, this means that (1) should be written:

(9) (x)(y)(y is open to x & Py -> there is a reason for x to do y).

If P contains no free occurrence of x, this can only be transformed to:

(10) (y)(Py -> (x)(y is open to x -> there is a reason for x to do y)).

So even when P contains no free occurrence of x, the reason it expresses applies only to people to whom the act is open. It is not truly agent-neutral, then. Consequently, reasons to do are inherently unsatisfactory bearers for the notion of agent-neutrality.
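
The two transformations rest on a standard quantifier-scope manoeuvre, which may be worth displaying in first-order notation (the abbreviations R(x,y) for `there is a reason for x to promote/do y' and O(x,y) for `y is open to x' are mine, introduced only for this sketch). When x does not occur free in Py,

\[ \forall x\,\forall y\,(Py \rightarrow R(x,y)) \;\Leftrightarrow\; \forall y\,(Py \rightarrow \forall x\,R(x,y)), \]

which takes (3) to (8); whereas

\[ \forall x\,\forall y\,((O(x,y) \wedge Py) \rightarrow R(x,y)) \;\Leftrightarrow\; \forall y\,(Py \rightarrow \forall x\,(O(x,y) \rightarrow R(x,y))), \]

which takes (9) no further than (10): the quantifier over agents cannot be detached from the openness condition.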

This will not be a significant limitation if acts are open to many people. Are they? It depends how they are individuated. If acts are individuated taking into account the identity of the agent (so my singing is a different act from your singing), only one person can do a particular act. Reasons to do acts individuated this way cannot be truly agent-neutral, therefore. But the more broadly individuated act of, just, singing is open to many people. So it may seem that, if acts are individuated without taking into account the identity of the agent, reasons to do them might be genuinely agent-neutral if they satisfy Skorupski's definition. But the following example shows that individuating acts broadly in this way leads to a different problem.

Suppose there is one chocolate on the table. Any one of us can eat it; the act of eating the chocolate is open to us all. Suppose each of us has a reason to do this act, and moreover it is the same reason: that eating the chocolate gives an expected benefit to whoever does it. (This is the egoistic reason expressed in (2).) Skorupski would count this as an agent-neutral reason because it applies to all of us who can do the act. But although it is a reason for each of us to do the same act, broadly individuated, of eating the chocolate, it does not give us a shared aim. It gives us conflicting aims. The intuitive notion of an agent-neutral reason requires us to share an aim,[7] but if acts are individuated without taking into account the identity of the agent, the same reason to do the same act can include many different aims. Broad individuation is therefore not an acceptable way to escape the limitations of (10).
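
The conflict can be spelt out (the names are mine, for illustration). Let e be the broadly individuated act of eating the chocolate, which both Ann and Ben can do. The egoistic reason in (2) then yields

\[ \text{a reason for Ann to do } e, \text{ grounded in the expected benefit to Ann should Ann do } e, \]

\[ \text{a reason for Ben to do } e, \text{ grounded in the expected benefit to Ben should Ben do } e. \]

The two reasons attach to one and the same act e, yet Ann's aim is satisfied only if Ann eats the chocolate and Ben's only if Ben does; there is nothing here for them to aim at together.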

However acts are individuated, agent-neutrality cannot be satisfactorily defined in terms of reasons to do an act. Skorupski's definition includes too much. Nagel's definition in terms of reasons to promote is more successful.

Notes

1. John Skorupski, `Agent-neutrality, consequentialism, utilitarianism . . .: a terminological note', Utilitas, 7 (1995), pp. 49-54.

2. Thomas Nagel, The Possibility of Altruism, Princeton University Press, 1970, p. 90. Nagel uses the terms `objective' and `subjective' for `agent-neutral' and `agent-relative'.

3. Actually, he compares the principle that one ought not to tell lies with the principle that one ought to minimize the telling of lies.

4. John Broome, Weighing Goods, Blackwell, 1991, p. 5.

5. That one ought not to tell lies.

6. In `Agent-relativity and the doing-happening distinction', Philosophical Studies, 63 (1991), pp. 167-85, David McNaughton and Piers Rawling formulate moral rules as injunctions to ensure the truth of a proposition, rather than as injunctions to act. This has the same effect.

7. See Derek Parfit, Reasons and Persons, Oxford University Press, 1984, p. 104.