
Should We Prevent Optimific Wrongs?

Published online by Cambridge University Press:  21 September 2015

ANDREAS L. MOGENSEN*
Affiliation: Jesus College, Oxford
Email: andreas.mogensen@philosophy.ox.ac.uk

Abstract

Most people believe that some optimific acts are wrong. Since we are not permitted to perform wrong acts, we are not permitted to carry out optimific wrongs. Does the moral relevance of the distinction between action and omission nonetheless permit us to allow others to carry them out? I show that there exists a plausible argument supporting the conclusion that it does. To resist my argument, we would have to endorse a principle according to which, for any wrong action, there is some reason to prevent that action over and above those reasons associated with preventing harm to its victim(s). I argue that it would be a mistake to value the prevention of wrong acts in the way required to resist my argument.

Type: Research Article
Copyright © Cambridge University Press 2015


References

1 Some philosophers put forward views that are like act-consequentialism except that they permit an agent-relative ranking of outcomes. See Dreier, Jamie, ‘Structures of Normative Theories’, The Monist 76 (1993), pp. 22–40; Dreier, ‘In Defence of Consequentializing’, Oxford Studies in Normative Ethics 1 (2011), pp. 97–119; Portmore, Douglas, Commonsense Consequentialism: Wherein Morality Meets Rationality (Oxford, 2011). For the purposes of this article, ‘act-consequentialism’ should be understood to require an agent-neutral ranking, such that the ordering of outcomes as better or worse does not vary from agent to agent.

2 For the sake of realism, this case is modified from its canonical description in Thomson, Judith J., ‘The Trolley Problem’, Yale Law Journal 94 (1985), pp. 1395–415. I encountered this improved version in a talk by Eric Schwitzgebel.

3 A large web-based survey found that only 11 per cent of participants thought it permissible to push the person onto the tracks in this kind of case. See Hauser, Marc, Cushman, Fiery, Young, Liane, Jin, R. Kang-Xing, and Mikhail, John, ‘A Dissociation Between Moral Judgments and Justifications’, Mind & Language 22 (2007), pp. 1–21.

4 For a summary of recent research on DDA, see Woollard, Fiona, ‘The Doctrine of Doing and Allowing’, Philosophy Compass 7 (2012), pp. 448–69.

5 These cases are due to Frances Kamm, Morality, Mortality, vol. 2 (Oxford, 1996), p. 90. They are based on examples first described by Philippa Foot in ‘Killing and Letting Die’, reprinted in her Moral Dilemmas (Oxford, 2002), pp. 78–87.

6 It should be uncontroversial for those who accept DDA that it is worse for me to push the hiker than to allow her to be pushed. The question is whether the difference is so great that permitting her to be pushed is permissible, and not merely wrong to a lesser extent.

7 See Kamm, Frances, ‘Rights beyond Interests’, in her Intricate Ethics (Oxford, 2007), pp. 237–84, at 252, and McMahan, Jeff, ‘Intention, Permissibility, Terrorism, and War’, Philosophical Perspectives 23 (2009), pp. 345–72, at 350.

8 Kamm, ‘Rights beyond Interests’.

9 For this view on the moral status of embryos, see George, Robert P. and Gomez-Lobo, Alfonso, ‘Statement of Professor George (Joined by Dr. Gomez-Lobo)’, in Human Cloning and Human Dignity: An Ethical Inquiry (Washington, D.C.: The President's Council on Bioethics, 2002), pp. 258–65; Gomez-Lobo, ‘The Moral Status of the Human Embryo’, Perspectives in Biology and Medicine 48 (2005), pp. 201–10.

10 For this view on the moral status of animals, see Francione, Gary, Animals as Persons: Essays on the Abolition of Animal Exploitation (New York, 2008); Regan, Tom, The Case for Animal Rights (Berkeley, 1983).

11 Shelly Kagan offers a similar case, about which he draws the same conclusions, in The Limits of Morality (Oxford, 1989), p. 164.

12 Temkin, Larry, Rethinking the Good: Moral Ideals and the Nature of Practical Reasoning (Oxford, 2012), p. 205.

13 See Thomson, Judith J., ‘The Trolley Problem’, and Montmarquet, James, ‘On Doing Good: The Right and the Wrong Way’, Journal of Philosophy 79 (1982), pp. 439–55.

14 Tamara Horowitz argues that intuitions taken to support DDA reflect nothing more than loss aversion in ‘Philosophical Intuitions and Psychological Research’, Ethics 108 (1998), pp. 367–85. However, Kamm argues convincingly that we continue to regard killing as worse than letting die even in cases where letting die represents a loss and/or killing represents a gain foregone: see ‘Moral Intuitions, Cognitive Psychology, and the Harming/Not-Aiding Distinction’, in Intricate Ethics, pp. 422–9.

15 Kahneman, Daniel and Tversky, Amos, ‘Prospect Theory: An Analysis of Decision under Risk’, Econometrica 47 (1979), pp. 263–92.

16 Tversky, Amos and Kahneman, Daniel, ‘The Framing of Decisions and the Psychology of Choice’, Science 211 (1981), pp. 453–8.

17 See Wallach, Wendell and Allen, Colin, Moral Machines: Teaching Robots Right and Wrong (Oxford, 2009). More generally, as robots and drones become more prevalent, they will increasingly be required to respond to morally relevant information about their surroundings. Particular concern surrounds the deployment of robots in war, on which see Arkin, Ronald, Governing Lethal Behavior in Autonomous Robots (Boca Raton, 2009).

18 With respect to the morality of deploying autonomous robots in battle, it seems to be taken as given that they should conform to the laws of war that govern human beings. See e.g. Arkin, Governing Lethal Behavior in Autonomous Robots. These laws arguably reflect non-consequentialist principles, such as the intention/foresight distinction: see McMahan, ‘Intention, Permissibility, Terrorism, and War’.

19 For helpful comments on previous drafts of this article I am grateful to Krister Bykvist, William MacAskill, and the audience at the Balliol Positive Ethics Seminar.