PART 1: ADAM M. WILLOWS
Morality and Rationality: Only Human?
Two children, Tom and Becky, are lost and trapped in a hole. They are exhausted and afraid, and Becky, the weaker of the two, is too tired to move. Tom provides Becky with food and water, and eventually they find a way to escape. They are saved.
Tom and Becky are mice, and their story is related in Chapter 4 of Marc Bekoff's Wild Justice (). I have taken the liberty of naming them, however, after Tom Sawyer and Becky Thatcher from The Adventures of Tom Sawyer who find themselves trapped for days in a cave. The story above belongs both to the fictional humans and to their real‐life murine counterparts.
Both Toms appear to have done a good thing in saving Becky. I think it would be fair, as Becky's father does, to “conceive a great opinion of Tom” (either Tom) and to declare that he is “no commonplace boy” (mouse). But when assessing the action of the Toms, there are two distinct courses to take. One is to say that these incidents are one example of the shared moral realms of justice, courage, praise, and blame that many creatures—human and nonhuman alike—occupy, albeit with some species‐specific differences in the nature of justice, courage, and so on. The other is to say that there is some relevant difference between the two situations such that while we may approve of the action of Tom the mouse, such approval is not moral approval. Humans – but not other animals – operate in the moral sphere.
I am in sympathy with the second option, and it is my aim in this half of our article to show why this is the more convincing alternative. To that end, I offer an analysis of the different uses of the term “morality” in our respective disciplines, and show that the two courses above are based on very different definitions of the term. One definition focuses on socially established behavioral norms and allows nonhumans to be moral agents. The other emphasizes the normative force of rationality, and likely restricts moral agency to humans. While each definition refers to distinct phenomena worthy of theoretical and practical study, referring to both as “morality” has the potential for profound confusion—a confusion which goes a long way to explaining the disagreement between our two fields. Rather than agree to disagree, I argue that the former definition fails to account for the unique human activity to which the latter refers.
What Does “Morality” Mean?
To a certain extent our disagreement is one over terminology, and the impetus for this article was born out of the realization over repeated discussions that, very often, biologists and anthropologists simply mean something quite different from philosophers and theologians when they say “morality.” Here, for example, is Bekoff's definition: morality is “a suite of interrelated other‐regarding behaviors that cultivate and regulate complex interactions within social groups” (Bekoff and Pierce , 7). Contrast this with Immanuel Kant's claim that “all moral concepts have their seat and origin completely a priori in reason… just in this purity of origin lies their dignity,” or Thomas Aquinas’ view that acts are moral “inasmuch as they proceed from the reason” (Kant , 4:411; Aquinas , 1a2ae 18:5). There may well be some overlap between these positions, especially in the behaviors identified as moral; nevertheless, I think it is clear that Kant and Bekoff do not have the same thing in mind when they talk about morality. In fact, I think this difference is indicative of a persistent divide in meaning.
Consider these further accounts of morality from the sciences. E. O. Wilson's influential text Sociobiology says that “Moral commitment is entirely learned… children simply internalize the behavioral norms of the society” (Wilson , 562–63). More recently, primatologist Frans De Waal () defines morality as “a compass for life's choices that takes the interests of the entire community into account.” Major philosophical and theological ethicists, on the other hand, suggest definitions like “the use of reason to answer the worldview‐shaping question ‘how should life be lived?’” (Chappell , 3). The Cambridge Dictionary of Philosophy describes morality as “an informal public system applying to all rational persons” (Gert , 586).
It is generally accepted in all of these fields that morality—whatever it is—is somehow action‐guiding or character‐forming. Moral systems, facts, precepts, or codes exert some kind of force on moral agents (whatever they are) such that moral agents are inclined to act, think, and/or live in a particular way. The key difference, it seems to me, lies in what this force is taken to be, or from where it is believed to originate. This is not enough to provide a complete account of morality (there will be disagreements over exactly what this force governs) but it is enough to identify a key component. Call these different views R and S:
- R: Morality involves or depends upon rationality.
- S: Morality involves or depends upon social norms.
Note that neither R nor S necessarily excludes the other; it is possible to say that morality is exclusively concerned with rationality or social norms, but also possible to say that it involves both. That we are social and political creatures is crucial to the ethics of rationalists like Aristotle, who begins his Ethics with the statement that his investigation “is a kind of political science” (Aristotle , 1094b 10). R and S should be seen as representative of a general difference in emphasis. Nor do I intend to homogenize either field. It is my understanding that many later anthropologists are in substantial disagreement with the basic ethical behaviorism of Wilson; and De Waal and Bekoff's view that reason is not a foundational part of morality per se does not prevent either from acknowledging that complex reasoning is a characteristic feature of human moral behavior. Disagreements certainly exist between philosophers, notably in the sentimentalist thought of David Hume and Francis Hutcheson and, more recently, Bernard Williams's attack on the institution of morality (Williams ). Despite the internal differences, though, there seems to be a general focus within anthropology on the social as the ground of morality, whereas within philosophy and theology rationality receives at least equal billing.
Who Gets to Be Moral?
It should now be clearer why there is division over the restriction (or not) of morality to humans. If morality is primarily grounded in rationality, then it makes sense to think of it as a distinctively human trait. Reason or rationality here does not mean anything like promotion of self‐interest, or dispassionate pragmatism (Korsgaard ). Instead, it is the capacity for justifiable thinking, valid inference, and logical thought. Practical reasoning, often taken as a hallmark of moral action, is the application of the above to action. Human capacities—in particular language and symbolic thought—give rise to the kind of abstract and practical reasoning that is (on this account) required for morality (Tse ). Nonhumans, lacking these capacities, cannot qualify as moral agents. This, however, is down to circumstance and not necessity: if nonhumans did exhibit these capacities they would indeed be moral agents. Kant in particular takes pains to emphasize this point: it is rationality, not humanity, that makes the difference here (Kant , 4:410–12).
Note that R does not entail that social norms or emotions play no role in morality; it can recognize them as extremely important. It simply holds that reason is necessary for morality, not that it is sufficient—although stronger versions of R may indeed claim rationality as the primary or overriding component of morality. Nor does R require that reason always be exercised; merely that the agent possesses the capacity to exercise it. This is a stronger theme in Aristotelian and Thomist virtue ethics than in Kantianism; the emphasis on the habitual nature of virtue means that, once formed, our dispositions are not wholly under our immediate control (Aristotle , 1114a 10–25). Aquinas also thinks it possible that we bear responsibility in cases where we neglect reason (Aquinas , 1a2ae 6.8).
On the other hand, if morality is grounded in social norms then it seems at least an open question whether it is restricted to humans. The question is, from what capacities are the relevant social norms derived? Human culture is unique in its complexity and depends on the human ability to create and share symbolic thought (Calcagno and Fuentes ). If symbolic thought, rationality, or some other distinctively human capacity plays a formative role in moral social norms then morality will, after all, be restricted to humans. Alternatively, if any social structure will suffice, then creatures such as ants or shrimps will qualify as moral agents. I suspect that most adherents of S would agree that this stretches the term “moral” beyond their intended usage, although given that the bounds of the moral are precisely what is at stake here, I cannot deny that this is a position open to them. However, the best candidate for the origin of morally relevant social norms is emotion or particular affective states. Bekoff highlights animal capacities for “empathy, forgiveness, trust, reciprocity and much more” and suggests that the difference between moral and nonmoral creatures is down to “strong emotional and cognitive cues” (Bekoff and Pierce , 3, 13). That nonhumans possess the capacity for these states will meet with little resistance. Alasdair MacIntyre notes that even thinkers who reject the presence of thought or belief in nonhumans “are generally careful not to deny that non‐human animals perceive, feel and in some cases give evidence of at least some intelligence” (MacIntyre , 13). Even Descartes, notorious for his view that animals are automata and censured by MacIntyre for his “silliness,” may in fact have thought that they were both feeling and conscious beings (Cottingham ).
The positions are now beginning to take shape. If R is correct, then morality is indeed distinctively human. If S is correct, then morality may or may not be exclusive to humans, depending on the origins of moral social norms. Call this extended position S2:
- S2: Morality involves or depends upon social norms and the relevant social norms are derived solely from capacities not unique to humans.
The best candidate for the origin of these social norms in S2 is emotion.
These positions are closely related to the debate between moral sentimentalism and moral rationalism. Moral sentimentalism covers several views, but they all agree that “emotions and desires play a leading role in the anatomy of morality” (Kauppinen ). Moral rationalism instead gives this leading role to reason. S2 must reject rationalism; but it is not enough to simply embrace any kind of sentimentalism. It is little surprise that De Waal references Hume—probably the most influential sentimentalist—approvingly (De Waal , 66). But Hume's sentimentalism does not go far enough to support S2. Although he agrees that sentiment is “that which renders morality an active principle,” he thinks that
in order to pave the way for such a sentiment, and give a proper discernment of its object, it is often necessary, we find, that much reasoning should precede, that nice distinctions be made, just conclusions drawn, distant comparisons formed, complicated relations examined, and general facts fixed and ascertained. (Hume )
Although Hume ponders the possibility of a prototypical moral sense in some animals, he also thinks that humans possess a kind of moral judgment that animals do not (Beauchamp ). I think that Hume represents a class of sentimentalists whose views will not do for adherents of S2. Although they agree that sentiment is the foundation for morality, they still give reason some kind of role such that it is necessary for morality. Fitting‐attitude sentimentalists are another group in this class. Conversely, S2 requires that reason have no necessary role in morality at all—although note that S2 does not entail that reason cannot have a role in thought about morality.
I think there is a possible critique of my position here which would broaden the scope of S2. It goes like this. Morality is foundationally sentimental; human moral activity also involves reason, but this is not sufficient to represent a difference in kind between nonhuman and human morality. So we can refer to “human morality” and “hyena morality” as fundamentally similar activities/states, although (due to the exercise of reason) different in expression and the processes they involve. This is the view of De Waal and of Bekoff and Pierce (De Waal , 37–42; Bekoff and Pierce , 139–41).
I am not convinced by this argument; I think that the presence of reason constitutes too stark a difference between human and hyena morality for the difference to be one of degree. However, I intend to leave it aside for now. Regardless of how accessible (or not) various kinds of sentimentalism are for S2, if rationalism is correct morality is indeed a distinctively human trait; and it is for the correctness of rationalism that I intend to argue.
Being Reasonable
Let me begin by looking at an inescapable part of moral agency: moral assessment. If I am a moral agent then I am open to assessment by moral standards. Whether those standards are diverse or united and whether they find their origin in moral facts, God, personal inclination, a social contract, or none or all of the above can be put aside. Whatever morality is, if I take part in it I enter a world of praise and blame; good and bad; obligation and supererogation. I may flourish or fail to act in accordance with universal law. I may be cruel, kind, honest, thoughtless, altruistic, prideful, cowardly, just, and much more besides (although probably not all at once). In other words, I have moral responsibility. To be a moral agent means being held to moral standards.
So what is it that is being assessed when an agent is judged by moral standards? I will begin by ruling out one possibility: we are not simply judging the event. This ought to be clear from the fact that quite different moral judgments can be formed about the same event. I rush into a burning building to save a child for its sake and the sake of its distraught parents. I am a courageous hero. I rush into a burning building to save a child for the sake of the TV cameras and the publicity which will help my political campaign. I am scheming and self‐serving. Of course, this does not mean that we cannot assess the action—good, in both cases—but there is something more going on as well. The person, as well as the deed, is in the moral dock. In other words, moral assessment has something to do with internal states. In this at least I am in agreement with my opponents. De Waal notes that “In discussing what constitutes morality, the actual behavior is less important than the underlying capacities… whether animals are nice to each other is not the issue” (De Waal , 16). In the second half of this article, Marcus Baynes‐Rock also stresses the point that significant differences between animal and human social behavior do not in themselves demonstrate that animals are not moral agents. I think this is correct; but just as behavioral differences are not necessarily a point against animal morality, nor are they a point in its favor. Action per se is not the primary focus of moral assessment.
Instead, I suggest that what is under analysis is the agent's reason for acting: why they did or thought that rather than this. Different reasons for action are precisely what make the difference in the example above. But note that not all reasons for action are of the same kind. In her influential work on intention, Elizabeth Anscombe says that there seem to be different kinds of reason or explanation for action (Anscombe ). Consider these explanations: “I coughed because I had a tickle in my throat”; “She jumped because of that bang.” Compare them to these: “I gave them a raise because I wanted to reward them”; “I came to cheer him up.” Anscombe thinks that we recognize a difference between these reasons. The latter kind invokes intentions; the former do not. Instead, Anscombe says that the former kind of reasons involve what she calls “mental causes.” The distinction between intentional and nonintentional behavior is highly significant in the history of philosophy; many thinkers, including Anscombe, identify the presence of intention as the difference between a simple bodily event or “happening,” and an action proper.
What is the difference between intentions and “mental causes”? Anscombe (I think correctly) says that the kind of reasons that are intentions involve consideration of the reason, or the related action, as something good or bad; mental causes do not. At its heart this is a psychological point; it is rooted in the observation that when we act intentionally, we necessarily identify some kind of good toward which we act. Whether we are right that our action is aimed at something good, or whether there might be better goods to aim for, are open questions. Clearly, we end up aiming at the wrong thing all too often. What Anscombe is saying here is that the aim of our action must on some level seem good at the time; intentional action involves desire of some kind.
In other words, intentional actions are aimed at something; and the selection of the target is what Anscombe is referring to when she says that intentions involve the consideration of good (something to be aimed at) or bad. Aquinas is another thinker who believes that a perceived good is necessary in order to form an intention or move the will; and in fact, he thinks that without an intention an event does not, strictly speaking, qualify as an action at all (Aquinas , 1a2ae 8.1, 12.1, 18.9.). There is broad support for this view in modern action theory. Opposing thinkers like Harry Frankfurt and Donald Davidson typically agree that intentional acts are a special class of event, but disagree on how intention makes actions distinctive (Frankfurt ; Davidson ). Hume also relies on the observation that intention necessarily involves a perceived good in his account of action and morality; his sentimentalism stems from his view that the passions alone, and not reason, are responsible for volitions borne out of those perceived goods (Hume , 2.3.3).
This is why moral assessment looks at reasons for action, and specifically intention: it is because that is where we find the agent's judgments about goodness and badness, and their choices to do with goodness and badness. Note that the intentional actions in my example above are precisely the kind of action that is open to moral assessment; whereas the “mentally caused” actions are not. The observation that intention involves deliberation about goodness is behind the common philosophical view that moral goodness has to do with goodness of the will—present in Kant, Aquinas, and ancient and neo‐Aristotelianism (Foot , 14).
Also present in all of these traditions is a commitment to something approaching Kant's principle that “ought implies can” (Kant , A548/B576). This is the view that if we have a moral obligation it must be possible to fulfill that obligation; otherwise, we would not be obliged to do it. “Obligation” is less fundamental to Aristotelianism than to Kantianism; but when it comes to moral assessment both think that praise or blame only properly apply when we have a choice. Theologians like Aquinas typically think that we are incapable of freeing ourselves from original sin; but even there, the possibility of acting well (through divine infusion of the virtues) exists and he states that choice, part of moral action, “is only of possible things” (Aquinas , 1a2ae 13.5). A more sophisticated version of the principle which seems to me to be consistent with virtue ethicists like Aristotle and Aquinas holds that “ought implies can” should be taken to mean that there is a possible world in which the moral obligation is fulfilled (or moral good achieved) (Donagan ). “Ought implies can” is also at the crux of the intersection between ethics and free will theory; if we are not free, the thought goes, we cannot do otherwise and hence cannot bear moral responsibility (Van Inwagen ).
So where does all this get us? I have made three main points here. First: Moral action and assessment has at least in part to do with reasons for action. Second: The relevant reasons for action necessarily involve an identification of goodness/badness. Third: Moral action and assessment implies the presence of choice, or alternate possibilities.
From these, I suggest that moral activity necessarily involves deliberation about reasons. As R. M. Hare points out, this process of deliberation is essentially rational, because it requires an understanding of and comparison between the nature and implications of different choices and reasons (Hare ). The word “deliberation” here should not be taken to imply that moral action requires protracted conscious consideration; many rationalists allow that moral reasoning can be intuitive or the product of habit. What matters is the awareness and assessment of different reasons for action. The process itself can be very brief.
When we assess someone morally, we are assessing what reasons for action they identified as good. Does the good that they identified in forming their intentions match up to what we consider to be the true good (whatever that is)? This process of deliberation is the practical reasoning discussed above. For both Kant and Aristotle it is the foundational moral activity, although they have different conceptions of its scope. It is practical reasoning that I take to be distinctively human, and practical reasoning that makes the connection between moral goodness and rationality. To borrow Foot's comment on this subject: “The discussion has been about human goodness in respect of reason recognition and reason following, and if that is not practical rationality I should like to know what is” (Foot , 13).
MacIntyre makes the same point:
Human practical rationality certainly has among its distinctive features the ability to stand back from one's initial judgments about how one should act and to evaluate them by a variety of standards.… Where my reason for acting is or has been of the form “doing x will enable me to achieve y”, where “y” stands for some good, reflection on this reason will require me to ask “In this situation do I have a better reason for acting than that doing x will enable me to achieve y?” (MacIntyre )
MacIntyre thinks, and I agree, that this process of reasoning requires uniquely human capabilities, including language use and symbolic thought. This is the kind of rationality discussed above. Without it, it is not possible to deliberate about reasons for action. I have argued that deliberation about reasons for action is fundamental to moral activity and is what we look for when we make moral judgments. Position R above is correct; rational thinking, specifically prudence, is necessarily part of morality; and this makes morality distinctively human.
“Correct” here can only reach a certain standard. I am convinced by rationalism because I think that without reason particular capacities, actions, and states cannot exist; and I take those capacities, actions and states to be necessary components of morality. It is open to my opponent to bite the bullet and deny that morality does require any such thing. If he does so then we really will have reached an impasse; a breach in meaning such that the best we can hope for is mutual awareness but not agreement. Should this happen, I simply note that whatever we choose to call it, humans do exhibit a capacity for rational deliberation about action and intention that significantly distinguishes them from nonhumans; and I take this difference to be stark enough that it deserves its own name. I suggest “morality”; but it is up to the reader to decide.
PART 2: MARCUS BAYNES‐ROCK
Morality in Animals
The question of animal morality is elusive in no small part because there are more than two positions in the debate. Conspicuous among these positions is that of Bekoff and Pierce. They hold that animals are indeed capable of morality and they employ numerous examples of animals behaving apparently morally to support their case (Bekoff ; Bekoff and Pierce ). Their position appeals to me because it gives a descriptive account of animal morality with normative elements. For example, the descriptive element sees morality as including not just values that align with human values (Shapiro ); it allows for (say) wolf morality to encompass what it is to be a good wolf. But at the same time its normative elements hold moral behavior to be universally prosocial and other‐regarding (Bekoff and Pierce , 148). I do agree that moral behavior must be other‐regarding; however, with regard to prosociality, if this implies sociality between beings of the same species then I think it is too narrow. In fact, I would argue that moral behavior can encompass acting with regard to more than just other beings; it can include regard for entire ecosystems (Leopold ). When I choose to take my paint thinner to the rubbish dump, rather than let it run into the stormwater drain, I am indeed acting prosocially but, assuming my intentions are more than simply avoiding prosecution, I am also acting with regard to the geology and life forms that lie at the other end of the stormwater drain. However, for the purposes of my argument I will focus on the moral consideration that one creature, whether human or otherwise, might have for another. It is in this space of interaction beyond one's species that I see morality as more than human.
Another position is that of Frans De Waal, who argues for morality in animals based on evolutionary parsimony (De Waal ). De Waal is a primatologist, so understandably he employs examples of apes and monkeys behaving morally to argue that humans are not the only moral species. On this account, humans simply exhibit a scaled‐up form of morality that is in essence no different from that of other animals: we adhere to a set of norms and values crucial to social cohesion, and so do other social primates. Gary Francione calls this the “similar minds” approach and sees an underlying danger in this line of reasoning. When certain animals are granted characteristics similar to those of humans, they are given greater moral consideration (Francione ). Thus, by setting humans as the yardstick by which morality in other species is measured, we consign those species to a lesser kind of morality and ironically undermine the inclusion of animals in our moral spheres. In this I agree with Francione, especially with regard to measures of sentience and language being used to determine moral consideration of other species (see also Deane‐Drummond , 265; Deane‐Drummond et al. , 127). But I also diverge from De Waal's thesis in holding morality to be immanent in relations between moral agents regardless of species. In other words, my account leaves open the space for moral agency in other beings regardless of how closely related they are to humans.
Whereas my position in some ways aligns with those championed by Bekoff and De Waal, it is more directly opposed to that of Philippa Foot, Alasdair MacIntyre, and Christine Korsgaard. This is known as the cognitivist position (Foot ). MacIntyre holds an interesting position in relation to De Waal in that he grants that dolphins, like humans, have the capacity to identify a set of reasons for action. But he holds that “humans are different because we can evaluate our reasons for action for better or worse” (MacIntyre , 96). This is the essence of the argument for morality being exclusive to humans. It holds that there are goods at which humans should aim in order to be good “qua humans” and that only through practical reason can humans arrive at judgments about which good they should strive toward. These are therefore moral judgments. As for nonhuman animals, according to this argument, they are not capable of the necessary kind of practical reason—stepping back and assessing their actions—and therefore are incapable of morality. I have two reservations about this argument for morality based on moral judgments. One is the unfounded assumption that animals are wantons. This assumption is made frequently in the literature, usually with a great deal of hedging language: animals do not “seem” to have a capability for practical reason (Korsgaard , 5–6). However, despite the fragility of this premise, I want to set it aside in favor of my main reservation, which is the slippage between morality and moral judgment. I concur with MacIntyre that there can be a set of goods that make one a good human or good dolphin, and this is where I diverge from De Waal and Bekoff because I think that while altruism and cooperation might be moral for humans—in certain cases—I do not think that always holds for other animals. Animal morality may look quite different from what we, as humans, expect morality to look like, but that does not mean it is not morality. But I differ from MacIntyre in that I argue that morality is not only relational; it is in essence intuited and affective. Therefore, making judgments about intuitions and affects—as per the cognitivist position—is not morality; rather, it is moral judgment. This is the category error that I suggest my dear colleague is making above.
For Korsgaard, morality is a manifestation of what she calls “normative self‐government” (Korsgaard , 6). This parallels MacIntyre's thesis, which locates the basis of morality in practical reason. For Korsgaard, the basis of morality lies in the capacity that humans have for theory of mind. Before we are capable of morality, we must be able to identify in ourselves our own motives and reasons and make judgments about these. As such, animals are incapable of morality because, according to Korsgaard, they lack the self‐analytical capability that is a corollary of theory of mind. Korsgaard acknowledges that there is an element of wanting humans to be special in defining morality as uniquely human, and this comes to undermine her argument. She says “if altruistic and cooperative behavior were the essence of morality then the ants and bees would be our moral heroes” (Korsgaard , 6). But this is entirely circular. There is no reason to find morality in ants and bees unpalatable other than the assumption that morality is a capacity only found in beings with reason—that is, humans. As with my position in relation to Willows and MacIntyre, I argue that Korsgaard is in fact talking about moral judgments rather than morality and that we should not exclude other species from moral agency simply because of their comparatively lesser reasoning powers.
Jonathan Haidt argues that the cognitivist position is akin to an argument for the tail wagging the dog (Haidt ). Whereas cognitivists hold that practical reason leads to judgments which motivate individuals to moral actions, Haidt argues that practical reason is in fact informed by moral intuitions and that it has a very limited influence in the other direction. Haidt buttresses his argument with a review of empirical research into morality and moral decision making. What emerges from the research is what Haidt calls a “social intuitionist model” (Haidt , 1024). This model allows for reasoning to influence intuition but only in a very few cases; it is normally applied post hoc to judgments and actions based on intuition. The experiments show that, whether or not there is time to reason, moral judgments follow moral intuitions. Accordingly, Haidt ascribes the system of intuited morality to “all mammals” because his system has an innate basis and normally operates prior to and even contrary to practical reason (Haidt , 1029).
An example of the empirical evidence for the social intuitionist model is a study by Joshua Greene and colleagues. These researchers presented moral dilemmas to their subjects and used fMRI scans to track the brain activity of subjects during their responses (Greene et al. ). The dilemmas presented were along the lines of the trolley problem. In one “personal” variant the subject was told that they could save five people from being run over by a trolley only by pushing one person onto the tracks. In another “impersonal” variant they were told they could save five people from the trolley by pulling a switch which diverted the trolley onto a track on which one person stood. What these researchers found was that in the “personal” variant subjects were much slower to judge the action “appropriate” than “inappropriate,” whereas in the “impersonal” variant the time taken to give either response was the same. Moreover, the brain areas that showed increased relative activation during responses to the personal moral condition were areas associated with emotion. What this suggests is that these areas of emotional processing influence moral judgment and not vice versa (see also Glenn et al. ).
I subscribe to Haidt's social intuitionist model insofar as it recognizes the primacy of affect in moral agency and places practical reason in its rightful place. After all, if practical reason held primacy, then calculating psychopaths would be expected to act morally. But this is not the case because evidence shows that they lack the emotional brain activity necessary to make moral decisions (Glenn et al. , 5). Where I must diverge from Haidt, however, is in how morality is defined. Here, Haidt suggests that there are innate intuitions which form the raw material onto which cultural and social norms, which resonate with those intuitions, are overlaid. I do not find this overly problematic in itself, apart from the use of the term “innate,” which obscures the connections between genes and development. However, for the purposes of the argument presented here I suggest that this definition of morality—a system of norms particular to a community—is not what Willows has in mind. This does not undermine Haidt's argument because it does not hinge on his definition of morality. The empirical evidence that Haidt provides is sound even when applied to MacIntyre's definition, but I seek to nuance it in the interest of finding common ground with my opponent.
Where I do find a close correspondence with my position is in Celia Deane‐Drummond's model of “intermorality” (Deane‐Drummond , 263). This is an evolution of her earlier ideas, which lean towards animals' capacities for moral judgments and evolutionary continuity between social primates and humans (Deane‐Drummond ). What Deane‐Drummond's later model does that is, in my view, progressive is to present a relational kind of morality which sees human beings not as individuals contemplating what they ought to do, but as beings caught up in a world of relations with other beings; relations which have moral consequences, and not just in one direction. This model does away with the humanist model of morality because it allows for other species to inhabit moral worlds that do not necessarily cohere with human moral worlds. Thus, even where there is little coherence, there is still moral overlap, and this is what becomes significant. Wisdom and morality are not merely things that are contained in individuals. However, while Deane‐Drummond and I both argue for morality as a product of relations, I am inclined more toward morality as immanent in relations.
For the purposes of my argument, I would like to enlist the support of a rather unlikely candidate for animal morality: a spotted hyena. I say unlikely because spotted hyenas have a reputation as cowardly scavengers who steal for a living rather than hunt for themselves. This is of course a culturally biased view and ironically does not gel with the views held by the people who coexist with spotted hyenas in my study site: the city of Harar, Ethiopia. It is there that hyenas are appreciated by the locals not just for cleaning the streets of garbage but for protecting against harmful spirits and for the services they perform for the town's “saints.” In fact, my research in Harar shows that most Hararis not only appreciate hyenas but have no problem ascribing moral agency to them. I have collected accounts from many people of hyenas repaying kind behavior and acting against people who harm their clan members or insult them (Baynes‐Rock ). However, rather than pitting the Harari perspectives against those of moral philosophers, I instead want to present my own experience of hyena morality to support my case for morality in animals. In doing so, I shall not be trying to demonstrate morality in spotted hyenas as a species; instead, I shall be arguing that morality is emergent in relations between a human being (me) and a spotted hyena named Willi.
The context for my relationship with Willi was a hyena feeding place just outside Harar's town wall, where free‐ranging hyenas visit each night and are fed by a “hyena man.” By the time Willi appeared on the scene, I had been visiting the feeding place nightly for four months, observing the hyenas as they came and went. Willi was different from the other hyenas. While the others tolerated my presence and made no effort to interact with me, Willi insisted on some kind of engagement. After a couple of weeks, Willi decided that I was someone, or something, worth investigating. I have no idea whether he considered me a self‐propelled object or an intentional being at the time, but in either case his first attempt at any kind of interaction was to approach me and try to bite my knee. He persisted in this and I persisted in moving my leg away (in their prime, hyenas can exert 4,000 N of force with their jaws, and even at one year old they can crush bones). This maneuvering continued night after night for a few weeks, but eventually we arrived at an arrangement whereby he could bite my jacket sleeve and I could grab the fur on top of his head.
Once we were comfortable with physical contact, it was but a small step for us to engage in interspecies play. Willi initiated this one night when he was playing with another hyena. He ran up to me and began biting, which was effectively an invitation. This led to a chase around the hill behind the feeding place, in which he tried to bite me and I tried to catch him while he dodged out of my way. I knew it was play and not actual avoidance because Willi was signaling it to me. He had his tail up and his mouth hanging open, and after every dodge he returned to give me another try at grabbing him. Furthermore, Willi initiated play with me on subsequent nights, something he would never have done had he thought I was chasing him with the intention of hurting him.
Bekoff and Pierce hold play to be a clear indicator of morality in animals. This is due to the inherent justice in play which enables it to function despite social and physical disparities. Animals must give honest signals that they intend to play or else they can rupture social relationships (Bekoff and Pierce , 197). They must follow certain rules of conduct and not take advantage of situations in which they hold a superior position. Thus, there is a sense of fairness implicit in play and a set of expectations held by the participants that what is play must remain play. Anyone transgressing must make amends or else be excluded from play and its socio‐psycho‐physiological benefits.
While I agree with Bekoff and Pierce about the implicit sense of fairness in play and its implications with regard to morality in animals, this is not what I intend to foreground with respect to arguing for animal morality. Instead, I wish to highlight the significance of our episodes of play in terms of the way that Willi perceived me as a distinct individual. Willi did not play just with anyone; not any hyena and certainly not any human. He had certain friends with whom he played and with whom he had a more trusting relationship. He never played with the hyena man or his son and never played with the higher ranking female hyenas such as Dibbey who persecuted him. So his criteria for a playmate were that it be someone familiar and someone who he was pretty certain would not hurt him. That I fit these criteria is an indication that Willi saw me as someone distinct from other humans and hyenas and that this distinction was consistent over time. He could make judgments about how I would act toward him based on how I had acted toward him in the past and thus he could apply a different set of rules to me than he applied to other humans who were yet to prove themselves trustworthy.
My relationship with Willi extended well beyond chasing each other about on the hill. I accompanied him on his nightly forays into Harar's Old Town looking for food, and sometimes he accompanied me. On one occasion a dog chased Willi onto a common beside the town wall, and I, feeling affronted, chased the dog from the common across to the old leper colony. At that, Willi joined me in the chase, gauging my intentions and following my lead, and we both pulled up, glaring at the dog as it disappeared into the night. Indeed, Willi even invited me to his home. One morning, as the hyenas were making their way back to their dens outside the town, Willi convinced me to follow him to his den between a farm and a stream. When I lagged behind, he stopped and waited; when we were separated, he caught me up; once outside the den, he made three attempts to get me to follow him inside. Willi and I saw each other as friends; we sought each other out on the hill behind the feeding place and at the garbage dump. When we bumped into each other in the Old Town there was a recognition and familiarity that did not exist between other people and hyenas. But as human and hyena, we brought a hybrid set of standards to our relationship, which guided our ways of relating. And therein lies the model of intermorality.
I highlight the difference between my relationship with Willi and the relationships between other people and hyenas because it is about more than familiarity and trust. It is about one being recognizing another as a subject of significance; as someone with whom they stand in relation as a person worthy of moral consideration. This is where I diverge from Deane‐Drummond's account of intermorality. Whereas Deane‐Drummond's account holds intermorality to be a sum of two parts, which between humans and other animals can be unequal in terms of their level of moral agency (Deane‐Drummond et al. , 135), I hold morality to be immanent in relations, so that the question of levels of morality becomes irrelevant. Borrowing from Martin Buber, this account of morality is one of the moral self, emerging out of consideration of the Other as a capitalized You. Buber calls this the I‐You relation, wherein the self and Other merge in actualization of an ethical relation (Buber , 151). It resonates with Emmanuel Levinas and his account of the ethical demand made by the face of the other. In this Levinas is not talking about the literal face but about the Other as a capitalized You that demands consideration from the self (Levinas , 25). Thus Willi, by entering into relations with me as a capitalized You, became subject to a level of moral consideration that was no different from that which fell upon me. We both became bound by ways in which we ought to act in the presence of and toward the Other, and in this lies the morality of animals.
In support of this is the fact that Willi could have attacked me, or at least bitten off a chunk of me. This is not far‐fetched, as attacks on people by hyenas in the region around Harar are well documented and widespread. And certainly he was hungry enough: when food was made available, in the form of a dead ox or sheep, Willi gorged himself. But he did not take advantage of the many opportunities that he had to bite a little harder and take a chunk of my flesh. This could, of course, be because of a fear of repercussions. He could have feared that I might somehow punish him if he transgressed. But then this would make it difficult to argue that Willi was a wanton. And fear of repercussions is not incompatible with acting morally. While fear of punishment might inhibit me from killing my enemy, I can also be inhibited by the sense that it is wrong. In the same way, Willi not harming me due to fear of repercussions is not incompatible with his sense that it would be wrong to harm someone with whom he stands in ethical relation; with someone who, in Kant's terms, appears as an end and not as a means. In entering into a relation with me, Willi was faced with a human looming large as a capitalized You, and therefore with a set of oughts that obliged him to act morally. Whether or not he stopped to reflect on that obligation is not crucial to the fact that he chose to act in a way that was faithful to the relation.
CONCLUSION
After much considered debate, each of us is convinced that the other is seriously mistaken about the nature and existence of animal morality. Where to go from here? Fortunately, we are not in total disagreement. In fact, we both have very similar views of the situation “on the ground.” Both of us think that there are such things as animal goods, appropriate to each species. There is no sense in holding animals to human standards; but this need not mean that there are no standards that may determine the success of an animal qua animal. We also agree that humans are remarkably different creatures with social norms that are far more complex than those of other species. So, neither part of this article should be read as an attempt to impugn either animals or humans. Instead, our disagreement is over the social and metaphysical significance of the difference between us and other species, rather than the extent of the difference per se. Does morality constitute a fundamental break with the rest of the animal kingdom, is it the most complex example of a common theme, or is it something malleable that adapts to particular relations regardless of species?
Although we have been unable to settle the question, our hope is that this article will contribute to thought on this subject in a way that is accessible to readers in both fields. However, we also hope that it brings to light the way different commitments and methodologies profoundly affect approaches to interdisciplinary work. This is especially so when working with such a loaded term as “morality.” Having hashed it out in this article, we do not expect to agree on this matter; but we do believe that we understand each other.
References
Anscombe, G. E. M. 1956. “Intention.” Proceedings of the Aristotelian Society 57:321–32.
Aquinas, Thomas. 1948. Summa Theologica. Translated by Fathers of the English Dominican Province. New York, NY: Benziger Brothers.
Aristotle. 2004. Nicomachean Ethics. Translated by J. A. K. Thomson and Hugh Tredennick. London, UK: Penguin.
Baynes‐Rock, Marcus. 2015. Among the Bone Eaters: Encounters with Hyenas in Harar. State College: Pennsylvania State University Press.
Beauchamp, Tom L. 1999. “Hume on the Nonhuman Animal.” The Journal of Medicine and Philosophy 24:322–35.
Bekoff, Marc. 2005. Animal Passions and Beastly Virtues: Reflections on Redecorating Nature. Philadelphia, PA: Temple University Press.
Bekoff, Marc, and Jessica Pierce. 2009. Wild Justice: The Moral Lives of Animals. Chicago, IL: University of Chicago Press.
Buber, Martin. 1970. I and Thou. Edinburgh, UK: T&T Clark.
Calcagno, James M., and Agustín Fuentes. 2012. “What Makes Us Human? Answers from Evolutionary Anthropology.” Evolutionary Anthropology: Issues, News, and Reviews 21(5):182–94.
Chappell, Timothy. 2009. Ethics and Experience: Life Beyond Moral Theory. Montreal, Canada: McGill‐Queen's University Press.
Cottingham, John. 1978. “‘A Brute to the Brutes?’ Descartes’ Treatment of Animals.” Philosophy 53:551–59.
Davidson, Donald. 2001. “Actions, Reasons and Causes.” In Essays on Actions and Events, edited by Donald Davidson, 3–19. Oxford, UK: Clarendon Press.
Deane‐Drummond, Celia. 2009. “Are Animals Moral? Taking Soundings through Vice, Virtue, Conscience and Imago Dei.” In Creaturely Theology: On God, Humans and Other Animals, edited by Celia Deane‐Drummond and David Clough, 190–210. Norwich, UK: SCM Press.
Deane‐Drummond, Celia. 2015. “Deep History, Amnesia, and Animal Ethics: A Case for Inter‐Morality.” Perspectives on Science and Christian Faith 67(4):263–71.
Deane‐Drummond, Celia, Neil Arner, and Agustín Fuentes. 2016. “The Evolution of Morality: A Three Dimensional Map.” Philosophy, Theology and the Sciences 3(2):115–51.
De Waal, Frans. 2006. Primates and Philosophers: How Morality Evolved. Edited by Stephen Macedo and Josiah Ober. Princeton, NJ: Princeton University Press.
De Waal, Frans. 2014. “Natural Normativity: The ‘Is’ and ‘Ought’ of Animal Behavior.” Behaviour 151:185–204.
Donagan, Alan. 1984. “Consistency in Rationalist Moral Systems.” Journal of Philosophy 81(6):291–309.
Foot, Philippa. 1995. “Does Moral Subjectivism Rest on a Mistake?” Oxford Journal of Legal Studies 15:1–14.
Foot, Philippa. 2001. Natural Goodness. Oxford, UK: Oxford University Press.
Francione, Gary. 2005. “You Hypocrites! By Granting that Animals Have Similar Minds to Ours, It Looks Like We Are Evolving in Our Moral Relationship with Other Species. Don't be Fooled.” New Scientist, June 2005, 51–52.
Frankfurt, Harry. 1997. “The Problem of Action.” In The Philosophy of Action, edited by Alfred R. Mele, 42–52. Oxford, UK: Oxford University Press.
Gert, Bernard. 1999. “Morality.” In The Cambridge Dictionary of Philosophy, 2nd ed., edited by Robert Audi, 586–87. Cambridge, UK: Cambridge University Press.
Glenn, Andrea L., Adrian Raine, and R. A. Schug. 2009. “The Neural Correlates of Moral Decision Making in Psychopathy.” Molecular Psychiatry 14:5–6.
Greene, Joshua, R. Brian Sommerville, Leigh E. Nystrom, John M. Darley, and Jonathan D. Cohen. 2001. “An fMRI Investigation of Emotional Engagement in Moral Judgement.” Science 293:2105–08.
Haidt, Jonathan. 2001. “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgement.” In Reasoning: Studies of Human Inference and Its Foundations, 1024–52. Cambridge, UK: Cambridge University Press.
Hare, R. M. 1979. “What Makes Choices Rational?” Review of Metaphysics 32:623–37.
Hume, David. 2000. A Treatise of Human Nature. Edited by David Fate Norton and Mary J. Norton. Oxford, UK: Oxford University Press.
Kant, Immanuel. 1997. Groundwork of the Metaphysics of Morals. Translated and edited by Mary Gregor. Cambridge, UK: Cambridge University Press.
Kant, Immanuel. 1998. Critique of Pure Reason. Translated by Paul Guyer and Allen W. Wood. New York, NY: Cambridge University Press.
Kauppinen, Antti. 2016. “Moral Sentimentalism.” In The Stanford Encyclopedia of Philosophy (Fall 2016 Edition), edited by Edward N. Zalta. Accessed 11 August 2016. <http://plato.stanford.edu/archives/fall2016/entries/moral-sentimentalism/>
Korsgaard, Christine M. 2010. “Reflections on the Evolution of Morality.” The Amherst Lecture in Philosophy 5:1–29. <http://www.amherstlecture.org/korsgaard2010/>
Leopold, Aldo. 1949. A Sand County Almanac. Oxford, UK: Oxford University Press.
Levinas, Emmanuel. 1999. Alterity and Transcendence. Translated by Michael B. Smith. London, UK: Athlone Press.
MacIntyre, Alasdair. 1999. Dependent Rational Animals: Why Human Beings Need the Virtues. Chicago, IL: Carus Publishing Company.
Shapiro, Paul. 2006. “Moral Agency in Other Animals.” Theoretical Medicine and Bioethics 27:357–73.
Tse, Peter Ulric. 2008. “Symbolic Thought and the Evolution of Human Morality.” In Moral Psychology, Vol. 1: The Evolution of Morality: Adaptations and Innateness, edited by Walter Sinnott‐Armstrong, 269–97. Cambridge, MA: MIT Press.
Van Inwagen, Peter. 1999. “Moral Responsibility, Determinism, and the Ability to Do Otherwise.” The Journal of Ethics 3(4):341–50.
Williams, Bernard. 1985. Ethics and the Limits of Philosophy. Cambridge, MA: Harvard University Press.
Wilson, E. O. 1975. Sociobiology: The New Synthesis. Cambridge, MA: Harvard University Press.