Logic and Moral Discourse (revised)
June 9, 2014
Stuart W. Mirsky, in Ethics

A while back I put up on this site my take on resolving the moral question. After examining a number of different solutions, from intuitionism to ethical naturalism (Foot, Anscombe, and Brandom -- the last of whom presents a proposal which seems to be a hybrid of Aristotelian naturalistic "virtue ethics" and Kantian rationalism) and then on to Searle's speech-act based answer to the emotivist denial and the prescriptivist solution, as well as a Kantian-style argument for the power of rational thinking to drive moral judgment, I tried to cover all the bases by looking at some additional approaches. These included a Wittgensteinian solution (based on Wittgenstein's later work) which finds moral standards in our "forms of life" (Beardsmore), the "sentimentalism" argument of Jesse Prinz (which elaborates a logic of sentiment in terms of the logic of description), and the evolutionary-biological thesis of James Q. Wilson (which seats moral judgments in certain species-specific capacities identifiable as human). Considering all the options, I concluded that none is quite right, though all seem to have something to commend them. I offered my own tentative explanation here under the post entitled Realizational Ethics, in which I tried to lay out a step-by-step account (not quite an argument!) that would lay bare the moral valuing mechanism and show why it works.

In a nutshell, I proposed that moral valuing involves looking at actions in the most complete way, i.e., a way that considers them in light of the quality of the intentions that underlie them and which they express, and not just instrumentally (in terms of the outcomes they bring about). Unlike Searle, I argued that intentions aren't entities (a strange choice of words for such phenomena, to say the least) but just a way we have of talking about some things (and thereby distinguishing them from other kinds of things), picking this up from Dennett's notion of the "intentional stance." Nevertheless, there is a reality to them if we recognize that what we mean when we consider intentions per se is the state of the mental life of the agent, what he or she is emotionally inclined to do. And this can be treated referentially, no less than the phenomena of the observable physical world, because it makes sense to speak of intentions and selves, and about the manifold mental features of our lives.

From there I suggested that judging actions in the most complete way is to look at them as expressing intentions, which is to say expressing the mental state of the acting self, as opposed to considering their worth instrumentally or in terms of whether or not they satisfy some need, desire, requirement, etc. That is, we may consider actions in any of these other ways and put a value on them based on their role in addressing these elements in our psychological lives, because valuing just is an adjunct of rationality per se. You can't be rational in making choices unless you can prioritize and, to do that, you need a basis for sorting, a scale of significance. But actions, per se, cannot be fully and adequately evaluated unless and until we also take into account the underlying motivations, the intentions they express, and these can be understood as aspects of an agential self: the state of the agent's mental life at the moment he or she is acting.

To reach this point, I argued that anything we can treat objectively (as an object of reference) can be valued and that the elements of our mental lives we associate with "intentions" can be objectified in this way, can be referred to (even if such referring does not demand that the objects of reference in question have the kinds of publicly observable features we associate with the stuff of the senses). That is, "objects of reference" are not limited to things we can see, feel, touch, hear, smell, taste, etc. An "object of reference" is a construct we "build" out of the raw material of our sensed world and some are quite complex, i.e., they are ways we have of referring to certain combinations of sensed information.

As such I made the point that intentions and selves are perfectly legitimate things we can talk about and differentiate, although we do this differently than when we differentiate between observable phenomena we pick up through the senses. In this way we can understand actions, when they are agential, as expressive of the mental state of the agent. The problem then was to ascertain how we can presume to distinguish good from bad mental states, and on what basis, in a way that is not merely circular, not merely a case of claiming that good actions (understood here as morally good, not instrumentally good) are just good because we believe they are or happen to prefer one sort of mental state over another (because what we prefer at any given moment just is an aspect of that moment's mental state). The important point here is to see that we must have a reason, over and above whatever current mental state we are in, for preferring it or not doing so, a reason which is not, itself, just a function of that current mental state. To have a reason for thinking something morally good, that reason must not operate at the same level, must not be just a moral consideration of the same type as the judgment it is deployed to support. It must not be "moral" in the same sense in which the thing we want to call morally good is thought to be moral.

My solution was to suggest that it is the nature of intentionality itself that provides an avenue for judging actions through their intentions. Dennett suggests that we have three ways we look at things: 1) mechanistically (the viewpoint of physics and of any science construed through physics); 2) in terms of function qua designed systems (whether a thing is naturally "designed" by evolution or synthetically designed by creatures like ourselves); and 3) intentionally (where the thing, a kind of system, is seen to have design capabilities of its own -- at whatever level since high level designing, such as we are capable of, is not essential). The "intentional stance" in Dennett's terms is an outgrowth of the evolutionary process by which some entities come, in time, to recognize that the behaviors of other entities are internally motivated, are not just blindly performed or a function of something else outside of them. Some entities, we as evolutionary organisms come to recognize, can act in ways that affect us in a purposeful way (i.e., reflect a purpose or intention of the acting entity).

Whatever one wants to say of the adequacy of Dennett's account, it seems to me that he has hit on something of interest, i.e., that intentionality implies the capacity to recognize intentionality in others. That is, intentionality implies something going on within the organism that amounts to a mental life. Now not all mental lives are equal. Some occur at a higher level of awareness than others and, at least for now, it seems reasonable to say that the mental lives of humans are more developed (and complex because of that greater development) than those of other creatures on the planet. Still, there are other creatures with mental capacities that approach our own, at least if viewed across the entire evolutionary hierarchy. Chimps and dogs are much closer to us than lizards and salamanders or tuna fish. But all seem, to the extent that they have brains, to have some sort of mental life which is recognizable to us through their behaviors.

And here is the key. To the extent we recognize degrees of intentionality in others, we are recognizing their possession of mental lives, too. Intentionality implies mental life. And, because they are intentional (at whatever degree), they also have the capacity to recognize intentionality in us (to varying degrees, of course, depending on their relative capacities). So what does it mean to recognize intentionality, a mental life, in another? It means we will treat them differently than we treat rocks or rivers or even trees. We will relate to them differently, be wary of tigers, reach out to dogs, laugh at chimps, and feel aggrieved or sorrowful at the behaviors of such creatures if these express suffering. Recognizing intentionality is also recognizing the mental life of the other, such as it is. To take an intentional stance, using Dennett's terminology, is to stand in a certain kind of reciprocal relation. It's to recognize (to varying degrees, depending on the facts of the case) ourselves in the other and, to the extent there is evidence for it, that the other does the same.

On this view, standing in an intentional relation to other entities implies, when conditions are right, recognizing ourselves, meeting one mental life with another. And this, I want to suggest, is the basis for having the feeling we call empathy. But empathy is not just a feeling; it is a way we relate to another, the feelings being engendered by our relating in that way. Which comes first? It's hard to say in our ordinary lives, and maybe it is sometimes the feeling and sometimes the acknowledgement of the reciprocal relation implied by the intentional stance. But the only thing that matters in this account is that the feelings we associate with empathy can be generated in ourselves. They can be taught. They can be learned. They can be chosen. If we behave in certain ways (because behavior is not just physical events but also the intentions those events express), we can cultivate some feelings in ourselves while pruning or weeding out others.

Elsewhere here I have argued that looking at moral claims and codes across human cultures tends to turn up similarities (The Moral Way). Sometimes there are differences, true, but a great many times historical surveys show commonalities between societies in the kinds of behaviors they approve and disapprove. It seems to be the case that there are lots of different practices and moral standards across societies too, but what's interesting, I think, is the commonality. Of course, some of the commonality can be explained as a function of the types of creatures we are. Behaviors that are conducive to social cohesion (as James Wilson and R. W. Beardsmore and many others suggest) are likely driven by the nature of our species, i.e., we are social by nature. Many of the rules of behavior that develop in societies have this look to them. They are effective ways of forming and sustaining social units, of enabling creatures like ourselves to work together in groups. As such, they are likely rooted in our biological natures. But we have the capacity, as well, to disregard such behavioral inclinations at times, hence the need for moral constraints upon us (the rules a society develops and passes down from generation to generation) and for laws (the rules a society's governing body enforces).

Insofar as "moral" just refers to the value standards we apply to the behaviors of individuals within our species, this is not surprising. But there is a deeper question here because sometimes it is open to creatures like ourselves to toss out the moral standards imposed on us from outside ourselves (particularly when we become aware of the wide variation of standards from society to society without a corresponding basis for preferring one set of standards to another). Sometimes the last line of moral defense is internal to us. Sometimes we just have to say the reason to do X, or not to, is more basic than any given society's prescriptions and proscriptions.

Here, I think, is where the importance of agential reciprocity, as derived from a recognition of intentionality in other entities, provides a moral baseline that transcends the relativism implicit in societal differentiation. To the extent that we are intentional and come to think about what it means to be that, we see that there is, implicit in being intentional, the recognition of the other as a subject, as intentional, too. But recognizing another's intentionality (to the extent it is relevant, and this is arguably more so the closer we get to the kind of mental life that underwrites our own intentionality) must be done intentionally, too, or it is not true recognition. That is, it's not enough to say of another that it has thoughts or feelings, can feel pain and anguish, etc., as we can. If we do recognize that, then we have to act with that recognition, because there is no clear demarcation in intentional action between intent and physical events. And to act this way means to do so with regard to the mental life of the other. That is, it means having empathy.

I think an analysis of moral standards across many different societies will show that empathy-relevant behaviors are generally included among the behavioral admonitions associated with moral rules. And since we can engender the feeling of empathy in ourselves by acting empathetically with a mindset that includes the recognition of ourselves in the other (to the degree the facts of the other's capacities support that, of course), it makes sense to urge others (and ourselves) to act this way. We are not constrained in our actions by what we already feel, on this view, but are free to choose to feel a way we should feel.

But there is a logical problem here. That is, the argument to be empathetic on the grounds that it is the way that most fully expresses the intentionality which we have is not an argument that we must be empathetic. We are always, it seems, free to choose to be a cad, to disregard the intentional nature of the other if we want to. We are no less intentional, ourselves, for doing that. So it's not as if this strategy I have sketched out delivers an inarguable case for choosing what I have called the moral way to be: empathy. Kant's approach argues that rationality itself demands of us that we act in certain morally approved ways because, being rational entities, we cannot rationally choose otherwise. But even that approach doesn't force us to choose to be morally good. Aside from its other problems, even Kant's argument for morality from rationality leaves open the possibility that we can choose otherwise. Indeed, could we not, there would be no merit, as it were, in choosing the good.

Setting aside the idea that being morally good is enjoined from on high, that to please some divine authority we must be good, it seems always to be the case that we have choices that boil down to finding some self-sustaining and, so, motivating reasons. That, after all, is the nature of the valuing dimension in our lives. So is it a problem that arguing that we should act empathetically is not logically demanded of us, that we can always act otherwise if we like? In itself, I would say no. However, in seeking to establish a case for coming to certain moral judgments, it is important that the argument one adduces have a certain force. To the extent that we cannot argue that to be intentional you must act empathetically, since you already are intentional and have nothing to lose in the way of your intentionality by acting without empathy, the force seems to dissipate.

But does it? What kind of force are we really in need of? If even Kant's rational force does not compel the will, as it were, but only guides it, may we not suppose that the argument for empathy (and the behaviors that nurture and/or constitute it) need do no more? Searle says that behavioral obligations are embedded in our acts, in particular our speech acts, and Beardsmore that what we value, as in what we feel positively toward, is embedded in our language itself. Neither offers an account which presents us with reasons to do one thing instead of another; both leave that to the language itself, to what it carries. In Searle's case the commitments we undertake in our social relations are the answer while, for Beardsmore, it's the emotional affinities and disaffinities our predecessors and compatriots in the society in which we are embedded have selected and taught us through language acquisition. Wilson's account follows a similar tack, pointing out that there are some behavioral traits common to our species which, if fostered and followed, are good for us as a species and so provide reason enough to nurture and enhance them (forming the underlying structure of the moral dimension of our lives). Here's what I want to say.

The argument for empathy, premised on the claim that empathy most fully expresses the intentionality we already have, does provide us with a reason to act in some ways and not others. It does provide us with a basis for deriving more particular moral maxims and behaviors. But it stands on recognition, on realization, not on deductive or inductive derivation. To the extent moral argument can be seen to be about prompting others (or exhorting ourselves) to adopt behaviors which acknowledge the intentionality of others like ourselves (in the relevant ways), it's about bringing others (and ourselves) to see others in a certain way. It's about pushing the behavioral choices we make toward certain types and away from others. It's about developing (in others or ourselves) another way of seeing things.

That's why moral discourse seems always to be so tightly bound up with religious discourse. Both are about realization, seeing the self in the world in a certain way. And we can argue for that. We do so every time we ask another or ourselves to consider something from a different perspective. So the argument for moral actions, based on the rightness of showing empathy, is an argument for a kind of realization, an argument reminding others (or ourselves) about what we are. An argument for empathy, as the basis for moral judgment, is just an argument to see ourselves in a certain way and so to act in that way. It doesn't guarantee, of course, that such reminders will work. But there is a basis for reminding just because there is something inarguably present to remind ourselves about.

Article originally appeared on Ludwig (http://ludwig.squarespace.com/).