Tuesday, 13 November 2012

Imponderable I: Morality

Let’s suppose that in the year 2115 neurologists tell us that they’ve figured out how the brain actually understands things. What would that mean? Precisely that they can explain it in terms of components that do not themselves understand.
Perhaps they tell us:
Here’s how the mind understands. The mind is composed of three components, the blistis, the morosum, and the hyborebus. The blistis and the morosum have nothing to do with understanding; the part that understands is the hyborebus.
We don’t have to know what these things are to know that they’ve failed. This cannot be an explanation of understanding, because it simply transfers the problem from the “mind” to the “hyborebus”. It’s like explaining vision by saying that the optic nerve brings the image from the eye to the brain, where it’s projected on a screen that’s watched by a homunculus. How does the homunculus’s vision work?
This will be the first of a series of six or seven articles, aimed at explaining things that are fundamental to human experience (or so we think) and which boggle the mind when we try to analyse them: meaning, consciousness, knowledge, the self, free will, morality. The trouble, I believe, is not that these things have no explanation, nor even that we can’t comprehend the explanation. The trouble is brought out by the imagined explanation above. We don’t have a problem applying the point to most things: we can all accept that a car doesn’t have a smaller car under its bonnet driving on a little treadmill to make the big car go, and that if it did it wouldn’t explain anything because you’d still have to ask what makes the little car go.
But with the Imponderables, as I shall call them, our intuitions run the wrong way. Here’s Jim Flynn on free will, from the disappointing final section of his otherwise excellent Where Have All the Liberals Gone?
Why assume that we must reject the reality of free choice if that renders part of reality beyond scientific explanation? Why not assume the reverse: that we must recognise a limitation on science if uncaused causes are part of reality? (p272)
Flynn deserves his international renown as a public intellectual three times over, and I have great personal respect for him; I voted for him when he ran for New Zealand’s Parliament, and I would again. And I’ll have much better things to say about Where Have All the Liberals Gone below. But this is sheer nonsense. Especially given that, just a few pages earlier, we find this:
No good reason can be given for evading the question of whether the appearance of free will matches reality. Reality always trumps appearance. (p268)
All that saves Flynn from explicit self-contradiction is that he doesn’t define what he means by “scientific explanation”, as opposed to, well, examining reality. But there is no hard distinction between science and ordinary workaday knowledge. When you open up your car bonnet to figure out where the knocking noise is coming from, you are doing science, just as surely as any boffin in a white coat. The difference is that the boffins have better measuring equipment and more time on their hands, and are getting paid to write down what they find out – which (lest I be misunderstood) frees them from mundane limits to investigate some of the most profound questions of existence.
Anyway, I’ll save free will for later. This post is about morality. It’s a commonplace that you can’t derive value-laden propositions from value-free ones. Indeed, received wisdom has it, it would be worrying if you could; if you can reduce a moral judgement to pragmatic considerations, it isn’t a moral judgement any more. Which means we can’t ground our moral philosophy in any objective fact, and it becomes a mere matter of opinion. I disagree; I think morality can be explained in terms of non-moral – or shall we say “submoral”? – considerations. Here goes.

Ought from Is

We begin with the philosopher David Hume’s famous words:
In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.
David Hume, Treatise of Human Nature
Hume concluded that “morality is determined by sentiment”; that which is moral is that which the human moral sense approves. Unfortunately, this would mean that two people whose moral senses differ – say, a progressive liberal whose moral sense says that it is wrong to discriminate against gay people, and a traditional Christian whose moral sense says that it is wrong to have sex with someone of your own gender – cannot reason with each other about right and wrong. Since the whole point of reasoning about morality in the first place is to arrive at moral principles on which both parties can agree, this is less than satisfactory.
What does it mean to say that some action (like gay sex) is morally wrong or morally right? Does it refer to some hidden property which may pertain either to gay sex or to discrimination against gays? What sort of property would that be, and how could we find it? If it’s the sort of thing we can’t test objectively, then no-one can ever say whether or not gay sex is a wrongful action. We must therefore reject essentialism at the outset. (I discuss essentialism, and why it’s useless in pretty much all contexts, here.)
What does it mean, then, when one person says “Gay sex is (morally) wrong (in itself)”, and another rejoins “Discrimination is (morally) wrong (in itself)”? Well, the first person means “No-one should ever have gay sex”, and the second means “No-one should ever discriminate against gay people”. All moral absolutes can be directly translated into “Everyone must” or “No-one should ever” statements. If it is not true that no-one should ever have gay sex, then it is not true that gay sex is morally wrong. We can take the two statements as synonymous. And therein lies the problem.
You see, words like “should” – or “ought”, in Hume’s expression – can themselves be restated. “You should feed the cat” is simply an indirect way of saying “Feed the cat”. The same applies to moral “should”s. What our two disputants are fundamentally saying to each other is “Don’t have gay sex, ever” and “Don’t discriminate against gay people, ever”. These are not strictly propositions at all, but imperatives. And an imperative cannot be true or false. If someone says to you, “Feed the cat, please,” it is meaningless to reply “That’s not true!”
Well, that’s a bit disheartening, isn’t it? I’ve just gone and proved all moral statements are meaningless, haven’t I? Not so fast. An imperative can’t be true or false (it can’t have a truth-value); but it can be well-suited or poorly-suited to achieving a particular goal (it can have a utility-value). If someone asks you to feed the cat, it makes perfect sense to answer “You’re right, he hasn’t had a feed since this morning,” or “You’re wrong, he had a huge feed this morning and the vet says he’s getting fat.”
So what kind of utility-value could a moral statement have? Here I’d like to dispose of a red herring which is otherwise likely to lead us astray. What makes a clock a good clock? How well it keeps time. What makes a car a good car? How well it runs. What makes a pencil a good pencil? How well it writes. In general, what makes a tool a good tool? How well it fulfils its purpose. So it would be natural to think that what makes you a good person is how well you fulfil your purpose, as given to you by your creator, whoever that might be. This, I think, is why some people misunderstand Richard Dawkins and Steven Pinker and other proponents of selfish-gene evolution as arguing that we have a moral obligation to survive and reproduce at others’ expense, despite their frequent, patient protestations to the contrary. “I mean, they’re just replacing God with DNA, right?” (Hint: No.)
I recently came across an intriguing book and, like a fool, forgot to note down its title and author; it was a sort of philosophy primer for children, and it had the most excellent critique of the notion of “finding our purpose” I have ever seen. Imagine, the book said, that we discovered that Homo sapiens had been genetically engineered by super-intelligent aliens for the purpose of washing their underpants. Let’s go with that – suppose they had given us our intelligence so we could understand the instructions, our sense of beauty so we would do a thorough job, and our moral conscience so we wouldn’t steal from our masters. We would then have truly found our purpose. Would that satisfy those anxieties we have when we lie awake in the small hours, wondering what it’s all about? I can tell you it wouldn’t do it for me. It follows that my “purpose”, as such, is not what I’m really searching for.
We face a serious problem if we try to derive morality from God. “God may be our judge,” says Jim Flynn in How to Defend Humane Ideals, “but we judge God to satisfy ourselves that God is a worthy judge.” C. S. Lewis, the Christian apologist, wrestled with the problem of how a good God could create a world containing the kind of suffering he experienced following his wife’s death from cancer in 1960:
Or could one seriously introduce the idea of a bad God, as it were by the back door, through a sort of extreme Calvinism? You could say we are fallen and depraved. We are so depraved that our ideas of goodness count for nothing; or worse than nothing – the very fact that we think something good is presumptive evidence that it is really bad. Now God has in fact – our worst fears are true – all the characteristics we regard as bad: unreasonableness, vanity, vindictiveness, injustice, cruelty... It’s only our depravity that makes them look black to us.
And so what? This, for all practical (and speculative) purposes, sponges God off the slate. The word good, applied to Him, becomes meaningless: like abracadabra. We have no motive for obeying Him. Not even fear. It is true we have His threats and promises. But why should we believe them? If cruelty is from His point of view “good,” telling lies may be “good” too... what He calls Heaven might well be what we should call Hell, and vice versa.
C. S. Lewis, A Grief Observed
In other words, even if some God exists, we can’t decide that any old thing any imaginable God might want thereby counts as morality. We first need to know whether God is, in a meaningful sense, good. That means we need a criterion for distinguishing good from bad that does not depend on God. The same goes for any other entity that might have created us for a “purpose”.
What would count as a criterion for distinguishing good from bad? Lewis hints at one, and it is drawn in crystal clarity in Sam Harris’s recent book The Moral Landscape. Harris unfortunately doesn’t get the point of the ought-from-is problem (he calls the “ought” side of it a “dismal product of Abrahamic religion”); a pity, because his book captures the other essential part of what morality is. Moral statements, Harris claims, are statements about the well-being or suffering of conscious creatures. And he makes a good case.
I think we can know, through reason alone, that consciousness is the only intelligible domain of value. What is the alternative? I invite you to try to think of a source of value that has absolutely nothing to do with the (actual or potential) experience of conscious beings... my further claim is that the concept of “well-being” captures all that we can intelligibly value.
...And all other philosophical efforts to describe morality in terms of duty, fairness, justice, or some other principle that is not explicitly tied to the well-being of conscious creatures, draw upon some conception of well-being in the end. (pp32–33)
So I’m going to take Harris’s fundamental concept as settled; the basic goodness or badness of anything is a measure of how much it enhances well-being or suffering. Any other concept of good and bad is vacuous. I’ll refer you to Harris, for now at least, on the definability of the concept of well-being. Suffice to say that it encompasses health and long-term life satisfaction as well as momentary pleasure.

The Cardinal Virtues

Unfortunately, that’s the easy part. Well-being and suffering pertain to individual conscious beings, and there are a lot of different ones. The next question is: how do we resolve conflicts of well-being? What do we do when something that makes life better for one conscious being causes another to suffer? We need a principle that conforms to two tests: (a) we must recognise it as a moral principle, and (b) it must be derivable from non-moral, pragmatic considerations (or else we haven’t explained anything).
Let’s start by assuming a purely pragmatic viewpoint; the perspective of a sociopath, who has no moral sentiments at all. But this is a rational sociopath, who is quite happy to behave in a way that others would consider moral, once convinced that this is in his interests (I’m assigning a gender because it’s too awkward to keep going “she or he”, and I gather sociopaths are more commonly male). We will assume, furthermore, that our sociopath is like us in one singularly important respect: he has incomplete information concerning what he’s going to find waiting for him on his path through life.
Like us, our sociopath enjoys some experiences and dislikes others. Being rational, he wants to maximize his enjoyment and minimize his suffering. This means he must apply his rationality to his behaviour: he must have prudence. Sometimes he may feel inclined to seek an immediate reward which will result in lower satisfaction in the long term, or to flee from an immediate discomfort and so forfeit potential enjoyment. He must learn to resist these temptations: he must have fortitude. All his pleasures are likely to depend on limited resources, which he must conserve if he doesn’t want to spend the end of his life in a deprived state: he must have the virtue that used to be called temperance. That’s no longer such a good word for it, as it went out of fashion eighty or ninety years ago after coming to mean merely “abstaining from alcohol”. Nowadays this virtue is once again trendy, under the new name of sustainability.
Classics or philosophy nerds reading this will recognise three of Cicero’s four basic virtues, what Christian writers came to call the “cardinal” virtues. What about the fourth? Surely it’s too much to hope that we’ll be able to derive justice from the ruminations of a rational sociopath?
It is – until you add one more assumption about our sociopath, namely that one of his most important resources is the network of relationships of trust in the community in which he lives. Now, philosophically, that seems like a big jump. In philosophy, you’re allowed to postulate people being created by lightning strikes in a swamp (yes, that’s a thing), so a philosopher would certainly ask “What about a rational sociopath who exists alone, without a community?”
Well, Swampman may be acceptable by the standards of philosophical thought experiments, but in the real world he’s universe-breakingly unlikely. Not because of some “mere” (mere!) law of physics, but because of the mathematics of information theory, which applies to any highly specific, complex system regardless of other considerations. I’ve discussed this before: there is only one plausible way that anything so astoundingly information-rich as a reasoning brain can exist, and that is through the evolution of a self-replicating entity. Ironically, it’s our own brains’ highly-tuned skill at dealing with other brains that makes us miss the point here: in every language I know of, “who?” is just as simple and general a question as “what?” or “where?”, as if people were just as basic a concept, and just as likely to be found in a given part of the universe, as things and places. But a reasoning being has to be the offspring of another reasoning, or near-reasoning, being. Solipsism may not be refutable in pure logic, but it’s a hyper-astronomically bad bet.
So, from the simple fact that he finds himself reasoning, our sociopath ought to infer that there are other reasoning beings in the vicinity; lacking exact information on their cognitive capabilities, he ought to bet that they are something like his own. And it doesn’t stop there. Not only must a living thing be the offspring of something very like itself; any complex, information-rich feature it has must be a product of natural selection, and must therefore be adaptive, for its ancestors if not for itself. From which it follows that the capacity for reason must be adaptive for something.
I’m going to make a small leap here and assume that our sociopath belongs to a species that has language, or some language-like ability to communicate, as well as reason. I personally strongly doubt that any species could evolve the capacity for abstract reasoning without the ability to communicate it; I won’t argue the point here (though I may in a later Imponderable post), because I suspect most of my readers will agree.
Now, what adaptive value could this communication have? It immediately suggests a social species (solitary animals don’t need to communicate much more than “Please mate with me” and “Go away”), but that can’t be all – many highly social species get by just fine without communicating abstract ideas. It can help you convince others to follow your lead; if you can do that consistently, that means high status; in primates, including us, status translates more or less straightforwardly into mating opportunities for males and offspring survival for females. But none of that would work if people weren’t at least sometimes amenable to being convinced. Evidently, most of the time, conversation is to both conversants’ mutual benefit.
What sort of evolutionary pressure would it take for communicating complex abstract information to be adaptive? If you can persuade others to co-operate with you towards your goals, your reach towards those goals is no longer limited by your own personal carrying capacity, attention span, or anything else. Any being capable of expressing abstract ideas in words, including our sociopath, can therefore confidently bet that it is a member of a community knit together by relationships of mutual trust.
(All of which makes for some unexpected conclusions. Note that the reasoning behind the policy of upholding trust depends on Darwinian evolution. Note, also, that if there exists a rational being, such as a God, who can exist separately from a community of trust, then this reasoning falls down – God might be a cunning, lying sociopath for all we know, and he would have no reason at all to be “good” to anyone but himself!)

Nice Guys Finish First

At this point it’s customary to bring in the Prisoner’s Dilemma and Axelrod’s Tournament. I’ll let you Google those if you’re interested. The basic logic was put into words over two millennia before the computer models, by Plato (speaking in the character of Glaucon) in his Republic:
They say that to do injustice is, by nature, good; to suffer injustice, evil; but that the evil is greater than the good. And so when men have both done and suffered injustice and have had experience of both, not being able to avoid the one and obtain the other, they think that they had better agree among themselves to have neither; hence there arise laws and mutual covenants; and that which is ordained by law is termed by them lawful and just. This they affirm to be the origin and nature of justice – it is a mean or compromise, between the best of all, which is to do injustice and not be punished, and the worst of all, which is to suffer injustice without the power of retaliation...
So we’re talking about situations in which you gain more by co-operating than by going it alone, but where there’s a higher payoff in any one instance if you successfully exploit your partner, who then suffers a serious loss. (Obviously, if the reward for co-operating is greater than the temptation to exploit, no problem arises.) Where agents have many opportunities to engage with one another, Glaucon gets the logic pretty much right, although his tone is gloomier than it need be. Algebraically, let’s call the good gained by each partner in a co-operative pair C, the good gained by an exploiter G, and the good lost by a victim of exploitation E, so that the evil experienced by that victim is -E. Now if, as Glaucon says, “the evil is greater than the good” (E > G), then G - E (the net gain for an exploiter and victim) must be a negative quantity, whereas 2C (the total gain for a pair of co-operators) is a positive quantity. Therefore, even if G is greater than C, 2C is always greater than G - E; that is to say, any two partners gain more put together by co-operating than if one of them exploits the other. Co-operating is a positive-sum game.
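Glaucon’s inequality is easy to check with concrete numbers. Here’s a minimal sketch; the particular values of C, G, and E are my own arbitrary choices for illustration, constrained only by Glaucon’s condition that the evil outweigh the good (E > G):

```python
# Illustrative payoffs (arbitrary values satisfying Glaucon's condition E > G):
C = 3  # good gained by each partner when both co-operate
G = 5  # good gained by a successful exploiter
E = 6  # good lost by the victim of exploitation

# Total good produced by a pair of co-operators:
both_cooperate = 2 * C       # 6

# Combined net outcome when one partner exploits the other:
exploit_pair = G - E         # -1: negative whenever E > G

assert E > G                 # Glaucon: "the evil is greater than the good"
assert exploit_pair < 0      # so exploitation is net-negative for the pair
assert both_cooperate > exploit_pair  # co-operation is the positive-sum outcome
print(both_cooperate, exploit_pair)   # 6 -1
```

Note that G > C here – the temptation to exploit is real for the individual – and yet the pair as a whole always does better by co-operating, which is the whole point.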
The problem is that it’s risky to engage with a potential partner with a view to co-operation, because they might succumb to temptation and exploit you. And they’re taking the same risk if they make overtures to you. So if you want to enjoy the benefits of co-operation, you need to behave in a specific way. First, you need to lower the risk from your partners’ point of view; you need to start off by co-operating, and you need to be predictable, thus making it obvious that you’re not an exploiter yourself. But you can’t simply lay yourself open to exploitation either; if someone takes advantage of you, you have to teach them a lesson (for the good of your fellow co-operators as well as yourself). On the other hand, it’s not good to hold grudges once someone has learned their lesson, or you miss out on any further opportunities to co-operate. Computer simulations in game theory confirm these insights: the best strategies are “nice” (start off by co-operating), “clear” (they’re predictable), “retaliatory” (punish exploiters), and yet “forgiving” (don’t hold grudges).
The “clear” part is where abstract communication comes in. How do you let your partners know what you’re going to do? Tell them; then show that you meant it. Or, better, explain the principles underlying your decisions, so that even if unexpected circumstances force you to change your plans, your consistency is still evident. If you weren’t part of a co-operative community, letting everyone know what you were about to do would be maladaptive to the point of suicide – you might as well paint a target on your head. That’s why our sociopath ought, rationally, to deduce that he lives in a society that depends on mutual trust, simply from the fact that he finds himself capable of putting abstractions into words.

Being Reasonable

Remember that our sociopath has to assume that his fellow reasoners have, on average, approximately the same cognitive capacity as himself. This is where real live sociopaths tend to deviate from complete rationality: they narcissistically believe that they are the smartest person around. Do they see others routinely passing up opportunities to exploit each other, and conclude that everyone else is too stupid to know a gift when the gods drop it on them? I wouldn’t know, I’m not a sociopath. Occasionally they get lucky, and their exploitation goes unnoticed, as in the case of Josef Fritzl, who imprisoned and raped his daughter in his basement for 24 years before being caught. But that’s not the way a truly rational being would bet.
Since our sociopath is truly rational, he’ll talk to the other members of his community to try and reason out a code of conduct which will maximize gains from co-operation for everybody, including himself. He won’t threaten or attack them, for two reasons. First, because they would punish him, by withdrawing their trust at the very least. Second, it is not just their trust in him that brings benefits to our sociopath; their co-operation with each other is what allows them to amass the bounty he seeks to tap in the first place – and they, like him, are making educated guesses about one another’s trustworthiness from incomplete information. If he behaves in an untrustworthy manner, he erodes their confidence in the community (which failed to prevent him from doing so) as well as in himself.
For the same reason, he won’t attempt to deceive them. A real-world sociopath certainly would, but that’s because real-world sociopaths have an inflated sense of their own ability to exploit others undetected. Again, remember, he has to bet that they’re more or less his cognitive equals. He might come up with an elaborate plan to outwit them, but in general that will involve a lot more effort than just being straight with them. Also, deception would diminish his ability to trust them, because now he has to be on his guard against innocent discovery as well as exploitation.
No, his best bet is to persuade them honestly, using his capacity for rational argument. But that has further implications. Here, I’ll let Steven Pinker explain.
As long as I’m talking to someone, as long as I’m providing reasons, I can’t say that I am a unique, privileged person, and hope for you to take me seriously. Why should you? You’re you, I’m me. Anything that I come up with as a code of behaviour, any reason I give you for how you should behave, has to apply to me, in order for me not to be a hypocrite or to contradict myself.
Any rule the community works out through honest, reasoned debate will therefore have the feature of ensuring equal treatment; the implementation of that rule will apply equally to all members. In effect, equal treatment will itself be the first and most basic rule. It will apply to penalties for exploitation, as well as to everything else. This means our sociopath has to agree to undergo that penalty himself should he succumb to temptation, miscalculate the odds, and be caught.
These penalties will need careful balancing. On the one hand, they need to be severe enough to cancel out the rewards of exploitation, or calculating individuals will take the loss rather than forgo the reward, and the mutual trust that holds the community together will wither away. On the other, they must be mild enough that it always makes more sense for someone who has just earned a penalty to accept it than to break away violently from the community. Can we fine-tune within those limits? Is there a principle for determining the optimum penalty for a given degree of exploitation?
I think there is. I’m wary of setting it down as a logical rule because it resonates so perfectly with my instincts that I have to wonder if I’ve been completely rational in formulating it. But wouldn’t it be best if someone who was considering breaching trust and exploiting another community member had a strong motive to minimize the effects of their crime even if they did decide to go ahead with it? Well, there would be a simple, clear rule to ensure that: the principle of proportionate response. The punishment should fit the crime. The exploiter should suffer the same loss that they inflicted on the victim.
And yet that’s still too clunky. Suppose that some inexperienced opportunist decides to exploit another community member, but the plan fails and – purely by chance – no-one comes to harm. Shouldn’t they still be penalized, to deter them from trying again? Suppose, on the other hand, that one community member causes another to suffer loss through an entirely unintended accident. A penalty won’t deter the first community member from anything worth deterring. You might argue that it will still inspire fear in others; but if those others see that the damage was accidental, their fear will not have a deterrent effect – it’ll erode their trust in the community instead (“You can’t get a break here, one mistake and they jump on you”). And finally, a cunning exploiter might get a naïve community member (say) to move goods from one place to another without mentioning that they belonged to somebody else, or to run a machine without mentioning that there was someone trapped inside. Such behaviour will only be deterred by detecting and punishing the exploiter. This gives us another principle: determination of intent.
Equal treatment, proportionate response, and determination of intent? This is beginning to look awfully like an anatomy of justice.
Our sociopath wants to gain the trust of his fellows, and has discovered that the only way to do that is to earn it. But we’re not quite finished. Would you trust someone whose only idea of ethics was wanting you to trust them? They have incomplete information about him, just as he has about them. He might know that he has decided never to exploit them, but they can’t be sure of that. And if he behaves in the way that might seem rational – taking every advantage he can without actually stepping over into direct exploitation – they will be quite reasonably suspicious.
What can he do? He must go, as the saying is, above and beyond the call of duty. He must adopt a further rule, one that makes him obviously trustworthy. The new rule is going to be somewhat arbitrary, but a few considerations eliminate most of the possibilities. First, the new rule must be easy to understand at a glance. Second, it must be strict and extravagantly admirable. That means the sociopath can’t wriggle out of it by getting other people to do things for him that he’s not allowing himself to do, which means the rule must apply to his allies as well as to himself. But thirdly, the rule must not break any of the conditions already outlined, including that his responses to penalize bad behaviour in his allies must be proportionate to the actual harm they do. And therein lies the problem. What kind of rule is proportionate and extravagant at the same time?
Well, how about this one? Maximize well-being as much as you can in all conscious beings, even those not capable of entering relationships of trust with you. Be charitable to strangers on the other side of the world; leave a legacy of kindness for future people who will be unable to reward or punish you on account of you being dead; be kind to animals, in proportion as they are capable of happiness and suffering.
So if I’ve done it right, we’ve found what we were looking for: a recognisably moral principle (“Be trustworthy, do good”) derived from a submoral consideration (“Being trusted is useful”). Now to face the inevitable objections. (If you think of further objections that I don’t cover below, please comment and tell me.)

What If I’m Better than You?

Throughout my little story, our rational sociopath assumed that other people were, on average, about as intelligent as he was. Let me recap why he had to assume that, just in case it wasn’t clear. Any being complex enough to engage in reason has to exist as the product of a process of evolution; evolution works by selecting among multiple copies of self-replicating entities (on Earth, DNA); self-replication tends to produce objects that are very similar to one another; therefore, our sociopath has to assume that he lives among creatures very similar to himself; he has no rational grounds for believing himself above, rather than below, the average in intelligence.
But suppose he had other grounds for considering himself to be special? Suppose he were born into a position of power? Suppose he found himself a member of a privileged élite? Suppose he built a superweapon, by which he could hold anyone to ransom? Suppose he were Gollum, and had a magic ring that could turn him invisible?
Ah, yes. Ever since I mentioned Glaucon, the Plato fans will have been waiting for that one. Having derived the idea of justice from a pragmatic social agreement, Glaucon points out that the conclusions break down if the conditions vary sufficiently from equality. He retells the story of Gyges the shepherd, who found a marvellous ring in a cave:
Now the shepherds met together, according to custom... as he was sitting among them he chanced to turn the collet of the ring inside his hand, when instantly he became invisible to the rest of the company and they began to speak of him as if he were no longer present.
Gyges rises quickly in the world. A line or two later,
...he seduced the queen, and with her help conspired against the king and slew him, and took the kingdom.
Which, Glaucon argues, is exactly what we should have expected in such a circumstance.
...no man can be imagined to be of such an iron nature that he would stand fast in justice. No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a God among men.
Well, you can certainly see why it would be tempting. But would it be a good strategy in the long term? First of all, Gyges has no rational grounds for believing that his is the only such ring; even if there were an inscription on it, visible in the heat of the fire, that declared it the One Ring to Rule Them All, you would have to wonder whether you could trust the maker on that point. If somebody made one, somebody could make another. Gyges is not safe from exploitation by invisible thieves or murderers, unless he can call an assembly with his fellow citizens and make laws enforcing the fair use of invisibility magic.
However, that’s not the big problem. I’ve already mentioned the big problem. It is not just the community’s trust in him that brings benefits to our sociopath; for them to have accumulated anything worth exploiting, they must be able to trust each other. If they can, and if they perceive him to be a threat, they will join forces to overpower him. Uneasy lies the head that wears a crown, they say. If they don’t perceive the threat he poses, they’ll be losing the gains from co-operation without knowing who’s taking them, and they won’t keep on trusting each other for long. Soon our sociopath will be left with nothing to exploit.
So even if the sociopath thinks of himself as a Nietzschean Übermensch, and has the power to back him up, his rational course of action is to use that power to enforce justice in the community. Again, most sociopaths who have found themselves in positions like this have behaved quite differently; facing the real threat of rebellion, they defend themselves using funds extorted from their people, in an ever-growing spiral of power and paranoia. At each moment the decision to exploit may deliver the greatest marginal returns, but over time this strategy usually fails to increase anyone’s well-being, even the dictator’s.
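The logic of these last few paragraphs can be sketched as a toy iterated Prisoner’s Dilemma. Everything in the snippet (the payoff numbers, the strategies, the round count) is an illustrative assumption of mine, not part of the argument above; it just shows that pure exploitation beats a trusting partner exactly once, after which the gains from co-operation are forfeit.

```python
# Toy iterated Prisoner's Dilemma: an "exploiter" (always defect)
# against a community member who withdraws trust after being cheated
# (tit-for-tat). Payoffs are the standard illustrative ones:
# both co-operate = 3 each, both defect = 1 each,
# lone defector = 5, lone co-operator = 0.

PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(strategy_a, strategy_b, rounds=50):
    """Return total scores for two strategies over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Co-operate first; thereafter mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

exploiter, _ = play(always_defect, tit_for_tat)
cooperator, _ = play(tit_for_tat, tit_for_tat)
print(exploiter, cooperator)  # 54 150: one jackpot round, then nothing to exploit
```

The exploiter collects the big payoff in round one and a pittance thereafter; two trusting players collect the co-operation dividend every round. That is the whole case against Gyges in four lines of arithmetic.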

What If I’m Not a Nerd?

Second objection. So far I’ve been trying to argue rationally, and – well, look at the length of the post. I think it’s all logically valid (or, at least, that the premises needed to make it logically valid are trivial), but it took a lot of thinking, and it relies on scientific principles which take a certain amount of mental work: information theory, game theory, Darwinian natural selection. So am I saying that you have to study all that before you can claim to be moral?
No, of course not. Well, perhaps if you’re a sociopath. It is a matter of simple observation that we are surrounded by fellow human beings who are very like ourselves, who enjoy sex and sweet foods and beauty and meaningful achievement. This fact is so obvious that, most of the time, we don’t bother to think about why the next person we meet isn’t likely to turn out to hate chocolate and like whacking their elbow on sharp things, or perhaps to have no conscious experiences at all. Most of us don’t need to work through the logic of trust and co-operation consciously, because natural selection has done that for us, and left us with a moral sense or “conscience”. This is so strong that, in many cases, it shines through in defiance of what would be the logical conclusion from our beliefs about the world. Even people who believe passionately in Heaven think it is wrong to send people there. Even people who believe that suffering gives you good karma for future lives think it is wrong to give others that kind of good karma.

What About Other Values?

Which brings us to the third objection. What about other kinds of moral value? If our theory doesn’t match our moral intuitions at all, then we shouldn’t be calling it “morality”. On the other hand, if it matches them too well, then we’d have to suspect that they’re biasing our reasoning. The psychologist Jonathan Haidt researched what people are actually thinking when they make moral judgements, and concluded that there are five (or, more recently, six) axes of moral value:
  • Care/harm – it is good to enhance the well-being of others, and wrong to suppress it.
  • Fairness/cheating – it is good to repay favours, and wrong to exploit people.
  • Liberty/oppression (the recent addition) – it is good to grant others the freedom to do what they want, and wrong to coerce them into obeying you.
  • Loyalty/betrayal – it is good to favour your friends and allies, and wrong to abandon them.
  • Authority/subversion – it is good to respect leaders and traditions, and wrong to thumb your nose at them.
  • Sanctity/degradation – it is good to stay pure, and wrong to perform various actions that make you unclean.
Haidt argues that people who insist on all six tend to have tighter-knit and more harmonious communities than people who reduce morality to just the first two or three – at the cost of making life more difficult for outsiders. Let’s have a look at them.
Care/harm and fairness/cheating are pretty much what we’ve already looked at – they’re about earning trust in a co-operative community. Liberty/oppression follows logically from both. Being hemmed in or trapped, unable to change our circumstances, is distressing to us. As usual, the explanation is evolutionary: in the wild, it can be a matter of life and death to allow yourself an escape route in case of danger. Hence (other things being equal) coercion impairs people’s well-being, while freedom enhances it. As for justice, if one person tyrannizes another then the equal treatment principle is already violated. Coercion also interferes with determination of intent, as the subordinate ends up doing things that reflect not their own intentions but those of the coercer. If someone can force you to do things, you can’t trust them not to exploit you.
Loyalty/betrayal also fits neatly into the ethos of a community of trust. If you’re loyal, that means the people you’re loyal to can trust you. You can even argue that it’s another extension of justice: if a group of people have been good to you through your life, it’s only fair that you should return the favour. Here, though, I think our instincts may lead us astray if we’re not careful. We evolved to forage in small roving bands, with strong selection pressure to favour the members of our group (who could protect and feed us) and very little to respect the interests of other groups (who were largely competitors for food and territory). Some aspects of forager life are admirable – they seem to have a better handle on equality than most other kinds of society, and their diet and exercise regimes are those the human body evolved for – but they do live under the constant threat of intergroup violence. It would be far better for any given person’s well-being if there were peace between the groups, at the cost of some feel-good xenophobic in-group bonding, so that relationships of trust could grow across the boundaries. Loyalty does not trump justice.
Authority/subversion is more problematic. Apart from anything else, it’s often in direct conflict with liberty/oppression. Most of our primate relatives live in groups with a dominance hierarchy, so it’s not surprising that we still have hierarchical instincts. Forager societies keep these instincts in check by ribbing or dismissing anyone who gets too big for their boots; since all of us are descended from foragers, it’s plausible that all humans once lived this way. On the other hand, today’s foragers tend to live in marginal environments where agriculture is impossible, whereas most ancient foragers would have had better pickings and might well have been able to accumulate wealth and create inequalities just as their “civilized” descendants have done. In any case, once inequalities exist, it doesn’t take long for hierarchical behaviours to resurface. But is it good to kowtow to rulers and boss underlings about?
We answered that, in a way, right back at the start. Remember, we found we had to determine whether God was good before we could tell whether it was moral to fulfil his purpose for us. Similarly, we need to determine whether rulers and their rules are good before we can tell whether it is right to follow them. Insofar as our leaders uphold justice and the public trust, they should be respected. When they make rulings that oppose justice or damage the public trust, they should be defied.
Haidt also includes cases where the authority is a book or code, rather than a person, under this moral category. You’ll remember our sociopath had to adopt an extravagant behavioural rule, “above and beyond the call of duty”, so as to advertise his trustworthiness. The rule we chose for the purpose was “Maximize well-being in all conscious creatures, as best we can”. Real-life human cultures, however, have adopted a wide range of arbitrary authorities. How should we respond when we are trying to initiate a trust relationship with someone who relies on such an authority – especially if it’s an unfamiliar one? Should we flout their tradition and so make it harder for them to trust us? Should we allow them to impose the arbitrary rule on us and ours, including penalties for breaking it?
The same questions arise with sanctity/degradation, so let’s deal with them together. Natural selection equipped us with an instinct to avoid infection by microscopic pathogens, without knowing that those pathogens existed: an instinct we call “disgust”. It drives us to shun spilled bodily fluids and faeces, rotting animal flesh, and people with potentially contagious disfigurements. We’re also disgusted by sexual connections involving people we are not attracted to, even when the person they’re having sex with isn’t us. This helps to explain why Homo sapiens is one of the very few species that doesn’t mate in public view – or it would, if our disgust at third-party sex weren’t itself a mystery.
Most human cultures extend disgust to harmless meats (for instance, English-speakers typically won’t eat horse, dog, rodent, or insect meat) and to various kinds of sexual activity, regardless of the attractiveness of the participants, including incest. All such conventions, whether explicit or tacit, are accompanied by mistrust of deviants. With the food prohibitions, that very mistrust is likely what keeps the convention going. “We can’t accept those foreigners’ dinner invitation, what if they serve lobster?” – and so we never sit down together and talk about our reasons for not eating lobster. I suspect the sex prohibitions are mostly about men trying to limit other men’s access to women’s bodies, a point I argue here.
So what do we do? We want to be trusted, and giving in to the custom would seem to accomplish that, as well as demonstrating good will. If breaking the custom would genuinely distress the other person, then their well-being is surely reason enough to respect it. But if we make it a general policy to give in to such demands, we hand others a means to manipulate our behaviour through offence. It would be unjust to allow them to exact disproportionate penalties for offending their custom. We can, however, grant a partial concession in some cases. The excretion of wastes is universally felt to be disgusting, but all humans do it from time to time; likewise, couples need to have sex every so often. The obvious solution in both cases is to go somewhere you won’t impinge upon others – to seek privacy. Enlightenment liberals elevated this from a common-sense workaround to an ethical principle: a man’s home is his castle, and what goes on there is nobody else’s business. With adjustments (a man is no longer allowed to hurt his wife or children behind closed doors) it has served us fairly well, but it too has limits. It would damage the trust network, not enhance it, if we were to require all women to stay out of sight lest they give heterosexual men “impure” thoughts – a purity rule that has been active in several independent patriarchal cultures. At some point, the disputing parties will have to negotiate how far each is willing to concede to the other; and the only neutral ground is reason.

What Use is This Anyway?

Haidt posed his participants a series of questions, which are as good a launch-pad as any for the fourth objection: is our theory any use with moral problems? In each case, the question is whether a hypothetical action is morally allowable, or, if not, why not.
  • A man promises his dying mother that he’ll visit her grave, then doesn’t.
  • A family’s dog is run over. They cook it and eat it for dinner.
  • A person uses an old American flag as a cleaning rag.
  • A man buys whole raw chickens from the supermarket, has sex with them, and disposes of them.
  • A sister and brother, both adults, have protected sex by mutual consent.
The man who doesn’t visit his mother’s grave has betrayed her trust in him. Sure, he might be making a distinction between dead people who can’t be hurt and living people who can – but then again, he might just not care about keeping promises. If I knew he had to choose between visiting the grave and working to feed his children, it would be a different matter.
The family who eat their dog surely can’t have cared about it very much. They’re very eager to take advantage of it as soon as its own interests are out of the equation, if indeed it was the dog’s own interests that held them back, and not merely that they hadn’t yet got bored with it. Their action erodes our grounds for trusting them.
I’m not an American, and the New Zealand flag doesn’t have the near-sacramental status here that the Stars and Stripes does in the US. All the same, tearing up and befouling an object that is widely accepted as an emblem of a community suggests little regard for the community itself: it would be a gratuitous gesture of contempt. Again, the issue is what the action says about the person’s attitudes and their consequences for trust.
The chicken scenario triggers the same instinctive reaction in me as the dog one, but of course the chicken has already been killed for human pleasure; it’s just a question of what kind of pleasure. I suppose, since sex, unlike eating, is something we usually do with fellow human beings, it’s frightening to think that someone is getting that kind of enjoyment from dead flesh. I have to say I also find myself disapproving of the waste of good food.
In-breeding causes disease, albeit genetic rather than infectious disease, in the offspring; but our incest scenario specifies that there are no offspring. One has to wonder how the siblings’ relationship will cope, but that’s surely their business. They will have to weave a web of deceit around their activity, but only because of the taboo itself, not because of any harm or betrayal inherent in the action. The thought of having sex with a sibling makes me slightly queasy – a well-understood psychological phenomenon called the Westermarck effect – but so does the thought of eating Brussels sprouts, and I’ve no objection if someone else wants to eat Brussels sprouts somewhere where I can’t smell them. This may be one instance where our moral instincts have got it wrong.
Haidt’s scenarios are not the only moral problems out there, of course. One that often gets trotted out is the Trolley Problem, due originally to the philosopher Philippa Foot. A railway trolley is running out of control towards a stretch of track which has five workers on it. They’re all going to die unless you do something about it. The only thing you can do is pull a switch and divert the trolley to another track, which has just one worker on it. Should you pull the switch? Most people answer yes; better that only one person should be killed than five.
Here’s a variant. You’re doing triage in the emergency department of a busy hospital. There are five patients all urgently needing different organ donations. No such organs are to be had in time to save any of their lives. However, in a sixth bed is a young man who’s come in for a sports injury, with all the relevant organs in perfect health. Should you anaesthetize him, kill him, harvest his organs and save the five? Virtually everyone answers no. (The “virtually” scares me.)
What’s the difference? In the trolley problem, it’s not your fault that the one worker was on the side track to be killed. They might well have trusted the company to secure their trolleys properly, but they had no reason to trust you to privilege their track over the other workers’ track. But in the triage problem, the five cannot be saved unless there is someone to sacrifice. The young man came into hospital trusting you to fix his injury; he wouldn’t have come if he’d thought there was a risk of you killing him for his organs. And the five were surely not expecting that you would commit murder for them – otherwise, what would there be to stop you from murdering them for somebody else’s sake? Trust is the issue.

What About the Real World?

Philosophical thought experiments are all well and good, but what about the real world? Does our theory give us any answers we’re actually looking for? Let me, very quickly, visit a few morally-loaded questions that have crossed my radar in recent weeks.
  • Affirmative action. Should we give preference to women and minorities in education or employment? Or does that violate the principle of equal treatment? Wouldn’t it be better to measure each person’s competence individually and ignore what racial or gender groups they might belong to? That would seem reasonable, until you factor in the realities of life for people in the different groups. This is where Jim Flynn’s Where Have All the Liberals Gone? comes into its own. It makes perfect (pragmatic) sense for law enforcement to use profiling:
    Irish Americans have a rate of alcoholism well above that of most ethnic groups. When resources are stretched, as always, and the highway patrol is conducting random checks for drunken drivers, they would do well to stop only Irish male drivers, particularly where Irish are heavily concentrated. The problem is that they cannot be identified by appearance, and stopping all drivers to verify whether they were Irish would be self-defeating.
    (Flynn is himself an Irish American.) If you drive a beat-up-looking car, you can expect to be pulled over frequently by the police, because beat-up-looking cars correlate with crime and are easily spotted. But you can get rid of a beat-up-looking car; you can’t get rid of black skin. Steven Pinker spells out the social consequences in The Better Angels of our Nature:
    Not surprisingly, lower-status people tend not to avail themselves of the law and may be antagonistic to it, preferring the age-old alternative of self-help justice and a code of honour.
    “Self-help justice” is
    ...another name for vigilantism, frontier justice, taking the law into your own hands, and other forms of violent retaliation by which people secured justice in the absence of intervention by the state.
    Black crime and racial profiling perpetuate each other in a vicious circle. Flynn points out that black skin is also immediately apparent to employers, educators, and landlords, with similar effects. In effect, white Americans enjoy a “systematic affirmative action programme” giving them special access to virtually all benefits of civil society – special, that is, compared to what blacks get. Some kind of affirmative action programme, albeit an intelligently targeted one, pushing in the opposite direction, will be needed to achieve genuine equal treatment.
  • Abortion. Is abortion murder? At what point does a human foetus gain the rights of a person? That’s the fundamental question dividing the pro-choice and pro-life camps, although each side seems to believe that the answer is so obviously the one they favour that the other side must have some secret agenda to deny it. Since it depends on an essentialistic definition of personhood, there is no knowable answer. No “essential” change happens to the foetus at birth, but if we define personhood according to what distinguishes a zygote from an unfertilized ovum or a cell of its mother’s body, we get nonsense propositions such as that it is not murder to kill one of a pair of identical twins (since they aren’t genetically distinct), nor to kill a person with Down’s syndrome (since they don’t have a chromosome count of 46), or that it is murder not to copulate with every fertile member of the opposite sex you pass on the street (since that denies existence to potential persons). We grant people rights on two grounds: that they are conscious beings capable of suffering, and that they are communicative beings capable of entering relationships of trust. Neither condition applies to a two-week embryo, but both apply to its mother, including rape victims and poor working wives. Banning abortion imposes suffering grossly disproportionate to any “sin” on the mother’s part that the conception might have entailed.
  • Homophobia. Is it a big deal to use the word “gay” as an insult? In itself, probably not. But then it surely isn’t a big deal not to use the word “gay” as an insult, either; if someone chooses that as the place where they must take a stand for Free Speech, I’d have to wonder about their attitude to real live people with same-sex attractions. The insult depends for its force upon the fact that it describes a condition which is still widely felt, if no longer openly declared in most places, to reduce one’s right to respect in society. If anyone wishes to impose a penalty on others for same-sex coupling, the onus is on them to produce a rational argument justifying the penalty. If they wish to rely on some authority, the onus is on them to validate that authority. Absent any justification, such penalties must be considered harmful, unjust, and oppressive, whether they take the form of high-school bullying or of denying marriage rights.
  • Covering the body. Western society has purity taboos related to which parts of the human body may be displayed in public. So does Muslim society. Is one of them simply common sense and the other simply oppressive? That question answers itself. Should we require Westerners to cover themselves up to avoid offending Muslims? Not unless Muslims can provide a good reason why their taboo is better. Should we require Muslims to uncover themselves to avoid scaring Westerners? Not unless Westerners can provide a good reason why their taboo is better. Should we stand up for the rights of women in Muslim families to dress as they like, regardless of Muslim taboos? Yes, provided we are also prepared to stand up for the rights of women in Western families to dress as they like, regardless of Western taboos. But do contemporary Western nudity taboos ever cause what could reasonably be called oppression? Yes, they do. Even the Victorians were not so prudish as to object to breast-feeding in public, because the female breast was not then, as it has since become, the primary symbol of male heterosexual desire (straight men liked them, of course, but straight men like female faces, necks, backs, and feet as well). If we want to remove the taboo on public breast-feeding, we must do one or both of two things: make the breast no longer a sex symbol – i.e. display it in non-sexual contexts – or shift the responsibility for male heterosexual desire from women to men (to add another to the big pile of good reasons to do that).
  • Whistle-blowing. A multinational corporation produces dairy products, from farms mostly in one small country, at a rate far beyond those farms’ environmental capacity to sustain production. Cadmium (a toxic heavy metal) in the milk, from the fertilizer they have to use to meet quota, is approaching levels in breach of international safety standards. A large lake, one of the country’s major tourist attractions, has recently become one big algal bloom due to faecal effluent from the farms. The country’s Commerce Commission accepts the corporation’s monopoly on dairy production as “natural”, as if it were infrastructure. The corporation markets its products overseas as “100% Pure”. Though I haven’t named names, none of the foregoing is hypothetical. If the fraud were generally known, a lot of farmers would lose their livelihood. Should the truth be told nevertheless? I think you’ve guessed my answer. There must be a transition to sustainable practice that minimizes economic loss, but the corporation is grossly betraying the trust both of its customers and of the country; it need not be rewarded for holding the farmers to ransom.
  • Vegetarianism. Is it morally wrong to kill animals for their meat? Animals cannot enter relationships of trust with us, not in the full human sense, but they can experience well-being and suffering. Few of them appear to have any concept of death (the exceptions are great apes, elephants, and possibly some whales), which means that they cannot suffer from the anticipation of it. They do, presumably, suffer both fear and pain in the actual process of slaughtering. But wild animals also suffer fear and pain when they are eaten by non-human predators, which is the fate of most wild animals. If they’re very lucky, the predator will be a large felid like a tiger or jaguar, and will apply a choking bite to kill its prey quickly. If they’re unlucky, it’ll be something like a hyena, and it’ll follow them for hours taking bites at opportune moments and waiting for them to collapse from exhaustion and blood loss. I’d still say that a wild animal has a better life, overall, than an animal in a cage farm, where the fear and pain are constant; but not an animal on a free-range farm. That’s one side of the equation. The other is, how much would a change cost us? Can we do without meat? Some people can, and I applaud them; we should by all means reduce the suffering we inflict on animals. I’m not one of them. If I don’t eat meat, I get light-headed and tired, and when I do get the chance to eat it I binge on it. This may perhaps have something to do with the anaemia that’s inherited in my mother’s family. I’m fortunate to live in a country where lamb and beef are free-range (unlike chicken and pork) and very nearly sustainably farmed (unlike dairy). Not eating meat is a noble goal, but I don’t think meat-eaters are murderers.
Another word for “real-life moral dilemma” is “politics”. It won’t have escaped you that most of these answers align me with one side of the political spectrum against the other. By no coincidence, this is the side which, according to Haidt, privileges the care/harm, fairness/cheating, and liberty/oppression value paradigms over loyalty/betrayal, authority/subversion, and sanctity/degradation. I acknowledge that these are things people care about, but they’re also things that people can’t agree on without reasoned discussion, and reasoned discussion automatically moves things into realms governed by the first three value paradigms. The more transparent our values are to scrutiny by reason, the more truly moral they are. I think our culture made a real moral advance when we stopped talking about “adultery” (a violation of purity) and started calling it “cheating” (a breach of trust). As for “faith”, I think our social instincts mislead us when we apply them inappropriately. If you want someone to trust you, it is a good idea to demonstrate that you trust them. But if you’re too trusting – if you don’t make sure things are true before you believe them – then you become untrustworthy, because you end up repeating other people’s lies. Trust is what matters.

Why Don’t We Do This Already?

People who believe in a benevolent God have to face the dilemma attributed to Epicurus:
Is God willing to prevent evil, but not able? Then he is not omnipotent.
Is he able, but not willing? Then he is malevolent.
Is he both able and willing? Then whence cometh evil?
Is he neither able nor willing? Then why call him God?
Christian theologians’ attempts to solve the dilemma are collectively known as theodicy – from Greek roots meaning “justifying God”. I won’t pretend that I think this is a worthwhile way to spend one’s time. But any system that claims to derive moral good from objective facts is faced with an equivalent problem. Why don’t people behave well automatically, without moralizing? Actually, in a way it’s a good sign that this objection has arisen; it shows that we really are talking about what people should do, not merely what they do do.
People do things because they want to. They want to because their brain organization and chemistry motivate them to. Brain organization and chemistry are products of genes interacting in the environment. Genes are a product of natural selection. In sum, we re-enact the strategies that helped our ancestors become ancestors. So if being just and good is the most rational strategy, why wasn’t it discovered by natural selection? Why aren’t we naturally moral (as well as naturally moralistic)?
We did discuss a related point earlier. Acting immorally is a bad bet, but not a hopeless one. Gambling in a casino is a bad bet, because on average you’re likely to lose more than you’re likely to win (or the casino wouldn’t make a profit). But only on average. Some people buck the odds and win big. Likewise, exploiting and oppressing your fellow human beings is a bad policy, because you’re tearing down the relationships of trust that bring benefits to you, but you might succeed (from your genes’ point of view) and end up having enormous numbers of descendants. Roughly one in 200 men worldwide, mostly in East and Central Asia, share a Y-chromosome that identifies them as direct male-line descendants of Genghis Khan; since there were a lot more than 200 men in the world when Genghis lived, that makes him a tremendous genetic success story – especially when you remember that that’s only counting male-line descendants, and the number of people who descend from him through both male and female ancestors must be much greater. Genghis appears to have out-competed other warlords by being a canny statesman, but he entered the competition in the first place by being a warlord, that is, a multiple rapist and murderer. On the reasonable assumption (backed up, I believe, by Y-chromosome data) that humanity has been plagued by warlords for much of our history, most of us will have at least one or two in our direct ancestry, and there will be many genes for warlordish behaviour kicking around. The genetic benefits for those who pull warlordism off might be more than enough to counterbalance the attendant poverty and misery, as far as natural selection is concerned.
Now, someone will ask, if that logic works for genetic fitness, why doesn’t it work for well-being? Why mightn’t a sociopath like Josef Fritzl enjoy endless happiness if his predatory behaviour succeeds, and mightn’t that be enough to make the high risk worth it? The answer turns, I think, on the distinction between momentary pleasure and well-being, which some of you will have been niggling about ever since I decided to accept Sam Harris’s suggestion that well-being is the fundamental currency of moral good. Psychologists who study happiness find that it cannot be multiplied like genetic descendants. Most people have a baseline level of happiness; misfortune may push them down, or good fortune up, but they soon return to where they were before, even if the new circumstances persist. Increasing the pleasures in one’s life can increase happiness, but only up to a point, beyond which it draws diminishing returns. Hence there is not going to be a huge payoff in well-being for the sociopath who beats the odds and exploits others undetected. Many people through the ages have concluded that this is because we are not “mere” animals, that we are “luminous beings, not this crude matter”, that we have a hole in our hearts where God belongs, or some such. This is why the accumulation of immediate pleasures (especially in the form of consumer goods) and the idea that the physical universe is all there is are lumped together under the word “materialism”. I think this gets the answer exactly backwards. Let me explain.
Suppose we were immortal, immaterial souls. What would follow? Not a whole lot, because there’s not much we can say for sure about intangible things. But it might be reasonable to suppose that an immortal, immaterial soul could go on getting happier and happier indefinitely. Perhaps physical pleasures wouldn’t accomplish that, but there’s no immediately obvious reason why not. Suppose, on the other hand, that we are animals, physical systems “designed” by natural selection acting on our genes. Our genes don’t “care”, even in the metaphorical sense of natural selection, about our happiness. They only “care” about making more of themselves. Our capacity for happiness will be whatever ensures that we do that for them. Excessive satisfaction might well be counter-adaptive, if it deters us from competing for food, security, and sex. Once we have achieved our genetic purpose, we should expect our brain to produce fresh yearnings, so that we’d do it again. On the other hand, given the long lifespan and dependent childhood of our species, we should expect our genes to have built in reward systems that keep feeding us satisfaction while we work towards some big reproductive payoff. We should gain long-term increases in well-being from the knowledge that we are accomplishing a worthy purpose. And that does indeed appear to be the case.

What If I Can’t Help It?

The final objection. This is all well and good, but what if we can’t do anything about our own behaviour? Not only would that make moral reasoning (apparently) rather pointless, but it would undermine the principle of determination of intent that we identified as one of the pillars of justice. Why penalize an exploiter who couldn’t help it? What if, in short, we have no free will? How can a physical system designed by natural selection have free will? (How can a non-physical system created in any other manner have free will?)
That’s an exceedingly important question. So important, in fact, that I’m not going to deal with it at the tail-end of an essay. This post, Imponderable I, has dealt with morality. Imponderable II will deal with free will.
