Tuesday 17 December 2013

Imponderable III: The Self

I was a curious kid. I wouldn’t have been more than eight years old when I learned that green things are green because they absorb the light that isn’t green. Not that I’m claiming I understood it fully. For some time I was mystified as to why green cellophane didn’t make things look red when you looked through it, because I thought absorption was basically light going into the object while reflection was basically light bouncing off the object, so that light going through the object was more like absorption than reflection, because it didn’t bounce off, so if green objects absorbed red light then how come...?
In case you are puzzled by this yourself, absorption isn’t about light “going in”, it’s about light going in and stopping (because its energy has been captured and redirected elsewhere, always at least partly into raising the object’s temperature). The primary distinction is between light that stops and light that keeps going; the latter category is secondarily divided into light that passes through and light that bounces off.
However, at least some of my high-school chemistry classmates had evidently missed this information, because when our teacher explained how the red dyes in a plastic ANZAC Day poppy absorbed shorter wavelengths because the electrons in the iron atoms jumped up a couple of valence shells, or something, one of them exclaimed “So it’s not really red? It’s just that it absorbs the other light?”
This isn’t the Imponderable on consciousness, that’s still to come, so I won’t here speculate on why our experience of colour feels so removed from the Newtonian interpretation of light wavelength and frequency. The point here is this idea that photon absorption is somehow cheating – that there must be a “real red” somewhere which isn’t faked up out of all that physics-y stuff. Because people who have got past that, and are perhaps even now chortling at my sixteen-year-old classmate, are still susceptible to the idea that there might be “real” fear, “real” anger, or “real” love floating around in amongst their synapses and neurotransmitters.
When I was three, I was fascinated by taps. How did water appear out of nowhere when you turned the handle? My attempts to investigate (by poking small objects up the spouts, and leaving them there) were sharply curtailed by those in authority. What if you were strong enough to take the tap off the wall? Could you use it as a garden hose, or a water-pistol? Another curiosity was switches. Those explorations were even more firmly opposed by authority, but I did one day manage to find out what happened when you held a light-switch halfway between on and off. To my surprise, it did not put the lightbulb into a half-on, half-off state. It made a buzzing noise and emitted a puff of rather suspicious-smelling smoke.
At that age, you see, I thought of taps and switches as objects in themselves, objects that created water or electricity because that was their essential nature. In fact, of course, they are merely control nodes in much larger systems, and they would do nothing at all if isolated from their context. Now I’m not saying that people are merely nodes in a system, because people do retain the great majority of their individuality when removed from their contexts. The point I’m trying to make is that “essential nature” thinking is unhelpful. “People have a strong sense of themselves because that’s who we are as people” is as unenlightening as “Taps produce water because that’s what they are as taps”.

“I Wasn’t Myself”

You might get drunk one night – well, you might – and find yourself in conversation with somebody, and say something unusually blunt or bizarre. And you might wonder “Would I have said that, if I were sober?” and that’s a reasonable question. But you might very well phrase it as “Was that me talking, or the drink?” which is problematic. Having made their way across your blood-brain barrier, ethanol molecules are no more or less a part of the fantastically complex biochemical structure that is your brain than anything else. The drink has become part of you; you are now a different person from who you were before you started drinking. So which version of you should be held responsible for the consequences of Drunk-You’s ill-advised remarks?
You might have had severe epileptic seizures, and the doctors might have decided that the only option was to sever the connection between the two halves of your brain. I do not know what this would feel like. You would then probably participate in split-brain research, where they show a different thing to each side and see how you react. And the sort of thing they find is that if they show a sign saying “Walk” in your left visual field (which is processed by the right side of the brain), you would get up and walk. Then they might ask you “Where are you going?” You would not reply “You told me to walk”, because your speech is generated by the left side of the brain, which hasn’t seen the sign. Nor, it seems, would you reply “I don’t know. I just had this sudden urge to walk.” Apparently the usual response is to confabulate something like “To get a drink.” What does this say about the authenticity of our inner lives?
You might contract a toxoplasmid parasite, to pick up the example I left hanging at the end of Imponderable II, and find your behaviour and motivation changing. Assuming no mice are reading this, it probably wouldn’t make you suicidally seek to get eaten by a cat – I’m unaware of any parasite that does that to humans. But what if it did? I would confidently bet two things. I bet we, as a society, would judge the parasite to be something separate from the victim’s “self”, and on that basis would restrain toxoplasmid patients from fulfilling their destructive urges for “their own” good. And I bet the victims would judge their new feelings to be part of them“selves”, and protest against society’s unjust curtailment (as they would see it) of their actions. Who would be right?
Social media is peppered, like office break-room walls and Inspiration sections in bookshops before it, with reassuring deepities about our “selves”. Bullshit though these may be, there would be no market for them if they didn’t speak to real anxieties. Yet I’ve never seen one that pretended to address the metaphysical issues I’ve just raised. They tell us to find ourselves, to know ourselves, to believe in ourselves, to love ourselves, to care for ourselves, and occasionally to control ourselves. They don’t explain what it is that is being found, known, believed in, loved, cared for, or controlled – or what distinguishes it from whatever it is that is doing the finding, knowing, believing, loving, caring, and controlling.
Often they tell us to be ourselves. That, from one perspective, is the oddest advice possible: instructing us to do something we were by definition already doing, something we will continue to do without expending any effort or attention on the task. Yet we do things that are similarly odd every day. I tell myself to get up in the morning, and remind myself regularly of my schedule through the day. But why? If I remember it, who needs to be told? If I don’t, who’s doing the telling?
Yes, I have a pretty solid concept of “myself” as a unified, monolithic entity, even while contemplating the fact that it’s metaphysically problematic. “Myself” is the person sitting here in this chair, “myself” is the person who recklessly set “myself” the task of explaining morality, free will, the self, knowledge, consciousness, and meaning over a year ago. When I try to analyse this self-concept, however, suddenly things aren’t so clear.

Fitting In

I think I can rule out the notion that, since I am “myself”, I have instant, constant, perfectly detailed knowledge of “myself”, and anything I don’t have that kind of knowledge of (such as the precise biochemical state of my kidneys) is therefore external to “myself”. For, curiously, in forming my self-concept I use the same trick I use in forming a concept of anything else: categorization. If I knew myself perfectly, I wouldn’t need to assign myself to a category for ease of cognitive processing. But evidently I do. A few times in my life, I have had to re-categorize myself, either because something has changed or because I have learned something. And it’s always unsettling. It always feels like now I am suddenly a different person and I don’t know quite who that person is.
The first time this happened, I was nearly 21. I had recently figured out that not only was the evidence for God pretty shoddy, but the very concept of God didn’t make any sense. I remember sitting up one otherwise completely ordinary day, and suddenly saying to the empty room “I’m not a Christian any more.” At which point, the top of my head fell off. Or that’s what it felt like. Like having been stuck in a cellar, and without warning the door blows open and lets in the light and the air. For a week or two after that I didn’t know what to expect of myself. It took me a few years more to accept that the word that did fit was “atheist”, because in my old social circles that word was an accusation.
I have always had difficulty socializing. I figured out what small-talk is for by reading about monkey behaviour. Empathy is something I have to do consciously, and hypothetical empathy (how will this person feel if I do such-and-such?) is a major challenge. The misunderstandings I have blundered into in consequence, some of them serious, have added powerful social anxieties to the mix. At age 27, I made use of the free psychology service offered by the University of Otago, and was diagnosed with Asperger’s Syndrome, which the graduate student who made the diagnosis told me was a mild form of autism. If you ever imagine you completely accept a particular group, such as people with mental disabilities, try finding out you’re one of them. See what a high cliff the dividing line becomes from the other side.
However, I did eventually make the mental adjustment and re-categorize myself as a person with an autistic spectrum disability. So it was discomfiting to read, last year, an assertion by Steven Pinker that autistic people don’t realize other people have thoughts and feelings. I commented on his Facebook page as follows:
Reading my way through How the Mind Works. Mostly impressed. However, when I got up to the bit about autism... well, one of the following three propositions must be true:
(1) Steven was wrong to assert (in 1997) that autistic people don’t have a theory of mind;
(2) Asperger’s Syndrome is not a form of autism after all;
(3) I was misdiagnosed with Asperger’s Syndrome in 2005, despite discussing my degree of understanding of what other people are thinking in some detail with the psychologist.
I can assure you I have never, in my memory, had any difficulty understanding that other people have inner lives. I do remember, as a small child, thinking that inanimate objects had inner lives, and worrying about how dark and silent it must be for them with no eyes or ears.
The page admin recommended I e-mail Pinker directly with my comment, which foolishly I did. Presumably a psychologist as famous as Pinker gets amateurs taking him to task over their self-diagnosed conditions all the time; it must be one of the more tedious points of his daily routine. But it is part of my diagnosis that these things are only ever clear to me in hindsight. I got this reply:
Hi, Daniel,

I’d go with both (2) and (3) – Asperberger’s Syndrome [sic] is indeed different from autism, and for the past decade has been wildly extended and overdiagnosed.

Best,
Steve
Thankfully, this time I thought ahead and didn’t send in my instinctive response, which I’m afraid was pretty angry. Presumably neither the initial diagnosis nor the second opinion had any more impact on the neurophysiology of my condition, whatever it is, than the granting and withdrawal of planetary status had on Pluto. And, thinking soberly, there is no reason why I shouldn’t have been misdiagnosed. If it happens a lot (and Pinker would know), it’s quite likely to have happened to me. But at the time, that was too shocking to consider. I felt that my identity was under assault, just as if someone else had been caught using my bank account.
Once we get past the idea of the perfectly self-knowing self, there’s nothing terribly mysterious about any of this. Humans form coalitions to advance their common interests against other coalitions. Naturally, we need to know which coalition to join. Co-opting our knowledge-by-category cognitive trick for the purpose would be exactly the kind of thing natural selection does. Further, coalitions work most efficiently (whether against other coalitions or not) if each member specializes in a particular task, rather than everybody trying to do a bit of everything. So each individual must also choose a role within their own coalition. It shouldn’t come as a surprise that our childhood peer groups appear to account for almost as much of our personality as our genes.
Within each group, some become leaders, others foot soldiers, still others jesters, loose cannons, punching bags, or peacemakers, depending on what niche is available, how suited a child is to filling it, and chance. Once a child acquires a role, it can be hard to shake off, both because other children force the child to stay in the niche and because the child specializes in the skills necessary to prosper in it. This part of the theory, [The Nurture Assumption author Judith] Harris notes, is untested, and difficult to test, because the crucial first step – which child fills which niche in which group – is so capricious.
Steven Pinker, The Blank Slate p. 396
Untested, but highly plausible. (My niche was The Weirdo.) Skills we practise continually – social skills included – soon become what we aptly call second nature. If this is where our personalities come from, no as-yet-unknown neurological process is needed to account for it. This process of fitting and being fit into a niche within a group is known as socialization. Its mental products, the categories and roles we assign our self, together make up our social identity.
Social identity and what to do about it is a trip-wire issue in contemporary politics. In some circles you can shut people up by accusing them of not respecting your identity. In others, generally closer to the centres of power (unless perhaps in academia), you can shut people up by accusing them of playing “identity politics”. Doing the matter justice here would take up the rest of the article, so I’ll content myself with this observation: knowledge-by-category is too deeply embedded in the human mind to be switched off at convenience. It might be best if we approached every new person we met as a unique individual, without labels or assumptions, but that doesn’t happen for the wishing. Every day, I have to confront some form of the question Would I assume what I’m assuming if the person in front of me were male / white / hetero / cissexual / able-bodied / neurotypical...? This takes discipline and I’m quite sure I munt it up a lot of the time. Refusing to face it on the grounds that it’s “identity politics” is unlikely to help.

The Easiest Person to Fool

For we do an even stranger thing than defining ourselves with reference to other people: we get things wrong about ourselves. Or rather, we deceive ourselves. One psychology team asked their experimental subjects to allocate tasks between themselves and a partner:
The participants could just choose the easy task for themselves, or they could use a random number generator to decide who got which. Human selfishness being what it is, almost everyone kept the pleasant task for themselves. Later they were given an anonymous questionnaire to evaluate the experiment which unobtrusively slipped in a question about whether the participants thought that their decision had been fair. Human hypocrisy being what it is, most of them said it was. Then the experimenters described the selfish choice to another group of participants and asked them how fairly the selfish subject acted. Not surprisingly, they didn’t think it was fair at all...
Did the self-servers really, deep down, believe that they were acting fairly? Or did the conscious spin doctor in their brains just say that, while the unconscious reality-checker registered the truth? To find out, the psychologists tied up the conscious mind by forcing a group of participants to keep seven digits in memory while they evaluated the experiment, including the judgement about whether they (or others) had acted fairly. With the conscious mind distracted, the terrible truth came out: the participants judged themselves as harshly as they judged other people.
Steven Pinker, The Better Angels of our Nature p. 492
We kid ourselves that we are more virtuous, more intelligent, and more diligent than we really are. The erroneous self-image thus constructed may conveniently be called our ego. But what about the hidden, accurate information? I have a hunch. Not anything so structured as a theory, just a hunch. When I was a Christian, I prayed frequently, because that’s what Christians do. Part of prayer in my tradition was listening for what God says to you. It seemed that he did speak to me, as a thought among other thoughts in my head. I could tell which one was God because it was so wise – it knew all my foibles and secrets, but it shared none of my grudges or mental evasions and wasn’t fooled by excuses. Yet when I asked it questions about things outside of my head, it didn’t know the answers. At first I thought this was Satan trying to trick me, then that I just wasn’t listening to God properly. What I now think I was really hearing was my own self-knowledge. And I suspect that my case is far from rare among religious people.
However, labelling the inner voice “God” blocked it from telling me truths about myself that were incompatible with my religious beliefs. Only in the last couple of years have I admitted to myself that the word “heterosexual” doesn’t really fit me. “Bisexual” is about equally far away in the opposite direction. “Bi-curious” is closer than either, though misleading in other respects. This was not accompanied by the same “Who am I?” confusion as the other identity shifts I’ve told you about, because, “deep down inside” as they say, I knew it all along. I had been deceiving myself. As of now I remain hetero-romantic as far as I am aware, and needless to say I still have heterosexual privilege.
I suppose you could call the truthful thoughts, the part that knows what the ego denies, your “real self”. That would be more accurate than calling them “God”, at least. But that’s a little odd too, because most of the time you are not viewing the world from the perspective of the truthful thoughts. What the phenomenon of self-deception actually proves is that there is no “self” the way we usually picture it: no little person in the brain, or in some soul-dimension either, who monitors all your thoughts, beliefs, desires and choices. No unified entity could both know what your truthful thoughts know and believe what your ego believes.
Why do we do this? The question seems too obvious to ask. It makes us happy to believe that we are good, competent people, of course. But why does it make us happy? Recall that our pleasures and desires are products of natural selection. Suddenly the obvious flips over. Pit a realist and an over-confident fool against a hungry lion; which one has a better chance of surviving? Once again you have to remember that our minds evolved to play social coalition games. Instead of sending the two contenders against the lion on their own, have them each first give a pep talk to recruit followers. Suppose that the fool is not so foolish that he can’t convince others he knows what he’s doing, and that the others prefer to follow a capable and confident leader. Now who’s got the advantage?

Who Are You? Who, Who?

Ego evolved so we could bluff our way through status contests, and truthful thoughts so that we wouldn’t actually square up against lions all by ourselves. If we’re products of natural selection, if we exist so that our genes can make copies of themselves, does that mean that our interests are ultimately synonymous with our genes’ interests? Are our selves merely avatars of our DNA? I don’t think it’s that simple, and not just because that would be all bleak and science-y. Not if “our interests” means what makes us happiest overall. Between religious tradition and our conditionally altruistic human nature we readily suppose that when we fulfil our life’s true purpose we will be rewarded with a sense of ultimate bliss, but our genes have no interest in giving us ultimate bliss when we could be making more copies of them by trying and failing to satisfy ourselves with sex.
But (alas?) spreading our genes takes more than sex. We have to keep our metabolic engines powered up and in good repair, we have to gather and organize the materials to build the new genes and their vehicles out of, and we have to do all this in competition with other gene-vehicles doing the same thing, some very like us and some not. That is, we need to eat and drink and protect ourselves and our families. Between that and the fact that genes don’t actually know what they’re doing, we’ve ended up with a bunch of mutually incompatible drives and longings and fears all striving for mastery. Between that and the cacophony of stimuli that the world throws at us, our brains are running myriads of different processes at once, and not one of them can be sensibly called “the self”.
Not one of them. How about all of them, then? Can we identify the self with the whole mind? That’s going to make things difficult for the concept of free will we hammered out in Imponderable II. Whatever the self is and however it works, we decided that an action can only be “free” if the self is a necessary part of the network of causes and effects in the brain leading from stimulus to behaviour. But if the self is simply everything going on in the brain, then any conceivable action is by this definition “free”. We need to tighten things up a little.
Modern Western culture has accustomed us to thinking of people as unitary, self-contained individuals. Such are the characters of the novel, an art-form which has held sway over literature for perhaps three hundred years. This perspective has merit in promoting empathy, as Pinker shows in The Better Angels of our Nature, but in a philosophical inquiry it should not be mistaken for ultimate truth. Some 20th- and 21st-century writers have experimented with forms where a single personality is represented by multiple characters. What they are effectively doing is reviving the mediaeval method of allegory.
Nor is it unnatural for a lover to regard his courtship as an adventure, not with a single person, but with that person’s varying moods, some of which are his friends and some his enemies. A man need not go to the Middle Ages to discover that his mistress is many women as well as one, and that sometimes the woman he hoped to meet is replaced by a very different woman. Accordingly, the lover in the Romance [of the Rose] is concerned not with a single “lady”, but with a number of “moods” or “aspects” of that lady who alternately help and hinder his attempts to win her love, symbolized by the Rose.
C. S. Lewis, The Allegory of Love p. 118
Some of our pleasures are easily won and short-lived, like scratching an itch. Some require long-term planning and complex co-ordination of multiple different processes; think of a life-long happy marriage. Correspondingly, some pains are quickly assuaged (leg cramps from sitting still too long), while others may disrupt our entire lives – I’ll let you think of your own example this time. We may call the first kind, in each case, “shallow” interests, and the second kind “deep”.
Most animals whose life interests can conflict with one another have circuits in the brain dedicated to resolving the conflicts intelligently. In humans these circuits, found in the frontal lobes, are especially well-developed. We can put off rewards or endure discomfort for anticipated payoffs sometimes years away. We call this “self-control”. The organs involved are not muscles, but they function like muscles: exercise fatigues them in the short term but strengthens them in the long term, and they require good nutrition to stay in shape.
Self-control is prompted by the memory of regret. For all the self-aggrandizement our egos are capable of, we humans are also very good at regretting. Shame has a peculiar power of returning in full force decades after the embarrassing event, when all other emotions, even grief, have faded. Many of us readily internalize negative perceptions by others, real or potential. For some this can be the beginning of that vicious cycle of self-hatred and misery which we call depression. If you’ve ever felt like telling a depressed person that they have nothing to be sad about and they should snap out of it and stop being pathetic, please be assured that they are already telling themselves that very thing as hard as they can. In my case, the cycle broke when I stopped calling my self-critical thoughts “God”.
At last we’re beginning to get near an answer to the questions we’ve raised. In case of conflict between deep interests and shallow ones, “our” interests are to be identified with the deep ones. Even the toxoplasmid-addled mouse would be best advised to wear bite-proof armour when indulging its passion for approaching cats, precisely so that it could have that dangerous pleasure more than once. That is why the toxoplasmid (whose reproductive interests call for the mouse to be eaten) doesn’t count as part of the mouse’s self.
Our brains are busy places, and there’s a lot of stuff going on, but it’s not totally random. To pull a single “self” out of it, we construct one like a fictional character – not as a falsehood, but a kind of abstraction unifying the diversity of our mental processes. It’s rather as if we were to write the story of a netball game or a rugby match, treating the player with the ball as a single central character throughout. Drunk You, by this analogy, is the moment when there was a field invasion and the ball ended up rolling over one of the side-lines.
With the split-brain patient, you have to bear three things in mind. First, while brain processes do have to be co-ordinated with each other, they can’t do that by each containing little copies of the others, which is to say no process can “know” what the others are doing; they have to “guess” based on the available inputs. Second, the brain was created over hundreds of millions of years by various animals’ reproductive activities, not in one go by any thinking designer. And third, no-one up until the 20th century ever did anything reproductive with their brain cut in half. It would be astonishing if any of our neural modules had contingency subroutines in the event of losing contact with a still-functional opposite hemisphere.

The Centre of Gravity

Our fundamental tactic of self-protection, self-control, and self-definition is not spinning webs or building dams, but telling stories, and more particularly concocting and controlling the story we tell others – and ourselves – about who we are. And just as spiders don’t have to think, consciously and deliberately, about how to spin their webs, and just as beavers, unlike professional human engineers, do not consciously and deliberately plan the structures they build, we (unlike professional human storytellers) do not consciously and deliberately figure out what narratives to tell and how to tell them. Our tales are spun, but for the most part we don’t spin them; they spin us. Our human consciousness, and our narrative selfhood, is their product, not their source.
These strings or streams of narrative issue forth as if from a single source – not just in the obvious sense of flowing from just one mouth, or one pencil or pen, but in a more subtle sense: their effect on any audience is to encourage them to (try to) posit a unified agent whose words they are, about whom they are: in short, to posit a centre of narrative gravity.
Daniel C. Dennett, Consciousness Explained p. 418
Like many of Dennett’s metaphors (he doesn’t call them “intuition pumps” for nothing), the Centre of Narrative Gravity works at several levels of analysis. If you balance an object on a knife-edge, its centre of gravity lies in the vertical plane rising from that edge. It is defined as the single point through which all such balance planes pass, no matter which way round you turn the object. It’s not a thing, not a particular particle. Indeed, it’s not strictly part of the object at all. A doughnut’s centre of gravity is outside of it, in the middle of the hole. For all that, it’s a critical concept in predicting the object’s physical behaviour.
What goes up, as they say, must come down, because it is accelerating towards the Earth’s centre of gravity. The Earth is not a single object, much less a point particle. It’s a huge sphere of solid iron, surrounded by a shell of liquid iron, surrounded by several thick, heavy layers of assorted rocks, with a sprinkling of water and a puff of air around the outside. But here’s the rub: any assortment of objects has a centre of gravity. The system comprising my hat, my brother’s work computer in Wellington, US President Barack Obama’s most recently-purchased tie, and the seventh-largest rock in the outer ring of Neptune, has a centre of gravity at some precise point in space. That point is a theoretical abstraction, but then so is the Earth’s centre of gravity. Yet the latter does indispensable explanatory work on a prodigious number of everyday phenomena, while the former is completely frivolous. Why? Because the collection of objects that is the Earth sticks together.
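For the arithmetically inclined, the definition behind all this is nothing more exotic than a mass-weighted average of positions. Here is a minimal sketch (mine, not Dennett’s; the function name and example masses are invented for illustration) showing that the calculation works just as happily for a frivolous scattered collection as for a coherent object:

```python
def centre_of_gravity(masses, positions):
    """Return the mass-weighted average position of a set of point masses.

    masses: list of masses; positions: list of (x, y, z) tuples.
    """
    total = sum(masses)
    return tuple(
        sum(m * p[i] for m, p in zip(masses, positions)) / total
        for i in range(3)
    )

# Two equal masses ten units apart: the centre of gravity sits midway
# between them, at a point occupied by neither object -- like the
# doughnut's hole.
print(centre_of_gravity([1.0, 1.0], [(0, 0, 0), (10, 0, 0)]))
```

The arithmetic neither knows nor cares whether the masses stick together; that distinction, which makes the Earth’s centre of gravity useful and the hat-tie-rock system’s frivolous, lives entirely outside the formula.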
In our own sports-game analogy, The Player With The Ball makes for a coherent character, despite actually being multiple different people, because there is only one ball. What holds “the self” together? Why can’t my concern for animal welfare and my desire to be lean, fit and healthy gang up and vote my taste for medium-rare steak off the island? I have a body, which is not also someone else’s body. That body moves to carry out actions in response to desires and beliefs that I am conscious of. Things outside my body might be included within my sense of self: people say “I” of their clothes (“Look, I’m all green!”), their vehicles (“I’m parked near the library”), their houses (“I’m over in Mornington”), and their bank balances (“I’m sitting just below $1000”), but none of these define the self, and all are included within the self only because the body has special access to them, as opposed to other clothes, vehicles, houses, or bank balances.
This has far-reaching implications for any belief in a disembodied “soul” or “cosmic consciousness”, and also for the notion of telepathy or any other kind of mind-to-mind contact. If thoughts could pass from body to body as freely as they do within the brain, there would be no grounds for distinguishing one self from another. As it happens, there is no evidence for any such thing, which is probably just as well. The self concept, fictive though in some ways it is, is a terrific labour-saver for figuring out moral questions. “Treat people as you would wish to be treated”, “Let others do what they want, don’t make them do what you want”, and many more simple moral principles would be incomprehensible were there no individuals to apply them.
On one particular moral issue, I can’t help noting that if selves are fictive, then there is no factual answer to the question of when a self begins to exist, and therefore of when it becomes morally wrong to terminate the process of its coming into existence. Fortunately, nature – not usually a friend to our object-oriented, sharp-division-demanding human minds – has provided a transitional moment in the process which lasts only a few seconds; which can easily be observed and verified; which occurs months or perhaps years before the first glimmerings of a sense of self, or of any capability to enter into relationships of trust; and which reduces the burden on the mother’s body by orders of magnitude, thereby rendering termination unnecessary. That moment is, of course, birth.

How Do We Know?

Wait, wait a minute. I said above that no process in the brain knows what’s going on with all the other processes. Then how do I, the composite centre-of-gravity self, know what I was thinking before I started writing this? And did you notice the flaw in my argument that the “true” self is to be identified with the deeper over the shallower interest, when there’s a conflict? By that definition, any time I do something silly it’s not really me doing it, because a shallow interest has won out over a deep one. That would mean, according to our previous arguments, that I wasn’t doing it freely and couldn’t be held morally responsible for the consequences. That would not be conducive to relationships of mutual trust, so we have to refine it. The solution must be that my self-control process has miscalculated which one was my deeper interest; I have made a mistake, I am acting as if I knew something that I actually do not know. So what does it mean to make a mistake? What does it mean to know, or not know, things? What is knowledge? Yes, yet again I close with an unanswered question. We will explore the nature of knowledge in Imponderable IV.

2 comments:

  1. The DSM-5, as far as I recall from lectures last year, does not have Asperger's Syndrome, but something called Autistic Spectrum Disorder. The further along the spectrum you are, i.e. the more severe, the more similar you are to someone at a similar point on the spectrum. The less severe your disorder is, the more variety there is among similarly severely affected people. Therefore, there's a bit of all three of your points to Steve. (1) He's wrong in asserting that all people with ASD have no theory of mind. (2) Asperger's Syndrome no longer exists, so (3) technically you were wrongly diagnosed with it if it's not a real thing... It may be that all people with severe autism have no theory of mind - especially if you define being Autistic as "having no theory of mind". But it's well documented that "high-functioning" people with autistic spectrum disorder display a wide range of symptoms, which is why the old DSM-IV had the old PDD-NOS, which also no longer exists, I think.
    I'm blethering. But you get my point.

    Replies
    1. I was diagnosed in 2005, by a student working with the DSM-IV, which did have Asperger's Syndrome. "How the Mind Works" was published in 1997, and Pinker makes no reference to any kind of "spectrum". From his e-mail to me I got the impression that he disputes the concept of an "autistic spectrum", which -- well, he's the psychology professor. If he's said more about that in any technical papers I haven't read them.
