Saturday, 13 April 2019

Game of Thrones: a pre-Season 8 thoughtdump

Jon Snow

Crossposted from Dreamwidth

To get this out of the way first: a large proportion of my family and close friends don’t watch Game of Thrones for various reasons. I gather this makes them feel a bit left out of some online conversations, since Game of Thrones has pervaded pop culture so thoroughly by now. I can relate. When I was a kid at primary school, we were the only household that didn’t have a TV. For context, in 1980s New Zealand there were exactly two TV channels, of which only one had children’s programming; so every day, every kid in the school had seen exactly the same programmes the previous afternoon, which made them ideal fodder for icebreakers and small talk. Every kid, except us. At the time I blamed this fact for the social difficulties which later turned out to be autism.

...annnd I’m already getting sidetracked in the first paragraph. What I was going to say was, I know how something just being popular with other people creates social pressure, even if it’s unintended, for you to join in and pretend you enjoy it as well. And honestly Game of Thrones is not for everybody. I’m going to be talking about its merits quite a bit, so I want to be clear from the get-go that if it isn’t your thing then it isn’t your thing and that’s fine. (Though I should warn you that I’m assuming my readers are familiar with the series, so this post will be both confusing and spoilery to those who aren’t.)

Indeed, you’ll notice as we go through that I’m not doing comparisons with the books very much, and the reason for that is that the books aren’t my thing. I’ve kind of skimmed through them and occasionally browsed a page or two in bookshops, but I haven’t read them properly, and that’s because I can’t. I understand (and I’ll get into) the reasoning behind the “any character can die” dynamic, and it works onscreen for me, but on the page I don’t get the intended effect. My emotional brain basically goes “Well, if I’m going to be punished for caring about these characters then I’m not going to care about them any more.”

I’m not entirely sure what difference the transition from page to screen makes. I used to think it was because the TV characters had faces and I couldn’t help empathizing with them, but the characters on The Walking Dead have faces too and I gave up on that a couple of seasons ago because I was disengaging from the characters for much the same reason I do with the Game of Thrones books.

Ahahahaha. Yes, yes, I mean the A Song of Ice and Fire books, A Game of Thrones being the title merely of Book I (roughly equivalent to Season 1 of the show). In this instance I’ll grant the book purists the point: A Song of Ice and Fire is a much more appropriate title for the series as a whole.

Whilst most fans as far as I can tell are loving the way things have developed in the later seasons, there’s also a dissatisfied contingent who argue that the whole thing started to go downhill as soon as the showrunners got ahead of George R. R. Martin’s published material. They seem to particularly dislike the way the characters have now fallen into some pretty solid coalitions of people who mostly trust each other, leaving behind all the politicking and betrayals and whisperings and jockeying for power – the game of thrones – that characterized the earlier seasons.

Not to be overly snarky, but I think these people are missing the entire point of the series from start to finish. (This is as good a point as any to cut for spoilers.)

Sunday, 24 March 2019

The challenge of weeding out racism

Our Prime Minister Jacinda Ardern wore hijab to speak to grieving Muslims

Like many New Zealanders, I was inspired by our Prime Minister Jacinda Ardern’s declaration in the wake of the terror in Christchurch that “This is not us.” I took it as a signal of our intentions for the immediate future. From now on, from this day forward, this is not us. From now on we are vigilant for the early warning signs of white supremacist violence. From this day forward we reject every expression of racism and hatred and stop it in its tracks. To this promise we pledge ourselves. So say we all.

Taken as a statement of New Zealand’s past and present – the comfortable bubble we were all living in up until that Friday – I’m afraid it was inaccurate, as many other New Zealanders have sad cause to know intimately. We are a nation where cries of “Go home!” follow brown-skinned people down the street. We are a nation that elects anti-Muslim racists to Parliament and appoints their party leader to the second-highest position in the land. We are a nation whose primary political divide in our most recent election was between those who were racist against Māori and Pacific Islanders and those who were racist against Asians.

I happen to have the tremendous good fortune of being a white man; the only racism I’ve had come my way was on a couple of the half-dozen occasions when I’ve been mistaken for Jewish. And yet even from this position of privilege I’ve seen plenty of racism directed at others. How much more visible must it be to those on the pointy end?

(Content note: If racism in New Zealand is the last thing you need to be reminded of just now, I’d advise not reading any further.)

There was the guy in the supermarket who yelled “Come on, [racial epithet]!” when a South Asian worker, busy arranging trolleys, briefly got in his way. There was the guy who expressed regret, in tones of deep distaste, at how his country was being “taken over” by “persons of a yellow persuasion”. There was the guy on the bus who hypothesized that the East Asian owners of the internet café next to the bus stop had taken down the bus timetables to fool potential customers into parking there. There was the guy who, having come off his bike to avoid a car rounding the corner, shouted not “Watch where you’re going!” nor “I’ve got a right to use the road too!” but “Bloody Asians!”

I’ve heard people yell at the television “You’re not Māori!” when a commentator who didn’t look Māori enough for their judgement claimed otherwise. I lived through the time when Ardern’s party (before she entered Parliament) blocked Māori customary property claims to the foreshore and seabed while allowing commercial ones, and sold this policy to the nation as “the Māoris want to stop you going to the beach” (to complete the irony, the Māori claimants more often wanted to ensure public access to beaches). I heard people then joke, since by then our laws had abandoned the racist blood quantum criterion for telling who counts as Māori, that maybe they would still be able to go to the beach if they “feel Māori”. It wasn’t that long ago. New Zealand is still pretty much the same people now as it was then.

Once, conversing about my job with a social work lecturer after class, I happened to mention that in my dentistry classes there was a high proportion of Asian students. Insofar as I had a point it was to puzzle over why so few Pākehā students were going into dentistry (the paucity of Māori and Pacific Islanders in the health professions is, alas, less mysterious). But the lecturer – whose inclusive attitudes I had until that point admired – took me to be saying something quite different. “Yeah,” he said, “they shouldn’t let them in to take those places off our people, should they?”

Friday, 15 March 2019

The Ides of March

Today I saw the best and worst of what people can be. The best, first-hand; the worst, mostly via Facebook. I live in Dunedin, which is about a five-hour drive away from Christchurch, southward down the coast. I’m going to start with the bad thing, even though it happened later, so that I can end with the good thing. Quite apart from the fact that the good thing deserves the attention more, I believe that’s the way the world is going; courage is, gradually, conquering hate.

Today New Zealand got in the world news for about the worst possible reason. Our decades-long run without a public mass shooting has been broken, and the number of people killed in political terrorist acts in the entirety of our history has gone up from three to over 40. In Christchurch, this afternoon, during the Friday prayer, a white man walked into the Al Noor Mosque in Riccarton in the central city, sprayed the place with bullets, and fled. Soon afterward, a white man walked into the Linwood Islamic Centre a few kilometres across town, and began shooting.

Co-ordinated attacks by two shooters, or did the Riccarton shooter get in his car and drive to Linwood? I’ve heard both, and at a time like this I think it’s especially important to be mindful of the limits of one’s knowledge. The police also found at least one car bomb and defused it. The number of people killed is currently estimated to be in the 40s. Several of them are known to be refugees from the war in Syria, some of them children. One man has been arrested and charged with murder. Three others have also been arrested; last I heard, one had been released and the other two were being questioned. Presumably the police cordoned off the area and took in anyone who happened to have a firearm in their car.

I gather the shooter livestreamed the attack, and also published a manifesto online, just in case anyone was in doubt that the main motive for terrorism is notoriety. I understand that the local internet providers have been working to take them down, and good on them. Let me copypaste a Facebook post by a friend of mine who’s seen the manifesto:

Here’s a few quick facts from this shooter’s manifesto that he published online, so that you don’t have to read his pathetic excuses and unintelligent hate-speech.
  • He isn’t even a Kiwi. He’s an Australian citizen who was here temporarily. A little ironic considering he’s anti-immigration.
  • He originally planned to attack the mosque in Dunedin, because of a video on Facebook that he saw from the Otago Muslim Association.
  • He was most influenced by Candace Owens. I really hope that she faces the consequences of her disgusting rhetoric over this.
  • He supports Trump’s nationalist and anti-immigration stances.
There’s literally nothing else of value. Don’t read it.

I have not seen either the video or the manifesto. I have seen the shooter’s name. It will never cross either my mouth or my fingers. May it be swiftly forgotten.


Now for the good thing. I didn’t hear about the shooting until this evening because, when it was happening, I was regretfully heading back to work after attending the Dunedin branch of the School Strike For Climate. It was astonishing. I’ve been in many protests in my time, helped orchestrate a fair number of them, and I have never, ever seen one as well-organized and inspiring as this. I’m pretty sure I have, at times in the past, tutted and waxed superior about the immaturity of teenagers, for which I humbly apologize. I won’t do it again. I think the last time I saw George St filled like that was when they threatened to take away Dunedin Hospital’s neurology unit, and before that the war on Iraq. And this was put together by high school students.

For all that pundits make money touting this or that existential threat to civilization that we all need to be shaking in our shoes about, climate change is the only one that’s both real and imminent. (Nuclear war is a genuine danger but a remote one. Peak Oil is a secondary consequence of the same institutional stupidities that are causing climate change. Nothing else qualifies.)

It’s already begun; New Zealand has had a “hundred-year flood” every year for over a decade now, two of them right where I live and two more just out of town on the Taieri Plain. I knew when last winter was unseasonably mild that an unprecedentedly hot summer was on its way; I even went around telling people there were going to be big bushfires in Australia. I didn’t predict they would come as far south as Tasmania, and I certainly didn’t count on them hitting New Zealand as well, but both things happened. These events are a tiny foretaste of what is to come if we don’t take drastic action.

New Zealand doesn’t account for much of the world’s greenhouse gas emissions, but because of our small size we’re a good location for experimental social changes that the world can then scale up from. After both our major political parties embraced neoliberalism in the 1980s, neoliberals elsewhere in the world pointed at us – prematurely, it turned out – as a success story. A decade ago we got the opportunity to lead the way as developers of smart green technology, and we squandered it and hung our economy on milk instead. Can’t we please be world leaders again?

It’s easy to fall into despair over the magnitude of the problem, and that despair is a major contributor to the political inertia that has caused it. That’s why today’s demonstration brought tears to my eyes. Today I saw teenagers with a better handle on grassroots political organization than my generation ever had. Today I saw where the political will can be found to solve this problem. Today I know there is hope.


On this day 2062 years ago, a determined posse of political activists, deeply concerned for the integrity of the Republic of Rome, publicly murdered the man at the hub of the changes that they feared, and so brought about the very crisis they had hoped to avert. Their act fell short, however, of the ineffectuality of terrorism, because Julius Caesar was a genuine centre of power. Terrorism by definition strikes at the powerless; it is the epitome of cowardice. And it never succeeds. Mohandas Gandhi in India eschewed violence, and India broke free of the British Empire. The IRA in Northern Ireland embraced violence, and Northern Ireland remains a British province. The numbers across history bear out the lesson of these two examples; violence, even against legitimate targets, reduces a political movement’s chances of success by over half. Terrorist violence guarantees failure.

So, out of the action today that deserved the world’s attention and the action that hijacked it, I know which one I believe represents the future. I stand for courage, I stand for truth, and I stand for hope.

Friday, 8 March 2019

Captain Marvel: movie review

Captain Marvel movie poster, showing Brie Larson as Carol Danvers, Samuel L. Jackson as Nick Fury, and Jude Law as Yon-Rogg

Crossposted from my Dreamwidth blog

Just for fun, how many movies do you imagine fulfill all the following criteria?

  • Based on comic books, or about superheroes, or both
  • Released in cinemas
  • The title consists solely of the protagonist’s name and/or hero pseudonym
  • The title protagonist is female

Well, I can’t be bothered tracking down movies from every country in the world. But on Wikipedia’s lists of American movies there are, as of the release of Captain Marvel earlier this week, exactly six. The other five are, in order of release: Tank Girl (1995), Barb Wire (1996), Catwoman (2004), Elektra (2005), and Wonder Woman (2017). The 1984 movie Supergirl was apparently British, not American, but you can go ahead and include it if you like.

By contrast I count about 49 American movies which fulfill all the other conditions but have a male title protagonist. That’s being conservative, because I chose not to count titles containing epithets that refer to their heroes but aren’t their actual names, like “The Dark Knight” or “The First Avenger” or “Man of Steel”. If I had chosen to include those, that would have added at least another half-dozen to the male list and exactly one to the female list: My Super Ex-Girlfriend (2006). I also didn’t count sequels even when the title was just the character’s name and a number (e.g. Deadpool 2), which would have lengthened the male list by another dozen or so and the female list not at all.

You could argue that manga should be counted as comic books, which adds exactly one more American movie to the female list, namely Alita: Battle Angel, again released only weeks ago. And if you want to include movies named for more than one character, that brings in things like Batman & Robin and Batman v. Superman on the male side, and one lone female character taking second place in the title of last year’s Ant-Man and the Wasp.
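The tallying exercise above is really just filtering one list of movies four times over. Here’s a minimal sketch of it; the handful of entries and their field names are my own invention standing in for Wikipedia’s full lists, not real data:

```python
# A toy version of the movie-tallying exercise. Each entry records the
# four criteria from the post; the dataset is illustrative only.
movies = [
    {"title": "Wonder Woman", "protagonist": "female",
     "title_is_name": True, "comic_or_superhero": True, "theatrical": True},
    {"title": "Elektra", "protagonist": "female",
     "title_is_name": True, "comic_or_superhero": True, "theatrical": True},
    {"title": "Deadpool 2", "protagonist": "male",
     "title_is_name": False,  # sequel number appended, so excluded
     "comic_or_superhero": True, "theatrical": True},
    {"title": "Man of Steel", "protagonist": "male",
     "title_is_name": False,  # epithet rather than an actual name
     "comic_or_superhero": True, "theatrical": True},
]

def qualifies(m):
    """All the non-gender criteria from the post, applied at once."""
    return m["comic_or_superhero"] and m["theatrical"] and m["title_is_name"]

female = [m["title"] for m in movies if qualifies(m) and m["protagonist"] == "female"]
male = [m["title"] for m in movies if qualifies(m) and m["protagonist"] == "male"]
print(female)  # ['Wonder Woman', 'Elektra']
print(male)    # []
```

On this toy data the “conservative” exclusions bite exactly as described: both male-led entries fall out on the title criterion before gender is even considered.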

The YouTube comments on trailers for Captain Marvel are full of remarks like “Ooh, a strong female character, how novel” and “I don’t go to Marvel movies for the politics.”

Mind you, having now seen the movie, I can tell you there’s another strain of YouTube comments that’s even more ironic: the kind that go “I don’t need to see the movie now, they put the whole thing in the trailers.” The trailers are almost entirely taken from the first half-hour or so. The rest of the movie then takes the premise set up in that half-hour and unabashedly flips it upside-down to lie, undignified, waggling its legs in the air.


Spoilers both great and small below the cut.

Wednesday, 13 February 2019

Did I always know I was bisexual?

I am bisexual
Reposted from my Dreamwidth blog

How long have I known I’m bisexual? A simple question with no simple answer. Someone passed a meme around on Facebook last week saying “It’s fine if you haven’t always known,” which prompted me to reflect.

I have accepted it for seven or eight years, I suppose. But was it a matter of learning something about myself I didn’t previously know? Or was it just that I started to be honest with myself about something I’d always known? Neither of those sits quite right with my memory.

I didn’t come out to anyone but my partner for several years after this realization. Even now, although I openly identify as bisexual online, you wouldn’t guess it from my life in physical space. Primarily, of course, my partner and I were and are in an exclusive relationship, and had been for years before that, so I’m not seeking romantic or sexual partners of any gender and have no intention of doing so.

(This is something people sometimes misunderstand, so in case this concept is new to you: no, that doesn’t mean I’m not really bisexual or that I’ve “chosen a side”. I’ve chosen a person.)

But it took years for me to summon up the courage to come out at all, even online. I’ve never taken part in any Pride event, publicly or otherwise, nor any other LGBT-related social activity. Last year a friend invited me to a “coming out stories” session as part of a campus LGBT awareness week; I chickened out.

I grew up Evangelical, which in New Zealand isn’t quite as tightly bound to conservative politics as it is in the US, but on some issues there is definitely a Godly side and a Satanic side, and at least back in the ’80s and ’90s sexual orientation was one of those issues. Meanwhile in the secular culture which I encountered at school, to be gay was the very depth of loserdom, the nadir towards which lesser losers such as geeks and nerds and the arty-farty were presumed to be drawn.

Once I entered an environment where I had to justify moral positions with reasoning, I quickly accepted (intellectually) that there was no justification for opposing same-sex relationships. With a personal history shaped by Evangelicalism and Kiwi-bloke toxic masculinity, however, my emotional reactions took over a decade to catch up – and indeed, acknowledging my own bisexuality was a late stage in that very process.

Nowadays my only contact with the Evangelical community is through my family and some old friends, and if they’re any indication then the norm seems to be shifting. But that’s only a few people, and those few might just as easily be drifting away from the norm as drifting along with it.

Anyway, my single biggest reason for delaying coming out publicly was that I felt a bit presumptuous suddenly identifying as a member of a community which I knew very little about and had a history of being uncomfortable with.

There existed in my teenage years a movement which called itself “Gay” and “Queer” – yes, “Queer” – with its own symbols and aesthetics and its proprietary words, including “bisexual”. This movement seemed entirely alien to everything that was familiar to me, and of course both sides of my cultural background actively encouraged that alienation. I didn’t see any connection between the rainbow flags and the pink triangles and the fishnets and sequins, on the one hand, and my own developing sexuality on the other.

Saturday, 12 January 2019

Why nudity is worth defending

Riders in the 2009 World Naked Bike Ride pause in front of the White House

Nudity ought to be legal and accepted everywhere it is physically safe. The fact that it is not is a societal injustice. I know most of you aren’t going to agree straight off the bat, so let me lay out my reasoning and see what you think.

Admittedly, it’s not a major societal injustice. There are other injustices with more dire consequences for more people, that more deeply undermine our ability to trust each other and are more urgent priorities. Relatively few people share my autistic sensory aversions to clothing, and those aversions don’t usually rise above the level of mild discomfort unless it gets very hot or the clothing in question is wet. (Swimming-togs feel like knives cutting me.) But most of the time, I think, struggles against different injustices help rather than hinder one another. Raising people’s awareness of one injustice makes them more alert to other injustices, not less. It isn’t a competition.

First point: People deserve a degree of respect simply on account of being people. That includes being able to go about one’s daily business without harassment from one’s fellow citizens. There is no amount of clothing one might wear or not wear that would make one deserve to be yelled at, ogled, pelted with rubbish, or chased off the streets. It is therefore unfair to yell at someone, ogle them, throw things at them, or chase them away because of what they might choose to wear or not to wear.

Second point: Injustice is fundamentally the same thing as unfairness. We just tend to reserve the weightier word for when there are graver consequences, such as when discrimination is enforced by the police or when it prevents people from participating fully in society. Therefore, if people are threatened with arrest or prevented from participating in society due to what they are wearing or not wearing, that is an injustice. If the law allows or prescribes it, the law is unjust.

Third point: People are in fact harassed, arrested, and ejected from public places if they go nude. We’ve just agreed that this would be an injustice if it happened; well, it does happen, and therefore it is an injustice.

Finally, this particular injustice is enforced by society as a whole, not just by officers of the law. That makes it a societal injustice. The fact that nudity is not legal or acceptable is a societal injustice. There you go.

Somehow this is easier to see when the body taboo in question is that of a culture that isn’t our own – when it’s Arab police forcing women into hijab or French police forcing them out of it, Victorian missionaries imposing Western clothing on Pacific Islanders or that one group of Pacific Islanders (the Kwaio on Malaita in the Solomons) who impose toplessness on Western visitors. But an injustice is an injustice, and it is in the nature of societal injustices that they feel like ordinary common sense to enculturated members of the societies that enforce them. Which would include ours. Which means that just because wearing clothes feels like ordinary common sense to us, doesn’t mean that it’s not a societal injustice.


Now, how serious an injustice is it? Is anyone seriously hurt by having to wear clothes (obviously not counting us autistics and our autistic sensory issues which make us, as we are reminded daily, such a nuisance to normal people)? Well, there are a couple of problems that I think are bound up with it.

Thursday, 27 December 2018

The imminent and well-deserved demise of Tumblr

Truth Coming Out Of Her Well to Shame Mankind

On 3 December Tumblr announced it was getting rid of adult content, starting two weeks from the announcement. Tumblr management has never been competent at the best of times, but this takes the cake. This will kill the site.

Those of you who don’t have Tumblr accounts will scarcely imagine what a dumpster-fire this is. Just to start with, the main draw of Tumblr was that it was the one social media site that allowed adult content. What Tumblr has just done is basically what I’ve seen a couple of local hospitality businesses do – a pub on the Otago University campus years ago, and a café in Port Chalmers more recently. Both had quirky artwork that gave them an alternative vibe; both were bought by new owners who removed the quirky artwork and tried to rebrand them as bland mainstream venues; both promptly went out of business. The campus pub was eventually revived by a competent proprietor; the Port café is currently sitting empty, a monument to the folly of erasing a successful enterprise’s main point of distinctiveness. The same fate awaits Tumblr.

I was on Tumblr for nearly two years. My original hope, when I joined, was to spread this blog a little further, since after six years I have exactly two followers here. This was not to be, because Tumblr’s search software passes over posts that contain links to other sites – far from the only way in which Tumblr fails its users – which meant that the links to this blog were only seen by those few of my followers who happened to be online at the moment I posted the links. Additionally, I gained Tumblr followers relatively slowly; I think my peak following, after two years, was 182 including spambots. I think this may have been because I declared on my blog header that I was a nudist but didn’t intend to post any nude photos, which lost me both the people who think nudism is skeevy and the people who want to see nude photos. But I can’t be sure.

(Tumblr advertises itself as being good for artists. How, you might wonder, are they any good for artists if they make posts with external links non-searchable, so that artists can either get their art seen or link to places where people can pay them for it, but not both? Well, they aren’t, that’s how. And yet, so many of us put up with them, until now.)

The announcement came in the form of an official letter from the Tumblr staff account, which I would go through and point out all the lies except that it’s all lies. The new guidelines ban, and I quote, “images, videos, or GIFs that show real-life human genitals or female-presenting nipples,” in case you were wondering why the phrase “female-presenting nipples” became a commonly repeated joke in early December. So Tumblr is apparently enlightened enough to talk in terms of gender “presentation”, and yet reactionary enough to frame femaleness as inherently more sexual than maleness. Remember, this is not an old policy. This is a policy devised in 2018 by the curators of one of the internet’s biggest platforms for feminist and LGBT content.

Most of us first knew something was going wrong on 16 November, when the Tumblr app disappeared from Apple’s App Store. For those like me who used Tumblr on our computers rather than on mobile, the first sign was when Tumblr responded by suddenly making huge numbers of posts non-searchable, based on their tags – not just things like #NSFW and #nudity but hundreds more, some quite inexplicable. I never did find out why we suddenly couldn’t search for #chronic pain. Meanwhile, posts tagged with things like #white supremacy and #white genocide continued to pop up in search results.

Our first thought was that this was a ham-fisted attempt to deal with the exponentially increasing nuisance of pornographic spambots. Any post on any subject at all whose count of responses reached five digits would suddenly start getting reblogged by porn-themed accounts with generic comments like “Cool, see my site here.” You could report and block, report and block, report and block, and new bots would keep on coming. They weren’t intended to attract the eyes of the bloggers they were harassing, but to piggyback off them to hoist their coders’ porn sites up the Google search rankings. You see, the way Tumblr works, if you reblog a post and add a caption, then there is now a link to your blog from every other blog that has also reblogged that post. If thousands of people have reblogged it then that’s thousands of links, all from legitimate content-bearing webpages that real people read, and that’s what Google’s search algorithms look for. So our first response was: well, yay that the staff are doing something about this at last, but they need to get a lot smarter about it, please.
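The piggybacking arithmetic works out like this (a toy model; the post names and reblog counts below are invented for illustration):

```python
# Toy model of the reblog-backlink mechanism described above.
# A captioned reblog ends up reproduced on every blog that has also
# reblogged the post, and each of those copies carries a link back to
# the captioning blog. So one caption on a post with N existing reblogs
# is worth roughly N inbound links in the eyes of a search engine.

def inbound_links_from_one_caption(existing_reblogs: int) -> int:
    # One backlink from each page already hosting a copy of the post.
    return existing_reblogs

# A spambot that captions a few heavily-reblogged posts (invented counts):
popular_posts = {"post_a": 12_000, "post_b": 45_000, "post_c": 98_000}
total = sum(inbound_links_from_one_caption(n) for n in popular_posts.values())
print(total)  # 155000 backlinks from just three captioned reblogs
```

Which is why the bots went after posts with five-digit response counts: one caption there did the work of tens of thousands of links from throwaway pages.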

Of course we were quite mistaken to think Tumblr management had suddenly started caring about their users’ experience of the site. The pornbot coders wised up within a week or two and reprogrammed their bots to use the tag #SFW and no unsearchable tags, which it turned out was all it took to get past the algorithms. Those of us who conscientiously tagged Pre-Raphaelite paintings as #nudity (so people could filter them out on their work computer just in case their boss looking over their shoulder got the wrong idea) continued to be punished for our honesty.

The next hypothesis was that it was about the child porn, and that does seem to be what sparked Apple’s ire. Tumblr’s strategy for dealing with paedophiles was exactly the same as their strategy for dealing with Nazis, to wit “have a Block function and let the users do all the work”. I’m glad to say I never saw any myself, but many other users had been making complaints to the staff about the problem for years, with the same results as every other complaint to the staff. Getting dropped from the App Store, now that was something they cared about.

But while the App Store incident undoubtedly fast-tracked the adult content ban, the truth is it had been coming for months. Like many other social media platforms, Tumblr’s business plan is to hire out their users’ eyeballs to advertisers. They evidently noticed that their site was getting popular with the social justice crowd and in particular the Black Lives Matter movement, and they apparently had a big plan in the works to capitalize on that. And many advertisers baulk at the idea of people seeing their ads right next to GIFs of sex acts. So the nudity had to go. We used to think that if Tumblr had just one virtue, it was that they understood that not all nudity is sexual and not everything sexual is degrading; now it turns out they just didn’t care about being degrading until there was money in it.

On top of all that, there’s a draconian piece of legislation coming into force in the United States next year called SESTA/FOSTA – I can’t be bothered looking up the acronyms but the ST in both halves stands for “sex trafficking”. Nobody with any human decency could oppose stopping sex trafficking, which makes it the perfect pretext for interest groups pushing less creditable agendas. Under the new law, social media hosts based in the US will be liable for allowing sexual solicitation on their sites, even unknowingly, even if it’s just one ad. This is why Facebook, from next month on, is going to start cracking down on anything that could remotely be interpreted as a sexual invitation, up to and including posts consisting of “looking for a good time tonight ;)”. This legislation will incidentally stifle discussion of sexuality or sexual orientation in social media; I’m pretty sure that’s a plus as far as its originators are concerned, but it might come as an unpleasant surprise for some of its more liberal supporters.

How deleting Facebook posts saying “looking for a good time tonight ;)” will contribute to stopping sex trafficking is: it won’t. But at least Facebook understands that you need human intervention to make this sort of thing work. Tumblr think they’re going to accomplish it with software, and indeed with ludicrously simplistic software. They’ve promised that nude paintings and breastfeeding photos and news articles about nude protests will be safe once they get their programs properly trained, but the algorithm they’re using could not possibly make such fine distinctions even if you trained it on all the data on the internet for a thousand years. You may have seen funny Facebook posts about the photo-captioning AI, trained on pictures of fields of sheep, that tagged all fields as “sheep” and failed to recognise sheep in any other context. It’s the same kind of algorithm.


Nude images follow.

Thursday, 4 October 2018

Cannabis

So it turns out the Otago University Proctor has been burgling student flats and nicking people’s bongs. There was a big protest against him last week, which I sadly couldn’t attend due to a class. I remember a time when the student libertarians could make common cause with us campus Lefties on cannabis decriminalization, if nothing else, but all the right-swinging commentary I’ve seen on the issue this week has been of the bog-standard cookie-cutter “if you think this is abuse of power go live in North Korea” variety.

It is morally wrong for a person in authority to break into people’s private spaces and remove things unless their authority is specifically and publicly constituted to grant them the power to do that, and the Proctor’s is not. This follows so straightforwardly from the trust principle that I can’t be bothered laying it all out. It is equally clearly morally wrong to constitute any authority to give its bearer the power to break into people’s private spaces and remove things unless the things in question pose sufficient risk of harm in that space to outweigh the breach of trust occasioned by the break-in. Any law which grants (for example) the police such powers is an immoral law and ought to be both changed and, until it is changed, resisted. Again, taking the trust principle as the basis of morality, which I do, that follows as night follows day. And I’ve argued the trust-morality connection over and over again on this blog, so I shall skip over that too. The only remaining question, and the topic of this post, is: does cannabis pose a risk of greater harm than is caused by arrests, seizures, prison sentences, and permanent criminal records?

The answer to that question is as follows:

No.

Well, that was easy. See you next ti—

...oh, all right. Here, have a graph. If you click it, the link will take you to the data behind it.

There are exactly zero confirmed cases of death from cannabis use – in stark contrast with alcohol and tobacco, both of which are legal. For addictiveness it’s roughly on a par with caffeine. It is associated with the same sorts of respiratory injuries that tobacco is, but that’s (as far as researchers can tell) because it’s delivered as a smoke and so comes along with tars and similarly harmful products of combustion. Hence if your drug laws are to have any semblance of logic or consistency, either you must allow cannabis or you must ban alcohol and tobacco. And if you’re thinking about the latter option, the United States already tried that with alcohol and it was a disaster.

Cannabis does bring a heightened risk of developing schizophrenia if it’s heavily consumed in adolescence. That, I’ll grant you. And it is reasonable to ask how, once legalized, it would be kept out of the hands of teenagers, given that the surest way to get teenagers to try something is to tell them it’s for adults only, as the tobacco companies know to their tremendous profit. But I’ll tell you a big secret: prohibition isn’t keeping it out of the hands of teenagers either. If you sell a substance legally, you’ll probably think twice about selling it to under-age people because that’ll get you in trouble. If it’s a prohibited substance you’re in trouble anyway, so why narrow your market?

The damping effects of cannabis on motivation are much better-known than the medical evidence for them actually warrants. Part of the problem is that people assume the cause upon seeing the effect. I have long hair that I can’t seem to get the knots out of, and I don’t wear shoes much, and so I have often been asked by complete strangers where they can score some pot in Dunedin; whereas you probably wouldn’t make that assumption of an internationally successful scientist and author like, say, Carl Sagan. On the handful of occasions when I have smoked pot, I’ve found my motivation increased rather than lowered – one time I stayed up all night writing.

I won’t deny there are people who take up cannabis and proceed to lose all interest in life; but, as is typical with other substance dependencies, I suspect you’ll find this is a sign of underlying stressors like an abusive home life or an unsustainable work or study schedule with no other way out. And if someone is having substance problems, it’s easier for a legal supplier than an illegal one to refuse their business, as G. K. Chesterton long ago pointed out with regard to alcohol. Perhaps this is why Portugal has experienced such a drop in drug-related social problems since they decriminalized all recreational drugs there.

There are of course many wild claims out there about the health benefits of cannabis; if you believe some people, it’ll cure everything from cancer to the common cold. It’s hard for researchers to find out which of these claims have substance on account of, you guessed it, prohibition. I mean, sure, a government can license a pharmacology institute or whatever to do controlled experiments, but they can’t exactly license the entire supply chain so the institute can get hold of enough of the stuff to run anything with a meaningful sample size. Nevertheless, it’s pretty clear that cannabis is an effective painkiller, and last I heard there was some evidence – at a “not sure yet but worth following up” level – that it might slow the progression of some cancers.

Even if you want to be cautious about recreational cannabis, there is absolutely no justification for keeping it from being used as a medicine. Obviously it wouldn’t come from the pharmacist as a cigarette, since smoke and tars do so much damage to your lungs. And it’s a bit slow to get working when taken orally, which is why people sometimes get into trouble the first time they try edibles, eating more and more just because it hasn’t hit yet. The fastest form of delivery might be a nasal spray; your smelling nerves take in molecules from the air because that’s how smell works, and they’re a direct channel to the brain because that’s how nerves work. For people with impaired olfactory function, an inhaler might be a second option.

Actually I started writing this a few days before the protest. I was editing some lecture notes in a university library a couple of tables away from the door to a videoconference room, and three students went in and proceeded to hold what was evidently the Nay side of a video-link debate on the legalization of cannabis. The male one of the three had a particularly loud and annoying voice, but all of their arguments were so stupid that I must, in charity, suppose that they had been roped in to argue the Nay side without believing a word they were saying. (This is why I’ve never done competitive debating.)

Their worst argument of all was against medicinal cannabis. We don’t need it, said Mister Loud Voice, because we’ve already got painkillers. Maybe if I hadn’t had so many pharmacy lectures in the last few years that wouldn’t have sounded quite so head-smackingly silly. Yes, we have other painkillers. We have paracetamol (what North Americans call acetaminophen), which is safe unless you have liver disease. We have non-steroidal anti-inflammatories like aspirin and ibuprofen, which are moderately safe unless you have stomach ulcers or acid reflux. We have corticosteroids, which do nasty things to your bones if taken for any length of time, and we have opioids, which right now are killing Americans in record numbers as the cohort who got addicted to them en masse in their teens enter retirement. A lot of elderly people have both liver and stomach trouble and can’t really afford to make their bones any weaker than they already are. What possible justification can there be for denying them a safe, effective painkiller?

Close behind was the bit where they said their main concern about cannabis was that it might lead people on to other, more harmful drugs. Yes, I’m sure that happens. But do you know why it happens? It happens because once you’ve got into cannabis, well hey, you’re already breaking the law and thumbing your nose at a disapproving society, so why not go for a harder-core experience? The same happened with alcohol when that was prohibited. Once again, legalization would lighten the problem, not worsen it.

All the same, that argument does illuminate the cultural mindset that hangs like a ball-and-chain about the ankles of the legalization movement. The basic problem is one of our mental categories for hazardous substances. Some substances are filth and some are poisonous and some are medicines and some are chemicals, but the troublesome class here, the one cannabis has been put into, is the class we think of as Drugs! (The exclamation mark should be pronounced as a horrified gasp.) I can’t just call them “drugs”, because that word, like “chemicals”, has an application which is scientifically meaningful, morally neutral, and vastly broader than the colloquial one. Drugs in the scientific sense do include cannabis, and also alcohol – the phrase “drugs and alcohol” makes about as much sense as “vehicles and cars” – and caffeine, and nutmeg, and St John’s wort, and moisturizers, and anything else that has a dose-dependent effect on the human body or mind.

Drugs! as a subclass of drugs share no common attribute but that they are illicit. But because they’re a culturally recognisable category, it’s easy to think of them going together. Thus, to Mister Loud Voice and millions like him, cannabis goes naturally with heroin and cocaine and methamphetamine because they are all Drugs! The government in its infinite wisdom has recognised that people should not use Drugs! and so has passed laws against them. If you let people use one kind of Drugs! then you are declaring that Drugs! are OK after all, so of course they’re going to move on to other kinds of Drugs! as well. In case you haven’t guessed, I think the very concept of Drugs! hinders rather than helps us in developing rational drug safety policies. But that’s by the by.

So if there’s nothing wrong with cannabis, why is it illegal? I don’t think Mister Loud Voice used that particular argument, but others certainly have. The answer, of course, is that most law-makers think that way too. But why did it become illegal in the first place? Recreational marijuana, previously the most popular legal alternative to alcohol, was banned in the US just when Prohibition was lifted and all the enforcement officials were looking for new jobs. Medical cannabis followed in the 1970s as a casualty of Richard Nixon’s War On Drugs. And the US has enough clout internationally that a lot of other countries still think it’s an example worth following. I’m chary as a rule about Big Bad Government conspiracy theories, but in this case we have a paper trail and a confession:

The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the anti-war Left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.
John Ehrlichman, Assistant to President Nixon for Domestic Affairs, in a 1994 interview published by Harper’s Magazine in 2016

And nowadays of course you have the private prison industry in the US, which has threatened to sue several states if they legalize cannabis and thus reduce the prison population with its convenient supply of cheap labour and legal inability to vote. So in summary, the Proctor of Otago University has committed burglary against the community he is tasked with serving in order to uphold a legally-sanctioned societal prejudice known to be based on fifty-year-old racist lies. I would have joined in the protest if I could. Consider this my contribution.

Thursday, 23 August 2018

In defence of multiculturalism

“This is the future that liberals want”: a woman in full-face hijab sitting next to a transgender woman on public transport

Though this blog could very well be retitled Stuff I Disagree With, I try not to argue with the same person two posts in a row, especially when it’s someone who is mostly on the same side as me. So I’m sorry to have to pick on Chris Trotter again. But his recent Bowalley Road post “Checkmate In Two Years?” needs a response. I’m not debating its major thesis – I don’t know whether the present media flap over free speech for “alt-right” bigots will or will not blow up into an electoral defeat for Jacinda Ardern’s Labour-Greens government in 2020. I can’t see it myself, but Trotter has historically been better at predicting New Zealand election outcomes than I have. But I have some things to say about the points Trotter raises along the way.

Let’s start with this:

There’s a saying, often attributed to Voltaire, which declares: “To learn who rules over you, simply find out who you are not allowed to criticize.” The free speech controversy, by identifying multiculturalism as the concept Kiwis are not allowed to critique without drawing down the unrelenting wrath of its state-sanctioned and supported defenders, has caused many citizens to wonder when and how “nationalism” and “biculturalism” became dirty words.

Do I have to pull out that xkcd cartoon again? When did Southern and Molyneux get arrested? Because I don’t remember seeing that bit on the news. People protested against them, yes. That is to say, people criticized them loudly and angrily. There’s a saying, often attributed to Voltaire, which declares—

OK, cheap shot. Again, I’m not here to rehash my previous post. I’m here to talk about “multiculturalism” and “nationalism”, and in New Zealand that means giving biculturalism a look-in as well. Trotter’s post makes as good a springboard as any.

First up, either Trotter or I must be confused about what “nationalism” means. Trotter says

A country whose elites have signed up to an economic philosophy based on the free movement of goods, capital and labour – the three fundamental drivers of globalization – is more or less obliged to adopt multiculturalism as its core social philosophy.
Old fashioned New Zealand nationalism, and its more recent offshoot “biculturalism”, were products of a country which saw itself as offering something uniquely and positively its own to the rest of the world. It is probable that a substantial majority of Kiwis still subscribe to this notion (although a significant minority still struggle with the concept of biculturalism).
What the free speech controversy of the past four weeks revealed to New Zealanders was that too forthright an expression of cultural nationalism can result in the persons advocating such notions being branded xenophobic or racist – and even to accusations of being a white supremacist, fascist or Nazi.
The battle for free speech cannot, therefore, be prevented from extending out into a broader discussion over whether or not New Zealanders have the right to reject the downsides of neoliberalism, globalization and multiculturalism. Is it any longer possible to advance the radically nationalistic idea that the nature and future of New Zealand is a matter which New Zealanders alone must decide, without finding oneself pilloried on Twitter or banned from the nation’s universities?

Abstractions are always fuzzy around the edges, and “nationalism” shades into “racism” along one edge and “patriotism” along another. Still, Trotter is here giving the term a usage that I do not recognise. As I understand the word, the central concept of nationalism is to connect political sovereignty within a given state to membership of some particular ethnicity, understood as being the rightful owners (in some sense) of that state. Foreigners and immigrants, except for expatriates of the favoured ethnicity, are to be excluded from the political process. Typically this exclusion is to be accomplished by exclusion from the territory, sometimes with the alternative option of cultural and linguistic assimilation into the favoured ethnicity. It is not about “offering something uniquely and positively our own to the rest of the world”. It’s about keeping something uniquely and positively our own all to ourselves and the rest of the world can naff off.

What does Trotter mean by “the nature and future of New Zealand is a matter which New Zealanders alone must decide”? Is the “New Zealand” whose future is being decided exactly the same entity as the “New Zealanders” doing the deciding? Is there a concern that the New Zealand electorate might start accepting votes from citizens of Sri Lanka, Ghana, or Luxembourg? Or does the sentence mean “People of Pākehā and Māori ethnicity have a special right to exert political control over the lives of anyone, of any ethnicity, resident in the territory governed from Wellington”? (That’s insofar as “Pākehā” is a distinct ethnicity, of course. I’m not clear what we white New Zealanders have, as Pākehā, that’s “uniquely and positively our own” and couldn’t just as readily be found among, say, white Australians or English-speaking white South Africans.)

A lot of the opposition to global neoliberal capitalism is framed in terms of the threat it poses to the “national sovereignty” of individual countries over their own economies and ecologies. If this is where Trotter is coming from, then my only quarrel with him is his choice of words. I don’t like the idea of land or other local resources being owned and controlled by people who don’t live or pay taxes or buy goods and services in the area. But I also don’t like the “national sovereignty” framing. It doesn’t bother me that the people who own the farms or mines or whatever aren’t New Zealanders; it bothers me that the feedback loop between cause and effect is severed, that a small group of powerful people can wreak extensive damage on the landscape and economy without experiencing any consequences to discourage such behaviour. If some corporation is polluting rivers in Otago, it’s not important to me whether their headquarters are located in Auckland or Beijing.

But at least Trotter does offer some explication of his use of the term “nationalism”, however incomplete. That gives me some idea what he’s talking about. Not so “multiculturalism”. That word he never unpacks. It is evidently associated with “neoliberalism” and “globalization”, but more than that is hard to discern. So I genuinely don’t know whether what I will defend for the rest of this post under the name “multiculturalism” is the same thing as what Trotter is opposing, or at least lending support to the opponents of, under that same name. Bear that in mind as we proceed.

Thursday, 26 July 2018

Don’t let fascists steal free speech from us

Two “alt-right” speakers from Canada, Lauren Southern and Stefan Molyneux, have decided they will, after all, go ahead with their visit to New Zealand (temporarily cancelled) where they will espouse their views. The venue is, as I write, being kept secret. Previously, they wanted to speak in an Auckland venue called the Bruce Mason Centre, but were denied that opportunity at the instruction of Auckland Mayor Phil Goff. This has raised concerns about freedom of speech in New Zealand, and not just among those who agree with their position. Chris Trotter, New Zealand’s most respected Left blogger, has taken up the cause in several recent posts on his blog Bowalley Road.

I’ve been drafting this post for two years now; the issue keeps coming up and going away again, and I have to change the bits where I relate it to current events. It’s a bit bigger than I can really pin down in one post. I’m going to skip out some things I originally planned to say about particular progressive concerns that some believe constitute assaults on free speech – trigger warnings, safe spaces, cultural appropriation – because they just make the whole thing too unwieldy. I may devote more blog posts to them in future. Today my topics are: What is the ethical basis of the right to free speech? What sort of policies do we need to build around it? And what are its limits?

Political discussions frequently open with assertions of rights that the disputants hold to be incontrovertibly inviolable – endowed, in the famous words, by the Creator. But what makes a right a right? Why is it that some good things you might want are your “right” to enjoy, while others are merely privileges? What makes the difference? And what if one of your rights can’t be granted without breaching someone else’s rights? What do you do then? This sort of question is why I like to go back a bit further, behind the concept of rights to their basis in ethical philosophy.

In case you’re a new reader, I’ll just quickly run you through my basic moral philosophy. Morality isn’t something objective, not if by that you mean it’s something “out there” in the universe, independent of our minds. You can’t logically prove a “should” statement from an “is” statement without at least tacitly calling in another “should” statement, and if you try to prove that second “should” statement you just go around the circle again, and so on forever. And perhaps that’s just as well, because if morality was something “out there”, then any intersection between moral facts and human well-being would be coincidental. Appeals to moral authority, even cosmic moral authority, don’t help: “You should obey the authority” is just another “should” statement and another trip around the circle.

Hence, morality is subjective. But “subjective” is not the same as “arbitrary”. To call something “subjective” just means it’s an experience that people have rather than a thing in the universe – it’s “in here” rather than “out there”. Sweetness is subjective, but no-one disagrees as to which is the sweeter of maple syrup and grapefruit juice. You can justify a “should” statement; you just have to back it with an “I want” statement. (I want to be healthy, therefore I should exercise more than I do.) As a social species, pretty much anything we want requires cooperation with other people, and cooperation requires trust, so we evolved moral instincts to allow us to trust each other. Therefore, in my view, trust is the basis of all morality.

To earn trust, your actions must fulfill three conditions. They must be benevolent; they must be consistent; and both of these facts must be clear to other observers. Benevolence alone is not quite enough. Pure benevolence, the greatest good for the greatest number, is the moral philosophy known as utilitarianism. The harmonics of cold calculating efficiency that cling around that word somewhat misrepresent the idea; “utility” in the philosophical sense includes beauty and joy as well as usefulness. However, the “calculating” part is bang-on. Utilitarian philosophers spend a lot of time balancing harms and benefits and fretting about whether they’ve left something out. And I don’t know about you, but that makes me nervous. I can’t help worrying that they might end up putting the things I care about on the “sacrifice for the greater good” side of the equation.

When you factor in the consistency and the clarity, utilitarianism gives way to a couple of other moral schemas. One of them is virtue ethics – if you practise being a good person until it becomes habit, your actions will be clearly and consistently benevolent. This isn’t particularly relevant to today’s topic, and I mention it only for completeness. The other one, however, is the answer to our question: the origin of rights. To be clearly and consistently benevolent is to commit to doing some good things all the time for everybody while refraining from doing some bad things ever to anybody. When a society makes such a commitment, whether in law or custom, it thereby grants its members the right to enjoy the good things and the right not to suffer the bad things. That’s where rights come from.

And that helps us answer our other question. When you have to choose one right over another, you should honour the one that best serves the principle of trust. Suppose you’re a surgeon, and you have in your hospital five people urgently needing different organ transplants and also a healthy young person who’s come in for a sports injury. A utilitarian calculus might prompt you to at least consider killing the young person to harvest their organs, sacrificing one life to save five. But of course if you did that, no patient could ever trust you again not to kill them for their organs. The trust principle would therefore accord with what (I sincerely hope) your moral instincts tell you and render such a course of action unthinkable.

Since the seventeenth century most of the questions vexing Western political theorists have been variations of “What is the right balance between peace, justice, and freedom?” When you find yourself facing a trade-off between the three, how do you choose which to preserve and which to sacrifice? (This is not quite the same as the question actually driving social progress, which is “How many crumbs do we have to give these annoying poor people before they’ll shut up and go away?”) What’s often not noticed is that peace, justice, and freedom all have the same end-goal, i.e. not having to fear violence any more. Peace means you don’t have to fear violence from strangers. Justice means you don’t have to fear violence from your neighbours. Freedom means you don’t have to fear violence from the state.

But you can never have complete freedom in any society, because freedoms are necessarily in tension with each other. If you are free to do some particular thing, that means I am not free to stop you from doing that particular thing, and vice versa. So, for instance, you are free to play irritating contemporary music and there’s not much I can do about it, but I am not free to walk around in public wearing the amount of clothing I feel physically comfortable in. If I call the police and complain about your music I’ll be ignored, unless it’s very loud; if you call the police and complain about my nudity I’ll be arrested. Conversely, if I try and cut the power cord on your stereo I’ll be the one in trouble, whereas if you threaten me with assault unless I put some clothes on the authorities will probably agree I had it coming. (As you see, legal freedoms tend to get skewed towards majority cultural groups and away from, in this case, those of us who have autistic sensory issues.)


Now we can narrow in from freedom generally to freedom of speech in particular.

Friday, 1 June 2018

Against enforced monogamy

The issue of sexual violence has never gone away. The #metoo movement is just one more response to it, though so far one that’s been getting more notice than many. In the last, I don’t know, month or so, the media at large has at last started to notice the well-established connection between misogynistic violence and mass shootings, and the word “incel” – involuntarily celibate – has achieved a greater currency than it had before. I gather “incel” was first coined to describe the experience of being queer and unable to find someone of the gender you like who likes people of your gender, but it has now unfortunately been very firmly appropriated as a self-identifier for that subset of men who (a) aren’t getting sex and (b) believe they are owed sex. And this of course has led to suggestions that maybe mass shootings would be averted if more women would “take one for the team” and have sex with incels. Because, apparently, some people aren’t content with being horrible human beings in the privacy of their own homes.

I have seen it claimed, mind you, that incels aren’t real because it’s not actually hard to find someone to have sex with and these men must be just being needlessly picky (or obsessive) about their choice of partner. I can’t hold with this. For some people it may be easy to find people to have sex with, but there are many circumstances which make it difficult for others. One, as already noted, is being the only queer person in your offline circle. Various disabilities – physical and social – have similar effects. It’s not the not getting sex that makes incels horrible people, it’s the belief that they are owed it. Men who believe they’re owed sex are horrible people whether or not they’re getting it, which brings this digression nicely back to the point.

A couple of weeks ago a guy walked into Santa Fe High School in Texas and shot people, killing ten of them. Following an excellent suggestion that’s been passed around Twitter and Tumblr, I’m going to refer to the perpetrator as “Shooter #101” (this was the 101st mass shooting in the United States this year). The mother of one of the dead, Shana Fisher, has claimed that her daughter was repeatedly harassed by Shooter #101 for a date and ultimately turned him down publicly in front of the school, and that she was the first person he shot. Several articles on the incident have used this for a lede. Apparently it hasn’t been corroborated by witnesses – which is bad enough, but I’ve worked in journalism in a very small way, I can understand going ahead with the article on the assumption that the mother knew what she was talking about. What’s not forgivable is that this was how they framed this element of the case:

Spurned advances from [Shooter #101] provoked Texas shooting, says mother of girl killed (NZ Herald)
Texas school shooter killed girl who turned down his advances and embarrassed him in class, mother says (LA Times)

And these are far from the worst. You see the narrative being put forward here? Men (overwhelmingly) do the shooting, but never fear, we can always find a way that it’s ultimately a woman’s fault. Women are put on earth to meet men’s needs, and men go astray when women fail to fulfill that function. Men’s sexual and romantic yearnings give them a proprietary right over women’s bodies, attention, and time. Most commentators would recoil from this position if it were stated openly; the danger of framing male violence as the headlines above do is precisely that this narrative sneaks under the radar as a tacit premise instead of being exposed to challenge as an explicit proposition. To men who do believe that women owe them sex, this subtextual confirmation helps that belief fit just that little bit more comfortably into their picture of the world.

Monday, 23 April 2018

Science belongs to every culture

I submitted this to Stuff Nation after this piece by one Bob Brockie, complaining about the New Zealand Royal Society’s choice to officially acknowledge the Treaty of Waitangi, came across my Facebook feed. They haven’t published it, so I guess I’m free to put it here.

Bob Brockie thinks that the Treaty of Waitangi is irrelevant to the scientific endeavour, and the Royal Society of New Zealand ought therefore to ignore it. I think he’s wrong, and I’ll show you why.

First of all, while the phrase “the principles of the Treaty” does sound worryingly vague, its twenty-year history in legal usage has pinned down its precise meaning. I don’t think Brockie is aware of this. The principles of the Treaty are often summed up as: partnership; protection; and participation. Partnership would mean granting Māori people and Māori institutions equal say with Pākehā in decision-making around science, such as what areas of research should be given priority over others. Protection would mean respecting Māori cultural sensibilities just as much as Pākehā ones in ethical deliberations over research on human subjects. Participation would mean ensuring that Māori and Pākehā have equal opportunities to become scientists and to benefit from science and technology. I can’t imagine that Brockie would object to any of that; so I presume he just didn’t know what “the principles of the Treaty” are.

Second, Brockie is simply wrong to assert that, in the humanities, “everybody’s opinions or beliefs can be of equal value and should never be challenged.” Challenging interpretations and weighing arguments against evidence is the daily business of humanities scholarship. Of course, many humanities academics make the equal and opposite error of claiming that the sciences do not teach critical thinking, and therefore the humanities ought to be in charge. Nor are science and the humanities “parallel universes” with little to say to each other. To take just a couple of examples: history and archaeology greatly enrich each other, while literature and the arts contain a goldmine of long-term information about the human mind that can benefit psychology.

Personally I would go so far as to say that the humanities themselves constitute a science as rigorous and empirical as any other. As geology is the science of rocks, and astronomy the science of stars, the humanities are the science of meaning. I do share Brockie’s suspicion of postmodernist ideology, which in my opinion has greatly hampered progress in the humanities. But other scientific fields have also had their fads and fancies, such as behaviourism in psychology, or group selection in evolutionary biology.

Finally, while Brockie is strictly correct that traditional Māori belief “has its roots in the supernatural and vitalism”, he is mistaken if he thinks that this in any way distinguishes it from traditional Pākehā belief, with its heavens and its hell, its angels and devils and immortal souls, and its fixed Platonic or Aristotelian essences. I think Brockie here falls prey to a common confusion between two related, but distinct, meanings of the word “science”.

If by “science” we mean any systematic endeavour to understand the world through strictly empirical investigation, then I quite agree with Brockie that this is the only source of reliable knowledge. But “science” in this sense does not exclude the knowledge of non-Western cultures, of which the traditional navigation methods that brought the ancestors of the Māori across the Pacific Ocean to these shores are a shining example.

If on the other hand by “science” we mean the body of knowledge that the West has gradually accumulated since Francis Bacon and Copernicus, then of course this tradition has drawn more heavily on European thought than on other cultures’. But “science” in this sense has no especial claim to be more reliable than other systematic, empirical traditions of knowledge.

I don’t claim to know very much about traditional Māori lore, and yet off the top of my head I can name four points on which it beat the West to the scientific punch:

  • Western tradition gives the universe an eternally pre-existing God; Māori lore states that it began from nothing (Te Kore).
  • Western tradition has God create plants and animals in separate kinds from the beginning; Māori lore acknowledges the familial kinship of all life.
  • Western tradition puts the seat of consciousness and will in the heart; Māori lore puts it in the head.

And on a more mundane but practical note,

  • When Western doctors were still cross-infecting patients left, right, and centre, Māori practitioners had long been guarding against sickness by washing their hands after dealing with blood.

I imagine it’s this sort of thing that the President of the Royal Society had in mind in recommending that scientists “embrace the research methodologies of multiple knowledge systems”, as Brockie complains. I can report that pharmacists are only now beginning to investigate traditional Māori healing practices (rongoā); an initial study found that many of the plants used contain pharmacologically useful substances – and that’s as far as they’ve got.

Obviously more progress needs to be made, and obviously it won’t be made by uncritically accepting whatever cultural traditions tell us. But it won’t be made by uncritically throwing them out either. Nor will it be made by walling off the different fields of knowledge from each other. “Sticking to one’s knitting” is not the way to go.

Wednesday, 21 March 2018

What economic and government systems do you think function best?

Last week one of my Tumblr followers asked me the above question. I wrote down as many things as I could think of in the time I had, which wasn’t everything. I started putting down justifying arguments for each point, but found that this was making it far too long for a Tumblr post. So I’m repeating my answer here, with a few more points and some argumentation to back it up and hopefully a bit more coherence (but you be the judge of that).

I try to keep my thinking grounded in empirical evidence, but I only have so much time for doing research and what I do find is inevitably biased by being filtered through my own perspective, which is not neutral but was formed through many years of political involvement. I began my political life in 1996, at the age of 18, in a protest against Otago University raising tuition fees. It was a big protest, because at that point New Zealand tertiary institutions had only been charging tuition for a few years and it had caught a lot of people by surprise. So there were a lot of dedicated protesters involved. Many of them were Marxists, so I started off as a kind of Marxist camp follower leaning towards anarchism of sorts. I still feel loyalty to this crowd, and there are some social values that I still think Marxism captures better than most other politics. But looking at the empirical evidence I am unable to endorse the prototypical Marxist plan for achieving those values.

In particular, countries that remodel themselves from the ground up with armed Marxist revolutions always end up as repressive, poverty-stricken dictatorships. I know of no exceptions. Some are worse than others – if I had to choose, I’d much rather live under Fidel Castro or Muammar Gaddafi than Pol Pot – but none of them have ever produced the communist paradise, or even the socialist interim state, that Marx envisioned. In a few places in the world you can see a Marxist regime and a liberal regime side by side, with the same geopolitical and environmental conditions, and compare their socioeconomic outcomes; the liberals (West Germany, South Korea, Botswana) always do better than the Marxists (East Germany, North Korea, Zimbabwe). And really, Marx should have known better, given that the prime real-life event he used to exemplify his theories was the French Revolution, which produced exactly the same result: the Napoleonic Empire.

The Koreas from space at night

Empirically, the systems which function best, in the sense of facilitating human life, health, knowledge, freedom, prosperity, and equality, are those known as “mixed economies”, like those of Scandinavia and Japan and formerly New Zealand. These combine open but well-regulated markets with stable democratic government, progressive tax systems, state-owned infrastructure, and high public expenditure on social welfare, health, and education. But of course there are still a great many areas in which I believe progress could be made. And here they are.

Monday, 26 February 2018

Is the writing on the wall?

At the top of this page, for the last five years, my blog header has announced that my job is to take notes in lectures at the University of Otago on behalf of students with disabilities. This is still the case this year – but it may not be next year. I love this job and I want to come back to it as long as I can, but I’m not sure there will be anything to come back to. I think the University (not the Disabilities Office but someone higher up) is trying to stealthily disestablish my position.

In mid-2012 when I started working here, I was given nine lectures a week – nearly twenty hours of work, since I have to edit the notes after I’ve taken them. Then from 2013 to 2015 it was more like fifteen or sixteen classes, dominated by dentistry, over thirty hours. Not quite full-time, but enough to save money for a holiday in Japan. In 2016 the hours dropped off a bit and I started an expensive course of dental treatment and my savings carried me, narrowly, through the summer. But then last year the bottom fell out of the student enrolment for the service I provide. From fourteen or so note-takers the Disabilities Office went down to two, of which I was fortunate to be one. Most concerningly, every one of the students enrolling had been enrolled in previous years; not a single one was new to the service. This year I’m back down to nine classes and, once again, all the student names are ones I’ve seen before. Sorry, let me rephrase that. Once again, both the student names are ones I’ve seen before. I gather I’ll have a few extra classes from a couple of additional students, but not every week. Something has gone wrong.

Apparently, according to my supervisor, the University’s official position is now that we shouldn’t be providing notes for people who don’t absolutely need them, because no-one’s going to be taking notes for them out in the real world and they need to practise doing it for themselves. I’m still not sure how that translates to absolutely nobody new signing up for note-taking; I suspect, but don’t know how I could find out for certain, that the University has stopped mentioning the service in their marketing or at Orientation or wherever my clients used to hear about it before. That means that when my current students graduate and leave, I – like a couple of hundred other Otago service staff members so far – will be out of a job.

(The other option, as I’ve been reminded since I first wrote this post, is peer note-taking, which is when the University pays another student in the disabled student’s class a lot less than they pay me to hand in a copy of their own notes. This has helped me out on occasions when I’ve fallen ill and not had time to arrange a swap with another professional note-taker. But it’s not going to be the same quality as what I do, because they don’t have the training I’ve had, they mostly don’t have my typing speed, they won’t have developed a good system of digital shorthand like I have, and being busy students they don’t have the time I do to devote to editing. Also, I’m told – I can’t substantiate this – if the disabled student and the student note-taker don’t get on, it’s not unheard-of for the note-taker to do bad notes on purpose to hand in while keeping the more accurate version to themselves. Hiring professionals does make a big difference. Switching to purely peer note-taking would still constitute a huge downgrade to the service the University offers.)

I know, I know, mine is not a neutral viewpoint. As both an employee whose livelihood depends on this service and a disabled person myself, I am obviously not going to feel very good about this. But frankly, the University’s reasoning is bullshit. It’s the same tired justification that’s always trotted out for denying accommodations to disabled people: “Take away the crutches and they’ll learn to pull themselves up by the bootstraps.” Well, see, the thing is, those are actually rather apt metaphors, but not the way their users intend. Taking away someone’s crutches and pulling people up by their bootstraps both, if you were to demonstrate them literally, have the same effect: the victim falls flat on their face.

But isn’t one of the goals of education to allow people to integrate freely in society and thus live life to the fullest extent of their capabilities? Absolutely. But taking away accommodations does the opposite of that. It’s true that the world outside the tertiary education sector doesn’t have many note-takers, but I would venture to suggest that that’s because the world outside the tertiary education sector doesn’t have many lectures. Dentists don’t have to type for hours every day, but dental students do. The University’s new policy will mean there will be people who can’t follow their dream career in dentistry because they can’t type; which, given Otago has the only Dental School in the country, is a gross dereliction of duty.

Some disability advocates declare that disability is “socially constructed”. This is neither false nor nonsense, but it’s so misleading as to leave people less enlightened after they’ve heard it than before, unless they’ve taken a cultural anthropology course or similar and learned what this progressive-intellectual shibboleth actually means. It emphatically does not mean that disability is all made up. It emphatically does not mean that people are only disabled because everyone around them treats them like they’re disabled. In fact, it’s pretty much the opposite. Let me explain.

The word “disability” is often used as a synonym for “impairment”, but there’s a subtle distinction which is worth highlighting. Impairments are not socially constructed in any meaningful sense. An impairment is what you physically can’t do with your body that most people can, or can only do with great effort that most people can do easily. (I’m including your brain as part of your body, obviously.) A disability is the set of obstacles your impairment poses for you as you participate in society – as you work, as you study, as you socialize, as you consume entertainment, and so on. An impairment cannot be removed at will, or it wouldn’t be an impairment. To remove the obstacles to social participation that constitute a disability, society must accommodate impairments. People with impairments are disabled because society doesn’t accommodate those impairments enough.

An example to make the distinction clear. Short-sightedness, long-sightedness, and astigmatism are all visual impairments. But only very severe forms of these conditions are disabling in our society, because the accommodations for the milder forms – eyeglasses and contact lenses – are accepted without question as normal. If your bank made you take your glasses off “so we can see your face properly” and then made you fill out forms in tiny print without them, then you’d be disabled. If you had to take them off to get your driver’s licence photo and then weren’t allowed to wear them while driving so law enforcement could match you to the photo, then you’d be disabled. If all eyeglass-frames came in one ugly, ill-fitting style, and the people selling them told you you should be grateful to have glasses at all, then you’d be disabled. If strangers and casual acquaintances came up to you in the street suggesting you’d be rid of the need for that contraption on your head if only you would try the new eye-strengthening course they’ve been doing (it’s called Sight Naturally, it’s based on ancient tribal colour lore, you never see the tribespeople in National Geographic wearing glasses, do you?), then you’d be disabled.

There’s a confusion here, by the way, which applies especially to mental and intellectual impairments. Lots of people say they’re in favour of helping people with such conditions. But when they say that, they’re picturing treatments which will wipe away the impairment and turn the patients “normal”. Failing that, they’d rather mentally impaired people disappeared behind institutional doors than have them out and about on the street where decent people might bump into them (think of the children!). At a disabilities conference that I, yes, took notes for to earn a bit of extra cash last year, one speaker described dinner party conversations where she would mention that she worked in a mental health facility. “It must be hard keeping them contained. Was the woman who killed herself one of yours? You’d feel sorry for them if they weren’t such a drain on society.” Then she would reveal the truth – that she was paid to test the facilities in the capacity of a client. “Oh, but you don’t seem like you’re about to stab us!” “No, but the night is young.” The goal of mental health treatment, as of all disability accommodations, isn’t to turn the client “normal”. It’s to give them their life back.

From all this it follows that, if you remove a disability accommodation that was previously available, you are creating disability. You are disabling people who happen to have impairments. That stain will be on the University’s hands if my job disappears next year or the year after. I hope their lavish new landscaping project is worth it.

Meanwhile, I have the rather urgent concern of finding an alternative source of income. Because I’m disabled too. I’ve previously mentioned the social anxieties which make applying for work a terrifying ordeal for me. But that terror is upon me. The writing is on the wall.

Monday, 29 January 2018

Economics, the evidence-free discipline – from the horse’s mouth

If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, “What would I do if I were a horse?”
Ely Devons, British economist

Ever since I started working in a job that periodically puts me in economics lectures, I’ve noticed that economists have a very different idea of what constitutes evidence for their statements than scientists do. And when I say “noticed”, I mean it’s been thunderingly obvious. The health sciences, in particular, spend hours upon hours drumming into their students’ heads how much it takes to call your practice “evidence-based”. In economics lectures I’ve heard lecturers say outright, “If your analysis of the data disagrees with economic theory, trust economic theory.” Economics is at about the stage medicine was at in the mid-nineteenth century, when blood-letting and cold showers were the go-to treatment for every ill because physicians knew how the body worked, damn it, and didn’t need jumped-up empiricists coming in telling them how to do their job thank you very much.

A nineteenth-century physician practising bloodletting

But I don’t think nineteenth-century physicians ever proudly declared that their theories were evidence-free and thought it a mark of superiority. Yet recently, in a Tumblr debate with a libertarian, I encountered economists saying exactly that, in an article from only a year ago: the Mises Institute’s “Ten Fundamental Laws of Economics”. The list includes some uncontroversial items, but also contentious ones such as “Productivity determines the wage rate”, “Labour does not create value”, and “Profit is the entrepreneurial bonus”. It ends with Law 10: “All genuine laws of economics are logical laws.” This is explicated as

Economic laws are synthetic a priori reasoning. One cannot falsify such laws empirically because they are true in themselves. As such, the fundamental economic laws do not require empirical verification.

Which basically translates to “Anyone who disagrees with us is wrong by definition.” This is not about economics being a “soft science” rather than a “hard science”. This statement makes economics as practised by the Mises Institute not a science at all.

The phrase “synthetic a priori” is, in this context, pure bafflegab, but unfortunately it’s going to take a bit of unpacking. The philosopher Immanuel Kant divided truths along two lines. First, they can be “synthetic” or “analytic”. An analytic truth is one that is true purely by the definitions of its words: the usual go-to example is “All bachelors are unmarried,” which is true because being unmarried is part of the definition of being a bachelor. A synthetic truth is one that can’t be derived from the definitions of words alone, such as “I have two cats.” Second, truths can be a priori or a posteriori – Latin for “from before” and “from after”, respectively. An a priori truth has to be true in any conceivable universe; you know it is true before you go investigating. “One plus one equals two” is an a priori truth. An a posteriori truth is one that might or might not be true, and that you therefore can’t know is true until somebody investigates, such as “I have eaten the last of the cheese.”

Two lines of distinction potentially divide a set into four subsets, in this case analytic a priori, analytic a posteriori, synthetic a posteriori, and the one we’re interested in, synthetic a priori. Three of these are uncontroversial. Philosophers generally agree there is no such thing as an analytic a posteriori truth, and most agree there are analytic a priori truths and synthetic a posteriori truths. The big disagreement over Kantian philosophy is whether there is such a thing as a synthetic a priori truth – whether there is anything that has to be true in any conceivable universe, but that can’t be reduced to definitions of terms and logical deductions from such definitions. The Mises Institute puts economic principles in this category. What would this mean?
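For readers who think better in tables than in prose, Kant’s two-by-two classification can be laid out as a little Python sketch – entirely my own illustration, using the examples from this post:

```python
# Kant's two distinctions give four possible categories of truth.
# The examples and status notes here are the ones discussed in this post.
kant_grid = {
    ("analytic", "a priori"): {
        "example": "All bachelors are unmarried.",
        "status": "uncontroversial: most philosophers accept these exist",
    },
    ("analytic", "a posteriori"): {
        "example": None,  # agreed by philosophers to be an empty category
        "status": "uncontroversial: empty",
    },
    ("synthetic", "a posteriori"): {
        "example": "I have eaten the last of the cheese.",
        "status": "uncontroversial: most philosophers accept these exist",
    },
    ("synthetic", "a priori"): {
        "example": "One plus one equals two (per Kant himself)",
        "status": "contested: the crux of the dispute over Kantian philosophy",
    },
}
```

The whole argument over Kant, in other words, is about whether that last cell is inhabited – and the Mises Institute wants to file its economic “laws” there.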

Kant himself populated the synthetic a priori category with mathematical truths like “One plus one equals two”. Personally I’m inclined to the school of thought that mathematics is in fact analytic. There are arguments to be had on both sides, and I won’t go into them. I suppose the four-colour map theorem might count as synthetic a priori – no-one has ever drawn a map that needed more than four colours to ensure that no two adjacent regions share a colour, and Appel and Haken proved in 1976 that this is indeed impossible, though their proof relies on a computer check too long for a human mind to verify unaided. The point is that no-one can even imagine such a map. It’s inconceivable. For the Mises Institute’s “laws” to be synthetic a priori, it would have to be the case that no-one could even imagine a world in which productivity did not determine the wage rate.
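To make concrete what a four-colouring actually is, here is a toy backtracking search in Python that colours a small map I made up for the occasion. This only demonstrates the property on one example, of course – it is nothing like a proof of the theorem:

```python
# Backtracking search for a four-colouring of a map, given as a dict
# mapping each region to the set of regions it borders.
def four_colour(adjacency, colours=("red", "green", "blue", "yellow")):
    regions = list(adjacency)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True  # every region coloured successfully
        region = regions[i]
        for c in colours:
            # A colour is usable if no already-coloured neighbour has it.
            if all(assignment.get(n) != c for n in adjacency[region]):
                assignment[region] = c
                if backtrack(i + 1):
                    return True
                del assignment[region]  # undo and try the next colour
        return False

    return assignment if backtrack(0) else None

# A small invented "map": regions A, B, C, D are mutually adjacent
# (so four colours really are needed), and E touches only C and D.
example_map = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "B", "C", "E"},
    "E": {"C", "D"},
}
colouring = four_colour(example_map)
```

The search always finds a valid colouring for a planar map; what Appel and Haken showed is that it never needs a fifth colour.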

The Mises Institute might respond that someone who thinks they’re imagining a world in which productivity doesn’t determine the wage rate is kidding themselves, just as someone who thinks they’re imagining a five-colour map is kidding themselves (your mental picture is just a vague squiggle; you aren’t filling in the details). But it is not at all difficult to imagine a shareholder-profit-maximizing corporation deciding to funnel 100% of the profit margin from a productivity boost into shareholder dividends instead of wages. Nor is it hard to imagine every other firm in the market doing the same, thus leaving no competing employer to whom the employees could defect. The Mises Institute is committed to the claim that this scenario is not merely implausible but unimaginable. Either that, or the phrase “synthetic a priori” in their statement is bafflegab.
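The imagined scenario is easy enough to write down in toy numbers – all invented for illustration: productivity raises revenue, wages stay put, and the entire gain lands in shareholder dividends.

```python
# Toy model of the thought experiment above (all figures invented).
# A productivity boost raises revenue; the firm chooses what share of
# the gain goes to dividends rather than wages.
def distribute_gain(revenue, wages_before, dividend_share_of_gain):
    """Return (new wage bill, dividends) after a revenue gain is split."""
    old_profit = revenue["before"] - wages_before  # assume all profit is paid out
    gain = revenue["after"] - revenue["before"]
    new_wages = wages_before + gain * (1 - dividend_share_of_gain)
    dividends = old_profit + gain * dividend_share_of_gain
    return new_wages, dividends

revenue = {"before": 1_000_000, "after": 1_200_000}  # productivity boost
# With 100% of the gain to dividends, wages don't move an inch:
new_wages, dividends = distribute_gain(revenue, wages_before=600_000,
                                       dividend_share_of_gain=1.0)
```

Nothing in this little model is incoherent, let alone unimaginable – which is all my argument requires.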

Of course, if economic principles are not synthetic a priori, then what the Mises Institute is doing is making up excuses for why their beliefs shouldn’t be exposed to empirical testing, and if you find someone doing that then they’re up to something dodgy. My Tumblr correspondent claimed that societies organized according to these theories could produce and distribute goods and services “way more [efficiently] than any Marxist or Keynesian society can.” Well, that’s an empirical question. I have sat through more than enough economics lectures to be well aware of the theories as to why the free market is meant to be the most efficient means possible of maximizing production and optimizing distribution of goods and services. But only empirical data can tell us whether it is the most efficient means possible. Any attempt to put it beyond the reach of empirical investigation, such as the Mises Institute is here guilty of, suggests that its proponents fear the evidence would disappoint them.

And before someone asks, yes, for all the respect I have for the Marxist community for the work they do in activism towards social change, I do have to concede that Marxists are equally prone to shielding their theories from the possibility of being refuted by reality. I have written about that elsewhere on this blog, if you care to go looking. But Marxist theories don’t at present dominate the global economic system. Capitalist ones do. The point is that only reality can tell you what’s true. Nineteenth-century medicine wasn’t dislodged by some other theory-driven approach; it was corrected by recourse to empirical evidence. In economics as in medicine, when the “experts” cling to their pet theories over reality, people die. We need evidence-based economics and we need it yesterday.