Saturday, April 23, 2011

Site Move!

I've finally taken the next step and bought some internet real estate. Please direct your attention to:

for all your future blogging needs. All existing posts and comments have been replicated there.

Thank you!

Subjective Objectivity

It’s the Easter weekend! This means that the work week finished one day early and many of us felt a lot of familiar ‘Friday-y’ feelings on Thursday. In my case, I have a pair of pants that I only wear on Friday that are particularly snazzy. You see, they’re dry-clean only, so I don’t want them getting dirty too fast, so I allow myself to wear them on Fridays to celebrate the end of the work week.

On Thursday morning, I found myself naturally reaching for my Friday pants. I began to think about the nature of days. Today wasn’t Friday, it was Thursday. Yet it was also the end of the work week, which made it substantially similar to a Friday. Further, I began to wonder what the days of the week really meant and this spiralled off into thinking about subjectivity and objectivity.

In moments like this, I’m reminded of my classes on post-modernism. Jean Baudrillard might argue that not only did last Thursday feel like Friday, last Thursday was in fact Friday. After all, what does Friday really mean? For most of us, the major thing Friday means is that it’s the last day of the work week. Therefore, this week, Thursday was Friday not merely in our minds but in fact. How can this be?

Objective subjectivity
We think of things that are definite in our lives as objective. Friday is an objective thing: everyone agrees that it’s Friday. Yet cosmologically speaking, there is nothing objective about our days of the week. There is no fact of nature or anything external to society on which the days are based. Even having seven days is arbitrary; it just means that we can (almost) fit four weeks into a full cycle of the moon. Yet three weeks of ten days would be even closer.
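For the curious, the arithmetic behind that aside checks out (29.53 days is the standard average length of the synodic month, from one new moon to the next):

```python
# How closely do different "week" schemes fit one lunar (synodic) month?
SYNODIC_MONTH = 29.53  # average days from new moon to new moon

four_sevens = 4 * 7   # our calendar: four seven-day weeks = 28 days
three_tens = 3 * 10   # alternative: three ten-day weeks = 30 days

print(abs(SYNODIC_MONTH - four_sevens))  # 28 days falls ~1.53 days short
print(abs(SYNODIC_MONTH - three_tens))   # 30 days overshoots by only ~0.47
```

So a ten-day week would indeed track the moon more closely than our seven-day one, which rather underlines how arbitrary the choice was.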

Even the year is arbitrary. It makes some sense in that it’s the time it takes for things to get warm, cold and warm again. Yet this means little to anyone living on the equator, and the point at which our year begins could be anywhere on the calendar. Indeed, in other cultures, New Year falls at entirely different points of the Earth’s transit around the sun.

On the larger scale, there’s the disagreement over when the Millennium celebrations should have been. Most of us celebrated on New Year’s Eve 1999. As many rightly pointed out, mathematically speaking, the Millennium should have been celebrated a year later, as there was no year zero. My argument then and now was this: what is more important? Either we celebrate 2000 years since an arbitrarily chosen date that is meaningless to most people in the world and, even for those to whom it does have meaning, does not correspond to any actual historical event but to early estimations of the birth of someone who may or may not have been divine... or we celebrate the fact that all four digits of the year are changing all at once. Can we really argue that either is more important, or less arbitrary?

So strictly speaking, there is absolutely nothing about our calendar, except possibly the length of the year, which is objective; yet we see it as fundamentally objective. Why? Because it’s shared by wider society. The European calendar is fundamental to organising our lives. It may be arbitrary, but it’s useful – if we say ‘I’ll see you on Monday’, everyone who uses the same calendar (which is most of the people we’re likely to be talking to) knows what we mean. That power, that utility, is what makes it seem objective.

Morality as a calendar
I suspect that the same holds true for other ‘objective’ elements. Take morality: is murder wrong because it is inherently, objectively wrong; or is it wrong because society decides it is wrong? Certainly, every human society since the dawn of history has had rules against murder (though many have varied in their definition of murder, or their exceptions that rendered killing acceptable). Does that mean it’s objective, or simply that rules against murder are useful for holding societies together? After all, if murder is acceptable, contract law breaks down. How can I get you to build a house for me if you can’t trust that I won’t kill you at the end instead of paying you? A society that condones killing in all cases is no society at all.

As with most things... it isn’t that simple. Hard cases may make bad laws, as lawyers remind us, but they’re also very useful for ironing out logical flaws. Take the classic case of the Hindu practice of Sati. This was the widespread but infrequent practice of a widow burning herself alive after her husband’s death in order to remit his sins and ensure his passage into paradise. Defenders of the practice pointed out that it was voluntary; yet as in today’s controversy over the burqa, opponents say that this ‘choice’ was often illusory, governed as it was by societal values and rejection of the woman if she chose poorly.

A purely subjective view of morality cannot possibly condemn this, and supporters of objective morality hold up cases like this as an example of the failure of alternative systems of morality. The British colonials solved this via the time-honoured method of declaring their morals superior by virtue of possessing more guns than anyone else involved in the debate.

Is there a solution to this? YouTube user SisyphusRedeemed (whose channel I would heartily recommend) points out that in the philosophical literature, there is a lot of muddying of the water between the very ideas of ‘objective’ and ‘subjective’, particularly in the area of morality. I have not read this literature and do not intend to offer my insights as either original or conclusive, merely as interesting and personal. My attempt to add to the debate is this:

I reject the notion that morality based on societal norms is not useful. As we can see with the calendar, societal norms lend a certain level of objectivity to what is otherwise subjective. The subjectivity of the calendar does not lead to anarchy and disagreements over what day it is (unless you’re dealing in real time with people in other time zones). There is no reason to believe that the lack of a purely objective basis for morality would lead to people doing whatever they want.

Not to avoid the question
Nevertheless, the problem of Sati remains. Can we criticise the British for imposing their morals on another culture, or should we say they did the right thing? As a student of anthropology, the issue of dealing with and judging other cultures is a familiar one. Early in the 20th century, anthropology as a discipline turned away from judgment and towards complete cultural relativism. These days, there’s a trend away from that towards a limited relativism: that is, withholding judgment of a culture until one understands what it means to those inside it. We stay away from uninformed, kneejerk responses and judge from a position of wisdom instead of one of cultural and moral superiority.

I hope that I have thereby provided a pathway by which a morality that is not completely objective (yet also not entirely subjective) can provide real, practical solutions to the usual objection. Perhaps I will revise my thoughts once I delve further into the literature and immerse myself in the debate. For now, this suffices for my purposes.

Thursday, April 21, 2011

What do we really know?

The topic of knowledge is one often relegated to the more esoteric branches of philosophy. It does, however, become of profound practical importance when weighing up different fields of knowledge. Take the obvious conflict between science and certain types of religion (namely, literalist/fundamentalist churches that reject science when it disagrees with their texts). A common ploy by apologists is to say that science can’t be trusted because it always changes; even scientists admit they don’t really know that what they say is true! The believer, on the other hand, knows what the truth is, because it’s written in their book.

This problem doesn’t only affect religious discussion, but also discourses in popular science. When Phil Jones of the University of East Anglia’s Climatic Research Unit (CRU) said that the warming trend fell just short of statistical significance at the 95% confidence level when looking only at the data from the last 15 years, this was widely reported by the Daily Mail as ‘CRU head says no warming since 1995’. Not only was this profoundly dishonest journalism in that it twisted his words, it betrayed a poor understanding of what scientific certainty means.
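It’s worth seeing why a short window matters so much here. A rough back-of-the-envelope sketch (the trend and scatter figures below are illustrative assumptions, not the CRU’s actual data) shows that the very same underlying trend can pass a 95% significance test over thirty years yet fail it over fifteen, simply because the uncertainty in an estimated slope grows sharply as the record shortens:

```python
from math import sqrt

def slope_se(n_years, resid_sd):
    # Standard error of a least-squares trend over years t = 0..n-1:
    # se = sigma / sqrt(sum((t - mean)^2)), and that sum equals n(n^2-1)/12.
    return resid_sd / sqrt(n_years * (n_years**2 - 1) / 12)

TREND = 0.012    # assumed warming trend, deg C per year (illustrative)
RESID_SD = 0.1   # assumed year-to-year scatter, deg C (illustrative)

for n, t_crit in [(15, 2.16), (30, 2.05)]:  # approx. two-sided 95% t thresholds
    t_stat = TREND / slope_se(n, RESID_SD)
    verdict = "significant" if t_stat > t_crit else "not significant"
    print(f"{n} years: t = {t_stat:.2f} -> {verdict} at 95%")
```

With these numbers the fifteen-year window gives t ≈ 2.01, just under the threshold, while thirty years gives t ≈ 5.69, comfortably over it. The warming is identical in both cases; only the certainty differs – which is precisely the distinction the headline erased.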

Science, after all, doesn’t really claim to ‘know’ anything at all. What does this mean? What do we really know?

Armchair philosophy time
Like many discussions, the question of what we ‘know’ depends very heavily on the definition of terms. In this case, the most important one is the word ‘know’. What do we really mean by this? This question isn’t quite so simple as it seems; we use the word to cover quite a wide range of potential certainty values, up to 100% certainty.

Even then, as human beings we are very poor at dealing with concepts like 100% certainty. We can feel 100% sure about something even if we recognise that there are plausible ways that we can be wrong. Our thought processes (and importantly, our use of language) project an inaudible and invisible ‘reasonably’ into many sentences; we are ‘reasonably’ sure, ‘reasonably’ certain.

My claim here is that if we define ‘knowledge’ as things we are 100% certain about, then there is no way at all to claim that we ‘know’ anything. This is where certain philosophical thought experiments and science-fiction concepts come from. How can we be sure that we aren’t wired into the Matrix? We can’t. Every single piece of data we collect on the natural world might simply be generated by computer algorithms, with no way of telling the difference.

Surely we can be sure of ourselves. Cogito Ergo Sum – I think, therefore I am. Even asking the question of our own existence means that we exist. To this end, I’d like to ask you to imagine a far future science-fiction world. In this world, there are computers capable of generating extraordinarily complex patterns and transmitting them into the minds of citizens. These citizens experience something tremendous: all of a sudden they have memories of other lives projected into their heads. These are not the memories of real people; they are generated by the computers. The process lasts only seconds, but the memories stay around for some time.

These memories feel very real to the person who experiences them, even though they are false. They are profoundly affecting, but not ‘real’ in any conceivable sense. Crucially, the character present in the memories, like any truly memorable cinematic hero, believes that he or she genuinely inhabits the world that they perceive. They are not separate from it; they are part of it. They are not aware that they are merely created by a computer and have no real existence.

Now imagine that you are in fact not a person, but one of these memories. You exist in a single moment of time; your entire existence is simply your collection of memories. Is there any way to argue that someone like this is, in fact, a person? If the construct existed in time and could learn new things, then perhaps. But being a pattern simply implanted into someone’s head, memories already formed... such a ‘person’ is more analogous to a cinematic character than a true human being.

How could you tell the difference? You may protest that we’re present in the now, we perceive time, so we are forming new memories. True; but think back to five minutes ago. Then, you were convinced you were in the present, and now that’s just a memory. What if all you have is memories? You would not be able to tell the difference because you exist in a single point of time.

I think this thought experiment does have issues and could certainly do with some work; nevertheless, I think that there is a path we can see towards being unsure of the nature of our being. While cogito ergo sum proves that we exist... it does not prove what we are. Fundamentally, it remains incapable of proving that we are people and not simple procedurally-generated computer images.

It’s my position that there is no fact beyond our mere existence that we can be 100% sure of. We exist in a world not of certainty, but of doubt. However, our minds have created methods – psychological, social and linguistic – of dealing with doubt. Hence, we talk about (and even think about) certainty as being total when it quite clearly is not. We don’t even have language that can properly communicate the difference between ‘certain enough to satisfy any reasonable doubt’ and ‘beyond any possible doubt, no matter how unlikely’. The latter is so alien to us that we have never created a word for it.

Science is one of the few disciplines (along with certain branches of philosophy) that attempts this task. Using proper scientific language, we can never really claim to ‘know’ something. After all, what does an experiment really prove? Even discounting dishonesty, all that is really proven is that a particular experiment was conducted and a particular result occurred. If a hundred experiments all turned out the same way, the evidence is certainly growing... but it can by definition never reach 100%.
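One way to make that precise is a toy Bayesian calculation: each confirming experiment multiplies the odds in a hypothesis’s favour, so our confidence climbs steeply, yet the probability of being wrong only shrinks towards zero without ever reaching it. (The likelihood figures below are invented purely for illustration.)

```python
from math import exp, log

# Toy Bayesian update: repeated confirmations drive the probability that a
# hypothesis is false towards zero, but never to zero.
P_SUCCESS_IF_TRUE = 0.95   # chance an experiment confirms a true hypothesis
P_SUCCESS_IF_FALSE = 0.20  # chance it confirms by fluke a false one

log_odds = 0.0  # start at even odds (prior probability 0.5)
for n in range(1, 101):
    # Each confirming result multiplies the odds by the likelihood ratio.
    log_odds += log(P_SUCCESS_IF_TRUE / P_SUCCESS_IF_FALSE)
    if n in (1, 10, 100):
        p_false = 1 / (1 + exp(log_odds))
        print(f"after {n:3d} confirmations, P(hypothesis false) = {p_false:.2e}")
```

After a hundred confirmations the chance of being wrong is vanishingly small – yet it is still strictly greater than zero, which is all the word ‘never’ above requires.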

This is where apologists such as William Lane Craig come in. They understand that science is never 100% certain and take that as a weakness: look at all the arguments about evolution; that just shows scientists aren’t really sure. Christopher Monckton uses similar arguments against climate change: just look at the debates! Any consensus is a myth.

It’s my suggestion that what they’re doing here (deliberately or not) is jumping between different definitions of ‘certainty’, ‘knowledge’ and similar words. They use the scientific sense (which is precise and means complete lack of any possible alternative, no matter how bizarre) when dismissing scientific findings they don’t like and the common sense (which has necessary levels of uncertainty built in) when discussing the things that they do like. Monckton can dismiss claims for global warming because they aren’t scientifically certain, but accept claims against it when they are only colloquially ‘certain’... even if they’re less likely to be true than the ones he dismisses.

(Particularly disingenuous is the habit of apologists such as Craig of taking any admission by scientists that there might be a god as a major concession. The corollary of never being 100% certain that something is true is of course that you can never be 100% certain that something is not true. Of course science cannot prove the non-existence of God; a truly omnipotent deity could make experiments return whatever results were desired. Science doesn’t squeeze out religion; it squeezes out the need for religion.)

Regardless of what we ‘know’, we can always say that we ‘believe’. This is another word that has a variety of meanings, having long ago been co-opted by religion. Again, unscrupulous argumentation will take advantage of that by shifting between meanings and contexts. Again, Craig will happily argue that science is a religion because scientists ‘believe’ in a rational universe, which is equivalent to his own ‘belief’ in God.

We all have a set of beliefs – that is, things we hold to be true simply because it’s easier than doubting them all the time. Among these beliefs is the notion that we truly exist; that we are real, thinking human beings inhabiting a physical space that closely resembles our perception of our surroundings. It’s just easier that way – after all, it’s what appears to be real, and we don’t gain anything by behaving as if this is not the case.

Science is, of course, founded on a belief that experimentation and human reason are capable of explaining the natural world. This is not a belief pulled from the ether; it is one backed up with a great deal of evidence and an impressive track record. To compare this to religious faith (another word with multiple meanings) is false equivalence of the first order.

All in all, saying we ‘believe’ something is, while correct, fraught with perils of miscommunication.

So what can we really say we know? It depends on the sense in which we use the word. If we’re using technical language in a scientific paper, we need to stay away from the word, using it as sparingly as possible. In general conversation, conversely, we can say we ‘know’ anything that we’re reasonably sure of. Depending on the person in question, that might be a low bar or a high one – it all depends on how good their critical thinking is. Fundamentally, it’s an emotional feeling, one of confidence, that we gain by virtue of the fact that the object of our ‘knowledge’ agrees with (is parsimonious with) our existing set of beliefs.

I feel quite confident saying that I ‘know’ that the universe can exist without God, that the world is warming, that I am truly a person who exists in time, that all animal species are evolved from a common ancestor, or that when I drop a pebble it will fall to the earth and not simply float there. Strictly speaking I don’t ‘know’ those things; I’m reasonably confident and I believe them. Such language is important when speaking technically, and when I’m trying to explain a complex point I’ll usually try to stick to the technical terms.

I have nothing but contempt, however, for those who take advantage of the precision of technical language to undermine an argument; the William Lane Craigs of this world who will point to the uncertainty of science and say that it ‘isn’t really sure of anything after all’. This offends me particularly because uncertainty is not a weakness of science at all; it is science’s greatest strength. Socrates claimed that ‘wisest is he who knows that he knows nothing’ – hidden in that paradox is the idea that someone who believes they know everything will never learn anything. Only those who admit that they don’t know will stand a chance of ever knowing anything at all.

So in conclusion, I suppose I feel sorry for Craig as much as I’m annoyed by him.

Monday, March 28, 2011


Some Christians say that lack of belief in God is punished by Hell because He sent his Word to earth and it is clear; any concrete proof like modern-day miracles would undermine free will – it would force us to worship Him instead of allowing us to come to Him on our own. I object to this on several fronts. For one, one of the Apostles, Thomas, displayed doubt as to Jesus’ resurrection even after viewing it himself. While he was rebuked, it was only gently; yet in the same book, we learn that those who do not believe, despite having seen no miracles, will be punished.

No, my main problem is that we have lots of different competing Words written down that still survive to this day, let alone those that have fallen by the wayside. Even if we do believe in the Christian God, there are a multitude of different takes on that particular belief. Do we believe in salvation of works or faith, or both? Do we keep Sunday holy and do no work, or do we just go to church then? Do we pray in solitude, or in groups? Etc etc etc.

How many of these are correct? They can't all be, because they're contradictory. The world was created by God or sprang forth from Brahman, or something else; they can't all be right without them ALL being wrong in significant ways. So if the majority of religious Words are wrong, then why can't they all be wrong? If they're all flawed, how much of any is correct?

So even if Christianity is right, and their religion (the one they were presumably born into) happens, against all odds, to be correct... God sent down His Word, yet failed to make it stand out significantly among the many existing in the world. Logically, rationally, we have no more reason to believe in Yahweh or the divinity of Christ than we do in Vishnu, Allah or the Buddha. You find the Christian scriptures particularly inspiring; I don't. I see nothing in them that could not have been written by ordinary, non-divinely-inspired mortals from the time they were written. Some of the teachings I like, some I find wretched and abominable... and even those I like are not unique to those writings. The Golden Rule, for instance, existed in the form it takes in Christianity for hundreds of years prior to the arrival of Christ.

So if I die, and I ascend, and I find Yahweh standing there with Christ at his right side and they ask me why I didn't believe, I will simply say: "Lord, I can see now that I was mistaken. But I am a rational, thinking man, as You made me. I never felt Your presence in any way that could not have been explained without invoking a force that I could neither see nor hear nor touch. You hid Yourself away so effectively that I had no logical reason to believe that You existed. However, despite not believing in any eternal reward, I acted as a good person. I loved my neighbour, I forgave my enemies, I turned the other cheek. I did this not in hope of eternal life but simply because it was good to do so. If You choose to send me to Hell, then that is Your decision, but I would feel that that would be an injustice. I am as You made me, and I feel You would have been worse-served by a believer who did all of the above merely to gain reward than You were in life by me."

And any God who wants to say no to that is no God I want anything to do with.

Saturday, February 12, 2011

Religious Intolerance

Why do we, as a society, put up with discrimination as long as it comes from religion? It seems that bad behaviour gets a free pass as long as it’s in line with ancient traditions and given a veneer of respectability by being tied to ritual.

There’s a perfect example in the paper today, dealing with New South Wales’ anti-discrimination laws, which provide an exception for religious schools. These schools are permitted to expel homosexual students. They can also fire or block the advancement of homosexual teachers, or those who are divorced or become single parents.

Commenting on the law, the NSW Attorney-General said that a balance must be struck “between protecting individuals from unlawful discrimination while allowing people to practise their own beliefs.”

Sounds good. Now, how far does this go? What religious beliefs wouldn’t be tolerated? Would we let religious schools discriminate on the basis of race? What about gender? If not, why not? By what basis would we permit discrimination on the basis of sexuality (or marital status) but not race and gender? It would please me greatly if the Attorney-General of New South Wales would clarify this point.

Aside from making a joke of sexuality as a protected category, this clause throws the permissibility of religious intolerance into sharp relief. I think it has something to do with the notion that churches, as many religious advocates will tell you, are where flawed humans learn about morals. This gives any church activity a sheen of morality and a sense of decency. Thus, they don’t attract as much criticism: if a church takes an action out of its sense of morality, then it’s permissible. It raises attention only when the church goes against its moral sense, or performs actions that are well outside the community’s morality.

The greatest example of this in recent memory is the Catholic Church’s treatment of abuse cases. Not only was the Church hypocritical in its handling of the cases, it offended community sensibilities. The community found itself in a position of being morally superior to the supposed source of morality and, seeing the feet of clay, reacted harshly.

If this is the case, it says one sad but obvious thing about the NSW law: discriminating against homosexuals is not far enough outside of community sensibilities that it raises ire. At least, it isn’t when churches do it. Apparently, enough people look at the situation and shrug... or even approve. After all, how is ‘churches discriminate against homosexuals’ new? Sadly, it isn’t; while gay-friendly churches do exist, the best homosexuals can expect from most of them is to be ignored. It is a sad moment, however, when such discrimination on the public purse is met with such apathy... or even outright defence of the action. Remember that private and religious schools receive partial government funding in Australia, funding that appears to come with few strings.

In the United States, a country dominated by religious rhetoric, not even the Boy Scouts were exempt from anti-discrimination requirements while receiving government funding. If churches are to be free to follow their conscience, then there’s no reason that taxpayers have to fund it. If they want to make a real moral choice, they can choose between discrimination and government funding.

I think that, in truth, most of society has shed the idea that churches are more moral than other institutions. While the general sense lingers on after the concept has fled, there have been enough scandals that the aura of sanctity has faded from all churches, not just the Catholic. I believe we're ready to move against this sort of thing, and it's an important fight. Religion is the last great excuse for prejudice, and homosexuals (and single mothers) are among the last 'legitimate' targets for it. The more this is talked about, the harder it'll be for them to keep it up.

Monday, January 24, 2011

"Great Men" and Science

Those who’ve read me before know that I’m a big supporter of the scientific method, the scientific community and the results of science. That’s not unqualified support, of course... the success of any construct composed of people depends, in the end, on the people involved.

The key for me is that science should not (and generally does not) designate Great Men (apologies for the gendered language, but I feel the outdated term makes my point better). Let me illustrate why this is important: one common tactic Creationists use to assault Evolution (as if disproving Evolution could in any way validate Creationism) is to attack Darwin himself. This is usually done by attacking the man personally, pointing out that he was religious, or claiming that he recanted on his deathbed.

The fact that he was religious is no secret. Many evolutionary biologists are religious (by some counts, most of them) and Darwin himself saw no incompatibility between his theory and religion itself... only between his theory and a literal interpretation of Genesis. The personal attacks do not reflect on his theory, while his deathbed recanting has been repeatedly debunked.

However, the counterargument is unnecessary. The disconnect is this: that even if his religious life was a factor in his research, even if he were a terrible person, even if he did recant... it would not affect his theory. In religion, people believe what powerful people say because they are Great Men. Jesus is trusted because he’s the Son of God, Mohammed because he is Allah’s prophet, Paul because he was supposedly revealing divine wisdom. Their teachings are taken wholesale and treated as truth because of their source. Apologists are fond of arguing that their texts are more reliable than science because textbooks change while scripture is eternal.

But this is not how Science works. Darwin, Einstein and Hawking all had to prove their theories before they were accepted. Evolution through natural selection was in fact not fully accepted until well into the 20th century, when it was merged with its principal competitor – Mendelian genetics – in the modern synthesis, later cemented by the discovery and analysis of DNA. In science, we do not believe people because they are Great. We believe they are Great because their discoveries were tested, proven and withstood the test of time. Had Darwin recanted on his deathbed, it would not have changed the fact that his discovery was right; truth is independent of the researcher.

At least, that’s how it’s supposed to work: the ideal case. Unfortunately, it often isn’t so simple in the real world, where professional reputations can intimidate others out of testing hypotheses. This seems to be particularly the case in medical research, where evidence is emerging of peer review being used not to improve knowledge but to quash competition. Medical knowledge has, for some reason, always proven resistant to scientific inquiry and reverent of its Great Men. The Greek physician Galen made many advances in his own day and perhaps deserved his reputation; but it was only in the 17th century that his theory of uni-directional blood flow was overturned. Up until the 20th century, medical schools often preferred to teach Galen directly rather than later, more reliable texts, treating him as part of medicine’s present rather than its history. To medicine, too, the unchanging nature of old texts had value.

I feel it’s important to point out the failings of science, just as it is to point out its successes, and to add my voice to the concerns about the Cult of the Great Man winning out over Evidence. Good science is skeptical; good science challenges what we know and recognises that our knowledge is imperfect. It searches for perfection, knowing it will never reach it. There must always be researchers who challenge the status quo, because a strong theory will of necessity stand up to any scientific scrutiny. If it cannot, it must be replaced. Great Men are no substitute for evidence and scientific rigour.