Saturday, April 23, 2011

Site Move!

I've finally taken the next step and bought some internet real estate. Please direct your attention to:

for all your future blogging needs. All existing posts and comments have been replicated there.

Thank you!

Subjective Objectivity

It’s the Easter weekend! This means that the work week finished one day early and many of us felt a lot of familiar ‘Friday-y’ feelings on Thursday. In my case, I have a pair of pants that I only wear on Fridays, and they are particularly snazzy. You see, they’re dry-clean only, and I don’t want them getting dirty too fast, so I allow myself to wear them only on Fridays to celebrate the end of the work week.

On Thursday morning, I found myself naturally reaching for my Friday pants. I began to think about the nature of days. Today wasn’t Friday, it was Thursday. Yet it was also the end of the work week, which made it substantially similar to a Friday. Further, I began to wonder what the days of the week really meant and this spiralled off into thinking about subjectivity and objectivity.

In moments like this, I’m reminded of my classes on post-modernism. Jean Baudrillard might argue that not only did last Thursday feel like Friday, last Thursday was in fact Friday. After all, what does Friday really mean? For most of us, the main thing Friday means is that it’s the last day of the work week. Therefore, this week, Thursday was Friday not merely in our minds but in fact. How can this be?

Objective subjectivity
We think of things that are definite in our lives as objective. Friday is an objective thing: everyone agrees that it’s Friday. Yet cosmologically speaking, there is nothing objective about our days of the week. There is no fact of nature, nothing external to society, on which the days are based. Even having seven days is arbitrary; it just means that we can (almost) fit four weeks into one cycle of the moon’s phases. Yet three weeks of ten days would be even closer.
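To put rough numbers on that claim, here is a quick sketch. The figure of about 29.53 days for the average lunar (synodic) month is my assumption, not something from the original argument:

```python
# Rough check of the calendar arithmetic: which 'month' of whole weeks
# lands closer to a lunar cycle? Assumed lunar month: ~29.53 days.
LUNAR_MONTH = 29.53

four_seven_day_weeks = 4 * 7    # 28 days
three_ten_day_weeks = 3 * 10    # 30 days

gap_seven = abs(LUNAR_MONTH - four_seven_day_weeks)  # falls ~1.53 days short
gap_ten = abs(LUNAR_MONTH - three_ten_day_weeks)     # overshoots by only ~0.47 days

print(f"4 x 7-day weeks miss the lunar month by {gap_seven:.2f} days")
print(f"3 x 10-day weeks miss it by {gap_ten:.2f} days")
```

So three ten-day weeks really would track the moon more closely than our four seven-day ones, which underlines how arbitrary the choice is.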

Even the year is arbitrary. It makes some sense in that it’s the time it takes for things to get warm, cold and warm again. Yet this means little to anyone living at the equator, and the point at which our year begins could be anywhere on the calendar. Indeed, in other cultures, New Year falls at entirely different points of the Earth’s orbit around the sun.

On the larger scale, there’s the disagreement over when the Millennium celebrations should have been held. Most of us celebrated on New Year’s Eve 1999. As many rightly pointed out, mathematically speaking, the Millennium should have been celebrated a year later, as there was no year zero. My argument then and now was this: which is more important? Either we celebrate 2000 years since an arbitrarily chosen date that is meaningless to most people in the world and that, even for those it does have meaning for, corresponds not to any actual historical event but to early estimates of the birth of someone who may or may not have been divine... or we celebrate the fact that all four digits of the year are changing at once. Can we really argue that either is more important, or less arbitrary?

So strictly speaking, there is absolutely nothing about our calendar, except possibly the length of the year, which is objective; yet we see it as fundamentally objective. Why? Because it’s shared by wider society. The European calendar is fundamental to organising our lives. It may be arbitrary, but it’s useful – if we say ‘I’ll see you on Monday’, everyone who uses the same calendar (which is most of the people we’re likely to be talking to) knows what we mean. That power, that utility, is what makes it seem objective.

Morality as a calendar
I suspect that the same holds true for other ‘objective’ elements. Take morality: is murder wrong because it is inherently, objectively wrong; or is it wrong because society decides it is wrong? Certainly, every human society since the dawn of history has had rules against murder (though many have varied in their definition of murder, or in the exceptions that rendered killing acceptable). Does that mean it’s objective, or simply that rules against murder are useful for holding societies together? After all, if murder is acceptable, contract law breaks down. How can I get you to build a house for me if you can’t trust that I won’t kill you at the end instead of paying you? A society that condones killing in all cases is no society at all.

As with most things... it isn’t that simple. Hard cases may make bad laws, as lawyers remind us, but they’re also very useful for ironing out logical flaws. Take the classic case of the Hindu practice of Sati. This was the widespread but infrequent practice of wives burning themselves alive after their husband’s death in order to remit his sins and ensure his passage into paradise. Defenders of the practice pointed out that it was voluntary; yet as in today’s controversy over the burqa, opponents say that this ‘choice’ was often illusory, governed as it was by societal values and the rejection of the woman if she chose poorly.

A purely subjective view of morality cannot possibly condemn this, and supporters of objective morality hold up cases like this as an example of the failure of alternative systems of morality. The British colonials solved this via the time-honoured method of declaring their morals superior by virtue of possessing more guns than anyone else involved in the debate.

Is there a solution to this? YouTube user SisyphusRedeemed (whose channel I would heartily recommend) points out that in the philosophical literature, there is a lot of muddying of the waters between the very ideas of ‘objective’ and ‘subjective’, particularly in the area of morality. I have not read this literature and do not intend to offer my insights as either original or conclusive, merely as interesting and personal. My attempt to add to the debate is this:

I reject the notion that morality based on societal norms cannot be useful. As we can see from the calendar, societal norms lend a certain level of objectivity to what is otherwise subjective. The subjectivity of the calendar does not lead to anarchy or disagreements over what day it is (unless you’re dealing in real time with people in other time zones). There is no reason to believe that the lack of a purely objective basis for morality would lead to people doing whatever they want.

Not to avoid the question
Nevertheless, the problem of Sati remains. Can we criticise the British for imposing their morals on another culture, or should we say they did the right thing? As a student of anthropology, the issue of dealing with and judging other cultures is a familiar one. Early in the 20th Century, anthropology as a discipline turned away from judgment into complete cultural relativism. These days, there’s a trend away from that towards a limited relativism: that is, withholding judgment of a culture until one understands what it means to those inside it. We stay away from uninformed, kneejerk responses and judge from a position of wisdom instead of cultural and moral superiority.

I hope that, in this way, I have provided a pathway by which a morality which is not completely objective (yet also not entirely subjective) can provide real, practical answers to the usual objection. Perhaps I will revise my thoughts once I delve further into the literature and immerse myself in the debate. For now, this suffices for my purposes.

Thursday, April 21, 2011

What do we really know?

The topic of knowledge is one often relegated to the more esoteric branches of philosophy. It does, however, become of profound practical importance when weighing up different fields of knowledge. Take the obvious conflict between science and certain types of religion (namely, literalist/fundamentalist churches that reject science when it disagrees with their texts). A common ploy by apologists is to say that science can’t be trusted because it always changes; even scientists admit they don’t really know that what they say is true! The believer, on the other hand, knows what the truth is, because it’s written in their book.

This problem doesn’t only affect religious discussion, but also discourse in popular science. When Phil Jones of the University of East Anglia’s Climatic Research Unit (CRU) said that he could not be ‘certain within the 95th percentile’ that warming had occurred when looking only at the data from the last 15 years, this was reported by the Daily Mail as ‘CRU head says no warming since 1995’. Not only was this profoundly dishonest journalism, twisting his words; it also betrayed a poor understanding of what scientific certainty means.
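The statistical point here – that a short record can contain a real trend without that trend clearing the conventional 95% confidence bar – can be illustrated with a toy sketch. The warming rate, noise level and ‘temperature’ series below are invented for illustration and have nothing to do with the CRU’s actual data:

```python
import math
import random

def slope_and_se(ys):
    """Ordinary least-squares slope of ys against 0..n-1, with its standard error."""
    n = len(ys)
    xs = list(range(n))
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = ybar - slope * xbar
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    return slope, math.sqrt(sse / (n - 2) / sxx)

# Synthetic 'anomalies': a genuine 0.02-degree/year trend buried in noise.
rng = random.Random(0)
temps = [0.02 * year + rng.gauss(0, 0.1) for year in range(30)]

slope15, se15 = slope_and_se(temps[:15])  # short record: wide uncertainty
slope30, se30 = slope_and_se(temps)       # longer record: narrower uncertainty

print(f"15 years: trend {slope15:+.3f} +/- {se15:.3f} deg/yr")
print(f"30 years: trend {slope30:+.3f} +/- {se30:.3f} deg/yr")
```

The same underlying trend gives a much wider standard error on the shorter record, so a significance test can fail at 15 years and pass at 30 – which is a statement about the amount of data, not about whether the warming is real.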

Science, after all, doesn’t really claim to ‘know’ anything at all. What does this mean? What do we really know?

Armchair philosophy time
Like many discussions, the question of what we ‘know’ depends very heavily on the definition of terms. In this case, the most important one is the word ‘know’. What do we really mean by this? This question isn’t quite so simple as it seems; we use the word to cover quite a wide range of potential certainty values, up to 100% certainty.

Even then, as human beings we are very poor at dealing with concepts like 100% certainty. We can feel 100% sure about something even if we recognise that there are plausible ways that we can be wrong. Our thought processes (and importantly, our use of language) project an inaudible and invisible ‘reasonably’ into many sentences; we are ‘reasonably’ sure, ‘reasonably’ certain.

My claim here is that if we define ‘knowledge’ as things we are 100% certain about, then there is no way at all to claim that we ‘know’ anything. This is where certain philosophical thought experiments and science-fiction concepts come from. How can we be sure that we aren’t wired into the Matrix? We can’t. Every single piece of data we collect on the natural world might simply be computer-generated, with no way of telling the difference.

Surely we can be sure of ourselves. Cogito ergo sum – I think, therefore I am. Even asking the question of our own existence means that we exist. To this end, I’d like to ask you to imagine a far-future science-fiction world. In this world, there are computers capable of generating extraordinarily complex patterns and transmitting them into the minds of citizens. These citizens experience something tremendous: all of a sudden they have memories of other lives projected into their heads. These are not the memories of real people; they are generated by the computers. The process lasts only seconds, but the memories stay around for some time.

These memories feel very real to the person who experiences them, even though they are false. They are profoundly affecting, but not ‘real’ in any conceivable sense. Crucially, the character present in the memories, like any truly memorable cinematic hero, believes that he or she genuinely inhabits the world that they perceive. They are not separate from it; they are part of it. They are not aware that they are merely created by a computer and have no real existence.

Now imagine that you are in fact not a person, but one of these memories. You exist in a single moment of time; your entire existence is simply your collection of memories. Is there any way to argue that someone like this is, in fact, a person? If the construct existed in time and could learn new things, then perhaps. But being a pattern simply implanted into someone’s head, memories already formed... such a ‘person’ is more analogous to a cinematic character than a true human being.

How could you tell the difference? You may protest that we’re present in now, we perceive time, so we are forming new memories. True; but think back to five minutes ago. Then, you were convinced you were in the present, and now that’s just a memory. What if all you have is memories? You would not be able to tell the difference because you exist in a single point of time.

I think this thought experiment does have issues and could certainly do with some work; nevertheless, I think that there is a path we can see towards being unsure of the nature of our being. While cogito ergo sum proves that we exist... it does not prove what we are. Fundamentally, it remains incapable of proving that we are people and not simple procedurally-generated computer images.

It’s my position that there is no fact beyond our mere existence that we can be 100% sure of. We exist in a world not of certainty, but of doubt. However, our minds have created methods – psychological, social and linguistic – of dealing with doubt. Hence, we talk about (and even think about) certainty as being total when it quite clearly is not. We don’t even have language that can properly communicate the difference between ‘certain enough to satisfy any reasonable doubt’ and ‘beyond any possible doubt, no matter how unlikely’. The latter is so alien to us that we have never created a word for it.

Science is one of the few disciplines (along with certain branches of philosophy) that attempts this task. Using proper scientific language, we can never really claim to ‘know’ something. After all, what does an experiment really prove? Even discounting dishonesty, all that is really proven is that a particular experiment was conducted and a particular result occurred. If a hundred experiments all turn out the same way, the evidence is certainly growing... but it can by definition never reach 100%.
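One hedged way to formalise ‘the evidence grows but never reaches 100%’ is simple Bayesian updating. Suppose each confirming experiment is, say, ten times more likely if the hypothesis is true than if it is false – a made-up likelihood ratio, purely for illustration. Then confidence climbs with every experiment yet never quite reaches 1:

```python
from fractions import Fraction  # exact arithmetic, so nothing rounds up to 1.0

def posterior_after(n, prior=Fraction(1, 2), likelihood_ratio=Fraction(10)):
    """Exact posterior probability after n confirming experiments,
    via Bayes' rule in odds form: posterior odds = prior odds * LR**n."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n
    return odds / (1 + odds)

for n in (1, 10, 100):
    p = posterior_after(n)
    print(f"after {n:3d} experiments, remaining doubt 1 - p = {float(1 - p):.3e}")
    assert p < 1  # arbitrarily close to certainty, never equal to it
```

After a hundred confirmations the remaining doubt is astronomically small, but it is still strictly greater than zero – which is exactly the sense in which science never ‘knows’.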

This is where the evangelical preachers such as William Lane Craig come in. They understand that science is never 100% certain and take that as a weakness. Look at all the arguments about evolution; that just shows they aren’t really sure. Christopher Monckton uses similar arguments against climate change: just look at the debates! Any consensus is a myth.

It’s my suggestion that what they’re doing here (deliberately or not) is jumping between different definitions of ‘certainty’, ‘knowledge’ and similar words. They use the scientific sense (which is precise and means complete lack of any possible alternative, no matter how bizarre) when dismissing scientific findings they don’t like and the common sense (which has necessary levels of uncertainty built in) when discussing the things that they do like. Monckton can dismiss claims for global warming because they aren’t scientifically certain, but accept claims against it when they are only colloquially ‘certain’... even if they’re less likely to be true than the ones he dismisses.

(Particularly disingenuous is the habit of apologists such as Craig of taking any admission by scientists that there might be a god as a major concession. The corollary of never being 100% certain that something is true is, of course, that you can never be 100% certain that anything is not true. Of course science cannot prove the non-existence of God; a truly omnipotent deity could make any experiment return whatever results were desired. Science doesn’t squeeze out religion; it squeezes out the need for religion.)

Regardless of what we ‘know’, we can always say that we ‘believe’. This is another word that has a variety of meanings, having long ago been co-opted by religion. Again, unscrupulous argumentation will take advantage of that by shifting between meanings and contexts. Again, Craig will happily argue that science is a religion because scientists ‘believe’ in a rational universe, which is equivalent to his own ‘belief’ in God.

We all have a set of beliefs – that is, things we hold to be true simply because it’s easier than doubting them all the time. Among these beliefs is the notion that we truly exist; that we are real, thinking human beings inhabiting a physical space that closely resembles our perception of our surroundings. It’s just easier that way – after all, it’s what appears to be real, and we don’t gain anything by behaving as if this is not the case.

Science is, of course, founded on a belief that experimentation and human reason are capable of explaining the natural world. This is not a belief pulled from the ether; it is one backed up with a great deal of evidence and an impressive track record. To compare this to religious faith (another word with multiple meanings) is false equivalence of the first order.

All in all, saying we ‘believe’ something is, while correct, fraught with perils of miscommunication.

So what can we really say we know? It depends on the sense in which we use the word. If we’re using technical language in a scientific paper, we need to stay away from the word, using it as sparingly as possible. In general conversation, conversely, we can say we ‘know’ anything that we’re reasonably sure of. Depending on the person in question, that might be a low bar or a high one – it all depends how good their critical thinking is. Fundamentally, it’s an emotional feeling, one of confidence, that we gain by virtue of the fact that the object of our ‘knowledge’ agrees with (is parsimonious with) our existing set of beliefs.

I feel quite confident saying that I ‘know’ that the universe can exist without God, that the world is warming, that I am truly a person who exists in time, that all animal species are evolved from a common ancestor, or that when I drop a pebble it will fall to the earth and not simply float there. Strictly speaking I don’t ‘know’ those things; I’m reasonably confident and I believe them. Such language is important when speaking technically, and when I’m trying to explain a complex point I’ll usually try to stick to the technical terms.

I have nothing but contempt, however, for those who take advantage of the precision of technical language to undermine an argument; the William Lane Craigs of this world who will point to the uncertainty of science and say that it ‘isn’t really sure of anything after all’. This offends me particularly because uncertainty is not a weakness of science at all; it is science’s greatest strength. Socrates claimed that ‘wisest is he who knows that he knows nothing’ – hidden in that paradox is the idea that someone who believes they know everything will never learn anything. Only those who admit that they don’t know stand a chance of ever knowing anything at all.

So in conclusion, I suppose I feel sorry for Craig as much as I’m annoyed by him.