Our knowledge of the workings of the human mind has taken a gigantic leap forward over the last few decades. A growing body of evidence suggests something most of you will probably already know: human beings are not nearly as rational as they would like to be.
Study upon study shows that we are prone to all kinds of cognitive mistakes, or biases, that constantly lead us astray, and the scientific literature abounds with examples of how we trick ourselves into all sorts of irrational behaviour.
One example that supposedly shows our innate tendency to jump to false conclusions is the famous “bat and ball problem”, which goes something like this:
“A bat and a ball cost $1.10 together; the bat costs $1 more than the ball. How much does the ball cost?”
If you are like me (and most people are), your initial, intuitive answer is most likely that the ball costs $0.10, right?
On closer examination, however, it turns out that if a bat costs $1 more than a $0.10 ball, the bat alone would cost $1.10 (which would make the total $1.20). The correct answer is that the ball costs $0.05.
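The correct answer falls out of two simple equations. Here is a minimal sketch of the algebra in Python (the variable names are mine, chosen for illustration):

```python
# The riddle gives us two constraints:
#   bat + ball = 1.10
#   bat = ball + 1.00
# Substituting the second into the first:
#   (ball + 1.00) + ball = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05

total = 1.10
difference = 1.00

ball = (total - difference) / 2   # 0.05
bat = ball + difference           # 1.05

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```

Note that the intuitive answer of $0.10 fails the second constraint: a $1.10 bat would then cost only $1.00 more than the ball's $0.10, but the total would be $1.20.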
So our first intuitive response clearly does not give us the correct answer, but does that really make us irrational? And perhaps even more importantly: should irrationality be considered a cognitive deficiency?
According to a number of critics, there could be something fundamentally wrong with the widespread assumption that if our decision-making processes lead us to anything less than an optimized solution (or, in the case of the bat-and-ball problem, the truth) they are somehow flawed. For example, many of the experiments supporting this notion have much in common with the bat-and-ball problem in the sense that the decisions are presented as isolated theoretical cases rather than in a real-world context.
In the case of the bat-and-ball problem especially, it seems quite obvious that it is presented in a way whose whole purpose is to mislead us. It’s not at all difficult to ‘prove’ that any of our abilities are ‘flawed’ by designing a context for which they’re not suited. Yes, lungs are terrible at breathing under water, and our eyes can’t detect ultraviolet, infrared, gamma or X-rays and are practically useless in the dark, yet this hardly makes them deficient. (The bat-and-ball problem is almost like blindfolding someone and then claiming that his or her eyes don’t function optimally.)
Ecological validity

It seems to me that the real test of whether our cognitive abilities are innately flawed could only be conducted under real-life conditions (i.e., would someone in a shop really be informed about the price of the ball and the bat by being presented with a riddle?). Simply rephrasing the question would solve the bat-and-ball problem immediately, so I’m afraid it hardly justifies any conclusion other than that we are not very good at (quickly) solving riddles.
There are, however, many examples where we also seem to err in everyday, real-life situations. Does this then qualify as evidence of our supposed irrationality? Again, I seriously doubt it, for two reasons.
First of all, it may very well be that the underlying economic assumption that the validity of a decision depends primarily, if not solely, on the quality (maximized utility) of the outcome is itself seriously flawed.
In real life it is the rule rather than the exception that most of our decisions have to be made under constraints of time and knowledge, while we face an almost limitless supply of options to choose from. So in reality there is another, equally important and inevitable factor that determines the validity of a decision: the amount of scarce resources (time, cognitive effort, etc.) needed to reach an optimized or maximized solution. In other words, what we have to ask ourselves is this: would an optimized solution still be warranted if the same investment of resources would have provided us with a much larger number of ‘good enough’ solutions?
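The trade-off can be made concrete with a toy simulation (entirely my own illustration, not taken from the research discussed here): an ‘optimizer’ evaluates every option to find the best one, while a ‘satisficer’ stops at the first option that clears a good-enough threshold, and we compare how much evaluation effort each strategy spends.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def pick_optimizing(options):
    """Evaluate every option and return the best one (maximized utility)."""
    return max(options), len(options)  # (choice, number of options evaluated)

def pick_satisficing(options, threshold):
    """Stop at the first option that is 'good enough' (>= threshold)."""
    for evaluated, value in enumerate(options, start=1):
        if value >= threshold:
            return value, evaluated
    return max(options), len(options)  # fall back if nothing clears the bar

# 1000 options with utilities drawn uniformly from [0, 1)
options = [random.random() for _ in range(1000)]

best, cost_opt = pick_optimizing(options)
good_enough, cost_sat = pick_satisficing(options, threshold=0.9)

print(f"optimizer : value={best:.3f}  evaluations={cost_opt}")
print(f"satisficer: value={good_enough:.3f}  evaluations={cost_sat}")
```

The satisficer typically gives up only a small amount of utility while spending a fraction of the evaluation effort, which is exactly the kind of resource accounting the outcome-only view of rationality leaves out.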
German psychologist Gerd Gigerenzer has provided numerous examples where deciding with a ‘good enough’ solution in mind turned out to be superior to its ‘optimizing’ counterpart. Gigerenzer draws a distinction between deciding under ‘risk’ (all probabilities and outcomes are known) and deciding under ‘uncertainty’ (we cannot possibly oversee all the variables). Had we evolved in a world of risk, we would probably have been a lot more like Mr. Spock instead of (I’m almost afraid to say) Homer Simpson…
Looking at the problem from the standpoint of mental computation, I think evolution actually got it quite right: optimal solutions are, as a rule, simply out of reach in the complexity of reality. This brings me to my second point: how do we define human rationality? Should it be determined on the basis of single, isolated events, or should we assess it as an average over multiple decisions across longer stretches of time?
The terms proximate and ultimate causation, as used in biology and evolutionary psychology, may offer an interesting perspective on the matter. Proximate causation is immediately responsible for a result: I eat because I’m hungry. Ultimate causation accounts for the deeper underlying reason: I eat because it sustains my life, so my genes can be propagated into the next generation.
If we apply this logic to the rationality/irrationality question, things become a lot more interesting: can proximate irrationality actually be ultimately rational? According to a team of American researchers, the answer is a resounding yes, and they call it Deep Rationality. Deep Rationality takes an evolutionary perspective on the economics of decision making and sheds an interesting new light on why many of these so-called ‘cognitive mistakes’ might in fact be well-designed and valuable cognitive tools in an abundant and complex world.
I think it was cognitive scientist Steven Pinker who once wrote something like (and I’m paraphrasing here): “our brains didn’t evolve for the truth, but to master the environment”.
I couldn’t agree more…