Sometimes I dream about having time. I read books by people who have obviously had time to bake their ideas and put them down on paper. Mine rarely reach the half-baked stage and normally end up at most 10% baked. Later this month I have 8 or so Internet- and telephone-free days on a freighter ploughing the North Atlantic from Montréal to Liverpool. Who knows what might emerge. In the meantime, here’s what I have 5% baked at the moment, building on a couple of recent posts, particularly the one about whether or not mathematics is just a collection of tautologies.
First thread. In his book Rationality in Action, John Searle considers deciding for whom to vote in a political election. He argues that there must be a rational choice for you (one where you would expect to gain the most) but that the reasons don’t act on you; you select a reason for voting for person X.
Second thread. At work, I’m sponsoring some research at the University of Waterloo related to proximate systems: systems where, for one reason or another, an exact output (decision) cannot be made. Perhaps the output is known to depend on the value of some variables for which we cannot get precise or accurate values. Perhaps the computation is infeasible. Perhaps there is no correct outcome.
It is generally agreed that, while mathematics may be tautological, for practical purposes it provides endless hours of fun for mathematicians who find things that, once found, are obvious (because any intelligent deity would have deduced them immediately from the axioms).
Deciding for whom I should vote in the next Canadian election is presumably also obvious to that intelligent deity. My views about income tax, the price of cigarettes, the seal hunt, the war in Afghanistan, immigration and all the other topics with which politicians concern themselves are known to that deity. The promises of the politicians and their track records for keeping promises are known and so there is only one possible candidate for whom I should vote. This is a proximate system only because I can’t fully express my views on those topics precisely even to myself and I certainly don’t have the time (or inclination) to study the promise-keeping of the politicians. But, in principle, I could find out.
So, the tautological resolution of the Riemann Hypothesis and the resolution of how I should vote at the next election are not that far apart. They are both hidden from me precisely because of my limited capacity for rational thought. The question is whether they are intrinsically so.
I tried to differentiate, a few postings ago, between things that can’t be known and things that we can’t know. A number of people (including my wife) have incorrectly said that this is a distinction without a difference. Clearly, something that cannot be known is also something we cannot know but I do not believe that the two sets are equal. A complete arithmetic, for example, cannot be known and we cannot know it. But there is another element in those things we cannot know: us. If the computational theory of mind is even an approximation to our reasoning apparatus, then our minds are digital and finite. Given this, there must obviously exist things that can be known but which we cannot know: things that are too big or which fall through the digital cracks. My meaning, however, was deeper. I believe that, leaving aside the finite nature of our brains, bringing a human into the process introduces unknowable entities.
As a total digression, there is another interesting observation in Searle’s book. I have spent a reasonable part of my life building systems that rely, to some extent, on rational behaviour by participants: behaviour to maximise their expected utility. Searle considers the case of a 25 cent coin (or whatever is the equivalent in the UK: a fifty pence coin?)—a coin that is not particularly valuable but which I would probably take the trouble to bend down and pick up if I passed it in the street. That I would be willing to stoop to pick it up indicates that it has some worth to me. I should therefore be willing to accept a bet, at some odds, to gain 25 cents against losing my life. Of course, there are no such odds. At 10:1 it’s laughable and at 1,000,000:1 I still wouldn’t take the bet. This is, of course, irrational behaviour.
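Searle’s bet can be made concrete with a toy expected-utility calculation. The sketch below is my own illustration, not anything from the book, and the utility assigned to losing one’s life is an arbitrary assumption: the point is just that any large finite (or infinite) disutility for death swamps the quarter at any realistic odds.

```python
def expected_utility(p_win: float, win_utility: float, lose_utility: float) -> float:
    """Expected utility of a bet won with probability p_win."""
    return p_win * win_utility + (1 - p_win) * lose_utility

COIN = 0.25       # utility of winning: one quarter (assumed linear in dollars)
LIFE = -10**12    # stand-in finite disutility for losing your life (pure assumption)

# Even at a million-to-one against losing, the bet looks terrible:
p = 1 - 1 / 1_000_000          # probability of winning
print(expected_utility(p, COIN, LIFE))   # a large negative number

# And if the value of a life is treated as unbounded, no odds whatsoever help:
print(expected_utility(p, COIN, float("-inf")))
```

On this model, refusing the bet at every odds is what maximising expected utility actually prescribes, so long as death’s disutility is large enough; the apparent “irrationality” only appears if we insist every valued thing trades at some finite rate.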