Postdicting The Future
Steve Luttrell’s blog, Fact and Fiction, has led me to the Discover Interview with Marvin Minsky. This is very fortuitous because I had been hotly debating the topics Minsky discusses—neuroscience, consciousness and AI (artificial intelligence rather than artificial insemination, the latter being a topic I’m glad to know little about)—with a colleague at work all week.
Now I must declare that Minsky has always been one of my heroes. One of the oldest technical books sitting on my bookshelf is his “Computation: Finite and Infinite Machines” (apparently, “An Open University Set Book”). Inside the front cover, in my juvenile hand, it says “Chris Hobbs, June 1978”. I remember being inspired by that book back in what was presumably 1978.
Minsky pours the same level of scorn on neuroscience that it is currently fashionable for the neuroscientists to pour on AI:
I don’t see neuroscience as serious. What they have are nutty little theories, and they do elaborate experiments to confirm them and don’t know what to do if they don’t work…When you talk to neuroscientists, they seem so unsophisticated; they major in biology and know about potassium and calcium channels, but they don’t have sophisticated psychological ideas.
I spoke in an earlier blog entry about the distinction between “what can be known” and “what we can know” and nailed my colours to the mast of the first of these as the proper subject for philosophy. Human consciousness is, of course, the filter through which we, as humans, make all of our discoveries but trying to understand this filter by neurological techniques seems too introspective ever to produce results.
As Minsky implies in his interview, as problems become increasingly sophisticated there is a point where closed-form analytical solutions give way to simulations: effectively to metaphorical reasoning. Having been doing discrete event simulations all my life, from the days when GPSS was the new kid on the block, I am aware of the metaphorical nature of the output of a simulation. (And while I’m at it, having just looked up that Wikipedia link to GPSS, I notice that the first line of text is in the past tense: “General Purpose Simulation System (GPSS) was a discrete time simulation language….”. I’ll have to go in and correct that.)
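For readers who never met GPSS, the core of any discrete event simulation is a very small idea: a time-ordered queue of pending events, where handling one event may schedule others. Here is a minimal sketch of that loop in Python; the function names and the toy arrival process are my own illustration, not GPSS’s transaction-flow notation.

```python
import heapq

def simulate(events, horizon):
    """Minimal discrete-event loop: pop the earliest event, let its
    handler schedule follow-up events, repeat until the horizon."""
    queue = list(events)          # entries are (time, seq, handler)
    heapq.heapify(queue)
    log = []
    seq = len(queue)              # tie-breaker so the heap never compares handlers
    while queue:
        time, _, handler = heapq.heappop(queue)
        if time > horizon:
            break
        log.append(time)
        for delay, follow_up in handler(time):
            heapq.heappush(queue, (time + delay, seq, follow_up))
            seq += 1
    return log

# A toy "customer arrival" process: one arrival every 5 time units.
def arrival(t):
    return [(5, arrival)]         # schedule the next arrival

print(simulate([(0, 0, arrival)], horizon=20))  # → [0, 5, 10, 15, 20]
```

The “metaphorical” character of the output is visible even here: the list of event times is not the system itself, only a trace of a model whose assumptions (a fixed inter-arrival time, in this toy case) the modeller chose.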
The big insight in Minsky’s interview is his view that AI has addressed superficially hard problems (chess, theorem proving) but ignored the questions that a 3-year-old would ask: “Why, when it rains, would someone want to stay dry?” Having seen people deliberately get wet in a shower, this is, of course, a very reasonable question. The point that I think he misses, although he touches on it when referring to the CyCorp commonsense knowledge programme, is the need to demolish the incredibly primitive view we currently have of ontologies (in the computing rather than the philosophical sense). Douglas Lenat, founder of CyCorp, has said:
Once you have a truly massive amount of information integrated as knowledge, then the human-software system will be superhuman, in the same sense that mankind with writing is superhuman compared to mankind before writing.
I would like to believe that this is true but feel that, given our extremely limited means of expressing ontologies, it parallels too closely the artificial life belief of the 1980s that, once sufficient complexity had been achieved, life would spontaneously arise: effectively “all conscious beings are more complex than our current systems so, if we make our systems more complex then they will become conscious.” I hope that the CyCorp argument is not similar: “reasoning entities such as humans have enormously more facts at their disposal than our systems, so, if we make more facts available to them then they will start reasoning.” Certainly I have been meaning to download openCyc and have a play.
Neither Minsky’s interview nor Luttrell’s blog uses the term “Semantic Web”. Strange.
As an afterword, I would like to thank Steve Luttrell for introducing me to a new, and potentially immensely important, word: “postdict”, a word derived in much the same way as “predict” but with a flip of sign. I predict that I will use it frequently.