Software Musings

Postdicting The Future

Posted on: 03/02/2007

Steve Luttrell’s blog, Fact and Fiction, has led me to the Discover Interview with Marvin Minsky. This is very fortuitous because I had been hotly debating the topics Minsky discusses with a colleague at work all week: neuroscience, consciousness and AI (artificial intelligence rather than artificial insemination, the latter being a topic I’m glad to know little about).

Now I must declare that Minsky has always been one of my heroes. One of the oldest technical books sitting on my bookshelf is his “Computation: Finite and Infinite Machines” (apparently, “An Open University Set Book”). Inside the front cover, in my juvenile hand, it says “Chris Hobbs, June 1978”. I remember being inspired by that book back in what was presumably 1978.

Minsky pours the same level of scorn on neuroscience that it is currently fashionable for neuroscientists to pour on AI:

I don’t see neuroscience as serious. What they have are nutty little theories, and they do elaborate experiments to confirm them and don’t know what to do if they don’t work…When you talk to neuroscientists, they seem so unsophisticated; they major in biology and know about potassium and calcium channels, but they don’t have sophisticated psychological ideas.

I spoke in an earlier blog entry about the distinction between “what can be known” and “what we can know” and nailed my colours to the mast of the first of these as the proper subject for philosophy. Human consciousness is, of course, the filter through which we, as humans, make all of our discoveries but trying to understand this filter by neurological techniques seems too introspective ever to produce results.

As Minsky implies in his interview, as problems become increasingly sophisticated there comes a point where closed-form analytical solutions give way to simulations: effectively, to metaphorical reasoning. Having been doing discrete-event simulations all my life, from the days when GPSS was the new kid on the block, I am well aware of the metaphorical nature of a simulation’s output. (And while I’m at it: having just looked up that Wikipedia link to GPSS, I notice that the first line of text is in the past tense: “General Purpose Simulation System (GPSS) was a discrete time simulation language….” I’ll have to go in and correct that.)
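
GPSS itself may now belong in the past tense, but the shape of a discrete-event simulation is easy to sketch in a modern language. What follows is a toy single-server queue of my own invention (the arrival rate, service rate and run length are made-up numbers, and the sketch makes no claim to be GPSS-like); the point is that the figure it prints is a metaphor for a system, not the system itself.

    import heapq
    import random

    random.seed(1)

    ARRIVAL_MEAN = 1.0   # mean time between arrivals (invented)
    SERVICE_MEAN = 0.8   # mean service time (invented)
    END_TIME = 1000.0    # length of the simulated run

    # The event list: (time, kind) pairs kept in time order by heapq.
    events = [(random.expovariate(1.0 / ARRIVAL_MEAN), "arrive")]
    queue_length = 0
    server_busy = False
    served = 0

    while events:
        time, kind = heapq.heappop(events)
        if time > END_TIME:
            break
        if kind == "arrive":
            # Schedule the next arrival, then join the queue or seize the server.
            heapq.heappush(events,
                           (time + random.expovariate(1.0 / ARRIVAL_MEAN), "arrive"))
            if server_busy:
                queue_length += 1
            else:
                server_busy = True
                heapq.heappush(events,
                               (time + random.expovariate(1.0 / SERVICE_MEAN), "depart"))
        else:  # a departure frees the server for the next queued customer
            served += 1
            if queue_length > 0:
                queue_length -= 1
                heapq.heappush(events,
                               (time + random.expovariate(1.0 / SERVICE_MEAN), "depart"))
            else:
                server_busy = False

    print(f"customers served in {END_TIME} time units: {served}")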

The big insight in Minsky’s interview is his view that AI has addressed superficially hard problems (chess, theorem proving) but ignored the questions that a three-year-old would ask: “Why, when it rains, would someone want to stay dry?” To anyone who has seen people deliberately get wet in a shower, this is, of course, a very reasonable question. The point I think he misses, although he touches on it when referring to the Cycorp commonsense-knowledge programme, is the need to demolish the incredibly primitive view we currently have of ontologies (in the computing rather than the philosophical sense). Douglas Lenat, founder of Cycorp, has said:

Once you have a truly massive amount of information integrated as knowledge, then the human-software system will be superhuman, in the same sense that mankind with writing is superhuman compared to mankind before writing.

I would like to believe that this is true but feel that, given our extremely limited means of expressing ontologies, it parallels too closely the artificial-life belief of the 1980s that, once sufficient complexity had been achieved, life would spontaneously arise: effectively, “all conscious beings are more complex than our current systems, so if we make our systems more complex they will become conscious.” I hope that the Cycorp argument is not similar: “reasoning entities such as humans have enormously more facts at their disposal than our systems do, so if we make more facts available to our systems they will start reasoning.” Certainly I have been meaning to download OpenCyc and have a play.
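
To make the “incredibly primitive” charge concrete: an ontology, in the computing sense, often boils down to little more than a store of subject-predicate-object triples. The sketch below is mine, not Cycorp’s, and every predicate and fact in it is invented; a real system such as OpenCyc differs from it enormously in scale, but it is not obvious that it differs in kind.

    # A toy triple-store "ontology". All predicates and facts are invented.
    facts = {
        ("rain", "is_a", "weather"),
        ("rain", "makes", "wet"),
        ("person", "dislikes", "wet"),
    }

    def query(subject, predicate):
        """Return every object related to subject by predicate."""
        return [o for (s, p, o) in facts if s == subject and p == predicate]

    print(query("rain", "makes"))        # ['wet']
    print(query("person", "dislikes"))   # ['wet']

    # The store can chain these two facts mechanically, but it holds no
    # representation of *why* a person dislikes being wet, nor of why the
    # same person happily gets wet in a shower. The three-year-old's
    # question remains out of reach.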

Neither Minsky’s interview nor Luttrell’s blog uses the term “Semantic Web”. Strange.

As an afterword, I would like to thank Steve Luttrell for introducing me to a new, and potentially immensely important, word: “postdict”, a word derived in much the same way as “predict” but with a flip of sign. I predict that I will use it frequently.


3 Responses to "Postdicting The Future"

Somebody wrote on another forum: ‘Computers aren’t human, therefore cannot have human experiences.’

My reply was:

The point is, can they have any experiences? It’s not so much a question of what a computer can do for us, or might in the future do, or even of how far it can do things similar to those we do. The difference between us and any computer we can so far consider as a practical proposition is the extent to which the computer can be demonstrably aware of what it is doing.

We have very little understanding at present of consciousness. Before we could build a computer comparable to ourselves we would need to find a way of simulating the various degrees of self-awareness found in living things, including human beings. We don’t know for certain what the link is between consciousness and self-expression, but it seems likely to be pretty strong.

Where consciousness is concerned, the computer and the HB pencil have a lot in common.

To a later posting, I replied:

There seems to be a desire to show that we are not all that different from machines currently within our power to make, a desire that I would say borders on religious fervour. :-) This need to explain ourselves away stretches even to considering a bunch of probes and feedback units, communicating via sophisticated computer applications, as all that is needed to produce consciousness, even self-consciousness. I can’t see any scientific evidence relating the ability to learn and optimise to inner self-awareness. Indeed, many clearly conscious beings have a remarkable propensity not to learn!

It is interesting what the Cycorp man wrote about mankind becoming comparatively superhuman after learning to write. The analogy should be that the human-software system will become a superhuman-software system, not superhuman. What on earth is a ‘human-software system’, anyway?

David

[…] the blog he wrote yesterday, my husband Chris mentions a well known “artificial intelligence” expert called Marvin […]

Further to what I wrote last night, and to what you have written in several places recently, about information being about what, where, who, which, etc. and knowledge being about how and why: I can see how computers (or robots) can come up with information, but I can’t imagine how they could ever be knowledgeable. I suppose a subtle programmer might get a machine to appear knowledgeable, but only by programming in his own knowledge and falsely attributing it to the machine. Computers might be able to produce text books, but I don’t think they could ever write convincing fiction because they don’t really “know” anything. They don’t have any understanding. Hence no sense of humour either, which makes them unsatisfactory companions in the long term.



Disclaimer

The author of this blog used to be an employee of Nortel. Even when he worked for Nortel the views expressed in the blog did not represent the views of Nortel. Now that he has left, the chances are even smaller that his views match those of Nortel.