Software Musings

Progress?

Posted on: 22/07/2006

There is a feeling around that computers are much more powerful these days than they were 20 years ago and can handle much larger problems. Indeed, I was recently mocked at a meeting with a couple of academics from a computer science department of a prestigious university for rejecting that hypothesis. I didn’t have time to defend myself properly at the meeting so I’m taking this opportunity to lay out my argument.

My first premise is that there are no really interesting problems for which we have solutions calculable in polynomial time. Certainly calculating my payslip and my company’s accounts are polynomial-time problems, but the really interesting problems are at least NP-hard. So, let’s take the best case for my critics, to give them as much ammunition as possible: let’s assume a problem whose solution time or memory requirement is O(2^N), where N is the number of components in the problem.
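To see what “best case” means here, a throwaway bit of Python (purely illustrative, not tied to any particular problem) tabulates how 2^N compares with N!:

```python
from math import factorial

# 2**N already explodes as N grows; N! explodes far faster -- which is why
# O(2**N) is the kindest super-polynomial bound to grant the critics.
for n in (10, 20, 30, 40):
    print(f"N={n}: 2**N = {2**n:,}   N! = {factorial(n):.2e}")
# N=40: 2**N = 1,099,511,627,776   N! = 8.16e+47
```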

Now we really ought to do some historical digging, but I’m going to assume that, in the last 20 years, the clock speed of the average processor on an average desk has risen from 10 MHz to 1 GHz. Multi-core processing doesn’t really affect this calculation because the speed-up it offers is sub-linear in the number of processors. Memory has undergone a bigger transformation in the same time: say from 64 kbytes to 1 Gbyte (even though Bill Gates is said to have claimed that 640 kbytes was enough for anyone).

If we assume that my 20-year-old computer can solve one of my O(2^N) problems with, say, N = 10 components, then the size of problem that can be tackled today depends on whether it is processor- or memory-bound. A quick calculation shews that, if the problem is processor-bound, the hundred-fold increase in clock speed buys only log2(100) ≈ 6.6 extra components, so the new, improved, high-speed computer can handle a problem with 1.664N components: a massive increase of 66% brought about by 20 years of stupendous improvement!
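To make the arithmetic explicit, here is a back-of-the-envelope sketch in Python; the starting size of 10 components and the 10 MHz to 1 GHz clock ratio are simply the illustrative figures assumed above.

```python
from math import log2

# Processor-bound case: running time grows as 2**N, so a machine that is
# `speedup` times faster can afford 2**n_new = speedup * 2**n_old,
# i.e. n_new = n_old + log2(speedup).
speedup = 1e9 / 10e6        # 1 GHz / 10 MHz = 100
extra = log2(speedup)       # ~6.64 extra components

n_old = 10                  # illustrative 1986 problem size
n_new = n_old + extra
print(f"{n_old} -> {n_new:.2f} components ({n_new / n_old:.3f}x)")
# 10 -> 16.64 components (1.664x)
```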

If the problem is memory-bound (unlikely for most of the NP-hard problems I struggle with), then we get a better improvement, since the ratio of memory now to memory then is so much larger: 1 Gbyte over 64 kbytes is a factor of 2^14, which buys 14 extra components. We can now handle a problem with 2.4N components: we’ve more than doubled the size of the problem we can handle!
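And the memory-bound case, under the same assumptions (64 kbytes then, 1 Gbyte now):

```python
from math import log2

# Memory-bound case: memory requirement grows as 2**N.
ratio = 2**30 / 2**16       # 1 Gbyte / 64 kbytes = 16,384
extra = log2(ratio)         # exactly 14 extra components

n_old = 10                  # same illustrative starting size
n_new = n_old + extra
print(f"{n_old} -> {n_new:.0f} components ({n_new / n_old:.1f}x)")
# 10 -> 24 components (2.4x)
```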

So, given the weakest time or memory bound (O(2^N) rather than, say, O(N!)), we can now handle problems with about twice as many components as we could 20 years ago. Not much progress given all of the hype and excitement, is it?

The problem, of course, is that von Neumann would be totally at home with any of today’s digital computers: the architecture is effectively unchanged since his pioneering time. In fact it is difficult to see how the architecture of the computer on which I’m typing this blog entry (a Mac mini) differs greatly from that of Babbage’s analytical engine. The implementation is greatly different (using electricity rather than mechanics, for example) but the underlying architecture is very similar.

