compressed strings and errors.
Date: 08/28/05
(Algorithms) Keywords: software
I'm writing up something for the all_complexity community, but I'm having trouble with my argument. It goes like this:
Explain why software code is brittle, in the sense that small changes in it break everything, while genetic codes accrete changes all the time and a written text can make do with plenty of spelling errors.
I first appeal to Shannon's work on information theory: natural language, and possibly the genetic code, carries a lot of redundancy, which lets the message persist through errors. A sanely written piece of computer code, on the other hand, is consistent and as small as practically possible.
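To see the redundancy gap in numbers, here's a rough sketch of the kind of measurement I have in mind (the sample text and the exact figures are my own illustration, nothing rigorous): the zeroth-order byte entropy of English prose sits well below the 8-bit maximum, while a compressor's output pushes up toward it.

    import math
    import zlib
    from collections import Counter

    def byte_entropy(data: bytes) -> float:
        # Zeroth-order empirical entropy in bits per byte: 8.00 means the
        # byte frequencies look uniform; lower means redundancy is left
        # at the single-byte level.
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

    # Hypothetical sample; any longish English passage would do.
    english = (b"It was the best of times, it was the worst of times, "
               b"it was the age of wisdom, it was the age of foolishness. ") * 40
    packed = zlib.compress(english, 9)

    print(f"plain text: {byte_entropy(english):.2f} bits/byte")
    print(f"compressed: {byte_entropy(packed):.2f} bits/byte (8.00 is the max)")
    print(f"size: {len(english)} -> {len(packed)} bytes")

For a short compressed sample the estimate is capped at log2 of its length, so the exact figure means little; the point is just the direction of the gap.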
I further argue that capturing the desired behaviour of the program in as little code as possible is itself a form of compression: an attempt to shrink the program down toward the Kolmogorov complexity of the desired output.
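The toy case that usually gets wheeled out here (my own illustration): a highly patterned string has tiny Kolmogorov complexity, so a one-line program can stand in for all 20,000 characters of its output.

    # The program below is a couple dozen bytes; its output is 20,000
    # characters. The program is, in effect, a compressed encoding of it.
    print("ab" * 10_000)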
Here's the fuzzy part of my argument:
I claim, somewhat hand-wavingly, that since compression exploits and removes any pattern in the text, changing one byte of the compressed text will have a very large effect on the decompressed plaintext. I know this avalanche behaviour is a hallmark of good encryption, but does it hold for compressed text as well?
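It's easy to poke at this empirically. A minimal sketch in Python, using raw DEFLATE (wbits=-15, so no header or Adler-32 checksum masks the damage); the text and flip position are arbitrary choices of mine:

    import zlib

    text = b"the quick brown fox jumps over the lazy dog " * 50

    # Raw DEFLATE stream: no zlib header, no checksum.
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)
    compressed = comp.compress(text) + comp.flush()

    corrupted = bytearray(compressed)
    corrupted[len(corrupted) // 2] ^= 0x01  # flip one bit in the middle

    try:
        out = zlib.decompressobj(-15).decompress(bytes(corrupted))
        changed = sum(a != b for a, b in zip(text, out)) + abs(len(text) - len(out))
        print(f"recovered {len(out)} of {len(text)} bytes; {changed} differ")
    except zlib.error as exc:
        print(f"stream no longer decodes at all: {exc}")

Typically the flip desynchronizes the Huffman decoding or corrupts data that later back-references copy from, so the damage spreads far beyond one byte; sometimes the rest of the stream simply stops decoding at all.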
I could argue that changing n bytes in the compressed file must, on average, affect more than n bytes in the plaintext, since the plaintext has more bytes and all of them need to be "caused" by the compressed bytes. (The sketch below tries to measure that average.)
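Extending the experiment above, one can corrupt each compressed byte in turn and average the damage; again, the setup is my own and the numbers will vary by input and compressor:

    import zlib

    text = b"the quick brown fox jumps over the lazy dog " * 50
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)
    compressed = comp.compress(text) + comp.flush()

    total = 0
    for i in range(len(compressed)):
        corrupted = bytearray(compressed)
        corrupted[i] ^= 0xFF  # corrupt one byte
        try:
            out = zlib.decompressobj(-15).decompress(bytes(corrupted))
            total += sum(a != b for a, b in zip(text, out)) + abs(len(text) - len(out))
        except zlib.error:
            total += len(text)  # count an undecodable stream as total loss

    print(f"{len(compressed)} compressed bytes expand to {len(text)} plaintext bytes")
    print(f"avg plaintext bytes changed per corrupted byte: {total / len(compressed):.1f}")

In quick runs of this kind the average damage per corrupted byte tends to come out far larger than one byte, though the exact figure depends heavily on the input.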
Is this a sound argument? I'd rather base it on real informatics than on my own top-of-my-head musings. :-)
Source: http://community.livejournal.com/algorithms/64457.html