Cantor's Diagonalization Argument

    Date: 04/16/06 (Algorithms)    Keywords: no keywords

    I never really came to a sound understanding of Cantor's diagonalization argument in terms of its actually showing that there are uncountably many total functions from N to N. While I intuitively accept that there are uncountably many real numbers because of irrational numbers like the square root of 2, I was under the impression that Cantor's proof does not rely on that assumption.

    Wikipedia's version of the proof is entirely different from the one in the book I was taught from, and I'm not sure about some of the steps it takes. So, to ask about the version it presents: how is step 3 justified? That is, how can we assume that every number in the interval [0,1] can be represented by a decimal expansion? That statement does not seem analogous to the assumption that the numbers are countably infinite (step 1). It seems to me that if you assume it, you are assuming you can create an infinite, unknown sequence of digits in the form of a decimal expansion, which amounts to assuming the existence of an irrational number.
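    For the functions-from-N-to-N version mentioned above, the diagonal step needs no decimal expansions at all, which may be why it felt different from the real-number proof. A minimal sketch (names like `diagonal` and the sample list `fs` are illustrative, not from the post):

```python
# Given any purported enumeration f_0, f_1, f_2, ... of total functions
# N -> N, the diagonal function g(n) = f_n(n) + 1 disagrees with every
# f_k at input k, so no enumeration can list all such functions.

def diagonal(enumeration):
    """Return a function that differs from enumeration[k] at input k."""
    def g(n):
        return enumeration[n](n) + 1
    return g

# A small finite stand-in for an enumeration (the real argument quantifies
# over an infinite list, but the escape at the diagonal works the same way):
fs = [lambda n: 0, lambda n: n, lambda n: n * n]
g = diagonal(fs)
for k, f in enumerate(fs):
    assert g(k) != f(k)  # g escapes the list at position k
```

    The point is that `g` is itself a total function N → N, yet it cannot equal any `f_k`, because it was built to differ from `f_k` on input `k`.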

    http://en.wikipedia.org/wiki/Cantor%27s_diagonalization_argument
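    As for step 3, the decimal-expansion diagonal can also be sketched concretely. This is my own illustrative framing, not the Wikipedia article's code: treat the listed reals as rows of digits and change the k-th digit of the k-th row. Mapping every diagonal digit to 5 (or to 4 when it already is 5) sidesteps the 0.4999... = 0.5 ambiguity, since the constructed number uses only the digits 4 and 5:

```python
# table[k] is the digit sequence of the k-th listed real in [0,1]
# (rows = listed reals, columns = decimal places). The result differs
# from row k at decimal place k, so it is not any of the listed reals.

def diagonal_digits(table):
    """Return a digit sequence differing from table[k] at place k."""
    return [5 if row[k] != 5 else 4 for k, row in enumerate(table)]

rows = [
    [1, 4, 1, 5],  # e.g. 0.1415...
    [7, 1, 8, 2],  # e.g. 0.7182...
    [0, 0, 0, 0],  # e.g. 0.0000...
    [9, 9, 9, 9],  # e.g. 0.9999...
]
d = diagonal_digits(rows)
for k, row in enumerate(rows):
    assert d[k] != row[k]  # differs from the k-th real at the k-th place
```

    Note that this only *uses* the fact that every real in [0,1] has a decimal expansion; that fact is a standard property of the reals (every such number is the limit of its truncated expansions), not an extra assumption smuggled in alongside countability.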

    Source: http://community.livejournal.com/algorithms/75306.html
