This is one of the most profound and humbling results in the history of mathematics. It reveals a fundamental limit to human knowledge and machine capability.
To understand why almost all numbers are uncomputable (and thus effectively unknowable), we have to combine two major concepts from the 19th and 20th centuries: Georg Cantor’s theory of infinite sets and Alan Turing’s theory of computation.
Here is the detailed explanation of the proof.
Part 1: Countable vs. Uncountable Infinity (Cantor)
In the late 1800s, the mathematician Georg Cantor proved that not all infinities are the same size. He distinguished between two types:
- Countable Infinity: A set is "countable" if you can list its items in a sequence (1st, 2nd, 3rd...). The set of natural numbers ($1, 2, 3...$) is the standard for countable infinity. Surprisingly, the set of all integers and even all rational numbers (fractions) are also countable. You can design a system to list them all without missing any.
- Uncountable Infinity: A set is "uncountable" if it is so large that no matter how you try to list the items, you will always leave an infinite number of them out.
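The countability of the rationals can be demonstrated directly. Here is a toy Python sketch (function name is my own) that lists the positive rationals by walking the numerator/denominator grid diagonal by diagonal, skipping duplicates like $2/4$, so every fraction appears exactly once after finitely many steps:

```python
from math import gcd

def list_rationals(limit):
    """List the first `limit` positive rationals as (numerator, denominator)
    pairs, sweeping diagonals of constant numerator + denominator."""
    out = []
    s = 2  # current diagonal: all pairs (p, q) with p + q == s
    while len(out) < limit:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:  # skip unreduced duplicates like 2/4
                out.append((p, q))
                if len(out) == limit:
                    break
        s += 1
    return out

print(list_rationals(10))
# → [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1), (1, 5)]
```

Every positive rational $p/q$ sits on diagonal $p + q$, so it is reached in finitely many steps: that is exactly what "countable" means.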
The Continuum Argument: Cantor proved that the set of Real Numbers (the continuum, including all decimals like $\pi$, $\sqrt{2}$, $0.123...$) is uncountable.
He did this using his famous Diagonal Argument. If you try to list every real number between 0 and 1, you can construct a new number that isn't on your list by changing the first digit of the first number, the second digit of the second number, and so on. Since you can always create a number that wasn't on the list, the list can never be complete.
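The diagonal construction is short enough to run. This is a finite toy sketch (Cantor's argument applies to infinite lists) that, given the first $n$ digits of $n$ listed reals, builds a number differing from the $i$-th one in its $i$-th digit; it only ever picks the digits 4 and 5, which sidesteps the $0.0999\ldots = 0.1000\ldots$ ambiguity:

```python
def diagonal(digit_rows):
    """Given n strings of decimal digits (one per 'listed' real),
    return a new decimal that differs from the i-th row at position i."""
    new_digits = []
    for i, row in enumerate(digit_rows):
        d = row[i]
        new_digits.append('5' if d != '5' else '4')  # anything but the diagonal digit
    return '0.' + ''.join(new_digits)

rows = ["1415926", "7182818", "4142135", "0000000",
        "9999999", "1234567", "7654321"]
print(diagonal(rows))  # → 0.5555555, which differs from every row on the diagonal
```

No matter what list you start with, the output disagrees with the $i$-th entry in its $i$-th digit, so it cannot appear anywhere on the list.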
Conclusion 1: The set of Real Numbers is uncountably infinite. It is a "larger" infinity than the integers.
Part 2: What is a Computable Number? (Turing)
In 1936, Alan Turing defined computation using the Turing Machine—an abstract model of a computer that reads and writes symbols on a strip of tape according to a set of rules.
A real number is considered Computable if there exists a finite computer program (or Turing Machine) that can calculate that number's digits to any desired precision.
- Rational numbers (like $0.5$ or $1/3$) are computable.
- Algebraic numbers (like $\sqrt{2}$) are computable.
- Famous transcendental numbers (like $\pi$ and $e$) are computable: we have algorithms that can output the digits of $\pi$ forever.
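"Digits to any desired precision" is a concrete requirement. As a minimal sketch (the function name is illustrative), here is a finite program witnessing that $\sqrt{2}$ is computable, using Python's exact integer square root so there is no floating-point error at any precision:

```python
from math import isqrt

def sqrt2_digits(n):
    """Return the first n digits of sqrt(2) after the decimal point.
    isqrt(2 * 10**(2n)) equals floor(sqrt(2) * 10**n) exactly."""
    return str(isqrt(2 * 10 ** (2 * n)))[1:]  # drop the leading "1"

print("1." + sqrt2_digits(30))  # 30 digits; ask for a million and it still works
```

The same finite program serves every precision $n$: that single fact is what makes the number computable.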
Crucially, every computer program is essentially a finite string of characters (code). Every piece of software, every algorithm, can be converted into a single, massive integer (binary code is just a number).
Because every computer program corresponds to an integer, the set of all possible computer programs is Countable. You can list them: Program 1, Program 2, Program 3...
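This correspondence is easy to make concrete. A sketch (helper names are my own) that maps any program's source text to a unique integer and back, simply by reading its bytes as one big number:

```python
def program_to_int(source: str) -> int:
    """Read the program's UTF-8 bytes as a single big-endian integer.
    A leading 0x01 byte ensures leading zero bytes survive the round trip."""
    return int.from_bytes(b"\x01" + source.encode("utf-8"), "big")

def int_to_program(n: int) -> str:
    """Invert program_to_int: rebuild the bytes and drop the 0x01 marker."""
    data = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return data[1:].decode("utf-8")

code = "print('hello')"
n = program_to_int(code)
print(n)                           # one (possibly huge) integer per program
print(int_to_program(n) == code)   # → True: the mapping loses nothing
```

Since distinct programs map to distinct integers, listing the integers in order lists every possible program, which is exactly what countability requires.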
Conclusion 2: The set of Computable Numbers is effectively the same size as the set of integers. It is a countably infinite set.
Part 3: The Proof (Comparing the Sizes)
Now we simply compare the size of the two sets we just defined.
- The Box of Programs: The set of all numbers we can compute is Countable. (It is small, relatively speaking).
- The Universe of Numbers: The set of all Real Numbers is Uncountable. (It is massive).
In set theory, if you subtract a Countable set from an Uncountable set, the remainder is still Uncountable. The "larger" infinity completely swallows the "smaller" one.
Think of it like probability: if you threw a dart at a number line stretching from 0 to 1, what are the odds you hit a computable number? Because the computable numbers form a countable set of points, they can be covered by intervals whose combined length is as small as you like, so the total length (or "measure") of all computable numbers combined is zero.
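The covering trick behind "measure zero" can be checked with exact arithmetic: give the $i$-th point in any countable list an interval of width $\varepsilon/2^{i+1}$, and the widths sum to less than $\varepsilon$ no matter how many points you cover. A small sketch (names are my own) using Python's `Fraction` for exactness:

```python
from fractions import Fraction

def cover_length(epsilon, terms):
    """Total length of intervals covering `terms` points, where the i-th
    point gets an interval of width epsilon / 2**(i+1). Exactly
    epsilon * (1 - 2**-terms), always strictly less than epsilon."""
    epsilon = Fraction(epsilon)
    return sum(epsilon / 2 ** (i + 1) for i in range(terms))

total = cover_length(Fraction(1, 1000), 50)
print(total < Fraction(1, 1000))  # → True: 50 points covered, total still under 1/1000
```

Since $\varepsilon$ can be made arbitrarily small, the covered set has total length zero, even though it contains infinitely many points.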
The Result: The probability of hitting a computable number is 0%. The probability of hitting an uncomputable number is 100%.
Therefore, "almost all" numbers (in the mathematical sense of "measure theory") are uncomputable.
Part 4: What are these Uncomputable Numbers?
This is the disturbing part. An uncomputable number is a number with an infinite string of digits that has no pattern, no algorithm, and no formula that can generate it.
Because they are uncomputable:
1. They cannot be written down. Writing out a number digit by digit requires an algorithm, and almost all of these numbers have no finite description of any kind, because there are only countably many finite descriptions to go around.
2. They cannot be predicted. If you knew the first trillion digits, you would have zero clue what the trillion-and-first digit is.
3. They are Paradoxical. We know they exist. We know they make up "almost all" of the number line. Yet, we can hardly name a single specific one.
Chaitin’s Constant ($\Omega$): One of the few examples of a "defined" uncomputable number is Gregory Chaitin’s constant, $\Omega$ (Omega). It represents the probability that a randomly constructed computer program will halt (finish running). While we can define $\Omega$ in English, we cannot compute its digits. If we could, we would solve the "Halting Problem," which Turing proved is impossible. For one particular choice of universal machine, researchers have computed the first 64 bits of $\Omega$, but any fixed system of axioms can determine only finitely many of its bits; beyond that point, the digits are provably unknowable.
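One can at least approximate $\Omega$ from below by "dovetailing": run every program for more and more steps, and whenever a program of length $\ell$ halts, add $2^{-\ell}$ to a running total. The sketch below is a deliberately tiny toy, not the real $\Omega$: the three hard-coded "programs" and their runners are invented for illustration, whereas the real constant ranges over all self-delimiting programs of a universal machine.

```python
def omega_lower_bound(programs, max_steps):
    """Approximate a toy Omega from below. `programs` maps a bit-string
    code to a runner: runner(step) is True once the program has halted
    by that step. Each halting program of length L contributes 2**-L."""
    omega = 0.0
    for code, runner in programs.items():
        for step in range(max_steps):
            if runner(step):
                omega += 2.0 ** (-len(code))
                break  # halted: contribution is locked in
    return omega

# Toy machine with a prefix-free program set:
# "0" halts immediately, "10" halts at step 3, "11" never halts.
toy = {
    "0":  lambda step: True,
    "10": lambda step: step >= 3,
    "11": lambda step: False,
}
print(omega_lower_bound(toy, 1000))  # → 0.75, i.e. 2**-1 + 2**-2
```

The catch, and the heart of the uncomputability, is that this estimate only ever creeps upward: at no finite stage can you tell whether a still-running program (like "11" here) will eventually halt and push the sum higher, so you never know how close your approximation is.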
Summary: The Limits of Knowledge
The proof leads to a staggering philosophical realization:
Mathematics and Computer Science are islands of order in a vast ocean of chaos. The numbers we use, know, and love ($\pi, 1, 42, \sqrt{2}$) are the rare exceptions. The vast majority of the number line consists of numbers that are fundamentally random, structureless, and forever beyond the reach of any human mind or supercomputer.