This is the idea of compressibility. Fix an algorithm (a decompressor); the compressibility of a string of data, relative to that algorithm, is the length of the shortest input that produces that string as output. Incompressible (i.e. random) strings are those for which no input shorter than the string itself produces it.
1/?
The thing is, you can always change your algorithm so that any one specific output is very compressible (e.g. something akin to "if input == 0, return 7568301222338375"). But you can never do this for *every* output at the same time, because you simply run out of short input strings.
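A minimal sketch of that trick, using a hypothetical `rigged_decompress` function (the name and the echo-everything-else behaviour are illustrative assumptions, not a real compression scheme):

```python
# Toy "rigged" decompressor: hardwires one favourite output so the
# empty input decompresses to it, and otherwise just echoes its input.
# The string "7568301222338375" now "compresses" to 0 bytes -- but only
# that one string benefits; every other output still needs an input
# exactly as long as itself under this scheme.

def rigged_decompress(data: bytes) -> bytes:
    if data == b"":
        return b"7568301222338375"
    return data

print(rigged_decompress(b""))       # the favourite string, for free
print(rigged_decompress(b"abc"))    # everything else passes through
```

The point is that the special case buys compression for exactly one string; rigging it for all strings at once is impossible.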
Thus you can prove that, for any fixed algorithm, the vast majority of strings are incompressible/random: there are only 2^n - 1 inputs shorter than n bits, which is not enough to cover all 2^n strings of length n.
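The counting argument above can be checked directly; this is just the pigeonhole bound, with a helper name (`count_short_inputs`) made up for illustration:

```python
# Pigeonhole sketch: a fixed decompressor maps each input string to
# at most one output, so the inputs strictly shorter than n bits
# (there are 2**n - 1 of them) can account for fewer than 2**n of
# the 2**n strings of length n. At least one length-n string is
# incompressible -- and tightening the bound shows most are.

def count_short_inputs(n: int) -> int:
    # Number of binary strings of length 0, 1, ..., n-1:
    # sum of 2**k for k in 0..n-1, which equals 2**n - 1.
    return sum(2**k for k in range(n))

for n in (4, 8, 16):
    shorter = count_short_inputs(n)
    total = 2**n
    print(f"n={n}: {shorter} short inputs vs {total} strings")
```

However cleverly the algorithm is chosen, the short inputs always come up one short.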
The really interesting thing is that this formal concept of randomness (Kolmogorov complexity, dating only to the 1960s) is remarkably new, despite the informal concept of randomness/chaos/chance being positively ancient.