@AVincentInSpace @vathpela Unicode currently only reserves code points 0..0x10FFFF (https://en.wikipedia.org/wiki/Unicode_block), so all existing CPs fit in 21 bits.
But @djl's point was that since UTF-8 came along we don't have to spend 32 bits on the most common code points; the ASCII characters in this chat take only 8 bits each.
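To make both points concrete, here's a quick Python check (example characters chosen for illustration) that all code points fit in 21 bits and that UTF-8 spends between 1 and 4 bytes per code point:

```python
# All Unicode code points (0..0x10FFFF) fit in 21 bits:
assert 0x10FFFF < 2**21

# How many bytes UTF-8 uses per code point, by example:
for ch in ("A", "é", "あ", "😀"):
    encoded = ch.encode("utf-8")
    print(f"U+{ord(ch):06X} {ch!r}: {len(encoded)} byte(s) in UTF-8")
# ASCII takes 1 byte, Latin-1 accents 2, most CJK 3, emoji 4.
```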
@tek @AVincentInSpace @vathpela
Another issue here is that one of the worst sins in programming is _premature optimization_.
As someone whose serious programming experience was all before 1990, my intuitions are way off for modern processors.
I'm dealing with 500 MB of Japanese text on disk, and reading it into Python and searching it is zippy quick on a PC.
So, IMHO, the world would work fine if the folks at Unicode defined a 32-bit fixed-width encoding, and we just used that.
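For what it's worth, a 32-bit fixed-width encoding does exist: UTF-32. A quick sketch of the size tradeoff for Japanese text, using Python's standard codecs (the sample string is my own illustration):

```python
# Compare storage for a short Japanese string under UTF-8 vs UTF-32.
# UTF-32 is the existing 32-bit fixed-width Unicode encoding form.
text = "日本語のテキスト"
utf8 = text.encode("utf-8")
utf32 = text.encode("utf-32-le")  # little-endian, no BOM

print(len(text), "code points")
print(len(utf8), "bytes as UTF-8")    # 3 bytes per code point here
print(len(utf32), "bytes as UTF-32")  # always 4 bytes per code point
```

For CJK-heavy text the fixed-width form costs only about a third more space than UTF-8, which is one reason the "just use 32 bits" position isn't crazy for this workload.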