ultimate storage hack
Awesome idea. In base 64 to deal with all the funky characters.
It will be really nice browsing this filesystem…
Broke: file names have a max character length.
Woke: split b64-encoded data into numbered parts and add .part-1…n suffix to each file name.
Each file is a minimum of 4 KB. For this to pay off:

(base64.length / max_characters) * min_filesize < actual_file_size
$ touch empty_file
$ ls -l
total 8
-rw-rw-r-- 1 user group 0 may 14 20:13 empty_file
$ wc -c empty_file
0 empty_file

Huh?
Reminds me of a project I stumbled upon the other day that used various services like Google Drive, Dropbox, Cloudflare, and Discord for simultaneous remote storage. The goal was to take whatever service you can upload data to and store content there as a filesystem.
I only remember Discord being one of the weird ones, where they would use base512 (or higher; I couldn't find the library) to encode the data. The thing with Discord is that you're limited by characters, so the best way to store data compactly is to take advantage of whatever characters are supported.
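The general trick of packing bytes into whatever alphabet a service accepts can be sketched by treating the data as one big integer. This is my own illustration, not the actual library from that project (which I couldn't find either):

```python
def encode_base_n(data: bytes, alphabet: str) -> str:
    """Pack bytes into an arbitrary alphabet by treating them as one big integer."""
    base = len(alphabet)
    # a 0x01 guard byte keeps leading zero bytes from vanishing in the round-trip
    n = int.from_bytes(b"\x01" + data, "big")
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(alphabet[r])
    return "".join(reversed(out))

def decode_base_n(text: str, alphabet: str) -> bytes:
    base = len(alphabet)
    n = 0
    for ch in text:
        n = n * base + alphabet.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # strip the 0x01 guard byte
```

With a 512-symbol alphabet each character carries 9 bits instead of base64's 6, which is exactly why a character-limited service rewards a bigger alphabet.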

This is actually a joke compression algorithm that compresses your data by one byte by appending it to the filename. (And you can run it as many times as you want.)
Too bad I can’t remember the name.
If you have a tub full of water and take a sip, you still have a tub full of water. Therefore, only drink in small sips and you will have infinite water.
Water shortage is a scam.
Out of context, but this video putting the amount of freshwater on the planet in perspective was eye-opening for me… I've seen water availability differently since.
Reality is stranger than fiction:
Nice stuff.
I got sold on the:
“EOF does not consume less space than ‘5’”
because, even though the space taken by the filesystem is the filesystem's fault, one needs to consider the minimum information required to state where files start and end, especially when the data is split across multiple files.
I would actually have counted the file-size information as part of the file size (for both the input and the output), because a binary file can contain a string of bits that happens to match an EOF marker, which would falsely end the file. That's why the contestant didn't go checking for character == EOF, but used the function that truly tells whether the end of the file has been reached, which in turn relies on the filesystem's file-size information.
Since the input file was 3145728 bytes and the output files would have been smaller than that, I would go with 22 bits to store the file-size information (2^22 = 4194304 ≥ 3145728, while 21 bits would not be enough). This would be in favour of the contestant, as:
On the other hand, had the contestant decided to break the file between bits rather than at byte boundaries (which, from the code, I don't think they did), the file-size information would require an additional 3 bits.
Now, using this logic, if I check the result:
From the result claimed by the contestant, there were 44 extra bytes (352 bits) remaining.
+ 22 bits for the input file-size information
− 22 × 219 bits for the output file-size information, because there are 219 files
so the contestant succeeds by 352 + 22 − (22 × 219) = −4444 bits.
In other words, fails by 4444 bits.
Now, of course, the output file-size information might be representable in a smaller number of bits, but to calculate that I would need to download the files (which I am not in the mood for).
And in that case, you would require additional information to state how wide the file-size field itself is. So: 22 bits in the input.

qalc says: log(3145728 / 219, 2) = (ln(1048576) − ln(73)) / ln(2) ≈ 13.81017544

But even then, you have 352 + 5 + 22 − (5 + (14 × 219)) = −2692 bits for the best-case scenario, in which all output file sizes manage to fit within 14 bits of file-size information (the extra 5 bits stating the width of the size field).
More realistically, it would be something around 352 + 5 + 22 − ((5 + 14) × 219) = −3782, because you will need the 5 bits for every file separately, with the 14 in this case being a value that changes per file, possibly giving a smaller number.
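The accounting above is easy to check mechanically. This just replays the arithmetic (352 leftover bits, 22-bit sizes, 219 files), not an independent analysis:

```python
from math import ceil, log2

leftover = 352        # the 44 extra bytes the contestant claimed, in bits
files = 219
input_size = 3145728  # bytes
size_bits = 22        # enough bits for the input size
width_bits = 5        # extra field stating how wide each size field is

assert ceil(log2(input_size)) == size_bits

# fixed 22-bit size field for every output file: fails by 4444 bits
assert leftover + size_bits - size_bits * files == -4444

# best case: every output size fits in 14 bits (log2(3145728/219) ~= 13.81)
per_file = ceil(log2(input_size / files))
assert per_file == 14
assert leftover + width_bits + size_bits - (width_bits + per_file * files) == -2692

# more realistic: the 5-bit width field is repeated for every file
assert leftover + width_bits + size_bits - (width_bits + per_file) * files == -3782
```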
If instead going with the naive 8-bit EOF that the offerer desired: well, using 2 consecutive characters instead of a single one seems doable, as long as you are able to find a 2-character combination that doesn't occur in the data.
After a little Google search, I seem to think that in a 3 MiB file there would be either 47 or 383 (depending on which of my formulae is correct) expected occurrences of any given 2-character combination. Well, you'd need to find the right combination.
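My guess is that those two formulae correspond to byte-aligned versus bit-aligned window counting. Assuming uniformly random data (my assumption, not stated above), the expected counts land within one of those numbers:

```python
SIZE = 3 * 1024 * 1024  # 3 MiB input, in bytes

# byte-aligned: a given 2-byte sequence can start at any of SIZE - 1 offsets,
# and matches a uniformly random pair with probability 1/65536
byte_aligned = (SIZE - 1) / 65536       # just under 48

# bit-aligned: 16-bit windows can start at any of SIZE*8 - 15 bit offsets
bit_aligned = (SIZE * 8 - 15) / 65536   # just under 384

print(round(byte_aligned), round(bit_aligned))  # prints "48 384"
```

The off-by-one against 47 / 383 comes down to how you count the windows at the edges.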
But of course, that’s not exactly compression for a binary file, as I said before, as an EOF is not good enough.
per page
I mean, yes. Obviously.
If you had 1000 bytes of text on 1 page before, you now have 1 byte per page on 1000 pages afterwards.
Have a macro that decreases all font sizes on opening and then increases them all again before closing.
Follow me irl for more compression techniques.