DJ Majumdar

@deepjoy

Week 2 update on the local S3 server.

Filesystem scanner landed. Three levels:
- L1 discovers files
- L2 collects size and mtime
- L3 streams MD5 (ETags) and SHA-256 (content hash) in one pass
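The L3 pass boils down to feeding each chunk to both digests so the file is read only once. A minimal sketch (in Python for brevity — the project is Rust; `hash_one_pass` and the chunk size are illustrative choices, not Shoebox's actual code):

```python
import hashlib

def hash_one_pass(path, chunk_size=1 << 20):
    """Read the file once, feeding every chunk to both digests."""
    md5 = hashlib.md5()        # S3-style ETag for single-part objects
    sha256 = hashlib.sha256()  # content hash for dedup queries
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()
```

Two digests, one disk read — the I/O cost is the same as computing either hash alone.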

Uploads via the S3 API get full metadata immediately. Files that land on disk get indexed progressively (inotify-triggered).

Delete from disk = gone from S3.

Also: SigV4 auth, multipart uploads stream to disk, CopyObject, tagging, conditional requests, range reads.
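Range reads, for one, come down to mapping the `Range` header onto byte offsets. A sketch (Python, not the Rust code; `parse_range` is an illustrative helper covering only the single-range forms S3 clients typically send):

```python
def parse_range(header, size):
    """Parse a 'bytes=start-end' Range header into inclusive (start, end)
    offsets, or None if the range is unsatisfiable or unsupported."""
    if not header.startswith("bytes="):
        return None
    spec = header[len("bytes="):]
    if "," in spec:                 # multiple ranges: not handled here
        return None
    start_s, _, end_s = spec.partition("-")
    if start_s == "":               # suffix form: last N bytes
        n = int(end_s)
        if n == 0:
            return None
        return (max(size - n, 0), size - 1)
    start = int(start_s)
    if start >= size:
        return None                 # maps to 416 Range Not Satisfiable
    end = int(end_s) if end_s else size - 1
    return (start, min(end, size - 1))
```

The `None` cases are what become a 416 response; everything else becomes a 206 with a `Content-Range` header.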

#selfhosted #selfhosting #homelab #rustlang #opensource

@AF0AJ That's always how it goes — the first working prototype unlocks a flood of "oh wait, I could also..." Dangerous and fun in equal measure.
@AF0AJ The PoC-to-reality jump is the best part. That's when you stop asking "can this work?" and start asking "how good can I make it?" Looking forward to seeing the finished build.
@crispius Vibration pads help, but the real move might be cheap SSDs - prices keep falling and they're silent. I went with BTRFS RAID 10 mixing SSDs and platters, which lets you swap drives in gradually as budget allows. Less spinning rust = less noise, better latency, and no 0200 wake-up calls.
@multifact This is the thing nobody talks about when showing off their Docker stacks. You can self-host everything, encrypt your drives, run weekly backups - and your router is still a single point of trust for all of it. It's the one device that touches every packet.

@britter Good shout. `rclone serve s3` covers similar ground. Two key differences for my use case:

1. rclone keeps metadata in memory (gone on restart) and holds multipart parts in RAM. Shoebox uses per-bucket SQLite for persistent metadata and streams parts to disk.

2. The bigger divergence is where it's headed: content-hash indexing across drives. When your object store knows the hash of every file, duplicates are just a query - not a weekend project.
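The "just a query" idea in point 2 is easy to sketch with stdlib SQLite. The schema, column names, and rows below are invented for illustration — not Shoebox's actual layout:

```python
import sqlite3

# Hypothetical per-bucket metadata table.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE objects (
        key    TEXT PRIMARY KEY,
        size   INTEGER,
        etag   TEXT,    -- MD5, filled in by the L3 scan
        sha256 TEXT     -- content hash, ditto
    )""")
con.executemany(
    "INSERT INTO objects VALUES (?, ?, ?, ?)",
    [("a/photo.jpg", 100, "e1", "h1"),
     ("b/copy.jpg",  100, "e1", "h1"),
     ("c/other.jpg", 200, "e2", "h2")])

# Duplicates across the bucket: group by content hash.
dupes = con.execute("""
    SELECT sha256, COUNT(*) AS n, GROUP_CONCAT(key) AS keys
    FROM objects GROUP BY sha256 HAVING n > 1
""").fetchall()
```

Once every file's hash is indexed, "find my duplicate photos" is one `GROUP BY`, not a weekend of rehashing.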

Building an S3-compatible server for local filesystems. Point it at a directory, get an S3 endpoint.

Files stay where they are. Works with rclone, AWS CLI, any S3 SDK. When the object store knows every file's content hash, duplicates are just a query.

Started with deduplicating photos on my NAS. Realized the S3 API unlocks more: backup tools, sync workflows, dev tests.

#selfhosted #selfhosting #homelab #rustlang #opensource #S3

@wikiyu Honestly, "server is sleeping" pages deserve to be a standard HTTP status code. 503 Service Unavailable just doesn't capture the vibe.

Also, daily reboots are just aggressive self-healing infrastructure. You're not chaotic, you're ahead of the curve.

Building an S3-compatible server for local FS. Started with the duplicate photos problem. Realized the real value is making local files accessible via S3 API.

1st milestone: `shoebox ~/Photos` serves an S3 endpoint. `aws s3 ls` returns actual files. PutObject, GetObject, DeleteObject, ListObjectsV2 all working. SQLite metadata layer + filesystem ops with symlink safety.
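The symlink-safety part is essentially "resolve the object key under the bucket root and refuse anything that escapes." A hedged sketch in Python (the project is Rust; `resolve_safe` is an illustrative name, not the actual check):

```python
import os

def resolve_safe(root, key):
    """Resolve an object key under root, rejecting anything that
    escapes the bucket directory via '..' or a symlink."""
    root = os.path.realpath(root)
    path = os.path.realpath(os.path.join(root, key))
    if os.path.commonpath([root, path]) != root:
        raise PermissionError(f"key escapes bucket root: {key}")
    return path
```

Resolving with `realpath` before the containment check is the important bit — a plain string-prefix test on the unresolved path would let a symlink inside the bucket point anywhere.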

Next: SigV4 auth, multipart uploads, scanner.
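The core of SigV4 is a fixed HMAC-SHA256 key-derivation chain defined in the AWS Signature Version 4 spec. A Python sketch of just that step (canonical request and string-to-sign omitted; the credential values in any real request come from the client):

```python
import hashlib, hmac

def derive_signing_key(secret, date, region, service):
    """SigV4 signing key: chained HMAC-SHA256 over the credential
    scope components, per the AWS Signature Version 4 spec."""
    def h(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = h(("AWS4" + secret).encode(), date)  # date as YYYYMMDD
    k_region = h(k_date, region)
    k_service = h(k_region, service)
    return h(k_service, "aws4_request")           # 32-byte signing key
```

The server derives the same key from its stored secret and the request's credential scope, signs the string-to-sign, and compares signatures — the secret itself never travels.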

#selfhosted #selfhosting #homelab #rustlang #opensource #buildinpublic #S3

@Natanox Thanks for the heads-up! 🙏 Nothing like a surprise maintenance mode to spice up a Wednesday. This is why we test in prod... wait, no, that's exactly why we DON'T test in prod 😅

Hope they get a fix out soon!