Building an S3-compatible server for local filesystems. Point it at a directory, get an S3 endpoint.

Files stay where they are. Works with rclone, AWS CLI, any S3 SDK. When the object store knows every file's content hash, duplicates are just a query.

Started this to deduplicate photos on my NAS. Then realized the S3 API unlocks more: backup tools, sync workflows, dev tests.

#selfhosted #selfhosting #homelab #rustlang #opensource #S3

@deepjoy there's the rclone serve s3 command. Maybe that does what you need already?

@britter Good shout. rclone serve s3 covers similar ground. Two key differences for my use case:

1. rclone keeps object metadata in memory (gone on restart) and buffers multipart upload parts in RAM. Shoebox uses per-bucket SQLite for persistent metadata and streams parts to disk.
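To make the persistence point concrete, here's a minimal sketch of why SQLite-backed metadata survives a restart. The table layout is hypothetical, just for illustration, not Shoebox's actual schema:

```python
import os
import sqlite3
import tempfile

# Hypothetical per-bucket metadata table (key, size, etag) - illustrative only.
db_path = os.path.join(tempfile.mkdtemp(), "bucket.sqlite")

con = sqlite3.connect(db_path)
con.execute("CREATE TABLE objects (key TEXT PRIMARY KEY, size INTEGER, etag TEXT)")
con.execute("INSERT INTO objects VALUES ('photos/cat.jpg', 1024, 'abc123')")
con.commit()
con.close()  # simulate the server shutting down

# Reopen after the "restart": the metadata is still on disk.
con = sqlite3.connect(db_path)
row = con.execute(
    "SELECT size, etag FROM objects WHERE key = 'photos/cat.jpg'"
).fetchone()
print(row)  # (1024, 'abc123')
```

With in-memory metadata, that second query would come back empty after a restart; with SQLite it doesn't.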

2. The bigger divergence is where it's headed: content-hash indexing across drives. When your object store knows the hash of every file, duplicates are just a query - not a weekend project.
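A sketch of the "duplicates are just a query" idea, assuming a hypothetical index table mapping object keys to content hashes (not Shoebox's real schema):

```python
import sqlite3

# Hypothetical content-hash index - keys from different drives, one hash per file.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE objects (key TEXT PRIMARY KEY, content_hash TEXT)")
con.executemany(
    "INSERT INTO objects VALUES (?, ?)",
    [
        ("nas/photos/IMG_001.jpg", "sha256:aaa"),
        ("nas/backup/IMG_001.jpg", "sha256:aaa"),  # same bytes, different path
        ("nas/photos/IMG_002.jpg", "sha256:bbb"),
    ],
)

# Duplicates = hashes that appear under more than one key.
dupes = con.execute(
    "SELECT content_hash, COUNT(*) FROM objects "
    "GROUP BY content_hash HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [('sha256:aaa', 2)]
```

One GROUP BY instead of a weekend of hashing and comparing by hand.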