While I agree with scrubbles about you eventually wanting public services covered, and so the initial pain is worth it in the long run, it can be done with an internal DNS server. I started this way, still use it (mostly for Gitlab CE, which needs a name), and have the SWAG+Authelia setup for public-facing stuff that I mentioned above.
Dashboards like Heimdall and Homepage do the job nicely, but if you want to give the internal DNS thing a try, this is how I set mine up internally:
1. Set up an internal DNS server.
2. Pick a domain for it. You could use a *.lan or *.local domain, but that's not a good idea (it's an RFC and unintended-consequences thing). Using a *.arpa domain is a better option, e.g. whatever.arpa. But it's your call.
3. On that server, point each service name, e.g. service.whatever.arpa., to an IP.
4. In AdGuard Home, forward the zone to the new server with an upstream rule like [/whatever.arpa/]your.new.dns.ip.
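For step 3, a minimal sketch using dnsmasq as the internal resolver (which resolver you run is your choice; the service names and IPs below are placeholders):

```conf
# /etc/dnsmasq.conf -- answer authoritatively for the internal zone
# (service names and IPs are placeholders)
address=/gitlab.whatever.arpa/192.168.1.20
address=/jellyfin.whatever.arpa/192.168.1.21

# or a wildcard for the whole zone, pointing at a single reverse proxy:
# address=/whatever.arpa/192.168.1.30
```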
My client devices all use the AGH as their DNS server. Lookups to internal addresses get forwarded to my internal DNS server and everything else gets done by AGH. This lets me browse to http://service.whatever.arpa on my network without issue.
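The forwarding in AdGuard Home is just a line in Settings → DNS settings → Upstream DNS servers; a sketch (the internal server's IP and the public upstream are placeholders, pick your own):

```conf
# AdGuard Home upstream list
# internal zone -> internal DNS server
[/whatever.arpa/]192.168.1.10
# everything else
https://dns.cloudflare.com/dns-query
```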
This is how I do it. It works internally and externally, though it's more than OP needs. :)
To add to what's been said (in case it's useful to others), it's worth looking at SWAG and Authelia to do the proxying for services visible to the Internet. I run them in Docker containers; SWAG does all the proxying, takes care of the SSL certificate and auto-renews it, and Authelia adds MFA to the services you run that support it (all browsing, MFA-aware apps, etc.).
Another thing I like about SWAG's setup is that you select which services/hostnames you want to expose, name them in the SUBDOMAINS environment variable in Docker (easy to remove one if you take a service down, for maintenance, etc), and then each has its own config file in Nginx's proxy-confs directory that does the https://name.domain -> http://IP:port redirection for that service (e.g. wordpress.subdomain.conf), assuming the traffic has met whatever MFA and geo-whitelisting stuff you have set up.
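As a sketch, the relevant bits of a linuxserver/swag compose service look roughly like this (domain, subdomain names, email, and the DNS plugin are placeholders; check the image's docs for the full variable list):

```yaml
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    cap_add:
      - NET_ADMIN           # needed for fail2ban
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - URL=example.com            # your domain (placeholder)
      - SUBDOMAINS=wordpress,git   # expose only these; delete one to take it down
      - VALIDATION=dns             # Let's Encrypt challenge type
      - DNSPLUGIN=cloudflare       # only if using DNS validation
      - EMAIL=you@example.com
    volumes:
      - ./swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
```

Each exposed service then gets its own file (e.g. wordpress.subdomain.conf) under /config/nginx/proxy-confs, and the image ships .sample files there to copy from.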
I also have Cloudflare protecting the traffic (proxying the domain's A record and the wildcard CNAME) to my public address, which adds another layer.
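The Cloudflare side is just those two records, both proxied ("orange cloud") so the origin IP stays hidden; in zone-file terms (domain and IP are placeholders):

```conf
; both records proxied through Cloudflare
example.com.    300  IN  A      203.0.113.10     ; your public IP (placeholder)
*.example.com.  300  IN  CNAME  example.com.
```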
> Nginx webserver and reverse proxy with PHP support and a built-in Certbot (Let's Encrypt) client. It also contains fail2ban for intrusion prevention.
> — linuxserver/docker-swag
That's a really open-ended question. Depends purely upon your interests and appetite for risk, etc.
Might be worth looking at, from a Docker perspective:
I have zero problem with curated or algorithmic timelines. I have a 100% problem when there isn't a chronological timeline option.
It's simple really: give me the permanent option of a chronological timeline, without the dark-pattern fuckery of having to reset it periodically, or fuck off forever.
Every time a social media site has offered, pleaded, cajoled or forced me to take a non-chronological timeline, I've refused. And if that refusal eventually becomes impossible (no option, addons no longer work, etc), I take my eyeballs elsewhere.
You're not an edge case. :)
Yeah, it makes for a nice workflow, doesn't it. It doesn't give you the "fully automated" achievement, but it's not much of a chore. :)
Have you considered something like borgbackup? It does good deduplication, so you won't have umpteen copies of unchanged files.
I use it mostly for my daily driver laptop to back up to my NAS, and the Gitlab CE container running on the NAS acts as the equivalent for its local Git repos, which are then straightforward to copy elsewhere. Though I haven't got it scripting anything like bouncing containers or DB dumps.
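If you want to try it, a minimal borgbackup session looks something like this (the repo path and source directories are placeholders; borg's docs cover the encryption modes and pruning policies properly):

```shell
# one-time: create a deduplicating, encrypted repository on the NAS
borg init --encryption=repokey ssh://nas/volume1/backups/laptop

# daily: new archive named by host and date; unchanged files are
# deduplicated against earlier archives rather than stored again
borg create --stats --compression lz4 \
    ssh://nas/volume1/backups/laptop::'{hostname}-{now:%Y-%m-%d}' \
    ~/Documents ~/Projects

# keep a rolling window of archives instead of growing forever
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://nas/volume1/backups/laptop
```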
Agreed. The lack of varied examples in documentation is my common tripping point. When I hate myself, I visit StackOverflow to find examples, and then reference those against the module's documentation.
And it's definitely become an easier process as I've read more documentation.