My dear #fediverse, does anyone have a nice #ansible repo to set up #EKS on AWS? If I can avoid starting from scratch 😅
I wrote up a quick how-to for running data backups inside a Kubernetes cluster using CronJobs. I wish it were as simple as a crontab plus a bash script like in the olden days, but it works well enough. It is nice how declarative and stateless it is, though!
https://nbailey.ca/post/backup-k8s-cronjob/
#kubernetes #backup #backups #cronjob #postgres #postgresql #kafka #aws #s3 #eks #bash #terraform #sysadmin #linux #blog #blogpost
A key part of operating any safe and reliable system is ensuring that deleted or lost data can be recovered promptly and consistently. One piece of that is maintaining automatic backups that are both recoverable and verifiable. This approach is a quick and easy way to accomplish that goal, using pieces of infrastructure that are already common in production environments. There are countless ways to perform a backup; this is simply one of the "easiest" given that these ingredients are available.
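As a rough illustration of the pattern described above (not the exact manifest from the linked post), a backup CronJob might look like the sketch below. The names, schedule, bucket, and image are all assumptions; in particular, the image is a hypothetical one assumed to contain both `pg_dump` and the AWS CLI:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup            # hypothetical name
spec:
  schedule: "0 3 * * *"            # daily at 03:00
  concurrencyPolicy: Forbid        # don't let a slow run overlap the next one
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: backup
            # assumed custom image bundling pg_dump + aws-cli
            image: registry.example.com/backup-tools:latest
            command:
            - sh
            - -c
            - |
              pg_dump "$DATABASE_URL" | gzip > /tmp/dump.sql.gz
              aws s3 cp /tmp/dump.sql.gz \
                "s3://example-backups/postgres/$(date +%F).sql.gz"
            env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: backup-db-credentials   # hypothetical Secret
                  key: url
```

`concurrencyPolicy: Forbid` and `restartPolicy: Never` are the usual choices for backup jobs: a half-finished dump should fail loudly rather than silently overlap or retry in place.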
So, like, #AWS #EKS... the kernel defaults for the EKS nodes are, by and large, consistent with 10 Mbps half-duplex networking on a workstation. Judging by how many hoops you need to jump through to manage sysctls on EKS and #K8S in general, I can only see one of two possible explanations:
1) There's some magic kernel module installed for EKS or K8S that obviates the need to tune the kernel for server workloads.
2) We stopped caring about synchronizing the network stack to the network it's connected to and the use of the server because it's cloud and/or K8S and wasting resources is just what we do for the convenience of buying Bezos a new spaceship or super yacht.
I see a ton of network-implicated slowdowns in pipelines on EKS. There's a fuckton of dropped packets, retransmits, and context switches. We can tell the kernel to spend a bit more time per cycle on processing network packets. We can increase the default and max buffer sizes for TCP and UDP sockets which are transmitting MASSIVE amounts of data in "15GBps" bursts. We can adjust the TCP timeouts to match the AWS network and prevent half-open connections. We can increase the kernel backlog depth for busy services. Maybe, I mean, **I** can. It's a twisted, gnarly, and wholly undocumented nightmare for K8S and EKS, mostly involving logging into the EKS nodes and manually setting the sysctls one at a time. Does anyone have a better way? I've yet to read anything that demonstrates how to do this in some sane manner. FWIW, it was one `file` and one `exec` resource in Puppet to adjust an entire fleet consistently.
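One fleet-wide approach I've seen used (a sketch, not a recommendation from the post above): run a privileged DaemonSet that writes the sysctls on every node. With `hostNetwork: true`, the `net.*` knobs land in the node's network namespace rather than the pod's. The specific values below are illustrative assumptions, not tuned recommendations; Kubernetes also supports per-pod safe sysctls via `securityContext.sysctls` (unsafe ones require the kubelet's `--allowed-unsafe-sysctls` flag), and on EKS you can alternatively bake sysctls into a managed node group's launch template user data:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-sysctl-tuner          # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-sysctl-tuner
  template:
    metadata:
      labels:
        app: node-sysctl-tuner
    spec:
      hostNetwork: true            # net.* sysctls apply to the node's netns
      containers:
      - name: tuner
        image: public.ecr.aws/docker/library/busybox:stable
        securityContext:
          privileged: true         # required to write sysctls on the host
        command:
        - sh
        - -c
        - |
          # Example values only; size buffers/backlogs for your workload.
          sysctl -w net.core.rmem_max=134217728
          sysctl -w net.core.wmem_max=134217728
          sysctl -w net.core.netdev_max_backlog=16384
          sysctl -w net.core.somaxconn=8192
          sysctl -w net.ipv4.tcp_keepalive_time=300
          # Sleep so the DaemonSet reports Running instead of CrashLooping.
          sleep infinity
```

It's still nowhere near as tidy as one Puppet `file` plus `exec`, but it is at least declarative and applies consistently as nodes come and go.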
Salesforce just completed a massive migration: 1,000+ Amazon EKS clusters moved from Kubernetes Cluster Autoscaler to Karpenter!
The impact❓
⇨ Faster scaling
⇨ Simpler operations
⇨ Lower costs
⇨ More flexible, self-service infrastructure for internal dev teams
Details here 👉 https://bit.ly/49xaKQy
[re:Invent 2025] Trying out Kube Resource Orchestrator (KRO) on EKS!
https://qiita.com/daitak/items/f379f366aab8b65a04fb?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
#qiita #AWS #kubernetes #eks #PlatformEngineering #reInvent2025