My dear #fediverse, does someone have a nice #ansible repo to set up #EKS on AWS? If I can avoid starting from scratch 😅

#sysadmin #devops #linux #opensource #foss

I wrote up a quick how-to for running data backups inside a Kubernetes cluster using CronJobs. I wish it were as simple as a crontab + bash script like in the olden days, but it works well enough. It is nice how declarative and stateless it is, though!

https://nbailey.ca/post/backup-k8s-cronjob/

#kubernetes #backup #backups #cronjob #postgres #postgresql #kafka #aws #s3 #eks #bash #terraform #sysadmin #linux #blog #blogpost

Backup Postgres databases with Kubernetes CronJobs

A key part of operating any safe and reliable system is ensuring that deleted or lost data can be recovered promptly and consistently. One key part of that is maintaining automatic backups that are recoverable and verifiable. This is a quick and easy way to accomplish that goal, using pieces of infrastructure that are already common in production environments. There are countless ways to perform a backup; this is simply one of the “easiest”, given these ingredients are available.
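A minimal sketch of what such a CronJob could look like (the blog post above has the real details; the names, schedule, bucket, and `pg-credentials` Secret here are illustrative assumptions, and the image would need both `pg_dump` and the AWS CLI):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup          # illustrative name
spec:
  schedule: "0 3 * * *"          # nightly at 03:00
  concurrencyPolicy: Forbid      # never run two backups at once
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: backup
              image: postgres:16         # assumes pg_dump + aws cli are available
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump "$DATABASE_URL" | gzip |
                  aws s3 cp - "s3://my-backup-bucket/pg/$(date +%F).sql.gz"
              env:
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: pg-credentials   # assumed existing Secret
                      key: url
```

Piping `pg_dump` straight into `aws s3 cp -` keeps the pod stateless: no volume to provision, and a failed run simply leaves no object behind.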

With a multi-cloud strategy and a distributed system, your Kubernetes pods need secure, passwordless authentication across AWS, Azure, and GCP. https://hackernoon.com/the-clean-way-to-access-aws-azure-and-gcp-from-kubernetes-no-secrets-no-rotations #eks
The Clean Way to Access AWS, Azure, and GCP From Kubernetes (No Secrets, No Rotations) | HackerNoon

How is it that a DevOps candidate, who worked several years at #EPAM, then at #LuxSoft, and is now back at #EPAM, holding several #AWS certifications (obtained at that same EPAM) as well as a #Kubernetes #SKAD certification, doesn't know the answer to the question:
- what do you need to do in a new #EKS cluster to create a load balancer for a deployment?
Not without difficulty, the candidate answered that you need to create an #ingress of type #nginx, but couldn't explain why no load balancer appeared afterwards (because a new cluster has no nginx ingress controller). Personally, I'd install the ALB ingress controller, not nginx.
And this was supposedly a strong middle by skill level. It seems to me my strong juniors know the answer, because each of them plays with a cluster and installs everything it needs themselves.
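For reference, the usual answer on a fresh EKS cluster is: install the AWS Load Balancer Controller, then create an Ingress with the `alb` class, which provisions an ALB automatically. A hedged sketch (the app and Service names are made up):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                   # illustrative
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb          # requires the AWS Load Balancer Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # illustrative Service
                port:
                  number: 80
```

Without the controller installed, this manifest is accepted by the API server but no load balancer is ever created, which is exactly the trap described above.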

So, like, #AWS #EKS.. the kernel defaults for the EKS nodes are, by and large, consistent with 10 Mbps half-duplex networking on a workstation. Judging by how many hoops you need to jump through to manage sysctls on EKS and #K8S in general, I can only see one of two possible explanations:

1) There's some magic kernel module installed for EKS or K8S that obviates the need to tune the kernel for server workloads.

2) We stopped caring about tuning the network stack to match the network it's connected to and the server's workload, because it's cloud and/or K8S, and wasting resources is just what we do for the convenience of buying Bezos a new spaceship or super yacht.

I see a ton of network-implicated slowdowns in pipelines on EKS. There's a fuckton of dropped packets, retransmits, and context switches. We can tell the kernel to spend a bit more time per cycle on processing network packets. We can increase the default and max buffer sizes for TCP and UDP sockets which are transmitting MASSIVE amounts of data for "15GBps" bursts. We can adjust the TCP timeout to match the AWS network to prevent half-open connections. We can increase the kernel backlog depth for busy services.

Maybe, I mean, **I** can. It's a twisted, gnarly, and wholly undocumented nightmare for K8S and EKS, mostly involving logging into the EKS nodes and manually setting the sysctls one at a time. Does anyone have a better way? I've yet to read anything that demonstrated how to do this in some sane manner. FWIW, it was one `file` and one `exec` resource in Puppet to adjust an entire fleet consistently.
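One common workaround for fleet-wide node tuning (not from the post above, just a pattern that plays the role Puppet used to) is a privileged DaemonSet whose init container applies the sysctls on every node; the specific values below are illustrative assumptions, not recommendations:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-sysctl-tuner        # illustrative
spec:
  selector:
    matchLabels: {app: node-sysctl-tuner}
  template:
    metadata:
      labels: {app: node-sysctl-tuner}
    spec:
      hostNetwork: true          # so net.* sysctls hit the node's network namespace
      initContainers:
        - name: tune
          image: busybox:1.36
          securityContext: {privileged: true}
          command: ["/bin/sh", "-c"]
          args:
            - |
              # example buffer/backlog tuning; values are placeholders
              sysctl -w net.core.rmem_max=67108864
              sysctl -w net.core.wmem_max=67108864
              sysctl -w net.core.netdev_max_backlog=30000
              sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
              sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9   # keeps the DaemonSet pods alive
```

On EKS, an alternative is baking the values into the node group's launch template user data, but the DaemonSet has the advantage of applying to nodes as they appear, Karpenter-provisioned ones included.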

Salesforce just completed a massive migration: 1,000+ Amazon EKS clusters moved from Kubernetes Cluster Autoscaler to Karpenter!

The impact❓
⇨ Faster scaling
⇨ Simpler operations
⇨ Lower costs
⇨ More flexible, self-service infrastructure for internal dev teams

Details here 👉 https://bit.ly/49xaKQy

#Kubernetes #AWS #EKS #InfoQ

Learn how to use EKS Pod Identity principal tags to isolate each tenant’s S3 access with a single shared IAM role. https://hackernoon.com/how-to-use-eks-pod-identity-to-isolate-tenant-data-in-s3-with-a-shared-iam-role #eks
How to Use EKS Pod Identity to Isolate Tenant Data in S3 With a Shared IAM Role | HackerNoon
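The core of that pattern is an IAM policy whose resource paths interpolate the session tags EKS Pod Identity attaches (such as `kubernetes-namespace`), so one role serves every tenant; the bucket name below is an assumption:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::tenant-data-bucket/${aws:PrincipalTag/kubernetes-namespace}/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::tenant-data-bucket",
      "Condition": {
        "StringLike": {"s3:prefix": "${aws:PrincipalTag/kubernetes-namespace}/*"}
      }
    }
  ]
}
```

Because the tag value is set by EKS at credential-vending time rather than by the workload, a pod in namespace `tenant-a` can never read `tenant-b/` objects even though both use the same role.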

@MichalBryxi

People loving up on #EKS.
【re:Invent 2025】Trying out Kube Resource Orchestrator (KRO) on EKS! - Qiita

Hello, this is Takasao! At the recently held re:Invent 2025, EKS Capabilities was announced! EKS Capabilities provides, AWS-managed, the following features that are handy for GitOps and platform engineering…

Qiita
Last but certainly not least, my roundup of all the #AWS #CloudOps news from #reinvent, including the new #multicloud Interconnect, #EKS Capabilities, #observability and #logmanagement updates for #cloudwatch, and more. https://www.techtarget.com/searchcloudcomputing/news/366636053/AWS-CloudOps-hones-multi-cloud-support-for-AI-resilience
AWS CloudOps hones multi-cloud support for AI, resilience

Network, observability and Kubernetes management news at re:Invent aligned around themes of multi-cloud scale and resilience amid AI growth and cloud outage concerns.

TechTarget