#OpenShift hosters 🔊 Red Hat has released a mitigation for the Copy Fail vulnerability, no reboots needed:

https://access.redhat.com/solutions/7142136

#RedHat #CopyFail #CVE202631431

CVE-2026-31431 Mitigation for Managed OpenShift (Zero-Reboot BPF LSM DaemonSet) - Red Hat Customer Portal

All OpenShift clusters are confirmed to be affected by CVE-2026-31431 ("Copy Fail"), which has been classified as an important vulnerability. Red Hat is developing a fix for the CVE that will be released in z-streams for OpenShift 4.16, 4.18, 4.19, 4.20, and 4.21. Until the fix is released, a mitigation can be applied to the cluster to disable the affected component.

Red Hat Customer Portal

Does anyone have any good tutorials on building a simple image with Tekton? I found lots of tutorials, but most use Kaniko (which Google archived in 2025 — see my other toot) or tasks from the Tekton Hub (which seems to no longer be available?)

#DevOps #Kubernetes #k8s #OpenShift #Tekton #CICD #Containers #Docker #Pipelines

EDB is heading to Red Hat Summit 2026 in Atlanta this May to show what sovereign, production-grade AI looks like when it's built on infrastructure you actually own.

The EDB and Red Hat collaboration centers on EDB Postgres AI on Red Hat #OpenShift, a unified data layer that handles both structured data and vector embeddings in a single #Postgres instance, eliminating the need for separate vector stores or proprietary pipelines: https://www.enterprisedb.com/blog/edb-red-hat-summit-2026-building-ai-ground-you-own

Red Hat and Tesla engineers tackled a real production problem together.

3x output tokens/sec, 2x faster TTFT on Llama 3.1 70B with KServe + llm-d + vLLM. Fixes pushed upstream to KServe along the way.

This is what open source looks like. 🤝 🚀

https://llm-d.ai/blog/production-grade-llm-inference-at-scale-kserve-llm-d-vllm

#RedHat #Tesla #RedHatAI #vLLM #Pytorch #Kubernetes #OpenShift #KServe #llmd #Llama #OpenSource

Production-Grade LLM Inference at Scale with KServe, llm-d, and vLLM | llm-d

How migrating from a simple vLLM deployment to a robust MLOps platform built on KServe, llm-d's intelligent routing, and vLLM solved significant scaling and operational challenges, using deep customization and prefix-cache-aware routing to maximize GPU utilization.

llm-d

New Fedora Podcast episode!! 🎙️
What does bootc actually look like when it's running in production? James Harmison joins us to talk about building custom bootc images across wildly different contexts: NVIDIA drivers, AGX Orin hardware, replacing RHCOS in OpenShift, and even a couch gaming rig.
Real world. Real use cases. No lab bubbles.

🎧 podcast.fedoraproject.org
#Fedora #Linux #bootc #OpenSource #Containers #OpenShift

If anyone is interested in making a living doing automation on top of Kubernetes, #CGI seems to be looking for people for its #OpenShift team. I don't work there, but I'm sharing since I know people there.

https://cgi.njoyn.com/corp/xweb/xweb.asp?clid=21001&page=jobdetails&jobid=J0326-2971&BRID=1286725&SBDID=943&LANG=1
CGI Architecture Careers | Kubernetes-asiantuntija | Helsinki, Finland

Fun weekend playing with Go to build a CLI + web app dashboard to display commit/RC/release deployments on our OpenShift cluster.

Data from GitLab, Harbor, OpenShift, Argo CD.

Multi-namespace, multi-cluster: build an RC tag and deploy it to many environments via image retagging.

This is the third iteration of this idea, originally developed in Bash, then Python, and now Go.

The CLI can also act as a release manager tool: it uses the GitLab API to create SemVer RC tags that our Tekton pipelines process differently in each environment:

  • commit-based: build, test, deploy
  • RC-tag-based: build, test, deploy to acceptance
  • retag the RC image to staging
  • retag the RC image to production
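The RC-tagging step above can be sketched in Go. This is a minimal, hypothetical helper (not the author's actual code): it derives the next SemVer RC tag for a base version from the tags that already exist in the repository, as returned by GitLab's tags API; the actual tag would then be created with a POST to GitLab's `/projects/:id/repository/tags` endpoint.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// nextRCTag returns the next release-candidate tag for a base version,
// given the tags that already exist in the repository. For base "1.4.0"
// with v1.4.0-rc.1 and v1.4.0-rc.2 present, it returns "v1.4.0-rc.3".
func nextRCTag(base string, existing []string) string {
	// Match tags like v<base>-rc.<N> and capture N.
	re := regexp.MustCompile(`^v` + regexp.QuoteMeta(base) + `-rc\.(\d+)$`)
	highest := 0
	for _, tag := range existing {
		if m := re.FindStringSubmatch(tag); m != nil {
			if n, err := strconv.Atoi(m[1]); err == nil && n > highest {
				highest = n
			}
		}
	}
	return fmt.Sprintf("v%s-rc.%d", base, highest+1)
}

func main() {
	tags := []string{"v1.4.0-rc.1", "v1.4.0-rc.2", "v1.3.0-rc.5"}
	fmt.Println(nextRCTag("1.4.0", tags)) // v1.4.0-rc.3
	fmt.Println(nextRCTag("1.5.0", tags)) // v1.5.0-rc.1
}
```

Each Tekton pipeline would then trigger off the tag push and branch its behavior on whether the ref matches the RC pattern.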

🔗 https://rmendes.net/notes/2026/04/19/e83d9

A Node on the Web

rsync took about half an hour at least. Something seriously wrong between my #OpenShift and NFS server.

233% 3-year return on investment and 13 months to payback with Red Hat AI

Discover the financial benefits and return on investment (ROI) experienced by customers using Red Hat AI. Learn how organizations turned infrastructure challenges into measurable financial gains with a 3-year ROI of 233% and a 13-month payback period.

Adfinis (Bern): Senior System Engineer (Cloud Native, 80-100%) - 0414

Adfinis AG has a job opening for Senior System Engineer (Cloud Native, 80-100%) - 0414 in Bern (published: 14.04.2026). Apply now or check the other available jobs.