Manage your Linux systems like a container!

I’ve got to tell you, I have not been so excited about a technology… probably since Containers. At Summit this year Red Hat announced the General Availability of Image Mode for RHEL. So I got to spend a week in Boston, explaining, over and over again, why that’s important.

See, Image Mode is kind of a big deal. It takes container workflows and applies them to your data center servers using a technology called bootc. This concept isn’t exactly new; this sort of technology has been applied to edge devices, phones, and other appliances for years. But what we have now is a general-purpose Linux that you can update using a bootable container image. This changes things.

So think about a Linux system as you know it today. We’re calling that Package Mode now, in order to avoid confusion. RHEL Package Mode is a Linux base with a package manager, where you install and configure things, and then fight to keep those things from drifting pretty much from then until eternity. There’s a whole facet of the IT industry built around mitigating that drift. Package and config management is a huge business! For good reason! Drift is what turns your routine 2 AM maintenance into a panic attack when the database server doesn’t come back up.

So I talked a lot about Image Mode at Summit, but I have to admit, I hadn’t touched it yet! Now that I’m back home, and my time is a little less consumed by prep for the RHEL 10 release and Summit deadlines, I decided to take some time and get hands-on with this revolutionary thing.

Building a pipeline

So, I use GitLab Community Edition as a repository for a few container builds I maintain. Some time back I managed to get CI/CD pipelines working for those container builds. They’re nothing fancy, but they work. I commit a change to the repository, and a job kicks off to rebuild the container and push it into a registry. In some cases that’s just the internal GitLab registry; in others it’s Docker Hub. I, of course, do it all with Podman. So when I decided to tackle Image Mode, I thought it would be best to just rip that band-aid right off, do it in GitLab, and have the builds happen there. How hard could it be? I already had container builds running there!

So I made a repo, copied my CI config from one of the container builds that just used Podman and the local registry, and threw in a basic Containerfile that just sourced FROM the RHEL bootc base image and then did a package install. Commit, sit back in my arrogance, and wait for my image.

It failed. For reasons I still don’t fully understand, the container build uses fuse-overlayfs, and that wouldn’t work inside my runner’s Podman-in-Podman build container. I did some research, and luckily I have access to internal Red Hat knowledge, so I was able to bounce some ideas around and come up with a solution. Two things, actually: my runner needed some config changes. Here, I’ll share them with you.

Here is my runner config:

[[runners]]
  name = "dind-container"
  url = "https://git.undrground.org"
  id = 3
  token = "NoTokenForYou"
  token_obtained_at = somedatestamp
  token_expires_at = someotherdatestamp
  executor = "docker"
  environment = ["FF_NETWORK_PER_BUILD=1"]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:git"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mtu = 0

The things I had to add were, first, privileged = true, which gives the container the access it needs to do its fuse-overlayfs work, and second, the environment variable “FF_NETWORK_PER_BUILD=1”, which I believe tweaks the Podman networking in a way that fixed a DNS resolution problem I was having in my builds.

With that fixed, I was able to get builds working! I have two things to share that may help you if you are trying to do the same. First, another Red Hatter built a public example repo that will apparently “just work” if you use it as a base for your Image Mode CI/CD. It didn’t work for me, but I suspect that was more about my GitLab setup and less about the functionality of the example. You can find that example here. What I ended up doing was modifying my existing Podman CI file. That looks like this:

---
image: registry.undrground.org/gangrif/podman-builder:latest

#services:
#  - docker:dind

before_script:
  - dnf -y install podman git subscription-manager buildah skopeo podman
  - subscription-manager register --org=${RHT_ORGID} --activationkey=${RHT_ACT_KEY}
  - subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms --enable rhel-9-for-x86_64-baseos-rpms
  - export REVISION=$(git rev-parse --short HEAD)
  - podman login --username gitlab-ci-token --password $CI_JOB_TOKEN $CI_REGISTRY
  - podman login --username $RHLOGIN --password "$RHPASS" registry.redhat.io

after_script:
  - podman logout $CI_REGISTRY
  - subscription-manager unregister

stages:
  - build

containerize:
  stage: build
  script:
    - podman build --secret id=creds,src=/run/containers/0/auth.json --build-arg GIT_HASH=$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - podman push $CI_REGISTRY_IMAGE

Now, this example contains no verification or validation, so I suggest you maybe look into the proper example linked externally. That one has a lot of testing included. Mine will improve with time. 😉
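If you want a quick sanity check without adopting the full example repo, one option is a follow-on job that runs bootc’s built-in lint against the image you just pushed. This is a minimal sketch of my own, not part of the official example; it assumes you add a “validate” entry to the stages list above, and that your bootc base image is recent enough to include the bootc container lint subcommand.

validate:
  stage: validate
  script:
    # Pull the image the build stage just pushed, then run bootc's lint checks inside it
    - podman pull $CI_REGISTRY_IMAGE:latest
    - podman run --rm $CI_REGISTRY_IMAGE:latest bootc container lint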

Registry Authentication for your build

Now, there are a few things to note here. First, notice that I am not just logging into my own registry, but also registry.redhat.io. You register with your Red Hat login for the Red Hat private registry, and that’s where the bootc base images come from. I also use subscription-manager to register the build container to Red Hat’s CDN. That’s because the RHEL Image Mode build is building RHEL, and it must be done on an entitled host in order to receive any updates or packages during the container build. This was something I had gotten stuck on for some time; it’s a little tough to wrap your head around. Once you do, though, it makes sense.

Authenticating your bootc system with your registry, automatically

I am also passing the podman authentication token file into a podman secret at build time. This is important later. If your bootc images are stored in a registry that is not public, you will need to authenticate to that registry in order to pull your updated images after deployment. The easiest way to bake in that authentication is to simply take the authentication from the build host, and place it into the built image. There is some trickery that happens in your Containerfile to make this work. You can read more about this here.
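I haven’t shown the contents of link-podman-credentials.conf in this post. Based on my reading of the documentation linked above, it is a systemd tmpfiles.d entry that points Podman’s runtime auth.json at the credentials file baked into the image; treat the exact paths below as an assumption and check the docs for the canonical version.

# Hypothetical contents of link-podman-credentials.conf (verify against the linked docs)
# Create the runtime containers directory, then link Podman's auth.json to the baked-in file
d /run/containers 0700 root root
L /run/containers/auth.json - - - - /usr/lib/container-auth.json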

Containerfile

So, I told you we build Image Mode like a container. I meant it. We literally write a Containerfile and source it FROM the special bootc images that Red Hat publishes. There are a few things you’ll want to think about when building a bootc Containerfile that wouldn’t normally come up when building a standard application container.

Content

First, RHEL is entitled software, and that doesn’t change for RHEL Image Mode. This is pretty seamless if you are doing your build directly on an entitled RHEL system. But if you’re building in a UBI container like I am, you’ll need to subscribe that UBI container, because the bootc build depends on that entitlement to enable its own repositories. That is not true, however, for third-party public repositories. Those just get enabled right inside the Containerfile. This sounds confusing, but it boils down to this: RHEL repository? Entitled by the build host. Other repository? Add it via the Containerfile. I add EPEL in my example below.
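To make that concrete, here is a rough sketch of what adding a third-party repository directly in the Containerfile can look like. The repo file name and package are placeholders of my own; the RHEL repositories themselves need nothing here, because they come from the entitled build host.

# Hypothetical third-party repo added inside the Containerfile (names are placeholders)
COPY thirdparty.repo /etc/yum.repos.d/thirdparty.repo
RUN dnf -y install some-thirdparty-package && dnf clean all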

Users

Something else I don’t usually see done in a standard container is the addition of users. Remember, this is going to be a full RHEL host at the other end, so you might need to add users. In my case I am adding a local “breakglass” user, because I am leveraging IdM for my identities, but if something goes wrong during provisioning, I want a user I can log in with to troubleshoot. You can also come in later with other tools to add users: you can enable cloud-init and add them there, or if you are using the image builder tool I’ll talk about in a bit, you can give it a config.toml file to add users at that point.
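For the image builder route, here is a minimal sketch of the kind of config.toml you could hand to bootc-image-builder to create a user at disk-image build time. The username, password hash, and SSH key here are placeholders; check the bootc-image-builder documentation for the full set of supported customizations.

# Hypothetical config.toml user customization for bootc-image-builder
[[customizations.user]]
name = "breakglass"
password = "$6$s0m3pAssw0rDHasH"
key = "ssh-ed25519 AAAA...your-public-key..."
groups = ["wheel"]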

Other Considerations

Other things that you’ll need to think about might be firewall rules, container registry authentication, and even the lack of an ENTRYPOINT or CMD. Because this system is expected to boot into a full OS, it is not going to run a single dedicated workload. Instead you’ll be enabling services like you would on a standard RHEL system, with systemctl.
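As a rough sketch of what that host-level setup can look like in the Containerfile, here is the sort of thing I mean; the specific service and port are examples of mine, not something from my actual build.

# Example host-level setup in a bootc Containerfile (service and port are placeholders)
RUN dnf -y install firewalld && dnf clean all
RUN systemctl enable firewalld && firewall-offline-cmd --add-port=8080/tcp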

My Containerfile

Now that we’re through all of that, let me show you what I ended up with as a Containerfile.

FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Enable EPEL, install updates, and install some packages
RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
RUN dnf -y update
RUN dnf -y install ipa-hcc-client rhc rhc-worker-playbook cloud-init && dnf clean all

# This sets up automatic registration with Red Hat Insights
COPY --chmod=0644 rhc-connect.service /usr/lib/systemd/system/rhc-connect.service
COPY .rhc_connect_credentials /etc/rhc/.rhc_connect_credentials
RUN systemctl enable rhc-connect && touch /etc/rhc/.run_rhc_connect_next_boot

# This is my backdoor user, in case of IdM join failure
RUN useradd breakglass
RUN usermod -p '$6$s0m3pAssw0rDHasH' breakglass
RUN groupmems -g wheel -a breakglass

# This picks up that podman pull secret, and adds it to the build image
COPY link-podman-credentials.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf
RUN --mount=type=secret,id=creds,required=true cp /run/secrets/creds /usr/lib/container-auth.json && \
    chmod 0600 /usr/lib/container-auth.json && \
    ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json

# This configures the bootc update timer to run at a time that I consider acceptable
RUN mkdir -p /etc/systemd/system/bootc-fetch-apply-updates.timer.d/
COPY weekly-timer.conf /etc/systemd/system/bootc-fetch-apply-updates.timer.d/weekly.conf

You can see from my comments what’s going on in the various blocks of that Containerfile. My intention is to use this as a base RHEL system, and then make more derivative images based on this one. For instance, if I wanted a web server, I would base a new Containerfile on this image and then add in a RUN dnf install httpd. It’s important to note that you shouldn’t be installing packages on these deployed systems after they are up and running. Those installations should happen in the image. If you install a package on a running Image Mode system, that change will not be carried into the next image update on your system unless you also incorporate it into your bootable container image. This means that you will need to plan ahead, but it also means that tracking package drift is a thing of the past!
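For example, a derivative web server image might look something like this; it’s just a sketch of the layering idea, not an image I’ve actually built.

# Hypothetical derivative image layered on top of my base Image Mode build
FROM registry.undrground.org/gangrif/rhel9-imagemode:latest
RUN dnf -y install httpd && dnf clean all
RUN systemctl enable httpd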

In my case, the above-mentioned CI automation and this Containerfile worked in my GitLab instance, with the runner modifications described earlier. The build job will take some time; a bootc image is much larger than the lightweight container images you are used to if you’ve been building application containers.

But what about turning that into a VM?

So I am covering just ONE method of getting this image deployed to an actual system. You can use a myriad of different methods, including Kickstart, writing an ISO, or PXE boot, but what I am doing (because it suits my needs) is turning my image into a qcow2 file, which is a virtual disk image for use with libvirt. If you’re familiar with Image Builder, the tool used to churn out tailored RHEL disk images, then this won’t be a surprise. There’s a container you can grab that just runs Image Builder: you give it a bootable container image, and it turns it into a qcow2! I’ve cooked up a script that pulls my bootable container right from my registry, writes it to a qcow2, then immediately passes that to virt-install and builds a VM out of it!

In my case, it also uses cloud-init to set the hostname, auto-registers and connects to Insights, and then uses a slick new tech preview feature that auto-joins my lab’s IdM domain through Insights! Here is my script:

#!/bin/bash
VMNAME=$1

podman login --username my-gitlab-username -p 'gitlab-token' registry.undrground.org
podman login --username my-redhat-login -p 'redhatpassword' registry.redhat.io
podman pull registry.undrground.org/gangrif/rhel9-imagemode:latest

sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/config.toml:/config.toml \
    -v $(pwd)/output:/output \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    registry.redhat.io/rhel9/bootc-image-builder:latest \
    --type qcow2 \
    registry.undrground.org/gangrif/rhel9-imagemode:latest

cat << EOF > $VMNAME.init
#cloud-config
fqdn: $VMNAME.idm.undrground.org
EOF

mv $(pwd)/output/qcow2/disk.qcow2 /var/lib/libvirt/images/$VMNAME-disk0.qcow2

virt-install \
    --name $VMNAME \
    --memory 4096 \
    --vcpus 2 \
    --os-variant rhel9-unknown \
    --import \
    --clock offset=localtime \
    --disk=/var/lib/libvirt/images/$VMNAME-disk0.qcow2 \
    -w bridge=bridge20-lab \
    --autoconsole none \
    --cloud-init user-data=$VMNAME.init

This, of course, can be improved, but as a proof of concept it works great! I’ve built a few test systems and so far it’s working flawlessly! Now, when I want to update my systems, I update the GitLab repository with the changes and let the CI run. Then, once it completes, all I do is run this script to make a new VM! The running VMs should (I have not tested this yet) pull the updated bootable container image from the registry on Saturday at 3 AM, and reboot if new changes are applied.
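I didn’t show the contents of the weekly-timer.conf drop-in that the Containerfile copies into place. A systemd drop-in along these lines would match the Saturday 3 AM behavior I described; treat the exact OnCalendar value as an assumption and adjust it to whatever window you consider acceptable.

# Hypothetical weekly-timer.conf drop-in for bootc-fetch-apply-updates.timer
[Timer]
# Clear the default schedule, then run weekly on Saturday at 3 AM
OnCalendar=
OnCalendar=Sat *-*-* 03:00:00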

Wrapping it up

This is, I think, the thing we’ve been promised for years, ever since the advent of the cloud, when we were told to stop treating our servers like pets but never really given a clear picture of how. Image Mode makes that promise a reality. I’m certain I’ll be sharing more as my Image Mode journey progresses. Thanks for reading!


#bootc #cloud #image #imageMode #linux #redHat #redHatEnterpriseLinux #rhel #services
