A grumpy ItSec guy walks through the office when he overhears an exchange of words.

devops0: I'll push the new image - just pull "latest"

ItSec (walking by): Careful. "latest" doesn't work the way you think.

devops1: How so?

ItSec: It's just a tag. Whoever pushes the image decides what "latest" points to. Sometimes it's the newest.

First, assume you have a local registry running on localhost:5000 and two Ubuntu images already present: ubuntu:23.04 and ubuntu:22.04. Tag and push both by their actual versions so the registry has explicit versioned tags. Then, on purpose, point latest to 22.04.

# start a quick-and-dirty, insecure local registry (lab use only)
docker run -d --name registry -p 5000:5000 --restart=always registry:2


# push explicit versions
docker tag ubuntu:23.04 localhost:5000/ubuntu:23.04
docker push localhost:5000/ubuntu:23.04

docker tag ubuntu:22.04 localhost:5000/ubuntu:22.04
docker push localhost:5000/ubuntu:22.04

# intentionally make "latest" refer to 22.04
docker tag ubuntu:22.04 localhost:5000/ubuntu:latest
docker push localhost:5000/ubuntu:latest

Now pull without a tag and see what you actually get. Omitting the tag defaults the client to requesting “:latest”. Because you explicitly set latest to 22.04, that’s exactly what will be pulled and run.

# pull without a tag -> defaults to :latest
docker pull localhost:5000/ubuntu

# verify the version by inspecting inside a container
docker run --rm localhost:5000/ubuntu cat /etc/os-release | grep VERSION=

VERSION="22.04.5 LTS (Jammy Jellyfish)"

If you now retag latest to 23.04 and push again, the same pull with no tag will start returning 23.04. Nothing "automatic" updated it; you changed it yourself by moving the tag.

That's the entire point: latest is a conventional, movable label, not a magical link to the newest software. It can be older than other tags in the same repository if someone set it that way. It can also be missing entirely.
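If you want to see which tags a repository really has, the registry v2 API exposes them at /v2/&lt;name&gt;/tags/list. A minimal sketch, using a canned response of the shape the lab registry above would return:

```shell
# against the live lab registry you would fetch the real list with:
#   curl -s localhost:5000/v2/ubuntu/tags/list
# canned response for illustration:
resp='{"name":"ubuntu","tags":["22.04","23.04","latest"]}'
case "$resp" in
  *'"latest"'*) verdict="latest exists" ;;
  *)            verdict="no latest tag" ;;
esac
echo "$verdict"
```

If the check prints "no latest tag", an untagged pull will simply fail, which is yet another reason to pin explicit versions.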

For more grumpy stories visit:
1) https://infosec.exchange/@reynardsec/115093791930794699
2) https://infosec.exchange/@reynardsec/115048607028444198
3) https://infosec.exchange/@reynardsec/115014440095793678
4) https://infosec.exchange/@reynardsec/114912792051851956
5) https://infosec.exchange/@reynardsec/115133293060285123
6) https://infosec.exchange/@reynardsec/115178689445065785
7) https://infosec.exchange/@reynardsec/115253419819097049

#appsec #devops #programming #webdev #docker #containers #cybersecurity #infosec #cloud #sysadmin #sysops #java #php #javascript #node

Hi everyone.
I'm new to Mastodon and figured I'd take the opportunity to introduce myself.

People call me Kazoo, and I'm passionate about Linux and everything related to it.

I run a mini blog, blog.howfaristovalhalla.com, about Linux administration, automation and the cloud.

I write mainly for people who already know Linux a bit, but also for those who want to carve out their own path into IT.

A few years ago I walked that road myself: after 17 years of running my own businesses I retrained, and today I work as a Cloud Linux Engineer.

Before I did, I spent many hours looking for proof that it was even possible at the age of 40.

Today I am that proof.
I've held my own in the market, grown stronger financially, and caught a new passion.

Nice to meet you.
Kazoo

#linux #Cloud #sysops

How does a typical DDoS on a WordPress installation happen?

- A search-based DDoS works by bypassing the cache
- The attacker sends a large volume of unique search queries so responses never hit the cache, e.g. ?s=something-xyz
- Each request becomes a cache miss and is forwarded past the network edge to the origin
- WordPress runs PHP + WP_Query for every request, often triggering expensive database work
- Repeated heavy queries exhaust CPU, memory and DB capacity, so the site slows and eventually crashes
- This is an application-layer (Layer 7) HTTP flood that mimics normal user traffic
- Key signals to look out for: huge spikes of /?s= requests in the logs, very high query entropy, a collapsing cache-hit rate

Cache-busting search queries force every request through the database, turning cheap HTTP calls into expensive backend load.
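A rough way to spot the pattern is to compare the volume and uniqueness of ?s= requests in the access log. A self-contained sketch with invented log lines:

```shell
# sketch: spot a cache-busting search flood in an access log
# (sample log lines invented for illustration)
cat > /tmp/access.log <<'EOF'
10.0.0.1 - - [01/Jan/2025:00:00:01 +0000] "GET /?s=abc123 HTTP/1.1" 200 512
10.0.0.2 - - [01/Jan/2025:00:00:01 +0000] "GET /?s=def456 HTTP/1.1" 200 512
10.0.0.3 - - [01/Jan/2025:00:00:02 +0000] "GET /about/ HTTP/1.1" 200 1024
10.0.0.4 - - [01/Jan/2025:00:00:02 +0000] "GET /?s=ghi789 HTTP/1.1" 200 512
EOF
total=$(grep -c '' /tmp/access.log)
searches=$(grep -c 'GET /?s=' /tmp/access.log)
unique=$(grep -o '/?s=[^ ]*' /tmp/access.log | sort -u | grep -c '')
echo "search requests: $searches/$total, unique queries: $unique"
```

Many search requests, nearly all of them unique, is exactly the "high query entropy" signal mentioned above.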

Great Sysops lightning talk by Tiia Ohtokallio!

#WPSuomi #wpfi #WordPress #Sysops

A good #ShellScript, battle-tested over years, is an art — no doubt about it (a proud dad) #SysOps

A grumpy ItSec guy walks through the office when he overhears an exchange of words.

devops0: Two containers went rogue last night and starved the whole host.
devops1: What are we supposed to do?

ItSec (walking by): Set limits. It's not rocket science. Docker exposes cgroup controls for CPU, memory, I/O and PIDs. Use them.

The point is: availability is part of security too. Linux control groups let you cap, isolate and observe resource usage, which is exactly how Docker enforces container limits for CPU, memory, block I/O and process counts [1]. Let's make it tangible with a small lab. We'll spin up a container, install stress-ng, and watch the limits in action.

# On the Docker host
docker run -itd --name ubuntu-limits ubuntu:22.04
docker exec -it ubuntu-limits bash

# Inside the container
apt update && apt install -y stress-ng
stress-ng --version

Check how many cores you see, then drive them.

# Inside the container
nproc

# For my host nproc returns 4
stress-ng --cpu 4 --cpu-load 100 --timeout 30s

In another terminal, watch usage from the host.

docker stats

Now clamp CPU for the running container and see the throttle take effect.

docker update ubuntu-limits --cpus=1
docker stats

The --cpus flag is a wrapper over the Linux CFS period/quota; --cpus=1 caps the container at roughly one core's worth of time on a multi-core host.
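A quick sketch of that mapping; on cgroup v2 the kernel stores it in cpu.max as "quota period", with a default period of 100000 microseconds:

```shell
# --cpus=N translates to CFS quota = N * period (default period: 100000 us)
period=100000
for cpus in 0.5 1 2; do
  quota=$(awk -v c="$cpus" -v p="$period" 'BEGIN { printf "%d", c * p }')
  echo "--cpus=$cpus -> cpu.max: $quota $period"
done
```

So --cpus=1 means the container may consume at most 100000 us of CPU time per 100000 us window, no matter how many cores the host has.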

Memory limits are similar. First tighten RAM and swap, then try to over‑allocate in the container.

# On the host
docker update ubuntu-limits --memory=128m --memory-swap=256m
docker stats
# Inside the container: stays under the cap
stress-ng --vm 1 --vm-bytes 100M --timeout 30s --vm-keep

# Inside the container: tries to exceed; you may see reclaim/pressure instead of success
stress-ng --vm 1 --vm-bytes 300M --timeout 30s --vm-keep

A few memory details matter. --memory is the hard ceiling; --memory-swap controls total RAM+swap available. Setting swap equal to memory disables swap for that container; leaving it unset often allows swap equal to the memory limit; setting -1 allows unlimited swap up to what the host provides.
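Those rules are easy to misread, so here is the arithmetic for the exact limits used above (pure shell, no container needed):

```shell
# worked example for --memory=128m --memory-swap=256m
mem=128
memswap=256
echo "RAM cap: ${mem}MiB, swap on top: $((memswap - mem))MiB"
# --memory-swap equal to --memory leaves zero swap, i.e. swap disabled
echo "swap when both are ${mem}m: $((mem - mem))MiB"
```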

Putting it all together at container start:

docker run -it --rm \
  --name demo \
  --cpus=1 \
  --memory=256m \
  --memory-swap=256m \
  --pids-limit=25 \
  ubuntu:22.04 bash

For plain docker compose (non‑Swarm), set service‑level attributes. The Compose Services reference explicitly supports cpus, mem_limit, memswap_limit and pids_limit on services [2].

services:
  api:
    image: ubuntu:22.04
    command: ["sleep", "infinity"]
    cpus: "1"              # one CPU's worth of time
    mem_limit: "256m"      # hard RAM limit
    memswap_limit: "256m"  # RAM+swap; equal to mem_limit disables swap
    pids_limit: 50         # max processes inside the container

[1] https://docs.docker.com/engine/containers/resource_constraints/
[2] https://docs.docker.com/reference/compose-file/services/


PSA at least one of us is going to be around https://datenspuren.de/2025/ #datenspuren in Dresden this weekend.

If you are experienced in #sysops , #email tech or have good #rust experience and general interest in #chatmail (https://chatmail.at ) maybe drop us a DM and let's meet and chat. Depending on circumstances there might also be spontaneous sessions you could lookout for.

Datenspuren 2025

Symposium Datenspuren, 2025, Zentralwerk, Riesaer Straße 32, 01127 Dresden; veranstaltet vom Chaos Computer Club Dresden

devops0: Our audit report says we must "enable Docker rootless mode". I have no clue what that even is...
devops1: Sounds like more security BS. What's "rootless" supposed to do?

ItSec: Relax. Rootless mode runs the Docker daemon and containers as a regular, unprivileged user [1]. It uses a user namespace, so both the daemon and your containers live in "user space", not as root. That shrinks the blast radius if the daemon or an app in a container is compromised, because a breakout wouldn't hand out root on the host.

devops1: Fine. If it's "not hard" to implement, we can consider this.

ItSec: Deal.

Note: this mode does have some limitations. You can review them in docs [2].

First, let's check which user the Docker daemon is currently running as.

ps -C dockerd -o pid,user,group,cmd --no-headers

You should see something like:

9250 root root /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Here's a clean, minimal path that matches the current docs. First, stop the rootful daemon.

sudo systemctl disable --now docker.service docker.socket

Then install the uid/gid mapping tools. On Ubuntu it's uidmap.

sudo apt update && sudo apt install -y uidmap

Docker provides a setup tool. If you installed official DEB/RPM packages, it's already in /usr/bin. Run it as your normal user.

dockerd-rootless-setuptool.sh install

If that command doesn't exist, install the extras package or use the official rootless script.

sudo apt-get install -y docker-ce-rootless-extras
# or, without package manager access:
curl -fsSL https://get.docker.com/rootless | sh

The tool creates a per-user systemd service, a "rootless" CLI context, and prints environment hints. You usually want your client to talk to the user-scoped socket permanently, so export DOCKER_HOST and persist it in your shell profile.

export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
echo 'export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock' >> ~/.bashrc

Enable auto-start for your user session and let services run even after logout ("linger").

systemctl --user enable docker
sudo loginctl enable-linger $(whoami)

Point the CLI at the new context and sanity-check.

docker context use rootless

Once more, check which privileges the Docker daemon is running with:

ps -C dockerd -o pid,user,group,cmd --no-headers

Now you will see something like:

10728 ubuntu ubuntu dockerd
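If you want to script that check, say in a CI compliance job, the user column is all you need. A sketch against canned ps lines matching the outputs above:

```shell
# classify dockerd as rootful/rootless from its ps line
# (canned lines for illustration; on a real host you would feed in
#  the output of: ps -C dockerd -o pid,user,group,cmd --no-headers)
rootful_line="9250 root root /usr/bin/dockerd -H fd://"
rootless_line="10728 ubuntu ubuntu dockerd"
for line in "$rootful_line" "$rootless_line"; do
  user=$(echo "$line" | awk '{print $2}')
  if [ "$user" = "root" ]; then
    echo "rootful ($user)"
  else
    echo "rootless ($user)"
  fi
done
```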

And pssst! Podman runs containers in "rootless" mode by default [3].

[1] https://docs.docker.com/engine/security/rootless/
[2] https://docs.docker.com/engine/security/rootless/troubleshoot/
[3] https://documentation.suse.com/en-us/smart/container/html/rootless-podman/index.html#rootless-podman-sle


#appsec #devops #programming #webdev #java #javascript #python #php #docker #containers #k8s #cybersecurity #infosec #cloud #hacking #sysadmin #sysops

A grumpy ItSec guy walks through the office when he overhears an exchange of words.

devops0: These k8s security SaaS prices are wild.
devops1: Image scanning, policy engines, "enterprise tiers"... why are we paying so much?

ItSec (walking by): You pay for updates & support, probably, but you can do some of this yourselves with a bit of k8s hacking.

devops0: How, exactly?

Disclaimer: this is a PoC for learning, not a production-ready solution.

Kubernetes can ask an external webhook whether a given image should be allowed via an admission controller, in this case ImagePolicyWebhook [1]. The webhook receives an ImageReview payload [2], initiates a scan, and returns "allowed: true/false".

We will write a Flask endpoint that invokes Trivy [3] for each image and denies pod creation if HIGH or CRITICAL vulnerabilities appear.

Below is a minimal Flask service.

from flask import Flask, request, jsonify
import subprocess, json, re

app = Flask(__name__)

def is_valid_image_format(image: str) -> bool:
    if not re.fullmatch(r"[A-Za-z0-9/_:.@+-]{1,300}", image):
        return False
    if image.startswith("-"):
        return False
    return True


def scan_with_trivy(image: str):
    cmd = [
        "trivy", "image", "--quiet",
        "--severity", "HIGH,CRITICAL",
        "--format", "json",
        image
    ]
    r = subprocess.run(cmd, capture_output=True, text=True)
    if r.returncode != 0:
        # treat a scanner failure as an error, never as "no vulns found"
        return None
    try:
        data = json.loads(r.stdout or "{}")
        results = data.get("Results", [])
        vulns = []
        for res in results:
            for v in res.get("Vulnerabilities", []) or []:
                if v.get("Severity") in ("HIGH", "CRITICAL"):
                    vulns.append(v)
        return vulns
    except json.JSONDecodeError:
        return None

@app.route("/scan", methods=["POST"])
def scan():
    body = request.get_json(force=True, silent=True) or {}
    containers = body.get("spec", {}).get("containers", [])
    if not containers:
        return jsonify({
            "apiVersion": "imagepolicy.k8s.io/v1alpha1",
            "kind": "ImageReview",
            "status": {"allowed": False, "reason": "No containers provided"}
        })

    results = []
    decision = True
    for c in containers:
        image = c.get("image", "")
        if not is_valid_image_format(image):
            results.append({"image": image, "allowed": False, "reason": "Invalid image format"})
            decision = False
            continue
        # the image string is already validated and subprocess gets it as a
        # list argument, so no shell quoting is needed
        vulns = scan_with_trivy(image)
        if vulns is None:
            results.append({"image": image, "allowed": False, "reason": "Scanner error"})
            decision = False
            continue
        if vulns:
            results.append({"image": image, "allowed": False, "reason": "HIGH/CRITICAL vulnerabilities detected"})
            decision = False
        else:
            results.append({"image": image, "allowed": True})

    return jsonify({
        "apiVersion": "imagepolicy.k8s.io/v1alpha1",
        "kind": "ImageReview",
        "status": {"allowed": decision, "results": results}
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Run the service wherever Trivy is available. Tip: warm up the Trivy vulnerability DB once so the first request does not time out.

trivy image alpine:3.22 #warm up
gunicorn -w 4 -b 0.0.0.0:5000 app:app

Test it with an ImageReview-like request. Replace the URL and images as needed.

curl -s -X POST http://127.0.0.1:5000/scan -H "Content-Type: application/json" -d '{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "spec": {
    "containers": [
      {"image": "alpine:3.22"},
      {"image": "nginx:latest"}
    ]
  }
}' | jq .
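To pull the per-image verdicts out of the response without eyeballing the whole JSON, something like this works in a pinch (canned response for illustration; with the service running you would pipe in the curl output instead, and jq is cleaner if available):

```shell
# extract denied images from an ImageReview-style response
resp='{"status":{"allowed":false,"results":[{"image":"alpine:3.22","allowed":true},{"image":"nginx:latest","allowed":false}]}}'
denied=$(echo "$resp" \
  | grep -o '"image":"[^"]*","allowed":false' \
  | sed 's/"image":"\([^"]*\)".*/\1/')
echo "denied: $denied"
```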

Tell the API server to use ImagePolicyWebhook. The AdmissionConfiguration points at a kubeconfig for the webhook endpoint (/etc/kubernetes/admission-control-config.yaml).

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/webhook-kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false

The webhook kubeconfig targets your scanner's HTTP endpoint (/etc/kubernetes/webhook-kubeconfig.yaml). Edit the "server" value for your environment.

apiVersion: v1
kind: Config
clusters:
  - name: webhook
    cluster:
      server: http://192.168.108.48:5000/scan
contexts:
  - name: webhook
    context:
      cluster: webhook
      user: ""
current-context: webhook

Mount the AdmissionConfiguration and enable the plugin in the API server manifest. Add the following flags and mount the config file; adjust paths and IPs to your environment (kube-apiserver.yaml):

---
apiVersion: v1
[...]
containers:
  - command:
      - kube-apiserver
      [...]
      - --admission-control-config-file=/etc/kubernetes/admission-control-config.yaml
      - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    [...]
    volumeMounts:
      [...]
      - mountPath: /etc/kubernetes/admission-control-config.yaml
        name: admission-control-config
        readOnly: true
      - mountPath: /etc/kubernetes/webhook-kubeconfig.yaml
        name: webhook-kubeconfig
        readOnly: true
volumes:
  [...]
  - name: admission-control-config
    hostPath:
      path: /etc/kubernetes/admission-control-config.yaml
      type: FileOrCreate
  - name: webhook-kubeconfig
    hostPath:
      path: /etc/kubernetes/webhook-kubeconfig.yaml
      type: FileOrCreate

After the API server restarts, the cluster will begin asking the app about images during pod creation. A quick check shows an allowed image and a blocked one:

kubectl run ok --image=docker.io/alpine:3.22
pod/ok created

kubectl run nope --image=docker.io/nginx:latest
Error from server (Forbidden): pods "nope" is forbidden: one or more images rejected by webhook backend

That's the whole trick. Kubernetes asks our Flask app; the app calls Trivy. If HIGH or CRITICAL vulnerabilities are present, the admission decision is deny, and the pod never starts. It's not fancy and, as I wrote before, it's not meant for production, but it illustrates exactly how admission can enforce image hygiene without buying an external SaaS.

[1] https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook
[2] https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#request-payloads
[3] https://github.com/aquasecurity/trivy


#appsec #devops #kubernetes #programming #webdev #docker #containers #k8s #cybersecurity #infosec #cloud #hacking #sysadmin #sysops

Nothing beats the satisfaction of having to deal with a gnarly implementation of a tool...and succeeding despite bad documentation, LLMs hallucinating, the stars not aligning and the coffee not brewing. #sysops #funstuff #tech #satisfaction
On-premises Active Directory Hacks Microsoft 365 Services

In my previous article, The End of Active Directory: Why Your Cybersecurity Strategy Demands Entra ID Now, I wrote about the inherent incompatibility of Active Directory with modern enterprise security architectures and cloud strategies. Only t