I seem to have an issue with GitLab-CI and/or FreeBSD-poudriere.

Running it as a GitLab CI job fails; running the same script manually works.

Sadly there is no FreeBSD table at the Chemnitzer Linux-Tage. Only NetBSD for *BSD.

#FreeBSD #GitLab #poudriere #CLT2026

Now I have the problem that podman 4.3.1 does not have global IPv6 by default. I only get a fe80::/10 link-local address and a fd00::/8 ULA.

How do I get a global current-IP address into Podman containers?

#GitLab #podman #curl #IPv6

@txt_file By defining your own network.
More realistically, by shipping your own custom CNI configuration.

And even more realistically if it is on a notebook and you're moving between networks either not at all or with a VPN.
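For the own-network route, a minimal sketch using Podman's built-in flags (the network name and the 2001:db8:: prefix are placeholders; for global connectivity you would substitute a prefix that is actually routed to your host):

```shell
# Create a Podman network with IPv6 enabled. For global (non-NATed)
# connectivity the subnet must be a prefix routed to this host;
# 2001:db8::/32 is the documentation range and only a placeholder.
podman network create --ipv6 --subnet 2001:db8:1::/64 v6net

# Containers attached to it get an address from that subnet.
podman run --rm --network v6net docker.io/library/alpine ip -6 addr show
```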

@agowa338 sounds awful

Legacy IPv4 gets a working connection out of the box. For current IP I have to jump through hoops.

Sounds awful
#podman #IPv6

@txt_file Well, that's because it is NATed, which makes it independent of whatever the uplink network topology is...
@agowa338 Current IP can also do NAT. Why does Podman set up NAT for the old stuff but keep the current technology broken?

@txt_file Because people cannot agree on what the current-IP stack should look like. You've got the hard-liners on one side who say "no NAT whatsoever", and the equally opinionated ones on the other side who say "NAT worked for v4, so we'll NAT66 everything". Plus a lot of people who feel indifferent and are like "v4 kinda works, you'll figure it out, whatever".

And neither side wants to see that not having equally good support for the other approach will always leave some things broken...

@agowa338 @txt_file I'm somewhere in the middle, with some approximation of "route where you can, NAT where you must." 😕

Before we started using containers in our production environment I pointed out that communication between components in our production environment was not designed to work with NAT, and there was no way we were going to have enough IPv4 addresses for our deployment.

So things were deployed to use IPv6 between all our docker containers. Docker does not turn on IPv6 by default, but it was not too difficult to turn on and most of the time it just works. (We had some challenges with DigitalOcean doing a terrible job at network configuration.)
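The Docker-side switch mentioned here is a small daemon configuration; a hedged sketch (the `ipv6` and `fixed-cidr-v6` keys are Docker's documented daemon.json options; the prefix is a placeholder):

```shell
# /etc/docker/daemon.json — enable IPv6 on the default bridge.
# 2001:db8:2::/64 is a documentation placeholder; a real deployment
# would use a prefix routed to the host (or a ULA).
cat >/etc/docker/daemon.json <<'EOF'
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:2::/64"
}
EOF
systemctl restart docker
```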

Then one day our automation to deploy security updates started rolling out an update of docker which happened to turn on NAT for IPv6 traffic. This update broke our production environment in four distinct ways. All of them were due to the use of NAT.

Even though I had pointed out in advance that NAT wouldn’t work I still had not anticipated that it would break in that many different ways.

For us NAT was the problem and IPv6 was the solution. I don’t get why somebody thought it was a good idea to port the problem to IPv6. And it’s baffling to me that some people don’t want to use the solution unless they get to keep the problem.

@kasperd @txt_file

Well, the main issue with IPv6 is that it'll break local dev environments on notebooks as soon as you move to a different wifi...

@kasperd @txt_file

As well as when you only have a dynamic IPv6 prefix assigned via DHCPv6-PD. That is kinda fixable, but it requires a restart of all containers whenever the prefix changes, so it's also far from ideal.

But on servers at companies neither of these is an issue, and so there it becomes the better solution.

Changing the IP addresses of a container is not straightforward, but it is possible. As far as I recall it can be done without restarting the container, but it’s been a while since I did it, so I could misremember.
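One hedged sketch of doing this by hand, assuming root on the host, a Podman container named `web` (hypothetical), and `eth0` as the container-side interface: enter the container's network namespace with nsenter and swap the address there, without restarting anything.

```shell
# Get the container's init PID, then run `ip` inside its
# network namespace via nsenter.
PID=$(podman inspect --format '{{.State.Pid}}' web)

# Add the new address first, then remove the old one
# (both addresses are placeholders from the documentation range).
nsenter --target "$PID" --net -- ip -6 addr add 2001:db8:3::10/64 dev eth0
nsenter --target "$PID" --net -- ip -6 addr del 2001:db8:9::10/64 dev eth0
```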

@kasperd @txt_file

Not in a generic way, and most applications just don't like it, as in most cases it causes the network interface to be replaced.

You could do something more sophisticated with a custom CNI plugin, but at that point it's usually cost-prohibitive and the project gets shut down with "eh, whatever, we'll just use IPv4, that 'just works'. We'll revisit IPv6 once it is production-ready in docker".

@agowa338 @kasperd @txt_file I feel like this is IPv4 thinking (using just one IP address). AFAIK it is possible in Linux to assign both a ULA and a GUA (or multiple) to a container, so that the ULA can be used for local development while connections from outside use the GUA.

I am not sure if Docker/Podman do this, though; I'm just saying that the underlying mechanism (network namespaces) should be able to, and that IPv6 (as specified) has a solution for these kinds of problems.

(The same is probably true for renumbering, though this relies on what the containering solution supports even more…)
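The namespace mechanism itself can be sketched without any container runtime (assuming root; interface names and addresses are placeholders):

```shell
# A bare network namespace with both a ULA and a GUA on one interface.
ip netns add demo
ip link add veth-host type veth peer name veth-demo netns demo
ip -n demo link set veth-demo up

# A ULA for stable local communication...
ip -n demo -6 addr add fd00:1234::2/64 dev veth-demo
# ...and a GUA for outside reachability, on the same interface.
# Source address selection (RFC 6724) picks between them per destination.
ip -n demo -6 addr add 2001:db8:4::2/64 dev veth-demo

ip -n demo -6 addr show dev veth-demo
```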

@ledoian @txt_file @kasperd

All of the docker stuff really doesn't like more than 1 IP per interface.

The most cursed thing you can try to do IPv6 with is anything docker and/or k8s related....

@agowa338 sounds like docker & k8s are broken by design.
@ledoian @kasperd

@txt_file @ledoian @kasperd

Tbh, with regard to IPv6, (almost) the entire ecosystem is...