I seem to have an issue with GitLab-CI and/or FreeBSD-poudriere.
Running it as a GitLab CI job fails; running the same script manually works.
Sadly there is no FreeBSD table at the Chemnitzer Linux-Tage. Only NetBSD for *BSD.
@txt_file by defining your own network.
More realistically by shipping your own custom CNI configuration.
And even more realistically, if it's on a notebook and you're moving between networks: either not at all, or with a VPN.
@txt_file because people cannot agree on what the current IP stack should look like. You have the hard-liners on one side who say "no NAT whatsoever", and the equally opinionated ones on the other side saying "NAT worked for v4, so we'll NAT6 everything whatsoever". Plus a lot of people who feel indifferent and are like "v4 kinda works, you'll figure it out, whatever".
And neither side wants to see that without equally good support for the other, some things will always be left broken...
Before we started using containers in production, I pointed out that communication between our components was not designed to work with NAT, and that there was no way we were going to have enough IPv4 addresses for our deployment.
So things were deployed to use IPv6 between all our Docker containers. Docker does not enable IPv6 by default, but it was not too difficult to turn on, and most of the time it just works. (We had some challenges with DigitalOcean doing a terrible job at network configuration.)
Then one day our automation for deploying security updates started rolling out a Docker update which happened to turn on NAT for IPv6 traffic. That update broke our production environment in four distinct ways, all of them due to the use of NAT.
Even though I had pointed out in advance that NAT wouldn’t work I still had not anticipated that it would break in that many different ways.
For us NAT was the problem and IPv6 was the solution. I don’t get why somebody thought it was a good idea to port the problem to IPv6. And it’s baffling to me that some people don’t want to use the solution unless they get to keep the problem.
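For what it's worth, enabling IPv6 in Docker without NAT66 happens in the daemon configuration. A minimal sketch, with a documentation prefix standing in for a real routed one; the exact keys, especially `ip6tables`, and their defaults vary between Docker versions, so check the dockerd reference for your release:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64",
  "ip6tables": false
}
```

As I understand it, with `ip6tables` disabled Docker will not install NAT66 rules, but then the `fixed-cidr-v6` prefix must actually be routed to the host, and you are responsible for any firewalling yourself.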
@agowa338 @kasperd @txt_file I feel like this is IPv4 thinking (using just one IP address). AFAIK it is possible in Linux to assign both a ULA and a GUA (or several) to a container, so that the ULA can be used for local development while the GUA is used for connections from outside.
I am not sure if Docker/Podman do this, just saying that the underlying mechanism (network namespaces) should be able to do it, and that IPv6 (as specified) has a solution for these kinds of problems.
(The same is probably true for renumbering, though that relies even more on what the containerization solution supports…)
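The namespace mechanism above can be sketched with plain `ip` commands (requires root; `fd00:...` and `2001:db8:...` are placeholder ULA and documentation prefixes, not real allocations):

```shell
# Create a namespace standing in for the "container",
# plus a veth pair connecting it to the host.
ip netns add demo
ip link add veth-host type veth peer name veth-demo
ip link set veth-demo netns demo

# Assign BOTH a ULA (stable, for local/dev traffic) and a GUA
# (for external reachability) to the same interface.
ip -n demo addr add fd00:1234:5678::2/64 dev veth-demo
ip -n demo addr add 2001:db8:abcd::2/64 dev veth-demo
ip -n demo link set veth-demo up
```

Default source-address selection (RFC 6724) should then prefer the ULA for ULA destinations and the GUA for global destinations, which is exactly the split described above.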
My experience is that Docker is doing a bad job at IPv6, but it’s even worse with IPv4.
If you have a static IPv6 prefix and know how to configure it, it can work well in Docker. But don’t expect the default configuration to work well.
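As a concrete sketch of the "static prefix plus explicit configuration" case: assuming the daemon already has IPv6 enabled and the prefix is actually routed to the host (`2001:db8:2::/64` below is a placeholder), a dedicated IPv6-enabled network can be created explicitly rather than relying on the defaults:

```shell
# Bridge network with an explicit IPv6 subnet.
docker network create --ipv6 --subnet 2001:db8:2::/64 v6net

# Containers attached to it get addresses from that subnet:
docker run --rm --network v6net alpine ip -6 addr show dev eth0
```

Whether traffic from that subnet is NATed or routed still depends on the daemon's ip6tables settings, which is where the default configuration tends to surprise people.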