I seem to have an issue with GitLab-CI and/or FreeBSD-poudriere.
Running it as a GitLab CI job fails. Running the same script manually works.
Sadly there is no FreeBSD table at the Chemnitzer Linux-Tage. Only NetBSD for *BSD.
@txt_file by defining your own network.
More realistically by shipping your own custom CNI configuration.
And even more realistically: if it's on a notebook and you're moving between networks, either not at all or only with a VPN.
@txt_file because people cannot agree on what the current IP stack should look like. You've got the hard-liners on one side saying "no NAT whatsoever", and the equally opinionated ones on the other side saying "NAT worked for v4, so we'll NAT6 everything whatsoever". Plus a lot of people who feel indifferent and are like "v4 kinda works, you'll figure it out, whatever".
And neither side wants to see that not having equally good support for the other approach will always leave some things broken...
Before we started using containers in our production environment I pointed out that communication between components in our production environment was not designed to work with NAT, and there was no way we were going to have enough IPv4 addresses for our deployment.
So things were deployed to use IPv6 between all our docker containers. Docker does not turn on IPv6 by default, but it was not too difficult to turn on and most of the time it just works. (We had some challenges with DigitalOcean doing a terrible job at network configuration.)
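For reference, turning on IPv6 in Docker looks roughly like this; a minimal sketch, where the ULA prefixes are purely illustrative and you'd substitute your own (or a routed GUA prefix):

```shell
# /etc/docker/daemon.json -- enable IPv6 on the default bridge.
# The fd00:... prefix is an example; pick one from your own ULA range.
cat > /etc/docker/daemon.json <<'EOF'
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:d0c:1::/64"
}
EOF
systemctl restart docker

# Or give a user-defined network its own IPv6 subnet:
docker network create --ipv6 --subnet fd00:d0c:2::/64 v6net
```

`ipv6` and `fixed-cidr-v6` are the documented daemon.json keys; user-defined networks take `--ipv6` and `--subnet` at creation time.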
Then one day our automation to deploy security updates started rolling out an update of docker which happened to turn on NAT for IPv6 traffic. This update broke our production environment in four distinct ways. All of them were due to the use of NAT.
Even though I had pointed out in advance that NAT wouldn't work, I still hadn't anticipated that it would break in that many different ways.
For us NAT was the problem and IPv6 was the solution. I don’t get why somebody thought it was a good idea to port the problem to IPv6. And it’s baffling to me that some people don’t want to use the solution unless they get to keep the problem.
The same applies when you've only got a dynamic IPv6 prefix assigned via DHCPv6-PD. That is kinda fixable, but it requires a restart of all containers when the prefix changes, so it's also far from ideal.
But on servers at companies, neither of these is an issue, and therefore it becomes the better solution there.
Not in a generic way, and most applications just don't like it, as in most cases it causes the network interface to be replaced.
You could do something more sophisticated with a custom CNI, but at that point it's usually cost-prohibitive and the project gets shut down with "eh, whatever, we'll just use IPv4, that 'just works'. We'll revisit IPv6 once it is production-ready in docker".
@agowa338 @kasperd @txt_file I feel like this is IPv4 thinking (using just one IP address). AFAIK it is possible in Linux to assign both a ULA and a GUA (or multiple) to a container, so that for local development the ULA can be used, while connections from outside would use the GUA.
I'm not sure if Docker/Podman do this, though; I'm just saying that the underlying mechanism (network namespaces) should be able to, and that IPv6 (as specified) has a solution for this kind of problem.
(The same is probably true for renumbering, though that relies even more on what the container solution supports…)
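At the plain network-namespace level this does work. A minimal sketch (requires root; the namespace/interface names and both addresses are made up for illustration, with 2001:db8::/32 being the documentation prefix):

```shell
# Create a namespace and a veth pair, moving one end into it.
ip netns add demo
ip link add veth-host type veth peer name veth-ctr netns demo

# Give the container-side interface BOTH a ULA and a GUA.
ip -n demo addr add fd00::2/64 dev veth-ctr        # ULA: local/dev traffic
ip -n demo addr add 2001:db8::2/64 dev veth-ctr    # GUA: external traffic
ip -n demo link set veth-ctr up

# Default source-address selection (RFC 6724) then prefers
# ULA<->ULA for local peers and GUA<->GUA for global peers.
```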