
Allowing Docker Containers Inside the Kubernetes Network

Let’s assume you’re spinning up plain Docker containers in your Kubernetes cluster. Not pods, but Docker containers. For example, you’re using docker.inside in your Jenkins builds, and Kubernetes is hosting the Jenkins instance. You’ll notice that your Docker container can’t access services in your cluster.

Let’s assume you have a setup similar to this:

[Diagram: Jenkins running inside Kubernetes next to the nexus service, spawning a new_build Docker container outside the cluster network]

You’ll notice that the nexus service is not available from the running build, since the Kubernetes networking is not available there.

If that’s the case, you need to make sure the new_build container knows about the nexus service. To achieve that, you need to:

  1. Create an entry in /etc/hosts for the ClusterIP of your nexus service. That’s required because the container runs outside the cluster, so it has no access to the cluster’s DNS services.

  2. Start the Docker container with host networking, so in your Jenkinsfile you can write something along these lines:

docker.image('build-image').inside('--network host') {
    // 'build-image' stands in for your actual build image
    // ...
}
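Step 1 above can be sketched as follows. The service name nexus comes from the diagram, and the ClusterIP value is a made-up example; in a real cluster you would read the actual value with kubectl:

```shell
# "nexus" is the service name from the diagram; the IP below is a made-up
# example. In a real cluster you would read the actual ClusterIP with:
#   kubectl get svc nexus -o jsonpath='{.spec.clusterIP}'
NEXUS_IP="10.96.12.34"

# The /etc/hosts line the build machine needs; append it with e.g.
#   echo "$HOSTS_LINE" | sudo tee -a /etc/hosts
HOSTS_LINE="$NEXUS_IP nexus"
echo "$HOSTS_LINE"
```

Note that ClusterIPs are stable for the lifetime of a service object, but a deleted and recreated service usually gets a new one, which is why the ingress approach at the end is less brittle.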

This works because your new_build container contacts the host over the regular network and tries to reach the cluster IP of the service. The netfilter rules kick in and route the traffic into the Kubernetes network.
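If you want to see those rules for yourself, and assuming kube-proxy runs in its default iptables mode, you can inspect the KUBE-SERVICES chain that kube-proxy maintains on each node (run this on a cluster node; kube-proxy tags its rules with the service name as a comment):

```shell
# Inspect the NAT rules kube-proxy installed on the host.
# Assumes kube-proxy in iptables mode; must run on a cluster node.
sudo iptables -t nat -L KUBE-SERVICES -n | grep nexus
```

Seeing a DNAT/redirect rule for the nexus ClusterIP here confirms why a host-networked container can reach it: the kernel rewrites the destination before the packet ever needs cluster DNS or the pod network.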

What I prefer, though, is to have an ingress set up and simply connect to it via the ingress virtual host name. This makes the setup resilient to cluster IP changes if the service ever gets redeployed. I still need both changes, but in /etc/hosts I just point the ingress host name to the external IP of the machine.
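With the ingress variant, the /etc/hosts entry points the ingress virtual host name at the machine’s external IP instead of at a ClusterIP. Both the host name nexus.example.com and the IP below are made-up examples:

```shell
# Hypothetical ingress virtual host and machine external IP.
NODE_IP="203.0.113.10"           # example external IP of the machine
INGRESS_HOST="nexus.example.com" # example ingress virtual host name

# The /etc/hosts line; append it with e.g.
#   echo "$HOSTS_LINE" | sudo tee -a /etc/hosts
HOSTS_LINE="$NODE_IP $INGRESS_HOST"
echo "$HOSTS_LINE"
```

The build then talks to the service through the ingress controller, so redeploying the service and getting a new ClusterIP doesn’t break anything: only the ingress routing has to stay current, and Kubernetes keeps that up to date for you.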