
Optimizing Docker Containers

When creating containers, splitting the build into multiple layers dramatically reduces development time, since Docker can reuse the cached layers that did not change. Unfortunately, shipping an image with a bunch of layers is bad practice. So how do we reconcile the two? Simple: by compiling the Dockerfiles with docker-optimizer.

So I created a small program named docker-optimizer which, as the name implies, optimizes Dockerfiles. The program does two things (a short sketch of the idea follows the list):

  1. It collapses consecutive RUN layers into a single layer

  2. It collapses multiple ENV layers into a single layer
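
To make this concrete, here is a minimal sketch of the collapsing step, written against a plain-text Dockerfile. It is not the actual docker-optimizer implementation (that lives in the repository linked further below); it only illustrates how consecutive RUN instructions can be folded into one, and ENV handling works analogously.

from typing import List


def join_continuations(lines: List[str]) -> List[str]:
    """Fold lines ending in a backslash into one logical line."""
    joined: List[str] = []
    buf = ""
    for line in lines:
        if line.rstrip().endswith("\\"):
            buf += line.rstrip()[:-1] + " "
        else:
            joined.append(buf + line.strip())
            buf = ""
    if buf:
        joined.append(buf.strip())
    return joined


def collapse_runs(lines: List[str]) -> List[str]:
    """Merge consecutive RUN instructions into a single RUN layer."""
    out: List[str] = []
    pending: List[str] = []

    def flush() -> None:
        if pending:
            out.append("RUN " + " && ".join(pending))
            pending.clear()

    for line in join_continuations(lines):
        if not line or line.startswith("#"):
            continue  # drop blank lines and comments
        instruction, _, rest = line.partition(" ")
        if instruction.upper() == "RUN":
            pending.append(rest.strip())
        else:
            flush()
            out.append(line)
    flush()
    return out


if __name__ == "__main__":
    import sys

    with open(sys.argv[1]) as f:
        print("\n".join(collapse_runs(f.read().splitlines())))

Running this sketch over the development Dockerfile below yields essentially the single RUN layer you will see in the compiled output.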

That means that during development we can have a Dockerfile such as:

FROM python:3.8-slim-buster

#============================================================================
# Install requirements
#============================================================================
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

#============================================================================
# Update the package list
#============================================================================
RUN apt-get update -y

#============================================================================
# Install certbot
#============================================================================
RUN apt-get install -y curl && \
    curl -LO https://dl.eff.org/certbot-auto && \
    mv certbot-auto /usr/local/bin && \
    chown root /usr/local/bin/certbot-auto && \
    chmod 755 /usr/local/bin/certbot-auto && \
    certbot-auto --install-only -n

#============================================================================
# Install kubectl
#============================================================================
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl && \
    mv kubectl /usr/local/bin && \
    chmod +x /usr/local/bin/kubectl

#============================================================================
# cleanup package list
#============================================================================
RUN rm -rf /var/lib/apt/lists/*

COPY new-certificate* /usr/local/bin/
COPY letsencrypt-operator* /usr/local/bin/
COPY adhesive_config.yml /usr/local/bin/.adhesive/config.yml

USER 1000
ENV LE_AUTO_SUDO=
WORKDIR /usr/local/bin
ENTRYPOINT ["python", "letsencrypt-operator.py"]

But after running the compilation step, we have:

# compiled by docker-optimizer
# https://github.com/bmustiata/docker-optimizer
from python:3.8-slim-buster
copy requirements.txt /requirements.txt
run pip install -r /requirements.txt && apt-get update -y && apt-get install -y curl &&     curl -LO https://dl.eff.org/certbot-auto &&     mv certbot-auto /usr/local/bin &&     chown root /usr/local/bin/certbot-auto &&     chmod 755 /usr/local/bin/certbot-auto &&     certbot-auto --install-only -n && curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl &&     mv kubectl /usr/local/bin &&     chmod +x /usr/local/bin/kubectl && rm -rf /var/lib/apt/lists/*
copy new-certificate* /usr/local/bin/
copy letsencrypt-operator* /usr/local/bin/
copy adhesive_config.yml /usr/local/bin/.adhesive/config.yml
user 1000
env LE_AUTO_SUDO=
workdir /usr/local/bin
entrypoint ["python", "letsencrypt-operator.py"]
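
A quick way to check that the compiled Dockerfile really produces fewer layers is to compare the image histories after building both variants. Here is a small sketch using the Docker SDK for Python; the image tags are placeholders for whatever names you build the two images under locally:

import docker

client = docker.from_env()

# The tags are hypothetical; substitute the names of your dev and compiled builds.
for tag in ("letsencrypt-operator:dev", "letsencrypt-operator:optimized"):
    image = client.images.get(tag)
    # history() returns one entry per recorded build step, newest first
    print(tag, "->", len(image.history()), "history entries")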

This optimization is excellent, since when building the Docker containers for Germanium, most of the time is spent in a few major stages:

  • update the system

  • install python

  • install chrome

  • install Germanium

As you can imagine, it gets tedious fast when all I want is to update the germanium package locally, without ending up with a bunch of layers that afterward need manual synchronization.

Imagine the frustration of running a full build only to find out at the end that some changes were not ported back into the optimized version.

Now I write the "dev" version of the Dockerfile, then compile it. There is no way to forget to install some random package or to miss some other minor change, since there is a single source of truth.

Done!