12 April 2020
Posted By: Alexander Goida
Considerations when deploying to Docker and Kubernetes

Before going deeper into examples of deployment to Docker and Kubernetes, I want to talk about considerations which you need to take into account in the general case when you want to create a containerized application. I'll touch on each of the topics mentioned here in more technical detail in future articles.

Localhost & Networking

You need to take care of networking when deploying to Docker or Kubernetes. When you deploy a single service or application (a component, further in the text), you won't be able to access it by default. You need to expose ports from the container to the host system. Let's say you've got something like http://localhost:5000/ and you can access it from Postman. Then you created another component and deployed it with the address http://localhost:6000. Now you can access both from Postman, but the components cannot access each other while deployed to Docker (or Kubernetes). The problem is that localhost is resolved inside the container or pod where the component is deployed. Imagine that you've deployed the components on different machines, but are still using localhost to communicate. There are several ways of solving this.

You can deploy everything in the same container, though this is bad practice and you should not do it in production. In such a case localhost will be resolved correctly. In some draft and experimental cases you may want this, for example when prototyping utility bundles which are not going to be part of a production deployment.

Containers in Docker can communicate with each other using their container names. To achieve this they also need to belong to the same network. For example, if you have an image web which is a simple web server, then you can execute the following commands, and the running containers will be able to access each other using URLs like http://web1/ and http://web2/.
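The original commands aren't shown here; a minimal sketch of this setup could look like the following. The network name webnet is an assumption, and the last command assumes curl is available inside the image.

```shell
# Create a user-defined bridge network (the name "webnet" is illustrative)
docker network create webnet

# Start two containers from the same "web" image, attached to that network
docker run -d --name web1 --network webnet web
docker run -d --name web2 --network webnet web

# From inside web1, web2 is reachable by its container name
# (assumes curl exists in the image)
docker exec web1 curl -s http://web2/
```

Name resolution by container name works on user-defined networks via Docker's embedded DNS; the default bridge network does not provide it.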

A simpler way of achieving this is to use docker-compose, which orchestrates container creation. It will attach all created containers to the same network, so it will be possible to use service names for communication between components. More about networking in docker-compose here.
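As a sketch, a compose file like the following (service and image names are assumptions) puts both components on the same default network, so they can reach each other as http://web:5000 and http://api:6000 instead of localhost:

```yaml
# docker-compose.yml (sketch; service and image names are assumptions)
version: "3"
services:
  web:
    image: my-web    # reachable from other services as http://web:5000
    ports:
      - "5000:5000"
  api:
    image: my-api    # reachable from other services as http://api:6000
    ports:
      - "6000:6000"
```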

In Kubernetes you can also use the literal names of your services to establish connectivity, though the mechanics differ in the details. I found a very good article which explains networking in Kubernetes.
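For illustration, a Service object like the following (name, label, and ports are assumptions) makes the pods behind it reachable as http://web/ from other pods in the same namespace, via the cluster DNS:

```yaml
# service.yaml (sketch; name, label, and ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # routes traffic to pods carrying this label
  ports:
    - port: 80
      targetPort: 80
```

The cluster DNS also exposes the fully qualified name web.&lt;namespace&gt;.svc.cluster.local for cross-namespace access.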

Persistence

Everything which your component stores or updates in files or local databases will be lost when the container restarts. Containers are meant to be very small, so they were designed to be stateless. It's possible to overcome this limitation by using mounted volumes.

In Docker you have three types of mounted storage: volumes, bind mounts and tmpfs. More about persistence in Docker can be found here.

Volumes are stored in an area managed by the Docker engine and are not meant to be accessed from outside. Bind mounts map real folders and files on the host machine to file system objects inside containers.
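A quick sketch of both options; the volume, container, image, and path names are assumptions:

```shell
# Named volume: created and managed by the Docker engine
docker volume create app-data
docker run -d --name db -v app-data:/var/lib/data my-db-image

# Bind mount: map a real host folder into the container
docker run -d --name web -v /srv/web/config:/app/config my-web-image
```

Data written to app-data survives removing and recreating the db container, because the volume's life cycle is independent of the container's.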

In Kubernetes you also have Volumes (more types than in Docker) and Persistent Volumes. The whole list of volume types in Kubernetes can be found here. Volumes and Persistent Volumes decouple the data life cycle from containers and pods, respectively. The independent life cycle enables you to safely restart containers and pods without loss of data.
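As a sketch, a pod can claim persistent storage through a PersistentVolumeClaim; the names, image, and size below are assumptions:

```yaml
# pvc.yaml (sketch): the claim's life cycle is independent of any pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# pod.yaml (sketch): mounts the claim, so data survives pod restarts
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: db
      image: my-db-image
      volumeMounts:
        - mountPath: /var/lib/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```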

Configuration

The rule of thumb is to develop applications so that the configuration is supplied from an external resource (files or databases). In the case of a containerized application, files or local databases are part of the component's Docker image and cannot be considered an external configuration. You can use bind mounts in Docker to overcome this limitation and supply configuration from outside the container. You can mount files or folders. For example, you can build an image without any appsettings.json files and mount a real one when a container starts. This also allows you to update the configuration during the lifetime of the container; it doesn't need to be restarted to pick up changes.
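A sketch of mounting a single configuration file over the container's file system; the image name and paths are assumptions:

```shell
# Supply appsettings.json from the host instead of baking it into the image
docker run -d --name api \
  -v /srv/api/appsettings.json:/app/appsettings.json:ro \
  my-api-image
```

Edits to the host file become visible inside the container; whether the application applies them at runtime depends on whether it watches the file for changes.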

In Kubernetes you can use ConfigMaps. A good overview of them can be found here. In a nutshell, a ConfigMap is a k8s object which represents a key-value dictionary. This dictionary can be mounted and presented as a file in pods. For example, an appsettings.json file can be represented by such a ConfigMap object.
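A sketch of that idea; the names, image, and JSON content are assumptions:

```yaml
# configmap.yaml (sketch): the file content is a value in the dictionary
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  appsettings.json: |
    { "Logging": { "LogLevel": { "Default": "Information" } } }
---
# pod.yaml (sketch): the key is mounted as /app/appsettings.json
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: my-api-image
      volumeMounts:
        - mountPath: /app/appsettings.json
          subPath: appsettings.json
          name: config
  volumes:
    - name: config
      configMap:
        name: api-config
```

Note that a subPath mount like this does not receive ConfigMap updates automatically; mounting the ConfigMap as a whole directory does.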

Migration from Docker-Compose to Kubernetes

You may want to migrate from using docker-compose to Kubernetes. Kompose will help you apply existing docker-compose files to a k8s cluster and create YAML definitions for deployments and services. Below you'll find a simple example of what it can do. I think you may cut corners using this tool and get done with the migration faster.

For example, we have a very simple web server. You can put anything into the index.html file in order to check the web server.
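The original image definition isn't shown here; a minimal sketch of such a web server image (the nginx base is an assumption) could look like:

```dockerfile
# Dockerfile (sketch): serve a static index.html with nginx
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
```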

A docker-compose file to start two containers from the same image:
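The compose file itself isn't reproduced here; a sketch consistent with the description (two services built from the same image; names and ports are assumptions):

```yaml
# docker-compose.yml (sketch; service names and host ports are assumptions)
version: "3"
services:
  web1:
    image: web
    ports:
      - "5001:80"
  web2:
    image: web
    ports:
      - "5002:80"
```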

In order to use Kompose, follow the description in their installation guide. The guide says that you can use kompose up to deploy docker-compose.yml files to Kubernetes. But in my opinion you may want to use kompose convert and then forget about docker-compose. The reason for this is that in a real situation you may need deeper configuration of your k8s objects, and keeping both docker-compose and k8s YAML files together is overwhelming. Anyway, just for the sake of a complete example, below you can find what kompose v1.21.0 generates for Kubernetes from the definition above.
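A sketch of the conversion workflow; the generated file names follow kompose's service-name convention but may differ in your version:

```shell
# Generate Kubernetes YAML definitions from the compose file
kompose convert -f docker-compose.yml

# Apply the generated definitions to the cluster
# (file names are assumptions based on kompose's naming scheme)
kubectl apply -f web1-deployment.yaml -f web1-service.yaml \
  -f web2-deployment.yaml -f web2-service.yaml
```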

The command produces definitions for deployments and services for each service in the docker-compose file (4 in total in our case). I merged all the definitions together and below you can analyze what it gives. The key kompose.cmd has a value which is specific to my environment, and yours may be different.
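The merged output isn't reproduced here; heavily abbreviated, the generated pair for one of the services looks roughly like the following. The annotations and io.kompose.service labels follow kompose's conventions, but the exact values are environment-specific:

```yaml
# web1-deployment.yaml (abbreviated sketch of kompose output)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml  # environment-specific
    kompose.version: 1.21.0
  labels:
    io.kompose.service: web1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web1
  template:
    metadata:
      labels:
        io.kompose.service: web1
    spec:
      containers:
        - name: web1
          image: web
          ports:
            - containerPort: 80
---
# web1-service.yaml (abbreviated sketch)
apiVersion: v1
kind: Service
metadata:
  name: web1
  labels:
    io.kompose.service: web1
spec:
  ports:
    - name: "5001"
      port: 5001
      targetPort: 80
  selector:
    io.kompose.service: web1
```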

The next article will be dedicated to hands-on experience with deploying and configuring components on Docker and Kubernetes.
