Wednesday, November 1, 2017

Docker: the New Form of Virtualization

Docker is becoming more popular by the day because of its approach of using containers as the basis of the system.

If we go by the definition of the Docker that is available in Wiki, it goes like this: "Docker is an open-source project that automates the deployment of applications inside software containers."

The definition itself makes it clear: Docker gives you the opportunity to run applications inside the much-talked-about entities called containers.

Continuing with the same Wiki article: Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.

This means Docker actually fills a gap that almost every one of us has faced at one time or another: "the build works on my machine, and I don't know why it is not working on yours."

When you create an application and set up the environment to make it work, problems start: the person trying to build the application on their machine might miss a step or two, or the steps might differ because their machine has different applications and versions, or even a different processor and hardware, all of which can affect the overall process. Docker comes into the picture and fills this gap. The execution model of Docker looks somewhat like this:




[Image: Docker execution model (Courtesy: Docker)]

If you look at the architecture, you can see that a Docker container never knows which platform the image is running on; internally, the Docker Engine takes care of all of that. Where a VM virtualizes the hardware of the host and runs a full guest OS on top of it, Docker just shares the host's kernel and isolates processes on top of it, which makes it very lightweight and easy to use.

Comparing Docker with VMs: each VM is a full running OS instance on the same machine, which wastes resources and is not an efficient way of using them.

Coming back to the main deal, Docker has some specific terms that might come again and again in this tutorial. They are as follows:
  1. Docker Engine: 
    It is the engine that resides at the core of Docker; in reality, it is the heart and soul of the application.
  2. Docker Client:
    It is the app that interacts with the Docker engine.
  3. Docker Image:
    It is basically like a class (in the Java world): the entity from which we create containers, which are the functional entities. You can think of Docker images like AMIs (Amazon Machine Images, in the AWS world), from which you can create workstations, and those workstations work exactly the way the selected image was built to work.
  4. Docker Hub Registry:
    Hub Registry is the cloud hub or repository from which you can get images and use them for your development. You can create an account and upload your own images, either for public use or in private mode.
  5. Docker Container:
    The containers are the entities that actually take the images from the passive world to the active world. They are the running objects of the images. They are the ones that interact with the Docker host and get the job done.
  6. Docker Volumes:
    The volumes act as a hard drive for a container. In simple words, you create a volume and attach it to the container so that you can take the backup of the state of the container at any stage in real time.
  7. Docker Network:
    The networks are the entities that are required to communicate between the containers and between the host and the container.
  8. Dockerfile:
    It is a text file containing the instructions Docker uses to build your own image (covered later in this tutorial).
  9. Docker Compose:
    It is a tool for defining and running multi-container applications from a single YAML file.
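Since the last two terms are only named above, here is a minimal, illustrative Dockerfile to give you a feel for it (the file and its contents are a hypothetical sketch, not something used later in this tutorial):

```dockerfile
# Start from the official Nginx image as the base
FROM nginx:alpine

# Copy a static site from the build context into the web root
COPY ./site /usr/share/nginx/html

# Document that the container listens on port 80
EXPOSE 80
```

Building it with docker build -t my-nginx . would give you an image you can run like any other.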
Docker Installation:

Coming to support for Docker: it is best supported in Linux environments, since the concept of containers is best integrated with Linux. Only the installation differs between platforms; all the commands we use with Docker are the same in every environment.

So let us take the first step of installing Docker on our machine (Mac and Windows users, please refer here to the OSX and Windows parts :) ).

For Linux Users please use:
  • Create a directory named docker and go inside it using cd command
    • mkdir docker
    • cd docker
  • Download the script to install the docker
    • wget https://get.docker.io  -O install_docker.sh
      • This will download the shell script into the docker folder and name it install_docker.sh
  • Now we will run the script with sudo permissions
    • sudo sh install_docker.sh
  • Follow the on-screen instructions and wait a while; the script will download the latest version of Docker and install it with all the dependencies it needs, leaving you with Docker installed on your machine and all the path variables preset for you.
For Mac Users please use:
  • You can go here
  • Download the .dmg
  • Double click the .dmg and it will install the docker automatically in the system
  • Once the docker is installed you will see the whale icon in your taskbar telling you the status of the Docker instance.
  • Through that icon, you can start and stop the docker service, by right-clicking it.

    To test whether or not Docker is installed correctly, you can also check with a command:
    • docker --version
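If you are scripting your setup, a small portable check like the following tells you whether the docker CLI is on your PATH at all (a generic sketch, not specific to this tutorial):

```shell
# Portable check: does the docker CLI exist on PATH?
if command -v docker >/dev/null 2>&1; then
  echo "docker is installed"
else
  echo "docker is not installed"
fi
```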
    Creating the first container:
    Containers are the active version of images, and images, in turn, are passive in nature. In a very simple sense, we can say that images are classes and containers are their live objects.

    So when we say we have to spawn a container, it is mainly a spawned version of an image. Whenever you tell Docker to instantiate a container from a specific image, it first searches for the image locally and then in Docker Hub. Only if the image is not available locally is it downloaded from Docker Hub (the cloud), saved locally (for future reference), and then used to create the container.
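You can see this lookup order the first time you run a container from an image you don't have locally; the console output looks roughly like this (illustrative and trimmed, the exact layer lines will differ):

```
$ docker run -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
...
Status: Downloaded newer image for nginx:latest
```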

    One important thing to note about Docker is that it is mainly a command-line tool (CLI), so you need to learn lots of commands in order to interact with it. To ease this burden, some great minds have developed frameworks that give you an interactive interface for working with Docker. One of them is Portainer, which is very easy to use: it ships as a Docker image itself, so you can just start its container and begin using it.

    Follow this page for installing the Portainer Container with its image in local.

    • $ docker volume create portainer_data 
    • $ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
    The commands are pretty self-explanatory. The first creates a volume to store the data generated inside the container, so that you can back it up. The second creates a container from the portainer/portainer image that runs in daemon (-d) mode, with port (-p) 9000 of the host mapped to 9000 of the container, and with the volume (-v) created in the first step mounted at /data.

    The above step spawns a Docker container and exposes port 9000. You can now go to http://localhost:9000 in a browser and check the Portainer page.
    • At first run, you have to give the password for Admin user and move forward. 
    • After that, you have to select whether the Docker instance administered by Portainer is running on the same machine as Portainer or on some other machine. For our case, we choose Local.
                                  

    • After that step the overall UI looks something like this:

    The left panel has the shortcuts, and the right panel shows the UI for the option you choose.

    So we have downloaded the image for Portainer and successfully launched a container from it.

    Suppose we want to create a container for Nginx. For that, we need to go to Docker Hub and search for the official Nginx image.


    You might get hundreds of results, but we are going to use the official one. You can choose a different one depending on your situation, but for now, we use the official Nginx Docker image.
    1. Go to terminal
    2. Issue the command: docker pull nginx (you can find this command on the right-hand side of the image's Docker Hub page).
      1. You can use Portainer as well to download the image: go to the Images section, enter the name of the image in the text box, and download it.
    3. Wait for the docker to download the image and extract it
    4. Once the image is downloaded you can view it in Portainer as well in the images section.

    5. Now you can use Nginx Image to spawn a new container for Nginx.
    6. Creating Container:
      1. via Command Line:
        1. docker run --name="NAME_OF_CONTAINER" -d -p 8080:80 nginx
        2. You can go to http://localhost:8080 and you will have the landing page of the nginx.
      2. via Portainer UI:
        1. Go to Containers Section -> Add Container, fill the fields and Start Container to start it.

      3. Now the container will be available to use.
      4. You can go to http://localhost:8080 and you will have the landing page of the nginx.
    Viewing the Containers/Images information:

    You can view the Containers/Images information via these commands:
    1. docker ps -a
      1. This will list all containers (running and stopped) currently on the host machine.
    2. docker image inspect <id/name>
      1. This returns information about the image
    3. docker container inspect <id/name>
      1. This returns information about the container
    4. docker network inspect <id/name>
      1. This returns information about the network
    5. docker volume inspect <id/name>
      1. This returns information about the volume
    Deleting the Containers:

    Deleting a container is very simple.
    1. via Command Line:
      1. docker rm <id/name>
      2. In case the container is running, this will throw an error telling you so; in that case, either use the force option to delete the container, or stop the container first and then delete it.
        docker rm -f <id/name>
    2. via Portainer UI:
      1. Go to container section -> select the container and Remove or Force Remove
    Deleting the Images:

    Deleting images is very simple.
    1. via Command Line:
      1. docker image rm [OPTIONS] IMAGE
      2. docker image rm <image_name/id>
      3. In case the image has containers based on it, remove those containers first (docker rm), or force-remove the image with -f (note that an image used by a running container cannot be removed, even with -f):
        docker image rm -f <id/name>
    2. via Portainer UI:
      1. Go to the images section -> select the image and Remove or Force Remove

    The significance of Networks:

    Networks are a very important part of Docker installations. We can create networks and assign subnet masks and ranges to them, to be used when creating containers, so that containers belonging to the same network can talk to each other via the network IPs allocated to them.

    When Docker is installed, it comes with 3 default network types (on Mac). They are:
    1. Bridge
    2. Host 
    3. None
    The bridge network is responsible for communication between the containers belonging to it.

    Here is the command to see the details of the network:
    1. via Command Line:
      1. docker network inspect <name>
        1. Ex: docker network inspect bridge
          • This will return a big JSON structure telling you the subnet mask, the default gateway, the containers attached to this bridge, and more
    2. via Portainer UI:

      1. Go to network section and you will have all the networks with the subnet masks as well as the Default Gateway being configured.
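To give an idea of what that JSON contains, here is a trimmed, illustrative excerpt of what docker network inspect bridge might return (exact values will differ on your machine):

```json
[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "IPAM": {
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {}
    }
]
```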


    By default, when a container is spawned it is attached to the default bridge network; we can specify a different network explicitly:
    1. via Command Line:
      1. docker run --name="NAME_OF_CONTAINER" -d -p 8080:80 --network="NAME_OF_THE_NETWORK" nginx
    2. via Portainer UI:
      1. When you go to Add Container Section, in the last section you have the network tab that takes care of the container being part of the specific network.

      2. After specifying the network you can create a container that will be the part of the same network.
    By the properties of networks, Docker containers belonging to different networks can't talk to each other by default; we would have to do some network configuration to allow that. But containers belonging to the same network can talk to each other by default, via the IP addresses allocated to them from the network's address range.

    For a practice session, we will create 3 different Nginx containers and let them try to talk to each other.
    1. For this, we are going to use a different version of Nginx, i.e. nginx:alpine (refer to "The significance of Images with Tags" below).
    2. So we spawn 2 containers namely nginx_b1, nginx_b2 in bridge network and 1 in host network named nginx_h1.
      1. docker run -d -p 8080:80 --network="bridge" --name="nginx_b1" nginx:alpine
      2. docker run -d -p 8081:80 --network="bridge" --name="nginx_b2" nginx:alpine
      3. docker run -d -p 8082:80 --network="host" --name="nginx_h1" nginx:alpine
    3. Now attach a shell to the container named nginx_b1 (docker exec -it nginx_b1 sh) and ping the IP address of nginx_b2; you will be able to reach it, since both belong to the same network, but you cannot reach nginx_h1 the same way, since the networks are different. (Note: on the default bridge network, containers cannot resolve each other by container name; use the IP addresses shown by docker container inspect, or a user-defined network if you want name resolution.) Here is the video reference for this scenario.
    Example of MongoDb and AdminMongo:

    In this example, we are going to run a MongoDB container and an adminMongo container on the same network and then make them talk to each other.
    1. Install MongoDb
      1. docker pull mongo
    2. Install admin mongo
      1. docker pull mrvautin/adminmongo
    3. Create a container for mongo image:
      1. docker run -d -p 27017:27017 --name="mongo" --network="bridge" mongo
    4. Create a container for admin mongo image:
      1. docker run -d -p 1234:1234 --name="admin_mongo" --network="bridge" mrvautin/adminmongo
    5. Once both containers are up and running, you can go to http://localhost:1234 and create a connection in this screen:
    6. You can get the IP at which mongo is running via the Portainer UI, in the Containers section.
    7. Use that IP in the connection string in adminMongo.
    8. Once the connection is successful you can play with MongoDb.
    So we have actually created a network based stack of AdminMongo and Mongo.
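Looking ahead to Docker Compose (term 9 above), the same two-container stack could be sketched as a single docker-compose.yml (an illustrative sketch, assuming the image names used in the steps above; Compose places both services on a shared network automatically, so adminMongo can reach MongoDB via the service name mongo):

```yaml
version: "2"
services:
  mongo:
    image: mongo            # official MongoDB image, as pulled above
    ports:
      - "27017:27017"
  adminmongo:
    image: mrvautin/adminmongo
    ports:
      - "1234:1234"
```

With this file in place, docker-compose up -d would bring up both containers in one step instead of the two docker run commands.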

    The significance of Images with Tags:

    The images we pulled from Docker Hub so far mostly carry the latest tag. You might be wondering what tags are.

    Tags are just the namespace/versioning identity that the creator of an image uses to keep track of the image's development.
    In other words, a tag is just a version label that tells users what they are going to get if they create a container from that image.

    Let us take an example. If you say docker pull mongo, you will always get the latest build of mongo in that image, irrespective of version. Alternatively, you can pull a tag-based image of mongo that specifies exactly which version you want. If you go to the official image repository page of Mongo, here, you can see the different tags available, each corresponding to the version you get when you download the image with that tag, like this: docker pull mongo:3.0.15. This is the significance of images with tags.

    Dockerfiles: