Wednesday, December 13, 2017

Copy item path as a shortcut in Mac

In Windows, there are utilities that give you the option to copy the path of any folder or file by right-clicking it.

I tried to get the same functionality on a Mac but didn't find any program for it. So I searched some more and found a decent way to achieve this: we can add a clickable option to an item's right-click menu that copies its path.

So, let's get started. :)

  1. Launch Spotlight Search and type automator


  2. Select the Service (gear icon) and click Choose.


  3. Search for Copy to Clipboard in the search box and drag the action into the right-hand pane.
  4. Select Files or Folders in the first drop-down and Finder in the second drop-down.
     

  5. Now press Command + S, give the service a name, and save it.


  6. Now you can go to any folder, right-click the folder/file, and copy its path from the context menu.

Happy coding. :)


Wednesday, November 1, 2017

Docker the new form of Virtualization

Docker is becoming more popular by the day because of its approach of using containers as the building blocks of a system.

If we go by the definition of the Docker that is available in Wiki, it goes like this: "Docker is an open-source project that automates the deployment of applications inside software containers."

The definition itself is pretty clear: Docker gives you the opportunity to run applications inside the much-hyped units called containers.

Continuing with the same Wiki article: Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.

This means that it actually fills a gap almost every one of us has faced at some point: "the build works on my machine, and I don't know why it is not working on yours."

When you create an application and set up the environment to make it work, the problems start: there is a good chance that the person trying to build the application on their machine misses a step or two, or the steps differ because their machine has different applications and versions, not to mention different processors and hardware, all of which can affect the overall process. Docker comes into the picture and fills that gap in one way or another. The execution model of Docker looks somewhat like this:




Courtesy: Docker

If you look at the architecture, you can see that a Docker container never knows which platform it is running on; internally, the Docker Engine takes care of all of that. Where a VM virtualizes the host's hardware and runs a full guest OS, Docker simply shares the host's kernel and processes and builds an isolated environment around the application, making it very lightweight and easy to use.

Comparing Docker with VMs, each VM is a full running instance of an OS on the same machine, which wastes resources and is not an efficient way of using them.

Coming back to the main deal, Docker has some specific terms that might come again and again in this tutorial. They are as follows:
  1. Docker Engine: 
    It is the engine that resides at the core of Docker; in reality, it is the heart and soul of the whole application.
  2. Docker Client:
    It is the app that interacts with the Docker engine.
  3. Docker Image:
    It is basically like a class (in the Java world): the entity from which we create containers, which are the functional entities. You can also think of a Docker image as an AMI (Amazon Machine Image, in the AWS world), from which you can launch workstations that behave exactly the way the selected image was built to behave.
  4. Docker Hub Registry:
    The Hub Registry is the cloud repository from which you can get images and use them for your development. You can create an account and upload your own images for public use, or keep them private.
  5. Docker Container:
    Containers are the entities that take images from the passive world to the active world. They are the running objects of the images; they interact with the Docker host and get the job done.
  6. Docker Volumes:
    Volumes act as a hard drive for a container. In simple words, you create a volume and attach it to a container so that the container's data persists and you can back up its state at any stage.
  7. Docker Network:
    Networks are the entities required for communication between containers, and between the host and a container.
  8. DockerFile
  9. Docker Compose
Docker Installation:

Coming to platform support, Docker is best supported in Linux environments, since the concept of containers is most deeply integrated with Linux. Only the installation differs; the Docker commands we use are the same in every environment.

So let us take the first step of installing Docker on our machine (Mac and Windows users, please refer here to the OS X and Windows parts :) ).

For Linux Users please use:
  • Create a directory named docker and go inside it using cd command
    • mkdir docker
    • cd docker
  • Download the script to install the docker
    • wget https://get.docker.io  -O install_docker.sh
      • This will download the install script into the docker folder and save it as install_docker.sh
  • Now we will run the script with sudo permissions
    • sudo sh install_docker.sh
  • Follow the on-screen instructions and wait a while; the script will download the latest version of Docker, install it with all the dependencies it needs, and set up the path variables, leaving you with Docker ready to use on your machine.
For Mac Users please use:
  • You can go here
  • Download the .dmg
  • Double-click the .dmg and it will install Docker on the system automatically.
  • Once Docker is installed you will see the whale icon in your menu bar telling you the status of the Docker instance.
  • Through that icon you can start and stop the Docker service by right-clicking it.

    To test whether or not Docker is installed properly, you can also check with a command:
    • docker --version
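    If you also want to confirm that the engine can actually run containers, a common smoke test (assuming your user is allowed to talk to the Docker daemon) is:
    • docker run hello-world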
    Creating the first container:
               Containers are the active version of images, and images, in turn, are passive in nature. In very simple terms, images are classes and containers are their live objects.

    So when we say we have to spawn a container, it is essentially a spawned instance of an image. Whenever you tell Docker to instantiate a container for a specific image, it first searches for the image locally and only then looks in Docker Hub. If the image is not available locally, it is downloaded from Docker Hub (the cloud), saved locally (for future use), and then used to create the container.
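    For instance (assuming Docker is already installed), you can see this behaviour with two simple commands:
    • docker images
      • Lists the images already cached locally.
    • docker run -d nginx
      • If nginx is not in that local list, Docker pulls it from Docker Hub first and then creates the container from it.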

    One of the important things to note about Docker is that it is mainly a command-line tool (CLI), so you need to learn a lot of commands in order to interact with it. To ease this burden, some great minds have developed frameworks that give you an interactive interface for Docker. One of them is Portainer, which is very easy to use: it comes as a Docker image itself, so you can just start its container and begin using it.

    Follow this page for installing the Portainer container from its image locally.

    • $ docker volume create portainer_data 
    • $ docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
    The commands are pretty self-explanatory. The first command creates a volume to store the data generated inside the container, so that you can take a backup of it. The second command creates a container from the portainer/portainer image, running in detached (-d) mode, with port (-p) 9000 of the host mapped to port 9000 of the container and the volume (-v) created in the first step mounted at /data.

    The above step spawns a Docker container and exposes port 9000. So you can now go to http://localhost:9000 in a browser and check the Portainer page.
    • On the first run, you have to set the password for the admin user and move forward.
    • After that, you have to select whether the Docker instance being administered by Portainer is running on the same machine as Portainer or on some other machine. In our case we choose Local.
                                  

    • After that step the overall UI looks something like this:

    The left panel has the shortcuts, and the right panel shows the UI for whichever option you choose.

    So we have downloaded the Portainer image and successfully launched a container from it.

    Now let us suppose we want to create an Nginx container. For that, we need to go to Docker Hub and search for the official Nginx image.


    You might get hundreds of results, but we are going to use the official one. You can use a different image depending on your situation, but for now we will use the official Nginx Docker image.
    1. Go to terminal
    2. Issue the command docker pull nginx (you can find this command on the right-hand side of the image's Docker Hub page).
      1. You can use Portainer as well to download the image, by going to the Images section, entering the name of the image in the text box, and pulling it.
    3. Wait for Docker to download the image and extract it.
    4. Once the image is downloaded you can view it in Portainer as well in the images section.

    5. Now you can use Nginx Image to spawn a new container for Nginx.
    6. Creating Container:
      1. via Command Line:
        1. docker run --name="NAME_OF_CONTAINER" -d -p 8080:80 nginx
        2. You can go to http://localhost:8080 and you will see the Nginx landing page.
      2. via Portainer UI:
        1. Go to the Containers section -> Add Container, fill in the fields, and click Start Container.

      3. Now the container will be available to use.
      4. You can go to http://localhost:8080 and you will see the Nginx landing page.
    Viewing the Containers/Images information:

    You can view the Containers/Images information via these commands:
    1. docker ps -a
      1. This lists all the containers (running and stopped) currently present on the host machine.
    2. docker image inspect < id/name >
      1. This returns the information regarding the image.
    3. docker container inspect < id/name >
      1. This returns the information regarding the container.
    4. docker network inspect < id/name >
      1. This returns the information regarding the network.
    5. docker volume inspect < id/name >
      1. This returns the information regarding the volume.
    Deleting the Containers:

    Deleting a container is very simple.
    1. via Command Line:
      1. docker rm < id/name >
      2. In case the container is running it will throw an error telling you so; in that case you can either use the force option to delete the container or stop the container first and then delete it.
        docker rm -f < id/name >
    2. via Portainer UI:
      1. Go to container section -> select the container and Remove or Force Remove
    Deleting the Images:

    Deleting an image is very simple.
    1. via Command Line:
      1. docker image rm [OPTIONS] IMAGE
      2. docker image rm < image_name/id >
      3. In case the image is still being used by a container, you can use -f to force-remove the image; it is usually cleaner to remove the container first and then delete the image.
        docker image rm -f < image_name/id >
    2. via Portainer UI:
      1. Go to the Images section -> select the image and Remove or Force Remove

    The significance of Networks:

    Networks are a very important part of a Docker installation. We can create networks, assign them subnet masks and address ranges, and then use them while creating containers, so that containers belonging to the same network can talk to each other via the IPs allocated to them.
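    For example, a user-defined bridge network with its own subnet can be created like this (the network name my_app_net and the subnet 172.25.0.0/16 are just illustrative values):
    • docker network create --driver bridge --subnet 172.25.0.0/16 my_app_net
    • docker run -d --network="my_app_net" --name="web1" nginx
    Any container started with --network="my_app_net" gets an IP from that range and can reach the other containers on the same network.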

    When Docker is installed, it comes with 3 default networks (on Mac). They are:
    1. Bridge
    2. Host 
    3. None
    The bridge network is responsible for communication between the containers that belong to it.

    Here is the command to see the details of the network:
    1. via Command Line:
      1. docker network inspect
        1. Ex: docker network inspect bridge
          • This will return a big JSON structure that tells you the subnet mask, the default gateway, the containers published on this bridge, and so on.
    2. via Portainer UI:

      1. Go to the Networks section and you will see all the networks along with their configured subnet masks and default gateways.


    By default, when a container is spawned it is attached to the default bridge network; we can specify a different network as follows:
    1. via Command Line:
      1. docker run --name="NAME_OF_CONTAINER" -d -p 8080:80 --network="NAME_OF_THE_NETWORK" nginx
    2. via Portainer UI:
      1. When you go to the Add Container section, in the last section there is a Network tab that takes care of making the container part of a specific network.

      2. After specifying the network you can create a container that will be the part of the same network.
    By the properties of networks, Docker containers belonging to different networks can't talk to each other by default; we have to do some extra network configuration to allow that. Containers belonging to the same network, however, can talk to each other by default via the IP addresses allocated to them from the network's address range.

    For a practice session, we will create 3 different Nginx containers and try to let them talk to each other.
    1. For this, we are going to use a different version of Nginx, i.e. nginx:alpine (refer to The significance of Images with Tags below).
    2. So we spawn 2 containers named nginx_b1 and nginx_b2 in the bridge network, and 1 in the host network named nginx_h1.
      1. docker run -d -p 8080:80 --network="bridge" --name="nginx_b1" nginx:alpine
      2. docker run -d -p 8081:80 --network="bridge" --name="nginx_b2" nginx:alpine
      3. docker run -d -p 8082:80 --network="host" --name="nginx_h1" nginx:alpine
    3. Now attach a shell to the container named nginx_b1 (docker exec -it nginx_b1 sh) and ping the IP address of nginx_b2. You will be able to do it since they both belong to the same network, but you will not be able to reach nginx_h1 since the networks are different; see the sketch below. Here is the video reference for this whole scenario.
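    A minimal sketch of that check (assuming the three containers from step 2 are running; the exact IP will differ on your machine):
    • docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx_b2
      • Prints the bridge-network IP of nginx_b2 (typically something like 172.17.0.x).
    • docker exec -it nginx_b1 sh
    • ping -c 3 <IP_OF_NGINX_B2>
      • Succeeds, because both containers sit on the same bridge network; nginx_h1 has no address on that bridge, so it cannot be reached the same way.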
    Example of MongoDb and AdminMongo:

    In this, we are going to run a mongodb container and an adminmongo container under the same network and then make them talk to each other.
    1. Install MongoDb
      1. docker pull mongo
    2. Install admin mongo
      1. docker pull mrvautin/adminmongo
    3. Create a container for mongo image:
      1. docker run -d -p 27017:27017 --name="mongo" --network="bridge" mongo
    4. Create a container for admin mongo image:
      1. docker run -d -p 1234:1234 --name="admin_mongo" --network="bridge" mrvautin/adminmongo
    5. Once both the containers are up and running you can go to http://0.0.0.0:1234 and create a connection in this screen:
    6. You can get the IP under which the mongo container is running via the Portainer UI, in the Containers section.
    7. Use that IP when giving the connection string in AdminMongo.
    8. Once the connection is successful you can play with MongoDb.
    So we have actually created a network-based stack of AdminMongo and Mongo.

    The significance of Images with Tags:

    The images we have pulled from Docker Hub so far have mostly used the latest tag. You might be wondering what tags are.

    Tags are just the naming/versioning identity that the creator of an image uses in order to keep track of the image's development.
    In other words, a tag is just the version label that tells users what they are going to get if they create a container from that image.

    Let us take an example. If you say docker pull mongo, you will always get the latest build of mongo in that image, irrespective of version. Alternatively, you can pull a tag-based image of mongo that specifies exactly which version of mongo you want to download. If you go to the official Mongo image repository page, here, you can see the different tags available and the version each one gives you when you download that specific image with its tag, like this: docker pull mongo:3.0.15. This is the significance of images with tags.

    Dockerfiles:
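    A Dockerfile is a plain text file of build instructions from which Docker creates an image. As a minimal illustrative sketch (the index.html file and the image name my_static_site are assumptions for the example), this one just layers a static page on top of the official nginx image:

        FROM nginx:alpine
        # copy our page into the web root that nginx serves by default
        COPY index.html /usr/share/nginx/html/index.html

    Build an image from it and run a container with:
    • docker build -t my_static_site .
    • docker run -d -p 8080:80 my_static_site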









    Thursday, July 13, 2017

    How Maps work in Java

    Maps are an indispensable part of Java; you can hardly think of an application that doesn't use them.

    When it comes to the background of Maps, the first thing that comes to mind is that a Map is like a dictionary, or in other words a key-value store, where you associate a key with some data so that you can later get the data back from the structure just by passing the key.

    Logically, accessing the data happens in O(1) time, but if there is a collision (we will talk about it soon) it might take 2 or 3 extra hops; again, that is negligible given the memory and high-end processors our systems have.

    As a coder or a computer enthusiast you should have a basic understanding of how hashing works, but in case you don't, let me give you the basic idea.

    What is hashing?

    Hashing, in layman's terms, means locating data based on a value computed from the data (or its key) itself.

    In other words, consider having data about students. Each student is assigned a roll number, and for a particular roll number we want to get that student's details. We have the following choices:
    1. Keep searching for the roll number sequentially, one by one, matching each roll number against the one we are looking for.
    2. Or, in a simple case (not in production-level or high-end apps), create an array of student details and store each student's details at the position of their roll number in the array itself.
    Coming to point (1), this approach has too much overhead and is not acceptable, since every lookup may have to search over the whole data set. In the worst case this approach ends up being O(N), since the data you are searching for could be at the very end of the storage.

    Coming to point (2), suppose you have a total of 10 students whose roll numbers range from 1 to 10 sequentially. If every student's data is stored at the position of their roll number in the array itself, we can write a simple function to get a student's data as follows:

           //We have ARRAY_WITH_DATA_OF_STUDENTS
           function giveMeStudentWithRollNumber(N){ =====> (A)

                 //since array indices start from 0
                return ARRAY_WITH_DATA_OF_STUDENTS[N-1]; ====> (B)
           }

    Here, in this code, N-1 is basically a hash function: it tells me where the data resides in my data structure, which in this case is an array.

    Now, if we refine this code a little, we can rewrite it as follows:

            //ARRAY_WITH_DATA_OF_STUDENTS
           function giveMeStudentWithRollNumber(N){ =====> (A)

                 return ARRAY_WITH_DATA_OF_STUDENTS[addressWhereDataIsStoredFor(N)]; ====> (B)
           }


            function addressWhereDataIsStoredFor(N){ ====> (C)
                return N-1;
           }

    Now we have made a function named addressWhereDataIsStoredFor that returns the address where my data is stored in the data structure. Such a function is termed a hash function, and the value it returns is the hash value, which is where the actual data is stored. In our case the hash function is pretty simple, f(x) = x - 1, but in reality hash functions do calculations (often involving prime numbers) to make hash collisions unlikely. A hash collision is the situation where 2 different pieces of data generate the same hash value.
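    To make the idea of a collision concrete, here is a tiny Java check; "Aa" and "BB" are a classic pair of distinct strings whose hashCode() values happen to coincide:

          System.out.println("Aa".hashCode()); // 2112
          System.out.println("BB".hashCode()); // 2112 as well: two different keys, one hash value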

    How does a hash map work in Java?

    Some of the important points of Map in Java:

    1. Map in Java is an interface available in java.util.
    2. There are many classes that implement Map, and HashMap is one of them, widely used in many applications.
    3. By now you might have guessed that a Map stores data against a key, which plays the same role as N above.
    4. Now, for storing data against a key, it also needs a function that tells Java where to store the data so that the same data can be retrieved when asked for.
    5. Since keys are meant to be unique, the set of keys is a Set by nature, which means a key cannot be duplicated.
    Here is a basic example of Map with HashMap.

          import java.util.*;
          class MapUse{
                public static void main(String args[]){
                            Map<Integer, String> test = new HashMap<>();
                            test.put(1, "one");
                            test.put(2, "two");
                            System.out.println(test.get(1)); // ===> one
                            test.put(1, "one again");
                            System.out.println(test.get(1)); // ===> one again
               }
          }

    Now, for an object to work correctly as a Map key, its class should override hashCode(); the same object's equals() is used to compare keys when hash values collide, and internally a linked list is created for that scenario.

    In a nutshell, when a HashMap object is created it internally creates an array, Node[], where each element is capable of storing the start of a linked list. The default size of this array is 16, and it grows when the threshold (configured or default) is met. In the case of Maps, the elements of this array are termed buckets. So basically we have 16 buckets, where each bucket is capable of holding a linked list.

    Each element of Node[] has the following members:

        Node{
            int hash;
            K key;
            V value;
            Node next;
        }

    You might be wondering why the key itself is stored when we already have the value and could just return it; the reason is the collision scenario, in which we need to return the value related to the exact key we passed in.

    Let me explain this with an example.

    Let's suppose my hashCode() function is this:

        int hashCode(){
            //do the calculation and compute n
            int size = size_of_hashmap;
            return n > size ? n % size : n; // =====> Please pay attention to this.
        }

    If this 'n', for some reason (because of a poor hash-code formula), produces a repeated value, i.e. a hash collision, the stored key comes to the rescue: at that point the bucket holds a linked list, and every node of that list is checked against the requested key using equals().
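    To tie these pieces together, here is a small, self-contained sketch of a bucket array with chaining. It is only an illustration of the idea, not the real java.util.HashMap source; the names SimpleMap and bucketIndexFor are my own:

        //A minimal sketch of chaining, not the actual HashMap implementation
        class SimpleMap<K, V> {
            private static class Node<K, V> {
                final int hash; final K key; V value; Node<K, V> next;
                Node(int hash, K key, V value, Node<K, V> next) {
                    this.hash = hash; this.key = key; this.value = value; this.next = next;
                }
            }

            private final Node<K, V>[] buckets = new Node[16]; // 16 buckets, like the default capacity

            //map a key's hashCode() to a bucket index (illustrative formula)
            private int bucketIndexFor(K key) {
                return Math.abs(key.hashCode() % buckets.length);
            }

            public void put(K key, V value) {
                int index = bucketIndexFor(key);
                for (Node<K, V> n = buckets[index]; n != null; n = n.next) {
                    if (n.key.equals(key)) { n.value = value; return; } // same key: overwrite
                }
                // empty bucket or collision: prepend a new node to this bucket's linked list
                buckets[index] = new Node<>(key.hashCode(), key, value, buckets[index]);
            }

            public V get(K key) {
                int index = bucketIndexFor(key);
                for (Node<K, V> n = buckets[index]; n != null; n = n.next) {
                    if (n.key.equals(key)) return n.value; // equals() picks the right node on a collision
                }
                return null;
            }
        }

    Calling put("Aa", 1) and put("BB", 2) on such a map would land both entries in the same bucket, and get() would still return the right value for each key because of the equals() check.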

    So for summary:

    1. A Map (HashMap) is basically an array of linked lists, and each element of that array is termed a bucket.
    2. Each node holds 4 things: hash, key, value, and next.
    3. In case of a collision, the key comes to the rescue to return the actually intended result.
    4. The index where a value is stored is calculated from the key's hash and the size of the hash map, in other words the size of the array.