Monday, December 31, 2018

Configuring JEST and running test cases to get the coverage report

Test Driven Development has become the basis of most modern projects: people write the test cases first and then write the actual code to make those tests pass.

Because of this, the demand for testing frameworks keeps growing.

Some of the most popular unit-testing frameworks are:
  1. Mocha
  2. Jest
  3. Ava
These names are widely used and accepted in the JavaScript community.

I tried Mocha first and ran into some difficulties configuring it with my project, so I tried Jest, which ships with coverage reporting via Istanbul, and it did the job well.

Today, we will look at a basic project with 3 modules inside src, namely add, mul and square, which add, multiply and square numbers respectively.
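As a minimal sketch (the actual demo code may differ; the arrow-function style and single export here are assumptions), the add module could look like this:

src/add.js

// Adds two numbers and returns the result.
const add = (a, b) => a + b;

module.exports = add;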

Now, in order to test these 3 modules, we created a directory called test in the project root, mirroring the same folder structure that src has.

For reference, here is a snapshot of the folder structure:



By default, JEST picks up the files under the test folder and runs every unit test case defined inside them.
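For illustration, a matching test file for the add module could look like the sketch below; it assumes add.js exports a single function and uses chai for assertions (chai is listed in the dependencies, though you could just as well use Jest's built-in expect):

test/add.test.js

const { expect } = require('chai');
const add = require('../src/add');

// A single unit test case for the add module.
test('adds two numbers correctly', () => {
    expect(add(2, 3)).to.equal(5);
});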

Also, we have configured JEST inside package.json itself, under the "jest" key. Here is package.json:


{
  "name": "demo-mocha-nyc",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node src/index.js",
    "test": "./node_modules/.bin/jest",
    "test:watch": "./node_modules/.bin/jest --watch",    
    "test:coverage": "./node_modules/.bin/jest --coverage --colors"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "chai": "^4.2.0",
    "jest": "^23.6.0"
  },
  "jest": {
    "collectCoverage": true,
    "collectCoverageFrom": [
      "src/**/*.{js,jsx}",
      "!**/node_modules/"
    ],
    "coverageReporters": [
      "json",
      "html",
      "text",
      "json-summary"
    ]
  }
}


For further properties, you can check out the JEST configuration docs.

In this example, we told Jest to collect coverage from anything under the src/ folder and to ignore all files in the node_modules/ folder. Also, just FYI, the test folder is picked up for test-case files by default and is automatically excluded from the coverage report.
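With the scripts defined above, npm test runs the suite once, npm run test:watch re-runs it on every change, and npm run test:coverage prints the coverage table and writes the HTML report (by default Jest places it in a coverage/ folder at the project root).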

I have created a demo project that explains the above example.

Happy Coding :)

Friday, December 21, 2018

Puppeteer, the future of Automation

All of us have been using the Chrome browser for many years now, and it is true that Chrome is one of the best browsers out there in the market.

Now, coming to the main topic: in order to test the functionality of a website, people usually choose Selenium for automation, and there is no doubt that it has helped a lot to this day with automation scripts.

But if you have used Selenium, you know the pain: once the test cases start, you can't do any other work, since it opens the browser, runs through the website, and automatically clicks and traverses the site.

This is a waste of resources since the developer can't work on anything else when the test cases are running.

To overcome this, Google came up with headless Chrome: think of a browser running in memory, in its own context, without any UI, which can be controlled via an API provided by Google.

So if you want to test the functionality of a website, you can run it in a headless browser, that is, a browser without a window, and control the flow via an API. That API is Puppeteer.

For now, Google has introduced Puppeteer as a Node.js API; it drives Chrome/Chromium in headless mode by default, but you can run the full, non-headless browser via configuration.
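As a quick sketch of that configuration (the headless option is the real puppeteer.launch flag; the rest is just a placeholder flow):

const puppeteer = require("puppeteer");

(async () => {
    // headless: true is the default; set it to false to watch the real browser window.
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();
    await page.goto("https://example.com");
    await browser.close();
})();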

You can try a demo of Puppeteer here: https://try-puppeteer.appspot.com/

Here are some of the merits of Puppeteer:
  1. Generate screenshots and PDFs of pages.
  2. Automation scripts to test web page flows, mimicking user behavior.
  3. Server-side rendering: we can render client content on the server via Node.js and serve it to the client easily.
You can find further information regarding Puppeteer at https://github.com/GoogleChrome/puppeteer.

In this article, we will take the example of creating a PDF from HTML rendered in Chrome.

I have created a demo, where I have created a Handlebars template with a static message and a dynamic message.

template.hbs
<style type="text/css" media="all">
    @import url(https://fonts.googleapis.com/css?family=Lato:300,400,700); 
</style>
<html>
    <body>
        This is generated by Puppeteer using the Handlebar template and this message is passed as a variable to the
        template
        <br />
        <hr>
        <b>{{message}}</b>
    </body>
</html>

In this file, inside the <b> tag, we have a dynamic variable, {{message}}.

We are going to compile this template with the Handlebars API, passing in the model to generate an HTML string, and then create a PDF from that generated HTML.

The whole code is in index.js

const puppeteer = require("puppeteer");
const hbs = require("handlebars");
const fs = require("fs-extra");
const path = require("path");

/***
 * Compiles the template with values we provide
 */
const compile = async function (templateName, data) {
    const filePath = path.join(process.cwd(), templateName);
    const template = await fs.readFile(filePath, "utf-8");
    return hbs.compile(template)(data);
}

const generatePDFByteArray = async ({
    templateName = "./template.hbs",
    data
}) => {
    try {
        /***
         * Get the final html string after compiling the template with 'data';
         * since the template uses a variable named 'message',
         * we pass 'data' with a key named 'message' from the calling code at the bottom of this file.
         */
        const content = await compile(templateName, data);

        /***
         * Launched the headless chrome in memory.
         */
        const browser = await puppeteer.launch();

        /***
         * Created a new page(tab)
         */
        const page = await browser.newPage();

        /***
         * Set the content of the new page
         */
        await page.setContent(content);
        /***
         * Telling chrome to emulate screen i.e how the page looks if 
         * it would have been rendered in the normal browser.
         */
        await page.emulateMedia('screen');
        /***
         * This is needed since in case your template is loading any font from internet
         * this makes sure that the call will be waiting before it actually starts 
         * preparing the pdf capturing.
         */
        await page.goto('data:text/html,' + content, {
            waitUntil: 'networkidle0'
        });
        /***
         * We created the snapshot of the page and took the byte array
         */
        const byteArray = await page.pdf({
            format: "A4",
            landscape: true,
            scale: 1.29,
            printBackground: true
        });

        const buffer = Buffer.from(byteArray, 'binary');
        /**
         * We don't need an acknowledgement from this call, which is why
         * we are not awaiting its return.
         */
        browser.close();

        return buffer;
    } catch (e) {
        console.log('Error while generating the PDF', e)
    }
};

(async () => {
    /***
     * The value being passed to the template for handlebar to 
     * compile the template and give the html string.
     */
    let data = { message: "This is a test message" };
    let fileName = 'temp.pdf';

    let buffer = await generatePDFByteArray({ data });
    console.log('got the byte buffer');

    console.log('Opening file and writing the buffer to it');
    let handle = await fs.open(fileName, 'w');
    await fs.write(handle, buffer, 0, buffer.length);
    await fs.close(handle);
    console.log('writing done');

    console.log('Please check the ', fileName);
})();
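To try the demo locally, install the dependencies used above (puppeteer, handlebars and fs-extra), run node index.js, and a temp.pdf file should appear next to index.js.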

The code is pretty self-explanatory and easy to understand. Please leave a comment in case you need any information or have any doubts.

Here is the working code of the above example: https://github.com/ankur20us/demo-puppeteer



Update #1: 

     In case the colors in the rendered HTML are correct but come out messed up or wrong in the generated PDF, we have to add one CSS property to fix it:


html {
   -webkit-print-color-adjust: exact;
}
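In our example, this rule would simply go inside the <style> block at the top of template.hbs.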

Further Details of the bug: https://github.com/GoogleChrome/puppeteer/issues/2685



Happy coding.
:)


Wednesday, December 12, 2018

Adding shortcut for github commands

Git is no doubt the most popular version control tool out there on the market, and people often ask about Git to understand your perspective as a developer.

Today we are going to see some small shortcuts that we can configure in our .bash_profile to take advantage of shorter git commands.

These shortcuts are pretty much self-explanatory and easy to understand.

Open .bash_profile (vi ~/.bash_profile), add the following code at the end of it, save the file and close it.

#########################################################################################################################
                       #######    GITHUB SHORTCUTS     #######
#########################################################################################################################
alias gs='git status'                                                                           # Git status
alias glog='git log --pretty=format:"%h - %an, %ar : %s"'                                       # One-line git log
alias grid='git fetch origin develop:develop && git rebase --interactive develop'               # Git squash changes
alias gfp='git push --force'                                                                    # Git force push
alias gco='git checkout'                                                                        # Git checkout
alias gma='git merge --abort'                                                                   # Abort Git merge
alias gc='git commit -m '                                                                       # Git commit
alias gp='git push'                                                                             # Git push


###########################################################################################################################
              #######    SHOW BRANCH NAME IN COMMAND PROMPT     #######
###########################################################################################################################
parse_git_branch() {
        git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}
export PS1="\u@\h \W\[\033[32m\]\$(parse_git_branch)\[\033[00m\] $ "
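With these aliases in place, for example, gco develop followed by gs replaces git checkout develop and git status, and gc "fix typo" followed by gp commits and pushes your change.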

Restart the terminal and you can start using the shortcuts. Also, if you browse into a directory that is a git repository, the terminal will show the name of the branch you are currently on, like this:


Also, there is one more shortcut that is quite useful, for finding the parent of a branch. For that, add the following code in .gitconfig (vi ~/.gitconfig) at the end of the file, then save and close it.


[alias]
        parent = "!git show-branch | grep '*' | grep -v \"$(git rev-parse --abbrev-ref HEAD)\" | head -n1 | sed 's/.*\\[\\(.*\\)\\].*/\\1/' | sed 's/[\\^~].*//' #"

Restart the terminal, and in any directory that is a git repository you can run git parent to find out which branch the current branch was created from.
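For example, on a feature branch that was cut from develop, git parent would typically print develop. Keep in mind the alias is a heuristic built on git show-branch, so on repositories with complicated histories its answer may not always be exact.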


Happy Coding,
:)

Monday, December 3, 2018

Adding Graphql Support in Postman

Postman is no doubt one of the best applications for testing REST APIs, and its test and pre-request script features are among the best out there in the market.

But after the success of GraphQL, the importance of Postman has been going down, since the whole point of changing URLs and passing different data for each endpoint is gone: GraphQL works with a single-URL setup.

In order to use GraphQL, many vendors came up with UIs/editors to interact with a GraphQL server. Some of these apps are:
  1. GraphiQL: an Electron app that, on a Mac, can be installed via brew with the command brew cask install graphiql; you then get a desktop app that looks something like this:



    You can add the URL in the text box, browse the Docs, pass the query/mutation in the left-hand box and the variables in the bottom-left box named Query Variables, set the headers in the top-right section, and run the call.
  2. Altair: a Chrome extension that can be installed from the Chrome Web Store.
There are many more, but these two are the most popular. I personally use Altair, since it does not install a full application but just a browser plugin, and it works like a charm. But some say that since we already have all the test cases configured in Postman to run them with Newman, we need to make the call from Postman as well. Here are the steps to run a simple GraphQL query in Postman.

  1. Open Postman.
  2. Paste the URL in the URL text box.
  3. Set the method to POST.
  4. Go to the Body section and just write {{graphqlquery}}
    1. This reads the environment variable named graphqlquery that we will set in the next step.
  5. Go to the Pre-request Script section, and here is the trick: add the code in the following manner:

    // Add the query/mutation here; this is a dummy query
    let query = `
        query test{  
            areas{ 
                uuid
            }
        }
    `;
    //Add the Variables here
    let variables = {};
    
    postman.setEnvironmentVariable("graphqlquery", JSON.stringify({query, variables, operationName: ''}));
    

  6. In the last line of the script, we set the same environment variable (graphqlquery) that we referenced in step 4.1.
  7. You can pass the query variables (if required) in the variables object.
  8. After everything is in place, hit Send and the JSON response will be returned (the body that actually goes to the server is shown below).
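For reference, with the dummy query above, the {{graphqlquery}} variable resolves to a plain JSON string, so the raw body that Postman sends looks roughly like this (the real string is compact and keeps the whitespace of the template literal):

{
  "query": "query test { areas { uuid } }",
  "variables": {},
  "operationName": ""
}

Depending on your GraphQL server, you may also need to set a Content-Type: application/json header in the Headers tab so the body is parsed correctly.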
Here are the attached snapshots for the same:





Happy Coding.
:)