
Sunday, October 2, 2022

NodeJs Read XML file and Parse Data

Hello,

A few days back I worked on a task to read an XML file in NodeJs, parse the data and convert it to JSON format. Here in this blog I am going to explain how I achieved this.

Following is our sample XML file.

<root>
    <parent>
        <firstchild>First Child Content 1</firstchild>
        <secondchild>Second Child Content 1</secondchild>
    </parent>
    <parent>
        <firstchild>First Child Content 2</firstchild>
        <secondchild>Second Child Content 2</secondchild>
    </parent>
</root>

Step 1 : Install necessary NPM packages

First, install the following packages in your NodeJs application.

npm install --save xmldom

npm install --save hashmap

Here xmldom is the package we are going to use to parse the XML data, and hashmap is the package we will use to hold the converted data before saving it in JSON format.

Step 2 : Import necessary packages in script

const { readFile } = require('fs/promises');
const xmldom = require('xmldom');
const HashMap = require('hashmap');

We are going to use readFile from fs/promises to read the file, and xmldom to parse the string data. xmldom is a JavaScript implementation of the W3C DOM for Node.js, Rhino and the browser. It is fully compatible with W3C DOM level 2, partially compatible with level 3, and supports the DOMParser and XMLSerializer interfaces just like in the browser.

Step 3 : Read the XML file

const readXMLFile = async () => {
    var parser = new xmldom.DOMParser();
    const result = await readFile('./data.xml', "utf8");
    const dom = parser.parseFromString(result, 'text/xml');
}

Here we are using an async function to read the XML file, as we want to wait until the file is completely read. We also created an xmldom.DOMParser, which we will use to parse the string data of the file. This parser gives you all the tags of the XML file just like we access standard HTML tags, so we can use getElementsByTagName to get XML tags.

Step 4 : Convert XML data to hashmap

const readXMLFile = async () => {
    var parser = new xmldom.DOMParser();
    const dataMap = new HashMap();
    const result = await readFile('./data.xml', "utf8");
    const dom = parser.parseFromString(result, 'text/xml');
    var parentList = dom.getElementsByTagName("parent");
    for (var i = 0; i < parentList.length; i++) {
        const parent = parentList[i];
        const firstchild = parent.getElementsByTagName("firstchild")[0].textContent;
        const secondchild = parent.getElementsByTagName("secondchild")[0].textContent;
        const parentMap = new HashMap();
        parentMap.set("firstchild", firstchild);
        parentMap.set("secondchild", secondchild);
        dataMap.set(i, parentMap);
    }
}


In the above function we get the list of parent nodes with the getElementsByTagName method, then iterate through it, access the child tags and read their text content. Then we simply store it in the hashmap with a unique key.
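To finish the conversion to JSON, you can walk the hashmap and build a plain array that JSON.stringify understands. Below is a minimal sketch, assuming the dataMap built above and the hashmap package's forEach(value, key) iteration; the toJSON name is just illustrative.

const toJSON = (dataMap) => {
    const items = [];
    // The hashmap package passes the value first, then the key
    dataMap.forEach((parentMap, key) => {
        items.push({
            firstchild: parentMap.get("firstchild"),
            secondchild: parentMap.get("secondchild")
        });
    });
    return JSON.stringify(items);
};

console.log(toJSON(dataMap));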

Hope this helps you.

Sunday, July 3, 2022

Mock node-fetch with JEST

Recently I tried my hands on Jest, the popular JavaScript testing library. In my application we were using node-fetch to make API calls, and I used Jest to mock and test these API calls. Here in this blog I will explain how to mock node-fetch with Jest.

Following is my function to call the GET API. 

fetch_get.js

const fetch = require('node-fetch');

module.exports = async () => {
    return await fetch('http://YOUR_GET_URL').then(res => res.json());
};

Now to test this, create a test file named test.fetch_get.js

Step 1: First import the function.

const fetch_get = require('../fetch_get');


Step 2 : Mock the node-fetch with Jest

const fetch = require('node-fetch');
jest.mock('node-fetch', ()=>jest.fn())

Step 3: Create Mock response

const mockedRes = {
  "success": true,
  "data": [
    {
        id: 1,
        text: 'Test Data 1'
    },
    {
        id: 1,
        text: 'Test Data 2'
    }
  ]
};

Step 4: Add Test 

describe("Test get fetch", () => {
    let data;
    it('It should return the data', () => {
      const response = Promise.resolve({
            ok: true,
            status: 200,
            json: () => {
                return mockedRes;
            },
        });
        fetch.mockImplementation(()=> response)
        data = await fetch_get();
        expect(data).toEqual(mockedRes);
    });
});

In the above code we have created a test and mocked the response so that our data is returned when the .json() method is called on the response. Then we simply compare the response. This is a very simple test; instead of comparing the whole data, you may have other tests like checking the length of the data or checking specific fields.
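For example, with the mockedRes defined above, the comparison inside the same test could be tightened to field-level checks like these (the asserted values simply mirror the mock):

expect(data.success).toBe(true);
expect(data.data).toHaveLength(2);
expect(data.data[0].text).toEqual('Test Data 1');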

Step 5: Run Test

Run the test with npm test or npm run test and you will see the following result.

Test get fetch
    ✓ It should return the data (17 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        0.313 s, estimated 1 s

Hope this helps you.

Sunday, July 4, 2021

Docker MongoDB terminates when it runs out of memory

When you have multiple services running in a docker container, it's quite possible that you have issues with certain services when your docker container runs out of memory. MongoDB is one such service.

On a docker container, when you have MongoDB running and it starts storing huge amounts of data, it starts consuming lots of memory, and that's where you have an issue. MongoDB will crash after some time, when there isn't much memory left.

The reason behind this is the IO model of MongoDB: it tries to keep as much data as possible in cache so read and write operations are much faster. But this creates an issue with docker, as we have limited memory and lots of services sharing it.

Starting from MongoDB 3.2 onwards, the WiredTiger storage engine is the default one for MongoDB, and it's the recommended one.

There are various advantages of WiredTiger storage engine. For example,

  • Document Level Concurrency
  • Snapshots and Checkpoints
  • Journal
  • Compression
  • Memory Use
One of the most useful features is memory use.

With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.

You can control it with the --wiredTigerCacheSizeGB configuration option.

The --wiredTigerCacheSizeGB limits the size of the WiredTiger internal cache. The operating system will use the available free memory for filesystem cache, which allows the compressed MongoDB data files to stay in memory. In addition, the operating system will use any free RAM to buffer file system blocks and file system cache.
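For example, if you run MongoDB through the official docker image, the flag can be passed straight to mongod. A minimal sketch; the 2g container limit and 1 GB cache size are just illustrative values to tune for your setup:

docker run -d --name mongo --memory=2g mongo --wiredTigerCacheSizeGB 1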

With this setting you can keep memory usage in check: MongoDB will not use excessive memory, and with heavy data usage on a docker container MongoDB will not crash because of excessive memory usage.

Hope this helps you.

ReactJs Peer to Peer Communication

Recently I evaluated a peer to peer communication approach for one of my projects, so here I am going to share it and how you can use it in case you want to implement peer to peer communication in your own project.

We used a library called PeerJS, a simple peer to peer library built on top of WebRTC. For this, first you have to create a server, which acts only as a connection broker. No peer to peer data goes through this server. Let's create a simple server.

Let's first install peer from npm. 

npm install peer

Now let's create NodeJs script. 

const { PeerServer } = require('peer');

const peerServer = PeerServer({ port: 9000, path: '/server' });

peerServer.on('connection', (client) => {
    console.log(client);
});

That's it. Now you can run this script from the terminal and it will run your server on port 9000.

Now let's connect to server from our ReactJS component. 

First let's install the peerjs npm package, which is the peer client.

npm install peerjs

We can connect to the server in the componentDidMount method and add some callback functions.

import Peer from 'peerjs';

componentDidMount = () => {
    this.peer = new Peer("USERNAME", {
        host: 'localhost',
        port: 9000,
        path: '/server'
    });

    this.peer.on("error", err => {
        console.log("error: ", err)
    })

    this.peer.on("open", id => {
        console.log(id)
    })

    this.peer.on("connection", (con) => {
        console.log("connection opened");
        con.on("data", i => {
            console.log(i)
        });
    })
}

In the above code, the first function is the error callback. The second one fires when the peer connection is opened. The third one fires when you receive a connection from some other peer and get some data.

Now let's take an example of how you can connect to other peer and send data. 

const conn = this.peer.connect('REMOTE_PEER');

conn.on('open', () => {
    conn.send('DATA');
});

In the above code we are connecting to a remote peer and sending some data to it.

Please note that peer to peer data goes through an ICE server, which you can set up and assign when you create the peer server; otherwise it will use the PeerCloud server by default. For development purposes that's OK, but for production you should create your own TURN or STUN server.
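If you do run your own STUN or TURN server, you can pass it to the Peer constructor through its config option, which PeerJS hands to the underlying RTCPeerConnection. The URLs and credentials below are placeholders:

this.peer = new Peer("USERNAME", {
    host: 'localhost',
    port: 9000,
    path: '/server',
    config: {
        iceServers: [
            { urls: 'stun:stun.example.com:3478' },
            { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' }
        ]
    }
});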

Hope this helps you in setting up peer to peer communication.

Wednesday, August 21, 2019

NodeJs MySQL Observer

Hello,

In this blog we are going to learn how to use NodeJs to observe changes in MySQL databases. This is useful when you want to track MySQL changes and, based on that, send some events to frontends or perform other actions.

For this, first of all you have to enable binary logging in your database. Binary logging is very useful for real time MySQL replication. In Amazon RDS it's available by default and you can switch it on from the configurations. For your local database, if you are using MAMP, you can use the following trick.

Create a file named my.cnf and add the following content to it.

[mysqld]
server-id = 1
default-storage-engine = InnoDB
log-bin=bin.log
log-bin-index=bin-log.index
max_binlog_size=100M
expire_logs_days = 10
binlog_format=row
socket=mysql.sock

Add this file to the conf folder of your MAMP directory and restart the MySQL server. This will enable binary logging in your database.
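To verify that binary logging is actually on, you can check the log_bin variable from any MySQL client; it should report ON:

SHOW VARIABLES LIKE 'log_bin';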

Now to observe these changes we use an npm package called zongji. Install it with NPM.
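The script below also uses the underscore package for iterating over rows, so install both:

npm install --save zongji underscore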

Add following code to your NodeJs script.

var ZongJi = require('zongji');
var _underScore = require('underscore');

var zongji = new ZongJi({
    user : 'YOUR_USERNAME',
    password : "YOUR_PASSWORD",
    database: 'YOUR_DATABASE',
    socketPath : '/Applications/MAMP/tmp/mysql/mysql.sock'
});

Now add an event listener for binlog events.

zongji.on('binlog', function(evt) {

});

This event is triggered whenever there is a change in any of your database tables.

Inside this event you can have the logic for checking new rows, updated rows and deleted rows.

zongji.on('binlog', function(evt) {
    if (evt.getEventName() === 'writerows' || evt.getEventName() === 'updaterows' || evt.getEventName() === 'deleterows') {
        var database = evt.tableMap[evt.tableId].parentSchema;
        var table = evt.tableMap[evt.tableId].tableName;
        var columns = evt.tableMap[evt.tableId].columns;
        _underScore.each(evt.rows, function(row) {
            // Process each affected row here
        });
    }
});

At last, start the process and pass the events you want to watch.
zongji.start({
  includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows']
});
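As a rough sketch of what the row handling can look like: for update events, zongji (as far as I have seen) reports each row as a pair of before and after images, while insert and delete events give you the row directly. The console.log calls are just placeholders for your own logic:

zongji.on('binlog', function(evt) {
    var eventName = evt.getEventName();
    if (eventName === 'updaterows') {
        _underScore.each(evt.rows, function(row) {
            // Update events carry the old and new row images
            console.log('before:', row.before, 'after:', row.after);
        });
    } else if (eventName === 'writerows' || eventName === 'deleterows') {
        _underScore.each(evt.rows, function(row) {
            console.log('row:', row);
        });
    }
});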

Monday, August 19, 2019

Accessing Data From Redis Using NodeJs

Hello,

When you are working with business applications, you sometimes need to cache data. At this point Redis can be very useful: it can be used as a database or as a cache database. You can store any kind of data, like strings, JSON objects etc., in Redis.

The problem we face while working with NodeJs and Redis is that the get data operation from Redis is asynchronous, so it gives you a callback and your code execution continues. This can create a problem when you want to handle it in a synchronous way, for example when you have loops inside which you are trying to access data from Redis.

In this blog I am going to explain how you can have synchronous-style operations. In a nutshell, we have to promisify the redis module.

There is a library called bluebird that can be used for this. Let's go step by step.

Step 1

Install bluebird and redis in your NodeJs app.

npm install bluebird
npm install redis

Step 2

Import it in NodeJs app.

var redis = require('redis');
var bluebird = require("bluebird");

Step 3

Promisify Redis.

bluebird.promisifyAll(redis.RedisClient.prototype);
bluebird.promisifyAll(redis.Multi.prototype);

Step 4

Connect to Redis client.

var client = redis.createClient();
client.on('connect', function() {
    console.log('Redis client connected');
});

Step 5

Use the Async version of the get function to get data.

client.getAsync("MY_KEY").then(function(res) {
      //Access Data
});
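Because getAsync returns a promise, you can also await it inside an async function, which makes the loop case mentioned above straightforward. A minimal sketch, assuming the keys you request already exist in Redis:

const fetchValues = async (keys) => {
    const values = [];
    for (const key of keys) {
        // Each iteration waits for Redis before moving on
        const value = await client.getAsync(key);
        values.push(value);
    }
    return values;
};

fetchValues(["KEY_1", "KEY_2"]).then(function(res) {
    console.log(res);
});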

This is how you can have synchronous-style operations with Redis.

Wednesday, August 14, 2019

NodeJs, MySql Wait For Query Result

Hello,

One major issue we face while working with NodeJs and MySql is waiting for a query result, for example when you have nested queries or you have to loop over a query result and wait for its output. NodeJs MySql uses an async function to give you the query result, and by the time you get the result the script may have finished execution.

In this blog I am going to mention one of the methods I have used in one of my projects. There can be various other ways to do it.

First of all, we have to import the async package in your node app and add it to your script.

var async = require('async');

Now, lets assume here is your first query.

const result = connection.query('SELECT * FROM TABLE1', async function(err, rows, fields) {

});

Now with this query result you have to loop and run more queries.

let resultRows = Array();
const result = connection.query('SELECT * FROM TABLE1', async function(err, rows, fields) {
    async.each(rows, function (row, callback) {
        connection.query('SELECT * FROM TABLE2', function(err, innerRow) {
            resultRows.push(innerRow);
            callback(null);
        });
    }, async function () {
        // This is the final function, which gets executed when the loop is done.
        const response = {
            statusCode: 200,
            headers: {
                "Access-Control-Allow-Origin": "*" // Required for CORS support to work
            },
            body: JSON.stringify({
                success: true,
                data: resultRows
            })
        };
        // callback here refers to the surrounding handler's callback
        // (e.g. an AWS Lambda handler), not the async.each one.
        callback(null, response);
    });
});

As you can see in the above code, after the first query is executed we use async.each for the loop.
In the second query's result function we return a null callback. Once the loop has finished and all the null callbacks have returned, the final function is called, and there you get your result rows and can process them.
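As one of the other possible ways mentioned above, you can also promisify connection.query with Node's built-in util module and use async/await instead of the async package. A rough sketch, assuming the same connection and tables as above:

const util = require('util');

// Promisified query: resolves with the rows instead of using a callback
const query = util.promisify(connection.query).bind(connection);

const run = async () => {
    const rows = await query('SELECT * FROM TABLE1');
    const resultRows = [];
    for (const row of rows) {
        // Each inner query is awaited, so the loop runs sequentially
        const innerRows = await query('SELECT * FROM TABLE2');
        resultRows.push(innerRows);
    }
    return resultRows;
};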