Continuous Integration & Deployment with Mirth Using Jenkins

This blog post provides a view on building continuous integration and deployment for Mirth using Jenkins.

You can follow the steps below to do the same.


  • Often the code has to be moved from one environment to another: from staging to testing to production.
  • Manually moving the channels can be cumbersome and prone to human error. It is always advisable to automate the entire integration and deployment flow.

Steps to perform the Continuous Integration (CI):

  1. The source of truth here is the repository, be it GitHub, GitLab, Bitbucket, or SVN. We need to create a webhook from the Git repository.
  2. Install Jenkins on the local system. You can follow any of the procedures available online.
  3. Once Jenkins is installed, we need to create a job in Jenkins to perform this.

Follow the below steps to create the job in Jenkins:

  1. Select New Item, choose Freestyle project, and click OK.
  2. In the General tab, select GitHub project and provide the Git URL of your repository.
  3. In Source Code Management, select Git.
  4. In the Build Triggers area, select “GitHub hook trigger for GITScm polling”.
  5. Click Save and the changes will be saved.

If the above steps are completed, then whenever new code is pushed to your repository, it will be pulled by Jenkins and will appear in your folder location.
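
When the hook fires, GitHub sends a JSON payload to Jenkins describing the push. A heavily abridged example of the fields involved (the repository name and URL here are made up for illustration):

```json
{
  "ref": "refs/heads/master",
  "repository": {
    "name": "mirth-channels",
    "clone_url": ""
  }
}
```

Jenkins matches the repository URL from this payload against the one configured in the job, which is why the URL you provide in step 2 must match your repository exactly.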

Continuous Integration Issues:

  • If you are using a public GitHub repository, the challenge is that GitHub cannot deliver the webhook to your local machine.
  • To overcome this issue, I made my local Jenkins public by using the ngrok tool. You can download the tool from here.
  • Once you download the tool, open the application and type ngrok.exe http <jenkins-port-number> in the shell. This will expose your local IP + Jenkins port publicly.
  • After the command is entered, you can see that it gives an HTTP URL in the console. You can use that URL in the GitHub webhook instead of mentioning localhost in it.


Push any new code to the repository and check the Jenkins job; you can see the code is pulled and the build completes.

Happy Integrations!!!!!!!


Why PDF is complicated in Mirth


This blog is about why PDF handling is complicated in Mirth and how we can split PDFs in Mirth.

Splitting and parsing PDFs is complicated in Mirth because of the libraries Mirth itself uses for some of its features.

Basically, in Java, if you want to split a PDF, parse PDF content, or manipulate PDF data, you would use a couple of well-known libraries such as

  1. PDFBox from Apache
  2. iText library (Licensed from 7.1 Version)

If you are going to use the unlicensed stable version of the iText library, then the best version is 5.5.1.

Problems with using PDF libraries in Mirth:

Mirth already uses these two libraries by default for other functionality.

For example: Mirth uses PDFBox v1.8.4 for the PDF viewer extension. If you provide a newer version of the PDFBox library in the custom-lib folder or any other location, it won't work.

This is because Mirth cannot identify which version of the library it has to select. You can see these two libraries in the location shown in the screenshot below:

How to use the PDFBox library:

The best library you can use to perform multiple PDF operations is Apache PDFBox.

First, Mirth needs to read the fonts of the PDF it is supposed to manipulate. To do that, we need to add another library called fontbox-1.8.4 inside the C:\Program Files\Mirth Connect\extensions\doc\lib location.

Then add this library path in destination.xml in C:\Program Files\Mirth Connect\extensions\doc as <library type="SERVER" path="lib/fontbox-1.8.4.jar" />

Another approach:

If you really do not want to use the PDF viewer functionality in Mirth, you can disable that extension and provide a later version (v2.5) of the PDFBox library in the same way mentioned above.

Note: Using the PDFBox library without following the above approach will not work; it will always throw an error.

Is Splitting PDF possible inside Mirth?

Yes, certainly possible.

There is a Java program already written using the Apache PDFBox library.

I used the same function and converted it to JavaScript. Here is a sample of that code converted to E4X JS.

Code converted from Java to JavaScript: Splitting PDF

// any suffix of your choice appended to the generated file names
var anyValueOfYourChoice = '';

var inputDocument = org.apache.pdfbox.pdmodel.PDDocument.loadNonSeq(
    new$('pdfReaderFilePath') + $('originalFilename')), null);
var stripper = new org.apache.pdfbox.util.PDFTextStripper();
var uuid = UUIDGenerator.getUUID();

for (var page = 1; page <= inputDocument.getNumberOfPages(); ++page) {
    // extract the text of the current page only
    stripper.setStartPage(page);
    stripper.setEndPage(page);
    var text = stripper.getText(inputDocument);

    // DataNeedToBeCheckedFor is the regex marker you split on
    var p = java.util.regex.Pattern.compile(DataNeedToBeCheckedFor);
    var m = p.matcher(text);

    // copy the current page into a new single-page document and save it
    var pdPage = inputDocument.getDocumentCatalog().getAllPages().get(page - 1);
    var outputDocument = new org.apache.pdfbox.pdmodel.PDDocument();
    outputDocument.addPage(pdPage);
    var outputFile = new$('newPdfReaderPath') + $('fileNameDocType') + '_' + anyValueOfYourChoice + '.pdf');;
    outputDocument.close();
}
inputDocument.close();
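
The splitting decision in that loop is driven by a plain java.util.regex check. Stripped of the PDFBox calls, the matching step looks like this (the marker pattern and page text below are made-up values for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PageMatchSketch {

    // returns the first match of the marker on a page's text, or null when absent
    static String findMarker(String pageText, String markerRegex) {
        Matcher m = Pattern.compile(markerRegex).matcher(pageText);
        return m.find() ? : null;
    }

    public static void main(String[] args) {
        // hypothetical marker used to decide where one document ends and the next begins
        String marker = "Patient ID: \\d+";
        System.out.println(findMarker("Discharge Summary\nPatient ID: 12345", marker));
        System.out.println(findMarker("blank page", marker));
    }
}
```

In the transformer above, `DataNeedToBeCheckedFor` plays the role of `markerRegex`.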

What is Meaningful Use? (MU)


Let us try to understand Meaningful Use (MU) as simply as possible. If any in-depth standards information regarding Meaningful Use is needed, I suggest this blog.

Imagine there exists a physician named Dr. John Doe.

Dr. John Doe practices medicine in some part of the USA. He has a clinic (outpatient only) where the information about recurring and critical patients is stored in large paper files.

Imagine that Dr. John Doe has been consulting in his clinic for 10 years and the data of his patients keeps piling up. Day by day, the time required to look up a recurring patient is increasing. Last month, there was a bad water leak on the top floor of his clinic, damaging at least 5 years of patient information. That's a serious problem which puts his practice in jeopardy.

Dr. John Doe & Meaningful Use

Dr. John Doe now takes a wise decision: converting all his paperwork to digitized, electronic health records in a computer system. But what's stopping him is the cost involved in getting this done.

To get this transition done, Dr. John Doe has to purchase an EMR first, implement it in his clinic, set up the infrastructure for it, and train his receptionists and nurses to use it. All of this comes with a cost. But Dr. John Doe was happy when he came to know that the government provides incentives for this adoption. However, Dr. John can't just buy any EHR and leave it at that; instead, he has to demonstrate meaningful use of this EHR.

So, to sum it all up, Meaningful Use is the adoption of certified EHR technology.

This means Dr. John Doe cannot buy just any EHR, but an EHR that is certified by the appropriate authority. The authority that certifies should check the basic eligibility criteria of the EHR. This in fact forms the first of the three stages of Meaningful Use.


Now Dr. John Doe is still not clear and expects a clear-cut answer to the following questions, as shown in the picture.

Answer for First Question:

The answer to the first question is somewhat clear to John Doe, but to provide a clear-cut explanation: the physician has to do two things to qualify for the money.

  1. Get a government certified EHR
  2. Demonstrate meaningful use of the EHR. This means showing proof to the government authority that the EHR is properly implemented according to their expected criteria.

The criteria are provided as a list of 25 aspects by the Department of Health & Human Services, and the list is broken into two major parts.

  1. Core Set
  2. Menu Set

Core Set: This is the basic list of 15 aspects that every EHR must satisfy. Whenever an EHR is submitted for certification, the certification bodies will test that all the core set objectives are available in the EHR.

Menu Set: The Menu Set is a list of 10 requirements, out of which the EHRs have the liberty to choose any 5 aspects. In other words, not all 10 aspects are required to pass the certification.

So basically, a minimum of 20 aspects are required to be eligible for the incentive criteria. If Dr. John Doe is going to purchase an EHR, he must make sure it is certified for at least those 20 aspects. Now Dr. John Doe has the answer to his first question.

Answer for the Second question:

The answer to the second question heavily depends on which incentive program the physician participates in. There are two incentive programs:

  1. Medicare
  2. Medicaid.

With Medicare, physicians can get up to 75% of the incentives, up to $44,000 over four years. With Medicaid, on the other hand, if the physician sees more than 30% Medicaid patients, he can get up to $64,000 over six years.

This means the sooner healthcare providers adopt the system, the more money they will receive as incentives. To proceed further with this answer: Medicare and Medicaid are basically insurance programs run by the US government.

Medicare applies to patients who are more than 65 years of age, irrespective of gender, while Medicaid applies to underprivileged people. These incentive schemes were brought into effect by HITECH, the Health Information Technology for Economic and Clinical Health Act.

HITECH is one of the acts passed along with the stimulus package, a.k.a. the ARRA (American Recovery and Reinvestment Act), signed by President Obama in 2009 as part of the meaningful use development process.

Unzip .zip files in Mirth

Create the source connector as a File Reader, point it at the ZIP file directory, and make sure you enable the read-as-binary radio button, as shown below.

unzip files

Now, the logic is that the .zip file will be read by the Mirth engine, and the data inside the zip file will be consumed as base64 content and written to the output folder.

Provide the below code in the transformer editor.

// path at which the ZIP files have to be unzipped
var destinationPath = "C:\\Projects\\NON-EAI-PROJECTS\\tmp";
var buffer_value = 1024;

// the incoming message is the base64 encoded content of the ZIP file
var strBase64Data = connectorMessage.getRawData();
var decodedBytes = FileUtil.decode(strBase64Data);

// process all zipped files
var is = new;
var zis = new;

var entry;
while ((entry = zis.getNextEntry()) != null) {

    // save file
    var count;
    var buffer = java.lang.reflect.Array.newInstance(java.lang.Byte.TYPE, buffer_value);

    var fileOut = new + "\\" + entry.getName());
    var fos = new;

    // read byte content from the zipped entry and write it to the output file
    while ((count =, 0, buffer_value)) != -1) {
        fos.write(buffer, 0, count);
    }
    fos.close();
}
zis.close();
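
Outside of Mirth, the same decode-and-unzip flow can be sketched in plain Java, which is handy for verifying the logic before wiring it into a channel (file I/O is replaced by in-memory streams; the entry name and content are illustrative):

```java
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;

public class UnzipSketch {

    // decodes base64 zip content and returns entry name -> bytes for every entry
    static Map<String, byte[]> unzip(String base64Zip) throws IOException {
        Map<String, byte[]> files = new LinkedHashMap<>();
        byte[] decoded = Base64.getDecoder().decode(base64Zip);
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(decoded))) {
            ZipEntry entry;
            byte[] buffer = new byte[1024];
            while ((entry = zis.getNextEntry()) != null) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                int count;
                while ((count =, 0, buffer.length)) != -1) {
                    out.write(buffer, 0, count);
                }
                files.put(entry.getName(), out.toByteArray());
            }
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        // build a small zip in memory so the example is self-contained
        ByteArrayOutputStream zipBytes = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(zipBytes)) {
            zos.putNextEntry(new ZipEntry("adt.hl7"));
            zos.write("MSH|^~\\&|...".getBytes());
            zos.closeEntry();
        }
        String base64 = Base64.getEncoder().encodeToString(zipBytes.toByteArray());
        System.out.println(unzip(base64).keySet()); // prints [adt.hl7]
    }
}
```

In the transformer, `FileUtil.decode` and `FileOutputStream` take the place of `Base64.getDecoder()` and the in-memory map.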


Fetching Data From APIGEE and pushing into Mirth Via RabbitMQ

This will be an interesting blog post. In this post I will explain the following:

  1. Fetching the data from APIGEE (URL re-routing) in Java.
  2. Pushing the fetched data from APIGEE into a RabbitMQ queue.
  3. Pulling the data from RabbitMQ with Mirth Connect.


APIGEE is available in both an enterprise and a free version. You can sign in to Apigee. There are various purposes and use-cases for Apigee, but I'm currently using it as a URL re-routing mechanism.

Take any source of JSON data that is freely available. Sign in to APIGEE and click on API Proxies. In the page that opens, click the +PROXY button at the top right. This will open a new page with the following information:


Once the above screen appears, select the first option, “REVERSE PROXY”; this will act as the URL re-routing mechanism. You will have an actual URL, but you will not use that URL for communicating with clients; instead, you will give out one URL which will be mapped to the original URL.

Click next on selecting the first option. Then you will see the below screen as shown:


In the above screen, in Proxy Name you have to fill out the name that you wish to give as the proxy name; in my case I have provided (vibinces-eval-test). In Proxy Base Path you need to provide a sub-context for your API; I have provided (apigeejsonprofile). In Existing API you need to provide the full URL path of the existing JSON API. Description is an optional field; you can provide it or not.

Once it is created, my URL looked like this; you will get a URL with the name of your choice. In the Security tab, it is advised to select CORS headers on browse, because it is always possible to get a cross-origin error when you try to access data from browsers that are not verified properly. I'm also using no authorization for the API.


In the next tab you can see how the provided data is converted into your URL. It is also fascinating that APIGEE provides you two types of URLs: one that can be used in testing (BETA) and one for PROD.


Now your URL re-router is created. That is, if you hit the URL you can see the JSON data that actually belongs to some other URL.


I’m going to create the below two classes:



import java.net.HttpURLConnection;
import java.net.URL;

public class FetchJsonFromApigee {

    public static String call_me() throws Exception {
        String url = "<your Apigee proxy URL>";
        URL obj = new URL(url);
        HttpURLConnection con = (HttpURLConnection) obj.openConnection();
        con.setRequestProperty("User-Agent", "Mozilla/5.0");
        int responseCode = con.getResponseCode();
        System.out.println("Response Code : " + responseCode);
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        String inputLine;
        StringBuffer response = new StringBuffer();
        while ((inputLine = in.readLine()) != null) {
            response.append(inputLine);
        }
        in.close();
        System.out.println("response : " + response.toString());
        return response.toString();
    }

    public String sendingMessage() throws Exception {
        String pushedJsonMessage = FetchJsonFromApigee.call_me();
        return pushedJsonMessage;
    }
}
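
To try the fetch logic without an Apigee account, the same readLine/append loop can be pointed at a throwaway local HTTP server. This is only a test harness sketch; the context path and JSON body here are made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class FetchSketch {

    // reads the full body of a URL, the same loop FetchJsonFromApigee uses
    static String fetch(String url) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestProperty("User-Agent", "Mozilla/5.0");
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        StringBuilder response = new StringBuilder();
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            response.append(inputLine);
        }
        in.close();
        return response.toString();
    }

    public static void main(String[] args) throws Exception {
        // stand-in for the Apigee proxy: a throwaway local server returning JSON
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] body = "{\"status\":\"ok\"}".getBytes();
        server.createContext("/apigeejsonprofile", exchange -> {
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        String json = fetch("http://localhost:" + server.getAddress().getPort() + "/apigeejsonprofile");
        server.stop(0);
        System.out.println(json); // {"status":"ok"}
    }
}
```

Swap the local URL for your Apigee proxy URL and the method behaves exactly like `call_me()` above.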


import java.util.concurrent.TimeoutException;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

/**
 * @author Vibinchander.V
 */
public class PushApigeeDataToRabbitMQ {

    private final static String QUEUE_NAME = "TestQueuing";

    public static void passMessage(String message) throws IOException, TimeoutException {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a broker on localhost
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
        System.out.println(" [x] Sent '" + message + "'");
        channel.close();
        connection.close();
    }

    public static void main(String[] args) throws IOException, TimeoutException {
        FetchJsonFromApigee getData = new FetchJsonFromApigee();
        String passMessage = null;
        try {
            passMessage = getData.sendingMessage();
        } catch (Exception e) {
            e.printStackTrace();
        }
        passMessage(passMessage);
        System.out.println("Executed Main Method !!!");
    }
}

When you run the first class, you can see that the data is fetched from APIGEE and pushed into the RabbitMQ message queue.

3. Write a JAR file that will pull data from RabbitMQ:

I'm writing the below class. This class will be used inside Mirth and will act as a consumer to pull the data out of RabbitMQ.

import java.util.concurrent.TimeoutException;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.QueueingConsumer;

/**
 * @author Vibinchander.V
 */
public class QueueConsumer {

    public String returnMessage(String queueName) throws IOException, TimeoutException, InterruptedException {
        ConnectionFactory factory = new ConnectionFactory();
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(queueName, false, false, false, null);
        boolean noAck = false;
        QueueingConsumer consumerVal = new QueueingConsumer(channel);
        channel.basicConsume(queueName, noAck, consumerVal);

        // block until a single message arrives, acknowledge it and return its body
        QueueingConsumer.Delivery delivery = consumerVal.nextDelivery();
        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        return new String(delivery.getBody());
    }
}

Package the above program into a JAR file and put it inside the custom-lib folder of Mirth; you might have to include other JAR files (such as the RabbitMQ client) as well. Inside the Mirth source connector JavaScript Reader, write the below code:

var queueConsumer = new org.envision.queuing.QueueConsumer();
msg = queueConsumer.returnMessage(“TestQueuing”);
return msg;
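
Conceptually, the whole pipeline is just a producer publishing to a named queue and a consumer blocking until a message arrives. The same pattern, in-process with java.util.concurrent instead of the RabbitMQ client (illustration only, not the RabbitMQ API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueFlowSketch {

    // publish one message and block until it is consumed again
    static String roundTrip(String message) throws InterruptedException {
        // stand-in for the RabbitMQ queue named "TestQueuing"
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        // producer side: what PushApigeeDataToRabbitMQ does with basicPublish
        queue.put(message);

        // consumer side: what QueueConsumer does with nextDelivery, blocking until a message exists
        return queue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(roundTrip("{\"patient\":\"John Doe\"}"));
    }
}
```

RabbitMQ adds durability, acknowledgements and cross-process delivery on top of these same put/take semantics.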

When you run the first class, you can see that data is fetched from Apigee, pushed to the RabbitMQ queue, and immediately pulled by the Mirth consumer.

Happy Integrations!!!!!


How to install and work with RabbitMQ – Part 1

This blog is about installing and working with one of the most useful message queuing systems, RabbitMQ.

Before beginning, it is important for us to know what it actually does. RabbitMQ is a server mainly used for better queuing of messages. It uses a protocol called AMQP (Advanced Message Queuing Protocol), which enables data to be queued at a better rate and then processed.

RabbitMQ is built in Erlang, and it uses the Open Telecom Platform framework for clustering and failover.

Previously there was a disadvantage in using RabbitMQ because everything had to be handled via the command prompt, and many people who use this server are not tech savvy enough for that. In a later version they created a management console that enables even a layman to work with it comfortably.

It is also important to understand that RabbitMQ is not alone: Apache has a similar kind of server. Apache's distribution of the queuing mechanism is called Apache Kafka. We will discuss both in forthcoming posts.


Step 1: Install Erlang. We need to install Erlang and its corresponding components before installing RabbitMQ. RabbitMQ will not work if Erlang is not installed. Download the latest version of Erlang from here.

Step 2: Once the installation is done, you have to set the ERLANG_HOME environment variable. If you are using Windows 10, you don't need to set up the environment variable; it will be set automatically, as shown below.

Step 3: Install the latest version of RabbitMQ from here. This installation is pretty straightforward. If the installation of Erlang went through without any problem, then this will also work as smooth as butter.

Step 4: RabbitMQ runs as a Windows service by default; we don't need to invoke it explicitly. But at this stage you would have to do everything through the command line interface, which is tedious. So it is better to work with the web management console instead.

  • Open command line interface with Admin access.
  • Navigate to C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.3.4\sbin 
  • Run the command rabbitmq-plugins.bat enable rabbitmq_management to enable the plugin of web management
  • Once the above command is done, reinstall the RabbitMQ service using the following commands:
  • rabbitmq-service.bat stop
    rabbitmq-service.bat install
    rabbitmq-service.bat start
  • Note: you can remain in the same directory to perform this.


If everything works fine as shown above in the screenshot, then you can now open the web console of RabbitMQ. You can access it using the URL http://localhost:15672/ and the default username and password for this console are guest / guest.

Will post more POCs integrating RabbitMQ with Mirth Connect in future posts.

Happy Integrations!!!!!




Automate Import/Export channels functionality – Part2

This post contains the code needed for Mirth channel (B) on server 2.

Basically, this channel will read the JSON message, decode the incoming encoded message, then automatically import those channels and deploy them. This channel is responsible for all the importing operations, and this happens without any manual intervention.

Please use the below code in the source transformer of the channel and proceed to connect Mirth server 1 and Mirth server 2.


//Define and initialize Mirth controller instances
var channelController = ChannelController.getInstance();
var codeTemplateController = CodeTemplateController.getInstance();
var configurationController = ConfigurationController.getInstance();

//Get list of existing libraries & channel dependencies here
var existingLibraries = codeTemplateController.getLibraries(null, true);
var channelDependencies = configurationController.getChannelDependencies();
var restoreChannelGroups = channelController.getChannelGroups(null);
var restoreLibraries = codeTemplateController.getLibraries(null, true);
var restoreChannelTagSet = configurationController.getChannelTags();
var restoreChannelDependencies = configurationController.getChannelDependencies();
var restoreDeployedChannels = channelController.getDeployedChannels(null);

//Get channel metadata
var channelMetaDataMap = configurationController.getChannelMetadata();

var abortDeploymentAndRestoreBackup = false;
var fileSuffix = DateUtil.getCurrentDate("MMddyyyyHHmmss");
var backupFileName = "channel backups/backup-" + fileSuffix + ".json";
channelMap.put(“backupFileName”, backupFileName);

backup(restoreChannelGroups, restoreLibraries, restoreChannelTagSet, restoreChannelDependencies, restoreDeployedChannels);

var serializer = ObjectXMLSerializer.getInstance();
var jsonMessage = msg;
var toBeDeployedList = new;
var groups = new;

//Populate existing channel groups on the mirth instance
var existingGroups = channelController.getChannelGroups(null);
if (existingGroups === null || existingGroups === undefined) {
existingGroups = new;

//Iterate through the groups (from the JSON message received)
for (var groupCounter = 0; groupCounter < jsonMessage.length && !abortDeploymentAndRestoreBackup; groupCounter++) {
var currentChannelGroup = jsonMessage[groupCounter];

var groupAlreadyPresent = false;
var indexFound = -1;
//Check to see if the group already exists in Mirth by iterating through the existing groups
for (existingCounter = 0; existingCounter < existingGroups.size(); existingCounter++) {
if (existingGroups.get(existingCounter).getId().equals(currentChannelGroup.groupId)) {
groupAlreadyPresent = true;
indexFound = existingCounter;
//If group is already present, then get copy of that in a variable. If not, then add it to existing groups list.
var chGroup = null;
if (!groupAlreadyPresent) {"Group NOT present. Creating a new one");
var chGroup = new, "");
} else {"Group Already present");
chGroup = existingGroups.get(indexFound);

//Parse channels element from json message and iterate through the channels
var channels = currentChannelGroup.channels;
for (channelCounter = 0; channelCounter < channels.length && !abortDeploymentAndRestoreBackup; channelCounter++) {
var decodedChannel = new[channelCounter]));
logger.debug("Decoded Channel for import:" + decodedChannel);
var channelObject = serializer.deserialize(decodedChannel, Channel);
//Get channel dependency list from decoded channel being imported
var dependentIDS = channelObject.getExportData().getDependentIds().iterator();
var dependencyIDS = channelObject.getExportData().getDependencyIds().iterator();
//Before importing a new channel, try to stop and undeploy existing channel (max 5 times) after waiting for a second each time.
//If deploy fails, abort import with message and move over to next channel
for (retryCount = 0; retryCount < 5; retryCount++) {
if (retryCount >= 1) {;
if (ChannelUtil.isChannelDeployed(channelObject.getId())) {
ChannelUtil.stopChannel(channelObject.getId());"Request raised for stopping channel: " + channelObject.getName());
ChannelUtil.undeployChannel(channelObject.getId());"Request raised for undeploying channel: " + channelObject.getName());
} else {"Channel is undeployed:" + channelObject.getName());

if (ChannelUtil.isChannelDeployed(channelObject.getId())) {
logger.error("Aborting import of channel as it is still deployed:" + channelObject.getName());
abortDeploymentAndRestoreBackup = true;

//Get code template libraries details linked with a channel, iterate through it and import those first before channel.
var chLibraries = channelObject.getExportData().getCodeTemplateLibraries();
for (ch1 = 0; ch1 < chLibraries.size(); ch1++) {
var currentLibrary = chLibraries.get(ch1);
var isLibraryAlreadyPresent = false;
//Find if the code template library is already present in mirth, then overwrite and update that library. Else import a new one.
for (exLibraryCounter = 0; exLibraryCounter < existingLibraries.size(); exLibraryCounter++) {
if (existingLibraries.get(exLibraryCounter).getId().equals(currentLibrary.getId())) {
isLibraryAlreadyPresent = true;
existingLibraries.set(exLibraryCounter, currentLibrary);
//Add new library, if not already present
if (!isLibraryAlreadyPresent) {
//Find list of code templates from the library and update & import them in mirth
var chTemplates = chLibraries.get(ch1).getCodeTemplates();
for (ch2 = 0; ch2 < chTemplates.size(); ch2++) {
var codeT = chTemplates.get(ch2); //serializer.deserialize(chTemplates.get(ch2),;
codeTemplateController.updateCodeTemplate(codeT, null, true);
codeTemplateController.updateLibraries(existingLibraries, null, true);
//Import the new channel in mirth and add it deployment list
channelController.updateChannel(channelObject, null, true);
var channelAlreadyFound = false;
//Check to see, if channel is already linked to channel group. If so, no need to link it again. If not, then add to channel group.
for (existingChannelCounter = 0; existingChannelCounter < chGroup.getChannels().size(); existingChannelCounter++) {
if (chGroup.getChannels().get(existingChannelCounter).getId().equals(channelObject.getId())) {
logger.debug(channelObject.getId() + " channel id already found. No need to add again to the group. - " + channelObject.getName());
channelAlreadyFound = true;
if (!channelAlreadyFound) {
//Update dependency and dependent Id list (in mirth memory for now)
while (dependentIDS.hasNext()) {
var dependentId =;
if (dependentId != null && dependentId !== undefined && !dependentId.equals(channelObject.getId())) {
channelDependencies.add(new, channelObject.getId()));
while (dependencyIDS.hasNext()) {
var dependencyId =;
if (dependencyId != null && dependencyId !== undefined && !dependencyId.equals(channelObject.getId())) {
channelDependencies.add(new, dependencyId));
//Clear the channel's export data (code template libraries and dependencies) from memory before moving to the next channel in the channel group.
//If import of channels is done successfully, then set channel dependencies and update channel groups and deploy channels
if (!abortDeploymentAndRestoreBackup) {
//Update dependency and dependent Id list (in mirth persistence)

//Convert list of channel groups to be updated to a set and update channel groups
var newGroups = new;
for (i = 0; i < existingGroups.size(); i++) {
channelController.updateChannelGroups(newGroups, null, true);

//Deploy all the channels that were earlier added to deployment list.
for (k = 0; k < toBeDeployedList.size(); k++) {

//Wait for 5 seconds before deployment is verified.; // Not mandatory

//Identify if channel is not deployed even after 5 seconds.
for (k = 0; k < toBeDeployedList.size(); k++) {
if (!ChannelUtil.isChannelDeployed(toBeDeployedList.get(k))) { + " channel is not deployed yet after the import, so recovery process would start now.");
abortDeploymentAndRestoreBackup = true;

if (abortDeploymentAndRestoreBackup) {
//Read json back up file for restoring previous version
var jsonBackupObject = JSON.parse(;

//Get list of deployed channels from backup file
var deployedChannelIds = jsonBackupObject.deployedChannelIds;
var deserializerRestore = ObjectXMLSerializer.getInstance();

//Get list of channel groups from backup file
var channelGroupSetForRestore = new;
for (i = 0; i < jsonBackupObject.encodedChannelGroups.length; i++) {
channelGroupSetForRestore.add(deserializerRestore.deserialize(decode(jsonBackupObject.encodedChannelGroups[i]), ChannelGroup));

//Get list of code template libraries from backup file
var codeTemplateLibrariesForRestore = new;
for (i = 0; i < jsonBackupObject.encodedCodeTemplateLibraries.length; i++) {
codeTemplateLibrariesForRestore.add(deserializerRestore.deserialize(decode(jsonBackupObject.encodedCodeTemplateLibraries[i]), CodeTemplateLibrary));

//Get channel tags from backup file
var channelTagSetForRestore = new;
for (i = 0; i < jsonBackupObject.encodedChannelTags.length; i++) {
channelTagSetForRestore.add(deserializerRestore.deserialize(decode(jsonBackupObject.encodedChannelTags[i]), ChannelTag));

//Get channel dependencies from backup file
var channelDependenciesSetForRestore = new;
for (i = 0; i < jsonBackupObject.encodedChannelDependencies.length; i++) {
channelDependenciesSetForRestore.add(deserializerRestore.deserialize(decode(jsonBackupObject.encodedChannelDependencies[i]), ChannelTag));

//Revert code templates and libraries
for (lb = 0; lb < codeTemplateLibrariesForRestore.size(); lb++) {
var currLibrary = codeTemplateLibrariesForRestore.get(lb);
for (ct = 0; ct < currLibrary.getCodeTemplates().size(); ct++) {
codeTemplateController.updateCodeTemplate(currLibrary.getCodeTemplates().get(ct), null, true);

//Revert channel code
var channelGrpIterator = channelGroupSetForRestore.iterator();
while (channelGrpIterator.hasNext()) {
var channelGroupToBeRestored =;
for (k = 0; k < channelGroupToBeRestored.getChannels().size(); k++) {
var channelToBeRestored = channelGroupToBeRestored.getChannels().get(k);
channelController.updateChannel(channelToBeRestored, null, true);

//Revert libraries, channel groups, channel tags and channel dependencies by calling mirth classes with values parsed from backup file.
codeTemplateController.updateLibraries(codeTemplateLibrariesForRestore, null, true);
channelController.updateChannelGroups(channelGroupSetForRestore, null, true);

//Deploy old version channels
for (i = 0; i < deployedChannelIds.length; i++) {

If you look at the code closely, you can see that it relies on three helper functions. You can place the functions below either in the code template area or in the transformer itself.

//Function to decode the value and return a string
function decode(value) {
    return new java.lang.String(FileUtil.decode(value));
}
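
Under the hood this helper is just a base64 decode back to a string. The equivalent in plain Java, for reference:

```java
import java.util.Base64;

public class DecodeSketch {

    // mirrors the transformer's decode(value) helper
    static String decode(String value) {
        return new String(Base64.getDecoder().decode(value));
    }

    public static void main(String[] args) {
        String encoded = Base64.getEncoder().encodeToString("<channel/>".getBytes());
        System.out.println(decode(encoded)); // <channel/>
    }
}
```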

//Function to get an array of objects with xml representation of collection of objects
function getXml(collection) {
    var returnList = [];
    var counter = 0;
    var backupSerializer = ObjectXMLSerializer.getInstance();
    var iterator = collection.iterator();
    while (iterator.hasNext()) {
        var object =;
        var writerObject = new;
        backupSerializer.serialize(object, writerObject);
        returnList[counter++] = FileUtil.encode(writerObject.toString().getBytes());
    }
    return returnList;
}

//This function takes a backup of the entire set of channel groups, code templates/libraries, channel tags, dependencies and the list of deployed channels at that point in time.
//The backup is kept in a JSON file with its content base64 encoded.
function backup(channelGroups, codeTemplateLibraries, channelTags, restoreChannelDependencies, restoreDeployedChannels) {
if (channelMetaDataMap != null) {
for (i = 0; i < channelGroups.size(); i++) {
var currentChannelGroup = channelGroups.get(i);
for (j = 0; j < currentChannelGroup.getChannels().size(); j++) {
var channelSetId = new;

var currentChannels = channelController.getChannels(channelSetId);
if (currentChannels != null) {
var currentChannel = currentChannels.get(0);
currentChannelGroup.getChannels().set(j, currentChannel);
var restoreDeployedChannelIds = new;
if (restoreDeployedChannels != null) {
for (i = 0; i < restoreDeployedChannels.size(); i++) {
var encodedChannelGroups = getXml(channelGroups);
var encodedCodeTemplateLibraries = getXml(codeTemplateLibraries);
var encodedChannelTags = getXml(channelTags);
var encodedChannelDependencies = getXml(restoreChannelDependencies);
var backup = {};
backup.encodedChannelGroups = encodedChannelGroups;
backup.encodedChannelTags = encodedChannelTags;
backup.encodedCodeTemplateLibraries = encodedCodeTemplateLibraries;
backup.encodedChannelDependencies = encodedChannelDependencies;
backup.deployedChannelIds = restoreDeployedChannelIds;

var outputBackup = JSON.stringify(backup);
FileUtil.write(backupFileName, false, JsonUtil.prettyPrint(outputBackup));

Yup. That's how you can automate the tool and save much of your valuable time. Happy Automation!

Automate Import/Export channels functionality – Part 1

This is a weird experiment.

In case we want to automate the exporting/importing of channels in Mirth, this feature will be very helpful. The user provides the IDs of the channel groups that need to be exported from one Mirth server and imported into another, and the operation is performed without any manual intervention.

The Mirth channel (A) on SERVER1 will accept all the channel group IDs as comma-separated values, export the entire channel group along with the code templates and dependencies attached to the channels of the group, and generate a JSON containing all the exported values in base64-encoded format.

The Mirth channel (B) on SERVER2 will consume this JSON data, decode the encoded strings, automatically import those channels along with their group and code templates, and deploy them.
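As a rough sketch, based on the field names built up in the transformer code that follows (the group name, ids and base64 values here are hypothetical), the JSON handed from channel A to channel B looks like:

```json
{
  "Manifest": [
    {
      "groupInfo": "Lab Interfaces",
      "channelNames": [
        { "channelName": "ADT-Inbound", "Library": ["Shared Utils"] }
      ]
    }
  ],
  "ChannelExportData": [
    {
      "groupId": "a1b2c3d4-...",
      "groupName": "Lab Interfaces",
      "channels": ["PGNoYW5uZWw+Li4uPC9jaGFubmVsPg=="]
    }
  ]
}
```

The Manifest gives a human-readable summary; ChannelExportData carries the actual channel XML, base64 encoded so it survives the JSON round trip.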

Server 1 – Mirth Channel (A):

This channel will consume the group IDs as comma-separated values and generate a JSON string out of them. Copy the code below into the source/destination transformer.


//Define and initialize all controller instances
var configurationController = ConfigurationController.getInstance();
var channelController = ChannelController.getInstance();
var codeTemplateController = CodeTemplateController.getInstance();
var serializer = ObjectXMLSerializer.getInstance();

//Define required variables.
var groupIds = new java.util.HashSet();

//Parse the group ids passed into an array using comma as a separator
var commaSeparatedGroupIds = connectorMessage.getRaw().getContent();
var arrayGroupIds = commaSeparatedGroupIds.split(",");
for (i = 0; i < arrayGroupIds.length; i++) {
    groupIds.add(arrayGroupIds[i]);
}

//Get channel groups and channel metadata and existing libraries.
var channelGroups = channelController.getChannelGroups(groupIds);
var channelMetaDataMap = configurationController.getChannelMetadata();
var libraries = codeTemplateController.getLibraries(null, true);

var output = [];

var newJsonObj = {};
newJsonObj.Manifest = [];
newJsonObj.ChannelExportData = [];

//Iterate through the channel groups (passed as input)
for (i = 0; i < channelGroups.size(); i++) {
    var channelGroup = channelGroups.get(i);
    var channelIds = new java.util.HashSet();
    var groupNameValue = channelGroup.getName();

    var groupNames = {};
    groupNames.groupInfo = String(channelGroup.getName());
    groupNames.channelNames = [];

    var channelGroupJson = {};
    channelGroupJson.groupId = String(channelGroup.getId());
    channelGroupJson.groupName = String(channelGroup.getName());
    channelGroupJson.channels = [];
    output[i] = channelGroupJson;

    //logger.info("CHANNEL GROUP EXPORTED WITH NUMBER OF CHANNELS: " + channelGroup.getChannels().size());
    //Iterate through all the channels in the group and add their ids to a set.
    for (channelCounter = 0; channelCounter < channelGroup.getChannels().size(); channelCounter++) {
        var currentChannelId = channelGroup.getChannels().get(channelCounter).getId();
        channelIds.add(currentChannelId);
    }

    //Load the channel objects based on the channel ids collected previously.
    var channels = channelController.getChannels(channelIds);

    //Iterate through the channels loaded previously and update the following for each channel
    //1. Export Data -> Metadata
    //2. Export Data -> Code template libraries
    //3. Export Data -> Channel Tags
    //4. Export Data -> Dependent Ids
    //5. Export Data -> Dependency Ids
    //Then convert each channel object into xml with base64 encoding
//Then convert that channel object into xml with base 64 encoding
    for (channelCounter = 0; channelCounter < channels.size(); channelCounter++) {
        var currentChannelId = channels.get(channelCounter).getId();
        var channelDetails = {};
        channelDetails.channelName = String(channels.get(channelCounter).getName());
        channelDetails.Library = [];

        var exportData = channels.get(channelCounter).getExportData();

        //1. Export Data -> Metadata
        if (channelMetaDataMap != null) {
            exportData.setMetadata(channelMetaDataMap.get(currentChannelId));
        }

        //2. Export Data -> Code template libraries attached to this channel
        var exportLibraries = new java.util.ArrayList();
        for (ctCounter = 0; libraries != null && ctCounter < libraries.size(); ctCounter++) {
            var library = libraries.get(ctCounter);
            //logger.info("library : " + library.getName());
            if (library.getEnabledChannelIds().contains(currentChannelId) ||
                (library.isIncludeNewChannels() && !library.getDisabledChannelIds().contains(currentChannelId))) {
                exportLibraries.add(library);
                channelDetails.Library.push(String(library.getName()));
            }
        }
        exportData.setCodeTemplateLibraries(exportLibraries);

        //3. Export Data -> Channel tags
        var exportTags = new java.util.ArrayList();
        var channelTagSet = configurationController.getChannelTags();
        var channelTags = null;
        if (channelTagSet != null) {
            channelTags = channelTagSet.iterator();
            while (channelTags.hasNext()) {
                var channelTag =;
                if (channelTag.getChannelIds().contains(currentChannelId)) {
                    exportTags.add(channelTag);
                }
            }
        }
        exportData.setChannelTags(exportTags);

        //4 & 5. Export Data -> Dependent/Dependency ids
        var dependentIds = new java.util.HashSet();
        var dependencyIds = new java.util.HashSet();
        var channelDependenciesSet = configurationController.getChannelDependencies();
        var channelDependencies = null;
        if (channelDependenciesSet != null) {
            channelDependencies = channelDependenciesSet.iterator();
            while (channelDependencies.hasNext()) {
                var channelDependency =;
                if (channelDependency.getDependencyId().equals(currentChannelId)) {
                    dependentIds.add(channelDependency.getDependentId());
                } else if (channelDependency.getDependentId().equals(currentChannelId)) {
                    dependencyIds.add(channelDependency.getDependencyId());
                }
            }
        }
        exportData.setDependentIds(dependentIds);
        exportData.setDependencyIds(dependencyIds);

        //Serialize the channel to xml and base64 encode it
        var writer = new;
        serializer.serialize(channels.get(channelCounter), writer);
        channelGroupJson.channels[channelCounter] = FileUtil.encode(writer.toString().getBytes());

        groupNames.channelNames[channelCounter] = channelDetails;
    }

    newJsonObj.Manifest[i] = groupNames;
}

newJsonObj.ChannelExportData = output;
var newJson = JSON.stringify(newJsonObj);

//Write the entire channel group and the base64 list of its channel xmls into a file at a defined location
FileUtil.write("C:/Labs/POC/Import_Export/output.json", false, JsonUtil.prettyPrint(newJson));
channelMap.put("output", JsonUtil.prettyPrint(newJson));




Create Automated Script for IT deployment

This is a hypothetical scenario:
Imagine a situation where you have developed all the channels required to build the interfaces, and now you are going to move your channels to the production or a beta-testing environment.

In this scenario you would want the channels to be imported into the Mirth instance in a specific environment. Imagine you don't have access to make this move; only the IT team is permitted to do it. The IT team will find it difficult to import the channels, as you cannot expect them to understand Mirth.

The easier way is to give them a command; they execute it and everything starts to work fine, i.e. an import command via the Mirth command prompt like this:

import "Your-channel-available-folder\20180312\EAI-Deployment Script Generator.xml" force

But this is again a manual process for the developers. Imagine that one day you have to send four or five channels; you would have to create this script by hand each time. To overcome this, we can write one channel that creates the script for all the channels that were deployed today.

The logic behind this is that whatever is developed and tested today, only those channels will be moved to beta testing or prod. Based on that scenario, I have built the below code.

var currentDate = DateUtil.getCurrentDate("yyyy-MM-dd");
var currentYear = currentDate.substring(0, 4);
var currentMonth = currentDate.substring(5, 7);
var currentDay = currentDate.substring(8, 10);
// java.util.Calendar months are 0-based, so subtract 1 for comparison
var georgianMonth = parseInt(currentMonth) - 1;
var scriptBuilder = new java.lang.StringBuilder();
var getScriptDate = DateUtil.getCurrentDate("yyyyMMdd");
// Initialize controller
var controller = com.mirth.connect.server.controllers.ControllerFactory.getFactory().createEngineController();
// Get the deployed channel ids
var channels = ChannelUtil.getDeployedChannelIds().toArray();

for each(channel in channels) {

    var dashboardStatus = controller.getChannelStatus(channel);
    // Calendar field constants: 1 = YEAR, 2 = MONTH (0-based), 5 = DAY_OF_MONTH
    var fetchLastDeployedDay = dashboardStatus.getDeployedDate().get(5);
    var fetchLastDeployedMonth = dashboardStatus.getDeployedDate().get(2);
    var fetchLastDeployedYear = dashboardStatus.getDeployedDate().get(1);

    if ((fetchLastDeployedYear == currentYear) && (fetchLastDeployedDay == currentDay) && (fetchLastDeployedMonth == georgianMonth)) {

        var getDeployedChannelName = dashboardStatus.getName();
        var deploymentScript = 'import ' + '"' + $('Eai_qa_path') + getScriptDate + '/' + getDeployedChannelName + '.xml' + '"' + ' force';
        var processedScript = deploymentScript.replace(/\//g, "\\");
        scriptBuilder.append(processedScript).append("\n");
    }
}

FileUtil.write("C:/Labs/POC/Import_Export/test.txt", false, scriptBuilder.toString());

Put this code in a JavaScript Listener and set it to run every 24 hours, i.e. every 24 hours one import script will be generated based on the channels that were developed and tested that day.
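Stripped of the Mirth API calls, the command-string construction can be tested standalone. The base path, datestamp and channel name below are hypothetical examples, not values from the blog's environment:

```javascript
// Build one Mirth CLI import line: join path + datestamp + channel name,
// quote it, append "force", then flip forward slashes to backslashes.
function buildImportCommand(basePath, scriptDate, channelName) {
    var line = 'import ' + '"' + basePath + scriptDate + '/' + channelName + '.xml' + '"' + ' force';
    return line.replace(/\//g, '\\');
}

var cmd = buildImportCommand('C:/EAI/qa/', '20180312', 'ADT-Inbound');
// cmd is: import "C:\EAI\qa\20180312\ADT-Inbound.xml" force
```

In the listener code above, `$('Eai_qa_path')` supplies the base path and the loop appends one such line per channel deployed today.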

Happy Automating!!!!

Function for Fetching the Complete System Specification

This blog provides a JavaScript function that fetches the complete specification of the system on which Mirth is installed.

The function does not require any input parameter; it fetches the system statistics in real time.

function fetchSystemConfigurations() {

    var systemConfiguration = '';

    //Convert a byte count into the largest unit whose rounded value is non-zero
    function memoryCalc(data) {
        var kb = data / 1024;
        var mb = kb / 1024;
        var gb = mb / 1024;

        var finalGb = Math.round(gb);
        var finalMb = Math.round(mb);
        var finalKb = Math.round(kb);

        var finalValue;

        if (finalGb != 0) {
            finalValue = finalGb + 'GB';
        } else if (finalMb != 0) {
            finalValue = finalMb + 'MB';
        } else if (finalKb != 0) {
            finalValue = finalKb + 'KB';
        } else {
            finalValue = data + 'Bytes';
        }
        return finalValue;
    }

    var runtime = java.lang.Runtime.getRuntime();
    var availableProcessors = "Available Processors : " + runtime.availableProcessors();
    var freeMemory = "Free Memory : " + memoryCalc(runtime.freeMemory());
    var osName = "OS Name : " + java.lang.System.getProperty('');
    var maximumMemory = "Maximum Memory : " + memoryCalc(runtime.maxMemory());
    var totalJVMMemory = "Total JVM Memory : " + memoryCalc(runtime.totalMemory());
    var javaVersion = "Java Version : " + java.lang.System.getProperty('java.version');
    var file = new'c:');
    var diskFreeSpace = "Disk Free Space : " + memoryCalc(file.getFreeSpace());
    var diskTotalSpace = "Total Disk Space : " + memoryCalc(file.getTotalSpace());
    var hostNameAndIP =;
    var splitData = hostNameAndIP.split("/");
    var hostName = "Host Name : " + splitData[0];
    var IP = "IP : " + splitData[1];
    var processorIdentifier = "Processor Identifier : " + java.lang.System.getenv("PROCESSOR_IDENTIFIER");
    var processorArchitecture = "Processor Architecture : " + java.lang.System.getenv("PROCESSOR_ARCHITECTURE");
    var javaClassPath = "Java Class Path : " + java.lang.System.getProperty("java.class.path");

    systemConfiguration = availableProcessors + "\n" + freeMemory + "\n" + osName + "\n" + maximumMemory + "\n" + totalJVMMemory + "\n" + javaVersion + "\n" + diskFreeSpace + "\n" + diskTotalSpace + "\n" + IP + "\n" + hostName + "\n" + processorIdentifier + "\n" + processorArchitecture;

    return systemConfiguration;
}

Put the above code in a code template library and call the function from anywhere: a transformer, a connector, or any other script.


The output of the code will be as follows:

Available Processors : 4
Free Memory : 116MB
OS Name : Windows 10
Maximum Memory : 228MB
Total JVM Memory : 201MB
Java Version : 1.8.0_151
Disk Free Space : 415GB
Total Disk Space : 465GB
IP :
Host Name : VIBV-BLR-02
Processor Identifier : Intel64 Family 6 Model 142 Stepping 9, GenuineIntel
Processor Architecture : AMD64
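The unit-conversion cascade at the heart of this function is plain JavaScript and can be checked outside Mirth. This standalone copy orders the branches so the largest non-zero rounded unit wins:

```javascript
// Report the largest unit whose rounded value is non-zero, else raw bytes.
function memoryCalc(data) {
    var kb = data / 1024;
    var mb = kb / 1024;
    var gb = mb / 1024;

    var finalGb = Math.round(gb);
    var finalMb = Math.round(mb);
    var finalKb = Math.round(kb);

    if (finalGb != 0) return finalGb + 'GB';
    if (finalMb != 0) return finalMb + 'MB';
    if (finalKb != 0) return finalKb + 'KB';
    return data + 'Bytes';
}

console.log(memoryCalc(116 * 1024 * 1024)); // prints "116MB"
```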

Happy Integration ………!!!!!!!!

Perform all File IO operations – Code Templates

In this post, I'm creating code templates that provide a one-stop solution for the problems we face in Mirth while doing file-based IO operations.

Imagine a case where you need to move a file from one directory to another via Mirth, instead of using the source or destination connectors provided by the tool.

In that case:
1. First we need to check whether the file exists.
2. Then we have to use the FileUtil.write function to write the file to the destination location.
3. Finally we have to delete the file from the source location.

The time required to complete this process is high, and we have to catch exceptions in the correct places. To avoid all this trouble, I have developed a Mirth code template library that is a one-stop solution for these problems.

This library utilizes the Apache Commons IO FileUtils class. Download the library from the link here. Under Binaries, select if you are using Windows and commons-io-2.6-bin.tar.gz if you are using Linux systems.


Once downloaded, navigate to the commons-io-2.6 folder and copy only the JAR file named commons-io-2.6.jar; you will find other JAR files alongside it, which you can ignore. Place the JAR in the custom-lib folder of your Mirth Connect installation directory. Once done, go to the Mirth Settings tab and click Reload Resource.

Add the following functions to your code template library:

Copy Directory To Directory:

function copyDirectoryToDirectory(sourceDirectory, destinationDirectory) {
    var srcDirectory = new;
    var destDirectory = new;
    try {, destDirectory);
    } catch (exp) {
        logger.error(exp);
    }
}

Call from a transformer:

copyDirectoryToDirectory("C:/Projects/PROJECTS/TEST/Sample Message/PDF-tests/sourcedirectory", "C:/Projects/PROJECTS/TEST/destinationDirectory");

Move File To Directory:

function moveFileToDirectory(sourceFileName, destinationDirectoryName) {
    var srcFile = new;
    var destDir = new;
    try {
        //false = do not create the destination directory if it is missing, destDir, false);
    } catch (exp) {
        logger.error(exp);
    }
}

Call from a transformer:

moveFileToDirectory("C:/Projects/PROJECTS/TEST/Sample Message/PDF-tests/test2.pdf", "C:/Projects/PROJECTS/TEST/Sample Message/");

Move Directory To Directory:

function moveDirectoryToDirectory(sourceDirectory, destinationDirectory) {
    var srcDirectory = new;
    var destDirectory = new;
    try {
        //FileUtils.moveDirectoryToDirectory requires a third boolean argument:
        //false = do not create the destination directory if it is missing, destDirectory, false);
    } catch (exp) {
        logger.error(exp);
    }
}

Call from a transformer:

moveDirectoryToDirectory("C:/Projects/PROJECTS/TEST/Sample Message/PDF-tests/sourcedirectory", "C:/Projects/PROJECTS/TEST/destinationDirectory");

Copy File To Directory:

function copyFileToDirectory(sourceFileName, destinationDirectoryName) {
    var srcFile = new;
    var destDir = new;
    try {, destDir);
    } catch (exp) {
        logger.error(exp);
    }
}

Call from a transformer:

copyFileToDirectory("C:/Projects/PROJECTS/TEST/Sample Message/PDF-tests/test.pdf", "C:/Projects/PROJECTS/TEST/Sample Message/");

Integrating – AWS EC2 (MySQL) to Mirth Engine

For this post, I have purchased a personal EC2 instance in the AWS environment (free tier for one year), specifically an Amazon AMI instance with a Fedora-based operating system.

On the remote EC2 system, MySQL is deployed and a database named test is created. Once the DB is created on the EC2 instance, you have to create a table with some sample patient demographic information.

How to access the AWS remote server?


Open the PuTTY client and put the AWS hostname in the Host Name or IP address text box, or put the elastic IP of your AWS instance there; it does not always need to be a complete hostname.

  1. Select SSH in the left-hand pane of PuTTY, then select Auth.
  2. Once Auth is selected, click the Browse button in the right-hand pane and select the private key you downloaded from Amazon.
  3. This private key is a .ppk file; it is required to establish SSH connectivity between your PuTTY client and the AWS system.


  1. Once this is done, click the Open button at the bottom.
  2. You will then be prompted for a username. If you purchased a Linux Ubuntu system, the default username is ubuntu.
  3. If you purchased a different system (here, an Amazon AMI system), the default username is ec2-user.
  4. You can install MySQL using the distribution-specific package manager. Amazon AMI Linux is Fedora-based; for Fedora, use the command below to install MySQL. Before executing it, run sudo yum update once.
dnf install mysql-community-server

For Debian/Ubuntu distributions, use the commands below to install MySQL:

  • sudo apt-get update
  • sudo apt-get install mysql-server

Once installation is done, log into your MySQL database on the AWS Linux box by typing the command below in the PuTTY session.

mysql -u root -p

Once you are at the mysql> prompt, type show databases;. Initially you will not have a database of your own, so create one with the command below.

create database test;

This will create a new database. Now we have to select this database and create tables in it. Use the command below to switch to it:

use test;

Here, test is the name of the database I'm creating; you can use any name you like. Then create a table with the patient demographic fields referenced by the reader code further below.
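The original screenshot of the table definition is not reproduced here. Reconstructed from the column names used in the JavaScript Reader code further below (the column types are my assumptions; the spellings, including patient_ethincity, match the reader code):

```sql
CREATE TABLE patient_information (
  pid                     INT PRIMARY KEY,
  patient_first_name      VARCHAR(100),
  patient_last_name       VARCHAR(100),
  patient_middle_name     VARCHAR(100),
  patient_suffix_name     VARCHAR(20),
  patient_date_of_birth   VARCHAR(20),
  patient_gender          VARCHAR(10),
  patient_age             INT,
  patient_address_1       VARCHAR(200),
  patient_address_2       VARCHAR(200),
  patient_emailAddress    VARCHAR(100),
  patient_telecom_number  VARCHAR(30),
  patient_race            VARCHAR(50),
  patient_ethincity       VARCHAR(50),
  patient_maritalstatus   VARCHAR(30),
  patient_language        VARCHAR(50),
  patient_country         VARCHAR(50),
  patient_state           VARCHAR(50),
  patient_city            VARCHAR(50),
  patient_zipcode         VARCHAR(15),
  patient_ssn             VARCHAR(15),
  patient_driver_license  VARCHAR(30)
);
```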


  1. Once you have created it, log on to your AWS web console and open the port in the security groups.
  2. Select the security group, click the Edit button, and add port 3306 on TCP.
  3. Only when you do this will the inbound socket of your remote system open, allowing your local Mirth system to establish communication with the system in the AWS environment.

If you install an application server such as Apache or Tomcat, you will similarly want to open the specific ports configured in httpd.conf or catalina.conf. Go to the security group on the AWS console and enable those inbound ports; only then will your IP+Port combination work in the browser. This IP+Port combination is technically referred to as a socket.

Create a channel in your local Mirth Connect, make the source a JavaScript Reader, and set a polling frequency of your choice to fetch the data from the DB. In the source connector area, provide the following code.

var dbConn;
// AWS MySQL credentials
var mySqlDriver = "com.mysql.jdbc.Driver";
// <server-ip> - your EC2 server IP (substitute your own)
// 3306 - MySQL default port number
// test - database on the remote server
var mySqlConnectionTemplate = "jdbc:mysql://<server-ip>:3306/test";
var mySqlUserName = "root";
var mySqlPassword = "<password>"; // substitute your MySQL password

// Create parent tag <PatientDemographics>
var patientDemographicsXml = new XML('<PatientDemographics></PatientDemographics>');
// Create parent for individual patient information <IndividualPatientInformation>
var individualPatientInfoXml = new XML('<IndividualPatientInformation></IndividualPatientInformation>');

try {
    // Open the MySQL connection
    dbConn = DatabaseConnectionFactory.createDatabaseConnection(mySqlDriver, mySqlConnectionTemplate, mySqlUserName, mySqlPassword);
    // Select statement; patient_information is the table name
    var result = dbConn.executeCachedQuery("select * from patient_information");

    // Loop through the result set
    while ( {
        individualPatientInfoXml['PatientId'] = result.getInt("pid");
        individualPatientInfoXml['PatientFirstName'] = result.getString("patient_first_name");
        individualPatientInfoXml['PatientLastName'] = result.getString("patient_last_name");
        individualPatientInfoXml['PatientMiddleName'] = result.getString("patient_middle_name");
        individualPatientInfoXml['PatientSuffixName'] = result.getString("patient_suffix_name");
        individualPatientInfoXml['PatientDateOfBirth'] = result.getString("patient_date_of_birth");
        individualPatientInfoXml['PatientGender'] = result.getString("patient_gender");
        individualPatientInfoXml['PatientAge'] = result.getInt("patient_age");
        individualPatientInfoXml['PatientAddress1'] = result.getString("patient_address_1");
        individualPatientInfoXml['PatientAddress2'] = result.getString("patient_address_2");
        individualPatientInfoXml['PatientEmailAddress'] = result.getString("patient_emailAddress");
        individualPatientInfoXml['PatientTelecomNumber'] = result.getString("patient_telecom_number");
        individualPatientInfoXml['PatientRace'] = result.getString("patient_race");
        individualPatientInfoXml['PatientEthincity'] = result.getString("patient_ethincity");
        individualPatientInfoXml['PatientMaritalStatus'] = result.getString("patient_maritalstatus");
        individualPatientInfoXml['PatientLanguage'] = result.getString("patient_language");
        individualPatientInfoXml['PatientCountry'] = result.getString("patient_country");
        individualPatientInfoXml['PatientState'] = result.getString("patient_state");
        individualPatientInfoXml['PatientCity'] = result.getString("patient_city");
        individualPatientInfoXml['PatientZipCode'] = result.getString("patient_zipcode");
        individualPatientInfoXml['PatientSSN'] = result.getString("patient_ssn");
        individualPatientInfoXml['PatientDriverLicense'] = result.getString("patient_driver_license");

        // Append this patient to the batch
        patientDemographicsXml.appendChild(individualPatientInfoXml);

        // Reset the holder for the next row
        individualPatientInfoXml = new XML('<IndividualPatientInformation></IndividualPatientInformation>');
    }

    msg = patientDemographicsXml;
    return msg;
} finally {
    if (dbConn) {
        dbConn.close();
    }
}
Once the connector code is in place, you will be able to fetch all the rows available in the database as a single batch instead of single-row entries, accumulating the data from every row.


The data will be accumulated inside Mirth as batch XML, and the output will look like the fragment below:

<PatientAddress1>No:8, washington, test drive</PatientAddress1>
<PatientAddress2>Oregan, detroit</PatientAddress2>
<PatientAddress1>4/12 Stevie Street, jj colony</PatientAddress1>
<PatientAddress2>Michigan, detroit</PatientAddress2>
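The fragment above shows only the address elements. The full shape of each batch entry, following the field names assigned in the reader code (values hypothetical), is:

```xml
<PatientDemographics>
  <IndividualPatientInformation>
    <PatientId>1</PatientId>
    <PatientFirstName>John</PatientFirstName>
    <PatientLastName>Doe</PatientLastName>
    <PatientDateOfBirth>1980-01-01</PatientDateOfBirth>
    <PatientGender>M</PatientGender>
    <PatientAddress1>No:8, washington, test drive</PatientAddress1>
    <PatientAddress2>Oregan, detroit</PatientAddress2>
    <!-- ...remaining demographic elements... -->
  </IndividualPatientInformation>
  <!-- one IndividualPatientInformation element per database row -->
</PatientDemographics>
```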

Happy Integrations !!!!!

