Saturday, June 20, 2020

Testing Your Java EE Thorntail Microservice

Context


Oops! We almost left out one important thing: we had a look at how to "dockerize" your "Hollow JAR", but we did not focus on implementing some sort of automated integration test for your web service. There are a lot of ways one can perform this exercise. One of the recurring practices is through the use of Postman, followed more recently by Postwoman.

For as long as your service is running, you can develop some really nice JavaScript processes that automate this for you through the tools mentioned above. Another way, for traditional "Java" developers, would be to go the "JUnit" route. Either way, at some point you will need your service to be running somewhere, somehow, waiting for client requests.

The JUnit route was later improved for Java EE integration testing through the configuration and use of two frameworks, Arquillian & ShrinkWrap, as top-ups to JUnit.


Arquillian & ShrinkWrap

The role that Arquillian plays is that of a "middle-man" between an artifact (.jar, .war, .sar, .ear) that you want to deploy and the container or application server you want to deploy to. There are two ways you can deploy your artifact: either you deploy the actual physical artifact that you have just built on your machine, or you programmatically build one for testing. To build one programmatically you use ShrinkWrap.


Getting Started

For those that already know, you will agree that setting up Arquillian against a standalone application server, in this case WildFly, is generally quite a bit of work, but at least you set it up once and the rewards are out of this world. On the other hand, I have noticed over the years (since 2013) that it has improved with regard to all the dependencies one must configure and the configuration files one needs to create. I am really happy with what they have done with it in Thorntail, and that's what we are going to look at for our Java EE microservice.

Once again we will continue working on the Thorntail repo we have been working on since our first article, Java EE Micro Services Using Thorntail, through to the one that followed, Dockerizing Your Java EE Thorntail Microservice.

Start by opening your "pom.xml" file and adding the following dependency. In the case of Thorntail, things are already packaged nicely for you, the developer: you are basically installing or including a "Fraction".

<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>arquillian</artifactId>
    <scope>test</scope>
</dependency>

Part of the normal process with Arquillian is to also include the JUnit dependencies / libraries, right? Ha ha ha ha, well, this Thorntail fraction already includes them. Currently it includes JUnit 4.12, so you may exclude that using Maven and include JUnit 5 instead if you want (see the sketch just below); for the purpose of getting you up and running with Thorntail, I am just going to keep it as is. This fraction also includes the Arquillian & ShrinkWrap libraries that you would otherwise have to configure separately, but not today!
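If you do decide to swap JUnit versions, a sketch of the Maven exclusion might look like the following, assuming the fraction pulls JUnit 4 in under the usual junit:junit coordinates; you would then declare JUnit 5 yourself, keeping in mind that Arquillian's JUnit 5 support is a separate integration:

<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>arquillian</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>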

The next thing is to configure our Maven testing plugin, the Maven Failsafe Plugin. This plugin was designed for integration testing, which is exactly what we want, since we want to call into our REST service when testing and get real results.

<plugin>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.22.2</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>


Test Case Implementation

Now let's get to the fun stuff, our test case implementation. Create a new test suite, aka test class. Make sure that the name of the class ends with "IT" (which stands for Integration Test), because by default Maven looks for classes whose names end with "IT" in order to run them as integration tests. For example, I named mine "SampleResourceIT". So let's move on to some action ...

Add an annotation at class level as follows :

...
@RunWith(Arquillian.class)
public class SampleResourceIT { }
...

You are instructing JUnit to run with Arquillian. Literally what's there. So this will not be a normal JUnit test case. You will notice that immediately after adding this annotation you will already have some errors. This is because Arquillian now wants to know how you would like to package the artifact that it should deploy.

So we should now add a new method with the @Deployment annotation. This annotation comes from the Arquillian framework, and the method is where we will build our artifact for Arquillian to deploy.

...
@Deployment
public static Archive createDeployment() { }
...

Now let's build our test ".war" file inside that method using the ShrinkWrap API.

...
@Deployment
public static Archive createDeployment() {
    WebArchive webArtifact = ShrinkWrap.create(WebArchive.class, "thorntail-test-api.war");
    webArtifact.addPackages(Boolean.TRUE, "za.co.anylytical.showcase");
    webArtifact.addAsWebResource("project-defaults.yml");

    // Print all files and included packages
    System.out.println(webArtifact.toString(true));

    return webArtifact;
}
...

So now we have built our small, simple test web archive. We are giving our war file a name, "thorntail-test-api.war"; I believe you know that you can name it anything you want, so you are not tied to naming it similarly to the original file name. The next thing we are doing is including our package, "za.co.anylytical.showcase", which contains pretty much our REST service and its business logic; in a nutshell, it contains our application's Java classes. The Boolean flag that precedes it tells ShrinkWrap to search the package recursively. The next part is about including any resource files we may want. I know that some of our actual REST configurations live in the file "project-defaults.yml", and to be as close as possible to our actual application we should include it in our test web archive. The last part prints everything that's in the archive, just to see what this test war file contains and to be sure that we have everything we want in there.
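As a side note, ShrinkWrap also lets you add classes one at a time if you ever want a leaner archive. A minimal sketch, where the class name is a made-up stand-in for one of your actual application classes:

...
// Hypothetical alternative: add only the classes the test needs
webArtifact.addClass(SampleResource.class);
...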


Great stuff. So now we want to write a test that simply calls our service and affirms that we reached the web resource just fine.

...
@Test
public void test_That_We_Reach_Our_WebResource_Just_Fine_Yea() throws Exception {
    Client client = ClientBuilder.newBuilder().build();
    WebTarget target = client.target("http://localhost:8881/that-service/text");
    Response response = target.request().get();
    int statusCode = response.getStatusInfo().getStatusCode();
    String responseBody = response.readEntity(String.class);
    assertEquals(200, statusCode);

    System.out.println("RESPONSE CODE : " + statusCode);
    System.out.println("RESPONSE BODY : " + responseBody);
}
...

We have a simple test case that uses the standard JAX-RS 2.x Client API, so no magic there. We build our client code, call the REST service we want to test, and then validate that we get an HTTP status code 200, which means that things went well. No issues, ZILCH!
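One small habit worth mentioning, which the test above skips: JAX-RS Client instances are relatively heavyweight, so it's good practice to release them when you are done. A sketch of what that would look like at the end of the test:

...
// Not in the original test: release client-side resources when finished
response.close();
client.close();
...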




These are my IntelliJ IDEA test run results. You can also run the whole thing with Maven as follows :


mvn clean install



This will perform a build while running the integration tests for us, which should give you results like the ones in the image below :



If you pay close attention to the image above, you will see that Arquillian started up WildFly for us, used ShrinkWrap to build a ".war" file, and deployed it for us, all done automatically. Together with this, JUnit kicked in our test case when Arquillian was done with the deploy. Now that's magic! KA-BOOM!

The full test case looks like this :

@RunWith(Arquillian.class)
public class SampleResourceIT {

    @Deployment
    public static Archive createDeployment() {
        WebArchive webArtifact = ShrinkWrap.create(WebArchive.class, "thorntail-test-api.war");
        webArtifact.addPackages(Boolean.TRUE, "za.co.anylytical.showcase");
        webArtifact.addAsWebResource("project-defaults.yml");

        // Print all files and included packages
        System.out.println(webArtifact.toString(true));

        return webArtifact;
    }

    @Test
    public void test_That_We_Reach_Our_WebResource_Just_Fine_Yea() throws Exception {
        Client client = ClientBuilder.newBuilder().build();
        WebTarget target = client.target("http://localhost:8881/that-service/text");
        Response response = target.request().get();
        int statusCode = response.getStatusInfo().getStatusCode();
        String responseBody = response.readEntity(String.class);
        assertEquals(200, statusCode);

        System.out.println("RESPONSE CODE : " + statusCode);
        System.out.println("RESPONSE BODY : " + responseBody);
    }
}


As usual you may ...

Leave some comments below, and you may check out the GitHub Source Code to validate the steps.



Thursday, June 18, 2020

Dockerizing Your Java EE Thorntail Microservice

Context


If you have read the article about Java EE Micro Services Using Thorntail, then it's about time we followed up on a question that you and some of the readers probably asked:

"What if I want to run this as a docker service?"

Well, now we can look into that. There are various ways one can achieve this; the common way has been through the use of Dockerfiles. Maven users now have an alternative in the Docker Maven Plugin, which inherits a lot of its behavior from a Dockerfile. I believe the Gradle developers may also have something similar; anyway, maybe one day we can look into it.



Starting Up The Configuration


Reach out to your previous project, based on the linked article mentioned above, as we will be working on that. Open your pom.xml file. The plan is to use a Maven profile for this configuration, so that we isolate the operations we can perform using the Docker plugin. Add a new Maven profile :

<profile>
    <id>docker</id>
    <build>
        <plugins>
            <plugin>
                <groupId>io.fabric8</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <configuration>
                    <images>
                        <image>
                            <name>${project.build.finalName}</name>
                            <run>
                                <ports>
                                    <port>8881:8881</port>
                                </ports>
                            </run>
                            <build>
                                <from>java:openjdk-8-jdk</from>
                                <ports>
                                    <port>8881</port>
                                </ports>
                                <assembly>
                                    <basedir>/</basedir>
                                    <inline>
                                        <files>
                                            <file>
                                                <source>${project.build.directory}/${build.finalName}-thorntail.jar</source>
                                                <outputDirectory>/opt</outputDirectory>
                                            </file>
                                        </files>
                                    </inline>
                                </assembly>
                                <cmd>
                                    <shell>java -jar /opt/${build.finalName}-thorntail.jar -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true</shell>
                                </cmd>
                            </build>
                        </image>
                    </images>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>build</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>


That's all you need to get up and running, but before we run anything, let's get into a more detailed breakdown of what's happening.


The XML Explained


We have added the plugin with some configurations that I would like to delineate.

...
<name>${project.build.finalName}</name>
...

This is the name of the Docker image that your Java service will run from. So when you look at all your running containers, you should see the name you specified within these XML tags.
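For example, once the container is up, you can eyeball this with a plain Docker command (the --format string is optional; it just narrows the output to the relevant columns):

docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"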


...
<run>
    <ports>
        <port>8881:8881</port>
    </ports>
</run>
...

Here we define how the container port should be mapped to the host port. We know that our app was configured on port 8881 in our previous article, so for simplicity let's keep this mapping the same as our initial configuration.
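If it helps to anchor this, the <run>/<ports> block plays the same role as the -p flag on the plain Docker CLI; roughly (with a placeholder image name):

docker run -p 8881:8881 <your-image-name>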



...
<build>
    <from>java:openjdk-8-jdk</from>
    <ports>
        <port>8881</port>
    </ports>
    <assembly>
        <basedir>/</basedir>
        <inline>
            <files>
                <file>
                    <source>${project.build.directory}/${build.finalName}-thorntail.jar</source>
                    <outputDirectory>/opt</outputDirectory>
                </file>
            </files>
        </inline>
    </assembly>
    <cmd>
        <shell>java -jar /opt/${build.finalName}-thorntail.jar -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true</shell>
    </cmd>
</build>
...

We use the <build> tag to define how our custom Docker image should be built.

...
<from>java:openjdk-8-jdk</from>
...

We are basing our image on the java:openjdk-8-jdk image.

...
<assembly>
    <basedir>/</basedir>
    <inline>
        <files>
            <file>
                <source>${project.build.directory}/${build.finalName}-thorntail.jar</source>
                <outputDirectory>/opt</outputDirectory>
            </file>
        </files>
    </inline>
</assembly>
...

Part of building our image is including the files involved in our service. In this case we want our bootable / executable ".jar" included, since it's the actual service that we want to run. We also specify where the jar should be placed inside the Docker container; this destination path is "/opt". This means that the moment you have connected to the Docker container, you should be able to see the jar if you change to the "/opt" directory.
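If you ever want to double-check that, a quick way (substituting your actual container name) is to list the directory from outside:

docker exec -it <container-name> ls /opt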

...
<cmd>
    <shell>java -jar /opt/${build.finalName}-thorntail.jar -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true</shell>
</cmd>
...

This last part is the command to execute once the image is built and a container has been created for us. This entry point is just the same simple command that you would run on your local machine in order to bootstrap your Java API service, so you do the same thing here. I have also added some Java options to my command, which you may not need.
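To tie the whole <build> section together: it corresponds roughly to the Dockerfile sketch below. This is purely illustrative, since the plugin generates the real build context for you, and <final-name> stands in for your project's actual final name:

FROM java:openjdk-8-jdk
EXPOSE 8881
COPY target/<final-name>-thorntail.jar /opt/
CMD java -jar /opt/<final-name>-thorntail.jar -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true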



Building & Running Your Service

Cooooool stuff! So now we can try to run our application. To do that, all we need to do is build our image, then create our container and run it.


Let's build our service image using our new maven profile :


mvn docker:build -Pdocker



Next up we create and run our new container based on the new image :


mvn docker:start -Pdocker


And that's it, your container should be running. You can repeat the same REST API Postman test steps we did in our previous article, and all should work as expected. You can also check that it's running using the Docker list command, as follows :


docker container ls





Great stuff! So that's all you need to do to dockerize your executable ".jar" file. My thoughts are that you should be able to do the same with a Spring Boot application. You may refer to the other Docker Maven Plugin operations for more insight.


Leave some comments below, and you may check out the GitHub Source Code to validate the steps.

Friday, March 13, 2020

Docker Operations For MSSQL Server Using Unix Shell



Context


I am currently working on a piece of software inherited from another organization, and one of the technologies in use is MSSQL Server 2017. We work mostly on Unix-based OSes, and you can imagine that was a close call, given that for the longest time there was no SQL Server for any OS apart from Windows. We now have SQL Server for Linux variants, starting from SQL Server 2017. The trick, however, is that we don't have this for macOS. This means that if you are using macOS, one of the ways to get the database service up and running is to use Docker; my reason for saying this is that all non-Linux hosts use Linux VMs for their Docker containers. Ultimately this means that we can set up the database service inside a Linux container that's running in Docker.

That's fine; we worked with this idea when we took over the project mentioned above, and the setup works nicely. But one day our team was discussing the pain of backing up and restoring an MSSQL Server service that's running on Docker. Some of the challenges include:



Restoration Process

- Having to copy your database backup file over to the Docker container.
- Using a client app to locate the file.
- Setting up the restoration options.
- After restoration, possibly deleting the file from your container to free up space and avoid duplicate files.



Backup Process

- You back up to the default Docker container directory.
- You must copy the file from the Docker container for safekeeping.
- After you copy it, you may want to remove it from the container to free up space on your container.



These can be tiresome processes when you have to perform them on a daily basis. The idea of scripting also motivates configuration & automation of various processes, and, as much as there are many ways of solving a problem, I will share one of the ways we tackled these problems to assist the development team. We will start with the basic configuration, then the backup process, and then move on to the restore process. Also note that we will be using some plain old Unix shell scripting; hopefully you have a basic understanding of that.



1. Basic Configurations & Commons


1.1 Variables / Constants


We are going to set a few variables to use later in our script as configuration values. Create a new shell script with the following code:

NB: You can name your script anything you prefer; for example, I named mine docker-mssql.sh
DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=${0:a:h} 

# Change these for your environment
MSSQL_HOME_DIR="/var/opt/mssql/data"
MSSQL_CONTAINER_NAME="mssql-server-2017"
MSSQL_SERVER_IP="localhost"
MSSQL_SERVER_PORT="1433"
MSSQL_SA_USERNAME="SA"
MSSQL_SA_PASSWORD="exe1Hou!"


The variable DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH stores the current path of the script; that's the Z Shell way of dynamically resolving the script's current path. If you are not using Z Shell but Bash, then you can do the same as follows:
NB: Make sure that you type all this in one line.
DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=$(cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd)

The next step is to identify the possible common processes. For example, when backing up or restoring database backups we need to copy files to and from the Docker container, so we need a way of copying. We can write two functions to help us: one will copy To the container and one will copy From the container. We will tailor these functions strictly for this Docker container, and they will make use of the Docker copy command.



1.2 Common Functions


Add / Implement this function in your script : dockerMSSQLCopyToVM()

function dockerMSSQLCopyToVM() {
    sourceFilePath=$1
    docker cp $sourceFilePath $MSSQL_CONTAINER_NAME:$MSSQL_HOME_DIR
}

This function takes one argument, at $1; that's the standard shell way of getting parameters. Unlike Java, shell parameters are not zero-based, so they start at "1". This argument should be the path to a file, in this case your backup file. We take this argument and assign it to a new variable, $sourceFilePath, which gives it more meaning. Now that everything is in place, we call the "docker cp" command to copy the given file to the Docker container. Following the Docker conventions, when you copy to a container you need to specify {container-name}:{path-inside-the-container}. You will notice that to achieve this we are using the configured variable(s) we set up in section "1.1".
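A quick usage example (the backup file name here is made up for illustration):

dockerMSSQLCopyToVM $HOME/Downloads/AlimdaadSystem_2020-03-12.bak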


Add / Implement this function in your script : dockerMSSQLCopyFromVM()


function dockerMSSQLCopyFromVM() {
    fileName=$1
    destinationFilePath=$2
    docker cp $MSSQL_CONTAINER_NAME:$MSSQL_HOME_DIR"/"$fileName $destinationFilePath
}

In this function we expect two arguments or parameters. The first, at $1, is the name of the file you want to copy; this will be a file that's on your Docker container, and you want to copy it from the container to your machine. The second, at $2, is the destination path you want to copy it to.
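Again, a quick usage example (made-up file name), copying a backup from the container into your Downloads folder:

dockerMSSQLCopyFromVM AlimdaadSystem_2020-03-12.bak $HOME/Downloads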


Add / Implement this function in your script : dockerRemoveFileFromContainer()

function dockerRemoveFileFromContainer() {
    pathOfFileToRemove=$1
    docker exec $MSSQL_CONTAINER_NAME rm -rf $pathOfFileToRemove
}

A reminder that after backing up and restoring we also want to remove the backup files from our Docker container in order to free up space on the container; there's no use keeping a file there if you already have it on your host machine. This function expects one argument, which is the path of the file you want to remove. To execute a command against a running Docker container we use the "docker exec" command; in this case we run the Unix rm command to "remove" the file.
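Note that the function takes just the one path argument, for example (made-up file name again):

dockerRemoveFileFromContainer $MSSQL_HOME_DIR/AlimdaadSystem_2020-03-12.bak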

 

 

2. Backing Up Your Database

 

2.1 Function Skeleton

In this section we can now look into the core functionality: the backup process. We will build it up step by step and then revise the complete function later. For now, create an empty function; I created mine as follows :

function dockerMSSQLBackupDatabase() {
    
}


2.2 Gathering Backup Process Information

Thinking about this some more, we need to take in some information in the form of parameters / arguments. I imagine we will need to know the name of the "database" that you want to back up, and maybe also a "backup file name" prefix for the result, so that you can identify the file quickly when you are done.

So let's say, for our example, that to make this more verbose than just a file name and extension, we also want to see the date we created the backup file, achieving a format like FileNamePrefix_2020-03-02.bak. Keep this in mind, because we will be using that format. Go to your new empty function and add the following code.

databaseName=$1                                   
backupDate=$(date +%F)                             
fileName=$2"_"$backupDate".bak" 
completeBackupFilePath=$MSSQL_HOME_DIR"/"$fileName

  • By now you should be able to follow our parameter positions: in the first argument we expect the name of an existing database on the server, the one you want to back up. We assign it to the meaningful name databaseName.
  • We then use the shell date command to get the current date in a specific format; +%F formats the date as YYYY-MM-DD, which is what we need to make up our desired file name (see the quick demo right after this list).
  • From there we put everything together, including argument 2, where we expect the file name prefix. Ultimately we join / concatenate all of this to make up our file name, which is assigned to a variable named fileName.
  • Finally, we build the full destination file path, where the file will be backed up, using our variable / config $MSSQL_HOME_DIR joined with the fileName variable.
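For instance, running the date formatting on its own in a terminal gives something like this (the output is whatever day you run it, of course):

date +%F        # prints, for example: 2020-03-12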

At this stage we have our information and we can now start the actual work: connecting to our container and logging in to the SQL service so that we can run the SQL BACKUP statement.


2.3 Docker & SQL Login To Execute Backup Command

It's now time to revisit "docker exec" to get into our container, so that we can log in to the SQL service and run the backup against it. This is the core of our backup process.

docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
    -S $MSSQL_SERVER_IP \
    -U $MSSQL_SA_USERNAME \
    -P $MSSQL_SA_PASSWORD \
    -Q '
BACKUP DATABASE '${databaseName}'
TO DISK = '\'$completeBackupFilePath\''
WITH FORMAT, INIT,
MEDIANAME = '\'$databaseName\'' ,
NAME = '\'$databaseName\'',
SKIP, REWIND, NOUNLOAD,  STATS = 10'

  • There we go with the exec command to get into our container. We are also using our constant $MSSQL_CONTAINER_NAME to specify the container name.
  • Then we have the command /opt/mssql-tools/bin/sqlcmd, which is used to talk to the actual SQL service, followed by parameters with options for what we are trying to do; in this case we want to log in so that we can execute a SQL command to run the backup process. -S $MSSQL_SERVER_IP specifies the IP of the SQL service we want to log into; -U $MSSQL_SA_USERNAME is the username for the database; -P $MSSQL_SA_PASSWORD is the password for that user; and finally -Q '...' carries our SQL queries & statements. In this case you will notice that we are using the SQL BACKUP DATABASE command, joined up with our local function variables to make up a complete backup statement.
  • One small thing you are probably wondering about is the backslashes in our command. That's just the Unix shell way of breaking a command that's on a single line into multiple lines.
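If you want to sanity-check these connection details before wiring them into a bigger function, you can fire a one-off query in exactly the same way (this assumes the variables from section 1.1 are loaded in your shell):

docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
    -S $MSSQL_SERVER_IP \
    -U $MSSQL_SA_USERNAME \
    -P $MSSQL_SA_PASSWORD \
    -Q 'SELECT @@VERSION'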

2.4 Cleanup After Backing Up

What happens when the backup is done? Remember that we have some utility functions we can use, so we can now clean up nicely. This way we get to manage our backup files much more easily, and we also clear out the container and free up some space. To do that, first we copy the file from the container to our desired path. In my case I am copying the file to my Downloads folder, as follows :


dockerMSSQLCopyFromVM $fileName $HOME/Downloads

The next step is to remove the file, as follows :


dockerRemoveFileFromContainer $completeBackupFilePath


2.5 Backup Process Conclusion

We are going to do some touch ups on backing up. This will give us an idea of the whole picture.



function dockerMSSQLBackupDatabase() {

    if [[ -n $1 && -n $2 ]]; then
        databaseName=$1                                    
        backupDate=$(date +%F)                             
        fileName=$2"_"$backupDate".bak"                    
        completeBackupFilePath=$MSSQL_HOME_DIR"/"$fileName

        echo "[ Docker Backup ] Backing up your file to $completeBackupFilePath ..."
        docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
            -S $MSSQL_SERVER_IP \
            -U $MSSQL_SA_USERNAME \
            -P $MSSQL_SA_PASSWORD \
            -Q '
        BACKUP DATABASE '${databaseName}'
        TO DISK = '\'$completeBackupFilePath\''
        WITH FORMAT, INIT,
        MEDIANAME = '\'$databaseName\'' ,
        NAME = '\'$databaseName\'',
        SKIP, REWIND, NOUNLOAD,  STATS = 10'

        echo "[ Docker Backup ] Downloading your $fileName ... "
        dockerMSSQLCopyFromVM $fileName $HOME/Downloads

        dockerRemoveFileFromContainer $completeBackupFilePath
        echo "[ Docker Backup ] Backup complete!"
    else
        echo "[ERROR] Please pass parameters (arg1) = [DatabaseName] and also the (arg 2) = [FileNamePrefix]"
    fi
}

Here we go; your function should look something like this, except for the "if" and the "echo" parts of the code, which I will explain shortly. The "if" part is just a small validation to check that we actually get the parameters we expect, and that's it. We know that we expect two arguments, therefore we are simply checking that both are supplied by the caller; you can just wrap your code in the "if" statement. The other thing, with the "echo" lines, is that we are just printing out the progress as we go.
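As a side note, a slightly stricter variant of that guard, an alternative sketch rather than what the function above does, is to count the arguments explicitly:

if [[ $# -ne 2 ]]; then
    echo "Usage: dockerMSSQLBackupDatabase <DatabaseName> <FileNamePrefix>"
    return 1
fi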

Open up your terminal window and import your script that has all this implementation as follows.


source ~/workspace/informing-techies/docker-mssql-operations/docker-mssql.sh


After that you can try running your new Docker backup function; type something like:


dockerMSSQLBackupDatabase AlimdaadSystem MyOwnPrefix

Sample : These are the results of an expected successful run of your new function.


Take note of my backup file name after typing that command, MyOwnPrefix_2020-03-12.bak; finally we see it in action, and it makes more sense. So that's it for backing up Docker MSSQL Server databases. You can play with the logic some more and add error handling and so on.

 

 

3. Restoring Your Database

 

3.1 Restore Challenge

By now you have an idea of what's going on, and there will be a lot of familiar code, so instead of breaking it up into pieces like we did previously, I am just going to leave the rest of the code as a little fun challenge for you: see if you can understand what's going on.




function dockerMSSQLRestoreDatabase() {

    if [[ -n $1 && -n $2 ]]; then
        databaseName=$1     
        backupFilePath=$2   
        backupFileName=$2:t 

        echo "[ Docker Restore ] Copying your file $backupFileName over to your container ..."
        dockerMSSQLCopyToVM $backupFilePath

        dockerBackupFilePath=$MSSQL_HOME_DIR"/"$backupFileName
        echo "[ Docker Restore ] Now restoring from $dockerBackupFilePath ..."

        docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
            -S $MSSQL_SERVER_IP \
            -U $MSSQL_SA_USERNAME \
            -P $MSSQL_SA_PASSWORD \
            -Q '
            RESTORE DATABASE '${databaseName}'
            FROM DISK = '\'$dockerBackupFilePath\''
            WITH REPLACE,
            NOUNLOAD,
            STATS = 5'

        echo "[ Docker Restore ] Removing your backup from the container "
        dockerRemoveFileFromContainer $dockerBackupFilePath
        echo "[ Docker Restore ] Restore complete!"
    else
        echo "[ERROR] Please pass parameters (arg1) = [DatabaseName] and also the (arg 2) = [/backup/file/fle/path.bak]"
    fi
}


And that's it! The idea is that in order to get the actual file we want to restore, we just copy it over to the Docker container. From there we execute the SQL RESTORE command, using the copied file. When the restore process is complete, we remove the file, like we previously did in our backup process.

A little new trick in this section, as you may have noticed, is the line backupFileName=$2:t. This takes the supplied full path and extracts just the file name and its extension. That's all.
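As with the script path back in section 1.1, the :t modifier is Z Shell specific; if you are on Bash, an equivalent (a sketch with the same behavior) would be:

backupFileName=$(basename "$backupFilePath")

Now let's test the restore process, and then we can wrap up this exercise.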


dockerMSSQLRestoreDatabase AlimdaadSystem $HOME/Downloads/MyOwnPrefix_2020-03-12.bak

Sample : These are the results of an expected successful run of your new function.


The image above should give you an idea of the expected positive results of your process. And that's it; you may now log in to your database server to double-check your data and see that it's been restored to the previous state.



Conclusion

If you take the time with this, you can improve it big time: more error handling and code refactoring. This is not the only way you can achieve this, but it's one of the ways, and it works very well for local development.


Leave some comments below, and you may check out the GitHub Source Code to validate the steps.