Friday, March 13, 2020

Docker Operations For MSSQL Server Using Unix Shell



Context


I am currently working on a piece of software inherited from another organization, and one of the technologies it uses is MSSQL Server 2017. We now work mostly on Unix-based OSes, and you can imagine that was a close call, given that for the longest time SQL Server ran only on Windows. Starting with SQL Server 2017, there are Linux variants, but there is still no version for macOS. This means that if you are on macOS, one way to get the database service up and running is Docker: all non-Linux hosts run your Docker containers inside a Linux VM, so we can set up the database service inside a Linux container running in Docker.

That's fine; we worked with this idea when we took over the project mentioned above, and the setup works nicely. But one day our team was discussing the pain of backing up and restoring on an MSSQL Server service running in Docker. Some of the challenges include:



Restoration Process

- Having to copy your database backup file over to the Docker container.
- Using a client app to locate the file.
- Setting up the restoration options.
- After restoration, you may want to delete the file from your container to save space and avoid duplicate files.



Backup Process

- You back up to the default Docker container directory.
- You must copy the file from the Docker container for safekeeping.
- After you copy it, you may want to remove it from the container to save space.



These can be tiresome processes when you have to perform them on a daily basis. Scripting also encourages configuration and automation of various processes, and, as much as there are many ways of solving a problem, I will share one of the ways we tackled these problems to assist the development team. We will start with the basic configuration, then the backup process, and then move on to the restore process. Note that we will be using some plain old Unix shell scripting; hopefully you have a basic understanding of that.



1. Basic Configurations & Commons


1.1 Variables / Constants


We are going to set a few variables to use later in our script as configuration values. Create a new shell script with the following code:

NB: You can name your script anything you prefer; for example, I named mine docker-mssql.sh 
DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=${0:a:h} 

# Change these for your environment
MSSQL_HOME_DIR="/var/opt/mssql/data"
MSSQL_CONTAINER_NAME="mssql-server-2017"
MSSQL_SERVER_IP="localhost"
MSSQL_SERVER_PORT="1433"
MSSQL_SA_USERNAME="SA"
MSSQL_SA_PASSWORD="exe1Hou!"


The variable DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH stores the current path of the script; ${0:a:h} is the Zsh way of dynamically resolving it. If you are using Bash rather than Zsh, you can achieve the same as follows: 
NB: Make sure that you type all of this on one line.
DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=$(cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd)

The next step is to identify the common processes. When backing up or restoring database backups, we need to copy files to and from the Docker container, so we need a way of copying. We can write two helper functions: one will copy to the container and one will copy from it. We will tailor these functions strictly for this Docker container, and both will make use of the Docker copy command. 



1.2 Common Functions


Add / Implement this function in your script : dockerMSSQLCopyToVM()

function dockerMSSQLCopyToVM() {
    sourceFilePath=$1
    docker cp "$sourceFilePath" "$MSSQL_CONTAINER_NAME:$MSSQL_HOME_DIR"
}

This function takes one argument at $1, the standard shell way of reading positional parameters. Unlike Java arrays, these are not zero-based: $0 holds the script name and the actual parameters start at $1. The argument should be the path to a file, in this case your backup file. We assign it to a new variable, sourceFilePath, to give it more meaning. With everything in place, we call the docker cp command to copy the given file into the Docker container. Per the Docker conventions, when you copy to a container you specify the destination as {container-name}:{path-inside-the-container}. You will notice that to achieve this we use the variables configured in section 1.1.


Add / Implement this function in your script : dockerMSSQLCopyFromVM()


function dockerMSSQLCopyFromVM() {
    fileName=$1
    destinationFilePath=$2
    docker cp "$MSSQL_CONTAINER_NAME:$MSSQL_HOME_DIR/$fileName" "$destinationFilePath"
}

In this function we expect two arguments. The first, at $1, is the name of the file you want to copy; this is the file on your Docker container that you want to bring over to your machine. The second, at $2, is the destination path to copy it to. 


Add / Implement this function in your script : dockerRemoveFileFromContainer()

function dockerRemoveFileFromContainer() {
    pathOfFileToRemove=$1
    docker exec $MSSQL_CONTAINER_NAME rm -f "$pathOfFileToRemove"
}

A reminder that after backing up and restoring, we also want to remove the backup files from our Docker container to free up space; there is no use keeping a file there once you have it on your host machine. The function expects one argument: the path of the file to remove. To execute a command against a running Docker container we use docker exec; in this case we run the Unix rm command to remove the file.

 

 

2. Backing Up Your Database

 

2.1 Function Skeleton

In this section we get to the core functionality: the backup process. We will build it up step by step and then review the complete function. For now, create an empty function; I created mine as follows:

function dockerMSSQLBackupDatabase() {
    
}


2.2 Gathering Backup Process Information

Thinking about this more, we need to take in some information in the form of parameters. We will need the name of the database you want to back up, and perhaps a backup file name prefix so that you can identify the resulting file quickly when you are done. 

So let's say, for our example, that to make this more descriptive than just a file name and extension, we also want the backup file name to carry the date it was created, giving a format like FileNamePrefix_2020-03-02.bak. Keep this in mind, because we will be using that format. Go to your new empty function and add the following code.

databaseName=$1                                   
backupDate=$(date +%F)                             
fileName=$2"_"$backupDate".bak" 
completeBackupFilePath=$MSSQL_HOME_DIR"/"$fileName

  • By now you should be able to follow our parameter positions: in the first argument we expect the name of an existing database on the service. This is the database you want to back up, and we assign it to the meaningful name databaseName. 
  • We then use the shell's date command to get the current date in a specific format; +%F formats the date as YYYY-MM-DD, which is what we need to make up our desired file name.
  • From there we put everything together, including argument 2, where we expect the file name prefix. We concatenate all of this to make up our file name format and assign it to a variable named fileName.
  • Finally, we build the full destination file path where the file will be backed up, joining our $MSSQL_HOME_DIR config variable with the fileName variable.
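The steps above can be sketched as a standalone snippet; the two values at the top are hypothetical stand-ins for the function arguments $1 and $2:

```shell
# Hypothetical stand-ins for the function arguments $1 and $2
databaseName="AlimdaadSystem"
fileNamePrefix="MyOwnPrefix"

backupDate=$(date +%F)                                # YYYY-MM-DD, e.g. 2020-03-02
fileName="${fileNamePrefix}_${backupDate}.bak"        # e.g. MyOwnPrefix_2020-03-02.bak
completeBackupFilePath="/var/opt/mssql/data/${fileName}"
echo "$completeBackupFilePath"
```

Running this prints the full in-container destination path, so you can sanity-check the format before wiring it into the function.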

At this stage we have our information and can start the actual work: connecting to our container and logging in to the SQL service so that we can run the SQL backup statement. 


2.3 Docker & SQL Login To Execute Backup Command

It's now time to revisit docker exec to get into our container, log in to the SQL service, and run the backup statement against it. This is the core of our backup process.

docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
    -S $MSSQL_SERVER_IP \
    -U $MSSQL_SA_USERNAME \
    -P $MSSQL_SA_PASSWORD \
    -Q '
BACKUP DATABASE '${databaseName}'
TO DISK = '\'$completeBackupFilePath\''
WITH FORMAT, INIT,
MEDIANAME = '\'$databaseName\'' ,
NAME = '\'$databaseName\'',
SKIP, REWIND, NOUNLOAD,  STATS = 10'

  • There we go with the exec command to get into our container, using our constant $MSSQL_CONTAINER_NAME to specify the container name. 
  • Then we have /opt/mssql-tools/bin/sqlcmd, the command-line tool for talking to the SQL service, followed by options describing what we are trying to do; in this case we want to log in so that we can execute a SQL command to run the backup. -S $MSSQL_SERVER_IP specifies the IP of the SQL service we want to log in to; -U $MSSQL_SA_USERNAME is the database username; -P $MSSQL_SA_PASSWORD is the password for that user; and finally -Q '...' carries our SQL statement. You will notice that we are using the SQL BACKUP command, joined up with our local function variables to make a complete backup statement.
  • One small thing you are probably wondering about is the backslashes in the command. That's just the Unix shell way of breaking a single-line command across multiple lines.
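The single-quote concatenation in the -Q argument can be hard to read, so here is the same pattern captured into a plain variable (with hypothetical values) so you can inspect what actually gets sent to sqlcmd:

```shell
# Hypothetical values standing in for the function's variables
databaseName="AlimdaadSystem"
completeBackupFilePath="/var/opt/mssql/data/MyOwnPrefix_2020-03-12.bak"

# Same single-quote / escaped-quote concatenation used in the -Q argument above,
# shortened to one line: the \' pieces become literal quotes around the path.
sql='BACKUP DATABASE '${databaseName}' TO DISK = '\'$completeBackupFilePath\'''
echo "$sql"
# prints: BACKUP DATABASE AlimdaadSystem TO DISK = '/var/opt/mssql/data/MyOwnPrefix_2020-03-12.bak'
```

The takeaway is that the shell stitches the quoted fragments and the expanded variables into one SQL string before sqlcmd ever sees it.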

2.4 Cleanup After Backing Up

What happens when the backup is done? Remember that we have some utility functions we can use, so we can now clean up nicely. This way we can manage our backup files much more easily, and we also clear out the container to free up some space. First we copy the file from the container to our desired path; in my case I am copying it to my Downloads folder, as follows:  


dockerMSSQLCopyFromVM $fileName $HOME/Downloads

The next step is to remove the file, as follows :


dockerRemoveFileFromContainer $completeBackupFilePath


2.5 Backup Process Conclusion 

Now let's do some touch-ups and put the whole backup function together, which gives us the complete picture.



function dockerMSSQLBackupDatabase() {

    if [[ -n $1 && -n $2 ]]; then
        databaseName=$1                                    
        backupDate=$(date +%F)                             
        fileName=$2"_"$backupDate".bak"                    
        completeBackupFilePath=$MSSQL_HOME_DIR"/"$fileName

        echo "[ Docker Backup ] Backing up your file to $completeBackupFilePath ..."
        docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
            -S $MSSQL_SERVER_IP \
            -U $MSSQL_SA_USERNAME \
            -P $MSSQL_SA_PASSWORD \
            -Q '
        BACKUP DATABASE '${databaseName}'
        TO DISK = '\'$completeBackupFilePath\''
        WITH FORMAT, INIT,
        MEDIANAME = '\'$databaseName\'' ,
        NAME = '\'$databaseName\'',
        SKIP, REWIND, NOUNLOAD,  STATS = 10'

        echo "[ Docker Backup ] Downloading your $fileName ... "
        dockerMSSQLCopyFromVM $fileName $HOME/Downloads

        dockerRemoveFileFromContainer $completeBackupFilePath
        echo "[ Docker Backup ] Backup complete!"
    else
        echo "[ERROR] Please pass parameters (arg1) = [DatabaseName] and also the (arg 2) = [FileNamePrefix]"
    fi
}

Here we go: your function should look something like this, apart from the "if" and the echo parts, which I will explain shortly. The "if" is a small validation checking that the caller supplied both arguments we expect; we simply wrap the code in it. The echo lines just print out the progress as we go.
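One subtlety worth flagging: with ||, the check passes when only one of the two arguments is supplied; requiring both calls for &&. A minimal POSIX-sh sketch of the check (requireTwoArgs is a hypothetical name, not part of the script above):

```shell
# Hypothetical helper demonstrating the two-argument validation (POSIX sh)
requireTwoArgs() {
    if [ -n "$1" ] && [ -n "$2" ]; then
        echo "ok"
    else
        echo "[ERROR] expected two arguments"
    fi
}

requireTwoArgs AlimdaadSystem MyOwnPrefix   # prints: ok
requireTwoArgs AlimdaadSystem               # prints the error message
```

The same pattern applies to the restore function later on.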

Open up your terminal window and import your script that has all this implementation as follows.


source ~/workspace/informing-techies/docker-mssql-operations/docker-mssql.sh


After that, you can try running your new Docker backup function by typing something like: 


dockerMSSQLBackupDatabase AlimdaadSystem MyOwnPrefix

Sample: these are the results of an expected successful run of your new function; click on the image to zoom in.


Take note of my backup file name after typing that command, MyOwnPrefix_2020-03-12.bak; finally we see it in action and it makes more sense. That's it for backing up Docker MSSQL Server databases. You can play with the logic further and add some error handling.

 

 

3. Restoring Your Database

 

3.1 Restore Challenge

By now you have an idea of what's going on, and there will be a lot of familiar code, so instead of breaking it up into pieces like we did previously, I am going to leave the rest of the code as a little challenge: see if you can understand what's going on.




function dockerMSSQLRestoreDatabase() {

    if [[ -n $1 && -n $2 ]]; then
        databaseName=$1     
        backupFilePath=$2   
        backupFileName=$2:t 

        echo "[ Docker Restore ] Copying your file $backupFileName over to your container ..."
        dockerMSSQLCopyToVM $backupFilePath

        dockerBackupFilePath=$MSSQL_HOME_DIR"/"$backupFileName
        echo "[ Docker Restore ] Now restoring from $dockerBackupFilePath ..."

        docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
            -S $MSSQL_SERVER_IP \
            -U $MSSQL_SA_USERNAME \
            -P $MSSQL_SA_PASSWORD \
            -Q '
            RESTORE DATABASE '${databaseName}'
            FROM DISK = '\'$dockerBackupFilePath\''
            WITH REPLACE,
            NOUNLOAD,
            STATS = 5'

        echo "[ Docker Restore ] Removing your backup from the container "
        dockerRemoveFileFromContainer $dockerBackupFilePath
        echo "[ Docker Restore ] Restore complete!"
    else
        echo "[ERROR] Please pass parameters (arg1) = [DatabaseName] and also the (arg 2) = [/backup/file/fle/path.bak]"
    fi
}


And that's it! The idea is: to get the actual file we want to restore, we copy it over to the Docker container; from there we execute the SQL RESTORE command using the copied file; and when the restore process is complete, we remove the file, just like we did in the backup process. 

A little new trick in this section, as you will have noticed, is the line backupFileName=$2:t. The :t modifier is Zsh's way of taking the supplied full path and extracting just the file name with its extension. That's all. Now let's test the restore process and then wrap up this exercise.
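Since :t is Zsh-specific, here is the portable equivalent using basename, which does the same job in Bash or plain sh (the path is a hypothetical example):

```shell
# Hypothetical backup path, as the restore function would receive in $2
backupFilePath="$HOME/Downloads/MyOwnPrefix_2020-03-12.bak"

# Bash/POSIX equivalent of Zsh's $2:t - strip the directory, keep name + extension
backupFileName=$(basename "$backupFilePath")
echo "$backupFileName"   # prints: MyOwnPrefix_2020-03-12.bak
```

If you adapt the script for Bash, swap the :t line for this basename call.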


dockerMSSQLRestoreDatabase AlimdaadSystem $HOME/Downloads/MyOwnPrefix_2020-03-12.bak

Sample: these are the results of an expected successful run of your new function; click on the image to zoom in.


The image above should give you an idea of the expected positive results of the process. And that's it! You may now log in to your database server to double-check your data and confirm it has been restored to its previous state. 



Conclusion

If you take the time, you can improve this a great deal: more error handling, some code refactoring. This is not the only way to achieve this, but it is one of the ways, and it works very well for local development.


Leave some comments below, and you may check out the GitHub Source Code to validate the steps.




Saturday, March 7, 2020

Java EE Micro Services Using Thorntail



Context


The software development world is growing every minute, and as it evolves, systems are being broken down and refined into microservices. This has also inspired the likes of Spring Boot and many other frameworks.

I have realised that when it comes to enterprise software, most Java developers are using either Spring or Java EE / Jakarta EE. For those using Spring, we have Spring Boot to help with fast project kick-off and development; beyond that, some use it for the microservice pattern. But what happens when you as a developer want to use Java EE? Spring Boot may not be what you are looking for, and perhaps you want something along the lines of WildFly.

We will have a look at a young project along these lines: a sidecar project supporting WildFly that lets you deconstruct the application server and paste just enough of it back together with your application to create a self-contained executable jar. This project was formerly known as "WildFly Swarm" and is now known as Thorntail. I mentioned Spring Boot, so you can imagine where this is going. 



1. Assumptions


Now remember that we spoke about microservices, so the best use case for this article would be something along the lines of an API. In this case we will create a RESTful API. We assume that you have an understanding of the following technologies and tools:

  • WildFly AS
  • Java 8 
  • Maven
  • JAX-RS


2. Project Setup Using Maven


Let's set up the project using Maven as our build tool and grab the WildFly fractions (dependencies) we need. Worry not, we will cover a little about fractions as we go. We will start off with some basic property settings; these can be anything you prefer, so feel free to change them. The one important setting is the packaging: since this will be a web application we want to package it as a ".war" file, so keep that one as it is in the example.


<groupId>za.co.anylytical.showcase</groupId>
<artifactId>thorntail-rest-services</artifactId>
<name>Thorntail REST API Showcase</name>
<version>1.0.0-SNAPSHOT</version>
<packaging>war</packaging>


Next we have more properties, mainly for our versions. We are also telling Maven that our Java version is 8. At the time of writing [07-03-2020] the latest version of Thorntail is 2.6.0.Final, so that is the version we will use. As of JBoss AS 7.x we no longer really need a web.xml, so the <failOnMissingWebXml> property is there to make sure the build doesn't fail, since we will not be using a web.xml file in this example. 


<properties>
    <version.thorntail>2.6.0.Final</version.thorntail>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>


Maven, being a strong dependency management tool, has a very useful feature in the BOM (Bill of Materials), which gives us access to a curated collection of dependencies from which we can hand-pick the ones our app needs. In this case we import the Thorntail BOM and then choose the dependencies we need for our JAX-RS API.


<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.thorntail</groupId>
            <artifactId>bom-all</artifactId>
            <version>${version.thorntail}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>


There's a Thorntail Maven plugin available that makes packaging our application much easier. Let's set that up next.


<build>
    <finalName>thorntail-api</finalName>
    <plugins>
        <plugin>
            <groupId>io.thorntail</groupId>
            <artifactId>thorntail-maven-plugin</artifactId>
            <version>${version.thorntail}</version>
            <executions>
                <execution>
                    <goals>
                        <goal>package</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>


Earlier in the article we mentioned "fractions". In Thorntail, a fraction can include all or none of the WildFly components; ultimately, a fraction contributes configuration or capabilities to a runtime system. The name comes from the fact that we load only the smaller pieces we need, thus, fractions. In our case these fractions take the form of Maven dependencies. Let's go shopping for the ingredients that make up our REST service API.


<dependencies>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>swagger</artifactId>
    </dependency>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>jaxrs</artifactId>
    </dependency>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>swagger-webapp</artifactId>
    </dependency>
</dependencies>


That's it for the project setup with Maven. As you can see, the fractions we are pulling in are very basic and specific; no need for the entire heavyweight WildFly. We have hand-picked only Swagger, together with its own web application piece for the web UI, and finally the core piece we need for our REST API: JAX-RS. At this point we are ready for some code implementation. 



3. Java REST Resource Implementation Using JAX-RS API


Let's now create a Rest resource implementation using Java and JAX-RS API. 

package za.co.anylytical.showcase.rest;
 
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
 
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
 
@Path("/text")
@Api( value = "/text", tags = "Text API")
@Produces( MediaType.APPLICATION_JSON)
public class SampleResource {

    @GET
    @ApiOperation(
            value = "Gets a text message",
            notes = "Returns the message as HTTP Resposne",
            response = Response.class)
    @Produces( MediaType.APPLICATION_JSON)
    public Response get() {
        return Response.status(Response.Status.OK)
                .entity( "Hey, there, you requested for text?")
                .build();
    }
}


According to the JAX-RS standard, the next step is to include a class extending javax.ws.rs.core.Application to define the REST @ApplicationPath and register our REST resources. 


package za.co.anylytical.showcase.config;
 
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;
 
@ApplicationPath("/")
public class ResourceConfig extends Application {
 
}


We will be adding the second last piece to our implementation before we can test it out. We will include a class implementing the ContainerResponseFilter interface, which works like a filter for the ContainerResponse extension point on the server side, in order to filter the response message after the invocation has executed:


package za.co.anylytical.showcase.filters;
 
import java.io.IOException;
 
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;
 
 
@Provider
public class CORSFilter implements ContainerResponseFilter {
 
    @Override
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException {
        responseContext.getHeaders().add("Access-Control-Allow-Origin", "*");
        responseContext.getHeaders().add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        responseContext.getHeaders().add("Access-Control-Max-Age", "-1");
        responseContext.getHeaders().add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
    }
 
}


Let's configure our WildFly port and context root, because port 8080 is so overrated. We will use Thorntail's equivalent of the Spring Boot application.properties or .yml approach: in your source code resources folder, add a new file named project-defaults.yml

Add the following piece of yml in your file. 


thorntail:
  http:
    port: 8881
  context:
    path: that-service
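Putting these values together with the @Path("/text") resource from section 3, the service ends up listening at a URL shaped like this (localhost is assumed, as in the rest of this article):

```shell
# How the project-defaults.yml values compose into the endpoint URL
port=8881                 # thorntail.http.port
contextPath="that-service" # thorntail.context.path
resourcePath="text"        # from @Path("/text") in SampleResource

url="http://localhost:${port}/${contextPath}/${resourcePath}"
echo "$url"   # prints: http://localhost:8881/that-service/text
```

Keep this composition in mind; it is the URL we will hit when testing in the sections below.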


We are now all set; it's time to build the app, run it, and see. Let's ask Maven to help us bootstrap the REST service.


4. Run The Project Using Maven


Build the app. Make sure you are in the root project folder where you can run your pom file.

$ mvn clean install


Run the application. Remember, there's no standalone application server; we are actually going to execute our ".jar" file, as mentioned at the beginning of this article. Let's get on with it then. 

$ java -jar target/thorntail-api-thorntail.jar

NB: At this point it will boot up very quickly and you should see something along these lines. Just click the image below to enlarge it.


5. Swagger Checkout


Let's check out Swagger for some basic testing and checks. Head over to http://localhost:8881/swagger-ui and you should see a Swagger screen with the default API loaded. 
NB: A quick look at the default swagger home page once you launch it. Just click the image below to enlarge it.


You probably want to check your own API, so while you are on that Swagger screen, insert the following URL into the top text box: http://localhost:8881/that-service/swagger.json and press enter. This will load your API in the web app, and you should be able to see it as shown in the image below.
NB: Web view of what it looks like once you have pointed it to your API. Just click the image below to enlarge it.


Conclusion


So there you go: a standalone service using WildFly and some Java EE, in a similar way to what you would have with Spring Boot. We have learned how to build one, and the nice part was the fractions section, which allowed us to hand-pick the bits of WildFly we want. A full-fledged application server is sometimes unnecessarily bloated and heavy; in case you wondered how to do microservices with Java EE, this is one way to do it.  
Note: Postman example of our API test, in case you were wondering. 



You may also find the GitHub Code Base useful in case you want to double-check our exercise. I hope this has been helpful; leave comments in the section below.