Friday, March 13, 2020

Docker Operations For MSSQL Server Using Unix Shell



Context


I am currently working on a piece of software inherited from another organization, and one of the technologies it uses is MSSQL Server 2017. Our team works mostly on Unix-based OSes, and you can imagine that was a close call, given that for the longest time SQL Server ran on nothing but Windows. Since SQL Server 2017 there have been Linux variants, but there is still no release for Mac OS. So if you are on Mac OS, one way to get the database service up and running is Docker: non-Linux hosts run their Docker containers inside a Linux VM, which means we can host the database service inside a Linux container running in Docker.

That's fine — we made this idea work when we took over the project mentioned above, and the setup runs nicely. But one day our team was discussing the pain of backing up and restoring databases on an MSSQL Server service running in Docker. Some of the challenges include:



Restoration Process

- Having to copy your database backup file over to the Docker container.
- Using a client app to locate the file.
- Setting up the restoration options.
- After restoration, you may need to delete the file from your container to free up space and avoid duplicate files.



Backup Process

- You back up to the default Docker container directory.
- You must copy the file out of the Docker container for safekeeping.
- After copying, you may want to remove it from the container to free up space.



These can be tiresome processes when you have to perform them on a daily basis. The idea of scripting also motivates configuration and automation of various processes, and, as much as there are many ways of solving a problem, I will share one of the ways we tackled these problems to assist the development team. We will start with the basic configuration, then the backup process, and then move on to the restore process. Note that we will be using some plain old Unix shell scripting, so hopefully you have a basic understanding of that.



1. Basic Configurations & Commons


1.1 Variables / Constants


We are going to set a few variables to use later in our script as configuration values. Create a new shell script with the following code:

NB: You can name your script anything you prefer; for example, I named mine docker-mssql.sh
DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=${0:a:h} 

# Change these for your environment
MSSQL_HOME_DIR="/var/opt/mssql/data"
MSSQL_CONTAINER_NAME="mssql-server-2017"
MSSQL_SERVER_IP="localhost"
MSSQL_SERVER_PORT="1433"
MSSQL_SA_USERNAME="SA"
MSSQL_SA_PASSWORD="exe1Hou!"


The variable DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH stores the current path of the script; that's the Zsh way of dynamically resolving the script's location. If you are using Bash instead of Zsh, you can achieve the same as follows:
NB: Make sure that you type all this in one line.
DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=$(cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd)
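
If you would like one script that works under both shells, here is a minimal sketch; it relies on the fact that Zsh sets the ZSH_VERSION variable:

# Resolve the script directory under both Zsh and Bash
if [ -n "$ZSH_VERSION" ]; then
    DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=${0:a:h}
else
    DOCKER_MSSQL_SERVER_SHELL_SCRIPT_PATH=$(cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd)
fi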

The next step is to identify the common processes. When backing up or restoring database backups, we need to copy files to and from the Docker container, so we need a way of copying. We can write two helper functions: one will copy to the container and one will copy from the container. We will tailor these functions strictly for this Docker container, and they will make use of the docker cp command.



1.2 Common Functions


Add / Implement this function in your script : dockerMSSQLCopyToVM()

function dockerMSSQLCopyToVM() {
    sourceFilePath=$1
    # Copy the given file into the container's data directory
    docker cp "$sourceFilePath" "$MSSQL_CONTAINER_NAME:$MSSQL_HOME_DIR"
}

This function takes one argument at $1; that's the standard shell way of reading parameters. Unlike Java, shell parameters are not zero-based, so they start at "1". The argument should be the path to a file — in this case your backup file. We assign it to a new variable, sourceFilePath, to give it more meaning. With everything in place, we call docker cp to copy the given file into the Docker container. Per the Docker conventions, when you copy into a container you specify the destination as {container-name}:{path-inside-the-container}. Notice that we achieve this using the variables we configured in section "1.1".
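
A quick usage example (the backup file name here is hypothetical):

dockerMSSQLCopyToVM $HOME/Downloads/MyOwnPrefix_2020-03-12.bak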


Add / Implement this function in your script : dockerMSSQLCopyFromVM()


function dockerMSSQLCopyFromVM() {
    fileName=$1
    destinationFilePath=$2
    # Copy the named file out of the container's data directory to the given destination
    docker cp "$MSSQL_CONTAINER_NAME:$MSSQL_HOME_DIR/$fileName" "$destinationFilePath"
}

This function expects two arguments or parameters. The first, at $1, is the name of the file you want to copy — the file sitting on your Docker container that you want to pull down to your machine. The second, at $2, is the destination path you want to copy it to.
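
For example (again with a hypothetical file name):

dockerMSSQLCopyFromVM MyOwnPrefix_2020-03-12.bak $HOME/Downloads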


Add / Implement this function in your script : dockerRemoveFileFromContainer()

function dockerRemoveFileFromContainer() {
    pathOfFileToRemove=$1
    # Run rm inside the running container to delete the given file
    docker exec "$MSSQL_CONTAINER_NAME" rm -f "$pathOfFileToRemove"
}

A reminder that after backing up and restoring we also want to remove the backup files from our Docker container to free up space; there's no use keeping a file there once you already have it on your host machine. The function expects one argument: the path of the file you want to remove. To execute a command against a running Docker container we use the docker exec command — in this case to run the Unix rm command that removes the file.
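
Usage looks like this (again, a hypothetical file name):

dockerRemoveFileFromContainer $MSSQL_HOME_DIR/MyOwnPrefix_2020-03-12.bak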

 

 

2. Backing Up Your Database

 

2.1 Function Skeleton

In this section we look into the core functionality: the backup process. We will build it up step by step and then review the complete function later. For now, create an empty function; I created mine as follows:

function dockerMSSQLBackupDatabase() {
    
}


2.2 Gathering Backup Process Information

Thinking about this more, we need to take in some information in the form of parameters / arguments. I imagine we will need the name of the database you want to back up, and perhaps a "backup file name" prefix so that you can identify the resulting file quickly when you are done.

So let's say, for our example, that to make this more verbose than just a file name and extension we also want to see the date the backup was created, giving a format like FileNamePrefix_2020-03-02.bak. Keep this format in mind, because we will be using it. Go to your new empty function and add the following code.

databaseName=$1                                   
backupDate=$(date +%F)                             
fileName=$2"_"$backupDate".bak" 
completeBackupFilePath=$MSSQL_HOME_DIR"/"$fileName

  • By now you should be able to follow our parameter positions: in the first argument we expect the name of an existing database on the service — the database you want to back up. We assign it to the meaningfully named variable databaseName.
  • We then use the date command to get the current date in a specific format; the +%F option formats the date as YYYY-MM-DD, which is what we need to make up our desired file name.
  • From there we put everything together, including argument 2, where we expect the file name prefix. We concatenate all of this to make up our file name format, which is assigned to the variable fileName.
  • Finally, we build up our full destination file path — where the file will be backed up — using our $MSSQL_HOME_DIR config variable joined with the fileName variable. The resulting values are illustrated right after this list.
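
As a concrete illustration, assuming the database AlimdaadSystem, the prefix MyOwnPrefix, and a run date of 2020-03-12, the variables would resolve as follows:

# backupDate             -> 2020-03-12
# fileName               -> MyOwnPrefix_2020-03-12.bak
# completeBackupFilePath -> /var/opt/mssql/data/MyOwnPrefix_2020-03-12.bak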

At this stage we have our information and can start the actual work: connecting to our container and logging into the SQL service so that we can run the SQL BACKUP statement.


2.3 Docker & SQL Login To Execute Backup Command

It's now time to revisit "docker exec" to get into our container, log in to the SQL service, and run the backup statement against it. This is the core of our backup process.

docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
    -S $MSSQL_SERVER_IP \
    -U $MSSQL_SA_USERNAME \
    -P $MSSQL_SA_PASSWORD \
    -Q '
BACKUP DATABASE '${databaseName}'
TO DISK = '\'$completeBackupFilePath\''
WITH FORMAT, INIT,
MEDIANAME = '\'$databaseName\'' ,
NAME = '\'$databaseName\'',
SKIP, REWIND, NOUNLOAD,  STATS = 10'

  • There we go with the exec command to get into our container, using our constant $MSSQL_CONTAINER_NAME to specify the container name.
  • Then we invoke /opt/mssql-tools/bin/sqlcmd, the tool used to talk to the actual SQL service, followed by options for what we are trying to do — in this case logging in so that we can execute a SQL command to run the backup: -S $MSSQL_SERVER_IP specifies the IP of the SQL service we want to log into; -U $MSSQL_SA_USERNAME is the database username; -P $MSSQL_SA_PASSWORD is that user's password; and finally -Q '...' carries our SQL statement. You will notice we are using the SQL BACKUP command, joined up with our local function variables to make a complete backup statement; the assembled result is shown after this list.
  • One small thing you are probably wondering about is the backslashes in the command. That's just the Unix shell way of breaking a single-line command across multiple lines.
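
For clarity, here is roughly the T-SQL statement the shell quoting assembles, using the illustrative values from earlier:

BACKUP DATABASE AlimdaadSystem
TO DISK = '/var/opt/mssql/data/MyOwnPrefix_2020-03-12.bak'
WITH FORMAT, INIT,
MEDIANAME = 'AlimdaadSystem',
NAME = 'AlimdaadSystem',
SKIP, REWIND, NOUNLOAD, STATS = 10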

2.4 Cleanup After Backing Up

What happens when the backup is done? Remember that we have some utility functions we can use, so we can now clean up nicely. This way we manage our backup files much more easily, and we also clear out the container to free up some space. To do that, we first copy the file from the container to our desired path. In my case I am copying it to my Downloads folder, as follows:


dockerMSSQLCopyFromVM $fileName $HOME/Downloads

The next step is to remove the file, as follows :


dockerRemoveFileFromContainer $completeBackupFilePath


2.5 Backup Process Conclusion

We are going to do some touch ups on backing up. This will give us an idea of the whole picture.



function dockerMSSQLBackupDatabase() {

    if [[ -n $1 && -n $2 ]]; then
        databaseName=$1                                    
        backupDate=$(date +%F)                             
        fileName=$2"_"$backupDate".bak"                    
        completeBackupFilePath=$MSSQL_HOME_DIR"/"$fileName

        echo "[ Docker Backup ] Backing up your file to $completeBackupFilePath ..."
        docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
            -S $MSSQL_SERVER_IP \
            -U $MSSQL_SA_USERNAME \
            -P $MSSQL_SA_PASSWORD \
            -Q '
        BACKUP DATABASE '${databaseName}'
        TO DISK = '\'$completeBackupFilePath\''
        WITH FORMAT, INIT,
        MEDIANAME = '\'$databaseName\'' ,
        NAME = '\'$databaseName\'',
        SKIP, REWIND, NOUNLOAD,  STATS = 10'

        echo "[ Docker Backup ] Downloading your $fileName ... "
        dockerMSSQLCopyFromVM $fileName $HOME/Downloads

        dockerRemoveFileFromContainer $completeBackupFilePath
        echo "[ Docker Backup ] Backup complete!"
    else
        echo "[ERROR] Please pass parameters (arg1) = [DatabaseName] and also the (arg 2) = [FileNamePrefix]"
    fi
}

Here we go — your function should look something like this, apart from the "if" and the "echo" parts, which I will explain shortly. The "if" part is just a small validation checking that the two arguments we expect were actually supplied by the caller (note the && — we need both); you simply wrap your code in that statement. The "echo" lines just print out progress as we go.
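
If you call the function without the required arguments, the validation branch simply prints the error message:

dockerMSSQLBackupDatabase
# -> [ERROR] Please pass parameters (arg1) = [DatabaseName] and also the (arg 2) = [FileNamePrefix]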

Open up your terminal window and source the script containing all this implementation, as follows:


source ~/workspace/informing-techies/docker-mssql-operations/docker-mssql.sh


After that you can try running your new Docker backup function; type something like:


dockerMSSQLBackupDatabase AlimdaadSystem MyOwnPrefix

Sample: the results of an expected successful run of your new function.


Take note of my backup file name after typing that command — MyOwnPrefix_2020-03-12.bak. Finally we see it in action, and it makes more sense. So that's it for backing up Docker MSSQL Server databases. You can play more with the logic and add some error handling and so on.

 

 

3. Restoring Your Database

 

3.1 Restore Challenge

By now you have an idea of what's going on, and there will be a lot of familiar code. So instead of breaking it up in pieces like we did previously, I am just going to leave the rest of the code for you as a little fun challenge — see if you can work out what's going on.




function dockerMSSQLRestoreDatabase() {

    if [[ -n $1 && -n $2 ]]; then
        databaseName=$1     
        backupFilePath=$2   
        backupFileName=$2:t 

        echo "[ Docker Restore ] Copying your file $backupFileName over to your container ..."
        dockerMSSQLCopyToVM $backupFilePath

        dockerBackupFilePath=$MSSQL_HOME_DIR"/"$backupFileName
        echo "[ Docker Restore ] Now restoring from $dockerBackupFilePath ..."

        docker exec -it $MSSQL_CONTAINER_NAME /opt/mssql-tools/bin/sqlcmd \
            -S $MSSQL_SERVER_IP \
            -U $MSSQL_SA_USERNAME \
            -P $MSSQL_SA_PASSWORD \
            -Q '
            RESTORE DATABASE '${databaseName}'
            FROM DISK = '\'$dockerBackupFilePath\''
            WITH REPLACE,
            NOUNLOAD,
            STATS = 5'

        echo "[ Docker Restore ] Removing your backup from the container "
        dockerRemoveFileFromContainer $dockerBackupFilePath
        echo "[ Docker Restore ] Restore complete!"
    else
        echo "[ERROR] Please pass parameters (arg1) = [DatabaseName] and also the (arg 2) = [/backup/file/fle/path.bak]"
    fi
}


And that's it! The idea is to take the actual file we want to restore and copy it over to the Docker container. From there we execute the SQL RESTORE command using the copied file. When the restore process is complete, we remove the file just like we did in the backup process.

A little new trick in this section, as you may have noticed, is the line backupFileName=$2:t. The :t modifier is Zsh-specific: it takes the supplied full path and extracts just the file name and its extension. (In Bash you would get the same result as sketched below.) Now let's test the restore process and then wrap up this exercise.
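
Bash has no :t modifier, but the basename command gives the same result:

backupFileName=$(basename "$backupFilePath")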


dockerMSSQLRestoreDatabase AlimdaadSystem $HOME/Downloads/MyOwnPrefix_2020-03-12.bak

Sample: the results of an expected successful run of your new function.


The image above should give you an idea of the expected positive results of your process. And that's it — you may now log into your database server to double-check your data and see that it's been restored to the previous state.



Conclusion

If you take the time, you can improve this a great deal — more error handling, some code refactoring. This is not the only way to achieve this, but it's one of the ways, and it works very well for local development.


Leave some comments below, and you may check out the GitHub Source Code to validate the steps.




Saturday, March 7, 2020

Java EE Micro Services Using Thorntail



Context


The software development world is growing every minute, and as it evolves, systems are becoming more and more normalised and refined into microservices — a trend that has also inspired the likes of Spring Boot and many other frameworks.

I have realised that when it comes to enterprise software, most Java developers are using either Spring or Java EE / Jakarta EE. For those using Spring, we have Spring Boot to help with fast project kick-off and development; many also use it for the microservice pattern or methodology. But what happens when you, as a developer, want to use Java EE? Spring Boot may not be what you are looking for, and perhaps you want something along the lines of WildFly.

We will have a look at a young project along these lines: a sidecar project supporting WildFly that deconstructs the application server and pastes just enough of it back together with your application to create a self-contained executable jar. This project was formerly known as "WildFly Swarm" and is now known as Thorntail. I mentioned Spring Boot, and now you can imagine where this is going.



1. Assumptions


Now remember that we spoke about microservices, so the best use case for this article is something along the lines of an API. In this case we will create a RESTful API. We assume that you have an understanding of the following tech / tools.

  • WildFly AS
  • Java 8 
  • Maven
  • JAX-RS


2. Project Setup Using Maven


Let's set up the project using Maven as our automated build tool, and grab the WildFly fractions (dependencies) we need. Worry not — we will cover a little bit about fractions as we go. We start with some basic properties; these can be anything you prefer, so feel free to change them. The one important setting is the packaging: since this will be a web application, we want to package it as a ".war" file, so keep that one as in the example.


<groupId>za.co.anylytical.showcase</groupId>
<artifactId>thorntail-rest-services</artifactId>
<name>Thorntail REST API Showcase</name>
<version>1.0.0-SNAPSHOT</version>
<packaging>war</packaging>


Next come more of our properties, mainly for versions in this case. We are also telling Maven that our Java version will be 8. At the time of writing [07-03-2020], the latest version of Thorntail is "2.6.0.Final", so that's what we will use. As of JBoss AS 7.x we no longer really need a web.xml, so the <failOnMissingWebXml> property is there to make sure the build doesn't fail, since we will not be using a web.xml file in this example.


<properties>
    <version.thorntail>2.6.0.Final</version.thorntail>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>


Maven, being a very good and strong dependency management tool, has a very cool feature: the BOM (Bill Of Materials), which pretty much gives us access to a curated collection of dependencies from which we can hand-pick the ones our app needs. In this case we import the Thorntail-specific BOM and then choose the dependencies we need for our JAX-RS API.


<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.thorntail</groupId>
            <artifactId>bom-all</artifactId>
            <version>${version.thorntail}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>


There's a Thorntail Maven plugin available that helps us package our application much more easily. Let's set that up next.


<build>
    <finalName>thorntail-api</finalName>
    <plugins>
        <plugin>
            <groupId>io.thorntail</groupId>
            <artifactId>thorntail-maven-plugin</artifactId>
            <version>${version.thorntail}</version>
            <executions>
                <execution>
                    <goals>
                        <goal>package</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>


Earlier in the article we mentioned "fractions". This is a Thorntail concept: a fraction can include all or none of the WildFly components, and ultimately contributes configuration or capabilities to a runtime system. The name comes from the fact that we load in only the smaller pieces we need — thus, fractions. For our current topic, these fractions come in the form of Maven dependencies. Let's go shopping for the ingredients that make up our REST service API.


<dependencies>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>swagger</artifactId>
    </dependency>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>jaxrs</artifactId>
    </dependency>
    <dependency>
        <groupId>io.thorntail</groupId>
        <artifactId>swagger-webapp</artifactId>
    </dependency>
</dependencies>


That's it for the project setup with Maven. As you can see, the fractions we are pulling in are very basic and specific — no need for the entire, heavy WildFly. We have hand-picked only Swagger together with its own web application piece (for the web UI) and, finally, the core piece we need for our REST API: JAX-RS. At this point we are ready for some code implementation.



3. Java REST Resource Implementation Using JAX-RS API


Let's now create a Rest resource implementation using Java and JAX-RS API. 

package za.co.anylytical.showcase.rest;
 
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
 
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
 
@Path("/text")
@Api( value = "/text", tags = "Text API")
@Produces( MediaType.APPLICATION_JSON)
public class SampleResource {

    @GET
    @ApiOperation(
            value = "Gets a text message",
            notes = "Returns the message as HTTP Resposne",
            response = Response.class)
    @Produces( MediaType.APPLICATION_JSON)
    public Response get() {
        return Response.status(Response.Status.OK)
                .entity( "Hey, there, you requested for text?")
                .build();
    }
}


According to the JAX-RS standard, the next step is to include a class extending javax.ws.rs.core.Application to define the @ApplicationPath under which our REST resources are served.


package za.co.anylytical.showcase.config;
 
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;
 
@ApplicationPath("/")
public class ResourceConfig extends Application {
 
}


Now for the second-to-last piece of our implementation before we can test it: a class implementing the ContainerResponseFilter interface, which acts as a filter at the ContainerResponse extension point on the server side, filtering the response message after the invocation has executed:


package za.co.anylytical.showcase.filters;
 
import java.io.IOException;
 
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;
 
 
@Provider
public class CORSFilter implements ContainerResponseFilter {
 
    @Override
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException {
        responseContext.getHeaders().add("Access-Control-Allow-Origin", "*");
        responseContext.getHeaders().add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        responseContext.getHeaders().add("Access-Control-Max-Age", "-1");
        responseContext.getHeaders().add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
    }
 
}


Let's configure our WildFly port and context root — because port 8080 is so overrated — and in doing so look at Thorntail's alternative to Spring Boot's application.properties or .yml setup. In your source code resources folder, add a new file, project-defaults.yml.

Add the following piece of yml in your file. 


thorntail:
  http:
    port: 8881
  context:
    path: that-service


We are now all set; it's time to build our app, run it, and see. Let's ask Maven to help us bootstrap the REST service.


4. Run The Project Using Maven


Build the app. Make sure you are in the root project folder, where your pom file lives.

$ mvn clean install


Run the application — remember, there's no standalone application server, so we are actually just going to execute our ".jar" file, as mentioned at the beginning of this article. Let's get on with it.

$ java -jar target/thorntail-api-thorntail.jar

NB: At this point it will boot up very quickly, and you should see startup output along these lines.
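
If you prefer the command line over a browser, a quick check with curl (a sketch, assuming the port and context root configured above) should return the resource's message:

curl http://localhost:8881/that-service/text
# -> Hey, there, you requested for text?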


5. Swagger Checkout


So let's check out Swagger for some basic testing and checks. Head over to http://localhost:8881/swagger-ui and you should see the Swagger screen with the default API loaded.
NB: A quick look at the default Swagger home page once you launch it.


You probably want to check your own API, so while you are on that Swagger screen, insert the following URL into the top text box — http://localhost:8881/that-service/swagger.json — and press enter. This will load your API in the web app, as shown in the image below.
NB: Web view of what it looks like once you have pointed it to your API.


Conclusion


So there you go: a standalone service using WildFly and some Java EE, in a similar way to what you would have with Spring Boot. We have learned how to build one, and the nice part was the fractions mechanism, which let us hand-pick the bits of WildFly we want. A full-fledged application server is sometimes unnecessarily bloated and heavy; in case you wondered how to do Java EE microservices, this is one way.
Note: Postman example of our API test, in case you were wondering.



You may also find the GitHub Code Base in case you want to double-check our exercise. I hope this has been helpful — leave comments in the section below.



Wednesday, January 22, 2020

Setting Up Custom PowerShell Scripts On Terminal Load

 

Context


I am used to having my own shell functions load automatically when I open my terminal window, to perform various operations on a daily basis. In other words, I prefer having my own shortcuts — functions or aliases — for the commands I need and use every day. If you want to know more about the reasoning behind this, you may refer to this older article.

One day I landed in an organization where the team preferred Windows over other OSes. If you have read the article above, you will understand that this was a different ball game for me, and obviously I wanted the same effect on Windows. Yes, with Windows 10 one can set up Bash — but then how would I ever learn something new, in this case PowerShell?

Oh well ... 


Let's Get Started


We will walk step by step through setting up your own custom PowerShell functions, which will load automatically every time you open your PowerShell terminal. This simplifies working with those long, complex commands you probably need every day.

 

1. Locate Or Create The Default Folder

  • Open up Windows Explorer and head over to the "Documents" folder.
  • Look for a folder named "WindowsPowerShell". If it's not there, create it — this folder is the home / default directory for your PowerShell profile, so make sure you use that exact name.
  • Open the folder. If you did not have to create it, it will probably already contain a subfolder named "Scripts", which you can ignore for now.





2. Creating Your Custom PowerShell Script / Function

  • While you're still in that "WindowsPowerShell" folder, create a new PowerShell script file. I created mine with the name and extension Message-Printer.ps1 — you can name it whatever you want, as long as you keep the ".ps1" extension.
  • Open the file in your favourite text editor or IDE. I prefer VS Code for most of my scripting; being a Microsoft editor, it has good PowerShell support out of the box. So go ahead and open the file and edit it as follows:


function showThisMessage() {
    # Print out the message
    Write-Host "Welcome to PowerShell automation!"
 }


  • Ultimately, the following image shows how things should look on your machine at this point. Have a look to confirm, and then we can carry on.




3. Configuring Auto Load Of Your Script  


Each time you open your PowerShell terminal, a session gets created, and that session runs with some sort of profile. When a user has no custom profile set up, the default one is used. We are going to work with the default profile, which is activated when we open the PowerShell terminal.

NB: You may click the following link for more info about PowerShell Profiles. Let's get to work...

  • Create a new file inside your "WindowsPowerShell" folder with the name Microsoft.PowerShell_profile.ps1 — keep the file name exactly like that, since that's the default profile naming.
  • Open up the file in your favourite text editor or IDE.


 # This may display as two lines, but please write it on one line.
 Get-ChildItem -Path $PSScriptRoot\Message-Printer.ps1 | Foreach-Object { . $_.FullName } 

  • For more validation, you may refer to the image below:



  • What's happening in that line is basically a search for our new custom script, "Message-Printer.ps1", which is then imported into your PowerShell session by default whenever a new PowerShell terminal tab is opened. This is the PowerShell syntax for loading other scripts.
  • It works through the Get-ChildItem cmdlet, which retrieves the specified file locations from the file system. The actual path is resolved dynamically via the $PSScriptRoot part of the script, to which we append our file name and extension.
  • Finally, the ForEach-Object cmdlet runs the dot-source operation on each file that matched. A variant that scales to several scripts is sketched after this list.
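
If you later collect several custom scripts next to your profile, a small sketch like this (hypothetical layout, same cmdlets) would load all of them at once while skipping the profile itself:

 # Dot-source every custom .ps1 beside the profile, except the profile itself
 Get-ChildItem -Path $PSScriptRoot\*.ps1 -Exclude 'Microsoft.PowerShell_profile.ps1' |
     ForEach-Object { . $_.FullName }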
 
 


4. Running The Custom Script


Great! Make sure you have saved all your files in your editor or IDE, and then give your script a test.
  • Open up the PowerShell terminal.
  • Type the function you created earlier — just type showThisMessage in your terminal tab, followed by the "return" or "enter" key ... and BOOM! Your text should be displayed as follows:

     

Conclusion

 

So there you go! You now know how to automatically load your custom PowerShell scripts. I hope it was helpful and that many of you will try this method.


Wednesday, June 22, 2016

My Transition To UNIX From DOS: Working With Aliases

Background


It's been fun playing on a Windows machine as a software engineer, and I always find it interesting and fun to explore other things. Funny enough, though, I was always scared of using commands, because I felt I would mess up something on my machine and have to do a factory reset of some sort — always a drag on a work machine. Finally I landed at a place where the recommended OSes were Unix-based, preferably Ubuntu Linux or Macintosh OS. Ultimately I went for the Macintosh OS, just because...

From my first day there I had to learn some basic and common Unix commands, from the likes of tail -f /file/path/here up to ssh username@server.ip.address.here. At some point I got a bit fed up with typing the same long command every day — at times every hour, even. Even though it was great practice, something had to be done, and that's when I learned about Unix aliases. Aliases are pretty much abbreviations of FUCs: "Frequently-Used Commands".

For example, one can shorten a command like defaults write com.apple.finder AppleShowAllFiles NO — which toggles hidden files and folders off in Mac OS's Finder — to something like show-hidden-files, which will essentially be the name, or "alias", of that command.

Now to the "Nuts & Bolts", we will look at the following:
  1. Creating / Adding a new alias
  2. Referencing an external file for aliases.
We will also need some "shell scripting skills", nothing major, very few lines of code so that's all fun and well

1. Creating / Adding a new alias

Perhaps this is slightly different on other Unix platforms, so this is most pertinent to Mac OS. Let's go...!
  • Open Finder and head to your home directory (unless you are in your home directory by default).
  • Look for a file named ".bash_profile" — notice the dot before the file name. If you cannot see hidden files, use the command from the example above (in your terminal), changing that "NO" to "YES". Then hold down the "alt" key, "alt-click (right-click)" the Finder icon, and "left-click" the "Relaunch" option.
  • Go back to looking for our file, and open it in any of your favourite text editors (Sublime Text, Atom, even VI in your terminal, etc.).
  • Let's add a new line — a new alias for any command you want to add, for example:

  alias show_files_no="defaults write com.apple.finder AppleShowAllFiles NO"
  


  • To delineate things a bit more: "alias" is a reserved keyword in shell, so you are letting the shell know that you want to add a new alias. The next word, "show_files_no", can be anything you want — it's the actual name, or alias, of your command. The last part, "defaults write com.apple.finder AppleShowAllFiles NO", is the actual shell command you would normally type and execute in the terminal. We are almost done.
  • If you were editing the file in the terminal with VI, VIM, Nano, etc., the next thing is to reload the ".bash_profile", either by closing and reopening the terminal or by typing the command "bash -l". I believe there are more commands out there one can use; one of them is sketched below.
  • Finally, open the terminal if you closed it. If you reloaded the ".bash_profile" using the command, then just type the alias you recently added — in our case "show_files_no". This will execute the command associated with that alias. To see the effect, just follow the steps above about relaunching Finder. That's it!
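
If you prefer not to restart the shell at all, sourcing the file also reloads it:

  source ~/.bash_profile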

2. Referencing an external file for aliases.

I have recently been playing around with externalising some bash profile stuff, because I have a lot of aliases and my bash profile file was getting too congested. The main things to note here are that you don't have to create your new external file inside the same directory as your bash profile, and the name of your file does not have to start with the word "bash" — I just named it that way for the sake of it! So let's get to it:
  • Create a new file, in my case "~/bashes/.bash_aliases" — a file named ".bash_aliases" inside a new folder, "bashes", that I created in my home directory. Keep in mind that your folder and file can be named anything you want.



  • Now go back to your main file, ".bash_profile", and replace your aliases with the following shell script code:

  #Referencing path to file containing the aliases
  aliasesPath=~/bashes/.bash_aliases 
  if [ -f $aliasesPath ]; then 
     source $aliasesPath
  fi  
  


  • Something along those lines should help you out. It's basically a shell if statement that checks whether that file path exists; if so, we source it from the main ".bash_profile". The next thing to try is to reload like we did earlier — if you did all this in the terminal — or simply close and reopen the terminal.
So now you have externalised the aliases, and you can try the same with other stuff — your environment variable profiles can also be externalised, and so forth. Like I said, this was the first main thing I learned when moving to Unix; I hope it helps someone out there. I would really love some feedback on how I can improve this post. Cheers!