Thursday, November 12, 2020

Bash It Out : Getting & Setting Values In A Properties Config File

 

Background


There are times when developers reach a point where they need some sort of process automation on their machines. This can be anything you can think of. The solution often leads to a collection of scripts that need to be developed for various operations. I was recently in such a case, where I had to work on some scripts to assist the team with setting up certain components of a particular system. One of the processes that interested me was to Get & Set configurations from a Java Properties file.

So, of course one could use Java ( which is heavyweight ) given the config file we were working with, and there are also various Java-based scripting languages like Jython / Groovy / Kotlin Script etc ... In my case I worked with plain old Unix Shell. I promise you it was fun. I really enjoyed it.
 
How about we walk through the steps of how I achieved this? We are going to implement a way to Read "Get" and also a way to Write "Set" configuration values in a .properties file.
 
 
 

Assumptions / Things To Keep In Mind 

In this case we assume you have some base Shell Scripting experience. Meaning by now you know how to create functions, aliases and variables, and you know how to import other scripts from other directories.



What You Need

You need your favourite text editing tool with some base bash / shell plugins or extensions. Nothing major really. Then you can use your terminal for testing purposes. No magic there.



Getting On With It

I am going to start with the "Get" part first and then we can move on to the "Set". Before you start, just create a properties file with some configurations for testing purposes. I named mine "sample-conf.properties" and the contents of the file are as follows :


some.conf.welcome.message = Believe it or not, I have been read from a shell script
some.conf.bashitout = Let's BASH it



Create a new script file and give it a name you prefer, e.g. mine is "properties-file.sh".


"GET" ting Config Values

Great, so now we have our configuration file and would like to get the configurations by their keys as shown above. Let's look at the function we have for this operation.


# Used to extract the value of a specified key with the intention of reading from a ".properties" file.
# Usage : getProp this.property.key /from/this/file/here.properties
function getProp() {
    propToRead=$1
    propFilePath=$2

    # While we read each file line
    while read -r lineItem; do

        # Skip the lines that have been commented out, i.e. all lines that start with the [ # ]
        [[ "$lineItem" =~ ^#.*$ ]] && continue

        # Check if the current line that we are on matches the start of the property we want to read.
        # If the start of the strings match then we know this is the config we are looking for,
        # so we cut the line string at the first [ = ] character and take everything after it,
        # which is the value we are looking for ( -f2- keeps values that themselves contain an = intact ).
        if [[ $lineItem == $propToRead* ]]; then
            configVal=$(echo "$lineItem" | cut -d'=' -f2-)
            echo "$(trimText "$configVal")"
        fi
    done < "$propFilePath" # The file we are reading each line from
}

 

Cooooool stuff! So let's take a closer look at this function to learn about what's happening as much as possible.

 

  • The first two lines in the function assign the parameters or arguments to variables with proper names, for better readability.
  • We then use a While Loop to go through the file line by line. We are simply saying that while we read each line from the file, do certain things. The variable that represents a line is lineItem. So keep that in mind.
  • The first thing we do in our while loop is use a regex to ignore comments. Lines that start with the # character are comments, so they should be skipped. We say that if the current line is a comment, then continue to the next line and leave this one as is.
  • Then we start building up to finding the value for the key that the client supplied. This happens in our if statement, whose condition checks whether the current lineItem matches the start of the supplied key, thus notice the * character after our variable propToRead. So we are saying that if the line we are on starts with the given key, then start extracting the value of that key.
  • At configVal=$(echo "$lineItem" | cut -d'=' -f2-) we find the value of the key by cutting / splitting the line at the "equals" = delimiter. This cuts the string into two pieces, and we care about the second piece, which is the value. To cut the string we use the Unix Cut Command on the lineItem we are at, then we assign the result to a new variable, configVal.
  • There is a chance that the value we are getting may need to be sanitised a bit, just to get rid of leading and trailing spaces. For this I wrote a little trimText function, which I share just after this list.
  • Finally we return the value via the echo command.
  • The last line closes the while loop and also indicates what we were looping through, in this case the path of the file we were reading. Look up some While loop basics in Unix Shell if this is new to you.
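
Here is a minimal sketch of that trimText helper, using plain Bash parameter expansion ( this is my take on it; any whitespace-trimming approach will do ) :

# Trims leading and trailing whitespace from the given text.
# A sketch of the helper used by getProp above.
# Usage : trimText "   some text   "
function trimText() {
    local text="$*"
    # Remove leading whitespace
    text="${text#"${text%%[![:space:]]*}"}"
    # Remove trailing whitespace
    text="${text%"${text##*[![:space:]]}"}"
    echo "$text"
}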

 

 

"SET" ting Config Values

Moving on to the next step, which is to set / update configuration values by supplying a key. We will include something extra special where we insert a new record of "key" and "value" : in the event we learn that the config is not in the file, we will insert or add one based on the given arguments.


# Used to modify a ".properties" file.
# This function will add the key if it does not exist.
# Usage : setProp this.assumed.key "withThisNewValue" /inside/this/config/file.properties
function setProp() {
    keyToAddOrModify=$1
    valueToSet=$2
    propFilePath=$3

    # Check if the property exists in the file first.
    # Using Regex to ignore the lines that have been commented out.
    # We only care about grep's exit status here, so the output goes to /dev/null.
    if ! grep "^[#]*\s*${keyToAddOrModify} = .*" "$propFilePath" > /dev/null; then
        logInfo $PROPERTIES_FILE_SCRIPT_NAME "Property '${keyToAddOrModify}' not found, so we are adding it in."
        echo "$keyToAddOrModify = $valueToSet" >> "$propFilePath"
    else
        # Handling the case where the config exists, so now it's a matter of editing its value in place
        logInfo $PROPERTIES_FILE_SCRIPT_NAME "Updating the property KEY : '${keyToAddOrModify}' and setting it to VALUE : '${valueToSet}' in the file."
        sed -i "s/^[#]*\s*${keyToAddOrModify} = .*/$keyToAddOrModify = $valueToSet/" "$propFilePath"
    fi
}

 
Zooming in closer to learn what's going on in the code. 


  • We take the three expected arguments and give them meaningful names to make the code more readable.
  • We then use the grep command to look for the key, combined with some regex ( ^[#]*\s* ) that ignores configurations that have been commented out, because we are not interested in them. We point grep at the file we want to read, "$propFilePath", and redirect its output to /dev/null because we only care about whether a match was found, not about the matched text. In short, this line looks for the configuration key that has not been commented out in the specified file.
  • The nice thing is that we put this whole statement inside an if statement, because the if evaluates grep's exit status; combined with the negation ! mark, the block runs when the key was not found. So let's revise it in simple english : if whatever I am looking for in this file is not there, then execute what's inside the if statement block. When executing the block we first print a message notifying the user that we did not find the specified config, so we are going to add it into the file. The last line in the if then joins the "key" and "value" to format it properly per the .properties standards : "$keyToAddOrModify = $valueToSet", and finally we append that to the specified file with : >> "$propFilePath", and wrap it up there.
  • The else branch executes in the case where we found the config in the file. We use an almost identical regex together with a Sed command ( s/^[#]*\s* ) to skip commented configurations and find the actual active configuration. The part of the Sed command that looks for the configuration is : sed -i "s/^[#]*\s*${keyToAddOrModify} = .*, then the rest of the line replaces the configuration in the file with : $keyToAddOrModify = $valueToSet/" "$propFilePath". The function also relies on a small logInfo logging helper; a sketch of it follows this list.
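
The logInfo helper lives in one of my other utility scripts. Here is a minimal sketch of its shape ( the variable name and format are my reconstruction, based on the log output shown further down ) :

# Script name used to tag log lines coming out of this script
PROPERTIES_FILE_SCRIPT_NAME="properties-file.sh"

# Prints a timestamped, tagged log line.
# Usage : logInfo my-script.sh "Something worth logging"
function logInfo() {
    scriptName=$1
    logMessage=$2
    echo "[INFO] $(date '+%Y-%m-%d %H:%M:%S') [${scriptName}] : ${logMessage}"
}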

 

So there you have it! Now let's run a sample test to see how it behaves, then we are done.

 

Testing It Out

Open your terminal and then change to the directory where the script file is. For example :

 
cd /where/your/script/is/

Now import the script using the source command.

 
source properties-file.sh

 

You are doing great. Now the next step is to start with the simpler one, getting a configuration.


getProp some.conf.welcome.message /where/your/configuration/file/is/sample-conf.properties

 

The result you should now be seeing on your terminal should be something like :


$ Believe it or not, I have been read from a shell script


Now let's set some fresh configurations. So while you are in the same directory and you have your script imported into your terminal session, type something like :


setProp some.conf.bashitout "This is a Fresh Configuration, YEAH" /where/your/configuration/file/is/sample-conf.properties


And you should have something like this : 


[INFO] 2020-11-12 22:07:48 [properties-file.sh] : Updating the property KEY : 'some.conf.bashitout' and setting it to VALUE : 'This is a Fresh Configuration, YEAH' in the file.

 

Also check your configuration file and it should have been updated :


 

And that's mainly it. If you want, you can try setting a configuration that's not in the file to see how it behaves; that test should insert a new record into the file, as in the example below. Anyway, this is how you Get and Set configurations inside a .properties file. I believe one can use this same technique to play with other config files.
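
For example, with a hypothetical key that does not exist in the file yet :

setProp some.conf.new.key "Never seen before" /where/your/configuration/file/is/sample-conf.properties

The function should log that the property was not found, and append the line some.conf.new.key = Never seen before to the file.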


You can refer to my GitHub Source Code to check anything you may have missed. Leave your comments in the section below.




Wednesday, August 19, 2020

Docker Database Integration Testing

 

Background


One of the most important things in the Software Engineering world is the ability to automate your tests properly against your database, and by this I mean your actual database. Something that can start up your database for you in case it's not running, then run the tests, and finally shut down your database when the tests are done.

In my experience I have seen developers introduce a bit of manual work to get the following going : 

  • Starting up the database before running tests.
  • Clearing out the database manually when done, or writing something to do that.
  • Sometimes, and I mean sometimes, shutting down the database when done with the tests.

One of the great solutions to this problem is to use a nice "In-Memory" database. This is great and has helped with automating the key points mentioned above : no need to start the database or clear it out manually after running tests etc ...

Now in most cases the in-memory databases don't necessarily match up to the actual database you will be running in the Development, QA and Production environments. E.g., when using something like the H2 In-Memory Database, we need to keep in mind that it's not really the MSSQL Server or Postgres DB that you are running in the actual environments. This means that you may be limited when testing more intense, complex and database-specific processes and operations.

Would it not be nicer to be able to test against the actual database that we are running in the various environments? I think it would be awesome.

So then one day I was with my Chief Architect & CEO, discussing putting a small system together, all the way from tech stack and framework selection to doing some R&D. I was excited that I would finally get to put this tool into practice and see how it works. I wanted something that behaves much like the in-memory databases, except it should be the real database we will be running on.


Things To Keep In Mind

Before we get started : if you will be checking out the article's code repository for reference, you may want to set up these tools first :
  • Java 11
  • Maven 3.6.2
  • Docker

Otherwise this depends on some assumptions : 

  • You are familiar with docker.
  • You already have a Java project with a datasource connection component.
  • Some understanding of maven.
  • You have worked with JUnit ( the examples below use the JUnit 4 style @ClassRule ).


Context


Now that you are ready with the tools needed, we are going to go through a library named Test Containers. This is a nice, lightweight Docker API of sorts. Through the power of JUnit we will also be able to start up and shut down the database automatically. The database for this article is Postgres, but Test Containers supports a lot of database services, so you are not tied to the Postgres database. Remember this also uses Docker, meaning you can pretty much use Test Containers for anything else besides a database service; furthermore you can even play with some Docker Compose files. So this really opens a whole new avenue of possibilities in the Software Engineering world.


Getting Started


Adding Dependencies


If you check the Test Containers home page there's a Gradle equivalent that you can use if you prefer it. In our case you will need :

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <version>1.14.3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.14.3</version>
    <scope>test</scope>
</dependency>

Docker


For this to work you need Docker to be running. Then you don't have to worry about :

  • The actual database service ...
  • Nor the database container ...
  • Or even, the container image. 

Test Containers will sort out the rest for you, even if you don't have the image pulled locally in Docker.

Datasource Configuration


It's as simple as using the "Default" Test Container datasource configurations : 

URL : jdbc:tc:postgresql:13:///test
Username : test
Password : test
Driver Class Name : org.testcontainers.jdbc.ContainerDatabaseDriver

Notice the "tc" inside the URL literal. That's how you know that it's a Test Containers url. The default database name is "test" by default, if you check the end of the url literal. This is the same with username and password. 

So then we are almost there. Believe it or not, that's all you need. Now the final piece : some Java JUnit code to test out the magic.

Java & JUnit


We are going to use the JUnit @ClassRule annotation to do some work before the rest of the actual test loads. This is similar to the @BeforeClass annotation. The reason we will be using this is to kick-start our Docker container before the tests even get to run. So it's preparation for the actual test cases. Create a new test case for your existing Data Access code and add this class field to it.

@ClassRule
public static PostgreSQLContainer postgreSQLContainer = new PostgreSQLContainer("postgres:13");
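
As a side note, if you would rather point your test at the container started by this rule instead of the "tc" URL, the rule object can tell you its runtime coordinates. A minimal sketch, with a class name of my own choosing :

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

public class PostgresContainerIT {

    @ClassRule
    public static PostgreSQLContainer postgreSQLContainer = new PostgreSQLContainer("postgres:13");

    @Test
    public void canConnectToTheRuleManagedContainer() throws Exception {
        // getJdbcUrl() includes the random host port that Docker mapped for this run
        try (Connection connection = DriverManager.getConnection(
                postgreSQLContainer.getJdbcUrl(),
                postgreSQLContainer.getUsername(),
                postgreSQLContainer.getPassword())) {
            System.out.println("Connected via : " + postgreSQLContainer.getJdbcUrl());
        }
    }
}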

Write some sample code to test this out. In my case I have something along the lines of ...

// Some code here
...

@Test
public void assert_That_We_Can_Save_A_Book() {
    Book saved_lordOfTheRings_001 = bookDataAccess.saveAndFlush( lordOfTheRings_001);
    assertNotNull( saved_lordOfTheRings_001 );
    assertNotNull( saved_lordOfTheRings_001.getId() );
    assertEquals( saved_lordOfTheRings_001.getTitle(), lordOfTheRings_001.getTitle() );
}

So that's pretty much it. You can run your test cases and they should integrate with your database and store some information. Let's try it out.


Validating The Data Graphically


We can look into it by debugging our test case, with a breakpoint immediately after the line that saves a database record.

Start up any Docker client of your choice. At this point you will not have any Postgres docker container running, unless you were already using Postgres in Docker. The container run by your test cases will get a random name, so there's no way you will miss it; plus you can check out the image version on your docker client, i.e. postgres:13.

So this is my docker client before running the tests. By the way, the docker client I am using is Portainer.




So now we are going to run a test case and pause it just after saving to the database. 



As you can see, my debug break point is in place and has paused just after saving to the database. Now let's go back to our docker client interface, Portainer in my case... 




Notice the two new containers being created : 

  • testcontainers-ryuk-40abb5dc-... : this is the Test Containers housekeeping service, which cleans up containers when the tests are done.
  • eloquent_robinson : the more important one, your test database. As you can see it has a random name; notice the image, postgres:13, that's our guy right there. The port for this run at this moment is 32769.

The port number is important because it's also random, like the container name. This is by design, according to the Test Containers engineers. For this test case run we are going to try to connect to the database while the debugger is still paused, so that we can see our saved data. So get onto your Postgres database client and connect using the following properties :

  • Username : test
  • Password : test
  • Database Name : test
  • Port : 32769

The URL will be something like : jdbc:postgresql://localhost:32769/test
Connect and run a simple query statement to see your data, saved through that test case. You should get something like the one in the image below : 




Ka-BOOM! There you go. You have not only managed to run a successful integration test, but also validated that it does indeed save your data to the docker hosted database, as you expected. Done, done and done! 


In Closing


I hope this was fun and that you have cases where this can help ease your software development processes.

Leave some comments below and you may checkout the GitHub Docker Database Test - Source Code for reference.




Saturday, June 20, 2020

Testing Your Java EE Thorntail Microservice

Context


Oops! I almost left out one important thing : we had a view on how to "dockerize" your "Hollow JAR", but we did not focus on implementing some sort of automated integration test for your web service. There are a lot of ways one can perform this exercise. One of the recurrent practices is through the use of Postman, followed by Postwoman.

For as long as your service is running, you can develop some really nice JavaScript processes that automate this for you through the tools mentioned above. Another way, for traditional "Java" developers, would be to go the "JUnit" route. At some point you will also need your service to be running somewhere, somehow, waiting for client requests.

The JUnit route later got improved for Java EE Integration Testing through the configuration and use of the frameworks Arquillian & ShrinkWrap as top-ups to JUnit.


Arquillian & ShrinkWrap

The role that Arquillian plays is that of a "middle-man" between an artifact (.jar .war .sar .ear) that you want to deploy and the container or application server you want to deploy to. There are two ways you can deploy your artifact : either you deploy the actual physical artifact that you have just built on your machine, or you programmatically build one for testing. To build one programmatically you use ShrinkWrap.


Getting Started

For those that already know, you will agree that generally, when you are working with a standalone application server, in this case WildFly, setting up Arquillian is quite a bit of work; but at least you set it up once and the rewards are out of this world. On the other side, I have noticed over the past 6-or-so years (since 2013) that it has improved with regard to all the dependencies one must configure and the configuration files that one needs to create. I am really happy with what they have done with it in Thorntail, and that's what we are going to look at for our Java EE Microservice.

Once again we will continue working on the previous Thorntail repo we have been working on since our first article, Java EE Micro Services Using Thorntail to the one that followed, which is, Dockerizing Your Java EE Thorntail Microservice.

Start by opening your "pom.xml" file and adding the following dependency. In the case of Thorntail, it's already packed nicely for you, the developer : you are basically installing or including a "Fraction".

<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>arquillian</artifactId>
    <scope>test</scope>
</dependency>

Part of the normal process with Arquillian is to also include the JUnit dependencies / libraries, right? Ha ha ha ha, well, this Thorntail fraction already includes them, along with the Arquillian & ShrinkWrap libraries that you would otherwise have to configure separately, but not today! Currently it pulls in JUnit 4.12, so you may exclude that using Maven and rather include JUnit 5 if you want; a sketch of such an exclusion follows below. For the purpose of getting you up and running with Thorntail I am just going to keep it as is.
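
If you do want JUnit 5 instead, the exclusion would look something along these lines ( a sketch; confirm the transitive coordinates against your own dependency tree ) :

<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>arquillian</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>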

The next thing is to configure our Maven testing plugin, the Maven Failsafe Plugin. This plugin was designed for integration testing, which is exactly what we want, since we want to integrate with our REST Service when testing and get real results.

<plugin>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.22.2</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>


Test Case Implementation

Now let's get to the fun stuff, our test case implementation. Create a new Test Suite, aka Test Class. Make sure that the name of the class ends with "IT" (which stands for Integration Test), because by default Maven will look for classes that end with "IT" in order to run them as Integration Tests. For example, I named mine "SampleResourceIT". So let's move on to some action ...

Add an annotation at class level as follows :

...
@RunWith( Arquillian.class)
public class SampleResourceIT { }
...

You are instructing JUnit to run with Arquillian. Literally what's there. So this will not be a normal JUnit test case. You will notice that immediately after adding this annotation, you will have some errors already. This is because Arquillian now wants to know how you would like to package your artifact that it should deploy.

So then we should now add a new method with the @Deployment annotation. This annotation is from the Arquillian Framework, and it marks the method where we will build our artifact for Arquillian to deploy.

...
@Deployment
public static Archive createDeployment() { }
...

Now let's build our test ".war" file inside that method using the ShrinkWrap API.

...
@Deployment
public static Archive createDeployment() {
    WebArchive webArtifact = ShrinkWrap.create( WebArchive.class, "thorntail-test-api.war");
    webArtifact.addPackages( Boolean.TRUE, "za.co.anylytical.showcase");
    webArtifact.addAsWebResource("project-defaults.yml");

    // Print all files and included packages
    System.out.println( webArtifact.toString( true));

    return webArtifact;
}
...

So now we have built our small, simple test web archive. We give our war file a name, "thorntail-test-api.war"; I believe you know that you can name it anything you want, so you are not tied to naming it similarly to the original file name. The next thing we do is include our package, "za.co.anylytical.showcase", which contains pretty much our REST Service and its business logic; in a nutshell, our application's Java classes. The Boolean.TRUE flag tells ShrinkWrap to search the package recursively. The next part is about including any resource files that we may want. I know that some of our actual REST configurations are in the file "project-defaults.yml", and to be as close as possible to our actual application we should include it in our test web archive. The last part prints everything that's in the archive, just to see what this test war file contains and be sure that we have everything we want in there.


Great stuff. So now we want to write a test that just calls our test service and affirms that we managed to reach the Web Resource just fine.

...
@Test
public void test_That_We_Reach_Our_WebResource_Just_Fine_Yea() throws Exception {
    Client client = ClientBuilder.newBuilder().build();
    WebTarget target = client.target("http://localhost:8881/that-service/text");
    Response response = target.request().get();
    int statusCode = response.getStatusInfo().getStatusCode();
    String responseBody = response.readEntity(String.class);
    assertEquals( 200, statusCode);

    System.out.println("RESPONSE CODE : " + statusCode);
    System.out.println("RESPONSE BODY : " + responseBody);
}
...

We have a simple test case that uses the standard JAX-RS 2.x Client API, so no magic there. We build our client code, call the REST Service we want, and then validate that we get an HTTP Status Code 200, which means that things went well. No issues, ZILCH!
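
By the way, the 8881 in the URL is just whatever port your service listens on. In this series it comes from project-defaults.yml, something along these lines ( a sketch; your actual file may differ ) :

thorntail:
  http:
    port: 8881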




These are my IntelliJ IDEA test run results. You can also go through with Maven as follows :


mvn clean install



This will perform a build while running the integration tests for us, which should give you results like the ones in the image below :



If you pay close attention to the image above, you will also see that Arquillian started up WildFly for us, used ShrinkWrap to build a ".war" file, and deployed it for us, all done automatically. Together with this, JUnit kicked off our test case when Arquillian was done with the deploy. Now that's magic! KA-BOOM!

The full test case looks like this :

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Response;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.Archive;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith( Arquillian.class)
public class SampleResourceIT {

    @Deployment
    public static Archive createDeployment() {
        WebArchive webArtifact = ShrinkWrap.create( WebArchive.class, "thorntail-test-api.war");
        webArtifact.addPackages( Boolean.TRUE, "za.co.anylytical.showcase");
        webArtifact.addAsWebResource("project-defaults.yml");

        // Print all files and included packages
        System.out.println( webArtifact.toString( true));

        return webArtifact;
    }

    @Test
    public void test_That_We_Reach_Our_WebResource_Just_Fine_Yea() throws Exception {
        Client client = ClientBuilder.newBuilder().build();
        WebTarget target = client.target("http://localhost:8881/that-service/text");
        Response response = target.request().get();
        int statusCode = response.getStatusInfo().getStatusCode();
        String responseBody = response.readEntity(String.class);
        assertEquals( 200, statusCode);

        System.out.println("RESPONSE CODE : " + statusCode);
        System.out.println("RESPONSE BODY : " + responseBody);
    }
}


As usual you may ...

Leave some comments below and you may checkout the GitHub Source Code to validate the steps.