Friday, November 29, 2013

Creating a Simple JBoss Cluster (7.1.1 final)


I will be demonstrating the tasks required to create a simple JBoss Cluster environment on a single machine. I will be using jboss-as-7.1.1.Final for this demonstration.

The Cluster is going to contain the following:
  • The complete Cluster is going to be hosted on one physical machine
  • The Master and Slave will each contain 3 server instances (Server 1, 2 & 3)
  • The Master instance will act as the Domain Controller
  • The Slave instance will act as a Host
  • The 3 server instances on both Master and Slave will be part of 2 Server Groups
  • Server 1 & 2 on both Master and Slave will be part of one Server Group
  • Server 3 on both Master and Slave will be part of one Server Group
  • The Server Group hosting Server 1 & 2 on both Master and Slave will not be Cluster-able but will be Domain Controlled
  • The Server Group hosting Server 3 on both Master and Slave will be Cluster-able and Domain Controlled
Here is a diagram illustrating the Design Architecture:

Why do we need a Cluster?

  • High availability
  • Load balancing
  • Failover
  • Session Replication
  • Scalability


Download the jboss-as-7.1.1.Final package
Extract the contents of the package to 2 different directories: one directory called Master and the other called Slave.

Configuring the Master Server (Domain Controller)


Edit the file:

Replace the empty values with the IP address of the machine hosting Master in the following locations:
   <interface name="management">  
     <inet-address value="${}"/>  
   </interface>  
   <interface name="public">  
     <inet-address value="${jboss.bind.address:}"/>  
   </interface>  
   <interface name="unsecured">  
     <inet-address value="" />  
   </interface>  
We need to change this so that:
  1. The management interface ensures that Slave can connect to Master
  2. The public interface allows the application to be accessed from external addresses
  3. The unsecured interface allows RMI calls

Create 2 ManagementRealm user accounts for domain management authentication 

Execute the add-user script in the bin directory

User Account 1
UserName: admin
Password: password
Realm: ManagementRealm

User Account 2 - This user allows the Slave server to connect to the Master Server
UserName: slave
Password: password
Realm: ManagementRealm
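For reference, the interactive session looks roughly like this (prompts paraphrased from memory, shown here for User Account 2; the exact wording may differ slightly in 7.1.1):

```
$ ./add-user.sh
Enter the details of the new user to add.
Realm (ManagementRealm) : ManagementRealm
Username : slave
Password : password
Re-enter Password : password
About to add user 'slave' for realm 'ManagementRealm'
Is this correct yes/no? yes
```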

For some reason JBoss enforces security checks on the HornetQ queues. I don't know if this is intentional or a bug. It expects a username and password even though you have not configured one anywhere and it does not even exist. My suggestion is to disable security.

Edit the file:


This needs to be updated in 2 locations of the domain.xml file:
 <subsystem xmlns="urn:jboss:domain:messaging:1.1">  
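A sketch of the change, assuming the stock HornetQ layout of the messaging subsystem (security-enabled is the standard HornetQ switch, but verify its exact position inside the hornetq-server element in your copy of domain.xml):

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.1">
    <hornetq-server>
        <!-- disable HornetQ security checks on the queues -->
        <security-enabled>false</security-enabled>
        <!-- the rest of the hornetq-server configuration stays unchanged -->
    </hornetq-server>
</subsystem>
```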
Change the Server Group profile configuration:
 <server-group name="other-server-group" profile="full-ha">  
       <jvm name="default">  
         <heap size="64m" max-size="512m"/>  
       </jvm>  
       <socket-binding-group ref="full-ha-sockets"/>  
 </server-group>  
Our Clustered Server group is called other-server-group. Change the profile and socket binding group reference as above. The profile is used for configuration related to Modules. These Modules are enabled and bound in the socket binding group. We need to specify a profile and socket binding group that supports Clustering.
The following modules are required:
  • infinispan
  • jgroups
  • mod_cluster
The reason we need to change this from the default configuration is that this version of JBoss has a bug that prevents Clustering with the default values.

Update the domain.xml file:

 <subsystem xmlns="urn:jboss:domain:modcluster:1.0">  
         <mod-cluster-config advertise-socket="modcluster" proxy-list="<IP Address of Master>:10001">  
             <load-metric type="busyness"/>  
 <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" instance-id="${}" native="false">  

This configuration ensures that JBoss is aware of mod_cluster and is publishing or advertising Server Status information to our mod_cluster.

Configuring the Slave Server (Host)


Delete the file:


Edit the file:


Set the host name on Slave by changing:
 <host name="master" xmlns="urn:jboss:domain:1.2">  
to:
 <host name="slave" xmlns="urn:jboss:domain:1.2">  

Configure the Domain Controller as follows (take note of the Security Realm configured):
     <remote host="<IP Address of Master>" port="9999" security-realm="ManagementRealm"/>  

Replace the empty values with the IP address of the machine hosting Slave in the following locations:
   <interface name="management">  
     <inet-address value="${}"/>  
   </interface>  
   <interface name="public">  
     <inet-address value="${jboss.bind.address:}"/>  
   </interface>  
   <interface name="unsecured">  
     <inet-address value="" />  
   </interface>  
The same reasons apply here as when we configured for the Master instance above.

Change the Security Realm to the following:
       <security-realm name="ManagementRealm">  
         <server-identities>  
           <secret value="cGFzc3dvcmQ="/>  
         </server-identities>  
         <authentication>  
           <properties path="" relative-to="jboss.domain.config.dir"/>  
         </authentication>  
       </security-realm>  
The server identity created here allows the Slave instance to connect to Master. We configured the host name of this instance as "slave". This needs to correspond to a user account created on Master with the same user name; from above, this corresponds to User Account 2. The secret value is the password of User Account 2 (slave), base64 encoded.
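You can generate the secret value yourself with the standard base64 utility; a minimal sketch, encoding the slave user's password from User Account 2 above:

```shell
# Encode the slave user's password ("password") into the base64
# form required by the <secret> element
printf '%s' password | base64
```

The output, cGFzc3dvcmQ=, matches the secret value shown in the snippet.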

Change the port numbers on the management address to avoid port conflicts with Master:
       <native-interface security-realm="ManagementRealm">  
         <socket interface="management" port="${}"/>  
       <http-interface security-realm="ManagementRealm">  
         <socket interface="management" port="${}"/>  
Increment by 10 000 for convenience purposes.
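Assuming the stock defaults (native management on 9999, as used in the remote host element above, and the HTTP management console on 9990), the offset works out to:

```shell
# Slave management ports = stock defaults + the 10 000 increment
echo $((9999 + 10000))   # native management port
echo $((9990 + 10000))   # http management console port
```

This prints 19999 and 19990, the values to put into the two socket elements on Slave.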

Add a port offset to Server 1, 2 & 3 to avoid port conflicts with Master:
     <server name="server-one" group="main-server-group">  
       <socket-bindings port-offset="1000"/>  
     <server name="server-two" group="main-server-group" auto-start="true">  
       <socket-bindings port-offset="1150"/>  
     <server name="server-three" group="other-server-group" auto-start="false">  
       <socket-bindings port-offset="1250"/>  
Increment by 1 000 for convenience purposes.

Start Master and Slave

 ./ -b <IP Address of Master> -bmanagement <IP Address of Master>  

 ./ -b <IP Address of Slave> -bmanagement <IP Address of Slave>  

Sample Web Application

Create a very simple web application. You will need to add the distributable tag to the application's web.xml file to configure it to be Cluster-able (Session Replication).
I also suggest adding System.out.println statements to your application, especially when a page is requested. That way you can easily identify which server is serving your request when running in Cluster mode.
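For reference, a minimal web.xml carrying the distributable marker (a Servlet 3.0 descriptor is shown; adjust the version to whatever your application uses):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- tells the container to replicate HTTP sessions across the cluster -->
    <distributable/>
</web-app>
```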

Cluster Administration

You can access the JBoss administration console of the Domain Controller by going to this link:
 http://<IP Address of Master>:9990  
Start up Server 3 on both Master and Slave. Server 3 on both Master and Slave are part of other-server-group while Servers 1 & 2 on both Master and Slave are part of main-server-group. We are only concerned with other-server-group to demonstrate the Clustering concept.
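If you prefer the command line over the console, the servers can also be started through the jboss-cli script against the Domain Controller's native management port (a sketch from memory; host and server names as configured above):

```
$ ./jboss-cli.sh --connect controller=<IP Address of Master>:9999
[domain@<IP Address of Master>:9999 /] /host=master/server-config=server-three:start
[domain@<IP Address of Master>:9999 /] /host=slave/server-config=server-three:start
```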

Deploy the Web application to the other-server-group

Make sure that the deployment succeeded. If all was successful, you can access the web application on Master and Slave explicitly.
 http://<IP Address of Master>:8330/cluster-demo/  
 http://<IP Address of Slave>:9330/cluster-demo/  
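Those port numbers are just the default HTTP port plus each server's port-offset: Slave's server-three got an offset of 1250 above, and Master's server-three keeps the stock offset of 250 (an assumption based on the default host.xml shipped with 7.1.1):

```shell
# HTTP port = default 8080 + the server instance's port-offset
echo $((8080 + 250))    # Master server-three, stock offset 250
echo $((8080 + 1250))   # Slave server-three, our offset 1250
```

This prints 8330 and 9330, matching the two URLs above.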

(Please note that cluster-demo was a sample web application which I downloaded from GitHub. Substitute the context root of your own application.)

If you reached this point, you have set up a Domain controlled Multi-Server environment. Our next step is to Cluster.


If you were to access any of the server instances in the other-server-group hosting the web application, you would need to explicitly specify the IP address and port number of that particular instance. This means that if we want to use the full potential behind Clustering, we need some "unified" address which hides the Domain Server structure. This central point then needs to route requests to the servers participating in the cluster. Speaking loosely, I think of it as a sort of proxy or delegator. I have heard people call it a load balancer. That is true to some degree but, remember, it does more than just load balancing. It is also intelligent enough to only route requests to servers that are up and running.

We use Apache mod_cluster to achieve all of the above. The mod_cluster distribution is an httpd-based daemon. I am using mod_cluster version 1.2.6. I had never used this tool before and I had to wrap my head around a few concepts before understanding how it actually works.

Download the package meant for your specific operating system.

Extract the contents into the /opt directory. I found that the scripts do not work well if you do not extract it into this directory. You will also need root or sudo access; I found that it modifies files that are not accessible to ordinary users by default.

Configure the httpd.conf file

Here is a sample file that you can use. You only need to specify your IP Address:
 # This is the main Apache HTTP server configuration file. It contains the  
 # configuration directives that give the server its instructions.  
 # See <URL:> for detailed information.  
 # In particular, see  
 # <URL:>  
 # for a discussion of each configuration directive.  
 # Do NOT simply read the instructions in here without understanding  
 # what they do. They're here only as hints or reminders. If you are unsure  
 # consult the online docs. You have been warned.  
 # Configuration and logfile names: If the filenames you specify for many  
 # of the server's control files begin with "/" (or "drive:/" for Win32), the  
 # server will use that explicit path. If the filenames do *not* begin  
 # with "/", the value of ServerRoot is prepended -- so "logs/foo_log"  
 # with ServerRoot set to "/opt/jboss/httpd/httpd" will be interpreted by the  
 # server as "/opt/jboss/httpd/httpd/logs/foo_log".  
 # ServerRoot: The top of the directory tree under which the server's  
 # configuration, error, and log files are kept.  
 # Do not add a slash at the end of the directory path. If you point  
 # ServerRoot at a non-local disk, be sure to point the LockFile directive  
 # at a local disk. If you wish to share the same ServerRoot for multiple  
 # httpd daemons, you will need to change at least LockFile and PidFile.  
 ServerRoot "/opt/jboss/httpd/httpd"  
 # Listen: Allows you to bind Apache to specific IP addresses and/or  
 # ports, instead of the default. See also the <VirtualHost>  
 # directive.  
 # Change this to Listen on specific IP addresses as shown below to  
 # prevent Apache from glomming onto all bound IP addresses.  
 #Listen 80  
 # Dynamic Shared Object (DSO) Support  
 # To be able to use the functionality of a module which was built as a DSO you  
 # have to place corresponding `LoadModule' lines at this location so the  
 # directives contained in it are actually available _before_ they are used.  
 # Statically compiled modules (those listed by `httpd -l') do not need  
 # to be loaded here.  
 # Example:  
 # LoadModule foo_module modules/  
 LoadModule authn_file_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authn_dbm_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authn_anon_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authn_dbd_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authn_default_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authn_alias_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authz_host_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authz_groupfile_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authz_user_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authz_dbm_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authz_owner_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule authz_default_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule auth_basic_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule auth_digest_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule advertise_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule file_cache_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule cache_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule disk_cache_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule mem_cache_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule dbd_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule dumpio_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule reqtimeout_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule ext_filter_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule include_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule filter_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule substitute_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule deflate_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule log_config_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule log_forensic_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule logio_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule env_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule mime_magic_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule cern_meta_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule expires_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule headers_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule ident_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule usertrack_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule unique_id_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule setenvif_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule version_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule proxy_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule proxy_connect_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule proxy_ftp_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule proxy_http_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule proxy_scgi_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule proxy_ajp_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule proxy_cluster_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule ssl_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule mime_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule dav_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule status_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule autoindex_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule asis_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule info_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule suexec_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule cgi_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule cgid_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule jk_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule manager_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule slotmem_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule dav_fs_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule vhost_alias_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule negotiation_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule dir_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule imagemap_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule actions_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule speling_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule userdir_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule alias_module /opt/jboss/httpd/lib/httpd/modules/  
 LoadModule rewrite_module /opt/jboss/httpd/lib/httpd/modules/  
 <IfModule !mpm_netware_module>  
 <IfModule !mpm_winnt_module>  
 # If you wish httpd to run as a different user or group, you must run  
 # httpd as root initially and it will switch.  
 # User/Group: The name (or #number) of the user/group to run httpd as.  
 # It is usually good practice to create a dedicated user and group for  
 # running httpd, as with most system services.  
 User daemon  
 Group daemon  
 # 'Main' server configuration  
 # The directives in this section set up the values used by the 'main'  
 # server, which responds to any requests that aren't handled by a  
 # <VirtualHost> definition. These values also provide defaults for  
 # any <VirtualHost> containers you may define later in the file.  
 # All of these directives may appear inside <VirtualHost> containers,  
 # in which case these default settings will be overridden for the  
 # virtual host being defined.  
 # ServerAdmin: Your address, where problems with the server should be  
 # e-mailed. This address appears on some server-generated pages, such  
 # as error documents. e.g.  
 # ServerName gives the name and port that the server uses to identify itself.  
 # This can often be determined automatically, but we recommend you specify  
 # it explicitly to prevent problems during startup.  
 # If your host doesn't have a registered DNS name, enter its IP address here.  
 # DocumentRoot: The directory out of which you will serve your  
 # documents. By default, all requests are taken from this directory, but  
 # symbolic links and aliases may be used to point to other locations.  
 DocumentRoot "/opt/jboss/httpd/htdocs/htdocs"  
 # Each directory to which Apache has access can be configured with respect  
 # to which services and features are allowed and/or disabled in that  
 # directory (and its subdirectories).  
 # First, we configure the "default" to be a very restrictive set of  
 # features.  
 <Directory />  
   Options FollowSymLinks  
   AllowOverride None  
   Order deny,allow  
   Deny from all  
 # Note that from this point forward you must specifically allow  
 # particular features to be enabled - so if something's not working as  
 # you might expect, make sure that you have specifically enabled it  
 # below.  
 # This should be changed to whatever you set DocumentRoot to.  
 <Directory "/opt/jboss/httpd/htdocs/htdocs">  
   # Possible values for the Options directive are "None", "All",  
   # or any combination of:  
   #  Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews  
   # Note that "MultiViews" must be named *explicitly* --- "Options All"  
   # doesn't give it to you.  
   # The Options directive is both complicated and important. Please see  
   # for more information.  
   Options Indexes FollowSymLinks  
   # AllowOverride controls what directives may be placed in .htaccess files.  
   # It can be "All", "None", or any combination of the keywords:  
   #  Options FileInfo AuthConfig Limit  
   AllowOverride None  
   # Controls who can get stuff from this server.  
   Order allow,deny  
   Allow from all  
 # DirectoryIndex: sets the file that Apache will serve if a directory  
 # is requested.  
 <IfModule dir_module>  
   DirectoryIndex index.html  
 # The following lines prevent .htaccess and .htpasswd files from being  
 # viewed by Web clients.  
 <FilesMatch "^\.ht">  
   Order allow,deny  
   Deny from all  
   Satisfy All  
 # ErrorLog: The location of the error log file.  
 # If you do not specify an ErrorLog directive within a <VirtualHost>  
 # container, error messages relating to that virtual host will be  
 # logged here. If you *do* define an error logfile for a <VirtualHost>  
 # container, that host's errors will be logged there and not here.  
 ErrorLog "logs/error_log"  
 # LogLevel: Control the number of messages logged to the error_log.  
 # Possible values include: debug, info, notice, warn, error, crit,  
 # alert, emerg.  
 LogLevel warn  
 <IfModule log_config_module>  
   # The following directives define some format nicknames for use with  
   # a CustomLog directive (see below).  
   LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined  
   LogFormat "%h %l %u %t \"%r\" %>s %b" common  
   <IfModule logio_module>  
    # You need to enable mod_logio.c to use %I and %O  
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio  
   # The location and format of the access logfile (Common Logfile Format).  
   # If you do not define any access logfiles within a <VirtualHost>  
   # container, they will be logged here. Contrariwise, if you *do*  
   # define per-<VirtualHost> access logfiles, transactions will be  
   # logged therein and *not* in this file.  
   CustomLog "logs/access_log" common  
   # If you prefer a logfile with access, agent, and referer information  
   # (Combined Logfile Format) you can use the following directive.  
   #CustomLog "logs/access_log" combined  
 <IfModule alias_module>  
   # Redirect: Allows you to tell clients about documents that used to  
   # exist in your server's namespace, but do not anymore. The client  
   # will make a new request for the document at its new location.  
   # Example:  
   # Redirect permanent /foo  
   # Alias: Maps web paths into filesystem paths and is used to  
   # access content that does not live under the DocumentRoot.  
   # Example:  
   # Alias /webpath /full/filesystem/path  
   # If you include a trailing / on /webpath then the server will  
   # require it to be present in the URL. You will also likely  
   # need to provide a <Directory> section to allow access to  
   # the filesystem path.  
   # ScriptAlias: This controls which directories contain server scripts.  
   # ScriptAliases are essentially the same as Aliases, except that  
   # documents in the target directory are treated as applications and  
   # run by the server when requested rather than as documents sent to the  
   # client. The same rules about trailing "/" apply to ScriptAlias  
   # directives as to Alias.  
   ScriptAlias /cgi-bin/ "/opt/jboss/httpd/htdocs/cgi-bin/"  
 <IfModule cgid_module>  
   # ScriptSock: On threaded servers, designate the path to the UNIX  
   # socket used to communicate with the CGI daemon of mod_cgid.  
   #Scriptsock logs/cgisock  
 # "/opt/jboss/httpd/htdocs/cgi-bin" should be changed to whatever your ScriptAliased  
 # CGI directory exists, if you have that configured.  
 <Directory "/opt/jboss/httpd/htdocs/cgi-bin">  
   AllowOverride None  
   Options None  
   Order allow,deny  
   Allow from all  
 # DefaultType: the default MIME type the server will use for a document  
 # if it cannot otherwise determine one, such as from filename extensions.  
 # If your server contains mostly text or HTML documents, "text/plain" is  
 # a good value. If most of your content is binary, such as applications  
 # or images, you may want to use "application/octet-stream" instead to  
 # keep browsers from trying to display binary files as though they are  
 # text.  
 DefaultType text/plain  
 <IfModule mime_module>  
   # TypesConfig points to the file containing the list of mappings from  
   # filename extension to MIME-type.  
   TypesConfig conf/mime.types  
   # AddType allows you to add to or override the MIME configuration  
   # file specified in TypesConfig for specific file types.  
   #AddType application/x-gzip .tgz  
   # AddEncoding allows you to have certain browsers uncompress  
   # information on the fly. Note: Not all browsers support this.  
   #AddEncoding x-compress .Z  
   #AddEncoding x-gzip .gz .tgz  
   # If the AddEncoding directives above are commented-out, then you  
   # probably should define those extensions to indicate media types:  
   AddType application/x-compress .Z  
   AddType application/x-gzip .gz .tgz  
   # AddHandler allows you to map certain file extensions to "handlers":  
   # actions unrelated to filetype. These can be either built into the server  
   # or added with the Action directive (see below)  
   # To use CGI scripts outside of ScriptAliased directories:  
   # (You will also need to add "ExecCGI" to the "Options" directive.)  
   #AddHandler cgi-script .cgi  
   # For type maps (negotiated resources):  
   #AddHandler type-map var  
   # Filters allow you to process content before it is sent to the client.  
   # To parse .shtml files for server-side includes (SSI):  
   # (You will also need to add "Includes" to the "Options" directive.)  
   #AddType text/html .shtml  
   #AddOutputFilter INCLUDES .shtml  
 # The mod_mime_magic module allows the server to use various hints from the  
 # contents of the file itself to determine its type. The MIMEMagicFile  
 # directive tells the module where the hint definitions are located.  
 #MIMEMagicFile conf/magic  
 # Customizable error responses come in three flavors:  
 # 1) plain text 2) local redirects 3) external redirects  
 # Some examples:  
 #ErrorDocument 500 "The server made a boo boo."  
 #ErrorDocument 404 /missing.html  
 #ErrorDocument 404 "/cgi-bin/"  
 #ErrorDocument 402  
 # MaxRanges: Maximum number of Ranges in a request before  
 # returning the entire resource, or 0 for unlimited  
 # Default setting is to accept 200 Ranges  
 #MaxRanges 0  
 # EnableMMAP and EnableSendfile: On systems that support it,  
 # memory-mapping or the sendfile syscall is used to deliver  
 # files. This usually improves server performance, but must  
 # be turned off when serving from networked-mounted  
 # filesystems or if support for these functions is otherwise  
 # broken on your system.  
 #EnableMMAP off  
 #EnableSendfile off  
 # Supplemental configuration  
 # The configuration files in the conf/extra/ directory can be  
 # included to add extra features or to modify the default configuration of  
 # the server, or you may simply copy their contents here and change as  
 # necessary.  
 # Server-pool management (MPM specific)  
 #Include conf/extra/httpd-mpm.conf  
 # Multi-language error messages  
 #Include conf/extra/httpd-multilang-errordoc.conf  
 # Fancy directory listings  
 #Include conf/extra/httpd-autoindex.conf  
 # Language settings  
 #Include conf/extra/httpd-languages.conf  
 # User home directories  
 #Include conf/extra/httpd-userdir.conf  
 # Real-time info on requests and configuration  
 #Include conf/extra/httpd-info.conf  
 # Virtual hosts  
 #Include conf/extra/httpd-vhosts.conf  
 # Local access to the Apache HTTP Server Manual  
 #Include conf/extra/httpd-manual.conf  
 # Distributed authoring and versioning (WebDAV)  
 #Include conf/extra/httpd-dav.conf  
 # Various default settings  
 #Include conf/extra/httpd-default.conf  
 # Secure (SSL/TLS) connections  
 #Include conf/extra/httpd-ssl.conf  
 # Note: The following must must be present to support  
 #    starting without SSL on platforms with no /dev/random equivalent  
 #    but a statically compiled-in mod_ssl.  
 <IfModule ssl_module>  
 SSLRandomSeed startup builtin  
 SSLRandomSeed connect builtin  
 # This Listen port is for the mod_cluster-manager, where you can see the status of mod_cluster.  
 # Port 10001 is not a reserved port, so this prevents problems with SELinux.  
 # This directive only applies to Red Hat Enterprise Linux. It prevents the temporary  
 # files from being written to /etc/httpd/logs/ which is not an appropriate location.  
 MemManagerFile /var/cache/httpd  
  <Directory />  
   Order deny,allow  
   Deny from all  
   Allow from 192.168.10.  
  # This directive allows you to view mod_cluster status at URL  
  <Location /mod_cluster-manager>  
   SetHandler mod_cluster-manager  
   Order deny,allow  
   Deny from all  
   Allow from 192.168.10.  
  KeepAliveTimeout 60  
  MaxKeepAliveRequests 0  
  ManagerBalancerName other-server-group  
  AdvertiseFrequency 5  
  ServerAdvertise On  
The main areas of concern are:
Somewhere early in the file and:
 # This Listen port is for the mod_cluster-manager, where you can see the status of mod_cluster.  
 # Port 10001 is not a reserved port, so this prevents problems with SELinux.  
 # This directive only applies to Red Hat Enterprise Linux. It prevents the temporary  
 # files from being written to /etc/httpd/logs/ which is not an appropriate location.  
 MemManagerFile /var/cache/httpd  
  <Directory />  
   Order deny,allow  
   Deny from all  
   Allow from 192.168.10.  
  # This directive allows you to view mod_cluster status at URL  
  <Location /mod_cluster-manager>  
   SetHandler mod_cluster-manager  
   Order deny,allow  
   Deny from all  
   Allow from 192.168.10.  
  KeepAliveTimeout 60  
  MaxKeepAliveRequests 0  
  ManagerBalancerName other-server-group  
  AdvertiseFrequency 5  
  ServerAdvertise On  
Right at the end of the file.

Start up Mod_Cluster

 sudo ./apachectl start  

To Shut down Mod_Cluster

 sudo ./apachectl stop  

You can check that the Mod_Cluster is running by accessing:
 http://<Your IP Address>  
You should get a page that looks like this:
You can also use this url:
 http://<Your IP Address>:10001  
The result should be the same as above.

To check that Mod_Cluster is connecting to your 2 server instances:
 http://<Your IP Address>:10001/mod_cluster-manager  

If your page looks like this, then Mod_Cluster is running but communication between your Cluster group other-server-group and Mod_Cluster is not working correctly, or the Servers might be down:


To test the Cluster, access the web application via the Mod_Cluster:
 http://<Your IP Address>:10001/cluster-demo/  
 http://<Your IP Address>/cluster-demo/  

Mod_Cluster will route requests based on server availability. You can determine which server instance is serving the requests by checking the server console.

Now try shutting down one server instance. Check that the requests are still being processed by the other server.
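One simple way to watch the failover from a terminal (a sketch; substitute your own IP address and context root as above):

```
while true; do
  curl -s -o /dev/null -w '%{http_code}\n' http://<Your IP Address>/cluster-demo/
  sleep 1
done
```

The status codes should stay at 200 while you stop one of the two server instances.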

If all is working, then you have Clustering working!!!


This exercise is merely meant to identify the concepts and principles. You can easily migrate this structure to multiple machines and even add more "slave" instances or nodes.

As always, I would love to hear any feedback, comments or questions. I can also provide technical advice to your organisation if required.

AS7 Cluster Howto
Coder36 - Setting up a JBoss 7.1 Cluster

Friday, November 15, 2013

Beginner's guide to using Maven

I have been trying to convince a fellow Engineer for a long time that Maven is this great build tool that he needs to start using ASAP, and that it is much more than just a replacement for ANT. He finally gave it a try and came back SCREAMING!!!

Why was he Screaming?

He was using Maven in the wrong way.

Why is Maven this great tool?

In my opinion it is a great tool because it handles library dependencies on your behalf. Whenever I build a new application, I do not need to scan my hard drive for libraries and then add them to my IDE project classpath.

I worked in an organisation that had a cool way of handling this situation using ANT. Everyone worked on a specified "workspace" structure. There were scripts that created the structure for you. The workspace contained libraries, scripts and slots. You developed in a slot and referenced libraries in your workspace/libraries folder. The libraries and scripts were all stored in a source control repository. This meant that if you had to use a new library, you would need to download it from the internet, commit it to the code repository, run an update on your workspace and reference it in your ant scripts.

It worked like a charm if you understood the process behind the madness. In a nutshell, I found that Maven does ALL of this for you out-of-the-box. Let me not undersell this tool (Maven). This is but one of its cool features (Dependency Management).

Why do people think that MAVEN is rubbish?

I have promoted Maven and managed to convince developers to start using it. They did, and the part that was meant to help them turned out to be the monster. The dependency management was a mess.

This made me wonder: how could a tool be great for some people and at the same time be a disaster for others?

I soon realised that there are actually 2 ways of using Maven:
  1. Self Development using Maven
  2. Group Development using Maven

Self Development using Maven

If you are a developer who flies solo, then just download Maven and you are good to go.

What is happening behind the scenes?

When you run Maven for the first time, it creates a local cache folder for you. This is located in your user directory and is called ".m2".



If you look into this folder you will find:

  • A settings.xml file which should not concern you at this time
  • A repository folder which contains dependencies that you used to build your projects

How do these dependencies end up here?

When you compile or build your code using Maven, Maven first checks whether the dependency exists in your local cache. If not, it connects to the internet and searches for the dependency in the Maven Central Repository. If it finds the dependency, it downloads it to your local cache. If not, it searches the other repositories that the Maven Central Repository is connected to or references. You should be able to download most libraries in this way.

Let us consider 2 other possible scenarios:

1. The dependency required is another project of yours. It is a completely separate project and you have decided not to include it as a module within your current project. If this second project, which is a dependency of your first project, is a Maven project then, by running: 
 mvn install  
you will not only build and compile the project, but also install the output artefacts of each of its modules to your local cache.
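To make this concrete, here is a sketch of where `mvn install` places an artefact in the local cache. The coordinates (com.example:shared-utils:1.0-SNAPSHOT) are made up for the example; Maven maps the groupId onto a directory path under ~/.m2/repository:

```shell
# Hypothetical coordinates: groupId=com.example, artifactId=shared-utils, version=1.0-SNAPSHOT.
# Maven stores an installed artefact using the pattern:
#   ~/.m2/repository/<groupId as path>/<artifactId>/<version>/<artifactId>-<version>.<packaging>
GROUP_ID="com.example"
ARTIFACT_ID="shared-utils"
VERSION="1.0-SNAPSHOT"
# The dots in the groupId become directory separators
GROUP_PATH=$(echo "$GROUP_ID" | tr '.' '/')
echo "$HOME/.m2/repository/$GROUP_PATH/$ARTIFACT_ID/$VERSION/$ARTIFACT_ID-$VERSION.jar"
```

This is why a dependency declared once in the POM can be resolved again later without touching the internet.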

2. The dependency may or may not be another project of yours. If it is your project and it is not in a Maven structure, then building or installing the project will have no value. The only thing you do have is the built library (jar). You can install this library manually into your local cache by running: 
 mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging>  

You can then add this dependency's details to your Maven POM (Project Object Model) file.
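As a sketch, assuming hypothetical coordinates (com.example:my-library:1.0) matching whatever you passed to install:install-file, the POM entry would look like this:

```xml
<!-- Hypothetical coordinates; use the groupId/artifactId/version
     you supplied to install:install-file -->
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>my-library</artifactId>
    <version>1.0</version>
  </dependency>
</dependencies>
```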

What is the Maven Central Repository? 

The Maven Central Repository is a Maven Artefact Repository hosted on the Internet that contains most of the common or often-used Java libraries. In some cases a library is hosted on the Maven Central Repository itself, and in other cases Central references other Maven Artefact Repositories that host the libraries themselves.

The above point is very interesting. I am saying that not only does a Maven Artefact Repository host libraries, but it can also reference other Maven Artefact Repositories which have the same capability. This is huge!!! I guess you can call it chaining.

To summarise, this is what happens step by step:

Step 1 - Look for the dependency in the local cache; if found, complete the build process, otherwise go to Step 2.
Step 2 - Look for the dependency in the Maven Central Repository; if found, download it to the local cache, otherwise go to Step 3.
Step 3 - Look for the dependency in the referenced remote repository or repositories; if found, download it to the local cache, otherwise Maven, as expected, stops processing and throws an error (Unable to find dependency).

Group Development using Maven

If you have a clear understanding of the above, you will by now have gathered why this setup will not work for group development. Okay, it can work to some degree, but it will be a mission!

The problem with the above setup is that every Developer in the group would need to manually add libraries into their local caches when required. This can become tedious, error prone and amateurish.

What can we do to solve this?

The answer is to install a Maven Artefact Repository for the organisation. I recommend using Sonatype Nexus. This is a dead simple installation. 

How to Install Nexus Artefact Repository

  • Download the latest war file
  • Deploy it on a Tomcat instance
  • Make sure that it can connect to the internet. You can check this by selecting Repositories and then checking that the Repository status is "In Service".

  • You might have to configure Proxy settings if you connect to the internet via a Proxy Server. The default username is admin and the password is admin123. You can configure the proxy under the Administration --> Server settings

  • You will also need to enable a "Deployment" user. This user should have the ability to deploy or install artefacts to the Artefact Repository. Take notice of the roles applied here.

  • The next step is to get Maven to reference your newly configured Artefact Repository.
You can achieve this by configuring the "settings.xml" file. You can place this file in the MAVEN_HOME/conf directory or in the .m2 directory. Maven first reads the "settings.xml" file in the .m2 location and, if it is not found there, looks for the "settings.xml" file in the MAVEN_HOME/conf location.  

Getting Maven to call your own Artefact Repository

Use this as a template and customise where necessary (you only need to change the deployment username & password and the host & port for this template to work):
 <?xml version="1.0" encoding="UTF-8"?>  
 <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"  
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
      xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">  
  <pluginGroups>  
  </pluginGroups>  
  <proxies>  
  </proxies>  
  <!-- This is used when deploying or publishing to the Artefact Repository.  
     The user must exist and must have deploy rights -->  
  <servers>  
    <server>  
      <id>snapshots</id>  
      <username>deployment</username>  
      <password>password</password>  
    </server>  
    <server>  
      <id>releases</id>  
      <username>deployment</username>  
      <password>password</password>  
    </server>  
    <server>  
      <id>milestones</id>  
      <username>deployment</username>  
      <password>password</password>  
    </server>  
    <server>  
      <id>thirdparty</id>  
      <username>deployment</username>  
      <password>password</password>  
    </server>  
  </servers>  
  <mirrors>  
    <mirror>  
      <id>public</id>  
      <mirrorOf>*</mirrorOf>  
      <name>Public Repositories</name>  
      <url>http://host:port/nexus/content/groups/public</url>  
    </mirror>  
  </mirrors>  
  <profiles>  
    <profile>  
      <id>custom-repository</id>  
      <repositories>  
        <repository>  
          <id>custom-repository-group</id>  
          <name>Custom Maven Repository Group</name>  
          <url>http://host:port/nexus/content/groups/public</url>  
          <layout>default</layout>  
          <releases>  
            <enabled>true</enabled>  
            <updatePolicy>always</updatePolicy>  
          </releases>  
          <snapshots>  
            <enabled>true</enabled>  
            <updatePolicy>always</updatePolicy>  
          </snapshots>  
        </repository>  
      </repositories>  
      <pluginRepositories>  
        <pluginRepository>  
          <id>custom-repository-plugin-group</id>  
          <name>Custom Maven Repository Group</name>  
          <url>http://host:port/nexus/content/groups/public</url>  
          <layout>default</layout>  
          <releases>  
            <enabled>true</enabled>  
            <updatePolicy>always</updatePolicy>  
          </releases>  
          <snapshots>  
            <enabled>true</enabled>  
            <updatePolicy>always</updatePolicy>  
          </snapshots>  
        </pluginRepository>  
        <pluginRepository>  
          <id>public</id>  
          <url>http://host:port/nexus/content/groups/public</url>  
          <snapshots>  
            <enabled>true</enabled>  
          </snapshots>  
          <releases>  
            <enabled>true</enabled>  
            <updatePolicy>always</updatePolicy>  
          </releases>  
        </pluginRepository>  
      </pluginRepositories>  
    </profile>  
  </profiles>  
  <activeProfiles>  
    <activeProfile>custom-repository</activeProfile>  
  </activeProfiles>  
 </settings>  

If you require further clarity, the Official Maven Site has more information.

Once you have all this in place, you have modified the process mentioned above to the following:
Step 1 - Look for the dependency in the local cache; if found, complete the build process, otherwise go to Step 2.
Step 2 - Look for the dependency in the Custom Maven Repository; if found, download it to the local cache, otherwise go to Step 3.
Step 3 - Look for the dependency in the referenced remote repository or repositories; if found, download it to the local cache, otherwise Maven, as expected, stops processing and throws an error (Unable to find dependency).
This looks almost the same, barring the fact that we are now referencing our own Artefact Repository instead of the Maven Central Artefact Repository. 

What if you wish to download an Artefact that is not referenced by the Maven Central Repository or your own Repository?

Consider these scenarios:
  • Your organisation is huge and you require a library from another section or department that hosts its own Artefact Repository.
  • You require a library from a Maven Artefact Repository hosted online.
Just add it as another Proxy Repository on Nexus:

What are the advantages of Hosting your own Artefact Repository?

  • Saving Bandwidth (an artefact is only downloaded from the internet once; when requested again, the call does not go over the internet)
  • Control over libraries used
  • Building a standard for all Developers
  • Storing Organisation specific Milestones and Releases

Why do I need to configure a Deployment user on Nexus?

This is to store your own custom Organisation specific Milestones and Releases (libraries). You can upload these custom libraries in the following ways:
  1. Using the Maven Release plugin, when creating a Release or Milestone. Check out this tutorial 
  2. Upload the Artefact to Nexus manually via the front end


I strongly believe that Maven and an Artefact Repository go together like toothpaste & a toothbrush. You can use them separately, but to get the most benefit you have to combine them, even if you are flying solo.

If you are adding dependency libraries to your project in any way other than specifying them in your POM file, you are using Maven in the wrong way!

As always, I would love to hear some of your comments or questions. I can also provide my services if you require.


Tuesday, November 5, 2013

Creating a Release using the Maven Release plugin

What is a Release

I spent some time tutoring a junior on configuring the Maven Release plugin to create Milestone and Release artefacts. We were not making much headway up until I took a few steps back and discussed what exactly a Release process is.

Here is a simple Release process. I am pretty sure that if you understand this basic process, you will be able to extend and modify this process for your organisation's specific requirements:
  1. The Latest set of code exists in a Source Code repository, has been tested successfully and is ready to be promoted.
  2. We need to Tag the repository so that we have a way of knowing that something special happened at this point in time. A tag is simply a Snapshot of the code at a particular point in time. This tag gives us a reference point so that we can compare code before and after the release.
  3. We build or create our deployment artefact(s).
  4. We store our artefact(s) in a secure location so that it can be retrieved at a later date for rollback or auditing purposes.

What do I want the Maven Release Plugin to do for me

The Maven Release Plugin is by no means a One-Trick-Pony. It has the ability to Build Code, Tag Code, Change Version Numbers, Branch Code, Rollback what it has done and Stage artefacts. If you really want to know what it does, please visit the plugin's official documentation.

The cool part of this Plugin is that you do not have to use all of these features. You can choose what suits your release process and use only that.

I want the Maven Release to automate this process:
  • Build the Code
  • Tag the code
  • Change the code Version from a Snapshot to a Milestone or Release version
  • Build an Artefact
  • Change the code Version from a Milestone or Release version back to a Snapshot version
  • Deploy the Artefact to a Maven Repository

The 2 Parts to configure the Maven Release Plugin

There are two parts to focus on in order to get the Maven Release Plugin to achieve the goals mentioned above:
  1. Configuration in the POM file
  2. How to Run the Plugin

Configuration in the POM file

Make sure that every POM file in the Project has a Version, Artefact ID & Group ID Tag.

The Parent or Main POM file needs to have these 3 settings configured:

1.) The SCM tag must be configured.

The syntax changes slightly between Source Code repositories. Fill it out with caution and consult the Maven user guide, as this can throw a very cryptic error. This section is used by the plugin to connect to and Tag the Code in the Repository.

2.) The Distribution Management Tag Must be configured.

The Plugin deploys the generated artefact to the Artefact Repository configured in this Tag. 

3.) Add the Maven Release Plugin to the POM.
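Putting the three settings together, here is a minimal sketch of the relevant parent POM sections. The repository URLs, server ids, SCM connection strings and plugin version are placeholder assumptions (this sketch assumes a Git repository and a Nexus instance), not values from the original post:

```xml
<!-- 1.) SCM tag: the connection syntax differs per repository type (Git assumed here) -->
<scm>
  <connection>scm:git:http://host/path/to/repo.git</connection>
  <developerConnection>scm:git:http://host/path/to/repo.git</developerConnection>
  <url>http://host/path/to/repo</url>
</scm>

<!-- 2.) Distribution Management: where the plugin deploys the generated artefacts.
     The ids must match <server> entries in settings.xml -->
<distributionManagement>
  <repository>
    <id>releases</id>
    <url>http://host:port/nexus/content/repositories/releases</url>
  </repository>
  <snapshotRepository>
    <id>snapshots</id>
    <url>http://host:port/nexus/content/repositories/snapshots</url>
  </snapshotRepository>
</distributionManagement>

<!-- 3.) The Maven Release Plugin itself -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-release-plugin</artifactId>
      <version>2.5.3</version>
    </plugin>
  </plugins>
</build>
```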

How to Run the Plugin

 mvn -Dtag=<Release_Tag>  
 -Dproject.rel.<POM_1_Group_id>:<POM_1_Artifact_id>=<Release_version> -Dproject.dev.<POM_1_Group_id>:<POM_1_Artifact_id>=<NextDevelopmentVersion>  
 -Dproject.rel.<POM_2_Group_id>:<POM_2_Artifact_id>=<Release_version> -Dproject.dev.<POM_2_Group_id>:<POM_2_Artifact_id>=<NextDevelopmentVersion>  
  .  
  .  
  .  
 -Dproject.rel.<POM_n_Group_id>:<POM_n_Artifact_id>=<Release_version> -Dproject.dev.<POM_n_Group_id>:<POM_n_Artifact_id>=<NextDevelopmentVersion>  
 release:clean  
 release:prepare  
 release:perform  
(The Script to Execute this Plugin should be run on a single line. I am only adding line breaks for illustrative purposes.)
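As a concrete illustration, here is the same invocation filled in for a hypothetical two-module project. The coordinates, tag and version numbers are made up for the example, and the backslash line breaks are again only for readability:

```
mvn -Dtag=myapp-1.0.0 \
    -Dproject.rel.com.example:app=1.0.0 -Dproject.dev.com.example:app=1.0.1-SNAPSHOT \
    -Dproject.rel.com.example:core=1.0.0 -Dproject.dev.com.example:core=1.0.1-SNAPSHOT \
    --batch-mode release:clean release:prepare release:perform
```

Each module gets its release version (1.0.0) and the next development version (1.0.1-SNAPSHOT) up front, so the plugin never has to prompt for them.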

Let us try and understand each line:
 mvn -Dtag=<Release_Tag>  
The general way to execute Maven, together with the message with which we want to Tag the Code Repository.
 -Dproject.rel.<Group_id>:<Artifact_id>=<Release_version> -Dproject.dev.<Group_id>:<Artifact_id>=<NextDevelopmentVersion>  
This needs to be configured for each POM file in the project. In essence, each Module is Built with the relevant Release version, and the next Development version is set and committed to the Repository. The output artefact of each module is deployed to the Artefact Repository.
 release:clean  
This plugin creates temporary files during its execution. The temporary files keep the previous state or version of the POM files, which is used if or when a Rollback is executed. The clean deletes these files.
 release:prepare  
This is the heart of the Plugin. It is responsible for changing the versions, tagging the code and committing to the Code Repository.
 release:perform  
This deploys the generated Artefact to the Artefact Repository.

Suggestions and Hurdles to Look out for

  • The errors or exceptions thrown by the plugin are very cryptic and often misleading.
  • Use the Maven -X option at the end of the script to run Maven in verbose mode.
  • First try and get the script working locally in interactive mode before automating or deploying the job to Jenkins or some Build server.
  • Omit the release:perform step, up until you get the script to run up to and including the release:prepare step.
  • When working with a GIT repository, the plugin will commit and push the code.
  • Use the --batch-mode option to run in non-interactive mode.
  • Create a profile in the POM and use variables rather than hard-coding the distribution management tag. In this way you can use the Plugin to create Milestones and Releases by merely changing the profile, with each profile referencing a different location in the Artefact Repository.
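To illustrate the last suggestion, one way (a sketch under assumed Nexus repository paths, not the only way) is to have the distribution management tag reference properties that each profile overrides:

```xml
<!-- Sketch: the repository id and URL come from properties set per profile -->
<distributionManagement>
  <repository>
    <id>${release.repo.id}</id>
    <url>${release.repo.url}</url>
  </repository>
</distributionManagement>

<profiles>
  <profile>
    <id>milestone</id>
    <properties>
      <release.repo.id>milestones</release.repo.id>
      <release.repo.url>http://host:port/nexus/content/repositories/milestones</release.repo.url>
    </properties>
  </profile>
  <profile>
    <id>release</id>
    <properties>
      <release.repo.id>releases</release.repo.id>
      <release.repo.url>http://host:port/nexus/content/repositories/releases</release.repo.url>
    </properties>
  </profile>
</profiles>
```

Running the release script with -P milestone would then target the milestones repository; the server ids still need matching credentials in settings.xml.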



The Maven Release plugin is often seen as a pain and very difficult to use, but if you do manage to get it working, I guarantee that you will save yourself hours. Follow my instructions and tips, and please contact me if you feel that I have missed something important. I will gladly add it in.

As always, please send through your comments, suggestions or questions and I will try to address them. I can also assist you if you require technical assistance from my side in developing scripts for your organisation.