
Bringing together Docker, Grunt, Maven, EmberJS & MongoDB

Abstract : Will it work? ... How can I be sure? ... Am I forgetting something? ... Questions like these pile up slowly and ruin our confidence once we cannot clearly answer them. For reasons like these we write tests - to be confident, to be certain, to sleep better. Yet, writing tests is one part of the problem. Getting them executed in an environment close to the production one is another ... and so, more questions pile up: How do we keep a snapshot of our production environment for testing purposes? What if our app needs to run in different environments? Can we keep multiple virtual environment snapshots? How many? Can we have test parallelization? Is the sandboxing guaranteed? And so on, and so on ...

In this post, we are going to take a look at orchestrating Maven, Grunt & Docker to provide the basis for setting up integration tests.

Goal :
Prepare your development environment (using Maven, Grunt, and Docker) for integration tests.

Code :
To reach the goal, we will work with a small project containing a frontend based on EmberJS (via Yeoman) and a backend based on Java (Jetty, Jersey, Guice, and Jongo). For storage we will use a MongoDB database.

Here is what the project structure looks like:

src
 `- main
     `- java
      - webapp
         `- ...
          - dist
          - ...
 `- test
     `- java
      - resources
         `- docker
             `- Dockerfile.template
              - docker_rm.sh.template
              - entrypoint.sh.template
pom.xml

The frontend part is embedded within the Maven backend (that's fine for a showcase, but for large projects the two should be kept apart).

In this otherwise standard Maven structure, the src/main/webapp/dist folder holds the final frontend code generated by Grunt. The src/test/resources/docker folder holds some reusable templates used for bridging Maven & Docker.

As for the business logic of our app, let's keep it simple and design a read-only library. We are going to have a MongoDB database called library and inside it a single books collection with pre-loaded documents. Our app will display book titles in a web browser. Nothing exciting but complex enough for orchestration.

The basic flow of data will be:

  1. The browser sends an AJAX request from the EmberJS-based interface to the backend.
  2. The backend then makes a request to the MongoDB database. The backend is built with Guice (a dependency injection framework), an embedded Jetty server to process HTTP requests, and Jongo for bridging the backend logic and MongoDB (see the sketch right after this list).
  3. In the end, the response from the database is presented in the browser.
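
To make the backend side of this flow concrete, here is a minimal sketch of what such a read-only endpoint could look like. All class and field names here are hypothetical (the real code is in the GitHub repo), and it assumes a Guice binding that provides a configured Jongo instance:

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.jongo.Jongo;
import org.jongo.MongoCollection;

// Hypothetical Jersey resource reading book titles via Jongo.
@Path("/books")
public class BooksResource {

    private final MongoCollection books;

    @Inject
    public BooksResource(Jongo jongo) {
        // "books" is the pre-loaded collection inside the "library" database.
        this.books = jongo.getCollection("books");
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Iterable<Book> allBooks() {
        // Jongo maps each MongoDB document to a Book instance.
        return books.find().as(Book.class);
    }

    // Minimal POJO for the documents in the "books" collection.
    public static class Book {
        public String title;
    }
}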

Don't hesitate to check out the code on GitHub.

Docker Basics :

Docker is a tool (a virtualization engine) which allows you to build an environment for your application. The environment is wrapped into a Docker container where you can deploy your app. A Docker container is described by a Dockerfile, which you can reuse as many times as you wish.

At the very least, an environment consists of an operating system (OS). Often, there is also a database installed (for our example - MongoDB) and some preconfigured environment variables (in our case - PORT). In addition, there could also be different configuration files and other things needed by your application (such as a JVM).

Truth be told, you don't need Docker to install an OS, a database, a JVM and a couple of files. What you need Docker for is to construct a virtual environment which can then be reused (for development, staging, production) and easily transferred to your co-workers (by means of a Dockerfile). But wait ... isn't that what my virtual machine (VM) does? Yes, only Docker does it in a smarter way.

Docker uses a Linux kernel virtualization technique called Linux containers (LXC) (don't worry, you can still use Docker on Mac OS or Windows via an ordinary VM). Thanks to this virtualization method, you can have many sandboxed virtual environments. What's quite nice is that these containers share not only common hardware resources but also software resources such as the OS and libraries. In this way you can host many more environments than if you were using VMs in the traditional way. For further details check out the docs: How Do Docker Containers Work? (And How Are They Different From VMs).

The process of building a virtual environment is quite simple. You create a Dockerfile and inside it you describe what comprises your environment. Once you have the Dockerfile you can build a container and then run it, stop it, or delete it. You can also promote a container to a Docker image, which can serve as a basis for other containers and be published on the Docker Index.

Here is what the Dockerfile for our application looks like.

The Dockerfile :

FROM dockerfile/ubuntu

MAINTAINER {name: "Ivan Hristov", email: "hristov[DOT]iv[AT]gmail.com"}

# Install Java.
RUN \
  echo debconf shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
  echo debconf shared/accepted-oracle-license-v1-1 seen true | debconf-set-selections && \
  add-apt-repository -y ppa:webupd8team/java && \
  apt-get update && \
  apt-get install -y oracle-java8-installer

# Install MongoDB.
RUN \
  apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
  echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list && \
  apt-get update && \
  apt-get install -y mongodb-org && \
  mkdir -p /data/db

# Define mountable directories.
VOLUME ["/data"]

# Create app directories
RUN \
  mkdir -p /{app.working.directory}/src/main/webapp && \
  mkdir -p /{app.working.directory}/src/test/resources

# Add the application JAR
ADD target/{app.name}-{app.version}.jar /{app.working.directory}/{app.name}-{app.version}.jar

# Add the application WEBAPP directory
ADD src/main/webapp/dist /{app.working.directory}/src/main/webapp

# Add the test resources
ADD src/test/resources/books_array.json /{app.working.directory}/src/test/resources/books_array.json

# Add the entrypoint script
ADD entrypoint.sh /{app.working.directory}/

# Set port for the application
ENV PORT 8080

# By default start the application
ENTRYPOINT ["/{app.working.directory}/entrypoint.sh"]

# Expose app port
EXPOSE 8080

Let's go through the file. The first line basically states: "Hey, I need Ubuntu!". The second line tells who maintains the file. In the next instruction, we imagine ourselves already being in an Ubuntu terminal and install Java 8. After that we install MongoDB.

The line with VOLUME is kinda tricky. What it does is map the /data folder on your machine to a folder with the same name inside the virtual environment (the Docker container). You don't have to use the VOLUME command. I use it so that I can see the content of the MongoDB database from my local terminal without having to go into the Docker container or connect to the MongoDB daemon running in it. For this to work, the /data folder needs to be present on your hard drive. In case you don't use the VOLUME command, you have to create a /data directory in the container (or specify another folder for MongoDB to place its DB files).

Next, we create some folders for our application. Since the Java backend is shaded (packed) into a single JAR file (thanks to the Maven Shade Plugin), we copy it onto the container. We also copy the frontend dist folder and a file called books_array.json, which is used to populate our database with some books. This happens within the entrypoint.sh bash script, which is specified with the ENTRYPOINT Docker command at the end of the Dockerfile.

Furthermore, our backend needs an environment variable called PORT which tells the embedded Jetty server on which port to run. We use the ENV Docker command to set it. We also expose that port on the container via the EXPOSE Docker command; later we will see how to map the exposed container port to a port on the local machine. Once all this is done, we copy the aforementioned entrypoint.sh bash script and tell Docker (through the ENTRYPOINT command) that this script should be executed when the container runs.
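
The entrypoint.sh itself is generated from src/test/resources/docker/entrypoint.sh.template. The real script is in the repo; here is a plausible minimal sketch of its shape (the {curly} tokens are replaced by Maven, just like in the Dockerfile template):

#!/bin/bash
# Sketch only - see src/test/resources/docker/entrypoint.sh.template for the real script.

# Start the MongoDB daemon in the background.
mongod --fork --logpath /var/log/mongod.log

# Pre-load the "library" database with the test books.
mongoimport --db library --collection books --jsonArray \
  --file /{app.working.directory}/src/test/resources/books_array.json

# Start the backend; the embedded Jetty server reads the PORT environment variable.
java -jar /{app.working.directory}/{app.name}-{app.version}.jar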

Well, that's quite an explanation ... and although it's far from perfect, it should give you a basic understanding of how to describe a Docker container for your application's virtual environment. There is more in Docker's docs.

After the Dockerfile:

As I said, the Dockerfile is a set of instructions for building a Docker container. In order to build the container (strictly speaking, docker build produces an image from which containers are run) you need to execute the docker command with the following syntax:

docker build -t="ihristov/ilibrary" .

Once the container is built you can start it with the command:

docker run -p 8080:8080 -i -t ihristov/ilibrary

Here is what the different options mean:

  • -p 8080:8080 - maps the container's exposed port 8080 to port 8080 on localhost. Note: when using Mac OS, in addition to this option you have to enable port forwarding through the VM. I've used VirtualBox (version 4.3.12 r93733), where you need to select the boot2docker-vm -> press Settings -> choose Network -> click on Port Forwarding -> insert a new rule
  • -i - From the docs: Keep stdin open even if not attached
  • -t - From the docs: Allocate a pseudo-tty

In case you want to run the container in the background you can do:

docker run -p 8080:8080 -d ihristov/ilibrary
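
Whichever way you start it, you can quickly verify that the container is up (the commands below assume the tag and port mapping from above):

docker ps
curl http://localhost:8080/

docker ps should list the running ihristov/ilibrary container, and the curl call should return the frontend page through the mapped port.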

The Maven Way:

Now, what we need is a way to do all of this automatically with a single command. Let's say:

mvn clean verify

In the next few lines I'm going to show you how you can:

  1. Run Grunt to build the frontend
  2. Create the right Dockerfile for your Maven artifact
  3. Build the Docker container
  4. Run the Docker container in the background

For these steps we need two Maven plugins: the Exec Maven Plugin and the Replacer Maven Plugin.
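
The pom.xml snippets below reference several Maven properties. Here is a hedged sketch of how they might be declared - the versions and names are my assumptions, so check the repo for the real values:

<properties>
 <exec-maven-plugin.version>1.3.2</exec-maven-plugin.version>
 <maven-replacer-plugin.version>1.5.3</maven-replacer-plugin.version>
 <docker.tag.name>ihristov/ilibrary</docker.tag.name>
 <docker.container.name>ilibrary</docker.container.name>
 <app.docker.working.directory>library</app.docker.working.directory>
</properties>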


Step 1: Run Grunt to build the frontend

Here is the relevant portion of the pom.xml :

<build>
 <plugins>
  <plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>exec-maven-plugin</artifactId>
   <version>${exec-maven-plugin.version}</version>
    <executions>
     <execution>
      <id>Build front-end via Grunt</id>
      <phase>package</phase>
      <goals>
       <goal>exec</goal>
      </goals>
      <configuration>
       <executable>grunt</executable>
       <workingDirectory>src/main/webapp</workingDirectory>
       <arguments>
        <argument>build</argument>
       </arguments>
      </configuration>
    </execution>
    ...                

What happens here is no magic at all. We change the working directory to src/main/webapp and we execute:

grunt build

Step 2: Create the right Dockerfile for your Maven artifact

The tricky part here is figuring out how to reuse what we do now for subsequent Maven-based projects. One thing surely changes from project to project - the Maven artifact name. Thus we need to parameterize it and then replace it when more contextual information is available (during the Maven package phase, for example). For this, we can use the Replacer Maven Plugin.

Here is the relevant portion of the pom.xml :


<build>
 <plugins>
 ...
<plugin>
 <groupId>com.google.code.maven-replacer-plugin</groupId>
 <artifactId>replacer</artifactId>
 <version>${maven-replacer-plugin.version}</version>
  <executions>
   <execution>
    <id>Prepare Dockerfile</id>
    <phase>package</phase>
    <goals>
     <goal>replace</goal>
    </goals>
    <configuration>
     <file>src/test/resources/docker/Dockerfile.template</file>
     <outputFile>Dockerfile</outputFile>
     <regex>false</regex>
     <replacements>
      <replacement>
       <token>{app.version}</token>
       <value>${project.version}</value>
      </replacement>
      <replacement>
       <token>{app.name}</token>
       <value>${project.artifactId}</value>
      </replacement>
      <replacement>
       <token>{app.working.directory}</token>
       <value>${app.docker.working.directory}</value>
      </replacement>
     </replacements>
    </configuration>
   </execution>
   ...

{app.version}, {app.name}, and {app.working.directory} are placeholders which you can find in src/test/resources/docker/Dockerfile.template.

The end result of executing this plugin is a Dockerfile ready to use.
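
To make this concrete, suppose (hypothetical values) that the artifactId is ilibrary, the version is 1.0-SNAPSHOT, and app.docker.working.directory resolves to library. The templated ADD line from earlier then turns into:

# In Dockerfile.template
ADD target/{app.name}-{app.version}.jar /{app.working.directory}/{app.name}-{app.version}.jar

# In the generated Dockerfile
ADD target/ilibrary-1.0-SNAPSHOT.jar /library/ilibrary-1.0-SNAPSHOT.jar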


Step 3: Build the Docker container

We've already seen the Docker command for building a container out of a Dockerfile. The same command translated for Maven with the help of the Exec Maven Plugin looks like this:

<build>
 <plugins>
  <plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>exec-maven-plugin</artifactId>
   <version>${exec-maven-plugin.version}</version>
    <executions>
     <execution>
      <id>Build Docker container</id>
      <phase>pre-integration-test</phase>
      <goals>
       <goal>exec</goal>
      </goals>
      <configuration>
      <executable>docker</executable>
      <arguments>
       <argument>build</argument>
       <argument>-t=${docker.tag.name}</argument>
       <argument>.</argument>
      </arguments>
     </configuration>
    </execution>
    ...

Step 4: Run the Docker container in the background

Now let's run the container in the background:

<build>
 <plugins>
  <plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>exec-maven-plugin</artifactId>
   <version>${exec-maven-plugin.version}</version>
    <executions>
     <execution>
      <id>Start Docker container</id>
      <phase>pre-integration-test</phase>
      <goals>
       <goal>exec</goal>
      </goals>
      <configuration>
      <executable>docker</executable>
      <arguments>
       <argument>run</argument>
       <argument>-p</argument>
       <argument>8080:8080</argument>
       <argument>-d</argument>
       <argument>--name</argument>
       <argument>${docker.container.name}</argument>
       <argument>${docker.tag.name}</argument>
      </arguments>
     </configuration>
    </execution>
    ...
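
For completeness: the project also ships a docker_rm.sh.template for cleaning up. Here is a hedged sketch of a matching execution, under the same exec-maven-plugin, that would stop and remove the container once the integration tests are done (the id and phase are my choice, not taken from the repo):

<execution>
 <id>Remove Docker container</id>
 <phase>post-integration-test</phase>
 <goals>
  <goal>exec</goal>
 </goals>
 <configuration>
  <executable>docker</executable>
  <arguments>
   <argument>rm</argument>
   <argument>-f</argument>
   <argument>${docker.container.name}</argument>
  </arguments>
 </configuration>
</execution>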

Final words: For those of you who've made it this far, I can only say - congrats! (and I can only hope that you've found what you've been searching for). This is one of the longest articles (both in time taken and in size) that I've ever written. Probably some of you expected to find something along the lines of "That's all folks!" ... but unfortunately that's not all. Getting things done in a reliable way is quite tricky. As you can see, it takes quite some effort to configure Maven and take the different possibilities into account. There is a project started by Alex Collins for developing a Docker Maven Plugin. Yet, orchestrating Maven & Docker (and also Grunt) yourself gives you more flexibility.

On GitHub you can find the whole project should you need it. Bear in mind that there is more to consider, such as: "When executing integration tests, how do we make sure that the container is hot and ready?" That I will leave for another article.

Ivan Hristov

I am a lead software engineer working on software topics related to cloud, machine learning, and site reliability engineering.
