This is a guest post from Alex Van Boxel, Software Engineer at Alcatel-Lucent Antwerp. His pet interests within software are keeping quality high, builds running smoothly, and engineers productive. He believes having the complete Atlassian tool chain certainly helps.

 

Finally we got it working… We wanted to know our total test coverage on our product after all the effort we put into adding new tests over the last year. It wasn’t easy, because once you leave the path of simple unit testing you enter the terrain of multi-server setups. With multiple servers it’s no longer easy to collect all your metrics and get a decent report. But we pulled it off, so here’s our story.

We’re using the complete Atlassian tool chain, so this story involves their tools: Clover (the code coverage tool) and Bamboo (the continuous integration server), plus Maven as our build tool. Let’s start with Clover. Clover is quite a clever coverage tool. It integrates into your build tool to create a special version of your product: it changes the source by injecting instrumentation code, and at the same time builds a database of all the methods that are instrumented. With the special build of your product and the database, it collects metrics while your tests are running. So with a single Maven command line we had a complete coverage report of our unit tests.
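For the unit-test-only case it really is a one-liner. A sketch, assuming the maven-clover2-plugin is declared in your pom:

```
mvn clean clover2:setup test clover2:clover
```

clover2:setup instruments the sources and builds the database, the tests then run against the instrumented classes, and clover2:clover renders the coverage report.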

But the tight integration with our build tool comes at a price: not knowing what magic is going on behind the scenes. And you need to know how everything fits together whenever you want to get more out of your tools. That’s what we wanted, so let’s start at the beginning and build our instrumented server.

Building the instrumented server

As I said, the Clover plugin for Maven does a lot of magic, and after a normal Clover Maven build you get a nice report of the code coverage of your unit tests. Getting this into Bamboo (our integration server) is easy: start a source code checkout, do the Clover Maven build as you would on the command line, and publish the report as an artifact. Set this up before you go any further. It will be the basis for our next phase and already provides some useful insights. Then it’s time to think about what we need to get metrics from our server running on different machines. Let’s have a look at the components.

Our instrumented product. That’s the easy part: the instrumented server will be located at the usual place in your Maven project after a build. Just make sure you don’t deploy this special version to your local or remote repository, so limit yourself to the package phase (which still runs the unit tests).

This special build needs clover.jar to be present on the application server. On our JBoss server it’s enough for it to be present in the lib folder. Make sure to look at the server logs when you deploy your ears: if you find Clover-related NoSuchMethod exceptions, the jar is in the wrong place.
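As a sketch of that setup (paths and jar version are illustrative; match the jar to the Clover version in your pom):

```
# Put the Clover runtime on the app server's classpath (example JBoss layout)
cp clover-3.1.0.jar /opt/jboss/server/default/lib/

# After deploying the instrumented ears, watch for misplacement symptoms
grep -i "NoSuchMethod" /opt/jboss/server/default/log/server.log
```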
The last thing we need to get onto the server is clover.db; without this database you will not get any metrics.

The final component to worry about is how to collect the instrumentation metrics in a consistent manner, so we can easily get them from the remote server back to our Bamboo agent for later processing.

Looking at the list, the biggest problem is the clover.db: by default each Maven module within a Maven project has its own database, which would be a pain to distribute to a different server. Luckily we can force Clover to build a single database by providing a path to the database. This had the strange side effect that the report was generated in an unstable location (actually the last module in the Maven reactor), but you can force the report path as well. Here is an example of the modification in our parent pom:

<properties>
  <clover.version>3.1.0</clover.version>
  <clover.licLocation>/opt/clover.license</clover.licLocation>
  <clover.databasePath>/opt/work/clover/example</clover.databasePath>
  <clover.reportPath>${clover.databasePath}</clover.reportPath>
  <clover.enabled>false</clover.enabled>
</properties>
...
<build>
  <plugins>
    <plugin>
      <groupId>com.atlassian.maven.plugins</groupId>
      <artifactId>maven-clover2-plugin</artifactId>
      <version>${clover.version}</version>
      <configuration>
        <licenseLocation>${clover.licLocation}</licenseLocation>
        <cloverDatabase>${clover.databasePath}/db/clover.db</cloverDatabase>
        <historyDir>${clover.databasePath}/history</historyDir>
        <includesTestSourceRoots>false</includesTestSourceRoots>
        <excludes>
          <exclude>com/example/**/*</exclude>
        </excludes>
      </configuration>
    </plugin>
  </plugins>
</build>
...
<reporting>
  <plugins>
    <plugin>
      <groupId>com.atlassian.maven.plugins</groupId>
      <artifactId>maven-clover2-plugin</artifactId>
      <configuration>
        <licenseLocation>${clover.licLocation}</licenseLocation>
        <cloverDatabase>${clover.databasePath}/db/clover.db</cloverDatabase>
        <historyDir>${clover.databasePath}/history</historyDir>
        <outputDirectory>${clover.reportPath}/clover-report</outputDirectory>
        <excludes>
          <exclude>com/example/**/*</exclude>
        </excludes>
      </configuration>
    </plugin>
  </plugins>
</reporting>

 

The most important thing to note here is the database path. This is a shared location we created that is available on each Bamboo agent. The user the agents run as needs access to that location, so don’t forget to set the correct access rights. This location is outside the normal Bamboo working directory because we’re going to manage it ourselves, and it’s this working location that we’re going to replicate on our test servers as well.

To create the first job of our master plan, we started with our normal Maven Clover build. The build, with our adaptations, already produces the database, the instrumented server and the unit test metrics. Now we only need to add scripts to manage our central location. A cool feature of Bamboo is its task infrastructure: a job can comprise different tasks, and while jobs can run in parallel, tasks run sequentially within a job. A lot of task types are provided, or you can write your own. One of the tasks used here is the inline script task. It’s very useful for prototyping your CI build, and you can still decide later to put the script in your source repository, since Bamboo can do multiple source checkouts within one job and the checkout is modeled as a task. Our script that runs before the Clover Maven build prepares our shared location, basically cleaning up leftovers and recreating the structure we need.

rm -rf /opt/work/clover/example/db
mkdir -p /opt/work/clover/example/db
rm -f clover-db
ln -s /opt/work/clover/example/db clover-db

The next task in our “Build Instrumented Server” job runs Maven. Always start by cleaning everything, followed by clover2:setup, which builds the database and modifies the source code. Then do a package, which builds the server, runs the unit tests and outputs the metrics. Finally, clover2:clover generates the report for the unit test part.

clean clover2:setup package clover2:clover -Dclover.reportPath=${bamboo.build.working.directory}

A nice side effect of having the single Clover database is that all the metrics are saved at that location as well, which makes it easy to create the post-execute script. The script is another inline Bamboo task that removes the database and tars the data into a single file.

# Remove old data
rm -f clover-instr.tar.gz
rm -rf clover-instr
# Copy from symbolic link, and delete clover.db
cp -RL clover-db clover-instr
rm -f clover-instr/clover.db
# Archive and compress
tar cvzf clover-instr.tar.gz clover-instr/
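To see what the pack step produces, here is a local dry run in a scratch directory. The recording file names are invented for the example; real Clover recordings get unique generated names per run:

```shell
#!/bin/sh
# Dry run of the pack step: recordings are archived, the registry is not.
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir clover-db
touch clover-db/clover.db          # the database registry itself
touch clover-db/clover.db1a2b_001  # fake per-run coverage recordings
touch clover-db/clover.db3c4d_002
cp -RL clover-db clover-instr      # copy, dereferencing any symlink
rm -f clover-instr/clover.db       # ship only the recordings
tar czf clover-instr.tar.gz clover-instr/
tar tzf clover-instr.tar.gz        # list the archive contents
```

The listing should show only the two recording files; the clover.db registry travels separately, as its own published artifact.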

Finally we publish the clover database, instrumented ears and the metrics as a shared artifact that will be used in a later stage.

Running the integration tests

Stage two of our plan is running the different test suites. The different suites are jobs grouped together in a single stage, which makes it possible to run the tests concurrently on different agents. Building the job for the integration tests can get a bit tricky: not only are there two machines that come into play, we now need to use the instrumented server and the Clover database as well.

As the clover.db and the instrumented server were published in the previous stage, they are available to all following stages. We only need to specify the artifacts we are interested in and the location where we’d like to have them. Once we have our artifacts we can use them in our tasks. Here is the list of tasks we have in one of our jobs:

  1. Checkout the server. Needed for generating the scoped report after the run.
  2. Checkout the test suite, in a sub-directory so it doesn’t conflict with the server.
  3. Inline script to cleanup old data, create a mirror of our working directory on the remote server, install the instrumented server and the clover.db on the remote server.
  4. Maven build that runs the integration tests.
  5. Inline script to get the metrics back from our remote server to our agents.
  6. Maven build that uses the metrics and our server source code checkout to generate a report (scoped to these tests).
  7. Script to remove the clover.db from our working directory and pack the metrics data in a tar (just like in our server build).

Quite a number of tasks. Let’s show one of them in detail. The inline script shown here is the one that prepares everything and sends it all to the remote server.

# Remove all old Clover data
rm -rf /opt/work/clover/example/db
mkdir -p /opt/work/clover/example/db

# Recreate symbolic links
rm -f clover-db
ln -s /opt/work/clover/example/db clover-db

# Copy clover database from the shared artifact
cp ext/clover-db/* clover-db

# Prepare and send instrumented server
rm -rf deploy
mkdir -p deploy
find ./ext/server -iname 'example*.ear' -exec cp \{\} deploy \;
tar c deploy | ssh -i /opt/secure/qa-fat.private root@qa-fat tar x -C /opt/example/server/default/

# copy clover db
ssh -i /opt/secure/qa-fat.private root@qa-fat rm -rf /opt/work/clover/example/db
ssh -i /opt/secure/qa-fat.private root@qa-fat mkdir -p /opt/work/clover/example/db
scp -i /opt/secure/qa-fat.private /opt/work/clover/example/db/clover.db root@qa-fat:/opt/work/clover/example/db/
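The tar-over-ssh pipe in the script above is worth calling out: it streams a whole directory tree in one go, preserving structure on the far side. The same pattern, illustrated locally between two scratch directories (no ssh involved; names are made up):

```shell
#!/bin/sh
# Local illustration of the tar-pipe copy used in the deploy step
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/deploy"
echo "fake ear" > "$src/deploy/example-app.ear"
# 'tar cf -' writes the tree to stdout; 'tar xf - -C' recreates it elsewhere
( cd "$src" && tar cf - deploy ) | tar xf - -C "$dst"
ls "$dst/deploy"
```

In the real script the extracting tar simply runs on the far side of the ssh connection.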

The rest of the scripts are trivial: just collect the data from the remote server and continue. Don’t forget to set up your remote server by adding clover.jar to your app server’s libraries and adding a VM parameter to tell Clover where to find the database: -Dclover.initstring.basedir=/opt/work/clover/example/db. The other test suites have similar setups, so we’re thinking about adding the scripts to a small git repo so they can be reused in different jobs with an extra checkout.
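On JBoss that VM parameter can go in the server’s startup configuration; a sketch, assuming a stock run.conf (the path is illustrative):

```
# /opt/jboss/bin/run.conf
JAVA_OPTS="$JAVA_OPTS -Dclover.initstring.basedir=/opt/work/clover/example/db"
```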

Creating the aggregated report

The final stage is not that difficult. In this stage we want to gather all the information collected throughout the previous stages. Again we use the artifacts shared by the previous stages, put them in the desired place, and run a script to unpack them alongside the Clover database. Because the metric data file names are always unique, they will not clash with the files produced on the other machines. So we only need to put them in one place and start the Maven Clover report build again, but now on all the metrics (unit, api and ui tests). Here’s an example of what the unpack script could look like:

# Remove all old Clover data
rm -rf /opt/work/clover/example/db
mkdir -p /opt/work/clover/example/db

# Recreate symbolic links
rm -f clover-db
rm -f clover-instr
ln -s /opt/work/clover/example/db clover-db
ln -s /opt/work/clover/example/db clover-instr

# Unpack instrumentation data
cp ext/clover-db/* clover-db
tar xvzf ext/instr/unit/clover-instr.tar.gz
tar xvzf ext/instr/api/clover-instr.tar.gz
tar xvzf ext/instr/ui/clover-instr.tar.gz
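With the recordings from all three suites unpacked next to the database, the report job is just the reporting goal again; a sketch along the lines of the earlier build:

```
mvn clover2:clover -Dclover.reportPath=${bamboo.build.working.directory}
```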

Conclusion

Creating this plan took a while, but I think the information you can collect from the code coverage is worth the cost. With this information we can adapt our test plan to write extra tests for code that isn’t covered but that we thought was. A special thanks to Francois and Bert for dedicating some of their time to make this plan possible.

…And an extra-special thanks to Alex for sharing all this hard-earned knowledge!  Got a how-to blog post involving Atlassian dev tools that you’d like to share?  Drop a comment here or hit us up on Twitter at @AtlDevTools.  It’s all about community, baby!


About Sarah Goff-Dupont

