Today we have a guest post from Alexandra Schladebeck, product manager for Jubula, an Eclipse open source project for cross-platform, functional test automation. Alex and her team use JIRA every day to manage their work, and got so excited about it, they couldn’t resist writing an integration.

Why do we test? Without getting too philosophical about it, I like using this definition: “We test to gain information so that we can make decisions”[1]. That fits especially well with agile processes, because feedback and adjustments based on that feedback are critical.

Jubula and communication

Our aim with Jubula, a tool for developing automated UI tests, has always been to get the whole team thinking – and talking – about testing. That guiding purpose is what makes Jubula different from other GUI testing tools:

  • We don’t use capture-replay, because that would rule out any early acceptance testing you could do and delay the point at which you start thinking and talking about tests in earnest. With Jubula, you can start writing tests straight from requirements, mockups, or stories, and start asking the important questions and finding problems before responding to them becomes onerous. (Also, capture-replay smacks of testing against the implementation instead of the requirements. But I digress.)
  • Jubula tests aren’t written in code, so everyone on the product team can be involved in test automation. Anyone with a good “tester” perspective can (and should) automate tests. The tests themselves are easily readable, so they can be used as a basis for communication with stakeholders.
  • We put a lot of focus on structure and reusability because tests need to accompany the software throughout any and all changes. The information from the tests is so important that we can’t afford to lose it because a test has become too difficult to maintain.
  • It’s free!

Jubula and JIRA 

We’re very excited about the recent integration we’ve done between Jubula and JIRA, and we hope you will be, too. There are two ways in which putting these powerful tools together can help a team to improve (not replace!) their communication.

[Screenshot: a failed test in Jubula]

  1. Keeping testers in the loop – If testers can see, directly from within Jubula, what changes are being made (comments on JIRA issues, commits for features), then it’s more likely that someone will pop over and ask the important question: What does that mean for the build and the tests?
  2. Making test status visible – If test status and results from Jubula are made visible quickly to everyone, directly in the JIRA issues they already use, two things happen. We can reduce the time required to make test results known, and we have a strong marker on any features with failing tests: They are not done (and here’s the proof).

Keeping testers in the loop

[Screenshot: JIRA queries shown in the Jubula ITE]

If you can see something, you can react to it. For testers working with the standalone version of Jubula, we’ve added the JIRA connector so that they can see issues from JIRA filters in the ITE (Integrated Test Environment). If something changes (a new comment, a new issue), a pop-up appears. This shouldn’t replace face-to-face conversation; rather, it sets the ball rolling by keeping the whole team updated on important changes throughout the day, in real time.
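Under the hood, the connector is built on the Eclipse Mylyn API, but the idea is easy to picture in terms of plain JIRA REST calls. The sketch below is purely illustrative and is not Jubula’s actual implementation; the host, credentials and JQL filter are placeholders. It simply asks JIRA for recently updated issues matching a filter – the kind of information the ITE surfaces as a pop-up.

// Illustrative sketch only: polling a JIRA filter for recent changes via the
// standard JIRA REST search endpoint. Host, credentials and JQL are placeholders.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JiraFilterPoller {
    public static void main(String[] args) throws Exception {
        String jira = "https://jira.example.com";          // placeholder host
        String jql  = "project = JUB AND updated >= -15m"; // placeholder filter
        String auth = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

        // GET /rest/api/2/search returns the issues matching the JQL filter as JSON.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jira + "/rest/api/2/search?jql="
                        + URLEncoder.encode(jql, StandardCharsets.UTF_8)))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // In the ITE this result would be parsed and shown as a pop-up;
        // here we just print the raw JSON.
        System.out.println(response.body());
    }
}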

Making test status quickly visible

The status of a test run directly impacts our understanding of a feature or release’s readiness. The quicker and more visibly those results are added to the JIRA issue relevant for a given test, the better. Instead of the tester being the only one with a direct line of sight on these results, and having to update another tool manually to share the information, we’ve now added the ability to push test results directly to the relevant JIRA issues. Here’s how it works:

  1. Configure your workspace to use JIRA as a repository.
  2. Configure your project settings to say whether you want to report on pass, on fail, on both, or not at all.
  3. Specify where the web dashboard will be running so that full test results can be seen in the browser by anyone who wants to.
  4. Link parts of your tests to issues in JIRA using their issue key. You can link any Test Case or Test Suite to an issue, but it’s a good idea to think about the granularity of your reporting – every link will create a test result comment, so take care not to spam yourself!
  5. With the link in place, you can open the issue directly via the context menu, and the automatic reporting kicks in after a test run (see the sketch after this list for what that push amounts to). At the moment, reporting only works for test runs started manually via the Jubula GUI (i.e. not for headless runs via the command line or a continuous integration tool), but changing that is high on our backlog. Once a test has run, the status of each node in the test is added as a comment on the linked JIRA issue, together with a link to the full test report for that node.
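To make step 5 concrete, here is a minimal sketch of what pushing a result comment to a linked issue amounts to against the standard JIRA REST API. It is not Jubula’s actual code (which runs through the Mylyn connector); the issue key, dashboard URL and credentials are placeholder values for illustration only.

// Illustrative sketch only: adding a test result comment to a linked JIRA issue
// via the standard comment endpoint. Issue key, dashboard URL and credentials
// are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TestResultCommenter {
    public static void main(String[] args) throws Exception {
        String jira      = "https://jira.example.com";               // placeholder host
        String issueKey  = "JUB-123";                                // placeholder issue key
        String status    = "FAILED";                                 // result of the linked test node
        String dashboard = "http://dashboard.example.com/result/42"; // placeholder link to the full report

        String auth = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
        String body = "{\"body\": \"Test node result: " + status
                + " - full report: " + dashboard + "\"}";

        // POST /rest/api/2/issue/{key}/comment adds a comment to the issue.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jira + "/rest/api/2/issue/" + issueKey + "/comment"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("JIRA responded with HTTP " + response.statusCode());
    }
}

In practice this happens once per linked node after the test run, which is exactly why step 4 asks you to think about the granularity of your links.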

[Screenshot: a Jubula test result comment on a JIRA issue]

So after running a test, all the issues addressed by that test will have a new comment, based on the rules you set up for the project (comment only on pass, only on fail, etc.).

The team isn’t freed from analyzing the results, nor from reacting to them. Indeed, the team is prompted to analyze and respond because the results are clearly visible for everyone.

Next directions

We’re pleased that we could use the Eclipse Mylyn API to build the support we have so far. Our next steps are to improve the granularity of the reporting to make it easier to decide what to report and when. We also want to add support for reporting after a headless test run. And we’re looking forward to the feedback we get from the community and the directions it takes us in. Update your installation of Jubula to get the JIRA connector, or try it for the first time. Remember, it’s free!

Get Jubula

[1] http://www.developsense.com/blog/2010/09/why-exploratory-isnt-it-all-just-testing/