Test Tools don't have to be just for testing
Testers have a plethora of tools used in testing, and not just the big, fancy, expensive tools either, but small and often innovative ones too, such as the "Hoffman Box".

Sometimes, though, testers find innovative ways of using test tools in the 'real' world.

One company I know of had a migration exercise that, for regulatory reasons, could not be performed by importing and exporting data. It looked as if they were faced with an enormous manual operation, in which operators would have to key the data into the new system by hand.

That is, until a test manager suggested using Quick Test Pro to automate the exercise, with scripts simulating the operator and the data entry tasks to be performed.

When the scripts ran, a group of testers was on hand in case of failure, but the team was nowhere near as large as originally planned.

Perhaps there are test tools in your toolbox that users and operators would find helpful?
Simple status dashboard
For the last year or so I've been using a simple status dashboard to coordinate testing for releases. I find an easy way to share the dashboard is a Google spreadsheet, which makes it easy for everyone to make updates and see updates as they happen.

Here are the columns I'm currently tracking:

  • Client

  • Release Number

  • Code Complete Date

  • First Pass Test Complete Date

  • CM Review Date

  • Regression Test Complete Date

  • UAT Date

  • Production Date

  • Development Status

  • Test Status

  • Testing Lead

  • Deployment Ticket Number

  • Scope Summary


It looks like a lot, but it all fits on one screen (no scrolling needed) and each row in the spreadsheet represents a separate release. If a date is in the past, the cell is colored red. If it's today, it's colored yellow. And if it's done, it's colored green. With one quick glance, you get a high-level view of all the releases and their current statuses.
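The date-coloring rule is simple enough to sketch in code. Here's a minimal Python sketch of that logic (the function name, the "none" value for uncolored future dates, and the explicit `done` flag are my own illustrative assumptions, not part of any spreadsheet API):

```python
from datetime import date

def status_color(milestone: date, today: date, done: bool = False) -> str:
    """Map a milestone date to a dashboard cell color."""
    # Completed milestones are green, regardless of their date.
    if done:
        return "green"
    # A date already in the past is red, today's date is yellow,
    # and future dates are left uncolored.
    if milestone < today:
        return "red"
    if milestone == today:
        return "yellow"
    return "none"
```

In a real Google spreadsheet you'd express the same rule with conditional formatting rather than code, but the logic is identical.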
One page test plan - contents
In yesterday's post I outlined when I use a one page test plan. In this post, we look at the contents of a simple test plan.

For most small releases, I have three big concerns:

  • new functionality (what did we build for this release)

  • old functionality (did we break what used to work by introducing new functionality)

  • other quality criteria (as appropriate for the release - performance, security, usability, etc...)


For each of those areas, I'm interested in the following areas of coverage:

  • specific tickets (stories, feature requests, bugs, etc... in the release)

  • general areas of functionality (either business facing or technology facing)


Ideally, for each area or ticket listed for each type of testing, there will be a brief description of the testing taking place along with a link to where I can find the details about the actual tests (a link into a test case management tool, a Google doc, etc...). Something that tells me at a high level what's going on and where I can find the details if I want them.
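Put together, the whole thing might look something like this (the release number, tickets, and tool names here are invented for illustration):

```
Release 4.2 - One Page Test Plan

New functionality
  - TICKET-123 (new export button): exercise each export format;
    tests in test case management tool, suite "Export"
  - TICKET-130 (login error messages): verify copy and error paths;
    checklist in Google doc

Old functionality
  - Regression around the reporting module (touched by TICKET-123);
    standard regression suite
  - Smoke test of login and navigation

Other quality criteria
  - Performance: spot-check export against the largest client data set
  - Security: nothing beyond standard checks this release
```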
One page test plan
For small releases, I sometimes create what I call a One Page Test Plan. Using a One Page Test Plan assumes that the people involved are familiar with the technology, process, etc.... It's for small "boilerplate" releases (like a release to production after a two week sprint). That doesn't mean there's no complexity or risk; it just acknowledges that there are fewer documentation requirements since the team has a rhythm related to releases.

The contents of a One Page Test Plan are direct and straightforward. They assume the team is small and is in constant communication. Tomorrow's post will provide an outline of what's in a One Page Test Plan.
Let your bugs have social networks
One of the things I really like about JIRA is how much linking it allows. (Other tools do this too, but I wanted to namedrop the tool because they do it particularly well.) From a story, I can link related stories, defects, CM tickets, deployment tickets, etc.... Basically whatever ticket type I want. This is great, because over time I've developed some risk heuristics based on the number of links a ticket has:

  • If it has a lot of links to other stories, I likely need to test more around business functionality concerns.

  • If it has a lot of links to other bugs, I likely need to test more around technical functionality concerns.

  • If it has a lot of links to CM tickets, I likely need to test more around deployment and configuration.

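Those heuristics are simple enough to mechanize. A minimal Python sketch, where the link-type names and the threshold for "a lot of links" are illustrative assumptions (not anything JIRA itself provides):

```python
from collections import Counter

def risk_flags(link_types, threshold=3):
    """Given the types of a ticket's linked tickets
    (e.g. ['story', 'bug', 'cm']), return the testing concerns
    suggested by the heuristics above."""
    concern_by_type = {
        "story": "business functionality",
        "bug": "technical functionality",
        "cm": "deployment and configuration",
    }
    counts = Counter(link_types)
    # Flag a concern whenever a recognized link type crosses the threshold.
    return [concern_by_type[t] for t, n in counts.items()
            if t in concern_by_type and n >= threshold]
```

For example, `risk_flags(['story'] * 4 + ['bug'])` flags business functionality: four story links cross the threshold, one bug link does not.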

I've also developed some similar heuristics around estimating how long work will take based on links, how much documentation there will be to review, etc...

JIRA also shows you how many people have participated in a ticket. That is, it tracks who's "touched" it. I have similar heuristics around that. The more people involved, the longer stuff will take, the more likely there was code integration, etc...

What does the social network of your tickets tell you about your testing?