Posts in Test Planning
One page test plan - contents
In yesterday's post I outlined when I use a one page test plan. In this post, we look at the contents of a simple test plan.

For most small releases, I have three big concerns:

  • new functionality (what did we build for this release)

  • old functionality (did we break what used to work by introducing the new functionality)

  • other quality criteria (as appropriate for the release - performance, security, usability, etc...)


For each of those concerns, I'm interested in the following areas of coverage:

  • specific tickets (stories, feature requests, bugs, etc... in the release)

  • general areas of functionality (either business facing or technology facing)


Ideally, for each area or ticket listed for each type of testing, there will be a brief description of the testing taking place, along with a link to where I can find the details about the actual tests (a link into a test case management tool, a Google doc, etc...). Something that tells me at a high level what's going on and where I can find the details if I want them.
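To make that concrete, here's a sketch of what the body of one might look like (the release, tickets, and links are entirely hypothetical):

    Release 3.2 - two week sprint to production

    New functionality
      - PROJ-101 CSV export: exploratory testing around formats and large files (details in the test case management tool)
      - PROJ-107 Login error messages: scripted checks listed in the shared test doc (link)

    Old functionality
      - Reporting and authentication regression: automated suite plus a quick manual pass (link to the suite results)

    Other quality criteria
      - Performance: spot-check the export against a large account (notes in the shared doc)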
One page test plan
For small releases, I sometimes create what I call a One Page Test Plan. Using a One Page Test Plan assumes that the people involved are familiar with the technology, process, etc.... It's for small "boilerplate" releases (like a release to production after a two-week sprint). That doesn't mean there's no complexity or risk; it just acknowledges that there are fewer documentation requirements since the team has a rhythm related to releases.

The contents of a One Page Test Plan are direct and straightforward. They assume the team is small and is in constant communication. Tomorrow's post will provide an outline of what's in a One Page Test Plan.
Let your bugs have social networks
One of the things I really like about JIRA is how much linking it allows. (Other tools do this too, but I wanted to namedrop the tool because they do it particularly well.) From a story, I can link related stories, defects, CM tickets, deployment tickets, etc.... Basically whatever ticket type I want. This is great, because over time I've developed some risk heuristics based on the number of links a ticket has:

  • If it has a lot of links to other stories, I likely need to test more around business functionality concerns.

  • If it has a lot of links to other bugs, I likely need to test more around technical functionality concerns.

  • If it has a lot of links to CM tickets, I likely need to test more around deployment and configuration.


I've also developed some similar heuristics around estimating how long work will take based on links, how much documentation there will be to review, etc...

JIRA also shows you how many people have participated in a ticket. That is, it tracks who's "touched" it. I have similar heuristics around that. The more people involved, the longer stuff will take, the more likely there was code integration, etc...
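If you want to pull those signals out programmatically, here's a minimal sketch assuming JIRA's REST API v2 and Python's requests library (field names can vary by JIRA version and configuration, and the "people" count below is just a rough proxy based on distinct comment authors):

    import requests
    from collections import Counter

    def ticket_profile(base_url, issue_key, auth):
        resp = requests.get(
            f"{base_url}/rest/api/2/issue/{issue_key}",
            params={"fields": "issuelinks,comment"},
            auth=auth,  # e.g. (username, api_token)
        )
        resp.raise_for_status()
        fields = resp.json()["fields"]

        # Count links by the issue type on the other end (Story, Bug, etc.).
        link_types = Counter()
        for link in fields.get("issuelinks", []):
            other = link.get("outwardIssue") or link.get("inwardIssue")
            if other:
                link_types[other["fields"]["issuetype"]["name"]] += 1

        # Rough proxy for "who's touched it": distinct comment authors.
        people = {c["author"]["displayName"]
                  for c in fields.get("comment", {}).get("comments", [])}
        return link_types, len(people)

    # Hypothetical usage:
    # links, people = ticket_profile("https://jira.example.com", "PROJ-123", ("me", "token"))
    # Lots of Story links -> lean toward business-facing tests; lots of Bug links -> technical risk.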

What does the social network of your tickets tell you about your testing?
What's in a smoke test?
When figuring out what to smoke test for a release/build/environment, I run down the following list in my head:

  • What's changed in this build/release?

  • What features/functions are most frequently used?

  • What features/functions are most important or business critical to the client?

  • Is there technical risk related to the deploy, environment, or a third-party service?

  • Are there any automated tests I can leverage for any test ideas I came up with in the previous questions?


Based on my answers to those questions, I come up with my set of tests for the release. If it's a big release (i.e., more features or more visibility), I'll do more smoke tests. It's been my experience that different releases often require different levels of smoke testing.
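For what it's worth, here's one rough way to turn those questions into a ranked list. It's a sketch, not a formula I follow rigidly, and the areas, scores, and weights below are entirely hypothetical:

    # Hypothetical areas and weights; the point is the ranking, not the numbers.
    candidates = [
        # (area, changed in this release?, usage 0-3, business criticality 0-3, automated?)
        ("checkout",      True,  2, 3, False),
        ("login",         False, 3, 3, True),
        ("report export", True,  1, 1, True),
    ]

    def risk(changed, usage, criticality):
        # Changed areas get a flat bump; tune the weights to your context.
        return (3 if changed else 0) + usage + criticality

    for area, changed, usage, crit, automated in sorted(
            candidates, key=lambda c: risk(c[1], c[2], c[3]), reverse=True):
        note = "existing automated check" if automated else "manual smoke test"
        print(f"{area}: risk {risk(changed, usage, crit)} ({note})")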
Calculating velocity
Catherine Powell had a great post yesterday on calculating velocity. From that post:
"So now we know that each QA engineer can do about 2.5 units of estimated work each week. When we go into the next estimation session, that's where we'll draw the line for test work. We estimate just like we always do, and we then will walk down the list committing to 2.5 units of work per week. When we run out of allotted time, we'll stop."

There are a couple of great tips in that post, and the overall focus on developing a method to calculate velocity is well done.
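As a quick illustration of the commitment walk she describes (the items and numbers below are hypothetical): with one QA engineer, a two-week sprint, and 2.5 units per week, you have 5 units of capacity, and you stop as soon as the next item doesn't fit.

    def commit(prioritized_estimates, capacity):
        """Walk the prioritized list top to bottom; stop when capacity runs out."""
        committed = []
        for name, units in prioritized_estimates:
            if units > capacity:
                break
            committed.append(name)
            capacity -= units
        return committed

    # One engineer * 2 weeks * 2.5 units/week = 5 units of test capacity.
    print(commit([("login story", 2), ("report export", 2), ("perf check", 3)],
                 capacity=2.5 * 2))
    # -> ['login story', 'report export']  (the perf check would push past 5 units)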