Posts in Test Planning
Test Strategy and Things You Hadn't Considered
Test strategy typically deals with what you plan to do. If you think of the test problem space as a physical space, as surface area, then a test strategy helps define the map of what you plan to cover.

If you think about it as positive space/negative space, or figure/ground if you prefer, I had always considered the things you plan to do as representing positive space, and the things you plan not to do as representing negative space. But a recent post on Catherine Powell's always-good ABAKAS blog added a new dimension for me: things you hadn't considered. These are the things that don't make it to your map because you didn't think of them.

The takeaway for me was that by documenting what you plan to cover and what you don't, the things you didn't think of become the real negative space. As Powell says, this makes our assumptions (and by extension, the things we missed) visible for validation. This adds value at all phases of a project, from helping avoid problems early on to helping understand strategy decisions during a post-mortem.
Test Design with Mind Maps
Today's tip is two-fold.

The first part is a great example of a rapid test design practice with the XMind mind mapping tool, provided as an experience report by Darren McMillan.


  • Mind mapping

    • Increases creativity

    • Reduces test case creation time

    • Increases visibility of the bigger picture

    • Very flexible to changing requirements

    • Can highlight areas of concern (or be marked for follow-up on any open questions).



  • Grouping conditions into types of testing

    • Generates much better test conditions

    • Provides more coverage

    • Using templates of testing types makes you at least consider that type of testing when writing conditions.

    • When re-run, these often result in new conditions being added and defects found, due to the increased awareness



  • Lean test cases

    • Easy to dump from the map into a test management tool (a rough sketch of the idea follows this list)

    • If available, the folder hierarchy can become your steps

    • Blend in easily with exploratory testing, preventing a script-monkey mentality.



  • Much lower cost to generate and maintain, whilst yielding better results.
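
To make the "dump from the map" idea concrete, here's a minimal sketch in a few lines of Python (mine, not McMillan's actual workflow) of how a mind map's branch hierarchy can flatten into lean test cases ready for import into a test management tool. The map structure and node names below are made up for illustration:

    # Flatten a mind map into lean test cases: each branch path becomes the
    # case's folder, each leaf becomes a one-line test condition.
    # The map content here is hypothetical.
    mind_map = {
        "Login": {
            "Functional": ["Valid credentials", "Invalid password", "Locked account"],
            "Security": ["SQL injection in username", "Password not written to logs"],
        },
        "Checkout": {
            "Functional": ["Single item", "Empty cart"],
        },
    }

    def to_test_cases(node, path=()):
        """Walk the map depth-first; every leaf becomes one lean test case."""
        if isinstance(node, dict):
            for name, child in node.items():
                yield from to_test_cases(child, path + (name,))
        else:  # a list of leaf conditions
            for condition in node:
                yield {"folder": " / ".join(path), "title": condition}

    for case in to_test_cases(mind_map):
        print(f"{case['folder']:<22} | {case['title']}")

The output is a flat list of folder/title pairs, which is about as lean as a test case gets; most test management tools (or a plain spreadsheet) can import something that shape.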



For the second part, I link you back to 2006, to the article "X Marks the Test Case: Using Mind Maps for Software Design" by Rob Sabourin.


  • Mind Maps to Help Define Equivalence Classes (a small sketch follows this list)

    • Identify the variables

    • Identify classes based on application logic, input, and memory (AIM)

    • Identify invalid classes



  • Mind Maps to Identify Usage Scenarios

  • Mind Maps to Identify Quality Factors
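
To illustrate the equivalence-class step, here's a minimal sketch of what the identified classes might look like once they come off the map. The field, its boundaries, and the class groupings are hypothetical, not taken from Sabourin's article:

    # Hypothetical "order quantity" field (1-99 allowed), with classes grouped
    # roughly by where they come from: application logic, input, memory (AIM).
    equivalence_classes = {
        "order_quantity": {
            "valid": [
                ("application logic", "1 <= qty <= 99 (normal order)"),
                ("input", "numeric string, no sign, no whitespace"),
            ],
            "invalid": [
                ("application logic", "qty == 0 or qty > 99"),
                ("input", "empty, non-numeric, or negative value"),
                ("memory", "quantity restored from a saved cart that is no longer in stock"),
            ],
        },
    }

    def representative_tests(classes):
        """One test idea per partition: the usual payoff of equivalence classes."""
        for variable, groups in classes.items():
            for validity, partitions in groups.items():
                for source, description in partitions:
                    yield f"{variable} [{validity}, {source}]: {description}"

    for idea in representative_tests(equivalence_classes):
        print(idea)
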


Tie performance to business goals
Following up on the performance pitch post, here are some tips for helping your technology team talk in the language of your business team:

  • Take the time to define both top-line and bottom-line application business metrics

  • Work to prioritize that list of metrics to better understand what’s most important (creating tiers can help)

  • Identify what processes and transactions will affect those key metrics


When the team finds a possible performance issue later on, they can then translate what might otherwise be a generic metric (we're X seconds slower on transaction Y) into something that has meaning to the business (given that we're X seconds slower on transaction Y, we expect abandonment to go up Z%).
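
As a rough sketch of that translation, with entirely made-up numbers (the abandonment sensitivity is something your business team would supply, not a constant I'm asserting):

    # Hypothetical figures for transaction Y; only the arithmetic is the point.
    baseline_latency_s = 2.0        # agreed response-time target
    observed_latency_s = 3.5        # what the performance test measured
    abandonment_per_second = 0.04   # assumed: +4% abandonment per extra second
    baseline_abandonment = 0.20     # assumed current abandonment rate

    slowdown_s = observed_latency_s - baseline_latency_s
    expected_increase = slowdown_s * abandonment_per_second
    print(f"Transaction Y is {slowdown_s:.1f}s slower; "
          f"expect abandonment to rise from {baseline_abandonment:.0%} "
          f"to {baseline_abandonment + expected_increase:.0%}.")

Even that tiny linear model turns "we're 1.5 seconds slower" into "we expect abandonment to climb from 20% to 26%", which is the sentence the business actually hears.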

Defining those metrics, prioritizing them, and tying them to transactions doesn't necessarily need to be complicated. For some applications it will be. But for most applications I've tested, I suspect we could have done this over a couple of one-hour workshops using Excel. Don't make it harder than it needs to be.
Look for overlap in the schedules
Whenever I'm managing a large testing project, I try my best to break the testing into different, often overlapping iterations. I do this by looking at different types of testing (functional, performance, user acceptance, etc...) as well as different variations on testing scope (functionality by delivery date, by project priority, by component/module grouping, etc...). These iterations are often keyed primarily by code or environment delivery date, but they can also be keyed from resource availability. The idea isn't unique at all: take a big, bloated project and try to break it up into smaller, more manageable units of work. A tale as old as waterfall time...

However I set it up, I always go back and double-check to make sure I can manage all the overlapping iterations. I've gotten sideways before when, by trying to collapse the testing schedule, I've overbooked either testers or environments. Or I've inadvertently over-committed another group (like development or the business) because they need to support the testing effort in some way.

For each activity/iteration, ask yourself:

  • Who will be doing this testing and where will they be doing it?

  • What will they really need to get started (code, data, etc...)?

  • What support will they likely need and how much of it will they need?

  • What holidays or other project/company events fall inside those iterations that you're not thinking of because they are months away?

Don't get trapped by reuse
I'm a big fan of reusing previous testing artifacts. If you have a test bed ready and available, I'm interested in taking a look at it if it can help me shave time off my project. However, don't let past testing artifacts trap you or bias you. Not only can they have the wrong focus, but they run the risk of slowing you down instead of speeding you up. If you have a lot of rework to do, or if you need to spend a lot of time disambiguating coverage, you might spend more time reviewing than you would have spent just starting from scratch.

Here are some things I look at when trying to determine if an existing test bed is a time sink:

  • There's no clear summary of what's included in the test bed.

  • No one on the team was around when the test bed was originally used. Its coverage is hearsay.

  • A preliminary review of the test bed shows that several different types of updates are required to make it useful (data elements, screen shots, endpoints, etc...).

  • 100% of the original test bed development and execution was done by an external provider and the artifacts were "turned over" at the end of the project.