Analyzing your testing afterwards

After chess players play a big game, they spend time afterward analyzing what they did and the consequences of those choices. They look for things they could have done better, and try to identify patterns in their own play or in the play of their opponent. Testers can do this as well. After you test something, go back and analyze what you've done.

Analysis is different from review: don't just look at what you did. Question it. Ask what you could have done differently, and try to think about how that would have changed your testing. This doesn't just apply to exploratory testing; it very much applies to scripted testing as well. This isn't a personal debrief, it's you looking at the work you turned in... asking yourself if it was your best work... and trying to figure out how to make it better.

Here are some things I look for when I'm analyzing my own testing:

  • Where could I have moved faster? Where did I get stuck, or waste time?

  • Where could I have moved slower? Where did I potentially gloss over something important or skim information that might have been critical to my learning about the product or my analysis of a test result?

  • What parts of the system was I ignoring while doing this testing? Was that appropriate? Should I have been paying more attention to them? What possible issues would I have seen if I had been looking in those areas?

  • What techniques did I apply while testing? Did I apply them well? Where could I have changed my behavior to possibly develop a better test case? Are there different techniques I could have employed to make my testing better or faster?

  • What kind of coverage did I attain for the area of the system I was testing? Was that sufficient? What risks were not addressed given the coverage I achieved? Are there subtle changes I could have made to my testing to help me achieve higher coverage with the same amount of work?
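Questions like these are much easier to answer if you keep even lightweight notes while you test. As one possible approach (not something prescribed above, and with invented area names and numbers purely for illustration), you could jot down each session's area, time spent, and issues found, then tally them afterward to see where your time went versus where problems actually turned up:

```python
from collections import defaultdict

# Hypothetical session notes: (area tested, minutes spent, issues found).
# Areas and figures are invented for illustration only.
sessions = [
    ("login", 45, 2),
    ("checkout", 90, 1),
    ("checkout", 30, 0),
    ("settings", 10, 0),
]

minutes = defaultdict(int)
issues = defaultdict(int)
for area, mins, found in sessions:
    minutes[area] += mins
    issues[area] += found

# Compare effort against yield per area -- a starting point for asking
# "where did I waste time?" and "what was I ignoring?"
for area in sorted(minutes):
    rate = issues[area] / (minutes[area] / 60)
    print(f"{area}: {minutes[area]} min, {issues[area]} issues, "
          f"{rate:.1f} issues/hour")
```

A tally like this won't answer the analysis questions for you, but an area with lots of minutes and no issues (or barely any minutes at all) is exactly the kind of thing worth questioning in your debrief.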