Testing is questioning
I think it was Jon Bach's work on open book testing that really drove home the analogy of every test being a question. Each test you execute is really just a question you're asking the software. If you can think of a question to ask, you've thought of a test to execute. It's elegant in its simplicity.

So what does this mean for your testing? For each test you're thinking of running, you can use this analogy as a tool to develop a better test:

  • Before you run each test, clearly articulate in your mind the question that you're asking. I've found that this not only helps clarify what I'm really trying to accomplish with any given test, but also helps me think of things I need to verify after the test has been executed. Once I think of a test as a question, I recognize that there might be multiple ways to answer it. That gives me multiple things I need to verify (like log files, database entries, etc.).

  • Before you run each test (or more practically, any given set of tests), try to express the test in several different ways to clarify its scope. For example, what do I really mean when I say "Ensure that when you stack an utterance you can't bypass required script logic"? Does it matter what I stack? Does it matter what area of script logic? How would I know if I bypassed it successfully? Is there a way for it to be partially bypassed? How would I know? How else could I phrase the question my test is asking to make these things more clear?

  • It can also be helpful to ask if any given test should be broken out into smaller tests. Are there smaller and simpler questions that could be asked? What are they? Do they tell you more or less?

  • Finally, ask yourself if your test might have multiple right answers. Does pass really mean pass? And does fail really mean fail? What conditions could take place that might make you change your answer? While this might not change this specific round of testing, it might give you ideas for other tests.

Clarifying your charter
When I'm teaching exploratory testing, I find that one of the most difficult things to learn can be chartering. If you're not practiced at moving from the abstract to the specific, it can be hard to do well. It becomes even more difficult to figure out what will actually fit in your session time box.

Here are some tips for clarifying the purpose of your testing:

  • Don't feel like you need to develop all your charters in one sitting or all of them upfront. Be comfortable with charters emerging slowly over time. While you'll need some charters defined upfront so you can get started, often you'll find that your charter base will fill in as you go.

  • Take the time to state the mission (or purpose) of the charter as clearly as possible. Don't say "Test the portal for reporting accuracy" when you can instead say "Test reports X, Y, and Z for errors related to start and end time selection criteria, summing/totaling, and rounding." In my experience, the more specific you are, the better your testing will be. If you need to take an extra 30 to 120 seconds to get more specific, take them.

  • Similar to the last tip, if you can't tell one mission from another, you haven't done a good enough job defining it. If you have charters to "Test feature X," "Stress test feature X," and "Performance test feature X," can you tell me what the differences are between them? Couldn't some stress tests be a subset of simple feature testing? Couldn't some performance tests be a subset of stress testing? If you can't compare two missions side-by-side and have clear and distinct test cases come to mind, then you might benefit from spending a little more time refining your missions.

  • Finally, while you're testing, go back and make sure your mission is still correct. There are two goals here. First, you want to make sure you're on mission. If you need to test for rounding errors in reporting, but you find you just can't stop testing filters and sorting, then create a charter for testing filters and sorting and execute that charter instead. You can always go back to the charter for testing for rounding errors. Second, if you find as you test that you can better clarify your original mission, add that clarity as you go. It will help you when you go back to write new charters. The clearer you can make it, the easier it will be to recall what you actually tested three days later, when you're looking back and trying to remember what work you still have in front of you.

Types of disaster-recovery tests
Today's tip (like yesterday's tip - it's a DR weekend here at QTT) comes from Klaus Schmidt's High Availability and Disaster Recovery. In the book Schmidt outlines three broad categories for disaster-recovery testing:

  • Hot simulation/fire drill: This is simply a walkthrough of the DR procedures. You're just making sure all the steps still make sense, everyone has what they need, and everyone knows what to do. A lot of corporations run these fire drills on a regular basis (quarterly, annually, etc.).

  • Single component: A single component of the DR system is activated and used for functional or performance testing. Often when a piece of the primary system is updated, the DR component will also be updated and tested. When I was doing a lot of performance testing, this happened quite a bit.

  • Complete environment: In this category, the primary systems are deactivated and the DR systems are activated. This is a full test of the DR environment: systems and processes.


For more on HA and DR, pick up the book. It's a fairly dry read, but the author made it quite skimmable and it's packed full of great information.
Disaster-recovery quality characteristics
Today's tip comes from Klaus Schmidt's High Availability and Disaster Recovery. In the book Schmidt outlines a useful list for being able to test a disaster-recovery (DR) installation:

  • Being able to activate the DR system while the primary system is active.

  • Being able to test the DR system without requiring changes to the primary system.

  • Being able to component test different aspects of the DR system.

  • Availability of workload generators, regression tests, and monitoring on the DR installation.

  • A short time frame for resynchronization after testing is completed.