Blitz testing
In chess, the idea of a blitz game is that each side gets less time to make their moves than under normal tournament time controls. Blitz is also often used to help a player who's learning the game develop their intuition. When playing a blitz game, you don't have time to think 30 moves out (or for us mere mortals, 5 moves). The idea, when used as a learning tool, is to compare your first impression of positions with how those positions actually develop during the game.

I think this can also be a good learning tool for testers. For me, a normal test session is around 45 minutes. One way to practice is to give myself 5, 10, or 15 minutes instead of my traditional 45. Since I want to achieve the same level of coverage in that small amount of time, I need to move incredibly quickly. At the end of the blitz session, I go back and execute the full session (or the rest of it). When I'm done, I can analyze what I missed in the blitz session and try to understand why. I can also look at things I found that I might not have found had I been slower and more deliberate. (That analysis, however, is more difficult to do.)

The goal here is to try to understand both how your testing changes given the amount of time you have (which is valuable to understand), but also to help you develop your intuition for where risk is within the product and which techniques are going to be most helpful and under what conditions.
Solving complex testing problems
There are a number of testers out there who pose practice problems to the testing community (Matt Heusser, James Bach, Michael Bolton, and Ross Collard, to name but a few). Attempting to tackle those problems can be a great way to develop your testing skills. There are three big reasons why this type of practice can help you develop your skills better and faster than simply logging four or five extra hours at the office:

  • It's different from what you do every day. If you're working one of these examples, you'll most likely be testing software that's significantly different from what you test at the office. That stretches your imagination, gets you away from your everyday biases and assumptions about what you can and can't test, and focuses you on the mechanics of testing rather than the mechanics of the product or the business problem. If you play chess, poker, sports, or even play in a band, you can get good by practicing with the same people all the time, but you can't get great. To get great you need a variety of experiences. You need to compete or play against people who are different from those you practice against. Testing is no different.

  • It's designed to be more complex than your average test. If you're going to practice, you need difficult problems. Yesterday a fellow tester gave me a programming problem from college related to the Fibonacci sequence. I was able to show him example code solving the problem on my first try, in about thirteen lines. That wasn't practice. It was problem number four on his page of practice programming problems. He then gave me problem number 200 (or some other similarly high number). It was so complex that if I really wanted to solve it, I'd have to spend some time looking up the math and geometry terms involved. Then I'd have to stand at a whiteboard and design a couple of algorithms to make sure the solution would execute in the time allowed. That's a practice problem. It's something that stretches you to do something you've not done before. The first problem wasn't practice (for me); the second one was. As you become more skilled as a tester, you need harder and harder problems to solve. In addition to being more complex than your average daily work-related testing problem, these problems are also (typically) focused on a specific aspect of testing. They're designed to get you thinking about a technique or a concept of risk or coverage that you don't use every day.

  • Feedback is built into the exercise. The most important aspect of these problems is that you get feedback. Sometimes you get direct, personal feedback from the people who presented the problem. James Bach, for example, is almost always willing to critique a solution you provide to a problem he presents. That's worth the price of admission right there! But it often goes further than that. Often you get to see your solution right next to twenty others and see how other people solved it. That can be just as important as the personalized feedback: it lets you learn from others, analyze your own performance, and work to correct your technique or process.
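To make the "too easy to count as practice" end of the scale concrete: the original warm-up problem's exact statement isn't given, so as a hypothetical stand-in, a plain "first n Fibonacci numbers" routine is the kind of thing a practiced programmer can knock out in a dozen lines on the first try. The function name and interface here are my own:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 0, 1, 1, 2, 3, ..."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # advance the pair to the next term
    return seq

print(fibonacci(10))  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

If a problem yields to you this quickly, it isn't practice; a good practice problem should force you to stop, research, and design before you can write anything.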

Analyzing your testing afterwards
After chess players play a big game, they spend time afterward analyzing what they did and the consequences. They look for things they could have done better, or try to identify patterns in their own play or the play of their opponent. Testers can do this as well. After you test something, go back and analyze what you've done.

Analysis is different from review: don't just look at what you did. Question it. Ask what you could have done differently, and try to think about how that would have changed your testing. This doesn't just apply to exploratory testing; it very much applies to scripted testing as well. This isn't a personal debrief. It's you looking at the work you turned in, asking yourself if it was your best work, and trying to figure out how to make it better.

Here are some things I look for when I'm analyzing my own testing:

  • Where could I have moved faster? Where did I get stuck, or waste time?

  • Where could I have moved slower? Where did I potentially gloss over something important or skim information that might have been critical to my learning about the product or my analysis of a test result?

  • What parts of the system was I ignoring while doing this testing? Was that appropriate? Should I have been paying more attention to them? What possible issues would I have seen if I had been looking in those areas?

  • What techniques did I apply while testing? Did I apply them well? Where could I have changed my behavior to possibly develop a better test case? Are there different techniques I could have employed to make my testing better or faster?

  • What kind of coverage did I attain for the area of the system I was testing? Was that sufficient? What risks were not addressed given the coverage I achieved? Are there subtle changes I could have made to my testing to help me achieve higher coverage with the same amount of work?