Posts in Testing Techniques
Promiscuous Pairing
If your team already does some pair testing, take a look at promiscuous pairing.
"Promiscuity, it turns out, is a good way to spread a lot of information through a group quickly. Rapid partner swapping ensures that a good idea, once envisioned, is soon practiced by every pair. Replacing individual accountability with team accountability empowers each person to do those tasks at which he excels — and allow someone else to take over for his weaknesses."

Try 60- to 90-minute sessions, alternating partners each session.
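If it helps to plan the rotation explicitly, a round-robin schedule guarantees that every pairing eventually happens. Below is a minimal sketch in Python; the team names and the 90-minute session length are placeholders, not anything prescribed by the technique.

```python
from itertools import islice

def round_robin_pairs(team):
    """Yield one session's worth of pairs at a time, rotating partners
    (circle method) so everyone eventually pairs with everyone else."""
    people = list(team)
    if len(people) % 2:
        people.append(None)  # odd team size: one person sits out (or works solo) each session
    half = len(people) // 2
    while True:
        pairs = [(people[i], people[-(i + 1)]) for i in range(half)]
        yield [(a, b) for a, b in pairs if a is not None and b is not None]
        # keep the first person fixed and rotate everyone else one seat
        people = [people[0]] + [people[-1]] + people[1:-1]

# Example: plan four 90-minute sessions for a hypothetical six-person team.
team = ["Ana", "Ben", "Chris", "Dana", "Eli", "Fran"]
for session, pairs in enumerate(islice(round_robin_pairs(team), 4), start=1):
    print(f"Session {session} (90 min): {pairs}")
```

Fixing one person in place and rotating everyone else (the classic circle method) is what keeps pairings from repeating until everyone has worked with everyone else.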
500 word minimum
To help themselves stay focused and productive, some writers set daily minimums. In a recent issue of Writer's Digest, author Hank Schwaeble said he uses 500 words a day as his minimum. Most authors who do this have little to no expectation that those are 500 "publishable" words. That is, they don't expect the words to be polished or to escape editing. I'd go so far as to say they might not have any expectation of publishing them at all. It's more about making sure you log time doing the most basic unit of the work - writing - so that editing, revising, contacting editors, and other administrative tasks don't eat up all your time. Different writers do it for different reasons.

There are other examples of people in our industry using quotas like this. Jerry Weinberg's Rule of Three is one: "If I can’t think of at least three different interpretations of what I received, I haven’t thought enough about what it might mean." I have similar rules for chartering: if I don't come up with at least X test/charter ideas, I haven't really explored the problem enough. I never expect to execute all the charters I come up with, but I'm a believer in overproduction and abandonment, and I find it leads to better work on my part.

Think about the different aspects of what you do on a day-to-day basis and see whether there are any areas where you might benefit from overproduction and abandonment. What "minimums" could you put in place to help you develop more test ideas, better execute certain tasks, or just break through a wall you might hit while learning something new? Sometimes setting aside judgments about the quality of your work, and just logging time doing it, can be an important step toward becoming more effective at it.
What's in a smoke test?
When figuring out what to smoke test for a release/build/environment, I run down the following list in my head:

  • What's changed in this build/release?

  • What features/functions are most frequently used?

  • What features/functions are most important or business critical to the client?

  • Is there technical risk related to the deploy, environment, or a third-party service?

  • Are there any automated tests I can leverage for any test ideas I came up with in the previous questions?


Based on my answers to those questions, I come up with my set of tests for the release. If it's a big release (that is, one with more features or more visibility), I'll do more smoke tests. It's been my experience that different releases often require different levels of smoke testing.
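On the last question (leveraging automated tests), one lightweight option is to tag the checks you want in the smoke subset so they can be run on their own. Here's a sketch using pytest markers; the test names and features are invented for illustration, and the details will differ with your framework.

```python
# A hypothetical example of tagging a smoke subset with pytest markers.
# The "smoke" marker would normally be registered in pytest.ini to avoid warnings.
import pytest

@pytest.mark.smoke
def test_login_page_loads():
    # Placeholder assertion; a real check would hit the deployed environment.
    assert True

@pytest.mark.smoke
def test_order_total_includes_shipping():
    # Stand-in for a business-critical calculation worth checking on every release.
    assert round(19.99 + 5.00, 2) == 24.99

def test_rarely_used_report_export():
    # Not tagged: runs in the full regression pass, not the smoke subset.
    assert True

# Run only the smoke subset against a new build with:
#   pytest -m smoke
```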
Listing attributes
A lesson I learned from James Bach a number of years ago is to list out attributes of something before you test it. You can practice this with anything: the book on your desk, your keyboard, or this WordPress blog. List out as many attributes as you possibly can. Share your list with other people. Have them tell you what's missing.

This is a great way to come up with test ideas for a product. Practicing it is fun and easy, and it's directly applicable to what we do as testers: it trains your mind to quickly identify the relevant attributes of a product. You'll find that your test idea generation improves as you get better at clearly identifying attributes.
Pulling apart sequence and activity diagrams
I like using sequence and activity diagrams to generate test ideas. When looking at a sequence or activity diagram, ask yourself what would happen in the system if a specific line didn't happen, or didn't happen correctly. For example, take the following diagram from Donald Bell's article on UML Sequence Diagrams:

[Sequence diagram from Bell's article: the system retrieves a returned check's amount and the account balance, and adds an insufficient-funds fee when the balance can't cover the amount.]

If I were to use this to create test cases, I'd ask questions like:

  • What happens if the amount doesn't come back?

  • What happens if the amount that comes back is in an invalid or unexpected format?

  • What happens if the amount that comes back is corrupted or very large, or if the connection is interrupted?

  • (similar questions for balance)

  • If the balance is less than the amount, what happens if I note the returned check without adding the insufficient fund fee? Can I do that?

  • What if, while noting the returned check, I use an invalid check number, use a check that's already been noted previously, or don't pass a check number?

  • (etc...)


You get the idea. When generating test ideas, just list them out as you think of them. If you have a sequence or activity diagram of any complexity, you'll end up with a long list. Afterward, you can work with other project team members to shrink the list down to the scenarios you feel will provide the most value to the project.
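To show how a list like this can turn into concrete checks, here's a sketch in Python. The note_returned_check function, the get_amount callable, and AmountServiceError are hypothetical stand-ins for whatever the diagram's messages map to in your system; the point is that each "what happens if this message fails?" question becomes its own case.

```python
import pytest

class AmountServiceError(Exception):
    """Hypothetical error for when the 'retrieve check amount' message never comes back."""

def note_returned_check(get_amount, balance):
    """Toy stand-in for the system under test: look up the check amount and
    decide whether an insufficient-funds fee applies."""
    amount = get_amount()
    if not isinstance(amount, (int, float)) or amount < 0:
        raise ValueError(f"invalid amount: {amount!r}")
    return {"fee_applied": amount > balance}

def amount_lookup_never_returns():
    raise AmountServiceError("no response from the amount lookup")

# Each case maps to one "what happens if this message fails?" question above.
@pytest.mark.parametrize(
    "get_amount, expected_error",
    [
        (amount_lookup_never_returns, AmountServiceError),  # amount doesn't come back
        (lambda: "12,34", ValueError),                       # unexpected format
        (lambda: -50.0, ValueError),                         # corrupted / nonsensical value
    ],
)
def test_amount_failure_modes(get_amount, expected_error):
    with pytest.raises(expected_error):
        note_returned_check(get_amount, balance=100.0)

def test_fee_applied_when_balance_cannot_cover_amount():
    assert note_returned_check(lambda: 150.0, balance=100.0)["fee_applied"] is True
```

Adding another question from the list is just adding another parametrize row, which keeps the long brainstormed list cheap to carry until the team decides which scenarios to keep.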