Heuristics for Test Question Generation
Today at the Workshop on Open Certification we came up with the following (unordered) heuristics that might be useful in test question creation:

1) Plausible buzzwords
2) True but irrelevant
3) Right but for the wrong reason
4) Some fool said it
5) My boss will believe it
6) Two conclusions from the same reason
7) Incomplete reason
8) More detail is typical in the correct answer
9) Confusing test techniques
10) Incorrect application of technique
11) Formally phrased answers
12) Read learning objectives first
13) Variations on the theme to make it more challenging
14) Any time you feel the need to mention a source, try to reword the question so the source does not need to be mentioned
15) Invert the cause and effect
16) Avoid inappropriate or confusing humor
17) The correct answer tends to be similar to the incorrect answers

There is no context for this list; it is posted mainly for distribution to attendees. Others will post details about it later.

The attendees included:

  • Scott Barber

  • Tim Coulter

  • Zach Fisher

  • Dawn Haynes

  • Doug Hoffman

  • Andy Hohenner

  • Paul Holland

  • Kathy Iberle

  • Karen Johnson

  • Michael Kelly

  • Phil Kos

  • Baher Malek

  • Ben Simo

What is a boundary (part 2)?
In a previous post, I gave a working definition of a boundary. That definition was "a boundary is any criteria by which I factor my model of what I'm testing."

James Bach challenged me to come up with three specific examples and to tell him how they are boundaries. With that, I pulled out my Moleskine and drew three different models: a UCML model, a system diagram, and the model I used to test the time-clock application.

I quickly came up with a list of 16 factors based on those three models. It became apparent to me that only 5 of those 16 factors were boundaries. So much for that definition.

As I looked over the list, I tried to figure out what was unique about the actual boundaries I had identified. Then I thought about something Julian Harty once said to me about boundary testing. He asked the question, "do all boundaries have a quantifiable component?" (Or something close to that, as I remember it. If I misquoted him, he'll let me know and I'll update the post.)

When he asked that, I immediately said no. He then asked me for an example, and I struggled to find one. At WHET #4, Rob Sabourin gave a beautiful example of boundary bugs from an experience testing Arabic-to-Latin text conversion. So I still think the answer is no, and I now have an example. However, it's still an excellent question, and remembering it gave me an insight into my working definition.

I have a new working definition:

"A boundary is any manipulatable criteria used to factor the model I'm using for the product I'm testing."

In the definition above, manipulatable means both that I can change it and that I can measure or monitor it. Using that definition, when I went back to the factors I identified from my three models, I was able to include the ones I felt were boundaries.
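To make that concrete, here is a minimal sketch of what treating a factor as a boundary under that definition might look like in an automated check. The shift-length limit, the accepts_shift function, and the 16-hour value are all hypothetical, invented only for illustration; the point is that the factor (shift length) is something a test can change and whose effect it can measure.

    # Hypothetical boundary check: shift length is a "manipulatable" factor
    # because the test can change it and measure/monitor the result.
    MAX_SHIFT_HOURS = 16  # assumed limit, for illustration only

    def accepts_shift(hours):
        """Stand-in for the application's shift-entry validation."""
        return 0 < hours <= MAX_SHIFT_HOURS

    def test_shift_length_boundary():
        # Manipulate the factor just inside, on, and just outside the boundary,
        # then observe the result each time.
        assert accepts_shift(MAX_SHIFT_HOURS - 0.25)      # just inside
        assert accepts_shift(MAX_SHIFT_HOURS)             # on the boundary
        assert not accepts_shift(MAX_SHIFT_HOURS + 0.25)  # just outside

    if __name__ == "__main__":
        test_shift_length_boundary()
        print("boundary checks passed")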

(At some point, if I think of it, I'll scan and post the Moleskine pages.)