Listening Driven Testing
Many testers know that to test well you have to ask lots of questions, and not just of one or two developers, but of a whole range of people who have a vested interest in the product you're testing. We ask questions to understand the context. We ask questions to understand the application we are testing.

I wonder, though, am I the only tester out there who is not that great at listening to the answer?

I ask the question and I hear the answer, whatever the Project Manager or Marketing Director says, but hearing is different from listening.

Listening means, "I hear what you say, and I will take that into account."

As a tester with a bit of experience, I have to confess I sometimes think I know best. I go into "Yeah sure, that's what you think you want, but here is what you really need" mode and forget to listen to other people.

It takes respect and humility to listen properly to other people. It's something I want to improve on. I know it will make me a better tester in the end.

So, next time you have that conversation with your colleagues or potential clients, do yourself a favor and really listen to what they have to say. You might even learn something.

Merry Christmas
Testing Mobile Apps? Be Sure to Schedule Extra Time to Account for Physical Factors
Testing mobile applications is very different from testing on your computer. Mobile devices are smaller, so they pose different physical challenges. It is hard to use them for as long as a workstation or laptop without feeling fatigued. And because they are smaller, it often takes longer to complete tasks in the application than it would in a regular web app. If you are porting a traditional application to a mobile device, it will probably take longer to test due to these physical factors.

Schedule time for other tasks in between mobile testing to prevent physical burnout from:

  • eye strain from staring at a small screen on a device that is hard to keep still

  • repetitive stress on your fingers and hands from inputs into the device

  • strain on your arms from holding the device

  • raw, sore fingertips, especially with touch devices

  • soreness in your back and neck when hunching over the device


Work with your testing team to schedule rest time to combat mobile-related physical fatigue, and identify more ergonomic ways of sitting and interacting with the devices. This is much harder than with regular workstations and laptops, and it can sneak up on you during an intense release with time pressures. If your testers' fingers are too sore to test another daily build of your new iPhone app, there isn't a lot you can do about it. Plan ahead, factor in more time for testing, and make sure the schedule permits enough rest to avoid mobile burnout.
Draw it five times before writing it down
Any time I'm planning a testing approach for a project, I try to come up with a way to model it visually (flow charts, system diagrams, sequence diagrams, Venn diagrams, and so on) so I can quickly explain what I think we're testing and how we're testing it. I usually end up drawing my picture of our testing on whiteboards, flip charts, and the backs of napkins and scrap paper hundreds of times over a project. Over time, my diagram changes as I add more detail, and my story of our testing gets richer.

The more I draw the picture of our testing, the more feedback I get. It's for this reason that I make sure I draw it a minimum of five times, for at least five different audiences, before I commit my picture to any formal medium (like Visio or Word). History tells me I don't know anything until I'm at least at version 0.5 of my picture. And even then, things are still changing regularly. But after five, I normally have the outline.

As I draw, I'm telling the story of what our testing will look like. I always tell the entire story, even if I think the person I'm telling it to may already know what we're going to do. Early on, I hear "you can't do that" or "that's not how it works" at least once every five minutes. If I don't hear someone say I can't do what I'm thinking, or express some other concern, I start to wonder if they're really listening to me. Because for any project of even medium complexity, testing is messy (environments, domain knowledge, resources, order-of-magnitude estimates, etc.).

On a side note, my iPhone makes a great repository of photos of draft versions of the picture so I can see how it evolves over time. After the first week of a project, I almost have a flip-book for how our test approach unfolded. Kinda neat.
Order of magnitude estimates
I'm not a big fan of throwing around numbers when talking about testing. I understand that saying things like "We have 100 test cases" doesn't really communicate anything other than a number. However, I have found that using order-of-magnitude numbers can help communicate size and scope to other project and test managers when planning a project.

I find that as we're developing resource plans and coarse-grained timelines, orders of magnitude can help imply level of effort. If we're talking about scripted or automated tests, I'll talk about tens or hundreds of tests. If we're talking about sessions or charters, I'll talk in powers of two (2, 4, 8, 16, or 32).
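As a small illustrative sketch of those two scales (the function names and bucketing here are my own invention, not part of any real estimation tool), rounding a raw estimate into a "tens/hundreds" label or the nearest power of two might look like:

```python
import math

def order_of_magnitude_label(count):
    """Bucket a scripted/automated test count into an
    order-of-magnitude label like 'tens' or 'hundreds'."""
    if count <= 0:
        return "none"
    exponent = int(math.floor(math.log10(count)))
    labels = {0: "ones", 1: "tens", 2: "hundreds", 3: "thousands"}
    return labels.get(exponent, "many thousands")

def nearest_power_of_two(sessions):
    """Round a session/charter estimate to the nearest power of two,
    matching the 2, 4, 8, 16, 32 scale; ties round up."""
    if sessions <= 1:
        return 1
    lower = 2 ** int(math.floor(math.log2(sessions)))
    upper = lower * 2
    return lower if sessions - lower < upper - sessions else upper

print(order_of_magnitude_label(250))   # hundreds
print(nearest_power_of_two(12))        # 16
```

The point isn't the arithmetic, of course; it's that rounding to these coarse buckets strips away false precision when you're comparing gut feelings about scope.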

Sometimes, when we think everyone's on the same page, we can quickly gauge each other's understanding of how much testing might be involved. If we're talking about testing something and I'm not sure we share the same understanding of the amount of work involved, I might ask, "How many tests do you think that is?" If you say tens of tests and I'm thinking hundreds, that's a good indicator that you and I need to get on the same page. We've identified an opportunity to talk about what our "tests" entail and what level of coverage we really want.