Look for opportunities to shortcut a process
When I'm testing applications, I keep an eye out for opportunities to shortcut a process. For example, if you go to book a hotel room on Expedia, you very likely do the following:

  • Enter an arrival date and a departure date

  • Enter a city or address

  • Change the default number of rooms and/or travelers


When you click search, you get a set of results back, and all of them will have some indication of price and availability. I suspect that this is the "normal" process for booking on Expedia. However, you can also "repeat a trip" using Expedia. When you repeat a trip from your past itineraries, it defaults the hotel selection to the same hotel from that trip. You just select new dates.

Recently, I did a normal search for a hotel, and Expedia told me that the hotel was unavailable for my dates. When I repeated a previous trip for the same hotel, suddenly those dates were available and I could book. I found that interesting. Apparently, there are different rules that fire in each process flow.

When I'm evaluating what to test in an application, I look for those opportunities where I can shortcut one process with another. It's not uncommon to find differences in rules, available data, or messaging inconsistencies when doing so.
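
To make that concrete, here's a minimal sketch of the comparison in Python. Everything in it is hypothetical: the helper functions, identifiers, and dates stand in for however you actually drive your application (UI automation, API calls, etc.). The point is simply to reach the same end state through both flows and diff the results.

    # Two paths to the same state: the helpers below are hypothetical
    # stand-ins for whatever drives your application under test.

    def availability_via_search(hotel_id, checkin, checkout):
        """Hypothetical: the "normal" flow (search, then read availability)."""
        # Drive the search flow here; placeholder result for illustration.
        return False

    def availability_via_repeat_trip(itinerary_id, checkin, checkout):
        """Hypothetical: the shortcut flow (repeat a past trip with new dates)."""
        # Drive the repeat-trip flow here; placeholder result for illustration.
        return True

    def check_flows_agree(hotel_id, itinerary_id, checkin, checkout):
        normal = availability_via_search(hotel_id, checkin, checkout)
        shortcut = availability_via_repeat_trip(itinerary_id, checkin, checkout)
        if normal != shortcut:
            print(f"Inconsistency: search says {normal}, repeat-trip says {shortcut}")

    # Made-up identifiers and dates, purely for illustration.
    check_flows_agree("hotel-123", "itin-456", "2015-06-01", "2015-06-03")

In the Expedia example above, that same comparison would have flagged the availability difference between the search flow and the repeat-trip flow.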
Do we really need a script or can we use a checklist?
On a current project, we have a lot of testing that requires deep domain knowledge. Given the level of domain knowledge required, we're planning on having domain experts do a good portion of the testing. This has uncovered an interesting opportunity for us. To date, most testing done in this organization has been scripted (giant Word documents full of steps and screenshots). In the past I believe this was done because the testers didn't have the requisite domain knowledge to execute the tests without a lot of help. But now, by making this shift, we can reassess whether we really need test scripts.

We've decided that for much of the testing, we'll switch to checklists instead of test scripts. While we might still have a couple of test procedure documents that provide high-level outlines for how to do something, the details of the testing will be in some (relatively) simple Excel checklists. This saves us a lot of work, since we no longer need to update a bunch of artifacts every time something changes. I suspect it will also increase our test execution velocity by reducing the overhead of running a series of tests.
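
To give a flavor of the difference: a script spells out every step, while a checklist just names the checks and tracks status. Here's a tiny sketch in Python rather than Excel; the columns, areas, and checks are all made up for illustration.

    from collections import Counter

    # A hypothetical checklist: one row per check, no step-by-step script.
    # In practice ours will live in simple Excel sheets; these columns and
    # items are illustrative only.
    checklist = [
        {"area": "Intake",   "check": "Duplicate record is flagged",    "status": "pass"},
        {"area": "Intake",   "check": "Required-field warning appears", "status": "fail"},
        {"area": "Payments", "check": "Partial payment recalculates",   "status": "not run"},
    ]

    # A one-line roll-up replaces pages of scripted results.
    print(Counter(row["status"] for row in checklist))
    # Counter({'pass': 1, 'fail': 1, 'not run': 1})

The details of how to exercise each check come from the domain expert running it, which is exactly why this works better with knowledgeable testers.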

Kaner has been talking a lot recently about the value of checklists. If you've not reviewed his work, check it out. Look for opportunities where you might leverage checklists instead of some of the more traditional heavyweight scripts associated with testing in corporate IT environments.
Ask for help
Here's how I can tell I need help:

  • I'm working at 1:00 AM and it's not because it's required (like some performance testing) or because I really like doing it (too much cool software and not enough daylight time).

  • People are talking about something that needs to be tested and I can't follow the conversation (due to a lack of domain knowledge or technical knowledge).

  • I'm missing deadlines (for the project I'm struggling with or for other projects because I can't get away from the project I'm struggling with).

  • I'm not happy with the quality of my work.


If one or two of those happen for a short period of time, I probably need help. If all of them are happening (like they are right now), I know I need help.

When you need help, ask for it. That's easy to say, but hard for a lot of people to do. I know it's hard for me. I also know it's hard for many people who've worked for me. For me, it's an ego thing. But sometimes I need to recognize that I'm not helping anyone (most of all myself) by trying to hide the fact that I need help. Often, help is there - ready and waiting to be had.

If you're overwhelmed, ask for help. You might be surprised where it comes from.
Look for overlap in the schedules
Whenever I'm managing a large testing project, I try my best to break the testing into different, often overlapping, iterations. I do this by looking at different types of testing (functional, performance, user acceptance, etc.) as well as different variations on testing scope (functionality by delivery date, by project priority, by component/module grouping, etc.). These iterations are often keyed primarily by code or environment delivery date, but they can also be keyed off resource availability. The idea isn't unique at all: take a big, bloated project and try to break it up into smaller, more manageable units of work. A tale as old as waterfall time...

However I set it up, I always go back and double-check to make sure I can manage all the overlapping iterations. I've gotten sideways before when, in trying to collapse the testing schedule, I've overbooked either testers or environments. Or I've inadvertently over-committed another group (like development or the business) because they need to support the testing effort in some way. A quick mechanical check for these collisions is sketched after the questions below.

For each activity/iteration, ask yourself:

  • Who will be doing this testing and where will they be doing it?

  • What will they really need to get started (code, data, etc.)?

  • What support will they likely need and how much of it will they need?

  • What holidays or other project/company events fall inside those iterations that you're not thinking of because they are months away?
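
Once you've answered those questions, it helps to sanity-check the calendar mechanically. Here's a minimal sketch of that double-booking check in Python; the iteration names, resources, and dates are made up.

    from datetime import date
    from itertools import combinations

    # Hypothetical iteration plan: (name, shared resource, start, end).
    iterations = [
        ("Functional pass 1", "Test env A", date(2015, 3, 2),  date(2015, 3, 13)),
        ("Performance",       "Test env A", date(2015, 3, 9),  date(2015, 3, 20)),
        ("UAT",               "Business",   date(2015, 3, 16), date(2015, 3, 27)),
    ]

    def overlaps(a_start, a_end, b_start, b_end):
        """Two date ranges overlap if each starts on or before the other ends."""
        return a_start <= b_end and b_start <= a_end

    # Flag any two iterations that share a resource and overlap in time.
    for (name_a, res_a, s_a, e_a), (name_b, res_b, s_b, e_b) in combinations(iterations, 2):
        if res_a == res_b and overlaps(s_a, e_a, s_b, e_b):
            print(f"Double-booked {res_a}: '{name_a}' overlaps '{name_b}'")

The same structure works for testers, environments, or supporting groups, and holidays can be added as just another entry against the relevant resource.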

Don't get trapped by reuse
I'm a big fan of reusing previous testing artifacts. If you have a test bed ready and available, I'm interested in taking a look at it if it can help me shave time off my project. However, don't let past testing artifacts trap you or bias you. Not only can they have the wrong focus, but they run the risk of slowing you down instead of speeding you up. If you have a lot of rework to do, or if you need to spend a lot of time disambiguating coverage, you might spend more time reviewing than you would have spent just starting from scratch.

Here are some things I look at when trying to determine if an existing test bed is a time sink:

  • There's no clear summary of what's included in the test bed.

  • No one on the team was around when the test bed was originally used. Its coverage is hearsay.

  • A preliminary review of the test bed shows that several different types of updates are required to make the tests useful (data elements, screenshots, endpoints, etc.).

  • 100% of the original test bed development and execution was done by an external provider and the artifacts were "turned over" at the end of the project.