Planning for reporting
Working on an article related to test analysis and reporting, I got to thinking about some of the questions I try to answer when I report test status. Here are some of the questions I might ask myself during a project:

  • How much testing did we plan to do and how much of that have we done?

  • What potential tests are remaining and in which areas?

  • What is our current test execution velocity and what has it been for the duration of the project? How does that break out across testing area or test case priority?

  • What is our current test creation velocity and what has it been for the duration of the project? How does that break out across testing area or test case priority?

  • How many of our tests have been focused on basic functionality (basic requirements, major functions, simple data) vs. common cases (users, scenarios, basic data, state, and error coverage) vs. stress testing (strong data, state, and error coverage, load, and constrained environments)?

  • How much of our testing has been focused on capability versus other types of quality criteria (performance, security, scalability, testability, maintainability, etc.)?

  • How many of the requirements have we covered and how much of the code have we covered?

  • If applicable, what platforms and configurations have we covered?

  • How many issues have we found, what severity are they, where have we found them, and how did we find them?



What did I miss? What do you ask?
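
To make the velocity and breakdown questions above a bit more concrete, here's a minimal sketch of how I might pull those numbers from a simple execution log. The file name and columns (date, area, priority, result) are just assumptions; substitute whatever your test management tool actually exports.

    # Rough sketch: execution velocity per week, broken out by area and priority.
    # Assumes a hypothetical CSV (test_results.csv) with one row per test run.
    import csv
    from collections import Counter, defaultdict
    from datetime import datetime

    executed_per_week = Counter()          # execution velocity over time
    by_area = defaultdict(Counter)         # breakdown by testing area
    by_priority = defaultdict(Counter)     # breakdown by test case priority

    with open("test_results.csv", newline="") as f:
        for row in csv.DictReader(f):
            week = datetime.strptime(row["date"], "%Y-%m-%d").strftime("%Y-W%W")
            executed_per_week[week] += 1
            by_area[week][row["area"]] += 1
            by_priority[week][row["priority"]] += 1

    for week in sorted(executed_per_week):
        print(week, executed_per_week[week], dict(by_area[week]), dict(by_priority[week]))

The same kind of tally works for creation velocity if you log when tests are written rather than when they are run.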
How to determine test coverage
A while ago I answered the following question on SearchSoftwareQuality.com’s Ask The Software Quality Expert: Questions & Answers.


The project that I'm working on is to create regression test scripts for applications which are migrating from another location. This test team has mainly functional (system test) scripts and there are many applications that we are taking over. How would you approach the regression scripting considering we have many applications and limited resources?


Here is a clip from my answer:


I follow the same basic steps when I think about regression testing as I do for any other type of testing. I try to force myself to think about coverage, risk and cost. I'm always looking to evaluate what tests can best address my most important risks with the most coverage given the time, tools and resources I have.

Understanding what you have to cover
I would encourage you to start off by making an outline of all the things you could potentially cover in your regression testing. [...]

Understanding your risks
After you have a list of what you might want to cover in your regression testing, start thinking about the specific risks that you're concerned about. [...]

Putting your risks and coverage together (chartering your work)
Once you know what you want to test (coverage) and why you want to test it (risk) you're ready to charter your work. Chartering is the activity where you put it all together in meaningful terms. [...]

Prioritize the work and figure out what you can do
Once you've chartered your work, you should have some idea of how much work might be in front of you if you wanted to test it all (or everything you could think of at least). [...]

At the end of all this, you'll have a list of charters for each application that you should be able to execute given the resources you have. Following some sort of schedule that makes sense for your team, get in the habit of incrementally reviewing the coverage, risk, charters and priority you have for each application. You'll find that what you want to test and why you want to test it will change over time.
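
To give a rough feel for that last step, here's a minimal sketch of prioritizing chartered work against a fixed budget of testing hours. The charters, scores, and estimates below are made up for illustration; the point is simply ordering by the risk and coverage trade-off and cutting off at what your resources allow.

    # Rough sketch: order hypothetical charters by risk x coverage, keep what fits.
    from dataclasses import dataclass

    @dataclass
    class Charter:
        application: str
        description: str
        risk: int        # 1 (low) to 5 (high)
        coverage: int    # 1 (narrow) to 5 (broad)
        hours: float     # rough effort estimate

    charters = [
        Charter("Billing", "Recalculate invoices after data migration", 5, 3, 8),
        Charter("Billing", "Exports to the downstream ledger", 4, 2, 4),
        Charter("Portal", "Login and session handling across browsers", 3, 4, 6),
        Charter("Portal", "Profile editing with boundary data", 2, 3, 3),
    ]

    budget = 16.0
    plan, used = [], 0.0
    for c in sorted(charters, key=lambda c: c.risk * c.coverage, reverse=True):
        if used + c.hours <= budget:
            plan.append(c)
            used += c.hours

    for c in plan:
        print(f"{c.application}: {c.description} ({c.hours}h)")

However you score things, the output is the same idea: a ranked, resource-bounded list of charters per application that you revisit as coverage and risk change.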


You can find the full posting here.