Testing labels can get in the way of talking about testing

Through answering a lot of questions on SearchSoftwareQuality.com, I've found that terms like integration testing, unit testing, system testing, and acceptance testing often get in the way of talking about testing. There are many of these terms, and they often mean different things to different people.

I suspect we use those labels in an attempt to do one (or more) of the following:

  • imply chronological position within a phased life-cycle

  • imply a set of common testing practices done within that phase (automation of unit tests, traceability back to requirements, client interviews and surveys, etc...)

  • imply who is doing the testing (developers, testers, customers, etc...)

  • imply some concept of what risks we might be looking for (a technical concern, how two parts interact, if we've implemented what the customer wants, etc...)

  • imply some concept of what areas of the application we are covering (code, interfaces, data, features, subsystems, etc...)

  • imply some concept of what oracles we might use in our testing (asserts, specifications, competitor or past products, etc...)


I'm sure they imply other things as well. (Let me know if you think of any I missed, and I'll add them to the list with attribution.)

I think this is important because while these labels can help, they can also hinder. They can confuse the issue we are talking about, since each term comes with its own baggage. Often, I find it easier to just talk about testing (oracles, coverage, risk, technique, etc...) as applied to specific problems than to talk in the abstract labels we often apply to them.
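To make that concrete, here's a minimal sketch of what I mean by naming the oracle and the coverage directly instead of reaching for a label. The code is hypothetical (the total_with_tax function and its expected values are made up for illustration), but it shows an automated check whose oracle is a simple assert against a hand-calculated value:

    import unittest

    # Hypothetical code under test: a sales-tax calculation.
    def total_with_tax(subtotal, tax_rate):
        return round(subtotal * (1 + tax_rate), 2)

    class TotalWithTaxChecks(unittest.TestCase):
        # Oracle: an assert comparing against a hand-calculated expected value.
        # Coverage: one zero-tax boundary case and one typical case.
        def test_zero_tax_rate_returns_subtotal(self):
            self.assertEqual(total_with_tax(100.00, 0.0), 100.00)

        def test_typical_tax_rate(self):
            self.assertEqual(total_with_tax(100.00, 0.07), 107.00)

    if __name__ == "__main__":
        unittest.main()

Whether you file that under "unit testing" or something else matters less to me than being able to say what risk it addresses and what oracle it relies on.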

I wonder if you've encountered the same issue.