Understanding negative feedback causality and its effects on testing
Negative feedback causality is where A causes B, and B has a counteracting effect on A. Over time, the two develop an equilibrium. A testing example of this might be an application server's load distribution mechanism. The more requests a specific server in the cluster gets (event A), the higher its load profile. The higher a server's load profile, the less attractive it is to the load distribution algorithm (event B), so it gets fewer requests. Over time, a theoretical balance in load is reached. In practice, the system continues to change, constantly taking both A and B into account as it works toward equilibrium.
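To make the equilibrium concrete, here's a minimal sketch of that kind of system. It's a toy least-loaded router, not a real load balancer; the server count, request count, and decay rate are all invented for illustration:

```python
def simulate(num_servers=3, requests=10_000, decay=0.95):
    """Toy least-loaded router. Serving a request raises a server's
    load (event A); higher load makes the server less attractive to
    the router (event B), so it gets picked less often."""
    load = [0.0] * num_servers
    for _ in range(requests):
        target = min(range(num_servers), key=lambda i: load[i])
        load[target] += 1.0                  # A: a request raises load
        load = [x * decay for x in load]     # load drains between requests
    return load

loads = simulate()
spread = max(loads) - min(loads)
# The negative feedback drives the servers toward roughly equal load,
# so the spread between busiest and idlest server stays small.
```

Running it, the per-server loads settle into a narrow band rather than one server running away with all the traffic, which is the equilibrium the prose describes.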

If I suspect A and B have a negative feedback relationship, then my test cases look very much like they do when I suspect a circular relationship. I'll come up with test cases where I control the starting states of both A and B, I'll need to verify both A and B, and all the same measurement problems come into play. Again, test setup and observation have to take these problems into account.

For more on this topic, see the section on scientific thinking and precision in the Miniature Guide on Scientific Thinking.
Understanding circular causality and its effects on testing
Circular causality is where A causes B, and B causes more A, which causes more B, etc... It's a classic example of a positive feedback loop. Once set in motion, it continues, with A and B growing over time. A testing example of this might be load testing a news website that has a "most popular stories" widget. If a news story is viewed X number of times, it becomes popular. If a story is in the "most popular stories" widget, people are more likely to view it. This makes it more popular. Etc...
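A minimal sketch of that loop, with invented numbers (base views per round, boost factor) purely to show the compounding:

```python
def popularity_loop(steps=30, base_views=10, boost=0.1):
    """Each round: views make the story more popular (A -> B), and
    popularity attracts extra views on top of the base (B -> A)."""
    views = float(base_views)
    history = [views]
    for _ in range(steps):
        views += base_views + boost * views   # B feeds back into A
        history.append(views)
    return history

h = popularity_loop()
growth = [later - earlier for earlier, later in zip(h, h[1:])]
# Unlike negative feedback, nothing counteracts the loop: each
# round's growth is larger than the last.
```

The per-round growth strictly increases, which is the signature of a positive feedback loop as opposed to the settling behavior of negative feedback.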

If I suspect A and B have a circular relationship, then I'll come up with test cases for:

  • where A is some known input, and B is at some known starting state, verify B is some expected result and verify A is affected according to some expected result

  • where A is close to some possible limit, and B is at some known starting state, verify B is some expected result and verify A is affected according to some expected result

  • where A is some known input, and B is at some known starting state close to some possible limit, verify B is some expected result and verify A is affected according to some expected result

  • where A might be a stressful input, and B is at some known starting state, verify B is some expected result and verify A is affected according to some expected result

  • where A is some known input, and B is at some known stressful starting state, verify B is some expected result and verify A is affected according to some expected result

  • where A is ... etc...
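The pattern behind those bullets can be sketched as a table-driven check where both A's input and B's starting state are controlled, and both are verified afterward. The system under test here (`run_system`, its boost factor, and all the expected values) is entirely hypothetical:

```python
def run_system(a_input, b_start, boost=2):
    """Hypothetical circular system: A drives B, and B feeds back into A."""
    b_after = b_start + a_input           # A causes B
    a_after = a_input + boost * b_after   # B causes more A
    return a_after, b_after

# Each case fixes A's input AND B's starting state, then verifies
# both A and B afterward -- mirroring the bullet list above.
cases = [
    # (a_input,  b_start,   expected_a, expected_b)
    (10,         0,         30,         10),         # known input, known state
    (1_000_000,  0,         3_000_000,  1_000_000),  # A near a possible limit
    (10,         1_000_000, 2_000_030,  1_000_010),  # B near a possible limit
]
for a_input, b_start, expected_a, expected_b in cases:
    a_after, b_after = run_system(a_input, b_start)
    assert (a_after, b_after) == (expected_a, expected_b)
```

The key difference from a linear test table is that every row carries two controlled columns (A's input and B's starting state) and two verified columns, not one of each.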


If you contrast this with linear causality, you can see that we now have to control both inputs A and B, as well as look at the effects on both A and B. There's also a potential measurement problem with circular causality. If the feedback loop is fast, you might get multiple iterations of the loop before you have time to measure the effects. So test setup and observation have to take this into account.
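The measurement problem can be sketched too: if the loop iterates faster than you can observe it, every measurement already contains many interactions. The dynamics below are invented for illustration:

```python
def feedback_step(a, b):
    """One fast iteration of the A <-> B loop (hypothetical dynamics)."""
    return a + 0.1 * b, b + 0.1 * a

a, b = 1.0, 1.0
snapshots = []
for step in range(1, 31):
    a, b = feedback_step(a, b)
    if step % 10 == 0:   # suppose we can only observe every 10th iteration
        snapshots.append((step, a, b))
# Each snapshot is separated by ten interactions of A and B, so the
# change between snapshots can't be attributed to a single
# cause-and-effect pass through the loop.
```

This is why observation strategy matters here: either slow the loop down in test, or accept that each measurement aggregates many iterations and design your expected results accordingly.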

Understanding linear causality and its effects on testing
Linear causality is where A causes B, but B has no effect on A. Establishing causality in systems can be more difficult than it might look. For example, I can assume that if I click Search on Google's search page, the results that are then shown to me are caused by my clicking Search. Clicking the Search button is event "A." It causes the results to appear, event "B." In this case, I believe that B does not affect A. But I don't really know that. What if the results shown to me depend on how many times I've clicked the Search button? For example, what if Google decides to charge for searches and starts to limit its free search results based on how often you search?

This is important because when we assume we have linear causality, it affects our testing. If I suspect A causes B, then I'll come up with test cases for:

  • where A is some known input, verify B is some expected result

  • where A is close to some possible limit, verify B is some expected result

  • where A might be a stressful input, verify B is some expected result

  • where A is ... etc...
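Those bullets map naturally onto a table-driven test where only A varies. The search function here is a made-up stand-in for whatever actually maps A to B:

```python
def search(query, page_size=10):
    """Hypothetical linear system: query in (A), results out (B),
    and B has no effect back on A."""
    corpus = [f"{query} result {i}" for i in range(25)]
    return [doc for doc in corpus if query in doc][:page_size]

# With assumed linear causality, each case varies only input A
# and verifies only output B -- one controlled column, one verified.
cases = [
    ("cats", 10),        # A is some known input
    ("x" * 2048, 10),    # A close to a possible length limit
    ("", 10),            # A as a stressful / degenerate input
]
for query, expected_count in cases:
    assert len(search(query)) == expected_count
```

Compare this with the circular case: no starting state for B to control, and nothing about A to re-verify afterward.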


In a linear causality relationship, we often only vary the A input. And if it really is a linear relationship, that's appropriate: our expectations for B depend on our A input, and you wouldn't spend time controlling for B's impact on A. In many ways, we have to be able to assume linear causality for many items. We have a lot to test, and understanding relationships like this allows us to move efficiently from test to test. You can't take the time to question causality for each and every relationship. It wouldn't be practical.

However, if you don't know for sure that something's a linear relationship (and many things aren't; in future posts we'll look at circular, negative feedback, and chain reaction causality), then it might be worth some time investigating. It's been my experience that a lot of really critical bugs get missed because of this assumption. You can address it a number of ways. You can talk to people on the project team, making your assumptions around causality explicit and seeing if anyone refutes you. Or you can do some sampling and run some tests to show that something is in fact linear (that option is more expensive, so use it sparingly).

Logging a politically difficult bug
On some projects, the word defect is another word for politics. If you've never been on one of those projects, you're lucky. But they exist. If you find yourself on one of those teams, or even if you're not on a politically charged team but still have to log a difficult issue, you might consider the following tips:

  • Restrict your claims about the defect to those that can be supported by the data you have. Don't do too much editorializing in the ticket. Just list out what you know, what you don't know, and let the team figure out what to do from there.

  • Be sure that before you log the ticket you spend some time searching for data that refutes the idea that the issue is in fact a defect. Try to prove yourself wrong. Really do some digging.

  • Make sure that all the data you list is relevant to the issue you've logged. Don't make it easy for someone to ignore your issue because it's either not clear why some data is there or because it includes a subset of information that can be trivialized. People will choose to fight the weakest set of data you include, so make sure it's all strong and speaks for itself.

  • Make sure you have enough data for the issue you've logged. For example, if you're reporting a performance issue, don't include just one test run. That's not enough. You'll need to establish a pattern of behavior for issues like that.

Understanding your point of view (persona, etc...)
All testing is done from a point of view. Some teams call these personas. For each round of testing you're doing, it can be helpful to explicitly state your point of view up front. Are you taking the point of view of the customer? The point of view of a sys admin? The point of view of a developer who's going to need to triage an issue in production? The point of view of a hacker? Whatever your point of view is, try to keep it clear in your mind as you're testing. If you're keeping test notes, write it down in your notes.

Each point of view will have different strengths and weaknesses based on what you're testing. After you're done testing, ask yourself what you would have done differently (if anything) based on a different point of view. If you feel you might have missed something important, capture that test idea somewhere.