# Understanding linear causality and its effects on testing

Linear causality is where A causes B, but B has no effect on A. Establishing causality in systems can be more difficult than it looks. For example, I can assume that if I click Search on Google's search page, the results then shown to me are caused by me clicking Search. Clicking the Search button is event "A." It causes the results to appear, event "B." In this case, I believe that B does not affect A. But I don't really know that. What if the results shown to me are dependent on how many times I've clicked the Search button? For example, what if Google decides to charge for searches and starts to limit its free search results based on how often you search?


This is important because when we assume we have linear causality, it affects our testing. If I suspect A causes B, then I'll come up with test cases for:

- where A is some known input, verify B is some expected result
- where A is close to some possible limit, verify B is some expected result
- where A might be a stressful input, verify B is some expected result
- where A is ... etc...
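The pattern above can be sketched as a table-driven test. This is a minimal illustration, not any particular team's suite; `word_count` is a hypothetical stand-in for whatever system maps input A to result B.

```python
def word_count(text: str) -> int:
    """Hypothetical system under test: input text (A) causes a count (B)."""
    return len(text.split())

# Each case varies only the A input; the expected B follows from A alone.
cases = [
    ("hello world", 2),        # A is some known input
    ("", 0),                   # A is close to a possible limit (empty input)
    ("x " * 10_000, 10_000),   # A might be a stressful input
]

for a_input, expected_b in cases:
    assert word_count(a_input) == expected_b
```

Notice that nothing here controls for B feeding back into A; the test's validity rests entirely on the linearity assumption.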

In a linear causality relationship, we often only vary the A input. And if it really is a linear relationship, that's appropriate. Our expectations for B are dependent on our A input. You wouldn't spend time controlling for B's impact on A in a linear relationship. In many ways we have to be able to assume linear causality for many items. We have *a lot* to test, and understanding relationships like this allows us to move efficiently from test to test. You can't take the time to question causality for each and every relationship. It wouldn't be practical.

However, if you don't know for sure that something's a linear relationship (and many things aren't; in future posts we'll look at circular, negative feedback, and chain reaction causality), then it might be worth some time investigating. It's been my experience that a lot of really critical bugs get missed because of this assumption. You can address it a number of ways. You can talk to people on the project team, making your assumptions around causality explicit and seeing if anyone refutes you. Or you can do some sampling and run some tests to show that something is in fact linear (that option is more expensive, so use it sparingly).
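One cheap way to sample for linearity is to repeat the exact same A input several times and check that B never drifts. If B depends on call history (i.e., B feeds back into A's context), repeated identical inputs will eventually produce different outputs. This is only a sketch; `word_count` is again a hypothetical stand-in for the system under test, and a handful of runs can suggest but never prove linearity.

```python
def word_count(text: str) -> int:
    """Hypothetical system under test."""
    return len(text.split())

def looks_linear(func, a_input, runs: int = 5) -> bool:
    """Probe for hidden feedback: repeated identical inputs
    should yield identical outputs if B has no effect on A."""
    results = {func(a_input) for _ in range(runs)}
    return len(results) == 1

# The same A input, repeated, should keep producing the same B.
assert looks_linear(word_count, "the same query every time")
```

A counter that rationed results by usage, like the hypothetical paid-search example earlier, would fail this probe after enough runs.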

For more on this topic, see the section on scientific thinking and precision in the Miniature Guide on Scientific Thinking.