Do Not Unit Test Third Party Frameworks
I'm stealing today's tip from Brendan Enrick. He has written a series of blog posts on tips for unit testing. In one of them, he talks about unit testing with third party frameworks:
Make sure that what you're testing is part of your code and not someone else's. If you're testing code you have no access to you better not be keeping that around as an automated test. You're wasting your time if you're creating automated tests for this code. If you find a bug in the code you can tell someone about it, but you probably can't fix the code. If it is an open source project you're testing then go test the code in their test library and fix issues you find. You will be doing them and yourself a favor. Don't clutter your own test library.

You can check out the rest of that post here. At the bottom, you'll see links to other posts he's written on the topic.
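One way to apply the tip is to put your own thin wrapper around the third-party code and test only the wrapper's logic, substituting a test double for the library underneath. A minimal sketch (the `PaymentClient` adapter and its endpoint are hypothetical, not from any real library):

```python
import unittest
from unittest import mock

class PaymentClient:
    """Our own adapter around an injected third-party HTTP client.

    The tests below exercise *our* logic (input validation, which
    endpoint gets called) and never the framework itself.
    """
    def __init__(self, http):
        self.http = http  # third-party client, injected so tests can fake it

    def charge(self, cents):
        if cents <= 0:
            raise ValueError("amount must be positive")
        return self.http.post("/charge", {"amount": cents})

class PaymentClientTest(unittest.TestCase):
    def test_rejects_non_positive_amounts(self):
        client = PaymentClient(http=mock.Mock())
        with self.assertRaises(ValueError):
            client.charge(0)

    def test_posts_to_charge_endpoint(self):
        http = mock.Mock()
        PaymentClient(http).charge(500)
        http.post.assert_called_once_with("/charge", {"amount": 500})
```

Run with `python -m unittest`. Note that nothing here asserts how the real HTTP client behaves; that's its maintainers' test suite, not yours.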
Tips for reviewing defects
We spend a lot of time talking about how to write good bug reports and how to advocate for issues you feel are important. But what if you're on the other side? What if you're reviewing defects and/or running the meeting where you assign priority and severity? What if you make the decision for what gets fixed?

Here are some tips that can help you make sense of what you're looking at. Note, these tips are focused on the way the bug report is written. It's difficult to talk about hard and fast rules for prioritization without knowing the specifics of the project, project team, and context. It's also not intended to be in any way a complete list. Just some of the things you might think about...

  • Try to identify any viewpoints that might be embedded in the defect report. If there is a bias to the report, don't let it unduly influence your decision on how severe the issue is. Ask for clarifications or more data if needed.

  • Ask yourself how your stakeholders would view the defect report. Switch viewpoints. What would they care about if they were in the room?

  • Look for inconsistencies between defect reports and ask for clarifications. Why might something work in one instance but not in another? What information is missing that can help you (and the team reviewing) make sense of those inconsistencies?

  • Try to understand the implications of the ticket. Look past what's reported and see if anything occurs to you about the nature of the bug. What does that mean for the project team?

  • Is there any key information omitted or glossed over? Is there summary data without a link to the detailed data? Try to find areas where more research might need to be done before conclusions are drawn.

Don't forget that some tests are open ended
A lot of testing literature talks about inputs and expected results. And that's all well and good, but you can't forget that there are multiple reasons to test and multiple ways to test, and that for some of them you can't accurately predict expected results. I find that a lot of early tests that happen in a project are focused on expected results. "We built the system to do X, does it do it?" However, over time the nature of the testing changes. It becomes more "what if" focused. Examples include, "How many users can we support on our current production configuration?" or "What happens to our users when that batch process runs?" or "What kind of errors might we see in the logs on a typical day in production?"
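An open-ended question like "what kind of errors might we see in the logs on a typical day?" often turns into a small exploratory tool rather than a pass/fail test. A minimal sketch, assuming a simple space-delimited log format (the format and error names are illustrative):

```python
from collections import Counter

def tally_errors(lines):
    """Count ERROR entries by error name.

    Assumed format: "<date> <time> <level> <name> <detail...>".
    There is no expected result here; the "test" is the question
    "what shows up, and in what proportions?"
    """
    counts = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 4 and parts[2] == "ERROR":
            counts[parts[3]] += 1
    return counts

sample = [
    "2024-01-15 10:00:01 ERROR ConnectionTimeout upstream db",
    "2024-01-15 10:00:02 INFO request served",
    "2024-01-15 10:00:05 ERROR ConnectionTimeout upstream db",
    "2024-01-15 10:00:09 ERROR NullReference in OrderService",
]
for name, count in tally_errors(sample).most_common():
    print(name, count)
```

The output isn't compared against an oracle; it's raw material for the tester to interpret, which is exactly what makes this kind of testing open ended.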
Developing headlines for bugs that get fixed
One component of defect reporting is bug advocacy. That is, working to get the bugs you feel are important fixed. To be an effective bug advocate, you need a variety of skills. One of them is the ability to write effective ticket headlines. To construct an effective headline, you need to know both your audience (the people who review and prioritize these bugs) and what they want fixed (what do they consider relevant, what do they care about, how do they picture themselves in the corporate world). It's this information that allows you to craft the appropriate headline or lead for a bug report.

When you're drafting your headline:

  • be specific

  • create an initial definition of the problem (or the possible problem) for the reader

  • don't draw conclusions

  • check your spelling and grammar

Understanding chain reaction causality and its effects on testing
Chain reaction causality is where A causes B, C, and D, which then cause E, F, G, H, I, J, and K. It's a case where you have an event that has one or more effects, which in turn could have zero, one, or more effects, which (again) in turn could have zero, one, or more effects. That can go on indefinitely. If you've ever tested a rules engine, then you're familiar with testing systems like this. One simple event can trigger a cascade of triggering events.

When testing chain reaction relationships, you often need to use various tools such as decision trees or decision tables, state diagrams or sequence diagrams, or spreadsheets (or other tools) that lay out the various rules and what triggers them. When testing systems like this, you often start with simple scenarios (like our linear tests, if A then B) and build them out over time - making them more and more complex. You'll see this lead to test beds of XML or big spreadsheets of pre- and post-conditions for testing.
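The trigger tables described above can be sketched directly in code. Here's a toy version using the A-through-K cascade from this section (the table and traversal are illustrative, not a real rules engine):

```python
from collections import deque

# A toy trigger table: each event maps to the events it causes.
TRIGGERS = {
    "A": ["B", "C", "D"],
    "B": ["E", "F"],
    "C": ["G", "H"],
    "D": ["I", "J", "K"],
}

def fire(event):
    """Return every event reached from `event`, in breadth-first order."""
    seen, queue, fired = set(), deque([event]), []
    while queue:
        e = queue.popleft()
        if e in seen:
            continue  # guard against cycles in the trigger table
        seen.add(e)
        fired.append(e)
        queue.extend(TRIGGERS.get(e, []))
    return fired

# Start with the simple linear case and build out from there.
assert fire("B") == ["B", "E", "F"]
# The full cascade from A should cover the whole chain.
assert set(fire("A")) == set("ABCDEFGHIJK")
```

Even a toy like this shows why testability features matter: the `fired` list is exactly the kind of visibility (a log of what triggered, in what order) you end up wanting from the real system.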

Recognizing when you have a chain reaction is a critical first step. It allows you to first try to decompose the problem into smaller parts. It's also often the case when testing this type of causality that you need to think about testability features. So much is happening that you feel like you need more visibility (logging, visualization, etc.) to help you understand what's happening while you're testing.

For more on this topic, see the section on scientific thinking and precision in the Miniature Guide on Scientific Thinking.