Testing early for the first time...

On a recent project where we were implementing a service-oriented architecture (for the first time on a large scale), we wanted to leverage the design of the system to test early. The plan was to have testers run local servers, get the latest complete code (after developers were done and everything was unit tested), and run component-level tests based on risks identified by the test team. We would use automation and manual testing to flush out edge conditions, mapping errors, and unhandled exceptions.
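To make that concrete, here is a minimal sketch (in Python) of the kind of component-level check we automated. The service, port, endpoint, and field names are hypothetical stand-ins, not details from the actual project:

    # Component-level checks against a locally running service.
    # The endpoint, port, and payload fields are illustrative assumptions.
    import requests

    BASE_URL = "http://localhost:8080/orders"  # assumed local dev server

    def test_unknown_order_returns_mapped_error():
        # Edge condition: an ID the service has never seen should come back
        # as a mapped error response, not an unhandled exception (HTTP 500).
        response = requests.get(BASE_URL + "/does-not-exist")
        assert response.status_code == 404
        assert "error" in response.json()

    def test_order_response_matches_mapping_document():
        # Mapping check: response field names should match the mapping
        # document, not internal database column names.
        response = requests.get(BASE_URL + "/1001")
        assert response.status_code == 200
        body = response.json()
        for field in ("orderId", "customerId", "status"):
            assert field in body, "missing mapped field: " + field

Checks like these stay tiny and independent, which is what makes them cheap to rerun against each new drop of code.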

On schedule, several of the developers indicated at a status meeting that their code was complete. I configured my local server to run the code and began my testing. Surprisingly (or perhaps not), this caused some problems. Developers were quick to come over and say things like:

  • "You can't test in a development environment!"

  • "We're not done with that functionality (even though we said we were)."

  • "I'm still unit testing, the code won't work until I'm done unit testing."

  • "We are working over 30 tickets, things can break at any time."

This was taken by some people on the project as a reason to stop our test-early, test-often strategy. I would argue each of these is exactly a reason why you would want someone testing in the development environment. Let's address them one at a time:

"You can't test in a development environment!"

Actually, I can. It's really easy: I just update the code, compile, and start a server. We have the technology. Is there some reason I shouldn't test in development? Isn't the whole idea to find issues earlier, when they are supposedly easier and cheaper to fix?

OK, that was my smart-ass inner voice. It's not what I actually said (but I really wanted to). I think what I really said was something like:


"I understand this is something we are doing for the first time on this release. Changes like this are painful as the team learns to deal with new ways of working. We need to test in development if we want to have any chance of hitting our release date. If we wait until all the development is done, we will need to do two months worth of testing in three weeks. We just can't do that.


"Can you help me make this easier on you? What can I do to lower the visibility this adds to your work, and the pressure that added visibility results in? How can I provide feedback to you in a way that doesn't cause problems for you?"


Once the developers (or most of them) accepted that we were going to do this testing, they started to warm up to it. After the first couple of weeks (very emotional weeks), we actually hit a groove. By the end of the release, we had very close relationships with many of the developers.

"We're not done with that functionality (even though we said we were)."

Our testing exposed a misunderstanding about project progress: what management thought the status was and what the status actually was did not match. We were told the functionality was done, but it was not. This enabled management to correct their picture of project status so they could plan and react accordingly. That, aside from all the testing-related issues identified, is valuable information. People hold meetings and purchase expensive tools to find out what we found out in five minutes of testing and asking questions.

Code does not lie. It works or it doesn't.

"I'm still unit testing, the code won't work until I'm done unit testing."

This is closely related to the point above. Up to now, developers had been in the habit of telling management they were done when they were done coding, not when they were done unit testing. This caused a perception problem. I also suspect this was a sneaky way of asking for more time in a politically safe way. Who's going to say you can't do more unit testing?

Some of them were reporting they were still unit testing when they were still writing code. They looked further along than they were. On the other hand, the developers who really were unit testing when we started our functional testing welcomed our scrutiny. "Find a bug if you can!" they said. Sometimes we did.

"We are working over 30 tickets, things can break at any time."

This is the "we break the build on a regular basis" argument. That's a fine argument, except that, like most teams, we're not supposed to break the build. Ever.

This statement means that developers regularly check out code, make changes, check it back in, and then test. This was a deeper problem: the developers should have been testing the code before they checked it back in, and some were not. There was an automatic expectation that checked-in code would fail, not that it would work. Note that this expectation belonged to the development team itself.

This problem solved itself as we started reporting those defects (normally the same day). The visibility our defect reports added to these kinds of issues tipped the team into a healthier workflow of testing before integration.
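For a team that wants to formalize that habit, the gate can be as small as a script run before every check-in. Here is a minimal sketch in Python; the pytest runner and the tests/component path are assumptions, not our actual setup:

    # Pre-check-in gate: run the component test suite and refuse to
    # proceed on failure. The runner and test path are assumed; substitute
    # whatever your team actually uses.
    import subprocess
    import sys

    result = subprocess.run(["pytest", "tests/component"])
    if result.returncode != 0:
        print("Tests failed. Do not check in yet.")
        sys.exit(result.returncode)
    print("Tests passed. OK to check in.")

The point is not the tooling; it is that the expectation flips from "checked-in code probably breaks" to "checked-in code has already passed its tests."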

When all was said and done, we had worked side by side with the development team for a little over two months. When our testing stopped, the traditional functional and system-level testing began. We had automated hundreds of tests at the component level. We had validated many of the mapping documents. We had become active participants in design meetings (not all of them, but enough that we felt like we had a small victory). And by the end of our testing, we had developers (again, not all of them) coming over to us and asking us to review their work. After the initial growing pains of the added visibility into the status of the code, most of the emotions died down, and we were able to release some great software.