Developer-to-tester ratios, developers becoming testers, and quality criteria as a framework for selling testing value

Earlier this week a developer consultant friend of mine asked me for some links to research on developer-to-tester ratios. Being a helpful guy, I provided him with a link to a fairly good presentation on the topic put together by Dana Spears and Susan Morgan. In a follow-up email, the consultant asked for something a bit more formal. He was working with an organization to get them started with a testing practice and was looking for a way to justify bringing testers on board. They had a large population of developers and no testers.

To this I responded that there was a lot of good research available online and that Dana could most likely point him in the right direction, but that I didn't see much value in the ratios themselves. I don't like ratios because they ignore too many variables. For example:

- Are the developers doing TDD or a lot of unit testing? (A sketch of what I mean follows this list.)
- Is there a fairly standard process being used?
- How long has the team been working together?
- How challenging is the software being developed?
- Is it new development or maintenance of an existing system?
- How many new people are added to projects?
- What type of peer review (code and design) process do the developers use?
- How much custom development is done (vs using packaged components - think SAP)?
- Are projects timeline-driven, cost-driven or quality-driven?
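
To make that first question concrete, here's a minimal sketch of the kind of developer-written, TDD-style unit test I have in mind. The DiscountCalculator class and its rules are invented purely for illustration, and JUnit 4 is assumed; the point is that a team with hundreds of checks like this has already covered ground that would otherwise fall to testers, which changes any ratio you might pick.

```java
// Hypothetical example: a developer-written unit test in the TDD style.
// DiscountCalculator is an invented class used only for illustration.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {

    // The production class would normally live in its own file;
    // it is inlined here so the sketch is self-contained.
    static class DiscountCalculator {
        double apply(double price, double percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent must be 0-100");
            }
            return price - (price * percent / 100.0);
        }
    }

    @Test
    public void tenPercentOffOneHundred() {
        assertEquals(90.0, new DiscountCalculator().apply(100.0, 10.0), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeDiscount() {
        new DiscountCalculator().apply(100.0, -5.0);
    }
}
```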

That list goes on and on. Based on how you answer those questions, I think the ratio changes in significant ways, ways I wouldn't even pretend to be smart enough to predict.

I also don't like ratios because they don't tell you what type of tester you need. Do you need performance testers or functional testers? Security testers or localization testers? It depends...

After that, he asked how I would approach the company. If I don't like ratios, what do I like?

Instead of ratios I prefer to look at quality goals and to talk about the skills and tactics that enable you to hit those goals. I used James Bach's Heuristic Test Strategy Model to make my point. For each item in the following list, does the organization value it, and to what degree?



Capability: Can it perform the required functions?

Reliability: Will it work well and resist failure in all required situations?
-- Error handling: the product resists failure in the case of errors, is graceful when it fails, and recovers readily.
-- Data Integrity: the data in the system is protected from loss or corruption.
-- Safety: the product will not fail in such a way as to harm life or property.

Usability: How easy is it for a real user to use the product?
-- Learnability: the operation of the product can be rapidly mastered by the intended user.
-- Operability: the product can be operated with minimum effort and fuss.
-- Accessibility: the product meets relevant accessibility standards and works with O/S accessibility features.

Security: How well is the product protected against unauthorized use or intrusion?
-- Authentication: the ways in which the system verifies that a user is who she says she is.
-- Authorization: the rights that are granted to authenticated users at varying privilege levels.
-- Privacy: the ways in which customer or employee data is protected from unauthorized people.
-- Security holes: the ways in which the system cannot enforce security (e.g. social engineering vulnerabilities).

Scalability: How well does the deployment of the product scale up or down?

Performance: How speedy and responsive is it?

Installability: How easily can it be installed onto its target platform(s)?
-- System requirements: Does the product recognize if some necessary component is missing or insufficient?
-- Configuration: What parts of the system are affected by installation? Where are files and resources stored?
-- Uninstallation: When the product is uninstalled, is it removed cleanly?
-- Upgrades: Can new modules or versions be added easily? Do they respect the existing configuration?

Compatibility: How well does it work with external components & configurations?
-- Application Compatibility: the product works in conjunction with other software products.
-- Operating System Compatibility: the product works with a particular operating system.
-- Hardware Compatibility: the product works with particular hardware components and configurations.
-- Backward Compatibility: the product works with earlier versions of itself.
-- Resource Usage: the product doesn't unnecessarily hog memory, storage, or other system resources.

Supportability: How economical will it be to provide support to users of the product?

Testability: How effectively can the product be tested?

Maintainability: How economical is it to build, fix or enhance the product?

Portability: How economical will it be to port or reuse the technology elsewhere?

Localizability: How economical will it be to adapt the product for other places?
-- Regulations: Are there different regulatory or reporting requirements over state or national borders?
-- Language: Can the product adapt easily to longer messages, right-to-left, or ideogrammatic script?
-- Money: Must the product be able to support multiple currencies? Currency exchange?
-- Social or cultural differences: Might the customer find cultural references confusing or insulting?
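
To make one of those criteria concrete, here's a minimal sketch of the kind of check a tester might write when Reliability (error handling and data integrity) is something the organization values. ImportService and its behavior are invented purely for illustration, and JUnit 4 is assumed; the tactic of feeding the product bad input and verifying that it fails gracefully without corrupting good data is the point, not the code itself.

```java
// Hypothetical example: a tester-written check probing error handling and
// data integrity. ImportService is an invented stand-in for a real system,
// used only to illustrate the tactic.
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

public class ImportErrorHandlingTest {

    // A deliberately tiny stand-in for the system under test.
    static class ImportService {
        private int recordCount = 0;

        int recordCount() { return recordCount; }

        void importLine(String line) {
            if (line == null || !line.contains(",")) {
                // Reject bad input without touching stored data.
                throw new IllegalArgumentException("malformed line: " + line);
            }
            recordCount++;
        }
    }

    @Test
    public void malformedInputIsRejectedWithoutCorruptingData() {
        ImportService service = new ImportService();
        service.importLine("id-1,valid record");

        try {
            service.importLine("garbage with no delimiter");
            fail("expected the malformed line to be rejected");
        } catch (IllegalArgumentException expected) {
            // Graceful failure: the error is reported, not swallowed.
        }

        // Data integrity: the earlier, valid record is still intact.
        assertEquals(1, service.recordCount());
    }
}
```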



If you value certain things on that list highly, then you may also value the skills and tactics that a tester brings to the project, since those are exactly the things testers specialize in.

When I talk with companies thinking about building (or changing) a testing practice, I ask where specifically they feel pain or perceive risk, and talk about how certain types of testers might help relieve some of that pain or risk. Then we talk about the cost of the testing investment and compare that to the relative value they would receive from relieving that pain or risk.
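
As a hedged illustration of that conversation, here's a back-of-the-envelope sketch of the comparison. Every number in it is an assumption I made up for the example; in a real engagement the figures would come from the organization's own defect and support data.

```java
// Hypothetical back-of-the-envelope comparison of testing cost vs. the value
// of risk relieved. Every number below is an invented assumption used only
// for illustration.
public class TestingInvestmentSketch {
    public static void main(String[] args) {
        double testerAnnualCost = 90_000.0;            // assumed fully loaded cost of one tester
        double escapedDefectsPerYear = 100.0;          // assumed current defect escape rate
        double averageCostPerEscapedDefect = 4_000.0;  // assumed support + rework cost per escape
        double assumedReduction = 0.35;                // assume testers catch 35% more pre-release

        double valueOfRiskRelieved =
                escapedDefectsPerYear * averageCostPerEscapedDefect * assumedReduction;

        System.out.printf("Tester cost:   %,.0f%n", testerAnnualCost);
        System.out.printf("Risk relieved: %,.0f%n", valueOfRiskRelieved);
        System.out.printf("Net (under these assumptions): %,.0f%n",
                valueOfRiskRelieved - testerAnnualCost);
    }
}
```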

I try to tie the work of testers to the business value they provide, the same way I would justify bringing in more developers.

This information excited my consulting friend. It was similar to what he felt the approach should be, but he didn't have a framework to articulate it. Armed with this, he felt better going down that road. His mind made up, he informed me that he would recommend that the "not-so-hot developers [be] made full-time testers, requirements gatherers, release helpers, etc."

The only thing that made me nervous about his approach was that last statement: "I'd rather see the not-so-hot developers made full-time testers..." It's been my experience that bad developers make bad testers. Many of the same skills and tactics that make a developer successful also make a tester successful.

When you make developers into testers, you run the risk of creating a self-reinforcing feedback loop that spirals in the wrong direction. (This holds true if you take not-so-hot testers and make them developers as well. But for some reason, I never hear about anyone trying that...)

Imagine the following scenario:

1) Organization doesn't value testing because it has a low opinion of testers and thinks developers can do the job better.

2) Organization decides to take a chance. It allocates some developers to be testers. It decides to start with the more junior, lower-skilled, or less critical developers.

3) Those developer-now-testers struggle in their new role for all the same reasons they struggled in their old role (they don't read, are slow to learn or adapt, have communication or personality issues, whatever...).

4) Organization notices that it is not getting increased value from adding testers; in fact, it is struggling to do even the little bit it did before testers were added. This lowers its opinion of testers even more.

5) The call goes out for more developers to become testers to help stabilize the experiment. Now that the negative perception of testers has been "confirmed", even worse candidates are sent over to become testers. If external candidates are hired, they are hired by developer-now-testers who don't know what a good tester looks like, so they hire people like themselves.

6) Those developer-now-testers struggle even more in their role. Now they also have the added pressure of the confirmation bias that the company and project teams display when interacting with them.

7) Organization notices that it is getting no increased value from adding testers. This lowers its opinion of testers even more. The experiment is over. They gave it a shot; it didn't work out. The earlier assumption that developers alone are enough is "confirmed." They knew it all along anyway, right?

Perhaps dramatic, but not far from true. I've been at several places that were in various stages of that very loop.

Instead, I recommended that he push for a few superstar testers on key projects. That's a small investment for the company; it's a proof of concept, not a wholesale change. If they see value from those superstars, those successful testers can then go out and hire other testers (who won't be superstars, but won't be paperweights either).

Imagine you were trying to get them to adopt XP as a methodology, or even just telling them to take up UML modeling as a practice for their projects. How would you do it? Would you tell them to just start implementing the XP practices, or to just install Rose? Or would you go find a few key people who have been there and done it to serve as change agents within the organization? That way they could demonstrate leadership, mentor others on the tools and techniques, and answer questions about the value they bring (answering from experience, not theory).

I don't know if this was the best advice or not. I imagine that others have vast experience in this area. But this is what I came up with in my ten minutes of email support to another consultant. It seemed useful enough that I thought I should post it to the blog.

If you would advocate a different approach, I would love to hear about it.