Software Testing Centers of Excellence (CoE)

This weekend we held the September session of the Indianapolis Workshops on Software Testing (IWST) at Butler University. The topic of the five-hour workshop was software testing Centers of Excellence (CoE). The participants in the workshop were the following:

  • Andrew Andrada

  • Patrick Beeson

  • Howard Clark

  • Matt Dilts

  • Randy Fisher

  • Mike Goempel

  • Rick Grey

  • James H. Hill

  • Michael Kelly

  • Panos Linos

  • Natalie Mego

  • Hal Metz

  • Patrick Milligan

  • Charles Penn

  • Brad Tollefson

  • Bobby Washington

  • Tina Zaza


We started the workshop by going around the room and asking each person to comment on what they thought a Center of Excellence was and what their experience with the topic had been. Only a handful of people had either worked in a formal Center of Excellence or had experience building one out. The overwhelming feeling in the room was one of "I'm here to learn more about what people mean when they use that term" and "It all sounds like marketing rubbish to me." Okay, perhaps the rubbish part was me embellishing, but I think others besides me thought it - even if they didn't phrase it that way.

The first experience report came from me. I briefly presented Dean Meyer's five essential systems for organizations and shared some experiences of how I've used that model to help a couple of clients build out or fix their testing Centers of Excellence. I use the ISCMM mnemonic to remember the five systems:

  • Internal Economy: how money moves through the organization

  • Structure: the org chart

  • Culture: how people interact with one another, and what they value

  • Methods and Tools: how people do their work

  • Metrics and Rewards: how people are measured and rewarded


If you're not familiar with Meyer's work, I recommend his website or any of his short but effective books on the topic.

I didn't really provide any new insights into how to use the five systems. If you read the books, or review the material on the website, you'll see that Meyer uses these systems to help diagnose and fix problems within organizations. That's how I use them as well - I just focus them on problems in testing organizations. I then provided some examples of each system from past clients.

Meyer also spends a good deal of time talking about "products." A product is what your group offers to the rest of the organization. In a testing CoE, that might be products around general testing services, performance testing, security testing, usability testing, or test automation. Or it might be risk assessments, compliance audits, or other areas that sometimes tie in closely with the test organization. I personally use this idea of products as a quick test for identifying a CoE.

Meyer defines products as "things the customer owns or consumes." In his article on developing a service catalog, he points out that:
"...an effective catalog describes deliverables -- end results, not the tasks involved in producing them. Deliverables are generally described in nouns, not verbs. For example, IT sells solutions, not programming."

I believe that if your organization does not offer clear testing products, then it's not a CoE. It's just an organization that offers staff augmentation in the area of software testing. There is no technical excellence (in the form of culture, methods and tools, or metrics and rewards) that it brings to bear in order to deliver. To me, the term Center of Excellence implies that the "center" - that is, the organization that has branded itself as excellent in some way - has some secret formula that it bakes into its products, and it delivers that excellence to the rest of the organization by delivering those products.
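
To make that "products, not tasks" test concrete, here's a minimal sketch of what a testing CoE's catalog entries might look like as data. The product names and owning teams are invented examples for illustration - they're not from Meyer's writing or any client's actual catalog:

    # Each catalog entry names a deliverable the customer owns or consumes
    # (a noun), not an activity (a verb). Entries are invented examples.
    catalog = [
        {"product": "Performance test report",     "offered_by": "Performance testing team"},
        {"product": "Automated regression suite",  "offered_by": "Test automation team"},
        {"product": "Release risk assessment",     "offered_by": "Test consulting team"},
    ]

    # A quick smell test: if most entries read like tasks ("execute tests",
    # "write scripts"), you're cataloging staff augmentation, not products.
    for entry in catalog:
        print(f'{entry["product"]} (offered by {entry["offered_by"]})')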

After my experience report, Randy Fisher offered up his experiences with vendor selection criteria. Randy's company (a large insurance company) is going through the process of deciding whether they should build a CoE themselves or engage a vendor to help them build out the initial CoE. For Randy and his team, the business case for moving toward a CoE is to leverage strategic assets (people, process, and technology) to achieve operational efficiencies, reduce cost, improve software quality, and address business needs more effectively across all lines of business.

Randy and his team started with an evaluation pool of several vendors and, using the following weighted criteria, narrowed that list down to two key vendors (a sketch of the weighted-scoring arithmetic follows the list):

  • Understanding of company’s objectives

  • Test Process Improvement (TPI) Strategy

  • Assessment phase duration

  • Output from Assessment phase

  • Metrics/Benchmarking

  • Experience in co-location

  • Risk Based Testing Approach

  • Standards, Frameworks, Templates

  • Consulting Cost

  • Expected ROI

  • Expected Cost Reduction

  • Special Service Offerings/Observations
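
The arithmetic behind a weighted evaluation like this is simple: score each vendor against each criterion, multiply each score by the criterion's weight, and rank vendors by the total. Here's a minimal Python sketch of that calculation. The weights, scores, and vendor names are invented placeholders over a subset of the criteria above - they are not Randy's actual figures:

    # Hypothetical weights over a subset of the criteria above (sum to 1.0).
    weights = {
        "Understanding of company's objectives": 0.40,
        "Expected ROI": 0.30,
        "Consulting Cost": 0.20,
        "Risk Based Testing Approach": 0.10,
    }

    # Hypothetical raw scores per vendor, 1 (poor) through 5 (excellent).
    scores = {
        "Vendor A": {"Understanding of company's objectives": 4, "Expected ROI": 3,
                     "Consulting Cost": 5, "Risk Based Testing Approach": 4},
        "Vendor B": {"Understanding of company's objectives": 5, "Expected ROI": 4,
                     "Consulting Cost": 3, "Risk Based Testing Approach": 3},
        "Vendor C": {"Understanding of company's objectives": 2, "Expected ROI": 3,
                     "Consulting Cost": 4, "Risk Based Testing Approach": 2},
    }

    def weighted_total(vendor_scores):
        # Multiply each criterion score by its weight and sum the results.
        return sum(weights[c] * s for c, s in vendor_scores.items())

    # Rank vendors by weighted total and keep the top two as the short list.
    ranked = sorted(scores, key=lambda v: weighted_total(scores[v]), reverse=True)
    print(ranked[:2])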


After this initial evaluation, Randy offers the following advice for those who are looking to undertake a similar exercise:

  1. Have specific objectives in mind, based on your organization, when you meet with the vendors (this list contains a sampling of what Randy used…)

    • Create a benchmark (internally across systems and with peers in the industry) to facilitate ongoing measurement of organizational test maturity

    • Develop a roadmap for testing capability and maturity improvement

    • Leverage experience and test assets, including standards, frameworks, templates, tools, etc.

    • Assess the use of tools, and perform a gap analysis to determine the need for additional tooling

    • Define touch points and handoffs between various groups (upstream/downstream) as they relate to testing

    • Assess test environments and create the appropriate standards and tools to address preparation, setup & maintenance

    • Utilize the vendor's knowledge (both functional and insurance-domain) to facilitate the creation of an enterprise test bed and test data management process

    • Assist with improving capacity planning for test teams

    • Document the test strategy and process differences between groups



  2. Choose your selection criteria based on the factors that are important to you – nobody knows you like you do...

  3. Talk to as many vendors as you can.

  4. Don’t be afraid to negotiate cost and participation level for the engagement.


During the discussion that followed Randy's experience report, there were some interesting questions about his goals - that is, what pain are they trying to solve by moving to a CoE? Randy indicated that predictability (timelines, quality, etc.) was a big factor from a project perspective. He also indicated that he wanted his testers to have better tools for knowledge sharing. At the end of the day, he hopes a CoE makes it easier for them to do their jobs. Hal Metz had an interesting insight: for him, the goal should be to create an organization that enables testers to increase their reputation (through either technical expertise or the ability to deliver).

After Randy's experience report, Howard Clark shared an actual example of a slide deck he helped a client prepare to sell a test automation CoE internally. The slide deck walked through, step by step, what the executive would need to address and how building out the CoE would add value in their environment. I'd LOVE to share the slides, but can't. Howard has committed to distilling those slides down into either a series of posts on his blog or a sanitized set of slides. Once I have more info, I'll post an update here.

Either way, I think Howard's talk did a great job of moving the conversation from the abstract to the specific. This was a real business case for why the client should build a CoE, what it should look like, and what the challenges would be. I liked it because it used the client's language and addressed their specific concerns. That's one reason why I'm sort of glad he can't share the slides: the deck is so specific that it would be a tragedy for someone to pull it down and try to use it in their own context.

That idea - that CoEs are always specific to a particular company's context - was something Howard tried to hammer home throughout the day in his questions and comments. I think it's a critical point. No matter what you think a CoE is, it's likely different from company to company. And that's good. But it creates a fair bit of confusion when we talk about CoEs.

Finally, when we were all done presenting, Charles Penn got up and summarized some of the trends he noticed across the various talks and discussions. In no particular order (and in my words, not his):

  • Building out a CoE almost necessitates a librarian role (formal or informal): someone who owns tagging and organizing all the documents, templates, and other information. It's not enough to define it and collect it - someone has to manage it. (Some organizations call this role a knowledge manager.)

  • CoE seems largely to just be a marketing term. It means whatever you want it to mean.

  • There seems to be a desire to keep ownership of CoEs internal to the company.

  • There are assorted long-term effects of moving toward a CoE model, and those need to be taken into account when the decision is made. It's not a six-month decision; it's a multi-year decision.

  • There seem to be A LOT of "scattered" testers - that is, testers who are geographically dispersed within the various companies discussed. A large focus of the CoE model seems to be finding ways to deal with that problem.


There were more, but I either didn't capture them or couldn't find a way to effectively share them without a lot of context.

All said and done, it was a great workshop. We had excellent attendance, and Butler was a great host. I hope they have us back for future workshops. We now need to start planning for 2011. Our current thinking is to hold around four workshops. We already have one topic selected, given the amount of energy around it (teaching software testing - I'll need to let the WTST people know we're doing a session on that), but that leaves three workshops currently up in the air. I'd like to try one on testing in Rails, but given how the one earlier this year fell flat, perhaps that's not a good topic.

If you'd like to know more about IWST, check out the website: www.IndianapolisWorkshops.com

If you'd like to participate next year or have ideas for a topic, drop me a line: mike@michaeldkelly.com