Software Testing Centers of Excellence (CoE)
This weekend we held the September session of the Indianapolis Workshops on Software Testing (IWST) at Butler University. The topic of the five-hour workshop was software testing Centers of Excellence (CoE). The participants in the workshop were the following:

  • Andrew Andrada

  • Patrick Beeson

  • Howard Clark

  • Matt Dilts

  • Randy Fisher

  • Mike Goempel

  • Rick Grey

  • James H. Hill

  • Michael Kelly

  • Panos Linos

  • Natalie Mego

  • Hal Metz

  • Patrick Milligan

  • Charles Penn

  • Brad Tollefson

  • Bobby Washington

  • Tina Zaza


We started the workshop by going around the room and asking each person to comment on what they thought a Center of Excellence was and what their experience was with the topic. In general, only a handful of people had either worked in a formal Center of Excellence or had experience building one out. The overwhelming feeling in the room was one of "I'm here to learn more about what people mean when they use that term" and "It all sounds like marketing rubbish to me." Okay, perhaps the rubbish part was me embellishing, but I think others besides me thought it - even if they didn't phrase it that way.

The first experience report came from me. I briefly presented Dean Meyer's five essential systems for organizations and shared some experiences of how I've used that model to help a couple of clients build out or fix their testing Centers of Excellence. I use the ISCMM mnemonic to remember the five systems:

  • Internal Economy: how money moves through the organization

  • Structure: the org chart

  • Culture: how people interact with one another, and what they value

  • Methods and Tools: how people do their work

  • Metrics and Rewards: how people are measured and rewarded


If you're not familiar with Meyer's work, I recommend his website or any of his short but effective books on the topic.

I didn't really provide any new insights into how to use the five systems. If you read the books, or review the material on the website, you'll see that Meyer uses these systems to help diagnose and fix problems within organizations. That's how I use them as well - I just focus them on problems in testing organizations. I then provided some examples of each from past clients.

Meyer also spends a good deal of time talking about "products." A product is what your group offers to the rest of the organization. In a testing CoE, that might be products around general testing services, performance testing, security testing, usability testing, or test automation. Or it might be risk assessments, compliance audits, or other areas that sometimes tie in closely with the test organization. I personally use this idea of products as a quick test for identifying a CoE.

Meyer defines products as "things the customer owns or consumes." In his article on developing a service catalog, he points out that:
"...an effective catalog describes deliverables -- end results, not the tasks involved in producing them. Deliverables are generally described in nouns, not verbs. For example, IT sells solutions, not programming."

I believe that if your organization does not offer clear testing products, then it's not a CoE. It's just an organization that offers staff augmentation in the area of software testing. There is no technical excellence (in the form of culture, methods and tools, or metrics and rewards) that it brings to bear in order to deliver. To me, the term Center of Excellence implies that the "center" - that is, the organization which has branded itself as excellent in some way - has some secret formula that it bakes into its products. It then delivers that excellence to the rest of the organization by delivering those products.

After my experience report, Randy Fisher offered up his experiences on vendor selection criteria. Randy's company (a large insurance company) is going through the process of deciding if they should build a CoE themselves, or if they should engage a vendor to help them build out the initial CoE. For Randy and his team, the business case for moving toward a CoE is to allow them to leverage strategic assets (people, process, and technology) to achieve operational efficiencies, reduce cost, improve software quality, and address business needs more effectively across all lines of business.

Randy and his team started with an evaluation pool of several vendors and, using the following weighted criteria, narrowed that list down to two key vendors (a rough scoring sketch follows the list):

  • Understanding of company’s objectives

  • Test Process Improvement (TPI) Strategy

  • Assessment phase duration

  • Output from Assessment phase

  • Metrics/Benchmarking

  • Experience in co-location

  • Risk Based Testing Approach

  • Standards, Frameworks, Templates

  • Consulting Cost

  • Expected ROI

  • Expected Cost Reduction

  • Special Service Offerings / Observations
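
If you've never run one of these evaluations, the mechanics of weighted scoring are simple enough. Here's a minimal sketch in Python; the weights, vendors, and 1-5 scores are made up purely for illustration (none of these are Randy's actual weights or numbers):

```python
# Hypothetical weighted-scoring sketch. The criteria mirror a few items from
# the list above, but the weights, vendors, and 1-5 scores are invented for
# illustration only.
weights = {
    "Understanding of company's objectives": 10,
    "TPI strategy": 8,
    "Assessment phase duration": 5,
    "Consulting cost": 9,
    "Expected ROI": 9,
}

# Each vendor gets a 1-5 score per criterion (again, invented).
scores = {
    "Vendor A": {"Understanding of company's objectives": 4, "TPI strategy": 3,
                 "Assessment phase duration": 5, "Consulting cost": 2, "Expected ROI": 4},
    "Vendor B": {"Understanding of company's objectives": 5, "TPI strategy": 4,
                 "Assessment phase duration": 3, "Consulting cost": 3, "Expected ROI": 3},
}

def weighted_total(vendor_scores):
    """Sum of (criterion weight x vendor score) across all criteria."""
    return sum(weights[criterion] * score for criterion, score in vendor_scores.items())

# Rank vendors by weighted total, highest first.
for vendor, vendor_scores in sorted(scores.items(),
                                    key=lambda kv: weighted_total(kv[1]),
                                    reverse=True):
    print(f"{vendor}: {weighted_total(vendor_scores)}")
```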


After this initial evaluation, Randy offers the following advice for those looking to undertake a similar exercise:

  1. Have specific objectives in mind based on your organization when you meet with the vendors (this list contains a sampling of what I used…)

    • Create a benchmark (internally across systems and with peers in the industry) to facilitate ongoing measurement of organizational test maturity

    • Develop a roadmap for testing capability and maturity improvement

    • Leverage experience and test assets, including standards, frameworks, templates, tools, etc.

    • Assess the use of tools and perform a gap analysis to determine the need for additional tooling

    • Define touch points and handoffs between various groups (upstream/downstream) as they relate to testing

    • Assess test environments and create the appropriate standards and tools to address preparation, setup & maintenance

    • Utilize the knowledge of the vendor (both functional and insurance-domain) to facilitate the creation of an enterprise test bed and test data management process

    • Assist with the improvement of capacity planning for test teams

    • Document the test strategy and process differences between groups



  2. Choose your selection criteria based on the factors that are important to you – nobody knows you like you do...

  3. Talk to as many vendors as you can.

  4. Don’t be afraid to negotiate cost and participation level for the engagement.


During the discussion that followed Randy's experience report, there were some interesting questions asked about his goals. That is, what pain are they trying to solve by moving to a CoE? Randy indicated that predictability (times/dates, quality, etc.) was a big factor from a project perspective. He also indicated that he wanted his testers to have better tools for knowledge sharing. At the end of the day, he hopes a CoE makes it easier for them to do their jobs. Hal Metz had an interesting insight that, for him, the goal should be to create an organization that enables the testers to increase their reputation (either through technical expertise or ability to deliver).

After Randy's experience report, Howard Clark shared an actual example of a slide deck he helped a client prepare to sell a test automation CoE internally. The slide deck walked through, step by step, what the executive would need to address and how building out the CoE would add value in their environment. I'd LOVE to share the slides, but can't. Howard has committed to distilling those slides down into either a series of posts on his blog or a sanitized set of slides. Once I get more info, I'll post an update here.

Either way, I think Howard's talk did a great job of moving the conversation from the abstract to the specific. This was a real business case for why they should build one, what it should look like, and what the challenges would be. I liked it because it used the client's language and addressed their specific concerns. That's one reason why I'm sort of glad he can't share the slides. The deck is so specific that it would be a tragedy for someone to pull it down and try to use it in their own context.

That idea - that CoEs are always specific to a particular company's context - was something Howard tried to drive home throughout the day in his questions and comments. I think it's a critical point. No matter what you think a CoE is, it's likely different from company to company. And that's good. But it creates a fair bit of confusion when we talk about CoEs.

Finally, when we were all done presenting, Charles Penn got up and presented a summary of some of the trends he noticed across the various talks and discussion. In no particular order (and in my words, not his):

  • Building out a CoE almost necessitates the role (formal or informal) of a librarian: someone who owns tagging and organizing all the documents, templates, and other information. It's not enough just to define it and collect it - someone has to manage it. (Some organizations call them knowledge managers.)

  • CoE seems largely to just be a marketing term. It means whatever you want it to mean.

  • There seems to be a desire to keep ownership of CoEs internal to the company.

  • There are assorted long-term effects of moving toward a CoE model, and those need to be taken into account when the decision is made. It's not a 6-month decision; it's a multi-year decision.

  • There seem to be A LOT of "scattered" testers. That is, testers who are geographically dispersed within the various companies discussed. A large focus of the CoE model seems to be finding ways to deal with that problem.


There were more, but I either didn't capture them or couldn't find a way to effectively share them without a lot of context.

All said and done, it was a great workshop. We had excellent attendance and Butler was great. I hope they have us back for future workshops. We now need to start the planning for 2011. Our current thoughts are for around four workshops. We already have one topic selected given the amount of energy for the topic (teaching software testing - I'll need to let the WTST people know we are doing a session on that), but that leaves three workshops currently up in the air. I'd like to try to do one on testing in Rails, but given how the one earlier this year fell flat, perhaps that's not a good topic.

If you'd like to know more about IWST, check out the website: www.IndianapolisWorkshops.com

If you'd like to participate next year or have ideas for a topic, drop me a line: mike@michaeldkelly.com
Managing focus when doing exploratory testing
Last weekend we held the final Indianapolis Workshops on Software Testing for 2009. The topic for the workshop was 'managing focus when doing exploratory testing.' The attendees for the workshop included:

  • Andrew Andrada

  • James Hill

  • Sreekala Kalidindi

  • Michael Kelly

  • Brett Leonard

  • Brad Tollefson

  • Christina Zaza


We opened the workshop with a presentation from me. I shared some tips for alternating between different polarities to help manage focus. The polarities discussed primarily come from Bach, Bach, and Bolton and their work captured in Exploratory Testing Dynamics. In the talk, I gave some examples of using polarities. Those included:

  • explicitly defining polarities in your mission

  • using polarities to help generate more test ideas

  • using polarities as headings or sub-headings when taking session notes

  • using polarities when pairing with other testers

  • focusing on specific polarities using time-boxes


Not much of it is that groundbreaking, but when I put it all together, it seemed like an interesting angle on how one might use the polarities explicitly in their testing. If you end up trying any of them, I'd be interested in hearing how it works. I've found those exercises to be helpful with my testing.
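
To make the time-box idea a bit more concrete, here's a rough sketch of what an alternating, time-boxed schedule might look like. The polarity pairs and timings below are placeholders I picked for illustration, not the canonical list from Exploratory Testing Dynamics:

```python
from datetime import datetime, timedelta

# Illustrative polarity pairs -- placeholders, not the full list from
# Exploratory Testing Dynamics.
polarities = [
    ("focusing", "defocusing"),
    ("data gathering", "data analysis"),
    ("careful", "quick"),
]

def session_plan(start, box_minutes=20):
    """Spend one time-box on each pole, alternating within each pair.
    Returns a list of (start, end, focus) tuples you could paste into
    session notes as headings."""
    plan, t = [], start
    for pair in polarities:
        for pole in pair:
            plan.append((t, t + timedelta(minutes=box_minutes), pole))
            t += timedelta(minutes=box_minutes)
    return plan

# Example: a two-hour session starting at 9:00.
for begin, end, focus in session_plan(datetime(2009, 12, 5, 9, 0)):
    print(f"{begin:%H:%M}-{end:%H:%M}  focus on: {focus}")
```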

After that talk, Brett Leonard shared his thoughts on software testing in the conceptual age. His presentation was heavily influenced by Daniel Pink's "A Whole New Mind." The foundation of Brett's talk was that exploratory testing is both high concept and high touch. Both are concepts Pink talks about in his book:

High Concept:
"Ability to create artistic beauty, to detect patterns and opportunities, to craft a satisfying narrative and to combine seemingly unrelated ideas into a novel invention"

High Touch:
"Ability to empathize, to understand subtleties of human interaction, to find joy in one’s self and to elicit it in others, and to stretch beyond the everyday in pursuit of purpose and meaning"

In his slides, Brett summarized the points to say that exploratory testing is the "ability to create, detect, craft, and combine [...] to empathize, understand, to find and elicit, and stretch." The idea of exploratory testing as high concept and high touch resonated with me - those ideas accurately reflect my view of what exploratory testing is.

Brett went on to tie this into the testing he does every day. He shared how his stories about users drive his focus when he tests. He uses those stories to create a testing culture that focuses on value to the user. For him, managing focus is about setting things up so that when you test, you're implicitly focusing on value to the user.

As a side note, in his presentation Brett brought to my attention a new polarity! He described using logic vs. empathy when doing testing. I'll see if I can lobby Bach to get it added to the list.

After Brett, Christina (Tina) Zaza shared a wonderful experience report about how she does her exploratory testing when she's working on projects. She talked about setting up for testing, doing testing, and how she avoids interruptions while testing. For me, the highlights of Tina's advice were the following:

  • block off time to test (1 to 2 hours), close out email and IM, and schedule a conference room if needed so people know you're unavailable

  • before you start, open up everything you think you'll need to do your testing effectively: applications, databases, spreadsheets, tools, etc. -- this way you won't get distracted or slowed down mid-testing

  • prioritize your tests - she listed three methods she uses (there's a rough sketch of this ordering after the list):

    • faster tests (or quick tests) first

    • higher-risk tests (or tests more important to the business) first

    • group features together to reduce context switching while testing



  • once you've started, don't stop your testing to write up defects or to ask developers/analysts questions -- note them while you test, and then follow up after your session is completed
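
If you happen to track your test ideas in a simple list, Tina's prioritization schemes translate into a straightforward ordering rule. Here's a minimal sketch; the features, risk scores, and time estimates are invented for illustration, and the field names are mine, not hers:

```python
# Hypothetical test-idea records. The ordering mirrors Tina's advice:
# keep features together to cut context switching, then higher-risk tests
# first, then quicker tests first.
test_ideas = [
    {"feature": "claims", "idea": "submit claim with missing fields", "risk": 3, "minutes": 10},
    {"feature": "login",  "idea": "expired password flow",            "risk": 2, "minutes": 5},
    {"feature": "claims", "idea": "duplicate claim detection",        "risk": 5, "minutes": 30},
    {"feature": "login",  "idea": "lockout after bad attempts",       "risk": 4, "minutes": 15},
]

# Sort: group by feature, then highest risk first, then fastest first.
ordered = sorted(test_ideas, key=lambda t: (t["feature"], -t["risk"], t["minutes"]))

for t in ordered:
    print(f'{t["feature"]:7} risk={t["risk"]} ~{t["minutes"]}m  {t["idea"]}')
```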


Finally, we finished the workshop with another experience report from me. This time I shared tips for writing more effective charters. I find that often when I can't focus while I'm testing, it's because my charter is unclear. The two best-received tips from that talk were the use of a template to help clarify test ideas or test missions, and the use of group thumb voting on priorities to force discussion and clarification of test missions.

While it was a small workshop (the smallest this year), I enjoyed it a lot. It's my favorite topic, so that's not surprising. We captured some feedback for next year's workshops, and we'll be working on a schedule during the holiday season. Based on the feedback, next year we'll have at least one or two hands-on, laptop-required workshops where we can try some of this stuff.