Posts in Test Automation
September IWST: Testers who write code
A couple of weeks ago we held the final IWST workshop for 2008. The topic for the full day of experience reports was "Testers who write code." The workshop was the best attended to date, with 17 participants:


  • Charlie Audritsh

  • Howard Bandy

  • Isaac Chen

  • David Christiansen

  • Jennifer Dillon

  • Michael Goempel

  • Jeremy D. Jarvis

  • Michael Kelly

  • Brett Leonard

  • Charles McMillan

  • Natalie Mego

  • Chris Overpeck

  • Anthony Panozzo

  • Charles Penn

  • Drew Scholl

  • Beth Shaw

  • David Warren



The first experience report of the day came from David Christiansen. Dave talked about a project he's working on for CSI, where he's acting as the test lead. For that team, test lead simply means providing leadership in the area of testing. It's a team of programmers (that includes Dave) and most of their testing is automated. David also does exploratory testing, but a lot of his time has been spent helping the team develop a robust and scalable implementation for their automated acceptance tests. In his experience report he related the evolution of the team's use of Selenium for generating acceptance tests, how they migrated those tests to RSpec, and how they then further abstracted their RSpec tests to create a domain-specific language for their application. As part of his experience report, David shared some code he wrote that takes his Selenium tests and exports them to RSpec.
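
I don't have David's conversion code, but to give a flavor of the idea, here's a minimal sketch of my own: take recorded Selenium commands (the familiar command/target/value triples) and emit the source for an RSpec example. The command names and the @selenium driver calls here are assumptions about what recorded tests look like, not David's actual mapping.

    # Minimal sketch (not David's code): translate recorded Selenium
    # command triples -- [command, target, value] -- into RSpec source.
    # Assumes Selenium RC style driver calls (@selenium.open, .type, .click).
    def selenium_to_rspec(test_name, commands)
      body = commands.map do |command, target, value|
        case command
        when 'open'  then "    @selenium.open '#{target}'"
        when 'type'  then "    @selenium.type '#{target}', '#{value}'"
        when 'click' then "    @selenium.click '#{target}'"
        when 'clickAndWait'
          "    @selenium.click '#{target}'\n    @selenium.wait_for_page_to_load 30000"
        when 'verifyTextPresent'
          "    @selenium.is_text_present('#{target}').should be_true"
        else
          "    # TODO: unhandled Selenium command: #{command}"
        end
      end.join("\n")

      "it \"#{test_name}\" do\n#{body}\nend"
    end

    # A recorded login test becomes RSpec source, which can then be wrapped
    # in higher-level, application-specific methods (the DSL layer).
    puts selenium_to_rspec('logs a user in', [
      ['open', '/login', nil],
      ['type', 'username', 'mkelly'],
      ['type', 'password', 'secret'],
      ['clickAndWait', 'submit', nil],
      ['verifyTextPresent', 'Welcome', nil]
    ])

The interesting part of David's report was the next step: wrapping generated calls like these behind domain-level methods so the acceptance tests read in the language of the application.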

I've seen Dave present on several occasions, and I have to say this was by far the best experience report or talk I've seen him give. It was truly enjoyable and educational. I've heard him talk about the project several times, and I still walked away with new information. And to give those who weren't there some color on how enjoyable it was, during David's talk we had two outbreaks of African tick-bite fever among IWST attendees, as well as the first IWST Cicero quote, which eventually led to the statement "Our app is totally greeked out." (Dave uses lorem ipsum, the placeholder text commonly used in typesetting, to randomly enter text in his acceptance tests.)

After David's talk, Anthony Panozzo related his experiences getting started with test driven development. On the project he was working on, the team (for a number of technical and business reasons) decided to build a Ruby on Rails app running on JRuby inside WebSphere. He very briefly mentioned their development methodology, a lean approach that I would love to hear more about some day (perhaps as a topic for next year's workshops), and talked about some of the practices and tools they used: continuous integration with CruiseControl, integration tests with Selenium, pair programming, and test driven development.
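
For anyone who hasn't tried the practice, the basic rhythm is: write a small failing test, write just enough code to make it pass, then refactor. A minimal sketch in Ruby's Test::Unit (my example, not Anthony's project code) might look like this:

    require 'test/unit'

    # Step 1: write the test first. It fails until Cart exists and behaves.
    class CartTest < Test::Unit::TestCase
      def test_total_sums_item_prices
        cart = Cart.new
        cart.add(:price => 10.00)
        cart.add(:price => 2.50)
        assert_equal 12.50, cart.total
      end
    end

    # Step 2: write just enough production code to make the test pass,
    # then refactor with the test acting as a safety net.
    class Cart
      def initialize
        @items = []
      end

      def add(item)
        @items << item
      end

      def total
        @items.inject(0.0) { |sum, item| sum + item[:price] }
      end
    end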

Anthony nicely summarized his experience into some personal lessons. I don't know if I caught them all in my notes, but here's what I did catch:

  • It's important to review your tests, perhaps more important than reviewing your code.

  • It's important to create separate test fixtures.

  • Test driven development makes the practice of pair programming easier.



Another note, and a topic that got a little bit of discussion, was Anthony's numerous references to Michael Feathers' book "Working Effectively with Legacy Code."

After Anthony's talk, Charlie Audritsh shared some examples of simple Ruby scripts he had written to help him with everyday tasks while performance testing using HP Mercury LoadRunner/TestCenter. He had scripts for parsing and formatting performance testing logs, grepping out information, and correlating errors. In addition to the Ruby scripts he shared, Charlie also presented the code he used to represent fractional percentages in load testing. Somewhere in Charlie's talk the pirate heuristic came up, and that of course generated a lot of pirate noises (as you would expect).
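
I don't have Charlie's scripts, but the flavor was something like the sketch below: a few lines of Ruby to grep error messages out of a performance-test log and tally them, plus the sort of check you'd use to run a transaction some fractional percentage of the time. The log file name, format, and percentages are assumptions for illustration.

    # Sketch of the kind of quick Ruby utility Charlie described (not his
    # actual code). Assumes a plain-text log where error lines contain "Error:".
    error_counts = Hash.new(0)

    File.foreach('performance_test.log') do |line|
      next unless line =~ /Error:\s*(.+)/
      # Collapse numeric IDs so similar errors correlate into one bucket.
      error_counts[$1.strip.gsub(/\d+/, 'N')] += 1
    end

    error_counts.sort_by { |_msg, count| -count }.each do |msg, count|
      printf "%6d  %s\n", count, msg
    end

    # Fractional percentages in a load script: run the "search" transaction
    # on roughly 2.5% of iterations.
    run_search = rand < 0.025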

After Charlie's talk, Jennifer Dillon spoke about her experiences using TestComplete for testing application user interfaces and some of the struggles she and her team encountered when dealing with large changes to the UI. She shared several stories on the topic:

  • On one project they used a pixel-by-pixel comparison method that was fragile to changes. They eventually moved away from snapshot comparisons and switched to querying properties of the UI to verify the correct display. That story reminded me of a tool Marc Labranche once developed in the early days of Rational Robot that could do fuzzy comparisons (you could set a tolerance). Later, Rational incorporated some similar features for image comparisons.

  • In another application, they had the script serve up Visio wireframes of what the screen should look like alongside a screenshot, and a human would click to record the determination of correctness. An interesting idea and an interesting use of screen mock-ups.

  • Finally, in the last example, the development team completely changed the UI halfway through the project, which led the team to use helper methods to encapsulate object references in the code, following a rule of "only store it once, so if it changes you only change it once." It's a very pragmatic design principle for automation, sketched below.
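
Jennifer's automation is in TestComplete, but the principle is tool-agnostic. Here's a rough Ruby-flavored sketch of the idea (mine, not her code): every object reference lives behind exactly one helper method, so when the UI changes there's only one place to edit. The app handle and its find call are hypothetical stand-ins for whatever the tool actually provides.

    # Illustration of the "only store it once" rule (not Jennifer's
    # TestComplete code). The app object and find(...) call are invented.
    module LoginScreen
      module_function

      def username_field(app)
        app.find(:name => 'txtUserName')
      end

      def password_field(app)
        app.find(:name => 'txtPassword')
      end

      def login_button(app)
        app.find(:name => 'btnLogin')
      end
    end

    # Tests call the helpers instead of hard-coding object names:
    #   LoginScreen.username_field(app).set_text('mkelly')
    #   LoginScreen.login_button(app).click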



In some of the discussion following Jennifer's talk, Dave Christiansen brought up Julian Harty's use of proof sheets for testing mobile device user interfaces. Yet another interesting approach to the problem. I liked Jennifer's talk because it resonated well with my experience and I got a couple of interesting ideas from the second story.

Next, Charles Penn gave a demonstration of his recent use of SoapUI for regression testing web services. He showed some simple Groovy code that made a database connection and pulled a result into a call. He then showed some sequenced test cases (where the results of one service call are passed to the next call). And he finished by showing off a short load test. That part of the talk was interesting to me, since the load test looked like it was working. I haven't used SoapUI in some time, and the version I was using over a year ago didn't really have great load testing functionality.
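
Charles did all of this inside SoapUI with Groovy, but the sequencing idea is simple to show in any language: capture something from the response of one call and feed it into the next request. Here's a rough Ruby sketch of that pattern (mine, with made-up endpoints and field names, and plain HTTP/JSON rather than SOAP).

    require 'net/http'
    require 'json'
    require 'uri'

    # Illustration of sequenced service calls; endpoints and fields are invented.
    base = 'http://example.com/api'

    # Call 1: create an order and capture its id from the response.
    create_response = Net::HTTP.post(
      URI("#{base}/orders"),
      { :customer => 'ACME' }.to_json,
      'Content-Type' => 'application/json'
    )
    order_id = JSON.parse(create_response.body)['id']

    # Call 2: pass the result of the first call into the next request.
    puts Net::HTTP.get(URI("#{base}/orders/#{order_id}/status"))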

After Charles, Brett Leonard gave a very dynamic talk about his experiences integrating HP Mercury's QuickTest Pro with Empirix's Hammer (used for automated call testing). His paper on the topic has been posted to the IWST site along with example HVB code. Brett's talk held a surprising amount of emotion (his presentation style is similar to Robert Sabourin's, if you've seen him talk) and was truly a pleasure to listen to. On the whiteboard he gave a little historical context for the project and his company, and drew out the larger pieces of the problem. Brett then proceeded to tell the story of how he got QTP to drive Hcmd from the command line, which generated the HVB code that would call the VRS, which in turn generated the phone calls that would get the application-under-test to initiate the call process, which would finally allow QTP to run its tests. Needless to say, it was a test system with a lot of moving parts. I'm hoping he'll eventually provide some written details (if he can), since I think he may be the first to do this type of integration between the two tools. (Selfishly, I may need to do something similar with Hammer in the near future, so that played no small part in my request.)

The main lessons Brett wanted to share, aside from the cool integration for those working in that space, were the following:

  • Don't always listen to the experts. Especially if they tell you something can't be done.

  • Sometimes, you can get good customer support if you ask for it. (Specifically, kudos to Empirix.)

  • Finally, explaining a difficult problem to other people is sometimes the best way to help you solve the problem.



At the end of Brett's story, David very insightfully pointed out that Brett's story was a story about resourcing:


Obtaining tools and information to support your effort; exploring sources of such tools and information; getting people to help you.


After Brett, Howard Bandy related a story about automation at the API level for physical devices. It was an interesting mix of automated and manual testing. A human would need to set up and launch the test on the device, then the automation (done using NUnit) would take over, occasionally prompting the human for verification. Howard titled the experience "Partially Automated Testing."

The best part of Howard's talk was how he modeled the problem space and designed his solution. As he told the story, he drew out on the whiteboard how the automation evolved. He didn't have all the requirements, or even all the devices that would need to be tested, when he began building the framework. So he knew the framework had to be extensible, both in terms of the devices it would support and the types of tests it would run. Given those constraints, he added abstractions to the NUnit tests to allow for an expanding list of devices. He then had the tests leverage a generic device object that allowed for inheritance among the devices that were later added.
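
Howard's framework is built on NUnit, but the shape of the design translates well enough to sketch in Ruby (my illustration; the class and device names are invented): a generic device defines the operations and fail-safe prompts the tests rely on, specific devices inherit from it, and a factory keeps the tests ignorant of which device they're driving.

    # Rough sketch of the design Howard described (his framework used NUnit;
    # everything here is invented for illustration).
    class GenericDevice
      def connect
        raise NotImplementedError, 'each device defines its own connection'
      end

      # Fail-safe for the manual portion: prompt the tester and record the answer.
      def confirm(prompt)
        print "#{prompt} [y/n]: "
        gets.to_s.strip.downcase == 'y'
      end
    end

    class ModelADevice < GenericDevice
      def connect
        puts 'Connecting to Model A over serial...' # device-specific setup
      end
    end

    class ModelBDevice < GenericDevice
      def connect
        puts 'Connecting to Model B over USB...'
      end
    end

    # Newly supported devices only require a new subclass and one entry here;
    # the tests never need to know which concrete device they are driving.
    class DeviceFactory
      DEVICES = { 'model_a' => ModelADevice, 'model_b' => ModelBDevice }

      def self.build(name)
        DEVICES.fetch(name).new
      end
    end

    device = DeviceFactory.build('model_a')
    device.connect
    puts 'PASS' if device.confirm('Did the device display the test pattern?')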

The design made the end tests more usable for manual setup, since device specifics were captured in the class for each type of device. It also allowed him and his team to add failsafes for the manual portion of the automation, designing in ways of reducing errors during manual setup and verification. This time, both David and I pointed out that this was a wonderful illustration of modeling at work:


Composing, describing, and working with mental models of the things you are exploring; identifying dimensions, variables, and dynamics that are relevant.


Brett pointed out that Howard basically ended up creating a device factory for his test bed. A very impressive experience report.

Finally, we closed the day with Natalie Mego's experience report, where she related some Ruby code she was using to automate some of the manual tasks involved in generating her automated tests. Natalie's team would get a list of requirements from the business. They would then mark some percentage of those requirements as automatable or not automatable. Then, once they had the list of what they would automate, they would pull details from IBM Rational RequisitePro into Excel for use in driving the eventual test automation (a data-driven approach). Her Ruby script would take the list of requirements, pull them out of RequisitePro, and format them in Excel.
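
I don't have Natalie's script, but the Excel half of that job might look something like this in Ruby on Windows, using win32ole to drive Excel over COM. This is a sketch: the requirement rows are hard-coded here, where her script pulled them out of RequisitePro, and the output path is made up.

    require 'win32ole'

    # Sketch only (not Natalie's script): format a list of requirements
    # into an Excel worksheet via COM. Her code queried RequisitePro for
    # this data; here it's hard-coded for illustration.
    requirements = [
      ['REQ-101', 'User can log in with valid credentials'],
      ['REQ-114', 'Locked accounts show an error message']
    ]

    excel = WIN32OLE.new('Excel.Application')
    excel.Visible = false
    workbook = excel.Workbooks.Add
    sheet = workbook.Worksheets(1)

    sheet.Cells(1, 1).Value = 'Requirement ID'
    sheet.Cells(1, 2).Value = 'Description'

    requirements.each_with_index do |(id, description), row|
      sheet.Cells(row + 2, 1).Value = id
      sheet.Cells(row + 2, 2).Value = description
    end

    workbook.SaveAs('C:\\automation\\requirements.xls')
    excel.Quit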

After Natalie's talk, we did a checkout for the session, and I captured some notes for next year's workshops. In no particular order, here was some of the feedback I received:

  • Liked the structure and the format of the workshop.

  • Perhaps we could go back to a more frequent schedule than twice a year.

  • Need to think of ways to make experience reports less intimidating for first-time attendees. My initial thought on this is a small example video someone could watch.

  • Which leads to the next point: should we start videotaping or podcasting these sessions? If we do, how does that change the dynamic?

  • Can we come up with a way of capturing some of the whiteboard content (an electronic whiteboard, for example)?



For me, this was my favorite workshop yet. I really enjoyed the participation, the number and variety of the experience reports, and all the new faces. A lot of first-time attendees! I'll start discussions on the IWST mailing list soon about next year's line-up, frequency, and venues. Thanks to everyone who attended in 2008, to Mike and Andrew for helping organize this year, and to Mobius Labs for providing us with the free space.
IWST: Testers who write code
We are starting planning for the September Indianapolis Workshop on Software Testing focused on the topic of testers who write code. Here are the details:
Date: Friday, September 26
Time: 8:00 AM to 5:00 PM
Location: Indianapolis, IN @ Mobius Labs

In this workshop we will focus on those aspects of testing where we write code. There will be a strong focus on performance, automation, and security testing, with a lot of discussion around tools, languages, frameworks, and design patterns. We will be looking for experience reports from people willing to share project stories and examples, and we would also be willing to entertain proposals for hands-on coding activities during the workshop.

If you're interested in attending, please drop me a line. Preference will go to those who have experience reports to offer, but we always try to save space for people new to IWST and for people who are new to the topic.