Automation and performance logging
This past weekend we held the September 2009 IWST workshop on the topic of “Automation and performance logging.” The workshop participants were:

  • Sameer Adibhatla

  • Randy Fisher

  • Mike Goempel

  • James Hill

  • Jason Horn

  • Gabriel Issah

  • Michael Kelly

  • Natalie Mego

  • Chad Molenda

  • Cathy Nguyen

  • Charles Penn

  • Debby Polley

  • Kumar Ramalingam

  • Tam Thai

  • Brad Tollefson

  • Chris Wingate


The first experience report for the day came from Jason Horn. Jason is a longtime veteran of intimidating speakers who come after him, and he lived up to his reputation. Jason's talk was about a logging framework he and his team developed within VSTS to help work around some issues they were seeing in the response times generated by the tool while doing load testing. As a team, they had been creating reusable performance test modules: Jason would develop them while doing functional testing, then pass them along to the performance testers, who would run them under load. An unintended side effect of this framework-based approach was that their response times got bloated while running under load, because they included the time it took to process the framework code.

Not to be defeated, the team did what any self-respecting team of coders would do. They... ummm... wrote their own logging framework within VSTS. And I'm not talking about some wimpy line of text out to a log file somewhere. I mean they created a database, hooked into the VSTS object model, and siphoned off information about test execution opportunistically. Jason's talk was detail-filled; he walked us through sequence diagrams, object models, the database schema, and was even kind enough to share the code.

At the end of the day, the team was able to get everything they needed. They had test code reusability (Jason would develop the initial code for automation -> it would get leveraged for load testing -> it wasn't throw-away code like you'd see with most recorded performance test scripts), they had homegrown debug logging for script errors and suspicious execution times, and they had performance test results they could trust.
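To give a feel for the pattern (this isn't Jason's actual code, which was C# built against the VSTS object model), here's a rough Python sketch of the core idea: time only the request against the system under test, not the framework plumbing around it, and write the result to a central database. The table and column names are my own assumptions, purely for illustration.

```python
# Illustrative sketch only -- not Jason's VSTS framework. The point is that
# only the code inside the timed block counts toward the logged response
# time, so framework setup/teardown never inflates the numbers.
import sqlite3
import time
from contextlib import contextmanager

conn = sqlite3.connect("perf_log.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS transaction_log (
           test_name TEXT, step_name TEXT,
           started_at REAL, duration_ms REAL, status TEXT)"""
)

@contextmanager
def timed_step(test_name, step_name):
    """Time only the code inside the 'with' block and log the result."""
    start = time.perf_counter()
    status = "pass"
    try:
        yield
    except Exception:
        status = "fail"
        raise
    finally:
        duration_ms = (time.perf_counter() - start) * 1000
        conn.execute(
            "INSERT INTO transaction_log VALUES (?, ?, ?, ?, ?)",
            (test_name, step_name, time.time(), duration_ms, status),
        )
        conn.commit()

# Framework work happens outside the timed block; only the request
# round-trip against the system under test lands in the logged duration.
with timed_step("LoginSuite", "submit_credentials"):
    pass  # issue the actual request here
```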

The second talk came from a group of testers at a local insurance company: Randy Fisher, Tam Thai, and Kumar Ramalingam. They shared some proof-of-concept code they were working on for a centralized transaction logging database. Randy recently centralized test automation and performance testing at his company. They primarily use HP Quality Center, and they've hit some issues around reporting on test execution across the enterprise. It's a big company (think hundreds of testers), with all the budgeting joys of big companies, so metrics on who's running what tests are important to them.

The team had pulled together a simple QTP function to log transactions to a centralized database. They would then go back through existing scripts adding this function call, and use it in all new scripts developed. It would allow for basic metrics on "we ran these tests on these dates." We talked about some other metrics they might include over time to better leverage the data. We also gave some feedback on the code, providing some advice on better connections to SQL, alerting, etc. It was a good problem for starting some meta conversations around what to log and report on, and I think they got some good tips for a cleaner implementation.
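Their real helper was a QTP/VBScript function writing to their SQL database, which I won't reproduce here. As a rough analogue, the shape of it is a single call each script makes at the end of a run, capturing who ran what, where, and when. Everything below (table name, columns, the "ClaimsPortal" example) is made up for illustration.

```python
# Rough analogue of the centralized "log this execution" helper -- enough
# to answer "we ran these tests on these dates" across many testers.
import getpass
import socket
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("execution_log.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS test_executions (
           project TEXT, test_name TEXT, tester TEXT,
           machine TEXT, executed_at TEXT, result TEXT)"""
)

def log_execution(project, test_name, result):
    """One row per test run, stamped with tester, machine, and time."""
    conn.execute(
        "INSERT INTO test_executions VALUES (?, ?, ?, ?, ?, ?)",
        (project, test_name, getpass.getuser(), socket.gethostname(),
         datetime.now(timezone.utc).isoformat(), result),
    )
    conn.commit()

# Existing scripts pick up a single added line like this:
log_execution("ClaimsPortal", "TC-1042 policy renewal smoke", "passed")
```

Once the rows are there, a simple GROUP BY over project and date gives the enterprise-level execution report they were after, and richer metrics can be layered on later without touching the scripts again.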

After Randy, Tam, and Kumar finished their experience report, Brad Tollefson shared his experience using WatiN. Brad first gave a general overview of how his company does testing - you can find that in his short handout. He then went into the specifics of using WatiN to test their web interface. I think one of the coolest things I saw was that Brad had written his own Windows client to manage WatiN test cases. Someone in the WatiN community should ping him about that and see if he can share it. (You can see a screen shot in the handout linked above.)

We spent some time talking about what Brad logs and how he uses that information. His automated tests appear to serve as an early performance warning. I like that. We also discussed how Brad might speed up his tests by using some different methods and settings in WatiN. There were a couple of good tips in there from Chad Molenda and Chris Wingate around changing typing speed, populating fields by value and then triggering events, and running with the browser hidden.
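The "early performance warning" idea doesn't need much machinery. Brad's tests are WatiN/C#, but the concept boils down to something like this sketch: compare each logged page time against a baseline and flag anything that drifts too far. The baselines and tolerance below are invented for the example.

```python
# Sketch of the early-performance-warning idea, not Brad's WatiN code:
# compare each functional test's logged page time to a baseline and flag
# anything that drifts past a tolerance. Numbers here are made up.
BASELINES_MS = {"login": 800, "search_policies": 1500}   # assumed baselines
TOLERANCE = 1.5  # flag anything 50% slower than its baseline

def check_timing(page, observed_ms, baselines=BASELINES_MS):
    baseline = baselines.get(page)
    if baseline is None:
        return f"{page}: no baseline yet ({observed_ms:.0f} ms observed)"
    if observed_ms > baseline * TOLERANCE:
        return f"WARNING {page}: {observed_ms:.0f} ms vs {baseline} ms baseline"
    return f"OK {page}: {observed_ms:.0f} ms"

print(check_timing("login", 1300))  # -> WARNING login: 1300 ms vs 800 ms baseline
```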

Finally, James Hill presented an overview of his work on CUTS. He actually pulled together what I think is one of the better presentations I've seen and used it as a backdrop for the overview portion of his talk. After the slides were done, he jumped into the gory details of how CUTS does logging. Not only did he go into how CUTS manages the problem of large-scale distribution of clients, but he also went into some of the theory of why they chose to use execution traces and why they build the logs in reverse, and he walked us through a good bit of the logging code. I'm not sure I followed all of it, but I think I kept pace with most of it. I found a lot of parallels between the CUTS architecture and other performance test tools.

Next month we have a panel of speakers talking about time management. I hope to announce details soon (this week maybe). It should be fun. It's also open to the public. I'm hoping to get around 40-50 people in attendance. It's a topic several people have asked for, and it should be useful for anyone working in software development - not just testers.

Thanks to all who attended the workshop.