fivesecondtest
I've just spent a few minutes playing around with fivesecondtest, an online tool for usability testing. It's a bit addictive. From the site:
People use five second test to locate calls to action, optimize landing pages, and run A/B tests. You can use them for whatever you like.

You can either submit content for testing, or you can be a tester. There are two test scenarios: five second memory test and five second click test. With the memory test, you see an image for five seconds and then you're asked to list five things you remember. With the click test, you're asked to click on things you notice in five seconds, then describe what they are.

The memory test is hard. Five seconds isn't a lot of time, and I noticed the following patterns in my testing:

  • The more text a site had, the less I remembered

  • The more detailed the site's graphics, the less I remembered

  • The larger the images, the less I remembered

  • The more closely the logo and the site's look and feel matched the product, the more I remembered

  • The less fancy the font, the more I remembered

  • The fewer headings the site had, the more I remembered


The click test was much easier, and for me, more fun. I noticed the following:

  • I clicked on contact information when it was there

  • I clicked on headings when they were there

  • I clicked on social media links when they were there

  • I clicked on forms (submit a question, etc...) when they were there

  • I clicked on user ratings when they were there


I like the idea of using something like this to gauge whether a call to action is effective. I also suspect it can help you easily determine if your site might be too busy. I know I froze up on the more complex sites: I couldn't remember anything, and I couldn't focus on anything long enough to click on it before time expired. It became apparent to me what types of designs "worked" for me.

If you need some simple usability feedback, give it a try. If you're a tester and just want something fun to practice on, I found this a nice short diversion. I suspect I'll check back from time to time to test other designs.
Throwaway code can be ugly code
Recently, I needed to do a bunch of test setup in a web application. I had to manipulate around 5,000 records, and due to a time crunch and few people around over the holidays, it was easier for me to get at the GUI than the database. So I decided to write a quick Watir script to do the edits for me.

I had a spreadsheet with all the changes I needed, so I decided to let Excel write the bulk of my code. Using the CONCATENATE() function (with the row counter in column A and the value in column B, and CHAR(34) supplying the literal quote marks), I inserted something like the following into the first row:

=CONCATENATE("ii == ", A1, " ? temp = ", CHAR(34), B1, CHAR(34), " : temp = temp")

That creates something that looks like this:

ii == 1 ? temp = "Some Value" : temp = temp

All I had to do was drag that down to fill each row, and then I could copy and paste a very large block of code from Excel into a loop in Ruby:

for ii in (1..5005)
  # do some setup using Watir
  ii == 1 ? temp = "Some Value" : temp = temp
  ii == 2 ? temp = "Some Value" : temp = temp
  ii == 3 ? temp = "Some Value" : temp = temp
  ii == 4 ? temp = "Some Value" : temp = temp
  # ...etc...
  ii == 5005 ? temp = "Some Value" : temp = temp
  # do something afterward using Watir
end


Now, this is REALLY UGLY code. I know it's ugly, and you know it's ugly, but I'm defending it because it only took me five minutes to get my script running. It ran for a couple of hours, and I was done. All of my test setup was complete. I deleted the script and moved on.

While my solution wasn't elegant, it worked. Someone later told me about an Excel library I could have used, and I didn't feel bad about not using it. Sometimes you need to resist the urge to write an elegant solution if you're just hacking away at a simple problem.
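
For the curious, here's a rough sketch of what the tidier version might have looked like, reading the data directly instead of generating thousands of ternaries. It assumes the spreadsheet was exported to a CSV file (changes.csv here) with the counter in the first column and the value in the second; none of those details come from my actual script.

require 'csv'

# Hypothetical alternative: read the exported spreadsheet and look values up,
# rather than pasting 5,005 generated ternary statements into the loop.
# The file name and column layout are assumptions for this sketch.
values = {}
CSV.foreach('changes.csv') do |row|
  values[row[0].to_i] = row[1]
end

(1..5005).each do |ii|
  temp = values[ii]
  # do the setup and edits with Watir using temp, as in the ugly version
end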
Granularity of testing estimates
Yesterday I was talking with a peer here in Indianapolis about how I do testing estimates. He pointed out that testers in his organization estimate things in days and hours:

  • "That's half a day."

  • "That will take a day."

  • "That's likely 200 hours."


His concern with this technique was that people didn't really understand where their estimates were coming from. When he asked how they arrived at their number, some testers didn't know. They had just guessed.

Listening to him, I noticed that I have a pattern when I manage teams. Early on, when I'm working with a new team or people I've not worked with before, I look for estimates in minutes. For anything other than sessions, I want to see the breakdown of where the time's going in minutes. It's only after I've seen you estimate a couple of times that I'll take hours and days. This is because once I trust that you have a model that you're using to estimate, you no longer need to "show your work." If you don't have a model, then I'll help you develop one.
Test ideas that come from test automation
I know it's not fashionable to like GUI-level test automation any longer. But whatever, I still like it. I'm unfashionable in more ways than one. I still like GUI-level automation for reproducing bugs, for automated acceptance testing, and for supporting my performance and exploratory testing. I also like non-GUI tests, but I've never disliked GUI automation.

One reason I still write GUI-level test automation is that it helps me learn about the product. I'm still amazed at how many times I say "Wow, really?" when I'm writing my tests. Because I'm always picking at the GUI with tools like FireBug and Web Developer while I'm testing, I see things I don't normally see when I'm just clicking around.

For example, today I noticed:

  • One of the applications I'm testing doesn't remove fields from the screen when they aren't active; it just hides them. I had never noticed until my code counted on a field not being there.

  • One of the applications I'm testing sometimes shows a parent-child relationship using icons, and sometimes doesn't. I had never noticed until I coded a rule expecting it to always be there.

  • One of the applications I'm testing appears to have a relationship between fields that I wouldn't have expected. I discovered this based on field naming conventions.


All of these give me new test ideas completely unrelated to my automated tests. Anytime I'm surprised, I use that as an indicator that I have more tests to run. When I automate at the GUI level, I often get surprised. I rarely get surprised when I'm automating against an API.
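
To make the first of those surprises concrete, here's the kind of check that exposed the hidden fields. The URL and field name are invented for the illustration; the point is that the element is still in the page even when it isn't shown.

require 'watir'

# Hypothetical illustration of the hidden-field surprise: the element stays
# in the DOM even when it isn't displayed, so code that counts on it being
# gone gets surprised. The URL and field name below are made up.
browser = Watir::Browser.new
browser.goto 'http://app.example.com/order'
field = browser.text_field(:name, 'discount_code')
puts field.exists?   # => true  (still present in the page)
puts field.visible?  # => false (hidden, not removed)
browser.close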
Watir
I've neglected mentioning Watir on this blog because it's already a well-known tool. However, I installed Watir on a new computer yesterday and noticed a couple of new things the Watir community has been up to since I last checked in with them. For those who don't know, Watir stands for Web Application Testing in Ruby, and it's a very simple tool (dare I say painless) for generating test automation for web applications.

So what's new? Well, the last time I installed Watir (it was a while ago), it only had (working) distributions for IE and Firefox. Now it appears to support almost everything I'd care to test: IE, Firefox, Safari, Chrome, and Flash, with work underway on other ports. I also stumbled across Celerity, a headless Java browser with JavaScript support. Mmmmm.... sounds fast.

I like Watir for a couple of reasons. First, I enjoy programming in Ruby; it's strangely relaxing to me. That Watir leverages my language of choice is a big win. Second, Watir is so easy to use. Between IRB and FireBug, there are few web applications I've encountered that I can't have working Watir scripts for in a few minutes. Watir commands are predictable and relatively few, which makes them easy to remember. Finally, it's very easy for me to fold Watir into other scripting tasks. Because it's Ruby, I don't just use Watir for test automation. I sometimes use it to drive test data entry, assist with exploratory testing, parse websites for data, and handle other odd tasks.
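
As a rough illustration of how little code a working script takes (the URL, field names, and button label below are invented, not from any real application), a session like this is typically built up interactively in IRB while inspecting the page with FireBug:

require 'watir'

# A minimal Watir session of the kind described above.
# Everything specific here (URL, field names, button label) is made up.
browser = Watir::Browser.new
browser.goto 'http://app.example.com/login'
browser.text_field(:name, 'username').set 'tester'
browser.text_field(:name, 'password').set 'secret'
browser.button(:value, 'Log in').click
puts 'logged in' if browser.text.include?('Welcome')
browser.close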

Another fun fact: Watir is one of the few tools with its own podcast. If you do a lot of test automation, it's well worth taking the time to listen to the episodes. Lots of gems in there, and not all of them related to Watir.