Blitz testing
In chess, the idea of a blitz game is that each side is given less time to make their moves than under normal tournament time controls. Blitz is also often used to help a player who's learning the game develop their intuition. When playing a blitz game, you don't have time to think 30 moves out (or for us mere mortals, 5 moves). The idea, when used as a learning tool, is to compare your first impression of positions with how those positions actually develop during the game.

I think this can also be a good learning tool for testers. For me, a normal test session is around 45 minutes. One way to practice is to give myself 5, 10, or 15 minutes instead of my traditional 45. I'll want to achieve the same level of coverage in that small amount of time, so I'll need to move incredibly quickly. At the end of that blitz session, go back and execute the full session (or the rest of the full session). When you're done, analyze what you missed in your blitz session and try to understand why. You can also look at things you found that you might not have found if you'd been slower and more deliberate. (That analysis, however, is more difficult to do.)

The goal here is to try to understand both how your testing changes given the amount of time you have (which is valuable to understand), but also to help you develop your intuition for where risk is within the product and which techniques are going to be most helpful and under what conditions.
Solving complex testing problems
There are a number of testers out there who pose practice problems to the testing community (Matt Heusser, James Bach, Michael Bolton, and Ross Collard, to name but a few). Attempting to tackle those problems can be a great way to develop your testing skills. There are three big reasons why this type of practice can help you develop your skills better and faster than if you just log four or five extra hours at the office:

  • It's different from what you do every day. If you're working one of these examples, you'll most likely be testing software that's significantly different from what you test every day at the office. That stretches your imagination, gets you away from your everyday biases and assumptions about what you can and can't test, and gets you focused on the mechanics of testing more than the mechanics of the product or the business problem. If you play chess, poker, sports, or even play in a band... you can get good by practicing with the same people all the time, but you can't get great. To get great you need a variety of experiences. You need to compete or play against people who are different from those you practice against. Testing is no different.

  • It's designed to be more complex than your average test. If you're going to practice, you need difficult problems. Yesterday a fellow tester gave me a programming problem from college related to the Fibonacci sequence. I was able to show him example code solving the problem on my first try, in about thirteen lines of code. That wasn't practice. It was problem number four on his page of practice programming problems. He then gave me problem number 200 (or some other similarly high number). It was so complex that if I really wanted to solve it, I'd have to spend some time looking up the math and geometry terms involved. Then I'd have to actually stand at a whiteboard and design a couple of algorithms to make sure it would execute in the time allowed. That's a practice problem. It's something that stretches you to do something you've not done before. The first problem wasn't practice (for me); the second one was. As you become more skilled as a tester, you need harder and harder problems to solve. In addition to being more complex than your average daily work-related testing problem, these problems are also (typically) focused on a specific aspect of testing. They're designed to get you thinking about a technique or a concept of risk or coverage that you don't use every day.
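To give a sense of scale, the original problem and my thirteen-line solution aren't reproduced here. As an illustration of the "easy" end of the spectrum, here is a sketch of a Fibonacci exercise of roughly that size (the specific task, summing the even Fibonacci numbers up to a limit, is my assumption for illustration, not the actual problem from the story):

```python
def even_fib_sum(limit):
    """Sum the even Fibonacci numbers that do not exceed `limit`.

    A stand-in for an 'easy' practice problem: solvable in a
    handful of lines, so it doesn't stretch an experienced
    programmer the way a genuinely hard problem would.
    """
    total, a, b = 0, 1, 2
    while b <= limit:
        if b % 2 == 0:
            total += b
        a, b = b, a + b  # advance the Fibonacci pair
    return total

print(even_fib_sum(100))  # 2 + 8 + 34 = 44
```

The point of the anecdote holds: a problem like this is a warm-up, not practice; the problems that teach you something force you to the whiteboard first.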

  • Feedback is built into the exercise. The most important aspect of these problems is that you get feedback. Sometimes you get direct and personal feedback from the people who presented the problem. James Bach, for example, is almost always willing to critique a solution you provide to a problem he presents. That's worth the price of admission right there! But it often goes further than that. Often you get to see your solution right next to twenty others. You see how other people solved it. That can be just as important as the personalized feedback. It gives you the ability to see and learn from others. It allows you to self-analyze your own performance and work to make corrections in your technique or process.

Analyzing your testing afterwards
After chess players play a big game, they spend time afterward analyzing what they did and the consequences. They look for things they could have done better, or try to identify patterns in their own play or the play of their opponent. Testers can do this as well. After you test something, go back and analyze what you've done.

Analysis is different from review: don't just look at what you did. Question it. Ask what you could have done differently. Try to think about how that would have changed your testing. This doesn't just apply to exploratory testing; it very much applies to scripted testing as well. This isn't a personal debrief. It's you looking at the work you turned in... asking yourself if it was your best work... and trying to figure out how to make it better.

Here are some things I look for when I'm analyzing my own testing:

  • Where could I have moved faster? Where did I get stuck, or waste time?

  • Where could I have moved slower? Where did I potentially gloss over something important or skim information that might have been critical to my learning about the product or my analysis of a test result?

  • What parts of the system was I ignoring while doing this testing? Was that appropriate? Should I have been paying more attention to them? What possible issues would I have seen if I had been looking in those areas?

  • What techniques did I apply while testing? Did I apply them well? Where could I have changed my behavior to possibly develop a better test case? Are there different techniques I could have employed to make my testing better or faster?

  • What kind of coverage did I attain for the area of the system I was testing? Was that sufficient? What risks were not addressed given the coverage I achieved? Are there subtle changes I could have made to my testing to help me achieve higher coverage with the same amount of work?

Stock prices and software testers
Similar to Friday's post on baseball cards, I have another interesting metric I think about. It's my stock price (also known in some circles as "street cred"). It works like this: I think everyone on a team has a fluctuating stock price. It represents how attractive they are to work with at any given time, encompassing both their perceived productivity and their personability. If someone severely flubs a project or is simply a jerk to work with, their stock price goes down. If they are particularly strong at a skill in high demand, their price goes up.

Think of it like this: if five managers were building teams from a pool of 100 candidates, and they each had fixed and equal "budgets" to choose team members from the pool, what would they be willing to pay for any given candidate? The price of any given member would reflect the "future earnings" (or project value) of the person being brought onto the team.

Editorial comment: It's a thought experiment... it's not perfect, so roll with it. I'm not saying this would be a good idea for a company; I am saying it's a neat thought experiment, for the following reason.

If you're a software tester (and I have to assume you are if you're reading QuickTestingTips), think about what might affect your stock price. What are the factors that you think generate "street-cred" and make you a more valuable team player (and thus a higher-value team commodity)? If you had to issue quarterly and annual (testing) reports, what would be in them? How would you position yourself so you could demand a higher price?

Similar to the Friday post, if five people respond, I'll post my answer. If more people respond beyond that, I'll try to get the other authors on this blog to post their thoughts as well.
Only list the technologies you're fluent in on your resume
Only list the technologies you're fluent in on your resume, or make it clear what your skill level is if you're listing things you've only had exposure to in the past. I used to add qualifiers to my resume, like beginner, intermediate, and advanced, when I listed a tool, language, or technology. Now I don't list very many technologies at all, since most managers I've interviewed with have been less concerned with technology and more concerned with skill. I think that's natural as you get more experience under your belt. But I could be wrong.

When you're pulling together your resume, only list tools and technologies that you actually know. Don't list the stuff you've merely had "exposure" to. If I can't ask you an interview question on it (like, write code in language X, or tell me how you would configure Y, etc...), then either don't list it, or make it very clear that you've only had exposure to it. When looking at resumes, nothing turns me off faster than seeing a laundry list of tools and technologies without context for how the candidate has applied them.

For me, it comes down to trust. In an interview, I'm trying to figure out what you can really do. If you get my hopes up by listing Java, Ruby, or some other techno-widgit I'm really looking for, and you can't actually do it, I don't know what else you can't really do that you're telling me you can. Even if you're really good at all the other stuff, once I feel like you've falsely represented your skills, it's hard to dig out. On the other hand, if someone has all the right skills but is missing the techno-widgit, I'm likely to hire them. I can teach someone Ruby.

For me, losing the interviewer's trust isn't worth the eye candy of listing all those technologies.