Build folders with sample files
I've been working with a web application that uses an assortment of graphic file types. I've built a folder containing at least one file of each type the application works with, so when I need a graphic file, I don't have to go scrambling to find one of the type I want.

I keep the folder with my project work (on a backed-up, secure drive), but I also keep the sample graphic folder on a flash drive, so that when I switch test machines (which happens frequently) it takes no effort to have my sample files on hand.
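If your project tree already has files of every type scattered through it, building that sample folder can be automated. The sketch below (a minimal example; the directory names are placeholders, not anything from a real project) walks a source tree and copies the first file it finds of each extension into a samples folder:

```python
import shutil
from pathlib import Path

def collect_samples(source_dir: str, samples_dir: str) -> dict:
    """Copy one file of each extension from source_dir into samples_dir.

    Returns a mapping of extension -> sample file name.
    """
    samples = {}
    dest = Path(samples_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(source_dir).rglob("*")):
        ext = path.suffix.lower()
        if path.is_file() and ext and ext not in samples:
            # First file seen with this extension becomes the sample.
            shutil.copy2(path, dest / path.name)
            samples[ext] = path.name
    return samples
```

Point it at your project directory and a folder on the flash drive, and re-run it whenever the application picks up a new file type.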
Percent connection-pool utilization
When performance testing, a lot of time gets spent calibrating your tests. To do this effectively, you often have to calibrate using multiple methods. One method I use is to look at percent connection-pool utilization.

This is a specific example of a general metric. For any finite resource that might be important to your system, look at how that resource is utilized over your run and compare that to your target numbers. For example, if the production environment never uses more than 60% of its available connections, but your test drives utilization up to 90%, you might need to adjust your tests. Other things you might look at include CPU utilization, memory utilization, and average queue depth.
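The comparison itself is simple arithmetic. Here's one way it might be sketched, assuming you've sampled active-connection counts during the run (the function names and the sampling mechanism are illustrative, not from any particular tool):

```python
def pool_utilization(active_connections: int, pool_size: int) -> float:
    """Percent of the connection pool in use at one sample point."""
    return 100.0 * active_connections / pool_size

def calibration_check(samples, pool_size, production_peak_pct):
    """Compare a test run's peak utilization against the production peak.

    samples: active-connection counts captured during the run.
    """
    test_peak = max(pool_utilization(n, pool_size) for n in samples)
    return {
        "test_peak_pct": test_peak,
        "production_peak_pct": production_peak_pct,
        "over_target": test_peak > production_peak_pct,
    }
```

The same shape works for any finite resource: swap in CPU samples, memory samples, or queue depths against their own production targets.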

You'll need to have some idea of what sessions might look like in production (both actual and forecasted).
Concurrent live and active sessions
When performance testing, a lot of time gets spent calibrating your tests. To do this effectively, you often have to calibrate using multiple methods. One method I use is to look at concurrent live and active sessions.

I happen to do a lot of web testing, so sessions can be a big deal. Looking at the number of concurrent live and active sessions generated by my load test and comparing that to the production environment can give me an idea of whether I've got the right number of users in the test at a given period of time, or the right amount of user session abandonment.

For your application, it might be important to recognize that different users might have different session sizes, abandonment rates, and time-out rates. You'll need to have some idea of what sessions might look like in production (both actual and forecasted). If your tool allows it, try to build in a way to programmatically track and throttle these numbers as needed. It might save you a lot of time.
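What "track and throttle" might look like depends entirely on your tool; as a sketch only, here's one hypothetical shape for it, where the test harness records session activity and asks the throttle before starting another virtual user (the class and its timeout model are assumptions, not a real tool's API):

```python
import time

class SessionThrottle:
    """Track live sessions during a load test and gate new arrivals.

    target_live is the concurrent-session count observed (or forecast)
    in production; session_timeout mirrors the server's idle timeout.
    """
    def __init__(self, target_live: int, session_timeout: float):
        self.target_live = target_live
        self.session_timeout = session_timeout
        self.last_seen = {}  # session id -> last activity timestamp

    def touch(self, session_id, now=None):
        """Record activity for a session (a request, a click, etc.)."""
        self.last_seen[session_id] = now if now is not None else time.time()

    def live_count(self, now=None) -> int:
        """Drop sessions idle past the timeout, then count what's left."""
        now = now if now is not None else time.time()
        self.last_seen = {s: t for s, t in self.last_seen.items()
                          if now - t < self.session_timeout}
        return len(self.last_seen)

    def can_start(self, now=None) -> bool:
        """True if the test may start another virtual user."""
        return self.live_count(now) < self.target_live
```

Different user populations (different abandonment or time-out rates) could each get their own throttle instance with their own targets.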
Transactions over a period of time
When performance testing, a lot of time gets spent calibrating your tests. To do this effectively, you often have to calibrate using multiple methods. One method I use is to look at transactions over a period of time.

When looking at transactions over a period of time, your concern is that your test isn't doing too much or too little over the time interval. Examples might include logins/logouts per minute, form or file submission rates per minute, reports generated per minute, web service calls per second, searches per second, etc. The idea is that you are looking to roughly approximate load (as determined by transactions) with your test.

Likely, you won't just be looking at one transaction. You'll be looking at many of them. It's a bit of a balancing act. Tuning one might knock one of your other transactions out of whack. You then get to tune again.

You'll need to have some idea of what these transactions might look like in production (both actual and forecasted). You might decide to build a couple of different scenarios based on the data you get. If your tool allows it, try to build in a way to programmatically track and throttle these numbers as needed. It might save you a lot of time.
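The balancing act across many transactions is easier when the comparison is automated. A minimal sketch, assuming you have raw counts from a run and per-minute targets from production data (the function names and the 10% tolerance are placeholders you'd replace with your own):

```python
def rates_per_minute(counts: dict, duration_seconds: float) -> dict:
    """Convert raw transaction counts from a run into per-minute rates."""
    return {name: count * 60.0 / duration_seconds
            for name, count in counts.items()}

def out_of_band(rates: dict, targets: dict, tolerance_pct: float = 10.0) -> dict:
    """Flag transactions whose rate drifts from target beyond tolerance."""
    flagged = {}
    for name, target in targets.items():
        rate = rates.get(name, 0.0)
        drift = 100.0 * abs(rate - target) / target
        if drift > tolerance_pct:
            flagged[name] = {"rate": rate, "target": target,
                             "drift_pct": drift}
    return flagged
```

Run it after each tuning pass: when tuning one transaction knocks another out of whack, it shows up in the flagged set and tells you where to tune next.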