JMeter
If you do performance testing on a regular basis and you haven't used JMeter yet, you need to give it a try. It's free, open source, and very lightweight. I find it's well suited to many day-to-day performance test projects, and because it's lightweight I can create new tests and modify existing ones quickly.

From the website:
Apache JMeter is open source software, a 100% pure Java desktop application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.

Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP Servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.
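
As a quick illustration of how lightweight it is to work with: once you've built and saved a test plan in the GUI, you can run it headless from the command line (the file names here are just placeholders):

    jmeter -n -t my_test_plan.jmx -l results.jtl

The -n flag runs JMeter in non-GUI mode, -t points at the test plan, and -l writes the sampled results to a file you can analyze afterwards.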
Calculating velocity
Catherine Powell had a great post yesterday on calculating velocity. From that post:
"So now we know that each QA engineer can do about 2.5 units of estimated work each week. When we go into the next estimation session, that's where we'll draw the line for test work. We estimate just like we always do, and we then will walk down the list committing to 2.5 units of work per week. When we run out of allotted time, we'll stop."

There are a couple of great tips in that post, and the overall approach of developing a method to calculate velocity is well done.
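
To make the math concrete (with made-up numbers): if a team has three QA engineers doing about 2.5 units of estimated work each per week, a two-week iteration gives roughly 3 x 2.5 x 2 = 15 units of test capacity. Walking down the prioritized list, you commit to items until their estimates add up to about 15 units, then stop.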
Where to start: Automation in an Agile environment
When you are switching from Waterfall to Agile, it's very easy to get caught up in discussions about tools and automation. Which tools are best for unit testing and acceptance testing? What tools will we need for test data and test environment creation?

It may be worthwhile to take a step back and first prioritize what you want to automate. Ask yourself: "Where are tools going to add the best efficiencies?"

One area that often creates the most pain, yet gets overlooked, is test environment creation, installation, and configuration. It's in these early stages of testing that delays most often occur.

It's not sexy and perhaps not as much fun, but for testers, automating these areas first can provide considerable benefits in cost and time savings. Getting developers to start on installation and configuration scripts first, before writing any application code, is another way to streamline the process. This way, you can get the test environment running ahead of any functional testing.
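
To make that concrete, here's a minimal sketch of what an environment-setup script might look like, assuming a web application that needs a config file and a database schema in place before testing can start. The host names, file paths, and commands are hypothetical placeholders, not a prescription for any particular stack.

    # Minimal environment-setup sketch. Everything here (hosts, files,
    # commands) is a hypothetical example, not a real project's layout.
    import configparser
    import subprocess
    from pathlib import Path

    def write_app_config(env_name):
        """Generate the application config for the target test environment."""
        config = configparser.ConfigParser()
        config["database"] = {"host": f"{env_name}-db.internal", "name": "app_test"}
        config["app"] = {"log_level": "DEBUG"}
        path = Path("config") / f"{env_name}.ini"
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("w") as fh:
            config.write(fh)
        return path

    def load_schema(env_name):
        """Apply the schema script to the environment's database (placeholder command)."""
        subprocess.run(
            ["psql", "-h", f"{env_name}-db.internal", "-d", "app_test", "-f", "schema.sql"],
            check=True,
        )

    if __name__ == "__main__":
        config_path = write_app_config("qa1")
        load_schema("qa1")
        print(f"Test environment qa1 is configured; config written to {config_path}")

The point isn't the specific tooling; it's that a script like this, written before (or alongside) the first application code, lets testers stand up a fresh environment on demand instead of waiting on manual setup.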

Remember, it's "Individuals and interactions over processes and tools."
CoEs might need sales people too
Yesterday I read this article by Jill Konrath on 7 Sales Mistakes Guaranteed to Make Your New Service Fail. It reminded me of when I was working to build out a centralized testing group in a large organization. In many ways, I was the business developer for my team within the organization. We may have been a Center of Excellence (CoE), but project teams weren't required to use us. We had to earn our business.

The tips in the article that resonated with me the most were:

Setting up meetings to update customers about the new product or service can lead to trouble. Arranging the meeting isn't the mistake—just its premise. If sales reps tell customers they're bringing information about the new product or service, that's exactly what customers expect the meeting to be about. Sellers then find it exceedingly difficult to switch into a questioning mode—an essential step for determining valid business and financial reasons for changing. Instead they're expected to talk, talk, talk—and boy, do they ever!


and

If salespeople don't have a clearly defined next step implanted in their brains prior to the call, they are doomed. Just sharing exciting new product information gets sellers nowhere. Unless they have a clearly defined objective before the call and are ready to offer logical next steps, they'll be left sitting by the phone waiting for it to ring.


We "sold" automation and performance testing "products" to project teams. Getting teams to use us, and getting them to pay for enhancements to the products we provided, was in every way a sales call. Good advice - read the entire article.
Create multiple versions of your UCML load model
When I'm creating initial load models for an application, I find that I create multiple versions of the UCML model. The first draft is often quite large, with several branches per user; if I were to script it, I might have 25 to 75 scenarios. I use this first model as a discussion tool. What I'm interested in at this stage is whether I've accurately captured what users can and can't do, and roughly how often they do it.

Then, once I feel I've got that well understood, I create a consolidated model that I'll actually use to create and calibrate test scripts. Where I might previously have had 50 scenarios, here I'm going down to 10 (or even under 10 if I feel I can do that without affecting the validity of the testing). I'm looking for the simplest combination of scripts and scenarios that will still produce the load profile I need.
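
As a made-up example of what that consolidation might look like: five browse-and-search variations that each account for 2 to 4 percent of user activity in the detailed model could collapse into a single "browse and search" script weighted at 15 percent, as long as the overall transaction mix and arrival rates still produce the same load profile.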

The reason I take this approach is that I've never been on a project where I've had the time or resources to create all the scenarios. And even if I did, it would likely be a waste. But later in the project, when we find an issue and someone asks why I'm running a simplified load model (one that only loosely resembles the original), I'll have something that shows what we started with and how it maps to what we're currently running. That reduces confusion about why we're running the model we're running.