Understanding where your testing fits

One of the things that can be difficult when testing in sprints is knowing where your testing fits in the bigger picture. If you're testing a feature that was just developed in the current sprint, it can be hard to know when it will be released, what it will be released with, and what other testing might take place. This creates a stronger need for coordinated test planning than I've seen in more traditional methodologies.

When testing as part of a sprint, the focus is on the specific features being developed. While that might include some regression testing of areas around the change, it's likely going to be shallow. Someone on the team (a test manager, a lead, or someone in the role of test planning for a release) needs to keep the big picture in view. While each tester is focused on their upcoming features, someone else needs to look across the software and identify what risks might be introduced from a wider perspective.

This doesn't mean you need reams of documentation for each release with hundreds of test cases. However, I don't know what software you're testing, so it might mean that... For me, though, I think it means you simply need a light test plan for each release: one where each tester can see where their individual features fit, where someone can look for interdependent risks, and where quality criteria beyond basic functionality can be evaluated more easily and clearly.
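
To make that a little more concrete, here's a rough sketch in Python of the kind of information such a light plan might hold. It's purely illustrative: the names (ReleasePlan, FeatureEntry) and fields are my own invention rather than any standard format. The point is just that one small structure can show where each feature fits, list the cross-feature risks, and name the quality criteria to evaluate.

```python
# A minimal sketch of a "light" release test plan as structured data.
# All names and fields here are hypothetical, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class FeatureEntry:
    name: str        # feature developed during a sprint
    tester: str      # who is focused on testing it
    sprint: str      # which sprint it came from
    notes: str = ""  # any context a reader of the plan needs

@dataclass
class ReleasePlan:
    release: str
    features: list[FeatureEntry] = field(default_factory=list)
    # Risks that cross feature boundaries, the "wider perspective"
    # that individual testers focused on their own features may miss.
    interdependent_risks: list[str] = field(default_factory=list)
    # Quality criteria beyond basic functionality (performance,
    # usability, security, etc.) to evaluate for the release as a whole.
    quality_criteria: list[str] = field(default_factory=list)

    def features_for(self, tester: str) -> list[FeatureEntry]:
        """Let each tester see where their features fit in the release."""
        return [f for f in self.features if f.tester == tester]

# Usage: one object per release keeps the big picture in a single place.
plan = ReleasePlan(
    release="2024.3",
    features=[
        FeatureEntry("CSV export", tester="Ana", sprint="Sprint 41"),
        FeatureEntry("New login flow", tester="Ben", sprint="Sprint 41"),
    ],
    interdependent_risks=["Export and login both touch session handling"],
    quality_criteria=["Performance under load", "Accessibility of new UI"],
)
print([f.name for f in plan.features_for("Ana")])
```

Whether this lives as actual code, a wiki page, or a spreadsheet matters far less than having one place where all three views exist side by side.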

Is there a way to capture that information in one or two pages or charts? That would be ideal. We'll see if anything occurs to me. If you think of something, or already have something, please share.