Test plans serve purposes beyond the reasons I describe above. They provide a formal basis from which to develop repeatable (that is, regression) tests. As systems evolve or as new builds are created during the debug cycle, it is essential to know that the existing stability of the system has not been broken. The best way to achieve this is to be able to run the same tests over and over as each new build is produced. Test plans also provide a basis from which the test strategy can be inspected and discussed by all interested parties.
A good test plan will start with a description of the system to be tested, followed by a brief discussion of the test's objectives. The following elements should be included in the plan:
- The objectives of the test exercise.
- A description of how the tests will be performed. This should explain how much reliance will be placed on key testing components, such as rerunnable test scripts, manual checklists, end-user involvement, and so on.
- A description of the environment in which the test will occur. For example, if your organization supports several base environment configurations, you should clearly state which of them you will be testing against.
- A listing of the test data that will need to be made available for the tests to be valid.
- A discussion of any restrictions placed on the test team that could have an impact on the reliability of the test results. For example, if you are testing a system that is likely to be used by a very large number of people and that accesses a central database, it might be difficult for you to simulate that volume of usage.
- A declaration of the relative importance you place on different criteria: for example, your concern for robustness compared with your concern for performance.
- Any features that you will not be testing, with a commentary explaining why not (to enlighten those who come after you).
- An intended test schedule showing milestones. This should tie into the overall project plan.
Then, using the same breakdown of functionality as presented in the design specification, start to list each test scenario. Each scenario should include the following (one way of capturing a scenario is sketched after the list):
- A reference to the item to be tested
- The expected results
- Any comments that describe how the test results confirm that the item being tested actually works properly (the success criteria)
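As a minimal sketch, each scenario could also be captured as a structured record so that the list is easy to maintain and report on. The field names and the sample entry below are purely illustrative and assume a Python-based tool; adapt them to whatever format your project uses.

```python
from dataclasses import dataclass

@dataclass
class TestScenario:
    """One entry in the test plan's scenario list."""
    item_ref: str          # reference to the item under test (e.g., a design spec section)
    expected_result: str   # what should be observed if the item works correctly
    success_criteria: str  # how the observed result confirms correct behaviour

scenarios = [
    TestScenario(
        item_ref="Design spec 4.2 - customer search",
        expected_result="Searching by surname returns every matching customer",
        success_criteria="Returned rows match the known seed data set exactly",
    ),
]
```

Keeping scenarios in a structure like this also makes it easy to cross-reference them against the design specification.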
Test Scripts
A test script can be a set of instructions addressed either to a user or to another piece of code. Generally speaking, I am referring to code-based test scripts in this section. A good test script should therefore be approached in the same way as the code it is supposed to be testing: it should be designed, documented, commented, and tested. Tested? That doesn't necessarily mean writing a test script for the test script, but it does mean single-stepping through your test code while it runs to ensure that it is doing what you expect it to do. If the code that you are testing is a particularly important piece, the test code should be inspected and walked through just like any other code. The following rules apply to test scripts:
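To make this concrete, here is a minimal sketch of what a designed, documented, and commented code-based test script might look like. Python's unittest module is used purely for illustration, and the function under test (calculate_invoice_total) is a hypothetical stand-in for whatever application code your scripts actually exercise; the same structure applies in any language.

```python
import unittest

# Hypothetical application function under test. In a real project this would
# be imported from the application code rather than defined alongside the test.
def calculate_invoice_total(line_items, tax_rate):
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

class CalculateInvoiceTotalTests(unittest.TestCase):
    """Test script for the invoice-total calculation (item reference: design spec 3.1)."""

    def test_total_includes_tax(self):
        # Expected result: two items at 10.00 and 5.00 with 10% tax total 16.50.
        # Success criterion: the returned value matches exactly, to two decimals.
        total = calculate_invoice_total([(1, 10.00), (1, 5.00)], tax_rate=0.10)
        self.assertEqual(total, 16.50)

    def test_empty_invoice_totals_zero(self):
        # Expected result: an invoice with no line items totals 0.00.
        self.assertEqual(calculate_invoice_total([], tax_rate=0.10), 0.00)

if __name__ == "__main__":
    unittest.main()
```

Note that the docstring and the comments carry the same information as a test plan scenario: the item under test, the expected results, and the success criteria.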
- Test script functionality should be kept in sync with the application code.
- The version/revision number of the test script must match that of the application (one way to enforce this is sketched after the list).
- Test scripts should be version controlled, just like the application code. Use Microsoft Visual SourceSafe (or an equivalent) to keep track of any changes that you make. That way, if you need to roll back to an earlier version of the code for any reason, you will have a valid set of test scripts to go with it.
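As a minimal sketch of the versioning rules, a test script can record the application version it was written against and refuse to run against anything else. The TESTED_APP_VERSION constant and the assumption that the application exposes a version string (the commented-out myapp.__version__ below) are hypothetical; substitute whatever version scheme your project actually uses.

```python
# Version of the application that this test script was written against.
# Update it (and re-check the script) whenever a new application build is tested.
TESTED_APP_VERSION = "2.3.1"

def check_version(app_version: str) -> None:
    """Fail fast if the test script and the application versions have drifted apart."""
    if app_version != TESTED_APP_VERSION:
        raise RuntimeError(
            f"Test script targets version {TESTED_APP_VERSION}, but the application "
            f"reports {app_version}; check out the matching script revision first."
        )

# Assuming the application exposes a version string, the check might be run as:
# import myapp
# check_version(myapp.__version__)
```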