CPSC 333: Development of a Test Plan



This material was briefly covered during lectures on February 19, 1997; a few additional details have been added in these online notes.


Purpose

Requirements specifications are generally expected to include ``test plans,'' which give (or describe) tests that will be used to verify that a system to be developed will be consistent with the specification. These tests include the inputs that will be supplied to the system when the test is executed, as well as the outputs that will be considered acceptable if they are received.

There are at least two good reasons to begin test design during requirements specification, instead of waiting until after a system has been designed and implemented.

  1. This reduces the chance that a misunderstanding of requirements will be ``designed into the tests.'' That is, if designers or system implementers misunderstand the requirements (and code the misunderstanding into the system), tests developed earlier are less likely to also look for (and identify as acceptable) the same ``wrong'' behaviour.
  2. This provides a way to ``test the specification:'' A requirement is only useful if there is a way to confirm that the requirement is satisfied. If the requirement can't be tested, then the requirement is unsatisfactory, and the requirements specification (that includes it) is unsatisfactory, as well. Thus, developing a test plan during requirements analysis and specification provides another check that requirements specification has been completed correctly.

We'll assume, once again, that ``structured development'' is being used. Similar remarks can be made for ``object-oriented development'' (and, we'll consider ``object-oriented testing'' later on in the course).

Finally, test design will be discussed in more detail, later on in the course. Some of the following points will be clarified or elaborated on, then.

Testing the Data Subsystem

Try to include tests that will result in the creation of a new instance, modification of an existing instance (if this isn't an instance of a ``relationship''), display of an instance, and deletion of an instance, for each component of the ERD.

Don't forget the error cases! Thus, you should also try to add a new instance whose primary key is identical to one that already exists. You should also attempt to modify, delete, and display an instance, given as input a ``primary key'' that isn't in use.
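As a small sketch of these tests (assuming a hypothetical in-memory store keyed by primary key; the class and method names here are invented for illustration, not part of the specification), the duplicate-key and unknown-key error cases might be exercised as follows:

```python
# Hypothetical in-memory data store keyed by primary key.
# The names and behaviour are illustrative only.
class Store:
    def __init__(self):
        self.records = {}

    def create(self, key, value):
        # Error case: adding an instance whose primary key already exists.
        if key in self.records:
            raise KeyError("duplicate primary key: %s" % key)
        self.records[key] = value

    def modify(self, key, value):
        # Error case: modifying an instance whose key isn't in use.
        if key not in self.records:
            raise KeyError("unknown primary key: %s" % key)
        self.records[key] = value

    def delete(self, key):
        # Error case: deleting an instance whose key isn't in use.
        if key not in self.records:
            raise KeyError("unknown primary key: %s" % key)
        del self.records[key]

# Normal cases: create and then modify an instance.
store = Store()
store.create("12345678", {"name": "A. Student"})
store.modify("12345678", {"name": "B. Student"})

# Error cases: duplicate key on create, unknown key on delete.
try:
    store.create("12345678", {})
except KeyError:
    print("duplicate key rejected")

try:
    store.delete("99999999")
except KeyError:
    print("unknown key rejected")
```

A real test plan would record each of these as a separate test, with its starting data conditions, input, and expected outcome written out explicitly.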

If there are nontrivial performance requirements involving system data, try to think of tests that can be performed - or statistics that can be gathered (as a set of tests is performed) - in order to check whether the performance requirements are satisfied.

Testing System Functions

Start by considering the event list (and associated message threads, for object-oriented analysis). Design at least one test for each event that corresponds to ``normal'' or ``error-free'' use. As well, design a test for each kind of ``error'' that the process handling the event is expected to check for and report.

An input (and output) list for the event's process can also be useful: If minimum or maximum allowable values have been specified for inputs (and outputs), try to design tests involving use of the ``boundary'' values for these inputs (and outputs), as well as illegal values that lie just outside the boundaries, and legal ``typical'' values that are inside (and nowhere near) the boundaries.
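For instance, if an input were specified to lie between 1 and 100 inclusive (a range invented purely for illustration), the boundary values, the illegal values just outside them, and a typical legal value could be enumerated like this:

```python
# Hypothetical specified range for an input; the limits are invented.
MIN_ALLOWED, MAX_ALLOWED = 1, 100

def is_legal(value):
    """Check a value against the specified minimum and maximum."""
    return MIN_ALLOWED <= value <= MAX_ALLOWED

# Boundary values, illegal values just outside the boundaries,
# and a typical legal value nowhere near the boundaries.
test_values = [
    ("lower boundary", MIN_ALLOWED, True),
    ("upper boundary", MAX_ALLOWED, True),
    ("just below minimum", MIN_ALLOWED - 1, False),
    ("just above maximum", MAX_ALLOWED + 1, False),
    ("typical value", 50, True),
]

for name, value, expected in test_values:
    assert is_legal(value) == expected, name
print("all boundary tests passed")
```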

For object-oriented testing, try to design a test that executes each ``message thread'' that you identified when services and message connections were specified.

For either function-oriented (``structured'') or object-oriented development, try to think of ways to check any nontrivial performance requirements that have been identified for system functions. You may need to run a set of tests, gather statistics, and compare the statistical values to specified ``targets'' or ``limits'' in order to check that these are satisfied.

Testing Nonfunctional Requirements

The testing of nonfunctional requirements is (for the most part) beyond the scope of this course. CPSC 481 includes a substantial amount of material about assessing a human-computer interface.

A Small Example

Once again, consider Version One of the Student Information System. The second event that was identified when data flow diagrams were developed for this example was as follows.

Event Number: 2
Event Name: Student passes the course
Inputs: ID number
Outputs: status message
Error Conditions:
  1. Syntactically incorrect input
  2. The given ID number is not in use
  3. The given ID number corresponds to a student who has already passed the course

We'll assume (as before) that a simple interface will be used, so that the user is required to type in an ID number, even when it's one that is supposed to be in use already. Thus, the error conditions listed above all make (some) sense. It's easy to see that these error conditions are exclusive - two or more can't occur at once - so this event list suggests four situations that we should try to test:

  1. The user requests that a student pass, and correctly supplies an ID number for an existing student who is registered in the course.
  2. The user requests that a student pass, and supplies a syntactically incorrect ID number as input.
  3. The user requests that a student pass, and supplies a syntactically correct, but unused, ID number as input.
  4. The user requests that a student pass, and supplies the ID number of a student who has already passed the course, as input.

Now, in order to specify a corresponding set of tests, we'll need to specify some conditions that the system data is expected to satisfy when the test begins, as well as the test's input and expected output. In particular, we'll require the system data to satisfy the following conditions when the tests begin:

  1. The ID number 12345678 belongs to a student who is registered in the course and has not yet passed it.
  2. The ID number 87654321 belongs to a student who has already passed the course.
  3. The ID number 11111111 is not in use.

Four tests that correspond to the four situations (given above) are as follows.

Test #1
Input: 12345678
Expected Output: The student has now passed the course.
Test #2
Input: 12A45678
Expected Output: This is not a syntactically valid ID number; please enter an 8-digit integer.
Test #3
Input: 11111111
Expected Output: There is no known student with this ID number.
Test #4
Input: 87654321
Expected Output: This student has already passed the course, and can't pass it again.

The above status messages aren't great; better ones might have been selected (and should appear here, instead) when the human-computer interface was defined. However, it should be the case that inputs, and expected outputs, can be specified for each test.
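The four tests could be driven by a small table. The sketch below assumes a hypothetical pass_student function and an in-memory record of student statuses (both invented for illustration); the starting data match the conditions assumed above:

```python
# Starting data conditions: 12345678 is registered and hasn't passed,
# 87654321 has already passed, and 11111111 is not in use.
students = {"12345678": "registered", "87654321": "passed"}

def pass_student(id_number):
    # Hypothetical handler for the "student passes the course" event.
    if not (id_number.isdigit() and len(id_number) == 8):
        return ("This is not a syntactically valid ID number; "
                "please enter an 8-digit integer.")
    if id_number not in students:
        return "There is no known student with this ID number."
    if students[id_number] == "passed":
        return ("This student has already passed the course, "
                "and can't pass it again.")
    students[id_number] = "passed"
    return "The student has now passed the course."

# The four tests, as (input, expected output) pairs.
tests = [
    ("12345678", "The student has now passed the course."),
    ("12A45678", "This is not a syntactically valid ID number; "
                 "please enter an 8-digit integer."),
    ("11111111", "There is no known student with this ID number."),
    ("87654321", "This student has already passed the course, "
                 "and can't pass it again."),
]

for given_input, expected in tests:
    assert pass_student(given_input) == expected
print("all four tests passed")
```

Note that the tests must be run against the stated starting conditions: once Test #1 has passed student 12345678, repeating that test without resetting the data would exercise the ``already passed'' error case instead.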

You might add additional tests after examining the data dictionary definition for ``ID number,'' in order to test syntax checking more thoroughly. For example, 7-digit, 9-digit, and negative numbers might also be appropriate as inputs for tests.
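Assuming the data dictionary defines an ID number as an 8-digit integer (an assumption for this sketch), those extra syntax tests might look like this:

```python
def is_valid_id_number(text):
    # Assumed data dictionary definition: exactly eight decimal digits.
    return text.isdigit() and len(text) == 8

# Additional syntax tests: 7-digit, 9-digit, and negative inputs
# should all be rejected; a proper 8-digit number is accepted.
assert not is_valid_id_number("1234567")    # 7 digits
assert not is_valid_id_number("123456789")  # 9 digits
assert not is_valid_id_number("-12345678")  # negative
assert is_valid_id_number("12345678")       # valid 8-digit ID
print("syntax tests passed")
```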

You might also try to test performance requirements (if any are supplied) by measuring the response time for the system's handling of this event, requesting the corresponding command repeatedly and quickly (if a ``firing rate'' for the event and its process were specified), and so on.
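Measuring the response time for one handling of the event can be as simple as the following sketch (the handler is a stand-in, and the 1-second target is invented; a real test would use the specified performance requirement):

```python
import time

def handle_event():
    # Stand-in for the system's handling of the event;
    # the delay simulates some processing time.
    time.sleep(0.01)

TARGET_SECONDS = 1.0  # invented performance target

start = time.perf_counter()
handle_event()
elapsed = time.perf_counter() - start

assert elapsed < TARGET_SECONDS
print("response time %.3f s is within the target" % elapsed)
```

Checking a specified ``firing rate'' would instead repeat the request in a timed loop and gather statistics over many handlings of the event.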

A similar set of tests could be produced for each event in an event list for this system. If a menu (and a set of menu selections) will be used to prompt the user for a selection of a command, then a few tests could be added for improper use of the menu (in which the user makes invalid menu selections, in a variety of ways).

Finally, a more helpful list of tests (than the one given here) would include cross references, so that the requirements ``covered'' or ``exercised'' by each test are listed along with the test itself.



Department of Computer Science
University of Calgary

Office: (403) 220-5073
Fax: (403) 284-4707

eberly@cpsc.ucalgary.ca