CPSC 333: System Testing



This material was not covered in lectures in Winter, 1997, and can be considered to be optional.


Introduction

Software is sometimes only one component of a larger ``computer-based system.'' Under these circumstances, the entire system must be integrated and tested after the software has been completed.

Planning and conducting system tests is at least partly ``outside the scope'' of software development, and software developers might not have much control over the tests to be performed.

In order to be prepared for system testing, developers should make sure that the software validates all inputs it receives from other system components (including hardware and a data base, as well as people), and that it reports any problems and fails gracefully when it receives invalid input from another component.
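As a minimal sketch of this kind of defensive validation (the component names and the acceptable range are hypothetical, not from the course notes):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sensor-interface")

def read_temperature(raw: str) -> float:
    """Validate a reading received from a (hypothetical) hardware sensor.

    Malformed or out-of-range input is reported and rejected with a clear
    error, instead of being allowed to propagate into the rest of the system.
    """
    try:
        value = float(raw)
    except ValueError:
        log.error("sensor sent non-numeric reading: %r", raw)
        raise ValueError(f"invalid sensor reading: {raw!r}")
    if not (-50.0 <= value <= 150.0):   # assumed plausible range for this sensor
        log.error("sensor reading out of range: %s", value)
        raise ValueError(f"sensor reading out of range: {value}")
    return value
```

The point is that the software's reaction to a misbehaving neighbouring component is itself well-defined, which is exactly what system testing will probe.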

Finally, it should be noted that the system tests described below should be conducted even if the software development isn't part of a larger ``systems development'' effort. For example, if you're developing commercial software (rather than building a system for a single client), and that software is to run on an operating system that could be used with more than one kind of machine, then you should probably be conducting these ``systems tests'' on multiple platforms instead of a single one.

Things get even more complicated if you're dealing with a software product with multiple versions available at once (possibly for different operating systems as well as different machines). This is beyond the scope of this course.

However, more information about all of this (including software management) can be found in recent editions of Pressman's Software Engineering: A Practitioner's Approach.

Types of System Tests

Performance Testing

The system is tested to ensure that it meets performance requirements when run on inputs of ``typical'' size and complexity.

Note that you could be conducting at least some performance testing much earlier - from unit testing onward. However, reliable information about which parts of a system will be heavily used, which will be resource bottlenecks, and which may need optimization is frequently unavailable until the entire system has been integrated - so it's quite possible that most performance testing really will need to be conducted this late in the process.

It follows that, if this is the case, you shouldn't spend time trying to optimize any parts of the program until this late, either! When it isn't necessary, optimizing code wastes development time, and can make the code more difficult to maintain later on.
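A performance test of this kind can be as simple as timing an operation on a ``typical'' input and comparing the result against a stated requirement. A minimal sketch (the operation, input size, and one-second limit are assumptions for illustration):

```python
import time

def meets_performance_requirement(operation, typical_input, limit_seconds):
    """Run `operation` on a typical input and check it finishes in time."""
    start = time.perf_counter()
    operation(typical_input)
    elapsed = time.perf_counter() - start
    return elapsed <= limit_seconds

# Example: sorting a "typical" 100,000-element list within one second.
typical = list(range(100_000, 0, -1))
ok = meets_performance_requirement(sorted, typical, limit_seconds=1.0)
```

Note that a pass/fail check like this only makes sense if a quantitative performance requirement was recorded during requirements analysis.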

Recovery Testing

If the system is required to be fault tolerant, then this type of testing is important: During recovery testing, the system is forced to ``fail'' in a variety of ways, and the time and resources needed for (automatic or manual) recovery are measured, to assess recovery procedures.

Note that, if the system is required to be fault tolerant, then requirements related to recovery from failures (such as the maximum time allowed to recover and restart) should have been included during requirements analysis and specification: otherwise, you might not have any way to assess the outcome of these tests.
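The shape of such a test - force a failure, time the recovery, compare against the specified limit - can be sketched as follows (the service, its restart behaviour, and the five-second limit are all invented for illustration):

```python
import time

RECOVERY_LIMIT_SECONDS = 5.0   # assumed figure from the requirements specification

class Service:
    """A toy stand-in for a fault-tolerant system component."""
    def __init__(self):
        self.running = True
    def crash(self):
        self.running = False
    def recover(self):
        time.sleep(0.1)          # simulated restart work
        self.running = True

def recovery_time(service):
    """Force a failure, then measure how long recovery takes."""
    service.crash()
    start = time.perf_counter()
    service.recover()
    return time.perf_counter() - start

svc = Service()
elapsed = recovery_time(svc)
```

In a real recovery test the ``crash'' would be an induced hardware or process failure, and the resources consumed during recovery would be measured as well as the time.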

Security Testing

During security testing, attempts are made to break into the system in order to get access to (or change) privileged information.

One kind of security test that you may already be aware of is running a password assessment program on a multi-user system. Such a program will try to guess the password of every user, and will produce a list of users whose passwords are insecure.

For security testing, ``anything goes:'' any plausible means to break into a system can be used as part of a test. As well, no system will ever be ``completely secure,'' so that the goal of this kind of testing must generally be to ensure that it's difficult to break into a system - not impossible.
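The password assessment described above amounts to a dictionary attack against the stored password hashes. A minimal sketch (real password files use salted, deliberately slow hashes rather than bare SHA-256; the accounts and word list here are invented):

```python
import hashlib

def weak_passwords(accounts, dictionary):
    """Flag users whose stored password hash matches a dictionary word.

    `accounts` maps each user name to the SHA-256 hex digest of that
    user's password.
    """
    flagged = []
    for word in dictionary:
        guess = hashlib.sha256(word.encode()).hexdigest()
        for user, stored in accounts.items():
            if guess == stored:
                flagged.append((user, word))
    return flagged

accounts = {"alice": hashlib.sha256(b"password").hexdigest(),
            "bob":   hashlib.sha256(b"x9$Ttq!7vR").hexdigest()}
weak = weak_passwords(accounts, ["123456", "password", "qwerty"])
# 'alice' is flagged (her password is a dictionary word); 'bob' is not.
```

A user appears in the output exactly when an attacker with the same dictionary could guess that user's password - which is the criterion for ``insecure'' here.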

Stress Testing

Systems are sometimes (maybe even frequently) run under nonuniform, or unexpectedly heavy, loads. Software might also run under conditions such that it's competing with other programs for system resources. (Think about what it's like to use the computer science department machines at certain times of the day - or just before an assignment is due!)

During stress testing, we run the system under much heavier loads than usual - running it with much larger inputs, more frequent input requests, under circumstances inducing thrashing for disk accesses, etc. - essentially to see how far the system can be ``pushed'' until it fails, and to ensure that it fails in a ``graceful'' way as loads become larger than what it can handle.
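One common pattern is to increase the load geometrically until the system refuses it, recording the last load it handled and the first load it rejected. A toy sketch (the service, its capacity, and the doubling schedule are all assumptions for illustration):

```python
class OverloadError(Exception):
    """Raised when the service refuses a load it cannot handle."""

def handle_requests(n, capacity=10_000):
    """Toy service: processes n requests, but refuses loads beyond its
    capacity rather than crashing - the ``graceful'' failure we want."""
    if n > capacity:
        raise OverloadError(f"load {n} exceeds capacity {capacity}")
    return n  # all requests served

def find_breaking_point(service, start=1_000, factor=2):
    """Keep multiplying the load until the service refuses it; report the
    last load it handled and the first load it rejected."""
    load = start
    last_ok = 0
    while True:
        try:
            service(load)
            last_ok = load
            load *= factor
        except OverloadError:
            return last_ok, load

last_ok, first_fail = find_breaking_point(handle_requests)
```

A stress test of a real system would also verify that the refusal is clean - no corrupted data, no hung processes - not merely that the breaking point was found.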



Department of Computer Science
University of Calgary

Office: (403) 220-5073
Fax: (403) 284-4707

eberly@cpsc.ucalgary.ca