CPSC 333 --- Lecture 26 --- Monday, March 18, 1996

System Testing

Reference: Pressman's Practitioner's Guide, Section 19.5

It was mentioned early in the course that software is often only one component in a larger "computer based system." Under these circumstances the entire system must be integrated and tested after the software has been completed.

Planning and conducting system tests is largely "outside the scope" of software development, and software developers won't have much control over the tests to be performed. In order to be *prepared for* system testing, developers should make sure that developed software validates all inputs received from other system components (including hardware and a data base, as well as people), and that the software reports any problems and "fails gracefully" when invalid input is received from other components.

Types of System Tests:

Performance Testing: The system is tested to ensure that it meets performance requirements when run on inputs of "typical" size and complexity.

Note: You could be performing at least some performance testing much earlier --- from unit testing onward. However, *reliable* information about which parts of a system will be heavily used, which will be resource bottlenecks, and which may need optimization will frequently be unavailable until the entire system has been integrated --- so it's quite possible that most performance testing will be conducted this late in the process. It would follow that, if this is the case, you won't *optimize* any parts of the program until this late, either!

Recovery Testing: If the system is required to be "fault tolerant," then this type of testing is important: the system is forced to "fail" in a variety of ways, and the time and resources needed for (automatic or manual) recovery are measured, to assess recovery procedures.

Security Testing: Attempts are made to "break into" the system in order to get access to (or change) privileged information.
One example of "security testing" that you may already be aware of is running a password assessment program on a multi-user system. Such a program will try to guess the password of every user, and will produce a list of users whose passwords are insecure.

For security testing, "anything goes:" *any* plausible means to break into a system can be used as part of a test. As well, no system will ever be "completely secure," so the goal of this kind of testing is probably to ensure that it's *difficult* to break into a system --- not impossible.

Stress Testing: Systems will often be run under "nonuniform" load conditions --- so they should be expected to run well when loads are substantially higher than usual. In "stress testing," we run the system under *much* heavier loads than usual --- running it with much larger inputs, more frequent input requests, under circumstances inducing "thrashing" for disk accesses, etc. --- essentially to see how far the system can be "pushed" before it fails, and to ensure that it fails in a "graceful" way as loads become larger than what it can handle.

Finally, some additional references for software testing:

Brian Marick, The Craft of Software Testing, Prentice-Hall, 1995

This focuses on the testing of subsystems (which you could consider to be part of integration testing). Appendices A and B include two "Test Requirement Catalogs" --- a "student version," for people learning to use the subsystem testing method described in this book, and a full version, for people who have more experience using these methods. You may find this to be useful when you are trying to think of conditions that should be checked when designing tests --- possibly for unit testing, as well as during integration testing.

Darrel Ince, "Software Testing," Chapter 19 of: Software Engineer's Reference Book, edited by John McDermid, CRC Press, 1993

This is a survey article that describes some additional testing techniques (among other things) not covered in class.
----------------------------------------------------------------------

Introduction to Object-Oriented Development

The older "structured" methods for development --- including analysis, design, and implementation --- frequently replaced practices in which virtually no attempt was made to analyze and specify requirements, or to specify a design architecture, before one "jumped into coding." The *code* produced without specifying requirements, etc., was frequently undocumented (as well as unstructured).

Sadly, much of this is still in use today --- it's now called "legacy code," and it's often decades-old FORTRAN or COBOL code that organizations don't understand, but can't afford to replace. When dealing with such code, one sometimes can't even get to (or have a reason to spend time on) the question, "What does this module actually do?", because the more basic question, "Is this module actually ever used?", hasn't been answered yet, either.

Graduates (and some senior students who've experienced co-op or internship terms, or who have worked as programmers before coming back to school) have found the experience of dealing with "legacy code" to be a substantial incentive for learning about requirements analysis and specification, design, etc.

So, it can be argued that using "structured development" techniques is at least better than using no development techniques (apart from jumping straight to coding) at all.
Structured development techniques *do* provide:

- A statement of requirements that can be used as a basis for design, testing, and maintenance

- A way to specify a design, and an implementation, such that each piece of the design or implementation is "traceable" back to the part(s) of the requirements that it satisfies

- A way to design a program that works reasonably well for business applications, without complex or demanding timing or performance requirements, and with a limited (command-line or menu-based) human-computer interface

However:

- The methods that have been described have been, for the most part, top-down, and they have *not* made reuse of specifications, designs, or even code as easy as one might like.

  Critics of the older methods have noted that the time required for analysis and design is substantial, and that specifications and designs are frequently "thrown away completely" after each project --- one has to invest a substantial amount of time on this, project after project. Ideally, one should be able to reuse most or all of this when moving from one project to another that's very similar.

  In addition, because of the emphasis on top-down development (and traceability from the *current* requirements, with so little reuse of the specifications), it's been argued that one frequently ends up writing substantially *more* code using these techniques than might be needed for the same project using different design and coding techniques. Critics have complained that the *quality* of the code or the system hasn't improved, even though so much more code has been written.

  It's been claimed that design techniques that are (at least partly) *bottom-up* might work better if one wishes to reuse code (etc.) more effectively --- and make effective use of software libraries.
  In particular, it's been argued that a more "bottom-up" style might allow a developer to create and maintain a personal library of software that's been developed on past projects and refined over time --- and which reduces the amount of new coding that will be required for future projects. As the demand for new software continues to grow, and it remains necessary to maintain existing software, there is reason to believe that we aren't going to be able to keep up unless we abandon methods that force us to "build everything from scratch."

- The *design architecture* one gets easily using structured design doesn't work very well for systems with highly interactive interfaces (like the WIMP interfaces that take advantage of features supported by Macintosh, X-Windows, or MS-Windows systems): "structured design" produces an architecture that seems to work well when *the system* controls the order in which things happen to a great extent --- while the introduction of a mouse, windows in which one can fill in data in whatever order one wants to, etc., leads to systems in which *the user* has greater control over the order in which things happen, instead.

  Note that software libraries supporting human-computer interfaces were among the first widely used libraries that were developed using object-oriented techniques...

- Information hiding has been "introduced" to structured design fairly late. Other programming support for reuse --- inheritance, etc. --- hasn't been introduced effectively at all, so a "structured design" isn't likely to make effective use of some of the features that modern (object-based and object-oriented) programming languages provide.

*Object-Oriented Development* attempts to improve on structured development by overcoming some of these problems. It is intended to be a development method that allows specifications, designs, and (especially) code to be reused to a much greater extent.
Support for reuse provided by object-oriented programming languages includes:

- Support for information hiding: In an "object-based" language like Modula-2, or a truly "object-oriented" language like Smalltalk or C++, it is possible to define a class or object, along with a public "interface" (a set of functions that can be called by the rest of the system) as well as a private part that can include data structures and additional functions --- which can only be called by functions "within" that object. A positive effect is that, if the "private" part's implementation is changed, then this won't affect the rest of the system unless the interface is changed as well.

  It is possible to write code in an "object-oriented" style in a language like C or Pascal --- to design a set of interface functions for data areas, and then to enforce the rule "these can only be accessed using the interface functions" *yourself* --- but the programming language doesn't help you. With C++, you won't be *allowed* to access the data area by bypassing the interface functions --- a compile-time (or perhaps run-time) error will be the result if you try.

Object-oriented languages also add the following features (which "object-based" languages like Modula-2 typically lack):

- Support for inheritance: A "specialized" class can be defined that "inherits" all functions from a "general" class. Additional functions can be defined for the specialized class, and the "inherited" functions can be redefined. Unless a function in the generalized class is "pure virtual" --- that is, no implementation is provided, so that every specialized class must provide an implementation for it --- a "default" implementation of each inherited function is provided by the generalized class.

  Some object-oriented languages (now, including C++) support "multiple inheritance" --- allowing a class to inherit from *several* generalized classes instead of just one. Other languages (including Smalltalk) support single inheritance only.
- Templates: C++ now allows you to develop code in which types (or classes) are *parameters.* For example, it is possible to write code for "a stack of T," where "T" is a parameter for a type. If you then wish to declare and use a "stack of integers," you can use the template, instantiating "T" as the type of integers. If, later, you wish to use a stack of character strings, then the template can be used again. You can also use the template repeatedly, to define things like "stacks of stacks of integers."

Some languages --- certainly including C++ --- are still changing. The definition of C++ has changed at least twice since the language was first developed. Initially only single inheritance was supported. One major change added support for multiple inheritance, and another (quite recent) change added templates. It won't be surprising if another major change occurs as developers gain experience using this language and identify potential improvements.