CS615 – Software Engineering I

Lecture 7

Chapter 13: Software Testing Strategies

Strategic Approach to Software Testing

Criteria for Completion of Testing


 

f(t) = (1/p) ln [ l0 p t + 1 ]

where

f(t)  =  cumulative number of failures that are expected to occur once the software has been tested for a certain amount of execution time, t,

l0  =  the initial software failure intensity (failures per time unit) at the beginning of testing,

p  =  the exponential reduction in failure intensity as errors are uncovered and repairs are made.

 

l(t) = l0 / ( l0 p t + 1 )
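The two formulas above can be sketched numerically. The parameter values below (l0 = 10 failures per unit of execution time, p = 0.05) are illustrative choices, not values from the lecture:

```python
import math

def expected_failures(t, l0, p):
    """Cumulative expected failures after execution time t
    (logarithmic Poisson execution-time model)."""
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def failure_intensity(t, l0, p):
    """Current failure intensity l(t), the derivative of expected_failures."""
    return l0 / (l0 * p * t + 1.0)

l0, p = 10.0, 0.05                                 # assumed illustrative values
print(failure_intensity(0.0, l0, p))               # at t = 0 the intensity equals l0
print(failure_intensity(100.0, l0, p) < l0)        # intensity drops as testing proceeds
```

Plotting l(t) against the intensity measured during actual testing is what lets a tester predict when a reliability target will be reached.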

Strategic Testing Issues

Unit Testing
Focuses verification effort on the smallest unit of software design—the software component or module

Common errors in computation:
(1) misunderstood or incorrect arithmetic precedence
(2) mixed-mode operations
(3) incorrect initialization
(4) precision inaccuracy
(5) incorrect symbolic representation of an expression

Common errors in comparison and control flow:
(1) comparison of different data types
(2) incorrect logical operators or precedence
(3) expectation of equality when precision error makes equality unlikely
(4) incorrect comparison of variables
(5) improper or nonexistent loop termination
(6) failure to exit when divergent iteration is encountered
(7) improperly modified loop variables
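Two of the errors above can be shown in a minimal Python sketch: expecting exact floating-point equality, and misreading arithmetic precedence:

```python
import math

# "expectation of equality when precision error makes equality unlikely":
# ten additions of 0.1 do not sum to exactly 1.0 in binary floating point,
# so a good unit test compares with a tolerance rather than ==.
total = sum([0.1] * 10)
assert total != 1.0                              # the naive == comparison fails
assert math.isclose(total, 1.0, rel_tol=1e-9)    # tolerance-based check passes

# "misunderstood or incorrect arithmetic precedence":
# multiplication binds tighter than addition, so 2 + 3 * 4 is 14, not 20.
assert 2 + 3 * 4 == 14
```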

 

Integration Testing

1.      Main control module used as a test driver and stubs are substitutes for components directly subordinate to it.

2.      Subordinate stubs are replaced one at a time with real components (following the depth-first or breadth-first approach).

3.      Tests are conducted as each component is integrated.

4.      On completion of each set of tests, another stub is replaced with the real component.

5.      Regression testing may be used to ensure that new errors have not been introduced.
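Steps 1–3 can be sketched in Python; MainController and ReportStub are hypothetical stand-ins for a main control module and a stub for its subordinate component:

```python
class ReportStub:
    """Stands in for the real report generator (step 1): canned answer, no logic."""
    def render(self, data):
        return "STUB-REPORT"

class MainController:
    """Hypothetical main control module, acting as the test driver's subject."""
    def __init__(self, report):
        self.report = report          # subordinate injected: stub now, real later

    def run(self, data):
        return self.report.render(data)

# Step 3: the main control module is tested before the real component exists.
controller = MainController(ReportStub())
assert controller.run([1, 2, 3]) == "STUB-REPORT"
```

In step 4, ReportStub would be swapped for the real report generator and the same tests rerun.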

 

1.      Low-level components are combined in clusters that perform a specific software function.

2.      A driver (control program) is written to coordinate test case input and output.

3.      The cluster is tested.

4.      Drivers are removed and clusters are combined moving upward in the program structure.
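A minimal sketch of steps 1–3, assuming a hypothetical two-component cluster that computes an order total:

```python
# Low-level cluster (step 1): two components combined for one function.
def item_subtotal(price, qty):
    return price * qty

def order_total(lines):
    return sum(item_subtotal(p, q) for p, q in lines)

def cluster_driver():
    """Throwaway driver (step 2): coordinates test-case input and output."""
    cases = [
        ([(10.0, 2), (5.0, 1)], 25.0),   # typical order
        ([], 0.0),                        # empty order
    ]
    for lines, expected in cases:         # step 3: the cluster is tested
        assert order_total(lines) == expected
    return "cluster passed"

print(cluster_driver())   # the driver is discarded once the cluster moves upward
```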

 

The regression test suite contains three different classes of test cases:

1.      A representative sample of existing test cases is used to exercise all software functions.

2.      Additional test cases focus on software functions likely to be affected by the change.

3.      Test cases focus on the changed software components.

 

With each passing day, more of the software has been integrated and more has been demonstrated to work

1.      Software components already translated into code are integrated into a build.

2.      A series of tests designed to expose errors that would keep the build from performing its function is created.

3.      The build is integrated with the other builds and the entire product is smoke tested daily (either top-down or bottom-up integration may be used).
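A minimal daily smoke-test sketch, assuming a build whose checks are plain Python callables (the check names and stand-in modules are hypothetical):

```python
def build_imports():
    """Does the build load at all? (json/sqlite3 stand in for product modules.)"""
    import json, sqlite3
    return True

def core_function_runs():
    """One shallow end-to-end 'does it turn on' check."""
    return sorted([3, 1, 2]) == [1, 2, 3]

SMOKE_CHECKS = [build_imports, core_function_runs]

def smoke_test():
    """Run every check; an empty list means the build is stable enough to test further."""
    return [check.__name__ for check in SMOKE_CHECKS if not check()]

assert smoke_test() == []
```

A failing check name in the returned list tells the team which build-breaking error to fix before deeper testing resumes.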

Integration Test Documentation

Object-Oriented Testing

testing strategy changes
  • the concept of the ‘unit’ broadens due to encapsulation

  • integration focuses on classes and their execution across a ‘thread’ or in the context of a usage scenario

     

    Broadening the View of “Testing”

      • Review of OO analysis and design models is useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code level.
      • Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).  

     

    Testing the CRC Model

    1.  Revisit the CRC model and the object-relationship model.

    2.  Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator’s definition.

    3.  Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.

    4.  Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.

    5.  Determine whether widely requested responsibilities might be combined into a single responsibility.

    6.  Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.

     

    OOT Strategy

    •        class testing is the equivalent of unit testing

    •        integration testing applies three different strategies: thread-based testing, use-based testing, and cluster testing

     

     

    Validation Testing

    Acceptance Testing

    System Testing

    Debugging

    Bug Removal Considerations

     

    Chapter 14: Software Testing Techniques

    Testability

    •      Operability—it operates cleanly

    •      Observability—the results of each test case are readily observed

    •      Controllability—the degree to which testing can be automated and optimized

    •      Decomposability—testing can be targeted

    •      Simplicity—reduce complex architecture and logic to simplify tests

    •      Stability—few changes are requested during testing

    •      Understandability—of the design

    What is a “Good” Test?

    •     A good test has a high probability of finding an error.

    •     A good test is not redundant.

    •     A good test should be “best of breed”.

    •     A good test should be neither too simple nor too complex.

    Exhaustive Testing

     

    Selective Testing

     

     

    White-Box Testing

    Testing from the inside: tests that exercise the actual program structure.

     

    Basis path testing

    Derive a basis set of independent execution paths and test each one; this guarantees that every statement in the program executes at least once.

     

    Loop tests

    Exercise each DO, WHILE, FOR, and other repeating statement several times.
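A loop-test sketch in Python: the iteration counts follow the usual simple-loop heuristics (skip the loop entirely, one pass, two passes, a typical count, and the maximum; the maximum of 100 is an assumption for the example):

```python
def first_n_squares(n):
    """Returns [0, 1, 4, ...] of length n; the while loop is the loop under test."""
    out = []
    i = 0
    while i < n:
        out.append(i * i)
        i += 1
    return out

# Loop-test values: 0 (skipped), 1, 2, a typical count, the assumed maximum.
for n in (0, 1, 2, 5, 100):
    assert len(first_n_squares(n)) == n

assert first_n_squares(3) == [0, 1, 4]
```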

     

    Input tests

    •        Each procedure should be tested to make certain that the procedure actually received the data sent to it.

    •        Finds type mismatches, bad pointers, and other such bugs (these are common!)

     

     

    Graph Matrices

    •     A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on a flow graph.

    •     Each row and column corresponds to an identified node, and matrix entries correspond to connections (an edge) between nodes.

    •     By adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating program control structure during testing.
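A minimal sketch of a graph matrix with 0/1 link weights, using an assumed four-node flow graph:

```python
# Graph matrix for a small flow graph (4 nodes; the edges are an assumed example).
# Entry [i][j] = 1 is the link weight: an edge exists from node i to node j.
N = 4
matrix = [[0] * N for _ in range(N)]
for i, j in [(0, 1), (1, 2), (1, 3), (2, 3)]:
    matrix[i][j] = 1

# Each row sum is that node's number of outgoing connections; a row with
# more than one connection marks a predicate node, and V(G) = predicates + 1.
edges = sum(map(sum, matrix))
predicates = sum(1 for row in matrix if sum(row) > 1)
cyclomatic = predicates + 1

assert cyclomatic == edges - N + 2    # agrees with V(G) = E - N + 2
```

With weights other than 0/1 (e.g., execution probability or processing time), the same matrix supports richer control-structure analysis.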

    Control Structure Testing

    •     Condition testing — a test case design method that exercises the logical conditions contained in a program module

    •     Data flow testing — selects test paths of a program according to the locations of definitions and uses of variables in the program

     

    Black-Box Testing

    •     How is functional validity tested?

    •     How are system behavior and performance tested?

    •     What classes of input will make good test cases?

    •     Is the system particularly sensitive to certain input values?

    •     How are the boundaries of a data class isolated?

    •     What data rates and data volume can the system tolerate?

    •     What effect will specific combinations of data have on system operation?
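One way to answer the boundary question above is boundary-value analysis: test just below, on, and just above each boundary. A sketch for a hypothetical 0–100 percentage rule:

```python
def valid_percentage(x):
    """Hypothetical input rule under test: valid values are 0..100 inclusive."""
    return 0 <= x <= 100

# Boundary-value cases: just below, on, and just above each boundary.
boundary_cases = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert valid_percentage(value) == expected
```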

     

    Comparison Testing

    •     Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems)

    •     Separate software engineering teams develop independent versions of an application using the same specification

    •     Each version can be tested with the same test data to ensure that all provide identical output

    •     Then all versions are executed in parallel with real-time comparison of results to ensure consistency
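A toy sketch of comparison testing, with sorting standing in for the shared specification and two independently written versions run on the same test data:

```python
def version_a(xs):
    """Team A's implementation: the built-in sort."""
    return sorted(xs)

def version_b(xs):
    """Team B's independent implementation: insertion sort."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# The same test data drives both versions; any mismatch flags a defect
# in at least one of them (or an ambiguity in the specification).
test_data = [[3, 1, 2], [], [5, 5, 1], [9]]
assert all(version_a(d) == version_b(d) for d in test_data)
```

Note that if both teams misread the specification the same way, identical (wrong) outputs would still agree, which is why comparison testing supplements rather than replaces other techniques.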

     

    Orthogonal Array Testing

    •     Used when the number of input parameters is small and the values that each of the parameters may take are clearly bounded.
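The idea can be illustrated with the standard L4(2^3) orthogonal array: four runs cover every pairwise combination of three two-level parameters, versus eight runs for the full factorial:

```python
from itertools import combinations, product

# L4(2^3) orthogonal array: each row is one test run; each column is one
# two-level parameter (0 or 1).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Verify the defining property: every pair of columns exhibits all four
# level combinations (0,0), (0,1), (1,0), (1,1).
for a, b in combinations(range(3), 2):
    seen = {(run[a], run[b]) for run in L4}
    assert seen == set(product((0, 1), repeat=2))

print(len(L4), "runs instead of", 2 ** 3)
```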

    OOT—Test Case Design

    1.      Each test case should be uniquely identified and should be explicitly associated with the class to be tested.

    2.      The purpose of the test should be stated.

    3.      A list of testing steps should be developed for each test and should contain [BER94]:

    a.      a list of specified states for the object that is to be tested

    b.      a list of messages and operations that will be exercised as a consequence of the test

    c.      a list of exceptions that may occur as the object is tested

    d.      a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)

    e.      supplementary information that will aid in understanding or implementing the test.

     

    Testing Methods

    •      Fault-based testing

    •     The tester looks for plausible faults (i.e., aspects of the implementation of the system that may result in defects). To determine whether these faults exist, test cases are designed to exercise the design or code.

    •      Class Testing and the Class Hierarchy

    •     Inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process.

    •      Scenario-Based Test Design

    •     Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as tests.

     

     

    OOT Methods

    Random testing

    •    identify operations applicable to a class

    •    define constraints on their use

    •    identify a minimum test sequence — an operation sequence that defines the minimum life history of the class (object)

    •    generate a variety of random (but valid) test sequences — exercise other (more complex) class instance life histories
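A random-testing sketch for a hypothetical Account class, with open() ... close() as the assumed minimum life history and a fixed seed so the random sequences are repeatable:

```python
import random

class Account:
    """Hypothetical class under test with a simple state-based contract."""
    def __init__(self):
        self.is_open = False
        self.balance = 0
    def open(self):
        self.is_open = True
    def deposit(self, amount):
        assert self.is_open            # constraint: only valid while open
        self.balance += amount
    def withdraw(self, amount):
        assert self.is_open and amount <= self.balance
        self.balance -= amount
    def close(self):
        self.is_open = False

rng = random.Random(7)                 # fixed seed: test sequences are repeatable
for _ in range(25):
    acct = Account()
    acct.open()                        # minimum life history begins...
    for _ in range(rng.randint(0, 6)):
        # ...a random but constraint-respecting middle...
        if acct.balance == 0 or rng.random() < 0.6:
            acct.deposit(rng.randint(1, 9))
        else:
            acct.withdraw(rng.randint(1, acct.balance))
    acct.close()                       # ...and ends with close
    assert acct.balance >= 0           # invariant checked on every sequence
```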

     

    Partition Testing

    •    reduces the number of test cases required to test a class in much the same way as equivalence partitioning for conventional software

    •    state-based partitioning — categorize and test operations based on their ability to change the state of a class

    •    attribute-based partitioning — categorize and test operations based on the attributes that they use

    •    category-based partitioning — categorize and test operations based on the generic function each performs
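State-based partitioning can be sketched by executing each operation of a hypothetical Account class and checking whether the object's state actually changed; state-changing and state-preserving operations then land in separate partitions:

```python
import copy

class Account:
    """Hypothetical class whose operations are being partitioned."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount=1):
        self.balance += amount
    def withdraw(self, amount=1):
        self.balance -= amount
    def summarize(self):
        return f"balance={self.balance}"

def changes_state(op_name):
    """Run one operation on a fresh object and compare state snapshots."""
    acct = Account()
    acct.deposit(5)                    # reach a non-trivial starting state
    before = copy.deepcopy(vars(acct))
    getattr(acct, op_name)()
    return vars(acct) != before

# deposit and withdraw fall in the state-changing partition; summarize does not.
assert changes_state("deposit") and changes_state("withdraw")
assert not changes_state("summarize")
```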

     

    Inter-class testing

    •    For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other server classes.

    •    For each message that is generated, determine the collaborator class and the corresponding operator in the server object.

    •    For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.

    •    For each of the messages, determine the next level of operators that are invoked and incorporate these into the test sequence.