SE 616 – Introduction to Software Engineering

Lecture 7

Chapter 13: Software Testing Strategies

Strategic Approach to Software Testing

Criteria for Completion of Testing


f(t) = (1/p) ln[l0 p t + 1]

where

f(t)  =  cumulative number of failures that are expected to occur once the software has been tested for a certain amount of execution time, t,

l0  =  the initial software failure intensity (failures per time unit) at the beginning of testing,

p  =  the exponential reduction in failure intensity as errors are uncovered and repairs are made.

The instantaneous failure intensity at time t is the derivative of f(t):

l(t) = l0/(l0 p t + 1)
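The model can be evaluated directly. A minimal Python sketch, with illustrative (made-up) parameter values:

```python
import math

def expected_failures(l0, p, t):
    """Cumulative expected failures f(t) after execution time t
    (logarithmic Poisson execution-time model)."""
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def failure_intensity(l0, p, t):
    """Failure intensity l(t), in failures per unit time, at time t."""
    return l0 / (l0 * p * t + 1.0)

# Illustrative parameters (not from measured data): 10 failures per
# CPU-hour initially, intensity reduction parameter p = 0.05.
l0, p = 10.0, 0.05
for t in (0, 10, 100):
    print(t, round(expected_failures(l0, p, t), 2),
          round(failure_intensity(l0, p, t), 3))
```

As testing proceeds, f(t) grows ever more slowly while l(t) falls, which is the basis for using the model as a completion criterion: stop when the predicted failure intensity drops below an acceptable threshold.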

Strategic Testing Issues

Unit Testing
Focuses verification effort on the smallest unit of software design—the software component or module


Common errors in computation include:

(1) misunderstood or incorrect arithmetic precedence
(2) mixed-mode operations
(3) incorrect initialization
(4) precision inaccuracy
(5) incorrect symbolic representation of an expression

Common errors in comparison and control flow include:

(1) comparison of different data types
(2) incorrect logical operators or precedence
(3) expectation of equality when precision error makes equality unlikely
(4) incorrect comparison of variables
(5) improper or nonexistent loop termination
(6) failure to exit when divergent iteration is encountered
(7) improperly modified loop variables
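Item (3) in the list above deserves a concrete illustration. A short Python sketch of the floating-point precision trap and the usual remedy:

```python
import math

a = 0.1 + 0.2            # neither 0.1 nor 0.2 is exactly representable in binary
print(a == 0.3)          # False: exact equality fails because of precision error
print(math.isclose(a, 0.3, rel_tol=1e-9))   # True: compare within a tolerance
```

A unit test that expects `a == 0.3` would report a spurious failure; comparisons of computed floating-point values should always allow a tolerance.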

 

Integration Testing


 

 

With each passing day, more of the software has been integrated and more has been demonstrated to work

Integration Test Documentation

Object-Oriented Testing

integration focuses on classes and their execution across a ‘thread’ or in the context of a usage scenario

 

Broadening the View of “Testing”

    • Review of OO analysis and design models is useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code level.
    • Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis). 

 

Testing the CRC Model

1.  Revisit the CRC model and the object-relationship model.

2.  Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator’s definition.

3.  Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.

4.  Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.

5.  Determine whether widely requested responsibilities might be combined into a single responsibility.

6.  Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.

 

OOT Strategy

•  class testing is the equivalent of unit testing

    • operations within the class are tested
    • the state behavior of the class is examined

integration applies three different strategies

    • thread-based testing—integrates the set of classes required to respond to one input or event
    • use-based testing—integrates the set of classes required to respond to one use case
    • cluster testing—integrates the set of classes required to demonstrate one collaboration
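Class testing, the OO equivalent of unit testing, can be sketched as follows: every operation is exercised, and the state behavior of the object is checked after each one. The `Stack` class here is a hypothetical unit under test:

```python
class Stack:
    """Hypothetical class under test."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty Stack")
        return self._items.pop()
    def is_empty(self):
        return not self._items

# Class test: exercise each operation and examine the state behavior
s = Stack()
assert s.is_empty()          # initial state: empty
s.push(1); s.push(2)
assert not s.is_empty()      # push changed the state
assert s.pop() == 2          # operations honor LIFO order
assert s.pop() == 1
assert s.is_empty()          # object returns to the empty state
```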

 

 

Validation Testing

Acceptance Testing

System Testing

Debugging

Bug Removal Considerations

 

Chapter 14: Software Testing Techniques

Testability

•  Operability—it operates cleanly

•  Observability—the results of each test case are readily observed

•  Controllability—the degree to which testing can be automated and optimized

•  Decomposability—testing can be targeted

•  Simplicity—reduce complex architecture and logic to simplify tests

•  Stability—few changes are requested during testing

•  Understandability—of the design

What is a “Good” Test?

•  A good test has a high probability of finding an error

•  A good test is not redundant

•  A good test should be "best of breed"

•  A good test should be neither too simple nor too complex

Exhaustive Testing


 

Selective Testing


 

 

White-Box Testing

Testing from the inside: tests that exercise the actual program structure.

 

Basis path testing

 Test every statement in the program at least once

 

Loop tests

Exercise each DO, WHILE, FOR, or other repeating statement several times.
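A common test set for a simple loop probes it at zero, one, two, and the maximum number of passes. A sketch against a hypothetical unit under test:

```python
def sum_first(values, n):
    """Hypothetical unit under test: sum the first n elements with a loop."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = [3, 1, 4, 1, 5]
assert sum_first(data, 0) == 0      # loop body skipped entirely
assert sum_first(data, 1) == 3      # exactly one pass
assert sum_first(data, 2) == 4      # two passes
assert sum_first(data, 5) == 14     # maximum number of passes
```

Off-by-one errors and improperly modified loop variables typically surface at the zero-pass and maximum-pass cases.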

 

Input tests

•  Each procedure should be tested to make certain that the procedure actually received the data sent to it.

•  Finds type mismatches, bad pointers, and other such bugs (these are common!)
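One way to make such mismatches testable is for the procedure to validate its inputs on entry. A sketch with a hypothetical procedure and an input test that confirms a type mismatch is actually detected:

```python
def schedule_job(name, priority):
    """Hypothetical procedure that validates the data it receives."""
    if not isinstance(name, str):
        raise TypeError("name must be a str, got %s" % type(name).__name__)
    if not isinstance(priority, int) or isinstance(priority, bool):
        raise TypeError("priority must be an int")
    return (name, priority)

# Input tests: good data is accepted, mismatched types are rejected
assert schedule_job("backup", 5) == ("backup", 5)
try:
    schedule_job(42, 5)             # type mismatch: name is not a string
except TypeError:
    pass                            # the mismatch was caught, as intended
else:
    raise AssertionError("type mismatch was not detected")
```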

 

 

Graph Matrices

•  A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on a flow graph

•  Each row and column corresponds to an identified node, and matrix entries correspond to connections (an edge) between nodes.

•  By adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating program control structure during testing
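A graph matrix for a small, made-up flow graph can be built and used directly. With a link weight of 1 recording each edge, the edge and node counts give the cyclomatic complexity of a connected flow graph, V(G) = E − N + 2:

```python
# Hypothetical 4-node flow graph: node 1 branches to nodes 2 and 3,
# and both paths rejoin at node 4.  matrix[i][j] == 1 records an edge
# (a connection with link weight 1) from node i+1 to node j+1.
matrix = [
    [0, 1, 1, 0],   # node 1 -> nodes 2 and 3
    [0, 0, 0, 1],   # node 2 -> node 4
    [0, 0, 0, 1],   # node 3 -> node 4
    [0, 0, 0, 0],   # node 4 (exit)
]

nodes = len(matrix)
edges = sum(sum(row) for row in matrix)
v_g = edges - nodes + 2    # cyclomatic complexity of a connected flow graph
print(v_g)                 # 2 independent paths: 1-2-4 and 1-3-4
```

V(G) = 2 here, so a basis path test set for this graph needs two test cases, one per independent path.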

Control Structure Testing

•  Condition testing — a test case design method that exercises the logical conditions contained in a program module

•  Data flow testing — selects test paths of a program according to the locations of definitions and uses of variables in the program

 

Black-Box Testing

•  How is functional validity tested?

•  How are system behavior and performance tested?

•  What classes of input will make good test cases?

•  Is the system particularly sensitive to certain input values?

•  How are the boundaries of a data class isolated?

•  What data rates and data volume can the system tolerate?

•  What effect will specific combinations of data have on system operation?
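The boundary question above is usually answered by probing both sides of each boundary of a valid input class. A sketch against a hypothetical unit under test:

```python
def is_valid_percentage(value):
    """Hypothetical unit under test: accepts integers in the range 0..100."""
    return isinstance(value, int) and 0 <= value <= 100

# Boundary value analysis: test just below, on, and just above each boundary
assert not is_valid_percentage(-1)    # just below the lower boundary
assert is_valid_percentage(0)         # on the lower boundary
assert is_valid_percentage(100)       # on the upper boundary
assert not is_valid_percentage(101)   # just above the upper boundary
```

A single interior value (say 50) then represents the entire valid equivalence class, so six test cases cover what exhaustive testing would need 2**31 inputs to cover.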

 

Comparison Testing

•  Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems)

•  Separate software engineering teams develop independent versions of an application using the same specification

•  Each version can be tested with the same test data to ensure that all provide identical output

•  Then all versions are executed in parallel with real-time comparison of results to ensure consistency
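The comparison step can be sketched in miniature: here a hand-written Newton's-method square root (a stand-in for an independently developed "version 2") is run on the same test data as the library version, and the outputs are compared:

```python
import math

def sqrt_newton(x, eps=1e-12):
    """Hypothetical independently developed version of square root."""
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    while abs(guess * guess - x) > eps * x:
        guess = (guess + x / guess) / 2.0   # Newton-Raphson update
    return guess

# Comparison test: both versions receive the same test data;
# any disagreement signals a defect in one of them.
for x in (0, 1, 2, 9, 0.25, 10000):
    assert math.isclose(sqrt_newton(x), math.sqrt(x), rel_tol=1e-9)
```

Note the technique's known blind spot: if both versions share the same specification error, they agree and the comparison reveals nothing.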

 

Orthogonal Array Testing

•  Used when the number of input parameters is small and the values that each of the parameters may take are clearly bounded
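For example, with three two-level parameters (made-up names), an L4 orthogonal array covers every pairwise combination of levels in 4 test runs instead of the 2**3 = 8 runs exhaustive testing would need. A sketch that verifies the covering property:

```python
from itertools import combinations, product

# L4(2^3) orthogonal array: 4 runs over three two-level parameters
runs = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Defining property: every pair of parameters takes on every
# combination of levels somewhere in the array.
for i, j in combinations(range(3), 2):
    seen = {(run[i], run[j]) for run in runs}
    assert seen == set(product((0, 1), repeat=2))

print("all pairwise level combinations covered by", len(runs), "runs")
```

The savings grow quickly: pairwise coverage of many multi-level parameters needs far fewer runs than the full Cartesian product.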

OOT—Test Case Design

1.  Each test case should be uniquely identified and should be explicitly associated with the class to be tested.

2.  The purpose of the test should be stated.

3.  A list of testing steps should be developed for each test and should contain [BER94]:

    a.  a list of specified states for the object that is to be tested

    b.  a list of messages and operations that will be exercised as a consequence of the test

    c.  a list of exceptions that may occur as the object is tested

    d.  a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)

    e.  supplementary information that will aid in understanding or implementing the test.

 

Testing Methods

•  Fault-based testing

    • The tester looks for plausible faults (i.e., aspects of the implementation of the system that may result in defects). To determine whether these faults exist, test cases are designed to exercise the design or code.

•  Class Testing and the Class Hierarchy

    • Inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process.

•  Scenario-Based Test Design

    • Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as tests.

 

 

OOT Methods

Random testing

•  identify operations applicable to a class

•  define constraints on their use

•  identify a minimum test sequence

    • an operation sequence that defines the minimum life history of the class (object)

•  generate a variety of random (but valid) test sequences

    • exercise other (more complex) class instance life histories
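The steps above can be sketched with a hypothetical `Account` class whose minimum life history is open–deposit–withdraw–close; random (but valid) deposit/withdraw pairs are then inserted to exercise more complex life histories:

```python
import random

class Account:
    """Hypothetical class under test."""
    def __init__(self):
        self.is_open = False
        self.balance = 0
    def open(self):
        self.is_open = True
    def deposit(self, amount):
        assert self.is_open                      # constraint on use
        self.balance += amount
    def withdraw(self, amount):
        assert self.is_open and amount <= self.balance
        self.balance -= amount
    def close(self):
        assert self.balance == 0                 # constraint on use
        self.is_open = False

def random_life_history(seed):
    """Minimum sequence open-deposit-withdraw-close, padded with
    randomly generated (but valid) deposit/withdraw pairs."""
    rng = random.Random(seed)
    a = Account()
    a.open()
    a.deposit(100)
    for _ in range(rng.randint(0, 5)):           # random valid operations
        amount = rng.randint(1, 50)
        a.deposit(amount)
        a.withdraw(amount)
    a.withdraw(100)
    a.close()
    return a

for seed in range(10):                           # a variety of random sequences
    assert not random_life_history(seed).is_open
```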

 

Partition Testing

•  reduces the number of test cases required to test a class in much the same way as equivalence partitioning for conventional software

•  state-based partitioning

    • categorize and test operations based on their ability to change the state of a class

•  attribute-based partitioning

    • categorize and test operations based on the attributes that they use

•  category-based partitioning

    • categorize and test operations based on the generic function each performs

 

Inter-class testing

•  For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other server classes.

•  For each message that is generated, determine the collaborator class and the corresponding operator in the server object.

•  For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.

•  For each of the messages, determine the next level of operators that are invoked and incorporate these into the test sequence.