Wednesday, November 21, 2007

Testing Terminology - 3

Monkey Testing (smart monkey testing): Inputs are generated from probability distributions that reflect actual expected usage statistics – e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs: if a given test requires an input vector with five components, low-IQ testing generates the five values independently. In higher-IQ monkey testing, the correlation (e.g., the covariance) between the input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.
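
For illustration, a minimal Python sketch of the idea, assuming a hypothetical two-field input (session length and request rate) whose usage statistics would come from a user profile; the correlated variant only illustrates taking covariance into account, it is not a prescribed algorithm:

    import random

    # Hypothetical usage profile: mean and std dev assumed from user statistics.
    SESSION_MINUTES = (30.0, 10.0)
    REQUESTS_PER_MIN = (4.0, 1.5)

    def low_iq_input():
        """Each field is drawn independently from its own distribution."""
        minutes = max(1.0, random.gauss(*SESSION_MINUTES))
        rate = max(0.1, random.gauss(*REQUESTS_PER_MIN))
        return minutes, rate

    def higher_iq_input(correlation=0.7):
        """The request rate is partly driven by the session length,
        modelling the correlation between the two inputs."""
        minutes = max(1.0, random.gauss(*SESSION_MINUTES))
        independent = random.gauss(*REQUESTS_PER_MIN)
        driven = REQUESTS_PER_MIN[0] * (minutes / SESSION_MINUTES[0])
        rate = max(0.1, (1 - correlation) * independent + correlation * driven)
        return minutes, rate

    if __name__ == "__main__":
        print("low IQ:", low_iq_input())
        print("higher IQ:", higher_iq_input())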

Maximum Simultaneous Connection Testing: A test performed to determine the maximum number of simultaneous connections that the firewall or Web server is capable of handling.
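
A rough sketch of how the client side of such a test might be driven, assuming a hypothetical server at localhost:8080; a real test would also observe server-side behaviour, not just the client-side count:

    import socket

    def max_connections(host="localhost", port=8080, limit=10000):
        """Open TCP connections until the server refuses or the limit is hit;
        returns how many were held open simultaneously."""
        conns = []
        try:
            for _ in range(limit):
                s = socket.create_connection((host, port), timeout=5)
                conns.append(s)
        except OSError:
            pass  # connection refused, reset, timed out, or local limit reached
        finally:
            count = len(conns)
            for s in conns:
                s.close()
        return count

    if __name__ == "__main__":
        print("simultaneous connections handled:", max_connections())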

Mutation Testing: A testing strategy in which a small variation (a mutant) is inserted into a program, followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is retired; if it goes undetected, the test suite must be revised. [R. V. Binder, 1999]
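
A small Python illustration of the cycle: a hypothetical function, one hand-made mutant, and a suite that kills the mutant only because it includes the boundary case:

    def is_adult(age):            # code under test
        return age >= 18

    def is_adult_mutant(age):     # mutant: ">=" replaced with ">"
        return age > 18

    def run_suite(fn):
        """The existing test suite: returns True when every assertion passes."""
        try:
            assert fn(17) is False
            assert fn(30) is True
            # Boundary case: without this assertion the mutant would survive,
            # signalling that the suite needs to be strengthened.
            assert fn(18) is True
        except AssertionError:
            return False
        return True

    if __name__ == "__main__":
        print("original passes:", run_suite(is_adult))
        print("mutant detected:", not run_suite(is_adult_mutant))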

Multiple Condition Coverage: A test coverage criterion that requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. [G. Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.
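
As a sketch, a decision with two conditions needs all four combinations of condition outcomes; the function and values below are hypothetical:

    from itertools import product

    def grant_discount(is_member, order_total):
        # Decision with two conditions: both must hold for the discount.
        if is_member and order_total > 100:
            return True
        return False

    # Multiple condition coverage: every combination of the two condition
    # outcomes (True/False x True/False) must be exercised at least once.
    cases = {
        (True,  True):  (True,  150),   # member,     total > 100
        (True,  False): (True,   50),   # member,     total <= 100
        (False, True):  (False, 150),   # non-member, total > 100
        (False, False): (False,  50),   # non-member, total <= 100
    }
    assert set(cases) == set(product([True, False], repeat=2))  # nothing missed

    for outcomes, (member, total) in cases.items():
        assert grant_discount(member, total) == all(outcomes), outcomes
    print("all", len(cases), "condition-outcome combinations exercised")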

Negative Test: A test whose primary purpose is falsification; that is, a test designed to break the software. [B. Beizer, 1995]
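
A small example of negative tests, assuming pytest is available; the parse_age function and its invalid inputs are made up for illustration:

    import pytest

    def parse_age(text):
        """Code under test: accepts only non-negative integer strings."""
        value = int(text)          # raises ValueError on non-numeric input
        if value < 0:
            raise ValueError("age cannot be negative")
        return value

    # Negative tests: each case tries to break the function with invalid input
    # and passes only if the function rejects it.
    @pytest.mark.parametrize("bad_input", ["", "abc", "-1", "12.5"])
    def test_parse_age_rejects_invalid_input(bad_input):
        with pytest.raises(ValueError):
            parse_age(bad_input)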

Orthogonal Array Testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: orthogonal arrays were first introduced by Plackett and Burman in 1946 and were implemented by G. Taguchi in 1987.

Orthogonal Array Testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]
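
For example, three two-level parameters would need 2**3 = 8 exhaustive combinations, but the standard L4 orthogonal array covers every pairwise combination in 4 test cases; the parameter names below are hypothetical:

    # L4(2^3) orthogonal array: 4 runs cover all pairwise combinations of
    # three two-level factors.
    FACTORS = {
        "browser": ["Firefox", "IE"],
        "os":      ["Windows", "Linux"],
        "proxy":   ["on", "off"],
    }

    L4 = [  # each row is one test case; columns are factor levels (0 or 1)
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]

    names = list(FACTORS)
    for row in L4:
        case = {name: FACTORS[name][level] for name, level in zip(names, row)}
        print(case)

    # Property of the array: every pair of columns contains each of the four
    # level combinations (0,0), (0,1), (1,0), (1,1) exactly once.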

Oracle (Test Oracle): A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [from BS7925-1]
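
A minimal sketch: here a slow but trusted insertion sort plays the oracle for a sort implementation under test (both functions are stand-ins):

    def sort_under_test(items):
        """The implementation being tested (stand-in for the real system)."""
        return sorted(items)

    def oracle(items):
        """Oracle: a trusted, if slow, way to produce the predicted outcome.
        A naive insertion sort acts as the reference here."""
        result = []
        for x in items:
            i = 0
            while i < len(result) and result[i] <= x:
                i += 1
            result.insert(i, x)
        return result

    inputs = [[3, 1, 2], [], [5, 5, 1], [-2, 0, 7, 3]]
    for data in inputs:
        assert sort_under_test(list(data)) == oracle(list(data)), data
    print("actual outcomes match the oracle's predicted outcomes")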

Parallel Testing: Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run. [ISO]

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [BS7925-1]

Performance Testing: Can be undertaken to: 1) show that the system meets specified performance objectives, 2) tune the system, 3) determine the factors in hardware or software that limit the system’s performance, and 4) project the system’s future load-handling capacity in order to schedule its replacement. [Software System Testing and Quality Assurance, Beizer, 1984, p. 256]
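
A sketch of objective 1), checking performance against a specified response-time goal; the 5 ms objective, repetition count and workload below are illustrative assumptions:

    import time

    def test_response_time(operation, repetitions=1000, objective_ms=5.0):
        """Measure the operation and report whether it meets the
        (assumed) response-time objective."""
        timings = []
        for _ in range(repetitions):
            start = time.perf_counter()
            operation()
            timings.append((time.perf_counter() - start) * 1000.0)
        worst = max(timings)
        average = sum(timings) / len(timings)
        print(f"avg {average:.3f} ms, worst {worst:.3f} ms (objective {objective_ms} ms)")
        return worst <= objective_ms

    if __name__ == "__main__":
        workload = lambda: sum(i * i for i in range(10_000))  # stand-in for the system call
        print("objective met:", test_response_time(workload))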

Prior Defect History Testing: Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]

Qualification Testing (IEEE): Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.

Quality: The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.

Quality Assurance (QA): Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).

Quality Control (QC): Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.

Our Definition of Quality: Achieving the target (not conformance to requirements, as used by many authors) and minimizing the variability of the system under test.

Race Condition Defect: Many concurrency defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
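
A minimal Python sketch of the lost-update pattern described above; the sleep(0) call is only there to make the unsynchronised interleaving easy to reproduce:

    import threading, time

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        """Read-modify-write with no synchronisation: two threads can read
        the same value and one update is lost (a data race on counter)."""
        global counter
        for _ in range(n):
            tmp = counter
            time.sleep(0)        # yield, inviting the other thread to interleave
            counter = tmp + 1

    def safe_increment(n):
        """The lock prevents simultaneous access, removing the race."""
        global counter
        for _ in range(n):
            with lock:
                counter += 1

    def run(worker, n=1000):
        global counter
        counter = 0
        threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print("unsynchronised:", run(unsafe_increment), "(expected 2000, updates lost)")
    print("with lock:     ", run(safe_increment), "(expected 2000)")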

Recovery Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing: Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.

Regression Testing: Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program. [Glenford J. Myers, 1979]

Reengineering: The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).

Reference Testing: A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]

Reliability Testing: Verifies the probability of failure-free operation of a computer program in a specified environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified conditions over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of time t: the probability that the object will not fail within time t.
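
As a worked illustration, under the common exponential model (an assumption, not part of the definition above) with a constant failure rate λ, R(t) = exp(-λt):

    import math

    failure_rate = 0.002   # assumed constant failure rate, failures per hour

    def reliability(t_hours, lam=failure_rate):
        """R(t) = exp(-lam * t): probability of no failure within t hours,
        under the assumed exponential model."""
        return math.exp(-lam * t_hours)

    for t in (10, 100, 500):
        print(f"R({t} h) = {reliability(t):.3f}")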

Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in – the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed – the software will appear to work properly. [Professor Dick Hamlet, Ph.D.]

Range Testing: For each input, identify the range over which the system behavior should be the same. [William E. Lewis, 2000]
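
A small sketch, assuming a hypothetical fee function with two weight ranges; each range is probed at its edges and in its interior, and the behaviour is expected to be uniform within each range:

    def shipping_fee(weight_kg):
        """Code under test (hypothetical rules): flat fee per weight band."""
        if 0 < weight_kg <= 1:
            return 5
        if 1 < weight_kg <= 10:
            return 9
        raise ValueError("unsupported weight")

    # Range testing: within each identified range the behaviour should be
    # the same, so probe the interior and both edges of every range.
    ranges = [
        ((0.1, 0.5, 1.0), 5),    # light-parcel range
        ((1.1, 5.0, 10.0), 9),   # heavy-parcel range
    ]
    for weights, expected_fee in ranges:
        for w in weights:
            assert shipping_fee(w) == expected_fee, (w, shipping_fee(w))
    print("behaviour is uniform across each input range")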

Risk Management: An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.

Robust Test: A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]
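
A sketch of the difference, with a made-up create_user function whose record contains fields that legitimately vary between runs:

    import time

    def create_user(name):
        """System under test (stand-in): returns a record with several fields
        that may legitimately change between runs."""
        return {"name": name, "id": 4321, "created": time.time(), "quota_mb": 500}

    def fragile_test():
        # Compares the whole record: fails whenever any unrelated field changes.
        expected = {"name": "ada", "id": 4321, "created": 0.0, "quota_mb": 500}
        return create_user("ada") == expected

    def robust_test():
        # Compares only the information the test is actually about.
        return create_user("ada")["name"] == "ada"

    print("fragile:", fragile_test())   # False, broken by an unrelated field
    print("robust: ", robust_test())    # True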
