Wednesday, November 21, 2007

Testing Terminology - 4

Sanity Testing: Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.

Scalability Testing: A subtype of performance testing in which performance requirements for response time, throughput, and/or utilization are tested as the load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling]

Sensitive Test: A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]

Smoke Test: Describes an initial set of tests that determine if a new version of an application performs well enough for further testing. [Louise Tamres, 2002]

Specification-based test: A test whose inputs are derived from a specification.

Spike testing: Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; considered a type of load test. [Load Testing Terminology by Scott Stirling]

State-based testing: Testing with test cases developed by modeling the system under test as a state machine.

State Transition Testing: Technique in which the states of a system are first identified and then test cases are written to test the triggers that cause a transition from one state to another. [William E. Lewis, 2000]
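
For illustration, a minimal Python sketch of state transition testing, using a hypothetical two-state turnstile (the class, its states, and its triggers are invented for this example, not taken from the cited source):

# Hypothetical turnstile: states LOCKED/UNLOCKED, triggers coin/push.
import unittest

class Turnstile:
    def __init__(self):
        self.state = "LOCKED"
    def coin(self):
        self.state = "UNLOCKED"   # coin unlocks a locked turnstile
    def push(self):
        self.state = "LOCKED"     # pushing through relocks it

class TurnstileTransitionTest(unittest.TestCase):
    def test_coin_unlocks(self):
        t = Turnstile()
        t.coin()                              # trigger: coin
        self.assertEqual(t.state, "UNLOCKED")
    def test_push_relocks(self):
        t = Turnstile()
        t.coin()
        t.push()                              # trigger: push
        self.assertEqual(t.state, "LOCKED")

if __name__ == "__main__":
    unittest.main()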

Static testing: Analysis of source code, without executing it, to expose potential defects.

Statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. [BCS]
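
As a rough sketch, test inputs can be drawn from an assumed distribution; here the 80/20 split between short and long strings and the property being checked are both hypothetical:

import random

def generate_inputs(n, seed=0):
    rng = random.Random(seed)
    inputs = []
    for _ in range(n):
        if rng.random() < 0.8:            # assumed: 80% short inputs
            length = rng.randint(0, 10)
        else:                             # assumed: 20% long inputs
            length = rng.randint(100, 1000)
        inputs.append("x" * length)
    return inputs

# Each generated input exercises the (illustrative) property under test.
for s in generate_inputs(5):
    assert s == s.strip()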

Stealth bug: A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]

Storage test: Studies how the program uses memory and space, whether in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p. 55]

Stress/Load/Volume test: Tests that subject the system to a high degree of activity, for example by using boundary conditions as inputs or by executing multiple copies of a program in parallel.

Structural testing: (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, and statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.

System testing: Black-box testing based on overall requirements specifications; covers all combined parts of a system.

Table Testing: Tests access, security, and data integrity of table entries. [William E. Lewis, 2000]

Test Bed: An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [IEEE 610]

Test Case: A set of test inputs, execution conditions, and expected results developed for a particular objective.
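
For example, a test case can be written down as data; the add() function standing in as the test item here is hypothetical:

def add(a, b):                 # the test item (illustrative)
    return a + b

test_case = {
    "objective": "add returns the arithmetic sum",
    "inputs": (2, 3),
    "expected": 5,
}

actual = add(*test_case["inputs"])          # execution
assert actual == test_case["expected"]      # compare with expected result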

Test conditions: The set of circumstances that a test invokes. [Daniel J. Mosley, 2002]

Test coverage: The degree to which a given test or set of tests addresses all specified test cases for a given system or component.

Test Criteria: Decision rules used to determine whether a software item or software feature passes or fails a test.

Test data: The actual (set of) values used in the test, or that are necessary to execute the test. [Daniel J. Mosley, 2002]

Test documentation: (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, and test report.

Test Driver: A software module or application used to invoke a test item and, often, to provide test inputs (data) and to control and monitor execution. A test driver automates the execution of test procedures.
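
A minimal sketch of a driver, again using a hypothetical add() as the test item; the driver invokes the item with each set of inputs, compares against the expected result, and reports pass/fail:

def add(a, b):                 # the test item (illustrative)
    return a + b

cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

for inputs, expected in cases:
    actual = add(*inputs)                   # invoke the test item
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: add{inputs} -> {actual} (expected {expected})")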

Test Harness: A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.

Test Item: A software item that is the object of testing.

Test log: A chronological record of all relevant details about the execution of a test.

Test plan: A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organizes elements of the test life cycle, including resource requirements, project schedule, and test requirements.

Test procedure: A document providing detailed instructions for the [manual] execution of one or more test cases; often called a manual test script. [BS 7925-1]

Test strategy: Describes the general approach and objectives of the test activities. [Daniel J. Mosley, 2002]

Test status: The assessment of the result of running tests on software.

Test stub: A dummy software component or object used (during development and testing) to simulate the behavior of a real component. The stub typically provides test output.
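
For example, a stub can stand in for a dependency that is unavailable during testing; the PaymentGateway interface below is hypothetical:

class PaymentGatewayStub:
    def charge(self, amount):
        # Canned response in place of a real gateway call.
        return {"status": "approved", "amount": amount}

def checkout(gateway, amount):
    # Code under test; in production it would receive the live gateway.
    return gateway.charge(amount)["status"] == "approved"

assert checkout(PaymentGatewayStub(), 9.99)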

Test suites: A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.
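
A minimal sketch using Python's unittest, where two illustrative test case classes are combined into one suite and run together:

import unittest

class ParsingTests(unittest.TestCase):
    def test_int(self):
        self.assertEqual(int("42"), 42)

class FormattingTests(unittest.TestCase):
    def test_pad(self):
        self.assertEqual("7".zfill(3), "007")

suite = unittest.TestSuite()
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(ParsingTests))
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(FormattingTests))
unittest.TextTestRunner().run(suite)    # the runner acts as a simple harness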

Test tree: A physical implementation of a test suite. [Dorothy Graham, 1999]

Testability: Attributes of software that bear on the effort needed for validating the modified software. [ISO 8402]

Testing: The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.

Unit testing: Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Often, the detailed design and requirements documents are used as a basis for assessing how and what the unit is able to perform. White-box and black-box testing methods are combined during unit testing.
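
As a sketch, take a hypothetical clamp() unit: the assertions come from its stated requirement (black box), while the three cases are chosen to drive each internal branch (white box):

import unittest

def clamp(x, lo, hi):          # the unit under test (illustrative)
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

class ClampTests(unittest.TestCase):
    def test_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)    # branch: x < lo
    def test_above_range(self):
        self.assertEqual(clamp(15, 0, 10), 10)   # branch: x > hi
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)     # fall-through branch

if __name__ == "__main__":
    unittest.main()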

Usability testing: Testing for user-friendliness. Clearly this is subjective and will depend on the targeted end-user or customer.

Validation: The comparison between the actual characteristics of something (e.g., the products of a software project) and the expected characteristics. Validation is checking that you have built the right system.

Verification: The comparison between the actual characteristics of something (e.g., the products of a software project) and the specified characteristics. Verification is checking that we have built the system right.

Volume testing: Testing where the system is subjected to large volumes of data.

Walkthrough: In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White box testing (glass-box): Testing done under a structural testing strategy; requires complete access to the object's structure, that is, the source code.
