Wednesday, November 21, 2007

Testing Terminology - 2

Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The causal trail: a person makes an error that causes a defect, which in turn causes a failure. [Robert M. Poston, 1996]
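
To make the causal trail concrete, here is a minimal, hypothetical sketch: the programmer's error (the wrong operator) introduces a defect in the code, and executing that code produces an observable failure.

    # error -> defect -> failure, in miniature
    def area(width, height):
        return width + height   # defect: the error was typing + instead of *

    expected = 12
    actual = area(3, 4)
    if actual != expected:
        # the failure: observed behavior deviates from expectations
        print(f"FAIL: expected {expected}, got {actual}")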

Follow-up testing: We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances. [Measuring the Effectiveness of Software Testers, Cem Kaner, STAR East 2003]
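
A hedged sketch of the idea: having seen a mild failure for one input, we vary the data to probe whether the underlying fault can produce a more serious failure. parse_amount() is a hypothetical function under test, and the specific inputs are illustrative only.

    def follow_up_tests(parse_amount):
        baseline = "1,000"            # input that yielded the original failure
        variations = [
            "1,000,000",              # larger data
            "-1,000",                 # different operation: negative values
            "1,000.50",               # mixed separators
            "",                       # boundary: empty input
        ]
        for value in [baseline] + variations:
            try:
                print(value, "->", parse_amount(value))
            except Exception as exc:  # a crash is a more serious failure
                print(value, "-> raised", type(exc).__name__)

    # example run against a naive parser
    follow_up_tests(lambda s: float(s.replace(",", "")))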

Formal Testing (IEEE): Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

Free Form Testing: Ad hoc testing or brainstorming, using intuition to define test cases. [William E. Lewis, 2000]

Functional Decomposition Approach: An automation method in which the test cases are reduced to fundamental tasks: navigation, functional tests, data verification, and return navigation; also known as the Framework-Driven Approach. [Daniel J. Mosley, 2002]
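
A sketch of what that decomposition might look like in script form: each test case is expressed as a sequence of reusable fundamental tasks. The app object and its methods are hypothetical placeholders for whatever UI-automation driver is actually in use.

    def navigate_to(app, screen):
        app.open(screen)                    # navigation task

    def perform_function(app, action, data):
        app.run(action, data)               # functional test task

    def verify_data(app, field, expected):
        assert app.read(field) == expected  # data verification task

    def navigate_back(app):
        app.open("home")                    # return navigation task

    def test_update_address(app):
        navigate_to(app, "profile")
        perform_function(app, "update_address", {"city": "Austin"})
        verify_data(app, "city", "Austin")
        navigate_back(app)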

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.
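
A small black-box sketch: the cases come from an assumed specified requirement ("shipping is free on orders of $50 or more, otherwise $5"), not from the code's structure. shipping_cost() stands in for whatever implementation is under test.

    def check_shipping(shipping_cost):
        spec_cases = [
            (49.99, 5.00),   # just below the boundary
            (50.00, 0.00),   # at the boundary
            (100.00, 0.00),  # well above
        ]
        for order_total, expected in spec_cases:
            assert shipping_cost(order_total) == expected

    # example run against a candidate implementation
    check_shipping(lambda total: 0.00 if total >= 50 else 5.00)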

Gray box Testing: Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of view of the tester. [Cem Kaner]

Gray box Testing: Tests designed based on knowledge of algorithms, internal states, architectures, or other high-level descriptions of program behavior. [Doug Hoffman]

Gray box Testing: Examines the activity of back-end components during test case execution. Two types of problems can be encountered during gray-box testing: (1) a component encounters a failure of some kind, causing the operation to be aborted; the user interface will typically indicate that an error occurred; (2) the test executes in full, but the results are incorrect; somewhere in the system, a component processed data incorrectly, causing the error in the results. [Elfriede Dustin, “Quality Web Systems: Performance, Security & Usability”]
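
A gray-box sketch matching the two problem types above: the test drives the application through its public interface, then inspects the back-end store directly. sqlite3 serves as a stand-in database; create_user() is a hypothetical function under test.

    import sqlite3

    def test_create_user(create_user):
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE users (name TEXT, email TEXT)")

        create_user(db, "alice", "alice@example.com")  # front-end operation

        # back-end verification: did the component write correct data?
        row = db.execute("SELECT name, email FROM users").fetchone()
        assert row is not None, "operation aborted: no row written"
        assert row == ("alice", "alice@example.com"), "incorrect results"

    def create_user(db, name, email):   # stand-in implementation
        db.execute("INSERT INTO users VALUES (?, ?)", (name, email))

    test_create_user(create_user)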

High-level tests: These tests involve testing whole, complete products. [Kit, 1995]

Inspection: A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration: The process of combining software components, hardware components, or both into an overall system.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration Testing: Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.
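
A minimal bottom-up sketch: the low-level module is unit-tested first, then combined with the module above it, so interface faults surface early rather than in one big-bang step. Both modules are hypothetical stand-ins.

    def tax(amount):                 # low-level module
        return round(amount * 0.08, 2)

    def invoice_total(items):        # higher-level module that calls tax()
        subtotal = sum(items)
        return subtotal + tax(subtotal)

    # Step 1: unit test the lower module in isolation.
    assert tax(100.0) == 8.0

    # Step 2: integrate upward and test the interaction between modules.
    assert invoice_total([60.0, 40.0]) == 108.0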

Interface Tests: Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that may not currently be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control; simulation can therefore provide the characteristics or behaviors of a specific function.
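
A sketch of that kind of simulation: the real disk-usage call is replaced with a fake so the "disk nearly full" path can be exercised deterministically. free_bytes() is a hypothetical wrapper around the platform call.

    from unittest import mock
    import shutil

    def free_bytes(path="/"):
        return shutil.disk_usage(path).free

    def can_save(size, free=free_bytes):
        return free() >= size

    # Simulate a nearly full disk without actually filling one.
    fake_free = mock.Mock(return_value=1024)
    assert can_save(512, free=fake_free) is True
    assert can_save(4096, free=fake_free) is False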

Internationalization Testing (I18N): Testing related to handling foreign text and data within the program. This includes sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper- and lower-case handling, and so forth. [Clinton De Young, 2003]
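
A small I18N sketch covering two of the areas above: locale-aware sorting and case handling of non-ASCII text. Whether a given locale is installed varies by system, so the locale name here is an assumption and the call is guarded.

    import locale

    assert "straße".upper() == "STRASSE"   # German sharp s expands on upcasing

    try:
        locale.setlocale(locale.LC_COLLATE, "de_DE.UTF-8")  # assumed locale
        words = sorted(["zebra", "Äpfel", "apfel"], key=locale.strxfrm)
        print(words)  # expect Äpfel to sort near apfel, not after zebra
    except locale.Error:
        print("locale not installed; sorting check skipped")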

Interoperability Testing: Measures the ability of your software to communicate across the network with multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

Interoperability Testing: True interoperability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, interoperability testing labor equals all other testing combined. This is the kind of testing that I say shouldn’t be done because it can’t be done. [from “Quality Is Not the Goal” by Boris Beizer, Ph.D.]

Latent bug: A bug that has been dormant (unobserved) in two or more releases. [R. V. Binder, 1999]

Lateral Testing: A test design technique based on lateral thinking principles to identify faults. [Dorothy Graham, 1999]

Load Testing: Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
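
A minimal load-test sketch: issue waves of concurrent requests and watch how response time changes as the load increases. The URL is a hypothetical placeholder for the endpoint under test.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # assumed endpoint under test

    def timed_request(_):
        start = time.perf_counter()
        urllib.request.urlopen(URL, timeout=10).read()
        return time.perf_counter() - start

    for users in (1, 10, 50):        # increasing load levels
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = list(pool.map(timed_request, range(users)))
        print(f"{users:3d} users: avg {sum(times)/len(times):.3f}s, "
              f"max {max(times):.3f}s")

Running the same workload continuously for hours rather than seconds would turn this into the load-stability test described in the next entry.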

Load-stability test: A test designed to determine whether a web application will remain serviceable over an extended time span.

Load & Isolation test: The workload for this type of test is designed to contain only the subset of test cases that caused problems in previous testing.
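
A sketch of building such a workload: only the cases that failed in the previous run are replayed, concentrating load on the scenarios that exposed the problem. previous_results is illustrative data.

    previous_results = {
        "login": "pass",
        "checkout": "fail",      # the problem case from earlier testing
        "search": "pass",
        "checkout_retry": "fail",
    }

    isolation_workload = [name for name, outcome in previous_results.items()
                          if outcome == "fail"]
    print("replaying under load:", isolation_workload)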
