Acceptance Test: Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.
Ad Hoc Testing: Testing carried out using no recognized test case design technique. [BCS]
Alpha Testing: Testing of a software product or system conducted at the developer’s site by the customer.
Assertion Testing: (NBS) A dynamic analysis technique, which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
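For illustration, a minimal sketch of assertion testing in Python; the function and the invariant it checks are invented for the example:

```python
# Assertion testing sketch: assertions about relationships between program
# variables are embedded in the code and evaluated as the program executes.
def transfer(balance_from: int, balance_to: int, amount: int) -> tuple:
    total_before = balance_from + balance_to
    assert amount >= 0, "amount must be non-negative"

    balance_from -= amount
    balance_to += amount

    # Dynamic assertion: the total amount of money is conserved.
    assert balance_from + balance_to == total_before
    return balance_from, balance_to


if __name__ == "__main__":
    print(transfer(100, 50, 30))  # assertions are checked during this run
```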
Automated Testing: Software testing that is assisted by software tools and does not require operator (tester) input, analysis, or evaluation.
Background Testing: The execution of normal functional testing while a realistic work load exercises the SUT. This workload is processed “in the background” as far as the functional testing is concerned. [Load Testing Terminology by Scott Sterling].
Bug: Glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision, defect, issue, problem.
Beta Testing: Testing conducted at one or more customer sites by the end-user of a delivered software product or system.
Benchmarks: Programs that provide performance comparisons for software, hardware, and systems.
Benchmarking: A specific type of performance test whose purpose is to determine performance baselines for comparison. [Load Testing Terminology by Scott Sterling].
Big-bang Testing: Integration testing where no incremental testing takes place prior to all the system’s components being combined to form the system.
Black-Box Testing: A testing method in which the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the externally observable behaviors of the program are evaluated and analyzed.
Boundary Value Analysis (BVA): BVA is different from equivalence partitioning in that it focuses on “corner cases,” values at or just beyond the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA attempts to derive such boundary values and is often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed successfully, using requirements specifications and user documentation.
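A minimal BVA sketch in Python, assuming a hypothetical function that accepts values from -100 to +1000 inclusive:

```python
import pytest

# Hypothetical function under test: accepts values in the range -100..1000.
def accept(value: int) -> bool:
    return -100 <= value <= 1000

# Boundary value analysis: test at, just inside, and just outside each boundary.
@pytest.mark.parametrize("value,expected", [
    (-101, False),  # just below the lower boundary
    (-100, True),   # lower boundary
    (-99,  True),   # just inside the lower boundary
    (999,  True),   # just inside the upper boundary
    (1000, True),   # upper boundary
    (1001, False),  # just above the upper boundary
])
def test_boundaries(value, expected):
    assert accept(value) == expected
```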
Breadth Test: A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail [Dorothy Graham, 1999]
Cause-Effect Graphing: (1) [NBS] A test data selection technique. The input and output domains are partitioned into classes, and analysis is performed to determine which input classes cause which effects. A minimal set of inputs is chosen which will cover the entire effect set. (2) A systematic method of generating test cases representing combinations of conditions. See: testing, functional. [G.Myers]
Clean Test: A test whose primary purpose is validation; that is, a test designed to demonstrate the software’s correct working. (Syn. positive test)
Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. (Syn: Fagan Inspection)
Code walkthrough: A manual testing [error detection] technique in which program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer’s logic and assumptions. [G.Myers/NBS] Contrast with code audit, code inspection, code review.
Coexistence Testing: Coexistence isn’t enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It’s probably an exponentially hard problem rather than a square-law problem. [From Quality Is Not The Goal, by Boris Beizer, Ph.D.]
Compatibility bug: A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code. [R.V.Binder, 1999]
Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
Composability Testing: Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, ‘Easy’ and other lies, eWEEK, April 28, 2003]
Condition Coverage: A test coverage criterion requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, multiple condition coverage, path coverage and statement coverage. [G.Myers]
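A minimal condition-coverage sketch in Python; the decision and the condition names are invented for the example:

```python
import pytest

# Illustrative decision with two conditions.
def can_ship(in_stock: bool, paid: bool) -> bool:
    if in_stock and paid:
        return True
    return False

# Condition coverage: each individual condition takes on both True and False
# at least once. Two cases suffice here; exercising all four combinations
# would be multiple condition coverage.
@pytest.mark.parametrize("in_stock,paid,expected", [
    (True,  True,  True),   # in_stock=True,  paid=True
    (False, False, False),  # in_stock=False, paid=False
])
def test_can_ship(in_stock, paid, expected):
    assert can_ship(in_stock, paid) == expected
```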
Conformance directed testing: Testing that seeks to establish conformance to requirements or specification. [R.V.Binder, 1999]
CRUD Testing: Build a CRUD matrix and test all object creations, reads, updates, and deletions. [William E.Lewis, 2000]
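A minimal CRUD-testing sketch against a hypothetical in-memory store; the class and its methods are invented for the example, and one test exercises each row of the CRUD matrix:

```python
import pytest

# Hypothetical object store used only to illustrate a CRUD test.
class Store:
    def __init__(self):
        self._items = {}

    def create(self, key, value):
        self._items[key] = value

    def read(self, key):
        return self._items[key]

    def update(self, key, value):
        if key not in self._items:
            raise KeyError(key)
        self._items[key] = value

    def delete(self, key):
        del self._items[key]


def test_crud_matrix():
    store = Store()
    store.create("id-1", "draft")                # Create
    assert store.read("id-1") == "draft"         # Read
    store.update("id-1", "published")            # Update
    assert store.read("id-1") == "published"
    store.delete("id-1")                         # Delete
    with pytest.raises(KeyError):
        store.read("id-1")
```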
Data-Driven testing: An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script.
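A minimal data-driven sketch in Python; the CSV file name, its columns, and the login function are assumptions made for the example:

```python
import csv
import pytest

# Test data lives outside the script, e.g. login_cases.csv with columns:
#   username,password,expected
#   admin,secret,ok
#   admin,wrong,fail
def load_cases(path="login_cases.csv"):
    with open(path, newline="") as handle:
        return [(row["username"], row["password"], row["expected"] == "ok")
                for row in csv.DictReader(handle)]

# Stand-in for the system under test.
def login(username: str, password: str) -> bool:
    return username == "admin" and password == "secret"

@pytest.mark.parametrize("username,password,expected", load_cases())
def test_login(username, password, expected):
    assert login(username, password) == expected
```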
Data flow testing: Testing in which test cases are designed based on variable usage within the code.
Database testing: Check the integrity of database field values.
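A small sketch of a field-integrity check using an in-memory SQLite database; the table and the integrity rule are invented for the example:

```python
import sqlite3

def test_field_integrity():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, age INTEGER)")
    conn.execute("INSERT INTO users (id, age) VALUES (1, 42)")
    rows = conn.execute("SELECT age FROM users").fetchall()
    # Integrity rule: every stored age must be a non-negative integer.
    assert all(isinstance(age, int) and age >= 0 for (age,) in rows)
```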
Defect: The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.
Defect: Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.
Depth test: A test case that exercises some part of a system to a significant level of detail.
Decision Coverage: A test coverage criterion requiring enough test cases such that each decision has a true and a false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.
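A minimal decision-coverage sketch in Python; the discount rule is invented for the example:

```python
import pytest

# One decision; decision (branch) coverage needs it to evaluate to both
# True and False at least once.
def discounted_price(price: float, is_member: bool) -> float:
    if is_member:
        return price * 0.9
    return price

@pytest.mark.parametrize("price,is_member,expected", [
    (100.0, True,  90.0),   # decision outcome: True
    (100.0, False, 100.0),  # decision outcome: False
])
def test_discounted_price(price, is_member, expected):
    assert discounted_price(price, is_member) == expected
```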
Dirty testing: Negative testing.
Dynamic testing: Testing, based on specific test cases, by execution of the test object or running programs.
End-to-End Testing: Similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Partitioning: An approach in which classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this is considered a negative test assertion or condition.
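A minimal equivalence-partitioning sketch in Python; the function and its input classes are invented for the example, with one representative value tested per class:

```python
import pytest

# Hypothetical function under test.
def parse_age(value):
    if not isinstance(value, int):
        raise TypeError("age must be an integer")
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

def test_valid_integer_class():     # representative of the valid class
    assert parse_age(30) == 30

def test_non_integer_class():       # representative of the wrong-type class
    with pytest.raises(TypeError):
        parse_age("thirty")

def test_out_of_range_class():      # representative of the out-of-range class
    with pytest.raises(ValueError):
        parse_age(200)
```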
Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests. [Robert M.Poston, 1996]
Errors: The amount by which a result is incorrect. Mistakes are usually the result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverables. Once a fault is encountered, the end result will be a program failure. The failure usually has some margin of error, high, medium, or low.
Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by an engineer.
Error guessing: A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specifically to expose them [from BS7925-1]
Error seeding: The purposeful introduction of faults into a program to test the effectiveness of a test suite or other quality assurance program [R.V.Binder, 1999]
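A minimal error-seeding sketch in Python; the function, the seeded fault, and the tiny test suite are all invented for the example:

```python
# Original implementation.
def absolute(x):
    return x if x >= 0 else -x

# Seeded fault: the negation on the negative branch was deliberately dropped.
def absolute_seeded(x):
    return x if x >= 0 else x

def run_suite(fn):
    """Return True if the whole test suite passes for the given implementation."""
    try:
        assert fn(5) == 5
        assert fn(-3) == 3   # this case detects the seeded fault
        assert fn(0) == 0
        return True
    except AssertionError:
        return False

if __name__ == "__main__":
    print("original passes:", run_suite(absolute))              # expected: True
    print("seeded fault caught:", not run_suite(absolute_seeded))  # expected: True
```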
Exception Testing: Identify error messages and exception handling processes and conditions that trigger them. [William E.Lewis, 2000]
Exhaustive Testing (NBS): Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
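A minimal exhaustive-testing sketch in Python; the function is invented for the example, and the input domain is small enough that every combination can be executed:

```python
from itertools import product

# Hypothetical function under test: majority vote of three booleans.
def majority(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or (a and c) or (b and c)

def test_exhaustive():
    # All 2**3 = 8 combinations of the input variables are executed.
    for a, b, c in product([False, True], repeat=3):
        assert majority(a, b, c) == (sum([a, b, c]) >= 2)
```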
Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply. The outcome of this test influences the design of the next test. [James Bach]