Friday, November 23, 2007

Preface

Target audience: the Quality Assurance fraternity and beginners in test automation.

Much of the content on this blog is referenced from the pioneers in the domain of Quality Assurance. The content is categorized into sections such as an overview of Quality Assurance and Quality Control, the tools used in Quality Assurance, the suppliers of those tools, and interview questions related to QA tools.

The purpose of the blog is to SHARE knowledge with my friends and with anyone working in Quality Assurance and Automation. The various online resources are collected here, through this blog, for free access.

The main sections will include:

· The details of the Tools

· Their procurement

· Interview questions related to those QA Tools

· Their online resources

Testing Tools Part - 4

Company: Rational Software Corporation
Rational Robot Rational Robot is an award-winning functional testing tool for e-commerce and e-business applications. It allows you to create, modify, and run automated functional, regression, and smoke tests on Web, ERP, and client/server applications. To ensure tests can be reused from one build to the next and across configurations, Rational Robot uses Object Testing® technology. This unique capability allows Rational Robot to go far beyond GUI testing by enabling you to test both visible and invisible object properties.

Company: Rational Software Corporation
Rational Suite TestStudio Rational Suite TestStudio, the industry's only complete end-to-end testing solution, sets the standard for ease of use and accuracy in automated reliability, functional, and performance testing. It combines a full complement of testing tools with the Rational Suite Team Unifying Platform, an integrated platform for the cross-functional team. With Rational Suite TestStudio you can deliver on time with confidence by building the highest quality software in any given timeframe.

Company: Rational Software Corporation
Rational TeamTest Rational TeamTest includes tools for automated functional and performance testing, defect tracking, and test asset management. It includes Rational TestManager, which allows the entire team to easily share test assets and reports.

Company: Rational Software Corporation
Rational Visual Test Rational Visual Test is a functional testing tool that is integrated with Microsoft Visual Studio. With Rational Visual Test your team can develop reusable, maintainable, and extensible test scripts directly from Microsoft Visual Studio.

Company: RSW Software, Inc
e-Monitor e-Monitor ensures that the application remains fully functional and continues to perform adequately under real user load. Should performance problems occur, e-Monitor automatically sends an alert (an e-mail message, a pager message, an alarm, or an SNMP trap sent to the system management tool) or restarts the Web application. With e-Monitor, 24x7 monitoring of applications has never been easier.

Company: RSW Software, Inc
e-Tester e-Tester is used for functional/regression testing and serves as the script recorder for the entire e-TEST suite. e-Tester records all the objects on every page that is visited and automatically inserts test cases to validate these objects. The components of each page are represented graphically in the Visual Script and can be masked or augmented using simple point-and-click actions.

Company: Segue Software, Inc.
SilkPilot SilkPilot is used for functional and regression testing of CORBA and EJB servers.

Company: Segue Software, Inc.
SilkTest SilkTest is used for functional and regression testing.

Company: SilverMark, Inc.
Test Mentor - Java Edition SilverMark’s Test Mentor – Java Edition is a functional test and test modeling tool for Java developers to use as they develop their Java classes, clusters, subsystems, frameworks, and other components, whether deployed on the client or the server, during unit and integration testing.

Company: SoftSell Business Systems Inc.
VersaTest VersaTest is the new product name for VPRO-G. VersaTest can be placed at any process interface, and be used to simulate that interface to any level of complexity. This permits the testing of processes and even complete systems, from unit testing through stress / performance testing to regression testing.

Test scripts used by VersaTest can be built rapidly using either capture/replay techniques, or, for greater flexibility, using more programmatic techniques. The capture facility also proves to be useful in support environments, due to the integrated real-time display and analysis features.

Company: Tallecom Software
TALC2000 TALC2000 is a powerful test automation tool for testing character-based legacy applications running on mainframe, midrange, proprietary, and Unix-based platforms.
A sophisticated PC-based tool, TALC2000 works on the principle of test capture and replay, emulating a manual tester testing host applications via a PC terminal emulator.

Company: TestQuest, Inc.
TestQuest Pro TestQuest Pro consists of a base system plus a set of modules that allow it to connect to the system under test. The wide variety of simulation and capture modules currently available makes TestQuest Pro an ideal solution for automated testing of a wide variety of devices.

Company: Vermont Creative Software
Vermont HighTest Plus Vermont HighTest Plus is a full-featured automated software testing tool that runs under Windows 95, 98, Me, NT, and 2000 and is much faster and easier to use than any other testing tool on the market. You don't need extra hardware; you don't need to buy Visual Basic; you don't even need to be a programmer to use it effectively!

Source: http://www.easy-qa.com/pages/easy-qa-tools-Functional-GUI-Testing-Tools.htm

Testing Tools Part - 3

Company: CYRANO Inc.
Test Functional regression testing and high-volume stress testing, allowing the complete enterprise-wide IT network to be tested as one unit.

Company: CYRANO Inc.
WebTester Create, maintain and execute regression testing and functional testing, load and scalability testing, and availability and reliability testing for your Web-based applications.

Company: imbus GmbH
Bug Tracking System
Easy-to-use, database-supported tool for managing error messages and change requests during software development projects.

Company: imbus GmbH
GUI Test Case Library
Powerful add-on for Mercury Interactive's WinRunner®, granting quicker and more effective test programming.

Company: Mercury Interactive
Astra® Mercury Interactive's Astra® is a suite of tools that makes Web site testing fast and simple. Its integrated components, Astra® LoadTest, Astra® QuickTest™ and Astra® SiteManager™, validate Web site content, reliability and performance to accelerate testing and optimize user experience.

Company: Mercury Interactive
WinRunner WinRunner is an integrated, functional testing tool for your entire enterprise. It captures, verifies and replays user interactions automatically, so that you can identify defects and ensure that business processes, which span across multiple applications and databases, work flawlessly the first time and remain reliable.

Company: Mercury Interactive
QTP QTP is also an integrated, functional testing tool for your entire enterprise. It captures, verifies and replays user interactions automatically, so that you can identify defects and ensure that business processes, which span across multiple applications and databases, work flawlessly the first time and remain reliable.

Company: ObjectSoftware Inc.
iTester iTester is a testing tool for testing Internet sites. iTester acts as a remote client that interacts with your application, over a web server, and analyzes the document served to it by the web server. The analysis covers:

Whether the document contains input fields, selections, hyperlinks, etc.
Whether the document contains buttons, and their types.
Based on the contents of the document, iTester builds an internal map of elements that can be tested. Based on user configurations and data, provided as input to iTester via an XML document, the test tool packages the document data and sends it for processing to the web server. The iterations continue until all the documents have been processed by iTester.
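
As a rough illustration of this style of document analysis (a sketch only, not iTester's actual implementation; the URL is a placeholder), the snippet below fetches a page with Python's standard library and builds a simple map of its testable elements:

```python
# A minimal sketch of analyzing a served document for testable elements:
# input fields, buttons, and hyperlinks (URL is a placeholder).
from html.parser import HTMLParser
from urllib.request import urlopen

class ElementMapper(HTMLParser):
    """Builds a simple map of form fields, buttons, and hyperlinks."""
    def __init__(self):
        super().__init__()
        self.elements = {"inputs": [], "buttons": [], "links": []}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input":
            kind = attrs.get("type", "text")
            bucket = "buttons" if kind in ("submit", "button") else "inputs"
            self.elements[bucket].append((attrs.get("name"), kind))
        elif tag == "select":
            self.elements["inputs"].append((attrs.get("name"), "select"))
        elif tag == "a" and "href" in attrs:
            self.elements["links"].append(attrs["href"])

html = urlopen("http://www.example.com/form.html").read().decode("utf-8", "replace")
mapper = ElementMapper()
mapper.feed(html)
print(mapper.elements)  # the internal map such a tool would iterate over
```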

Company: Qronus Interactive
TestRunner TestRunner™ is an automated software testing tool for non-intrusive testing of standard and non-standard systems.

TestRunner™ provides solutions for functional and regression testing of non-standard embedded systems. TestRunner™ is a non-intrusive testing tool which verifies that applications work as expected. By capturing, replaying and verifying user interactions automatically, TestRunner™ identifies defects and ensures that business processes, which span across multiple applications and communication devices, work flawlessly the first time and remain reliable throughout the lifecycle.

Company: Rational Software Corporation
Rational preVue Rational preVue products are enterprise-wide testing solutions for X Window and terminal-based applications. Rational preVue automates regression and performance testing by emulating the activities of both users and physical devices to deliver a realistic representation of application workload. By recording your interactions with the application under test, Rational preVue generates tests that allow you to assess the performance of your application under varied loads.

Source: http://www.easy-qa.com/pages/easy-qa-tools-Functional-GUI-Testing-Tools.htm

Thursday, November 22, 2007

Testing Tools Part - 2

Company: +1 Software Engineering
+1Test +1Test supports unit, integration, and regression testing. A unit test tests an individual source code module. Integration testing tests a "build" (i.e., a submodel) of the project. Regression testing runs all currently defined test cases.

For each module being tested, a test case, test shell script, and the expected and actual results are used to generate test reports.

Company: AutoTester Inc.
AutoTester for Windows AutoTester provides immediate productivity through capture/replay style test creation, yet stores the tests as well-documented, easily maintainable, object-aware tests. Both skilled developers and application users benefit from using AutoTester. The product includes an easy to use menu-driven interface as well as a powerful command set for advanced scripting needs. AutoTester's software quality assurance experts work with you on-site to provide software training, implementation assistance and project support. From test execution and management to results analysis, AutoTester's experts help you maximize the results of your functional and regression testing efforts.

Company: AutoTester Inc.
AutoTester with DataBuild™ AutoTester with DataBuild™ is a powerful tool that performs automated testing without the obligation to learn and use a complicated scripting language or devote time and resources to the creation and maintenance of both a scripting language and an ever-growing library of test outlines. A programming or technical background is not necessary to use DataBuild™. DataBuild's™ simple menu system is accessed through any standard browser. Using DataBuild™ is intuitive and familiar. Freed from the onerous tasks of test creation and maintenance, focus shifts to the results of tests and how to use those results to better meet business needs and goals. Learning to use DataBuild™ is as easy as using it; the training and implementation course takes only four days and includes help converting from your current AutoTester product to DataBuild™. Using DataBuild™ saves time, resources, and money.

Company: CenterLine Development Systems, Inc.
QC/Replay QC/Replay combines true widget awareness, a non-proprietary scripting language, and automatic synchronization to provide object-based capture/playback verification of your most sophisticated applications.

Company: Compuware Corporation
QAHiperstation QAHiperstation’s advanced functional record capabilities allow you to capture user sessions on a keystroke-by-keystroke basis. The organizational record capability lets you capture any 3270 terminal activity and, optionally, LU6.2 conversations in the VTAM network. Further, QAHiperstation enhances the creation and maintenance of the test data environment with data management capabilities.

Company: Compuware Corporation
QAHiperstation+ Adding a Windows-based interface to mainframe-based application testing can significantly improve productivity and speed up testing. QAHiperstation+ extends the capabilities of QAHiperstation by providing GUI-based test analysis and results reporting from a workstation. This increases an organization’s testing productivity and broadens its resource pool by permitting the use of non-technical personnel. Using QAHiperstation+’s versatile functions and features, testers can quickly capture and synchronize all test activities for mission-critical VTAM applications.

Company: Compuware Corporation
QARun™ Save considerable time and execute more test cycles by automating setup and execution of test scripts. QARun uses an object-oriented approach to automate test script generation, which can significantly increase the speed and accuracy of testing. As you point and click, QARun records user actions and system responses into re-usable scripts that test specific application functions. You will create powerful, feature-rich tests even if you don’t have extensive knowledge of programming languages or application structure.

Source: http://www.easy-qa.com/pages/easy-qa-tools-Functional-GUI-Testing-Tools.htm

List of Tools

Testing Tools

Anteater

Description: Anteater is a testing framework designed around Ant, from the Apache Jakarta Project. It provides an easy way to write tests for checking the functionality of a Web application or of an XML Web service.

Requirement: OS Independent

Doit: Simple Web Application Testing

Description: Doit is a scripting tool and language for testing web applications that use forms. Doit can generate random or sequenced form fill-in information, report results (into a database, file, or stdout), filter HTML results, and compare results to previous results, without having to manually use a web browser. It uses a console-based web client tool (like Curl or Wget) to send and receive HTTP requests and responses respectively.

Requirement: You must have Perl 5 or greater and the appropriate Perl modules (detailed in the Doit manual) installed on your system before you can use Doit.
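
The core loop a tool like Doit automates can be sketched in a few lines (Doit itself is Perl-based and drives curl or wget; here plain Python stands in, and the URL, form fields, and file name are placeholders): submit the form, capture the response, and compare it to a previously saved result.

```python
# A minimal sketch of form-based regression checking in the spirit of Doit
# (Doit itself is Perl and uses curl/wget; URL and file names are placeholders).
import urllib.parse
import urllib.request
from pathlib import Path

form_data = urllib.parse.urlencode({"name": "tester", "qty": "3"}).encode()
with urllib.request.urlopen("http://www.example.com/order.cgi", data=form_data) as resp:
    result = resp.read().decode("utf-8", "replace")

baseline = Path("baseline.html")
if baseline.exists():
    # Compare against the previous run's result, as Doit does.
    status = "PASS" if result == baseline.read_text() else "FAIL: output changed"
else:
    baseline.write_text(result)   # first run: record the baseline
    status = "BASELINE RECORDED"
print(status)
```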



Wednesday, November 21, 2007

Library-Dictionary of Load Testing Terms

Business Case: An interaction the user has with the web-based application or website that has meaning in a business context. It could be as simple as viewing a single page, or as complicated as performing an entire transaction. In Web Performance Trainer, this represents a series of HTTP transactions that are repeated by virtual users during a test.

Cache: The web browser maintains a copy of recently requested resources (pages, images, etc.) so that when a resource is needed again, it does not have to ask the server for another copy. This greatly improves the performance of the browser, especially on a graphics-laden website where images (menu bars, for instance) are reused on multiple pages.
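
In HTTP terms, the browser revalidates a cached copy with a conditional request; a minimal sketch (the URL is a placeholder), where a 304 Not Modified response means the cached copy is still good:

```python
# A minimal sketch of HTTP cache revalidation (the URL is a placeholder).
import urllib.request
from urllib.error import HTTPError

url = "http://www.example.com/logo.png"
with urllib.request.urlopen(url) as resp:
    cached_body = resp.read()
    last_modified = resp.headers.get("Last-Modified", "")

# Later, ask the server whether the cached copy is still current instead
# of re-downloading it unconditionally.
req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
try:
    with urllib.request.urlopen(req) as resp:
        cached_body = resp.read()   # 200 OK: resource changed, refresh the cache
except HTTPError as err:
    if err.code != 304:             # 304 Not Modified: cached copy is still good
        raise
```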

Controller: Web Performance Trainer can be run in two modes: as the controller or as an engine. In controller mode, Web Performance Trainer presents a GUI that allows the recording, editing, and execution of load tests. Only one controller may be run on a network with the same license key.

Cookie: A small amount (usually less than 1 KB) of text that a web server asks the web browser to store on the browser's computer. This information is sent back to the server each time the browser makes a request for a URL on that server. This is the most common (and most preferred) method of session tracking. Contrary to popular opinion, cookies cannot be used by hackers to run harmful programs on your computer or steal account numbers from your Quicken files (except for Microsoft Internet Explorer, which requires a security patch to prevent such abuse).
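
A minimal sketch of the cookie round trip using Python's standard library (URLs are placeholders): the jar stores whatever Set-Cookie header the server sends, and later requests return it automatically.

```python
# A minimal sketch of cookie-based session tracking (URLs are placeholders).
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# First request: the server's Set-Cookie header is stored in the jar.
opener.open("http://www.example.com/login")
for cookie in jar:
    print(cookie.name, cookie.value)

# Subsequent requests automatically send the cookie back, so the server
# can recognize this as the same session.
opener.open("http://www.example.com/account")
```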

Delay time: The amount of time between receipt of one URL and the request for the next URL. Web Performance Trainer records this duration while recording a business case and uses it to accurately simulate user behavior when performing a test. When a delay time occurs between resources within a web page, it is usually due to the processing time required by the browser to parse the page and render it (and its images) on the screen. When the delay time occurs between the final image of one web page and the request for the next web page, it represents the time spent by the user reading the page and deciding what to do next. In this case, the delay time is referred to as Think Time.
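
A minimal sketch of how recorded delay times are replayed during simulation (the URLs and timings are made-up placeholders):

```python
# A minimal sketch of replaying recorded delay times between requests
# (the recording data and URLs are made-up placeholders).
import time
import urllib.request

# (url, delay_seconds) pairs as a recorder might have captured them.
recorded_session = [
    ("http://www.example.com/", 0.0),
    ("http://www.example.com/catalog", 1.2),   # browser parse/render delay
    ("http://www.example.com/item?id=7", 8.5), # think time: user reading the page
]

for url, delay in recorded_session:
    time.sleep(delay)                 # reproduce the recorded pacing
    urllib.request.urlopen(url).read()
```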

Engine: Web Performance Trainer can be run in two modes: as the controller or as an engine. In engine mode, Web Performance Trainer presents a console interface and listens for commands from a controller. It is used by the controller to generate virtual users; many engines can be used by a controller to generate massive network loads.

FTP (File Transfer Protocol): A network protocol for sending and receiving files. FTP is built on top of TCP/IP.

HTTP (Hypertext Transfer Protocol): The protocol used between web browsers and web servers to transfer web pages and associated files (images, etc). It is the language of the World Wide Web. HTTP is built on top of TCP/IP.

HTTP Transaction: A request sent from the browser to the server and the corresponding response from the server to the browser, both sent using HTTP. This round-trip communication path allows the browser to request a resource (URL) and receive a response from the server. It may include content sent by the browser (data entered in form fields, uploaded files) and content returned from the server (a web page, an image, etc.).

Host: A computer that is connected to a TCP/IP network, including the Internet. Each host has a unique IP address.

IP (Internet Protocol): A network protocol that specifies the format of data transferred between two hosts (called packets or datagrams) and the addressing scheme. IP by itself is something like the postal system: it allows you to address a package and drop it into the system, but there is no direct link between you and the recipient. IP is generally used in conjunction with TCP.

IP address: An identifier used by the IP protocol to identify an individual host. The current version of IP, IPv4, uses four numbers, each in the range 0-255, to identify each network address. Note that certain IP addresses have special meanings: 127.0.0.1 is the 'loopback' address that a host uses to redirect traffic to itself (usually for diagnostic purposes). The address ranges 10.*.*.* and 192.168.*.* are always reserved for internal networks; 127.*.*.*, 0.*.*.*, and 255.255.255.255 are also reserved for special uses.
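
These special ranges are easy to check programmatically; a small sketch using Python's standard ipaddress module:

```python
# A minimal sketch of checking the special address ranges described above,
# using Python's standard ipaddress module.
import ipaddress

for text in ("127.0.0.1", "10.4.2.9", "192.168.1.20", "8.8.8.8"):
    addr = ipaddress.ip_address(text)
    print(text,
          "loopback" if addr.is_loopback else "",
          "private" if addr.is_private else "",
          "global" if addr.is_global else "")
```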

License key: An encrypted file that contains the critical license information for your installation of Web Performance Trainer.

Multihomed: An adjective used to describe a host that is connected to two or more networks or has two or more network addresses. For example, network servers may have multiple network interfaces to increase maximum throughput.

Proxy server: A server, typically on a private network, that allows access to external network resources. In a common network configuration, the computers on a company network are separated from the Internet by a firewall (for security reasons). Since these computers cannot access the Internet directly to browse web pages, the browser must be configured to use a proxy server (which is allowed to access the internet) to service requests for web pages from the Internet. All common browsers support this configuration, usually in a configuration section titled “Use a proxy server”.

Sample period: A time period during a load test over which data is aggregated. The statistics calculated by Web Performance Trainer are computed for each sample period during the test.

Session Tracking: HTTP is 'stateless'. This means that between the time your browser receives a web page and asks for the next page, the server has forgotten who you are; in other words, when your browser asks for the second page, the server has no way to know that it was the same browser that asked for the first page. This is obviously a problem for any application that needs to remember who you are, such as an application that requires a login. The notion of a single, unique user browsing from one page to another is referred to as a 'session'. As the web has evolved, several techniques for session tracking have evolved. The most common are cookies and URL rewriting.

SMTP (Simple Mail Transfer Protocol): A network protocol for transferring e-mail messages between servers. Most e-mail systems that send mail over the Internet use SMTP. SMTP is built on top of TCP/IP.

TCP (Transmission Control Protocol): A network protocol that enables two hosts to establish a connection and exchange streams of data. TCP guarantees delivery of data and also guarantees that packets will be delivered in the same order in which they were sent. TCP is a little like a phone call: there is an extended connection between two hosts during which either host can send data to the other.

TCP/IP: The suite of communications protocols used to connect hosts on the Internet. TCP/IP combines the TCP and IP protocols to provide addressing and reliable data transfer for a variety of other Internet protocols, including HTTP, FTP, and SMTP.

Think Time: The time between the browser displaying a page to the user and the user clicking a link to browse to the next page. This is the time it takes the user to read the content of the page and decide what to do next. Web Performance Trainer records this time when recording a Business Case and uses it to accurately simulate the user when performing a test. See also Delay Time.

TTFB: Stands for "Time to First Byte": the duration between the time the virtual user made an HTTP request and the time the first byte of the response from the web server arrived. This value gives an idea of the responsiveness of the network and web server, and consists of the socket connection time, the time to send the HTTP request, and the time to receive the first byte of the HTTP response.
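
A minimal sketch of measuring TTFB at the socket level (host and path are placeholders), making the three components visible: connect, send the request, wait for the first byte.

```python
# A minimal sketch of measuring Time to First Byte with a raw socket
# (host and path are placeholders).
import socket
import time

host, path = "www.example.com", "/"
start = time.perf_counter()

sock = socket.create_connection((host, 80))        # socket connection time
sock.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
first_byte = sock.recv(1)                          # block until first response byte

ttfb = time.perf_counter() - start
print(f"TTFB: {ttfb * 1000:.1f} ms")
sock.close()
```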

URL (Uniform Resource Locator): A specially formatted string that describes a resource on the Internet. The browser uses this to determine where on the network the resource is located. A typical URL looks like this: http://www.example.com/products/index.html

Virtual User: A software entity, internal to Web Performance Trainer, that simulates a real user by repeatedly performing a Business Case during a load test.
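
A minimal sketch of the idea (URLs, counts, and timings are made up): each thread is one virtual user repeatedly performing a business case until the test window closes.

```python
# A minimal sketch of virtual users: each thread repeatedly performs a
# recorded business case during the test window (URLs are placeholders).
import threading
import time
import urllib.request

BUSINESS_CASE = ["http://www.example.com/", "http://www.example.com/search?q=qa"]
TEST_DURATION = 30          # seconds
NUM_VIRTUAL_USERS = 5

def virtual_user(stop_at):
    while time.time() < stop_at:
        for url in BUSINESS_CASE:        # one pass = one business case
            urllib.request.urlopen(url).read()
        time.sleep(1.0)                  # think time between iterations

stop_at = time.time() + TEST_DURATION
threads = [threading.Thread(target=virtual_user, args=(stop_at,))
           for _ in range(NUM_VIRTUAL_USERS)]
for t in threads: t.start()
for t in threads: t.join()
```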

Testing Terminology - 4

Sanity Testing: Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.

Scalability Testing: is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling]

Sensitive Test: A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]

Smoke Test: Describes an initial set of tests that determine if a new version of an application performs well enough for further testing. [Louise Tamres, 2002]

Specification-based test: A test whose inputs are derived from a specification.

Spike testing: Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; considered a type of load test. [Load Testing Terminology by Scott Stirling]

State-based testing: Testing with test cases developed by modeling the system under test as a state machine.

State Transition Testing: Technique in which the states of a system are first identified, and test cases are then written to exercise the triggers that cause a transition from one state to another. [William E. Lewis, 2000]

Static testing: Source code analysis. Analysis of source code to expose potential defects.

Statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. [BCS]

Stealth bug: A bug that removes information useful for its diagnosis and correction [R.V.Binder, 1999]

Storage test: Studies how the program uses memory and space, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p. 55]

Stress/Load/Volume test: tests that provide a high degree of activity, either using boundary conditions as inputs or multiple copies of a program executing in parallel as examples.

Structural testing: (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, and statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.

System testing: black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

Table Testing: test access, security, and data integrity of table entries. [William E.Lewis, 2000]

Test Bed: An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [IEEE 610]

Test Case: a set of test inputs, executions, and expected results developed for a particular objective.
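
Written as code, the three ingredients of a test case (inputs, execution, expected results) look like this; a minimal sketch using Python's unittest, with apply_discount as a made-up function under test:

```python
# A minimal sketch of a test case as code: inputs, execution, expected result.
# The function under test (apply_discount) is a made-up example.
import unittest

def apply_discount(price, percent):
    """Function under test: returns price reduced by percent."""
    return round(price * (1 - percent / 100), 2)

class DiscountTestCase(unittest.TestCase):
    def test_ten_percent_off(self):
        result = apply_discount(200.00, 10)   # inputs + execution
        self.assertEqual(result, 180.00)      # expected result

if __name__ == "__main__":
    unittest.main()
```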

Test conditions: The set of circumstances that a test invokes [Daniel J.Mosley, 2002]

Test coverage: The degree to which a given test or set of tests addresses all specified test cases for a given system or component.

Test Criteria: Decision rules used to determine whether a software item or software feature passes or fails a test.

Test data: The actual (set of) values used in the test or that are necessary to execute the test. [Daniel J. Mosley, 2002]

Test documentation: (IEEE) Documentation describing plans for, or results of, the testing of a system or component; types include test case specification, test incident report, test log, test plan, test procedure, and test report.

Test Driver: A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.

Test Harness: A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.
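
A minimal sketch of such a harness, with all names (convert, fetch_rate, the test data) invented for illustration: a driver invokes the test item while a stub stands in for an unavailable dependency.

```python
# A minimal sketch of a test harness: a driver plus a stub.
# All names (fetch_rate, convert, the test data) are made-up illustrations.

def stub_fetch_rate(currency):
    """Stub: stands in for a real exchange-rate service not yet available."""
    return {"EUR": 0.9, "GBP": 0.8}[currency]

def convert(amount, currency, fetch_rate):
    """Test item: converts an amount using a rate supplied by a collaborator."""
    return round(amount * fetch_rate(currency), 2)

def driver():
    """Test driver: invokes the test item, supplies inputs, checks outputs."""
    cases = [((100, "EUR"), 90.0), ((50, "GBP"), 40.0)]
    for (amount, currency), expected in cases:
        actual = convert(amount, currency, stub_fetch_rate)
        print(f"{amount} {currency}: {'PASS' if actual == expected else 'FAIL'}")

driver()
```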

Test Item: A software item, which is the object of testing.

Test log: A chronological record of all relevant details about the execution of a test.

Test plan: A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organizes the elements of the test life cycle, including resource requirements, project schedule, and test requirements.

Test procedure: A document providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called a manual test script.

Test strategy: Describes the general approach and objectives of the test activities. [Daniel J. Mosley, 2002]

Test status: The assessment of the result of running tests on software.

Test stub: A dummy software component or object used (during development and testing) to simulate the behavior of a real component. The stub typically provides test output.

Test suites: A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.

Test tree: A physical implementation of a test suite. [Dorothy Graham, 1999]

Testability: Attributes of software that bear on the effort needed for validating the modified software [ISO 8402]

Testing: The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.

Unit testing: Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White and black box testing methods are combined during unit testing.

Usability testing: Testing for user-friendliness. Clearly this is subjective, and will depend on the targeted end-user or customer.

Validation: The comparison between the actual characteristics of something (e.g., the products of a software project) and the expected characteristics. Validation is checking that you have built the right system.

Verification: The comparison between the actual characteristics of something (e.g., the products of a software project) and the specified characteristics. Verification is checking that we have built the system right.

Volume testing: Testing where the system is subjected to large volumes of data.

Walkthrough: In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White box testing (glass-box): Testing done under a structural testing strategy; requires complete access to the object's structure, that is, the source code.

Testing Terminology - 3

Monkey Testing (smart monkey testing): Inputs are generated from probability distributions that reflect actual expected usage statistics, e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs; that is, if a given test requires an input vector with five components, in low-IQ testing these would be generated independently. In higher-IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.
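
A minimal sketch of the idea, with made-up actions and weights standing in for a real user profile: each generated event is drawn from weighted distributions rather than uniformly at random.

```python
# A minimal sketch of smart monkey input generation: inputs are drawn from
# weighted distributions that mimic a user profile (all weights are made up).
import random

random.seed(42)  # reproducible monkey runs

actions = ["browse", "search", "add_to_cart", "checkout"]
weights = [0.55, 0.30, 0.10, 0.05]   # observed usage frequencies

def random_event():
    """One input vector; components drawn independently (low-IQ monkey)."""
    return {
        "action": random.choices(actions, weights=weights)[0],
        "quantity": max(1, int(random.gauss(mu=2, sigma=1))),
    }

for _ in range(5):
    print(random_event())   # feed each event to the system under test
```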

Maximum Simultaneous Connection Testing: This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.

Mutation Testing: A testing strategy where small variations to a program are inserted (a mutant), followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is retired. If undetected, the test suite must be revised. [R.V. Binder, 1999]
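
A minimal sketch of the idea (the function and tests are made up): a mutant with one small variation is run against the same suite, and the suite either kills it or must be strengthened.

```python
# A minimal sketch of the mutation-testing idea: a mutant with a small
# variation is run against the same test suite (functions are made up).
def max_of(a, b):            # original
    return a if a >= b else b

def max_of_mutant(a, b):     # mutant: >= changed to <=
    return a if a <= b else b

def test_suite(fn):
    """Returns True if every test passes."""
    return fn(3, 2) == 3 and fn(2, 2) == 2 and fn(-1, 5) == 5

assert test_suite(max_of)            # the suite passes on the original
if not test_suite(max_of_mutant):
    print("mutant killed: the suite detects the variation")
else:
    print("mutant survived: the suite needs a stronger test")
```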

Multiple Condition Coverage: A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. [G.Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.

Negative Test: A test whose primary purpose is falsification; that is, a test designed to break the software. [B. Beizer, 1995]

Orthogonal Array Testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: orthogonal arrays were introduced by Plackett and Burman in 1946, and the approach was applied to testing by G. Taguchi, 1987.

Orthogonal Array Testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]

Oracle (Test Oracle): A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [from BS7925-1]

Parallel Testing: Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run. [ISO]

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]

Performance Testing: Can be undertaken to: 1) show that the system meets specified performance objectives, 2) tune the system, 3) determine the factors in hardware or software that limit the system’s performance, and 4) project the system’s future load-handling capacity in order to schedule its replacement. [Software System Testing and Quality Assurance. Beizer, 1984, p. 256]

Prior Defect History Testing: Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]

Qualification Testing (IEEE): Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.

Quality: The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.

Quality Assurance (QA): Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).

Quality Control (QC): Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.

Our Definition of Quality: Achieving the target (not conformance to requirements as used by many authors) & minimizing the variability of the system under test.

Race Condition Defect: Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.

Recovery Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing: Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.

Regression Testing: Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program [Glenford J.Myers, 1979]

Reengineering: The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).

Reference Testing: A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]

Reliability Testing: Verifies the probability of failure-free operation of a computer program in a specified environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified conditions over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus, reliability is often written R(t) as a function of time t: the probability that the object will not fail within time t.

Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in – the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed – the software will appear to work properly. [Professor Dick Hamlet. Ph.D.]

Range Testing: For each input identifies the range over which the system behavior should be the same. [William E. Lewis, 2000]

Risk Management: An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.

Robust Test: A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]

Testing Terminology - 2

Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The causal trail: a person makes an error that causes a defect that causes a failure. [Robert M. Poston, 1996]

Follow-up testing: We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances. [Measuring the Effectiveness of Software Testers, Cem Kaner, STAR East 2003]

Formal Testing (IEEE): Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

Free Form Testing: Ad hoc or brainstorming using intuition to define test cases. [William E. Lewis, 2000]

Functional Decomposition Approach: An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as Framework Driven Approach. [Daniel J. Mosley, 2002]

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Gray box Testing: Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of view of the tester. [Cem Kaner]

Gray box Testing: Test designed based on the knowledge of algorithm, internal states, architectures, or other high-level descriptions of the program behavior. [Doug Hoffman]

Gray box Testing: Examines the activity of back-end components during test case execution. Two types of problems can be encountered during gray-box testing: a component encounters a failure of some kind, causing the operation to be aborted (the user interface will typically indicate that an error occurred); or the test executes in full, but the contents of the results are incorrect (somewhere in the system, a component processed data incorrectly, causing the error in the results). [Elfriede Dustin, “Quality Web Systems: Performance, Security & Usability.”]

High-level tests: These tests involve testing whole, complete products [Kit, 1995]

Inspection: A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. [IEEE94] A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration: The process of combining software components, hardware components, or both into an overall system.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration Testing: Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.

Interface Tests: Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control; therefore, simulation can provide the characteristics or behaviors for a specific function.

Internationalization Testing (I18N): Testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper- and lower-case handling, and so forth. [Clinton De Young, 2003]

Interoperability Testing: Measures the ability of your software to communicate across the network on multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

Inter-operability Testing: True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn’t be done because it can’t be done. [from Quality Is Not The Goal. By Boris Beizer, Ph.D.]

Latent bug: A bug that has been dormant (unobserved) in two or more releases. [R.V. Binder, 1999]

Lateral Testing: A test design technique based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]

Load Testing: Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.

Load-stability test: A test designed to determine whether a web application will remain serviceable over an extended time span.

Load & Isolation test: The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.

Testing Terminology

Acceptance Test: Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.

Ad Hoc Testing: Testing carried out using no recognized test case design technique. [BCS]

Alpha Testing: Testing of a software product or system conducted at the developer’s site by the customer.

Assertion Testing: (NBS) A dynamic analysis technique, which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.

Automated Testing: Software testing which is assisted with software technology that does not require operator (tester) input, analysis, or evaluation.

Background Testing: The execution of normal functional testing while a realistic work load exercises the SUT. This workload is being processed “in the background” as far as the functional testing is concerned. [Load Testing Terminology by Scott Stirling]

Bug: Glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision, defect, issue, problem.

Beta Testing: Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

Benchmarks: Programs that provide performance comparisons for software, hardware, and systems.

Benchmarking: A specific type of performance test with the purpose of determining performance baselines for comparison. [Load Testing Terminology by Scott Stirling]

Big-bang Testing: Integration testing where no incremental testing takes place prior to all the system’s components being combined to form the system.

Black-Box Testing: A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.

Boundary Value Analysis (BVA): BVA is different from equivalence partitioning in that it focuses on “corner cases”, values at or just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA attempts to derive boundary values, and is often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed successfully, using requirements specifications and user documentation.
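
A minimal sketch of BVA for the -100 to +1000 range mentioned above, with validate_input as a made-up function under test:

```python
# A minimal sketch of boundary value analysis for a function that accepts
# values in the range -100..1000 (validate_input is a made-up example).
def validate_input(value):
    """Function under test: accepts only values in [-100, 1000]."""
    return -100 <= value <= 1000

# Boundary values: each edge of the range, plus one step outside and inside.
cases = [(-101, False), (-100, True), (-99, True),
         (999, True), (1000, True), (1001, False)]

for value, expected in cases:
    actual = validate_input(value)
    print(f"{value:>5}: {'PASS' if actual == expected else 'FAIL'}")
```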

Breadth Test: A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail. [Dorothy Graham, 1999]

Cause-Effect Graphing: (1) [NBS] A test data selection technique. The input and output domains are partitioned into classes, and analysis is performed to determine which input classes cause which effects. A minimal set of inputs is chosen which will cover the entire effect set. (2) A systematic method of generating test cases representing combinations of conditions. See: testing, functional. [G. Myers]

Clean Test: A test whose primary purpose is validation; that is, a test designed to demonstrate the software’s correct working. (Syn: positive test)

Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. (Syn: Fagan Inspection)

Code walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. [G. Myers/NBS] Contrast with code audit, code inspection, code review.

Coexistence Testing: Coexistence isn’t enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It’s probably an exponentially hard problem rather than a square-law problem. [From Quality Is Not The Goal. By Boris Beizer, Ph.D.]

Compatibility bug: A revision to the framework breaks a previously working feature; a new feature is inconsistent with an old feature; or a new feature breaks an unchanged application rebuilt with the new framework code. [R.V. Binder, 1999]

Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Composability Testing: Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, ‘easy’ and other lies, eWEEK, April 28, 2003]

Condition Coverage: A test coverage criterion requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, multiple condition coverage, path coverage, and statement coverage. [G. Myers]

Conformance directed testing: Testing that seeks to establish conformance to requirements or specification. [R.V.Binder, 1999]

CRUD Testing: Build a CRUD matrix and test all object creations, reads, updates, and deletions. [William E. Lewis, 2000]

Data-Driven testing: An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script.
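
A minimal sketch of the approach: the script's control flow is fixed, while inputs and expected results come from external data (here an inline CSV standing in for a real data file; login is a made-up system under test).

```python
# A minimal sketch of data-driven testing: the script's control flow is
# fixed while test inputs and expected results come from external data.
import csv
import io

CSV_ROWS = """username,password,expected
alice,secret123,accepted
alice,wrongpass,rejected
,secret123,rejected
"""

def login(username, password):
    """Made-up system under test."""
    return "accepted" if (username == "alice" and password == "secret123") else "rejected"

for row in csv.DictReader(io.StringIO(CSV_ROWS)):
    actual = login(row["username"], row["password"])
    print(f"{row['username'] or '<blank>'}: "
          f"{'PASS' if actual == row['expected'] else 'FAIL'}")
```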

Data flow testing: Testing in which test cases are designed based on variable usage within the code.

Database testing: Check the integrity of database field values.

Defect: The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.

Defect: Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.

Depth test: A test case that exercises some part of a system to a significant level of detail.

Decision Coverage: A test coverage criterion requiring enough test cases such that each decision has a true and a false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.
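
A minimal sketch (the function is made up): two test cases are enough to give the single decision both a true and a false outcome.

```python
# A minimal sketch of decision (branch) coverage: two test cases drive the
# single decision in the made-up function to both a true and a false result.
def classify(age):
    if age >= 18:          # the decision under test
        return "adult"
    return "minor"

assert classify(30) == "adult"   # decision evaluates true
assert classify(10) == "minor"   # decision evaluates false
print("decision coverage: 100% (both outcomes exercised)")
```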

Dirty testing: Negative testing.

Dynamic testing: Testing, based on specific test cases, by execution of the test object or running programs.

End-to-End Testing: Similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value from each class. For example, with a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any other input class other than integer is provided, this would be considered a negative test assertion or condition.
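
A minimal sketch with one representative value per class, using a made-up parse_quantity function that accepts integers from 1 to 99:

```python
# A minimal sketch of equivalence partitioning for a field that expects an
# integer quantity from 1 to 99 (parse_quantity is a made-up example).
def parse_quantity(text):
    """Made-up function under test: returns an int in 1..99 or raises ValueError."""
    value = int(text)            # non-numeric input raises ValueError
    if not 1 <= value <= 99:
        raise ValueError("out of range")
    return value

# One representative value per equivalence class.
partitions = {
    "valid integer in range": ("42", True),
    "integer below range":    ("0", False),
    "integer above range":    ("150", False),
    "non-numeric input":      ("abc", False),
}

for name, (sample, should_pass) in partitions.items():
    try:
        parse_quantity(sample)
        ok = should_pass
    except ValueError:
        ok = not should_pass
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
```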

Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests. [Robert M.Poston, 1996]

Errors: The amount by which a result is incorrect. Mistakes are usually a result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverable. Once a fault is encountered, the end result will be a program failure. The failure usually has some margin of error, high, medium, or low.

Error Guessing: Another common approach to black-box validation. Black-box testing is when everything else other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by an engineer.

Error guessing: A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specially to expose them [from BS7925-1]

Error seeding: The purposeful introduction of faults into a program to test the effectiveness of a test suite or other quality assurance program. [R.V. Binder, 1999]

Exception Testing: Identify error messages and exception handling processes and conditions that trigger them. [William E.Lewis, 2000]

Exhaustive Testing (NBS): Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply. The outcome of this test influences the design of the next test. [James Bach]