Tuesday, January 8, 2008

Automated testing tools - 2

Will automated testing tools make testing easier?

· Possibly yes. For larger or ongoing long-term projects they are valuable, but for small projects the time needed to learn and implement them may not be worth it unless personnel are already familiar with the tools.

· A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. Often the recorded script is manually modified and enhanced. If new buttons are added, or some underlying code in the application is changed, the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
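As an illustration only (not any particular tool's script format), the record/playback idea can be sketched in a few lines of Python: actions are serialized to a simple text script, replayed against the application, and the results compared to a saved baseline. `DemoApp` and the action names here are hypothetical stand-ins.

```python
# Minimal sketch of record/playback: recorded actions become a text
# script, which is later replayed and its results compared to a baseline.

def record(actions):
    """Serialize user actions (e.g. 'click', 'type') into a script."""
    return "\n".join(f"{name} {arg}" for name, arg in actions)

def playback(script, app):
    """Replay a recorded script against an application object."""
    results = []
    for line in script.splitlines():
        name, arg = line.split(" ", 1)
        results.append(getattr(app, name)(arg))   # e.g. app.click("OK")
    return results

class DemoApp:
    """Hypothetical stand-in for the application under test."""
    def click(self, target):
        return f"clicked {target}"
    def type(self, text):
        return f"typed {text}"

script = record([("click", "File>Open"), ("type", "report.txt")])
baseline = ["clicked File>Open", "typed report.txt"]
assert playback(script, DemoApp()) == baseline    # regression check
```

The maintenance problem described above shows up directly here: any change to the application's menus or behavior invalidates both the script and the baseline.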

· Another common type of approach for automation of functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases.
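A minimal sketch of the keyword-driven idea, with hypothetical keywords `enter_text` and `verify_text` and a dictionary standing in for the application under test; the test table is just data and could equally live in a spreadsheet, maintained separately from the driver.

```python
# Hypothetical keyword-driven sketch: test steps live in tabular data,
# separate from the driver code that interprets them.

ACTIONS = {}  # keyword -> implementation

def action(name):
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("enter_text")
def enter_text(state, field, value):
    state[field] = value

@action("verify_text")
def verify_text(state, field, expected):
    assert state[field] == expected, f"{field}: {state[field]!r} != {expected!r}"

# Each row: keyword, then its data - maintainable without touching the driver.
test_table = [
    ("enter_text",  "username", "alice"),
    ("verify_text", "username", "alice"),
]

def run(table):
    state = {}           # stands in for the application under test
    for keyword, *args in table:
        ACTIONS[keyword](state, *args)
    return state

run(test_table)
```

Adding a new test case means adding a row of data, not writing new driver code, which is the maintenance advantage the paragraph above describes.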

Other automated tools can include:

· Code analyzers - monitor code complexity, adherence to standards, etc.

· Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
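As a toy illustration of statement coverage (real coverage analyzers, such as coverage.py for Python, are far more capable), Python's `sys.settrace` hook can record which lines of a function a test actually executed:

```python
import sys

# Toy statement-coverage tracker: record the line numbers executed
# inside grade(), then report the body lines a test never reached.

executed = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "grade":
        executed.add(frame.f_lineno)
    return tracer

def grade(score):
    if score >= 90:
        return "A"
    return "B"

sys.settrace(tracer)
grade(95)          # exercises only the 'A' branch
sys.settrace(None)

first = grade.__code__.co_firstlineno
all_lines = set(range(first + 1, first + 4))   # the three body lines
missed = all_lines - executed
print(f"missed lines: {sorted(missed)}")       # the 'return "B"' line is untested
```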

· Memory analyzers - such as bounds-checkers and leak detectors.

· Load/performance test tools - for testing client/server and web applications under various load levels.

· Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
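One such task, finding broken internal links, can be sketched with only the standard library. The site map and page HTML here are hypothetical, and a real web test tool would also fetch each URL over the network and check HTML validity, scripts, and security.

```python
from html.parser import HTMLParser

# Sketch of one web-test-tool task: collect the <a href=...> links on a
# page and flag internal links that point at no known page.

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

site_pages = {"/", "/about", "/contact"}     # hypothetical site map
page_html = '<a href="/about">About</a> <a href="/pricing">Pricing</a>'

parser = LinkCollector()
parser.feed(page_html)
broken = [l for l in parser.links if l.startswith("/") and l not in site_pages]
print(broken)  # ['/pricing'] - a link with no matching page
```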

· Other tools - for test case management, documentation management, bug reporting, and configuration management, file and database comparisons, screen captures, security testing, macro recorders, etc.

Test automation is, of course, possible without COTS tools. Many successful automation efforts utilize custom automation software that is targeted for specific projects, specific software applications, or a specific organization's software development environment. In test-driven agile software development environments, automated tests are built into the software during (or preceding) coding of the application.

What's the best way to choose a test automation tool over manual testing?

In manual testing, the test engineer exercises software functionality to determine if the software is behaving in an expected way. This means that the tester must be able to judge what the expected outcome of a test should be, such as expected data outputs, screen messages, changes in the appearance of a User Interface, XML files, database changes, etc. In an automated test, the computer does not have human-like 'judgment' capabilities to determine whether or not a test outcome was correct. This means there must be a mechanism by which the computer can do an automatic comparison between actual and expected results for every automated test scenario and unambiguously make a pass or fail determination. This factor may require a significant change in the entire approach to testing, since in manual testing a human is involved and can:

· Make mental adjustments to expected test results based on variations in the pre-test state of the software system

· Often make on-the-fly adjustments, if needed, to data used in the test

· Make pass/fail judgments about results of each test

· Make quick judgments and adjustments for changes to requirements

· Make a wide variety of other types of judgments and adjustments as needed
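The unambiguous pass/fail comparison described above can be sketched as a table of fully specified scenarios; `apply_discount` is a hypothetical function under test, and the point is that every expected result must be spelled out exactly, since no human is present to judge.

```python
# Sketch of mechanical expected-vs-actual comparison: each scenario
# pairs inputs with a fully specified expected result, and the tool
# makes an unambiguous pass/fail determination for each.

def apply_discount(price, percent):       # hypothetical function under test
    return round(price * (1 - percent / 100), 2)

scenarios = [
    # (inputs, expected output) - expected results must be exact
    ((100.0, 10), 90.0),
    ((19.99, 0), 19.99),
    ((50.0, 50), 25.0),
]

results = []
for (price, pct), expected in scenarios:
    actual = apply_discount(price, pct)
    results.append(("PASS" if actual == expected else "FAIL", actual, expected))

assert all(status == "PASS" for status, *_ in results)
```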

For those new to test automation, it might be a good idea to do some reading or training first. There are a variety of ways to go about doing this; some example approaches are:

· Read through information on the web about test automation such as general information available on some test tool vendor sites or some of the automated testing articles listed in the Softwareqatest.com Other Resources section.

· Obtain trial versions of test tools, or low-cost or open source tools, and experiment with them

· Attend software testing conferences or training courses related to test automation

As in anything else, proper planning and analysis are critical to success in choosing and utilizing an automated test tool. Choosing a test tool just for the purpose of 'automating testing' is not useful; useful purposes might include: testing more thoroughly, testing in ways that were not previously feasible via manual methods (such as load testing), testing faster, or reducing excessively tedious manual testing. Automated testing rarely enables savings in the cost of testing, although it may result in software lifecycle savings (or increased sales) just as with any other quality-related initiative.

With the proper background and understanding of test automation, the following considerations can be helpful in choosing a test tool (automated testing will not necessarily resolve them; they are only considerations for automation potential):

· Analyze the current non-automated testing situation to determine where testing is not being done or does not appear to be sufficient

· Where is current testing excessively time-consuming?

· Where is current testing excessively tedious?

· What kinds of problems are repeatedly missed with current testing?

· What testing procedures are carried out repeatedly (such as regression testing or security testing)?

· What testing procedures are not being carried out repeatedly but should be?

· What test tracking and management processes can be implemented or made more effective through the use of an automated test tool?

Taking into account the testing needs determined by analysis of these considerations and other appropriate factors, the types of desired test tools can be determined. For each type of test tool (such as functional test tool, load test tool, etc.) the choices can be further narrowed based on the characteristics of the software application. The relevant characteristics will depend, of course, on the situation and the type of test tool and other factors. Such characteristics could include the operating system, GUI components, development languages, web server type, etc. Other factors affecting a choice could include experience level and capabilities of test personnel, advantages/disadvantages in developing a custom automated test tool, tool costs, tool quality and ease of use, usefulness of the tool on other projects, etc.

Once a short list of potential test tools is selected, several can be utilized on a trial basis for a final determination. Any expensive test tool should be thoroughly analyzed during its trial period to ensure that it is appropriate and that its capabilities and limitations are well understood. This may require significant time or training, but the alternative is to take a major risk of a mistaken investment.

Automation testing


Software testing is an important part of the software development process. Manual testing becomes time-consuming as the level of software sophistication increases. Software test automation is one approach to assuring the quality of software while meeting the test design specification and the target time frame for software release to the market.

There are several different techniques used to accomplish software test automation. The following are the areas investigated in this research paper:

1. Code instrumentation techniques that will assist in the white box testing of the software.

2. Structured design and development process that will support the automatic generation of test cases and test procedures directly from design documents.

3. The use of a test harness that has a well-defined environment that can be controlled by software to provide known inputs for the Unit under Test (UUT) and measure the responses.

4. Screen capture techniques for testing Graphical User Interfaces.
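A minimal test-harness sketch along the lines of item 3: a hypothetical `alarm_controller` is the UUT, and a stub stands in for its controlled environment, so the harness can drive it with known inputs and measure the responses.

```python
# Hypothetical test-harness sketch: the harness controls the UUT's
# environment via a stub, drives it with known inputs, and compares
# the measured responses against expected values.

class SensorStub:
    """Controllable replacement for a real input source."""
    def __init__(self, readings):
        self._readings = iter(readings)
    def read(self):
        return next(self._readings)

def alarm_controller(sensor, threshold=100):   # the Unit under Test
    return "ALARM" if sensor.read() > threshold else "OK"

def run_harness(cases, threshold=100):
    outcomes = []
    for reading, expected in cases:
        response = alarm_controller(SensorStub([reading]), threshold)
        outcomes.append(response == expected)
    return outcomes

cases = [(50, "OK"), (150, "ALARM"), (100, "OK")]  # known inputs + expected
print(run_harness(cases))  # [True, True, True]
```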

· Test automation enables detailed testing with a significant reduction in the test cycle

· The efficiency of automated testing, incorporated into the product life cycle, can generate sustainable time and money savings

· Automation supports better and faster testing

· Automated testing increases the consistency and accuracy of testing and results in greater test coverage

· Automated tests can be run faster, in a consistent manner, and over and over again with less overhead

· Automated testing saves much of the effort needed for rigorous testing of the system

· Automated testing ensures uniformity in the testing process each time a test is executed

· Automated testing has its own advantages and disadvantages and involves many challenges; if not planned carefully, it may lead to poor-quality testing

Several factors affect automated testing:

1) Number of interfaces – The more interfaces a system has, the more complex automated testing will be.

2) Types of external interfaces – The types of external interfaces affect automated testing because some interfaces cannot easily be simulated.

3) Number of releases expected for testing – The number of releases affects the case for automation; if only one or two releases are expected, automation may not be practicable.

4) Maturity of the product – A new product cannot be tested completely in an automated environment, because automated testing assumes some stability in the product, which a new product may not yet have.