Wednesday, November 21, 2007

Testing Definitions

Q: How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases and conditions, the test environment, a list of related tasks, pass/fail criteria and a risk assessment.

Inputs for this process:

· A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.

· A description of the roles and responsibilities of the resources required for the test, along with schedule constraints. This information comes from man-hour estimates and schedules.

· Testing methodology. This is based on known standards.

· Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.

· Requirements that the system cannot meet, e.g. system limitations.

Outputs for this process:

· An approved and signed-off test strategy document and test plan, including test cases.

· Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q: How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing the functional requirements of the release and preparing logical groups of functions that can be further broken down into test procedures. Test procedures define test conditions, the data to be used for testing and the expected results, including database updates, file outputs and report results. Generally speaking...

· Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

· Test engineers define unit test requirements and unit test cases, and also execute those unit test cases.

· It is the test team that, with the assistance of developers and clients, develops the test cases and scenarios for integration and system testing.

· Test scenarios are executed through the use of test procedures or scripts.

· Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

· Test procedures or scripts include the specific data that will be used for testing the process or transaction.

· Test procedures or scripts may cover multiple test scenarios.

· Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (see the sketch after this list).

· Test data is captured and base-lined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.

· Some output data is also base-lined for future comparison. Base-lined data is used to support future application maintenance via regression testing.

· A pre-test meeting is held to assess the readiness of the application, the test environment and the data to be tested. A test readiness document is created to indicate the status of the entrance criteria for the release.
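
To make this concrete, here is a minimal sketch of one such test procedure written as an automated script, using Python's built-in unittest module. The requirement ID (REQ-014), the function under test and the data values are hypothetical examples, standing in for whatever the project's requirements and base-lined test data actually define.

# A minimal sketch of a test procedure as an automated script.
# Requirement ID, function name and data values are hypothetical.
import unittest


def calculate_order_total(items):
    # Stand-in for the application code under test.
    return sum(quantity * price for _, quantity, price in items)


class TestOrderTotal(unittest.TestCase):
    """Covers requirement REQ-014 (order total calculation); the ID is kept
    here so the script can be mapped back into the traceability matrix."""

    def setUp(self):
        # Base-lined test data captured before the test run.
        self.line_items = [("widget", 2, 9.99), ("gadget", 1, 24.50)]

    def test_total_matches_baselined_result(self):
        # Step 1: exercise the function under test with the captured data.
        total = calculate_order_total(self.line_items)
        # Step 2: compare the actual result to the base-lined expected result.
        self.assertAlmostEqual(total, 44.48, places=2)


if __name__ == "__main__":
    unittest.main()

Keeping the requirement ID in the script is one simple way to build the traceability matrix and confirm that each test is within scope.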

Inputs for this process:

· Approved Test Strategy Document.

· Test tools, including automated test tools, if applicable.

· Previously developed scripts, if applicable.

· Test documentation problems uncovered as a result of testing.

· A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:

· Approved documents of test scenarios, test cases, test conditions and test data.

· Reports of software design issues, given to software developers for correction.

Q: How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not it uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.
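
A test execution log can be as simple as a CSV file that gains one row per executed procedure. The sketch below assumes such a CSV-based log; the field names, the procedure ID and the defect ID are illustrative only, not part of any standard.

# A minimal sketch of appending one entry to a CSV-based test execution log.
import csv
from datetime import date

LOG_FIELDS = ["run_date", "procedure_id", "tester", "result", "defect_id"]


def log_execution(path, procedure_id, tester, result, defect_id=""):
    """Append one row to the execution log after a test procedure is run."""
    with open(path, "a", newline="") as log_file:
        writer = csv.DictWriter(log_file, fieldnames=LOG_FIELDS)
        writer.writerow({
            "run_date": date.today().isoformat(),
            "procedure_id": procedure_id,
            "tester": tester,
            "result": result,        # e.g. "pass" or "fail"
            "defect_id": defect_id,  # filled in only when a defect was raised
        })


# Example: record that procedure TP-031 failed and defect D-107 was logged.
log_execution("execution_log.csv", "TP-031", "j.doe", "fail", "D-107")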

· The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.

· Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool (see the sketch after this list).

· Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and fixes that introduce no new defects are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager and/or Test Team Lead.

· After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance.

· The test team reviews test document problems identified during testing and updates the documents where appropriate.
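
To make the severity and fix-promotion steps above concrete, the sketch below models them with plain Python data structures. The severity labels, status values and defect ID are assumptions for illustration, since each customer's risk assessment and tracking tool define their own.

# A minimal sketch of recording a defect's severity and promoting a fix
# only after it passes regression testing. Labels and IDs are illustrative.
SEVERITY_LEVELS = ("critical", "major", "minor", "cosmetic")


def record_defect(tracker, defect_id, severity, description):
    """Log a defect with a severity drawn from the agreed pass/fail criteria."""
    if severity not in SEVERITY_LEVELS:
        raise ValueError(f"unknown severity: {severity}")
    tracker[defect_id] = {"severity": severity,
                          "description": description,
                          "status": "open"}


def promote_fix(tracker, defect_id, regression_passed):
    """Move a fix toward the new baseline only if regression testing passed."""
    tracker[defect_id]["status"] = (
        "closed - in new baseline" if regression_passed else "reopened")


tracker = {}
record_defect(tracker, "D-107", "major", "Report total off by one cent")
promote_fix(tracker, "D-107", regression_passed=True)

In practice the tracking structure would be the customer's selected defect-tracking tool rather than an in-memory dictionary.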

Inputs for this process:

· Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.

· Test tools, including automated test tools, if applicable.

· Developed scripts.

· Changes to the design, i.e. Change Request Documents.

· Test data.

· Availability of the test team and project team.

· General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.

· Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.

· Test Readiness Document.

· Document Updates.

Outputs for this process:

· A log and summary of the test results. Usually this is part of the Test Report, which needs to be approved and signed off along with revised testing deliverables.

· Changes to the code, also known as test fixes.

· Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.

· Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.

· Formal record of test incidents, usually part of problem tracking.

· Base-lined package, also known as tested source and object code, ready for migration to the next level.
