Tuesday, January 8, 2008

Automated testing tools - 2

Will automated testing tools make testing easier?

· Possibly yes. For larger projects or ongoing long-term projects they are valuable, but for small projects the time needed to learn and implement them may not be worth it unless personnel are already familiar with the tools.

· A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. Often the recorded script is manually modified and enhanced. If new buttons are added, or some underlying code in the application is changed, etc. the application might then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
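
The 'playback' half of such a tool can be sketched in a few lines. This is a toy illustration, not any particular product: the dict-based 'app', the action names, and the log format are all invented for the example.

```python
# Toy sketch of the 'playback' half of a record/playback tool.
# A recording is a list of scripted UI actions; the log captured when
# the script was first recorded serves as the expected result.

def run_action(app, action, target, value=None):
    """Drive the application under test; 'app' is a plain dict standing
    in for a real GUI driver (hypothetical)."""
    if action == "click":
        app["log"].append(f"clicked {target}")
    elif action == "type":
        app[target] = value
        app["log"].append(f"typed {value!r} into {target}")
    return app

def playback(recording, expected_log):
    """Re-run the recorded actions and diff the new log against the old one."""
    app = {"log": []}
    for step in recording:
        run_action(app, *step)
    mismatches = [(old, new) for old, new in zip(expected_log, app["log"]) if old != new]
    if mismatches or len(expected_log) != len(app["log"]):
        return ("FAIL", mismatches)
    return ("PASS", [])

recording = [("click", "File>Open"), ("type", "filename", "report.txt"), ("click", "OK")]
expected = ["clicked File>Open", "typed 'report.txt' into filename", "clicked OK"]
verdict, diffs = playback(recording, expected)
print(verdict)  # PASS as long as the application behaves as recorded
```

When the application changes, the recorded log no longer matches the fresh one, which is exactly the maintenance burden described above.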

· Another common type of approach for automation of functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases.
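
A minimal keyword-driven driver might look like the following sketch. The table columns (action, target, value, expected), the OrderForm stand-in, and its fields are all hypothetical; a real driver would read a spreadsheet export and drive the actual application.

```python
import csv
import io

# Keyword-driven sketch: test actions/data live in a spreadsheet-like
# table, kept separate from the driver code that interprets them.
TEST_TABLE = """action,target,value,expected
enter_value,quantity,3,
enter_value,unit_price,2.50,
check_total,total,,7.50
"""

class OrderForm:
    """Stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.fields = {}
    def enter_value(self, target, value):
        self.fields[target] = float(value)
    def total(self):
        return self.fields.get("quantity", 0) * self.fields.get("unit_price", 0)

def run_table(app, table_text):
    """The test driver 'reads' the data/action rows and performs them."""
    results = []
    for row in csv.DictReader(io.StringIO(table_text)):
        if row["action"] == "enter_value":
            app.enter_value(row["target"], row["value"])
        elif row["action"] == "check_total":
            ok = abs(app.total() - float(row["expected"])) < 1e-9
            results.append((row["action"], "PASS" if ok else "FAIL"))
    return results

print(run_table(OrderForm(), TEST_TABLE))  # [('check_total', 'PASS')]
```

Adding a new test case then means adding a row to the table, not editing driver code.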

Other automated tools can include:

· Code analyzers - monitor code complexity, adherence to standards, etc.

· Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.

· Memory analyzers - such as bounds-checkers and leak detectors.

· Load/performance test tools - for testing client/server and web applications under various load levels.

· Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, a web site's interactions are secure.

· Other tools - for test case management, documentation management, bug reporting, and configuration management, file and database comparisons, screen captures, security testing, macro recorders, etc.

Test automation is, of course, possible without COTS tools. Many successful automation efforts utilize custom automation software that is targeted for specific projects, specific software applications, or a specific organization's software development environment. In test-driven agile software development environments, automated tests are built into the software during (or preceding) coding of the application.
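
A toy illustration of that test-first idea, using Python's unittest in place of whatever framework a given project would use (the parse_version function and its test cases are invented for the example):

```python
import unittest

# Test-first toy example: the test case pins down the desired behavior
# of parse_version while (or before) it is implemented.

def parse_version(s):
    """Split a dotted version string into a tuple of ints."""
    return tuple(int(part) for part in s.strip().split("."))

class ParseVersionTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))
    def test_surrounding_whitespace(self):
        self.assertEqual(parse_version(" 2.0 \n"), (2, 0))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```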

What's the best way to choose a test automation tool over manual testing?

In manual testing, the test engineer exercises software functionality to determine if the software is behaving in an expected way. This means that the tester must be able to judge what the expected outcome of a test should be, such as expected data outputs, screen messages, changes in the appearance of a User Interface, XML files, database changes, etc. In an automated test, the computer does not have human-like 'judgment' capabilities to determine whether or not a test outcome was correct. This means there must be a mechanism by which the computer can do an automatic comparison between actual and expected results for every automated test scenario and unambiguously make a pass or fail determination. This factor may require a significant change in the entire approach to testing, since in manual testing a human is involved and can:

· Make mental adjustments to expected test results based on variations in the pre-test state of the software system

· Often make on-the-fly adjustments, if needed, to data used in the test

· Make pass/fail judgments about results of each test

· Make quick judgments and adjustments for changes to requirements.

· Make a wide variety of other types of judgments and adjustments as needed.
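
In an automated test, those judgments have to be replaced by an explicit comparison routine. A minimal sketch of such an 'oracle', with invented field names, showing how tolerances and ignorable fields (such as timestamps) can be handled mechanically:

```python
# Sketch of an automated 'oracle': a comparison routine that must reach
# an unambiguous pass/fail verdict without human judgment.

def compare(expected, actual, float_tol=1e-6, ignore=()):
    """Return a list of (field, expected, actual) mismatches."""
    failures = []
    for key, want in expected.items():
        if key in ignore:
            continue  # e.g. timestamps that legitimately vary run to run
        got = actual.get(key)
        if isinstance(want, float):
            ok = got is not None and abs(got - want) <= float_tol
        else:
            ok = got == want
        if not ok:
            failures.append((key, want, got))
    return failures

expected = {"status": "OK", "total": 7.5, "timestamp": "varies"}
actual = {"status": "OK", "total": 7.5000000001, "timestamp": "2008-01-08"}
failures = compare(expected, actual, ignore=("timestamp",))
print("PASS" if not failures else f"FAIL: {failures}")
```

Every such rule (the tolerance, the ignore list) is a judgment that a human tester would otherwise make on the fly, written down once in advance.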

For those new to test automation, it might be a good idea to do some reading or training first. There are a variety of ways to go about doing this; some example approaches are:

· Read through information on the web about test automation such as general information available on some test tool vendor sites or some of the automated testing articles listed in the Softwareqatest.com Other Resources section.

· Obtain some test tool trial versions or low cost or open source test tools and experiment with them

· Attend software testing conferences or training courses related to test automation

As in anything else, proper planning and analysis are critical to success in choosing and utilizing an automated test tool. Choosing a test tool just for the purpose of 'automating testing' is not useful; useful purposes might include: testing more thoroughly, testing in ways that were not previously feasible via manual methods (such as load testing), testing faster, or reducing excessively tedious manual testing. Automated testing rarely enables savings in the cost of testing, although it may result in software lifecycle savings (or increased sales) just as with any other quality-related initiative.

With the proper background and understanding of test automation, the following considerations can be helpful in choosing a test tool (automated testing will not necessarily resolve them; they are only considerations for automation potential):

· Analyze the current non-automated testing situation to determine where testing is not being done or does not appear to be sufficient

· Where is current testing excessively time-consuming?

· Where is current testing excessively tedious?

· What kinds of problems are repeatedly missed with current testing?

· What testing procedures are carried out repeatedly (such as regression testing or security testing)?

· What testing procedures are not being carried out repeatedly but should be?

· What test tracking and management processes can be implemented or made more effective through the use of an automated test tool?

Taking into account the testing needs determined by analysis of these considerations and other appropriate factors, the types of desired test tools can be determined. For each type of test tool (such as functional test tool, load test tool, etc.) the choices can be further narrowed based on the characteristics of the software application. The relevant characteristics will depend, of course, on the situation and the type of test tool and other factors. Such characteristics could include the operating system, GUI components, development languages, web server type, etc. Other factors affecting a choice could include experience level and capabilities of test personnel, advantages/disadvantages in developing a custom automated test tool, tool costs, tool quality and ease of use, usefulness of the tool on other projects, etc.

Once a short list of potential test tools is selected, several can be utilized on a trial basis for a final determination. Any expensive test tool should be thoroughly analyzed during its trial period to ensure that it is appropriate and that its capabilities and limitations are well understood. This may require significant time or training, but the alternative is to take a major risk of a mistaken investment.

Automation testing


Software testing is an important part of the software development process. Manual testing becomes time-consuming as the level of software sophistication increases. Software test automation helps assure the quality of software, so that it meets the test design specification within the target time frame for software release to the market.

There are several different techniques used to accomplish software test automation. The following are the areas investigated in this research paper:

1. Code instrumentation techniques that will assist in the white box testing of the software.

2. Structured design and development process that will support the automatic generation of test cases and test procedures directly from design documents.

3. The use of a test harness with a well-defined environment that can be controlled by software to provide known inputs for the Unit under Test (UUT) and measure the responses.

4. Screen capture techniques for testing Graphical User Interfaces.
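
Item 3 above can be illustrated with a small sketch: the harness applies known inputs to the Unit under Test and checks the measured responses against expected values. The UUT (a toy over-temperature check), the test vectors, and the limit are all invented for the example.

```python
# Test-harness sketch: apply known inputs to the Unit under Test (UUT)
# and measure its responses against expected values.

def uut_over_limit(celsius, limit=100.0):
    """The unit under test: does a reading exceed the limit?"""
    return celsius > limit

# (known input, expected response) pairs supplied by the harness
TEST_VECTORS = [(25.0, False), (100.0, False), (100.1, True), (-40.0, False)]

def run_harness(uut, vectors):
    results = []
    for stimulus, expected in vectors:
        response = uut(stimulus)  # controlled input in, measured response out
        results.append((stimulus, expected, response, response == expected))
    return results

results = run_harness(uut_over_limit, TEST_VECTORS)
print(f"{sum(1 for r in results if r[3])}/{len(results)} vectors passed")
```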

· Test automation enables detailed testing with a significant reduction in test-cycle time

· The efficiency of automated testing, incorporated into the product life cycle, can generate sustainable time and money savings

· Automation enables better and faster testing

· Automated testing increases the significance and accuracy of testing and results in greater test coverage

· Automated tests can be run faster, in a consistent manner, over and over again, with less overhead

· Automation testing saves much of the effort needed for rigorous testing of the system

· Automation testing ensures uniformity in the testing process each time the tests are executed

· Automation testing has its own advantages and disadvantages and involves many challenges; if not planned carefully, it may lead to poor-quality testing

There are many factors which affect automation testing:

1) Number of interfaces – The more interfaces the system has, the more complex the automation testing will be.

2) Types of external interfaces – The types of external interfaces affect automation testing because many interfaces cannot be simulated.

3) Number of releases expected for testing – The number of releases affects automation testing; if only one or two releases are expected, automation will not be practicable.

4) Maturity of the product – A new product cannot be tested completely in an automated environment, because automated testing assumes some stability in the product, which a new product may not have.

Different types of bug-tracking systems


1) Abuky

http://abuky.sunsite.dk/index.html

Description:

Abuky stands for the Aoo BUg tracKing sYstem. Abuky is a system for tracking bugs and aiding developers in fixing them, written in Java with JSP as the web interface.

Requirement:

Linux, Windows, Solaris

2) Anthill Bug Manager

http://anthillbm.sourceforge.net/

Description:

Anthill is a tool that aids code development by keeping track of bugs in a multi-project, multi-developer environment. It accomplishes this with a clean, simple, and fast interface that contains all the essential features but avoids the enormous complexity associated with most other projects of this type.

Requirement:

OS Independent

3) BugRat

http://www.gjt.org/pkg/bugrat/

Description:

BugRat is free Java software that provides a sophisticated, flexible bug reporting and tracking system.

Requirement:

TBC

4) Bugs Online

http://bugsonline.sourceforge.net/

Description:

Bugs Online was originally developed in 1997 to serve as the primary bug and issue tracking system to be utilized during a large development oriented project. The Bugs Online system is a very flexible and capable system for bug and issue tracking.

Requirement:

Windows NT 4.0 SP3+, MS IIS 3 w/ ASP

5) Bugtrack

http://sourceforge.net/projects/btrack

Description:

Web based bug tracking system written in Perl/DBI. Supports multiple users, projects, components, versions and email notification.

Requirement:

Linux, Solaris, Windows

6) Bugzilla

http://www.mozilla.org/projects/bugzilla/

Description:

Bugzilla has matured immensely, and now boasts many advanced features. These include: integrated, product-based granular security schema, inter-bug dependencies and dependency graphing, advanced reporting capabilities, a robust, stable RDBMS back-end, extensive configurability, a very well-understood and well-thought-out natural bug resolution protocol, email, XML, console, and HTTP APIs, available integration with automated software configuration management systems.

Requirement:

TBC

7) CodeTrack

http://kennwhite.sourceforge.net/codetrack/

Description:

Bug database with a friendly web front end, aimed at medium and small development shops. Particularly suited for intranet and extranet environments, CodeTrack includes built-in strong authentication and allows custom access control to individual projects. No database is required, as bug data and developer notes are stored using simple XML text files.

Requirement:

Apache and PHP

8) Debian bug tracking software

http://www.chiark.greenend.org.uk/~ian/debbugs/

Description:

The Debian bug tracking system is a set of scripts which maintain a database of problem reports.

Requirement:

UNIX

9) GNATS

http://www.gnu.org/software/gnats/

Description:

GNATS is a portable incident/bug-report/help-request tracking system which runs on UNIX-like operating systems. It easily handles thousands of problem reports, has been in wide use since the early '90s, and can do most of its operations over e-mail. Several front-end interfaces exist, including command-line, Emacs, and Tcl/Tk interfaces. There are also a number of Web (CGI) interfaces written in scripting languages like Perl and Python.

Requirement:

OS Independent

10) Helis

http://www.helis.org/

Description:

Helis includes the main features of most bug tracking systems.

Requirement:

Linux web server (php 4/mysql + cgi)

11) Issue Tracker Product

http://www.issuetrackerproduct.com/

Description:

A straightforward and user-friendly web application built on top of the Zope application server.

Requirement:

OS Independent, Zope

12) JIRA

http://www.atlassian.com/software/jira/

Description:

JIRA is an issue tracking and project management application developed to make the bug-tracking process easier. JIRA has been designed with a focus on task achievement, is instantly usable, and is flexible to work with. It is free to academic and open source projects, and commercial licenses come with the complete source code.

Requirement:

JDK

13) JitterBug

http://samba.anu.edu.au/cgi-bin/jitterbug

Description:

JitterBug is a web based bug tracking system. JitterBug operates by receiving bug reports via email or a web form. Authenticated users can then reply to the message, move it between different categories or add notes to it. In some ways JitterBug is like a communal web based email system.

Requirement:

TBC

14) Mantis

http://mantisbt.sourceforge.net/

Description:

Mantis is a php/MySQL/web based bugtracking system.

Requirement:

Windows, MacOS, OS/2, and a variety of UNIX operating systems; any web browser should be able to function as a client.

15) Open Track

http://www.tumblin.com/aws/opentrack.html

Description:

Open Track is a problem tracking system that is table driven and easily configurable/customizable for a variety of applications. Project defect tracking, help desk tracking, and requirements gathering can be easily handled by Open Track.

Requirement:

TBC

16) PEST

http://sourceforge.net/projects/pest/

Description:

PEST is a bug tracking system written especially for a web environment. It supports good testing and bug tracking processes, as well as notification.

Requirement:

TBC

17) Php Bug Tracker

http://phpbt.sourceforge.net/

Description:

Php Bug Tracker is an attempt to copy the functionality of Bugzilla while providing a code base that is independent of the database and presentation layers.

Requirement:

Web-server with PHP 4.1.0+

18) Request Tracker

http://www.bestpractical.com/rt/index.html

Description:

Request Tracker (RT) is an industrial-grade tracking system. It lets a group of people intelligently and efficiently manage requests submitted by a community of users. RT is used by systems administrators, customer support staffs, developers and even marketing departments at over a thousand sites around the world.

Requirement:

Written in object-oriented Perl, RT is a high-level, portable, platform independent system.

19) Roundup Issue Tracker

http://roundup.sourceforge.net/

Description:

Roundup is a simple-to-use and -install issue-tracking system with command-line, web and e-mail interfaces. It is based on the winning design from Ka-Ping Yee in the Software Carpentry "Track" design competition.

Requirement:

TBC

20) Scarab

http://scarab.tigris.org/

Description:

The goal of the Scarab project is to build an issue/defect tracking system with a full feature set similar to those found in other issue/defect tracking systems: data entry, queries, reports, notifications to interested parties, collaborative accumulation of comments, and dependency tracking. In addition to the standard features, Scarab has fully customizable and unlimited numbers of Modules (for various projects), Issue types (Defect, Enhancement, etc.), Attributes (Operating System, Status, Priority, etc.), and Attribute options (P1, P2, P3), all of which can be defined on a per-Module basis so that each module is configured for your specific tracking requirements.

Requirement:

TBC

21) Stabilizer

http://stabilizer.sf.net

Description:

The Stabilizer bug tracking system aims to quickly stabilize buggy GUI applications so that people can get real work done with them. Users collaboratively and quickly stabilize a buggy GUI application simply by using the application normally and reporting any bugs that they encounter. As soon as a few people report the same bug, warnings will be issued to all users whenever they are about to trigger that bug and they will be given the opportunity to abort the input event -- thus avoiding the bug altogether and keeping the application stable.

Requirement:

All POSIX (Linux/BSD/UNIX-like OSes), Linux

22) Trac

http://projects.edgewall.com/trac/

Description:

Trac is an enhanced issue tracking system for software development projects. Trac allows markup issue descriptions and commit messages, creating links and seamless references between bugs, tasks, change sets, files and pages. A timeline shows all project events in order, making getting an overview of the project and tracking progress very easy.

Requirement:

Python, CGI-capable web server

23) TrackIt

http://trackit.sourceforge.net/

Description:

TrackIt is a Web-based project tracking tool that incorporates defect tracking functionality. It is designed from the ground up to provide maximum flexibility, customization, and most importantly, usefulness to the developer. It has built-in support for various Extreme Programming constructs, as well as full CVS and Subversion integration. It also supports simple listings via HQL and advanced reporting via SQL.

Requirement:

JRE 1.5

24) WREQ

http://www.math.duke.edu/~yu/wreq/

Description:

Wreq is designed to be a distributed request/problem tracking system with a built-in knowledge database, to help systems personnel stay on top of requests and to promote knowledge sharing among all local support groups.

Requirement:

To use wreq, you need Perl version 5 with GDBM support on the web server.

Load testing tools


SilkPerformer

http://www.segue.com/products/load-stress-performance-testing/silkperformer.asp

SilkPerformer is the industry's most powerful - yet easiest to use - automated load and performance testing system for maximizing the performance, scalability and reliability of enterprise applications. With SilkPerformer, you can accurately predict the "breaking points" in your application and its infrastructure before it is deployed, regardless of its size or complexity. SilkPerformer has the power to simulate thousands of simultaneous users working with multiple computing environments and interacting with various application environments such as Web, client/server, or ERP/CRM systems - all with a single script and one or more test machines. Yet its visual approach to scripting and root-cause analysis makes it amazingly simple and efficient to use. So you can create realistic load tests easily, find and fix bottlenecks quickly, and deliver high-performance applications faster than ever.

Mercury LoadRunner

http://www.mercury.com/us/products/performance-center/loadrunner/

10-day free trial: http://www.astratryandbuy.com/cgi-bin/portal/download/index.jsp

Mercury LoadRunner™ is the industry-standard performance testing product for predicting system behavior and performance. Using limited hardware resources, LoadRunner emulates hundreds or thousands of concurrent users to put the application through the rigors of real-life user loads. Your IT group can stress an application from end-to-end and measure the response times of key business processes. Simultaneously, LoadRunner collects system and component-level performance information through a comprehensive array of system monitors and diagnostics modules. These metrics are combined into a sophisticated analysis module that allows teams to drill down to isolate bottlenecks within the architecture.

LoadRunner supports the widest range of enterprise environments and is the only performance testing product to be customized and certified to work with ERP/CRM applications from PeopleSoft, Oracle, SAP, and Siebel.

With LoadRunner, you can:

  • Obtain an accurate picture of end-to-end system performance.
  • Verify that new or upgraded applications meet specified performance requirements.
  • Identify and eliminate performance bottlenecks during the development lifecycle.

e-Load

http://www.empirix.com/ecd/ecforms/process/ets-process.asp

e-Load is the fastest and most accurate way to perform load testing, scalability testing and stress testing of your enterprise Web applications. Use e-Load to help tune the performance and cost effectiveness of your Web infrastructure.

e-Load is a robust Web load testing solution that enables you to easily and accurately test the scalability and performance of your Web applications. Companies use this automated software load testing solution to predict how well their Web applications will handle user load. It can be used during application development and post-deployment to conduct stress testing.

OpenSTA

http://www.opensta.org/

OpenSTA is a distributed software testing architecture designed around CORBA; it was originally developed to be commercial software by CYRANO. The current toolset has the capability of performing scripted HTTP and HTTPS heavy load tests with performance measurements from Win32 platforms. However, the architectural design means it could be capable of much more.

The applications that make up the current OpenSTA toolset were designed to be used by performance testing consultants or other technically proficient individuals. This means testing is performed using the record and replay metaphor common in most other similar commercially available toolsets. Recordings are made in the tester's own browser producing simple scripts that can be edited and controlled with a special high level scripting language. These scripted sessions can then be played back to simulate many users by a high performance load generation engine. Using this methodology a user can generate realistic heavy loads simulating the activity of hundreds to thousands of virtual users.
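
The virtual-user idea common to these load tools can be sketched in a few dozen lines. This is only an illustration of the concept, not OpenSTA itself: each thread plays one scripted 'user', and a throwaway local HTTP server stands in for the site under test so the example is self-contained.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Concept sketch of 'virtual users': each thread replays one scripted
# HTTP request several times while response times are collected.

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

timings = []
lock = threading.Lock()

def virtual_user(requests_per_user=5):
    for _ in range(requests_per_user):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        with lock:
            timings.append(time.perf_counter() - start)

users = [threading.Thread(target=virtual_user) for _ in range(10)]
for u in users:
    u.start()
for u in users:
    u.join()
server.shutdown()

print(f"{len(timings)} requests, avg {sum(timings) / len(timings) * 1000:.1f} ms")
```

Real tools add the pieces this sketch omits: recorded scripts, ramp-up schedules, think times, and server-side performance measurement.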

Grinder

http://grinder.sourceforge.net/

The Grinder is a Java™ load-testing framework. It is freely available under a BSD-style open-source license.

The Grinder makes it easy to orchestrate the activities of a test script in many processes across many machines, using a graphical console application. Test scripts make use of client code embodied in Java plug-ins. Most users of The Grinder do not write plug-ins themselves; instead they use one of the supplied plug-ins. The Grinder comes with a mature plug-in for testing HTTP services, as well as a tool which allows HTTP scripts to be automatically recorded.

The Grinder was originally developed for the book Professional Java 2 Enterprise Edition with BEA WebLogic Server by Paco Gómez and Peter Zadrozny. Philip Aston took ownership of the code and reworked it to create The Grinder 2. Philip continues to enhance and maintain The Grinder and welcomes all contributions. More recently, Peter, Philip, and Ted Osborne published the book J2EE Performance Testing, which makes extensive use of The Grinder.

The next major version of The Grinder, The Grinder 3, is currently available as a beta-quality release. The Grinder 3 uses the powerful scripting language Jython and allows any Java code to be tested without the need to write a plug-in.

The latest news, downloads, and mailing list archives can be found on SourceForge.net.

WebLOAD Analyzer

http://www.radview.com/products/WebLOAD_Analyzer.asp

Evaluation: http://www.radview.com/eval/index.asp

WebLOAD Analyzer is a powerful solution for managing and ensuring optimal performance of your distributed application-server environment by identifying the root cause of performance issues highlighted during load and stress testing. WebLOAD Analyzer monitors and collects detailed information on the application infrastructure and correlates that information with user transaction activity and traffic bursts. This combination of internal data with external activity provides detailed information on areas that are operating outside of expected and acceptable thresholds, so users can quickly isolate and resolve their performance issues.

Key features include:

  • Time-synchronized and automated correlated view of application performance across all tiers: server (Web, database, and application), network, and system
  • Integration of the "external" end-user perspective and the "internal" application-infrastructure view
  • Portability/Accessibility via a browser based UI
  • Drill-down capabilities to the Servlets level
  • Correlation and Analysis Engines for establishing dynamic thresholds
  • Automatic base lining of application behavior

ANTS Load:

http://www.red-gate.com/dotnet/load_testing.htm

Evaluation: http://www.red-gate.com/dynamic/downloadantsload.aspx

ANTS Load™ is a tool for load testing websites and web services. ANTS Load works particularly well for applications written using Microsoft technologies such as ASP.NET and ASP.

ANTS Load is used to predict a web application's behavior and performance under the stress of a multiple user load. It does this by simulating multiple clients accessing a web application at the same time, and measuring what happens.

Features:

Profiling web applications so you can identify slow code in your .NET websites and web services. ANTS Profiler will identify slow-loading pages and, more importantly, tell you why they are slow.

Profiling Windows Forms applications so you can optimize rich, client-side applications.

Simplicity has been one of the top design goals of ANTS Profiler. From the time you download it, you will get useful results about your .NET application in under 15 minutes.

Identify slow methods in your code and find out which third-party libraries are holding you back.

Drill down to identify individual lines of code that are slow, or being hit often.

Profile any .NET language. It doesn't matter if you're programming in C#, VB.NET, managed C++ or in COBOL. If you're programming for the .NET framework, then ANTS Profiler will help you.

Compuware's QALoad

http://www.compuware.com/products/qacenter/qaload.htm

Using QALoad, you can emulate the load generated by hundreds or thousands of users on your application—without requiring the involvement of the end users or their equipment. You can easily repeat load tests with varied system configurations to achieve optimum performance.

From the Conductor module in QALoad, you set up a load-testing scenario to control the conditions for the test, create the virtual users you need to simulate the load, initiate and monitor the test, and report the results. A Player module simulates the roles of users performing multiple functions, using testing scripts that represent your application.

Testing Tools for different types of testing


  1. Regression Testing
    • Test Partner by Compuware
    • Rational Robot by Rational

  2. Unit Testing

JUnit

  3. JMeter

Apache JMeter is a 100% pure Java desktop application designed to load-test functional behavior and measure performance. It was originally designed for testing Web applications but has since expanded to other test functions.

  4. Http-Unit

  5. Bugzilla

  6. Forecast

Forecast by Facilita is mainly used for performance testing, but functionally it is as strong as the other performance tools, and it usually costs at least 50% less. You can find out more about them at www.facilita.com.

  7. WAST (Web Application Stress Tool)

WAST from Microsoft is a free performance test tool that is very good, considering the cost. It works best on ASP and other Microsoft-centric applications, and quite often it can do the job without having to buy additional performance test tools. Have a look at http://homer.rte.microsoft.com/

  8. E-test suite

For ease of functional and performance testing, the e-test suite from Empirix looks very good; however, I have only used it for less than 30 days as an evaluation and so did not do anything complex. For more details, have a look at www.empirix.com

  9. OpenSTA

This is a very good performance test tool, and once again this one is free. It is the most mature free tool I’ve seen, and in a few years it could rival some of the leading commercial products. It is governed by the GNU license, so you can modify the source code as you see fit. For more info see www.opensta.org

  10. DbUnit

DbUnit is a JUnit extension (also usable from Ant) targeted for database-driven projects that, among other things, puts your database into a known state between test runs. This is an excellent way to avoid the myriad of problems that can occur when one test case corrupts the database and causes subsequent tests to fail or exacerbate the damage.

DbUnit has the ability to export and import your database (or specified tables) content to and from XML datasets. This is targeted for functional testing, so this is not the perfect tool to backup a huge production database.

DbUnit also provides assertion facilities to verify that your database content matches expected values.
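
DbUnit itself is Java, but the core pattern it supports, resetting the database to a known state before every test and then asserting on its contents, can be sketched with an in-memory SQLite database (the bugs table and seed rows are invented for the example):

```python
import sqlite3
import unittest

# DbUnit-style pattern: put the database into a known state before
# every test, then assert on the contents.

SEED_ROWS = [(1, "open"), (2, "closed")]  # the known 'dataset'

class BugTableTest(unittest.TestCase):
    def setUp(self):
        # A fresh in-memory database per test: no test can corrupt the next.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE bugs (id INTEGER, status TEXT)")
        self.db.executemany("INSERT INTO bugs VALUES (?, ?)", SEED_ROWS)

    def test_close_bug(self):
        self.db.execute("UPDATE bugs SET status = 'closed' WHERE id = 1")
        open_count = self.db.execute(
            "SELECT COUNT(*) FROM bugs WHERE status = 'open'").fetchone()[0]
        self.assertEqual(open_count, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(BugTableTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures))
```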

WEB TESTING TOOLS - 3

· MS APPLICATION CENTER TEST

Application Center Test is designed to stress test Web servers and analyze performance and scalability problems with Web applications, including Active Server Pages (ASP) and the components they use. Application Center Test simulates a large group of users by opening multiple connections to the server and rapidly sending HTTP requests. Application Center Test supports several different authentication schemes and the SSL protocol, making it ideal for testing personalized and secure sites. Although long-duration and high-load stress testing is Application Center Test's main purpose, the programmable dynamic tests will also be useful for functional testing. Application Center Test is compatible with all Web servers and Web applications that adhere to the HTTP protocol.

During a test run, performance counters on the test client and all Web servers should be monitored. Application Center Test will automatically monitor HTTP performance statistics during the test run, but performance counters must be explicitly configured before a test run.

Performance counter data is used to determine when a test client or Web server has reached its maximum CPU use. In cases where the performance bottleneck for the Web application is not the server CPU, performance counters will be the easiest way to determine where the bottleneck is occurring.

IMPORTANT COUNTERS FOR WEB TESTING

Object | Performance Counter | Indicates
Memory | Available Bytes | Amount of memory available on the test client.
Active Server Pages | Requests Queued | This should remain at 0; if it exceeds the IIS queue length, "Server too busy" errors result.
Network Interface | Bytes Total/sec | Comparing this value against the total available bandwidth should give a clear indication of a potential network bottleneck. As a general rule, try to keep bytes/sec under 50% of the total available bandwidth.
Processor | % Processor Time | This is the best counter for viewing processor saturation. It shows the amount of time spent processing threads by all CPUs. A number consistently above 90% on one or more processors indicates that the test is too intense for the hardware. Add the 0 through x instances of this counter for multi-processor servers.
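
The 'consistently above 90%' rule for % Processor Time can be checked mechanically once counter samples are collected. A small sketch with made-up sample values and an assumed run length of three consecutive samples:

```python
# Mechanical check of the '% Processor Time consistently above 90%'
# saturation rule. Sample values and run length are illustrative.

def is_saturated(samples, threshold=90.0, run_length=3):
    """True if `run_length` consecutive samples exceed the threshold."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= run_length:
            return True
    return False

cpu_samples = [72.0, 91.5, 93.0, 95.2, 88.0]  # hypothetical counter readings
print("saturated" if is_saturated(cpu_samples) else "ok")  # saturated
```

The same shape of check applies to the other thresholds above (bytes/sec under 50% of bandwidth, Requests Queued at 0), with the comparison direction adjusted.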

If the Web application uses Microsoft SQL Server or relies on any other application to generate the response, then the performance counters for that program should also be monitored.

Object | Performance Counter | Indicates
SQL Server: General Statistics | Logins/sec | The count of logins to SQL Server per second.
SQL Server: Cache Manager | Cache Hit Ratio (all instances) | Shows the rate at which data is found in the cache. A number consistently less than 85% indicates a memory problem.
SQL Server: General Statistics | User Connections | Shows the number of active SQL users. Compare this number to the Active Server Pages: Requests/sec counter to get an idea of how much the scripts are working the SQL Server. A large difference may indicate that the test script is not a valid stress of SQL Server.
SQL Server: Databases | Transactions/sec | The total number of transactions that have been started.
SQL Server: Locks | Lock Waits/sec | Shows the number of lock requests per second that force another process to wait until the current process is complete. If consistently greater than 0, this indicates transaction problems.