Tuesday, February 20, 2007

Sample Test Plan

1 OVERVIEW
This document describes the Test Strategy and Plan for conducting the Performance and Volumetric Tests. These tests will verify the performance of the OSMIS application against the standards specified in the contract documents. During testing, the OSMIS application will be tested for consistent performance against expected data volumes and user load by simulating the load conditions using the Mercury LoadRunner testing tool. Testing will be conducted in the production environment with proper test setup, as required by the test procedures. The Performance and Volumetric Test Team will receive the necessary training before conducting the tests.

This document was developed in the following sequence:
· Develop a test strategy
· Develop a detailed test plan
· Explain the test process
· Develop a test procedure
· Define the composition of the test team
· Develop a test schedule
· Document test results
· Implement corrective measures if required
· Repeat tests after corrections

The Performance and Volumetric testing will be conducted in several iterations, covering the following scenarios:
· Post system testing (pre-production roll-out)
· Post data conversion (pre-production roll-out)
· Post production (within a week of the first phase of roll-out)
· Post production (two months after the first roll-out)
· Post production (after completion of all phases of roll-out)
1.1 Abbreviations

| Abbreviation | Description |
|---|---|
| AWI | Agency for Workforce Innovation |
| Gulf | Gulf Computers, Inc. |
| OSMIS | One Stop Management Information System |
| OSS | One-Stop Staff |
| RFP | Request for Proposal |
| STO | State Technology Office |
| WFI | Workforce Florida, Inc. |


2 TEST STRATEGY
Performance and Volumetric Testing simulates Web browser sessions against given Web sites and measures the results. Three types of performance testing will be conducted for OSMIS:
· "Volumetric Testing" or "Load Testing" - measures the ability of OSMIS to handle a peak traffic load of 2,000 concurrent users by simulating a series of hits
· "Stress Testing" - tests the system's capacity to sustain this load over an extended period of time in a simulated real-time environment
· "Capacity Testing" - determines the maximum number of concurrent users OSMIS can manage

Performance and Volumetric testing will be conducted for 3,000 concurrent users of the OSMIS application belonging to various categories, such as:
· Case Managers
· Employers
· Customers
· System Administrators
· Universal users

The system is expected to provide an average response time of less than 2 seconds per transaction within the SRC environment with 3,000 concurrent users.
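The 2-second criterion can be expressed as a simple pass/fail check over measured transaction times. A minimal sketch (the sample values below are illustrative placeholders, not measured OSMIS output):

```python
# Sketch: evaluate the contractual response-time criterion over a set of
# measured per-transaction times (values here are illustrative only).

def meets_response_time_goal(times_sec, max_avg_sec=2.0):
    """Return True if the average transaction time is under the goal."""
    return sum(times_sec) / len(times_sec) < max_avg_sec

sample_times = [1.2, 1.8, 2.4, 1.1, 1.6]  # seconds per transaction
print(meets_response_time_goal(sample_times))  # average is 1.62 s -> True
```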

The simulated user load of 3000 concurrent (virtual) users should be as realistic as possible to make the performance and volumetric testing meaningful. In order to achieve this objective, Gulf will analyze and understand the potential transactions and the prospective users of the system.

3 TEST PLAN
3.1 Test Environment
The Performance and Volumetric Testing will be conducted in the OSMIS Production Environment. Mercury Testing Tools typically use three major components to execute a test:
· A control console, ‘LoadRunner Controller’ which organizes, drives and manages the load
· Virtual users, which are processes that imitate the real user performing a business process on a client application
· Load Generator, which will simulate the required number of virtual users

Using these components, LoadRunner will:
· Simultaneously run numerous virtual users on a single load-generating machine and automatically measure transaction response times
· Easily repeat load scenarios to validate design and performance changes
· Record the performance parameters and log the results of the test

3.2 General Testing Process
Following is a systematic overview of the automated testing process:

3.2.1 System Analysis
System Analysis is critical to interpreting the OSMIS user’s Performance and Volumetric Testing needs and is used to determine whether the system will scale and perform to the user’s expectations.

Load Testers essentially translate existing requirements of the user into load testing objectives. A thorough evaluation of the requirements and needs of a system, prior to load testing, will provide more realistic test conditions.

1. Gulf will identify all key performance goals and objectives before executing any testing strategies. Examples include:
· Identifying which processes and / or transactions to test
· Identifying which components of a system architecture to use in the test
· Identifying the number of concurrent connections to test with
· Identifying the number of hits per second (and / or wait time between transactions) to initiate against the Web site

2. Gulf will then define the input data used for testing. The data will be created dynamically. Random browsing may also be used to obtain the data. Emulating data input can avoid potential problems with inaccurate load test results.
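Dynamic input data of the kind described above can be sketched as follows. The field names and value pools here are hypothetical, not taken from the OSMIS data model; the point is that each virtual user gets distinct data, avoiding the cached-response distortions that identical inputs can cause:

```python
import random

def make_test_user(seed=None):
    """Generate one randomized registration record for a virtual user.
    Field names and value pools are illustrative only."""
    rng = random.Random(seed)
    return {
        "user_id": rng.randrange(100000, 999999),
        "category": rng.choice(["Case Manager", "Employer", "Customer",
                                "System Administrator", "Universal"]),
        "region": rng.choice(["North", "Central", "South"]),
    }

# Seeding per user makes the data set reproducible across test runs.
users = [make_test_user(seed=i) for i in range(3)]
print(len(users))  # 3
```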

3. Gulf will determine the appropriate strategy for testing the application. There are three strategy models to choose from:
· Load Testing - used to test an application against a requested number of users. The objective is to determine whether the site can sustain this requested number of users with acceptable response times
· Stress Testing - is load testing over extended periods of time to validate an application’s stability and reliability
· Capacity Testing - is used to determine the maximum number of concurrent users an application can manage

OSMIS Performance and Volumetric Testing will include all three strategy models.

4. Although not mandated by the RFP, Gulf will acquire a good understanding of the following components of the system architecture, which will help the State networking team diagnose any potential issues:
· Define the types of routers used in the network setup
· Determine whether multiple servers are being used
· Establish whether load balancers are used as part of the IP networks to manage the servers
· Determine which servers are configured into the system (Web, application, database)

5. Gulf will determine which resources are available to run the virtual users. This requires deciding whether there is a sufficient number of load generators or test machines to run the appropriate number of virtual users. This also requires determining whether the testing tool has multithreading capabilities and can maximize the number of virtual users being run. Ultimately, the goal is to minimize system resource consumption while maximizing the virtual user count.

3.2.2 Virtual User Scripts
A script recorder is used to encapsulate business processes into test scripts, referred to as virtual user scripts or virtual users. A virtual user emulates the real user by driving the real application as a client. It is necessary to identify and record the various business processes from start to finish. Defining these transactions will assist in the breakdown of actions and the time it takes to measure the performance of a business process.
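In LoadRunner, a recorded business process is bracketed by transaction markers (lr_start_transaction / lr_end_transaction) so each step's elapsed time can be measured. The idea can be sketched outside the tool with a small timing wrapper; this class is a hypothetical stand-in, not LoadRunner's API:

```python
import time

class TransactionTimer:
    """Minimal stand-in for transaction markers in a virtual-user script."""
    def __init__(self):
        self.results = {}          # transaction name -> elapsed seconds

    def measure(self, name, action):
        start = time.perf_counter()
        action()                   # the recorded business step
        self.results[name] = time.perf_counter() - start

timer = TransactionTimer()
timer.measure("login", lambda: time.sleep(0.01))       # placeholder step
timer.measure("job_search", lambda: time.sleep(0.02))  # placeholder step
print(sorted(timer.results))  # ['job_search', 'login']
```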

3.2.3 Test Setting
Run-time settings define the way that the script runs in order to accurately emulate real users. Settings can configure think time, connection speed and error handling. Think times can vary in accordance with different user actions and the user’s level of experience with web technology.

System response times can also vary because they depend on connection speed, and users connect to the web system at different speeds (e.g., modem, LAN, WAN). This feature emulates dial-up connections over PPP at varying modem speeds (e.g., 28.8 Kbps, 56 Kbps, etc.) and is useful for measuring application response times based on connection speed. However, in the case of OSMIS, Performance and Volumetric Testing will be conducted in the lab environment (within the SRC) at the load and response times specified in the contract.

Error handling is another setting that requires configuration. Errors arise throughout the course of a scenario and can impede the test execution. Gulf will configure virtual users to handle these errors so that the tests can run uninterrupted.
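The run-time behavior this section describes — randomized think time between steps plus continue-on-error handling — can be sketched as follows. The step list and timing bounds are hypothetical, chosen only to illustrate the two settings:

```python
import random

def run_virtual_user(steps, rng, min_think=0.5, max_think=2.0):
    """Execute steps in order; log errors instead of aborting, and return
    (completed, failed, total_think_time). A think time is sampled per step."""
    completed, failed, think_total = 0, 0, 0.0
    for step in steps:
        try:
            step()
            completed += 1
        except Exception:
            failed += 1            # continue-on-error: record and move on
        think_total += rng.uniform(min_think, max_think)  # simulated pause
    return completed, failed, think_total

rng = random.Random(42)
steps = [lambda: None, lambda: 1 / 0, lambda: None]  # second step fails
print(run_virtual_user(steps, rng)[:2])  # (2, 1)
```

The think time is only accumulated here rather than slept, so the sketch runs instantly; a real virtual user would pause for each sampled interval.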

3.2.4 Test Scenarios
The test scenarios contain information about the groups of virtual users that will run the scripts and the load machines that the groups are running on.

In order to run a successful scenario, Gulf will:
· Define individual groups based on common user transactions
· Define and distribute the total number of virtual users. A varying number of virtual users will be assigned to individual business processes to emulate user groups performing multiple transactions
· Determine which load generating machines the virtual users will run on. Load generator machines will be added to the client side of the system architecture to run additional virtual users as needed
· Specify how the scenario will run. Virtual user groups will run in staggered and/or parallel formation as appropriate. Staggering the virtual users allows testers to examine a gradual increase of the user load to a peak.
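A staggered schedule of the kind described can be computed up front. This sketch assigns each batch of virtual users a start offset so load ramps gradually to the peak; the batch size and interval are illustrative, not values from the plan:

```python
def ramp_schedule(total_users, batch_size, interval_sec):
    """Return (start_offset_seconds, users_running) pairs for a staggered
    ramp: batch_size users join every interval_sec until total is reached."""
    schedule = []
    started = 0
    offset = 0
    while started < total_users:
        batch = min(batch_size, total_users - started)
        started += batch
        schedule.append((offset, started))
        offset += interval_sec
    return schedule

# Ramp 3,000 virtual users in batches of 500 every 60 seconds.
plan = ramp_schedule(3000, 500, 60)
print(plan[0], plan[-1])  # (0, 500) (300, 3000)
```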

3.2.5 Real-Time Monitoring
Real-time monitoring will allow Gulf testers to view the application’s performance at any time during the test. Every component of the system will be monitored:
· Clients
· Web server
· Application server
· Database
· All server hardware

Real-time monitoring will allow for early detection of performance bottlenecks during test execution. Gulf testers will have the ability to view the performance of every tier, server, and component of the system during testing. As a result, Gulf will identify performance bottlenecks during load testing. Gulf can then accelerate the test process and achieve a more stable application.

3.2.6 Analyzing Results
The purpose of collecting and processing the data is to identify and resolve performance bottlenecks. The results will be analyzed from the three perspectives of load, stress and capacity testing. The analysis will yield a series of graphs and reports that help summarize and present the end-to-end test results. After changes are made, Gulf will rerun the test scenarios to verify the adjustments.

The performance-monitoring feature offers an accurate method for pinpointing bottlenecks while running the scripts. To fix these problems, testers can follow several steps:
1. Specialists (e.g., a network engineer, DBA, or consultant) make the necessary adjustments to the system
2. Testers rerun the scripts to verify that the changes have taken effect
3. A comparison of before-and-after results enables the tester to measure how much the system has improved

3.2.6.1 LoadRunner Analysis Graphs
The graphs that LoadRunner Analysis provides are as follows:
· Running virtual users: Displays running virtual users during each second of a scenario
· Rendezvous: Indicates when and how many virtual users were released at each rendezvous point
· Transaction/sec (passed): Displays the number of completed, successful transactions performed per second
· Transaction/sec (failed): Displays the number of incomplete, failed transactions performed per second

3.2.6.2 LoadRunner Performance Graphs
LoadRunner provides a variety of performance graphs:
· Percentile: Analyzes percentage of transactions that were performed within a given time range
· Performance under load: Indicates transaction times relative to the number of virtual users running at any given point during the scenario
· Transaction performance: Displays the average time taken to perform transactions during each second of the scenario run
· Transaction performance summary: Displays the minimum, maximum and average performance times for all the transactions in the scenario
· Transaction performance by virtual user: Displays the time taken by an individual virtual user to perform transactions during the scenario
· Transaction distribution: Displays the distribution of the time taken to perform a transaction
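The percentile graph listed above reports what fraction of transactions finished within a given time; the underlying calculation is simple. A sketch with illustrative sample times:

```python
def pct_within(times_sec, limit_sec):
    """Percentage of transactions completed within limit_sec."""
    return 100.0 * sum(1 for t in times_sec if t <= limit_sec) / len(times_sec)

# Illustrative transaction times in seconds (not measured OSMIS data).
times = [0.8, 1.1, 1.9, 2.4, 3.0, 1.5, 0.9, 2.1]
print(pct_within(times, 2.0))  # 62.5 -- 5 of 8 transactions within 2 s
```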

3.2.6.3 LoadRunner Web Graphs
LoadRunner offers two types of Web graphs:
· Connections per second: Shows the number of connections made to the Web server by virtual users during each second of the scenario run
· Throughput: Shows the amount of throughput on the server during each second of the scenario run

3.3 OSMIS Testing Process
Following is a systematic overview of the precise OSMIS testing process:

3.3.1 System Analysis
· Identify all testing conditions:
  - System architecture components
  - Processes being tested
  - Total number of virtual users to test
· Convert goals and requirements into successful, automated test scripts

3.3.2 Analyzing Transactions
The following is an analysis of possible transactions in OSMIS. OSMIS has the following six functional modules:
· Registration
· Job Services
· Case Management
· Case Administration
· Financial
· System Administration

The analysis below examines the transactions in each functional module, based on the volumes and complexity of the business processes, to estimate the number of concurrent users expected during the OSMIS application's peak run-time hours.

3.3.2.1 ONE STOP ACTIVITIES
Registration

| User Type | Number of Business Processes | Complexity | Average # of Transactions/Day (Assumption) | System Usage Time = # of Transactions × Estimated Duration |
|---|---|---|---|---|
| One Stop Staff | 15 | 14 Complex, 1 Medium | 2,000 transactions (complex) | 2,000 × 30 = 60,000 min |
| Customer | 2 | 2 Complex | 1,000 transactions | 1,000 × 30 = 30,000 min |
| Employer | 2 | 2 Medium | 100 transactions | 100 × 20 = 2,000 min |


Job Services

| User Type | Number of Business Processes | Complexity | Average # of Transactions/Day (Assumption) | System Usage Time = # of Transactions × Estimated Duration |
|---|---|---|---|---|
| One Stop Staff | 18 | 2 Complex, 13 Medium, 3 Simple | 2,500 transactions (medium) | 2,500 × 20 = 50,000 min |
| Customer | 8 | 1 Complex, 6 Medium, 1 Simple | 2,000 transactions (medium) | 2,000 × 20 = 40,000 min |
| Employer | 6 | 6 Medium | 250 transactions | 250 × 20 = 5,000 min |


Case Management

| User Type | Number of Business Processes | Complexity | Average # of Transactions/Day (Assumption) | System Usage Time = # of Transactions × Estimated Duration |
|---|---|---|---|---|
| Case Manager | 38 | 10 Complex, 28 Medium | 1,300 transactions (complex); 3,700 transactions (medium) | 1,300 × 30 = 39,000 min; 3,700 × 20 = 74,000 min |

Case Administration

| User Type | Number of Business Processes | Complexity | Average # of Transactions/Day (Assumption) | System Usage Time = # of Transactions × Estimated Duration |
|---|---|---|---|---|
| Case Manager | 30 | 8 Complex, 22 Medium | 100 transactions (complex); 250 transactions (medium) | 100 × 30 = 3,000 min; 250 × 20 = 5,000 min |


Financial

| User Type | Number of Business Processes | Complexity | Average # of Transactions/Day (Assumption) | System Usage Time = # of Transactions × Estimated Duration |
|---|---|---|---|---|
| AWI | 19 | 9 Complex, 10 Medium | 25 transactions (complex); 25 transactions (medium) | 25 × 30 = 750 min; 25 × 20 = 500 min |
| Finance Admin | 9 | 5 Complex, 4 Medium | 5 transactions (complex); 4 transactions (medium) | 5 × 30 = 150 min; 4 × 20 = 80 min |
| Region Admin | 1 | 1 Medium | 5 transactions | 5 × 20 = 100 min |
| Region | 10 | 5 Complex, 5 Medium | 125 transactions (complex); 125 transactions (medium) | 125 × 30 = 3,750 min; 125 × 20 = 2,500 min |

System Administration

| User Type | Number of Business Processes | Complexity | Average # of Transactions/Day (Assumption) | System Usage Time = # of Transactions × Estimated Duration |
|---|---|---|---|---|
| Sys Admin | 22 | 2 Complex, 20 Medium | 50 transactions (medium) | 50 × 20 = 1,000 min |

Total system usage time for one stop activities = 316,830 min/day

(Gulf will assume, for testing purposes, these transactions take place during a 4-hour peak time period)

Number of concurrent users from one stops = 316,830 / (4 × 60) ≈ 1,320
Number of concurrent users from AWI / WFI = 100
Number of concurrent casual users through the web = 500
Total number of concurrent users = 1,920
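The concurrent-user estimate above can be reproduced directly from the per-module usage times. This sketch sums the figures from the tables (with the Financial-module products recomputed, e.g. 25 × 20 = 500, 5 × 30 = 150, and the Region Admin row's 5 × 20 = 100) and applies the 4-hour peak-window assumption:

```python
# Per-module system usage times (minutes/day) from the tables above.
usage_min = {
    "Registration":          60_000 + 30_000 + 2_000,
    "Job Services":          50_000 + 40_000 + 5_000,
    "Case Management":       39_000 + 74_000,
    "Case Administration":    3_000 + 5_000,
    "Financial":             750 + 500 + 150 + 80 + 100 + 3_750 + 2_500,
    "System Administration": 1_000,
}
total_min = sum(usage_min.values())          # 316,830 min/day

# Assume the load falls within a 4-hour peak window (240 minutes).
one_stop_users = total_min // (4 * 60)
total_users = one_stop_users + 100 + 500     # AWI/WFI staff + casual web users
print(one_stop_users, total_users)  # 1320 1920
```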

3.3.3 Virtual User Scripts
· Record the business processes to create a test script. Script recording is done using LoadRunner's Virtual User Generator (VuGen). VuGen is a component that runs on a client desktop to capture the communication between the real client application and the server. It can emulate the exact behavior of a real browser by sending various e-business protocol requests to the server, and it can record against Netscape or Internet Explorer browsers or any user-defined client that allows a proxy address to be specified. After the recording process, a test script is generated.
· Add logic to the script to make it more realistic. Intelligence can be added to the scripts so that they emulate virtual-user reasoning while executing a transaction. LoadRunner executes this stage using transactions, as well as its verification and parameterization features.

3.3.4 Test Settings
LoadRunner provides comprehensive run-time settings to configure scripts that emulate the behavior of real users.
Here, Gulf will be attempting to simulate the test user base (virtual users) as realistically as possible. The test user base and the transactions for this test will be simulated based on the following analysis:
· The Performance and Volumetric Test will be conducted in a lab environment (within the LAN) with a total of 3,000 concurrent (virtual) users
· A one-time test is proposed in which the load is progressively increased to 5,000 concurrent users, to verify the scalability of the application and confirm that it can handle loads above 3,000 concurrent users
· The break-up of the test user base of 3,000 concurrent users is given below:


| Module / Program | % | Test Sample Size | Selected Sample Transactions | Number of Each Sample Transaction |
|---|---|---|---|---|
| Finance | 5% | 150 | Simple - Login; Medium - Cash Request; Complex - Finance Report Summary | 50; 80; 20 |
| Registration - One Stop | 20% | 400 | Simple - Login; Medium - Contact Details (Registration); Complex - Program Details (Registration) | 150; 200; 50 |
| Job Services | 25% | 500 | Simple - Job Search; Medium - Build Resume; Complex - Apply for Job | 250; 150; 100 |
| Case Management | 35% | 700 | Simple - Case Follow Up; Medium - View Case Summary; Complex - Assign Activity | 300; 300; 100 |
| Registration - Online Job Seekers | 15% | 300 | Simple - Job Search; Medium - Resume Update; Complex - Gap Analysis | 150; 100; 50 |


4 TEST TEAM
The proposed test team composition for OSMIS Performance and Volumetric Test is given below:


| Role | Name |
|---|---|
| Testing engineers, application (Gulf Computers) | Sanghosh Bhalla; Srinivasan Centhala; Santosh Chiplunkar; Roshan Poojari |
| Testing engineers, database (Gulf Computers) | Easwaran Ramasamy; Karthik Kandamuri |
| Systems/network engineer (Gulf Computers) | Ram Iyer |
| System specialist (Gulf Computers) | Rajan Pillai |
| Application architect (Gulf Computers) | Santosh Pradhan |
| STO representatives | System Engineer (Sun Solaris); Dan Hauversburk (Oracle AS and DB); System Engineer (Web/Networking) |
| Customer representatives | Tom Marinelli; Fielding Cooley; Fred Dietrich |

The testing engineers will be provided training on the LoadRunner tools at Mercury Interactive, Inc. prior to testing.

In addition, Gulf suggests obtaining the assistance of Mercury Interactive engineers by procuring the "Quick Start" package from Mercury (received with the package), which covers:
· Scripting
· Testing
· Analyzing results
· System tuning for any three business processes

The rest will be handled by Gulf test engineers.

5 TEST SCHEDULE
The tentative start dates for various iterations of Performance and Volumetric Testing are given below:
· May 20, 2002
· July 08, 2002
· August 05, 2002
· October 08, 2002
· December 12, 2002

6 DOCUMENTING RESULTS and CORRECTIVE MEASURES
The results of the tests will be properly documented and submitted to AWI and STO. Test Results will be documented using the Report Formats of Mercury LoadRunner.

Any corrective measures or tuning of hardware or software will be implemented immediately after completion of the first iteration of the performance and volumetric tests, and will also be well documented. The complete set of Performance and Volumetric Tests will be repeated after correction and tuning of the system. There could be several iterations of this correction-and-retest process. The final test will be conducted to ensure the system meets all the expected performance parameters as envisaged in the RFP.

A Final Test Report will be prepared and submitted to AWI and STO upon completion of the tests.
