Thursday, June 28, 2007

30 WinRunner Interview Questions

Which scripting language is used by WinRunner?

WinRunner uses TSL (Test Script Language), which is similar to C.

What is WinRunner?

WinRunner is Mercury Interactive's functional testing tool.

How many types of run modes are available in WinRunner?

WinRunner provides three run modes:
Verify Mode
Debug Mode
Update Mode

What is Verify Mode?

In Verify Mode, WinRunner compares the current results of the application to its expected results.

What is Debug Mode?

In Debug Mode, WinRunner helps you track down defects in a test script.

What is Update Mode?

In Update Mode, WinRunner updates the expected results of a test script.

How many types of recording modes are available in WinRunner?

WinRunner provides two recording modes:
Context Sensitive
Analog

What is Context Sensitive recording?

In Context Sensitive recording, WinRunner captures and records GUI objects, windows, keyboard input, and mouse-click activity.
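
For instance, recording a few actions in a hypothetical login window in Context Sensitive mode might produce TSL similar to this sketch (the window and object names are invented for illustration):

```
# Illustrative TSL from a Context Sensitive recording session;
# the "Login" window and its object names are hypothetical.
set_window ("Login", 10);          # give focus to the Login window
edit_set ("User Name:", "siva");   # keyboard input into an edit field
button_press ("OK");               # mouse click on the OK button
```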

When should Context Sensitive mode be chosen?

a. The application contains GUI objects.
b. Exact mouse movements are not required.


What is Analog recording?

Analog recording captures and records keyboard input, mouse clicks, and mouse movements. It does not capture GUI objects and windows.

When should Analog mode be chosen?

a. The application contains bitmap areas.
b. Exact mouse movements are required.


What are the components of WinRunner?

a. Test Window: the window where the TSL script is generated/programmed.
b. GUI Spy tool: lets you spy on GUI objects by recording their properties.

Where are Debug Results stored?

Debug results are always saved in the debug folder.

What is the WinRunner testing process?

The WinRunner testing process involves six main steps:
Create GUI map
Create Test
Debug Test
Run Test
View Results
Report Defects

What is the GUI Spy?

The GUI Spy lets you view the physical properties of objects and windows.

How many modes are there for organizing GUI map files?

WinRunner provides two modes:
Global GUI map files
Per Test GUI map files

What is contained in GUI map files?

GUI map files store the information WinRunner learns about GUI objects and windows.

How does WinRunner recognize objects in the application?

WinRunner recognizes objects in the application through GUI map files.
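
Before object statements can run, the relevant GUI map file must be loaded. A minimal sketch using the GUI_load function (the file path is invented for illustration):

```
# Load a GUI map file so WinRunner can match logical names
# to physical descriptions; the path below is hypothetical.
rc = GUI_load ("c:\\qa\\maps\\flights.gui");
if (rc != 0)    # GUI_load returns 0 (E_OK) on success
    report_msg ("Could not load the GUI map file");
```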

What is the difference between the GUI map and GUI map files?

The GUI map is the sum of one or more GUI map files.

How do you view the GUI map contents?

We can view the GUI map contents through the GUI Map Editor.

What is a checkpoint?

A checkpoint enables you to check your application by comparing its expected results to its actual results.

What is the Execution Arrow?

The Execution Arrow indicates the line of the script being executed.

What is the Insertion Point?

The insertion point indicates the position in the script where you can edit and insert text.

What is Synchronization?

Synchronization enables you to solve anticipated timing problems between the test and the application.
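
For example, a synchronization point can make the test wait for an object to reach a known state before the script continues. A sketch using obj_wait_bitmap (the object and image names are invented):

```
# Wait up to 10 seconds for the "Insert Done" message object to
# match its expected bitmap before continuing (names are hypothetical).
obj_wait_bitmap ("Insert Done", "Img1", 10);
button_press ("OK");
```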

What is the Function Generator?

The Function Generator provides a quick and error-free way to add TSL functions to the test script.

How many types of checkpoints are available in WinRunner?

WinRunner provides four types of checkpoints:
GUI Checkpoint
Bitmap Checkpoint
Database Checkpoint
Text Checkpoint
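
Recording a checkpoint inserts a corresponding TSL statement into the script. A sketch of a GUI checkpoint on an object and a bitmap checkpoint on a window (the checklist, expected-results, and image names are invented):

```
# GUI checkpoint: compare the OK button's properties against a checklist
# ("list1.ckl" and "gui1" are hypothetical file names).
obj_check_gui ("OK", "list1.ckl", "gui1", 1);

# Bitmap checkpoint on a window, with a 5-second timeout.
win_check_bitmap ("Flight Reservation", "Img2", 5);
```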

What is contained in a test script?

A test script contains statements written in Test Script Language (TSL).

How do you modify the logical name or physical description of objects in the GUI map?

We can modify the logical name or physical description of objects through the GUI Map Editor.

What is a Data Driven Test?

When you test your application, you may want to check how it performs the same operations with multiple sets of data. A data-driven test does this by running a single test with multiple data sets.

How do you record a Data Driven Test?

We can create a data-driven test using flat files, data tables, or a database.

How do you clear GUI map files?

We can clear GUI map files through the "Clear All" option.

What are the steps in creating a Data Driven Test?

Data-driven testing has four steps:
Creating a test
Converting it into a data-driven test
Running the test
Analyzing the results
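
After conversion, the test typically loops over a data table with the ddt_ functions. A sketch of such a loop, with a hypothetical table path and column name:

```
# Open the data table, run the same steps once per data row, then close it.
# The table path and the "name" column are hypothetical.
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                       # move to the next row
    edit_set ("Name:", ddt_val (table, "name"));  # drive input from the table
}
ddt_close (table);
```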

What is the Rapid Test Script Wizard?

It performs two tasks:
a. It systematically opens the windows in your application and learns a description of every GUI object. The wizard stores this information in a GUI map file.
b. It automatically generates tests based on the information it learned as it navigated through the application.

What are the different modes for learning an application under the Rapid Test Script Wizard?

a. Express
b. Comprehensive.

What is the extension of GUI map files?

The GUI map file extension is ".gui".

What statement is generated by WinRunner when you check an object?

The obj_check_gui statement.

What statement is generated by WinRunner when you check a window?

The win_check_gui statement.

What statement is generated by WinRunner when you check a bitmap image over an object?

The obj_check_bitmap statement.

What statement is generated by WinRunner when you check a bitmap image over a window?

The win_check_bitmap statement.

What statement is used by WinRunner in batch testing?

The call statement.

Which shortcut key is used to freeze the GUI Spy?

Ctrl+F3

How many types of parameters are used by WinRunner?

WinRunner provides three types of parameters:
Test
Data Driven
Dynamic

How many types of merging are used by WinRunner?

WinRunner uses two types of merging:
Auto
Manual

What is the Virtual Object Wizard?

Whenever WinRunner is not able to recognize an area of the screen as an object, you can use the Virtual Object Wizard to teach it to recognize that area as a virtual object.

How do you handle unexpected events and errors?

WinRunner uses exception handling functions to handle unexpected events and errors.

How do you comment your script?

We comment a script line by inserting "#" at the beginning of the line.

What is the purpose of the set_window command?

The set_window command sets the focus to the specified window.

How did you create your test scripts?

By programming.

What is the command to invoke an application?

invoke_application
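
A sketch of invoke_application launching the application under test (the executable path and working directory are invented for illustration):

```
# Launch the application under test before the first recorded step
# (the path and working directory below are hypothetical).
invoke_application ("c:\\windows\\notepad.exe", "", "c:\\", SW_SHOW);
```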

What do you mean by the logical name of an object?

The logical name of an object is determined by its class, but in most cases the logical name is the label that appears on the object.

How many types of GUI checkpoints are there?

WinRunner has three types of GUI checkpoints:
For Single Properties
For Objects/Windows
For Multiple Objects

How many types of bitmap checkpoints are there?

WinRunner has two types of bitmap checkpoints:
For Objects/Windows
For Screen Area

How many types of database checkpoints are there?

WinRunner has three types of database checkpoints:
Default Check
Custom Check
Runtime Record Check

How many types of text checkpoints are there?

WinRunner has four types of text checkpoints:
For Objects/Windows
From Screen Area
From Selection (Web Only)
Web text Checkpoints

What add-ins are available for WinRunner?

Add-ins are available for Java, ActiveX, WebTest, Siebel, Baan, Stingray, Delphi, Terminal Emulator, Forte, NSDK/Natstar, Oracle and PowerBuilder.

Notes:

* WinRunner generates a menu_select_item statement whenever you select a menu item.
* WinRunner generates a set_window statement whenever you begin working in a new window.
* WinRunner generates an edit_set statement whenever you enter keyboard input.
* WinRunner generates an obj_mouse_click statement whenever you click an object with the mouse pointer.
* WinRunner generates obj_wait_bitmap or win_wait_bitmap statements whenever you synchronize the script on an object or window.
* The ddt_open statement opens a data table.
* The ddt_close statement closes the data table.
* WinRunner inserts a win_get_text or obj_get_text statement into the script for checking text.
* The button_press statement presses a button.
* WinRunner generates a list_item_select statement whenever you select a value in a drop-down list.
* We can compare two files in WinRunner using the file_compare function.
* The tl_step statement is used to determine whether a section of a test passes or fails.
* The call_close statement closes the called test when it is completed.
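
As a small illustration of the tl_step and file_compare notes above (the file paths and step name are hypothetical):

```
# Compare an actual output file with its expected version, then report
# a custom pass/fail step; status 0 means pass, non-zero means fail.
# Paths are hypothetical; we assume E_OK indicates matching files.
rc = file_compare ("c:\\qa\\out\\actual.txt", "c:\\qa\\exp\\expected.txt");
if (rc == E_OK)
    tl_step ("compare_output", 0, "Output matches the expected file");
else
    tl_step ("compare_output", 1, "Output differs from the expected file");
```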

32 QTP Interview Questions

What is the full form of QTP?

QuickTest Professional

What is QTP?

QTP is Mercury Interactive's functional testing tool.

Which scripting language is used by QTP?

QTP uses VBScript.

What is the basic concept of QTP?

QTP is based on two concepts:
* Recording
* Playback

How many types of recording facilities are available in QTP?

QTP provides three recording modes:
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

How many types of parameters are available in QTP?

QTP provides three types of parameters:
* Method Argument
* Data Driven
* Dynamic

What is the QTP testing process?

The QTP testing process consists of seven steps:
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

What is the Active Screen?

The Active Screen provides a snapshot of your application as it appeared when you performed a certain step during the recording session.

What is the Test Pane?

The Test Pane contains the Tree View and Expert View tabs.

What is the Data Table?

The Data Table assists you in parameterizing the test.

What is the Test Tree?

The Test Tree provides a graphical representation of the operations you performed on your application.

Which environments does QTP support?

ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL

How can you view the Test Tree?

The Test Tree is displayed in the Tree View tab.

What is the Expert View?

The Expert View displays the test script.

Which shortcut key is used for normal recording?

F3

Which shortcut key is used to run the test script?

F5

Which shortcut key is used to stop recording?

F4

Which shortcut key is used for Analog Recording?

Ctrl+Shift+F4

Which shortcut key is used for Low Level Recording?

Ctrl+Shift+F3

Which shortcut key is used to switch between Tree View and Expert View?

Ctrl+Tab

What is a Transaction?

You can measure how long it takes to run a section of your test by defining transactions.

Where can you view the results of a checkpoint?

You can view the results of checkpoints in the Test Results window.

What is the Standard Checkpoint?

A Standard Checkpoint checks the property values of an object in your application or web page.

Which environments are supported by the Standard Checkpoint?

Standard Checkpoints are supported in all add-in environments.

What is the Image Checkpoint?

An Image Checkpoint checks the value of an image in your application or web page.

Which environments are supported by the Image Checkpoint?

Image Checkpoints are supported only in the Web environment.

What is the Bitmap Checkpoint?

A Bitmap Checkpoint checks bitmap images in your web page or application.

Which environments are supported by Bitmap Checkpoints?

Bitmap Checkpoints are supported in all add-in environments.

What is the Table Checkpoint?

A Table Checkpoint checks the information within a table.

Which environments are supported by the Table Checkpoint?

Table Checkpoints are supported only in the ActiveX environment.

What is the Text Checkpoint?

A Text Checkpoint checks that a text string is displayed in the appropriate place in your application or web page.

Which environments are supported by the Text Checkpoint?

Text Checkpoints are supported in all add-in environments.

Note:


* QTP records each step you perform and generates a test tree and test script.

* QTP records in normal recording mode by default.

* If you are creating a test on web objects, you can record your test on one browser and run it on another browser.

* Analog Recording and Low Level Recording require more disk space than normal recording mode.

Pros & Cons of the V-Model and the Waterfall Model

Pros & Cons of the Waterfall Model

* Enforced discipline through documents.
* No phase is complete until the docs are done and checked by the SQA group.
* Concrete evidence of progress.
* Testing is inherent in every phase.

* No fair division of phases in the life cycle.
* A phase cannot start until the previous phase has finished.
* Document-driven model; as a result, customers cannot understand these documents.
* Re-design is problematic.

Pros & Cons of the V-Model


* Simple and easy to use.
* Each phase has specific deliverables.
* Higher chance of success than the waterfall model, due to the development of test plans early in the life cycle.
* Works well for small projects where requirements are easily understood.

* Very rigid, like the waterfall model.
* Little flexibility; adjusting scope is difficult and expensive.
* Software is developed during the implementation phase, so no early prototypes are produced.
* The model doesn't provide a clear path for problems found during the testing phases.

Wednesday, June 27, 2007

Difference in testing a CLIENT-SERVER application and a WEB application

The main difference is:
In both cases we perform load and performance testing. Testing an application on an intranet is an example of client-server testing.

Testing an application on the Internet (using a browser) is called web testing.

Web Server vs. Application Server

* A web server serves pages for viewing in a web browser; an application server exposes business logic to client applications through various protocols.
* A web server exclusively handles HTTP requests; an application server serves business logic to application programs through any number of protocols.
* The web server delegation model is fairly simple: when a request comes into the web server, it simply passes the request to the program best able to handle it (a server-side program). It may not support transactions or database connection pooling. An application server is more capable of dynamic behaviour than a web server; we can also configure an application server to work as a web server. Simply put, an application server is a superset of a web server.
* A web server serves static HTML pages, GIFs, JPEGs, etc., and can also run code written in CGI, JSP, etc. A web server handles the HTTP protocol; examples of web servers are IIS and Apache. An application server is used to run business logic or dynamically generated presentation code. It can be either .NET-based or J2EE-based (BEA WebLogic Server, IBM WebSphere, JBoss).

A J2EE application server runs servlets and JSPs (in fact, a part of the app server called the web container is responsible for running servlets and JSPs) that are used to create HTML pages dynamically. In addition, a J2EE application server can run EJBs, which are used to execute business logic.

Saturday, June 23, 2007

Writing Test Case Using Use cases


Use case: Tag Notification to the Group
Confidence:Y
Exec Method: M
TestPlan:
Build #:
Pass/Fail:
TestScript:
CR#:
Comment:
TestCase:
Based On: UC

TestData:
Specific Preconditions
· Configure the HMI and ensure that an alarm has been generated
· Configure three operators in the "ON CALL" list
· Every operator in the group should be configured with a voice phone number
Basic course

Input Specifications
Output Specifications
1. The configured HMI generates an alarm and SCADAlarm calls the configured operator.
2. Verify the number of retries without answering the call.
o The number of retries equals the value of "Number of retries before moving on to the next call" configured in the menu Configuration >> System Parameter >> Retrying tab.
3. Verify that the call is diverted to the next person in the call list/schedule after the value in the menu Configuration >> System Parameter >> Retrying tab (Number of retries before moving on to the next call) is exceeded.
o The application should divert the call to the next person as per the call list/schedule, depending on the alarm condition.
4. The second operator receives the call on the configured number.
o The second operator is greeted if a greeting file is configured.
5. Enter a valid Operator ID and PIN as requested by the SCADAlarm TUI.
o Upon authentication, the operator should be logged in to the SCADAlarm application.
6. Press the "0" key on the telephone to log out from SCADAlarm.
o A confirmation message will be played.
7. Press the "9" key to exit the application without acknowledging the alarm.
o SCADAlarm will log the operator's information and the times the operator logged in and out to the SCADAlarm system logger.
o The operator will be notified of the current system time and greeted with "Good Bye".
8. Verify the call has been moved on to the third operator's configured number.
o The operator got the call on the configured number.

Specific Post conditions

Second Example:
Add Item to Cart

The actor for this alternative flow is the Authenticated Customer. The flow begins with the user on the Product Description page.
1. The user enters a product quantity to order.
2. The user clicks on the "Add to Cart" button.
3. The system validates the product order information.
4. If the product order information is invalid, the system displays an error message and the use case ends.
If the product order information is valid, the system populates (but does not display) the shopping cart, displays a confirmation message, and the use case ends. The system also populates the Mini Shopping Cart and displays it.



Each test case records: Test Case ID, Test Conditions, Actions to Perform Test, Expected Results, Actual Result, and Pass/Fail (the last two are filled in at execution time).

Test Case 1
Test Condition: Validation of the View Cart button
Actions: 1. Authenticated user clicks on the "View Cart" button.
Expected Result: The system displays the Shopping Cart.

Test Case 2
Test Condition: Screen validation of the Shopping Cart window
Actions: 1. Check the following attributes in the above-mentioned screen: aesthetic conditions, validation conditions, navigation conditions, usability conditions, data integrity conditions, and the specific field tests mentioned in the appendix, as applicable.
Expected Result: All applicable parameters mentioned in the appendix should be verified.

Test Case 3
Test Condition: Validation for Add Item to Cart
Actions: 1. On the Product Description page, the authenticated user enters the quantity to order. 2. Click on the "Add to Cart" button.
Expected Result: The quantity entered should be displayed, and the product order information is validated.

Test Case 4
Test Condition: Validation for Add Item to Cart
Expected Result: If the product order information is invalid, the system displays an error and a message box stating the same should pop up.

Test Case 5
Test Condition: Validation for Add Item to Cart
Expected Result: If the product order information is valid, the shopping cart should be populated and a confirmation message stating the same should be displayed. A Mini Shopping Cart should be displayed after the confirmation.

Questions asked in various company interviews...

Questions asked in a COVANSYS interview
What will the GUI map contain?
After recording the script, if I change the logical name in the script, will it run or not? For example, in edit_set ("Enter the user name", "siva"); I changed the logical name "Enter the user name" to "Enter ID".
If my object's physical description has no property like attached_text, what will happen?
What are the run modes present in WinRunner?
What is the use of Debug mode?
What will Update mode do?
Have you ever heard about exception handling?
What are the types of exception handling?
What will a pop-up exception do?
What are the default actions for a pop-up exception?
Have you ever used a user-defined exception?
What is the function for exception handling?
Give an example of a web exception.
What are the types of checkpoints?
What is a GUI checkpoint? A bitmap checkpoint?
What is the difference between an RDBMS and a DBMS?
What is an RDBMS?
What are DDL and DML?
What command will you use to erase a table: DROP or DELETE?
What are the types of joins?
What is an equi join and an outer join?
What is a compiled module?
What are the types of classes?
What are public, static, auto, and extern?
Give an example of a variable.
Why do we use a double backslash (C:\\win runner…..) instead of a single backslash when declaring a path?
What are the parameters for exception handling?
Is there any possibility of loading two GUI map files into the GUI Map Editor?
How do you add GUI map files into the GUI Map Editor without recording?
How does WinRunner recognize objects?
There is a window containing only one object. I performed an action on it; what will the GUI map contain?
There is a window containing 12 objects. I performed an action on the first object; what script will be generated and what will it contain?
What general properties are available for an object?
There is a default property that WinRunner always learns when you perform an action on an object. What is that property?
When is it necessary to change the logical name?
Is the Rapid Test Script Wizard always available?
What is the difference between adding checkpoints from the toolbar, adding functions from the Function Generator, and adding them manually?
What is the purpose of the Function Generator?
What is the use of a compiled module?
What is the difference between SilkTest and WinRunner?
What is the use of the GUI Spy and the GUI Map Editor?
What is the purpose of GUI merging?
What is the extension of the expected results file?
How can you see the actual and expected results for checkpoints in the results window?
What folders are present in WinRunner?
How do you know that a specified checkpoint fails in WinRunner?
What is the purpose of the Virtual Object Wizard?
If I created a script using the Virtual Object Wizard on one system, will that same script run on a different system with a different resolution?
What is the use of GUI map configuration?
I want to create a pop-up exception on a pop-up by clicking the close button (the X at the top right corner). What will the parameters be for that exception?
What will the checklist file contain?
What will the expected results file contain?
What is a compilation error?
Is WinRunner an interpreter or a compiler?
Is a variable always a string, or can it be anything else?


1. Isoft
What should be done after writing test cases?

2. Covansys
Testing
What is bidirectional traceability, and how is it implemented?
What is an automation test framework?
Define the components present in a test strategy.
Define the components present in a test plan.
Define database testing.
What is the difference between QA and QC?
What is the difference between verification and validation (V&V)?
What are the different types of test cases that you have written in your project?
Have you written a test plan?
SQL
What are joins? Define all the joins.
What is a foreign key?
Write an SQL query if you want to select data from one block which in turn reflects in another block.
Unix
Which command is used to run an interface?
How will you see hidden files?
What is the command used to set the date and time?
Some basic commands like copy, move, and delete?
Which command is used to go back to the home directory?
Which command is used to view the current directory?
3. Virtusa
Testing
Tell me about yourself.
Testing process followed in your company.
Testing methodology.
Where do you maintain the repositories?
What is CVS?
Bug tool used?
How will you prepare a traceability matrix if there is no business doc and functional doc?
How will you validate the functionality of the test cases if there is no business requirement document or user requirement document as such?
Testing process followed in your company?
Tell me about CMM Level 4. What steps are to be followed to achieve the CMM Level 4 standards?
What is back-end testing?
What is unit testing?
How will you write test cases for a given scenario, i.e. main page, login screen, transaction, report verification?
How will you write a traceability matrix?
What is CVS and why is it used?
What will be specified in the defect report?
What is a test summary report?
What is a test closure report?
Explain the defect life cycle.
What will be specified in the test case?
What are the testing methodologies that you have followed in your project?
What kind of testing have you been involved in? Explain it.
What is UAT testing?
What are joins and what are the different types of joins in SQL? Explain.
What is a foreign key in SQL?
KLA Tencor

Bug life cycle?
Explain about the project, and draw the architecture of your project.
What are the different types of severity?
Defect tracking tools used?
What are the responsibilities of a tester?
Give an example of how you would write test cases if a scenario involves a login screen.
Aztec
What are the different types of testing followed?
What are the different levels of testing used during testing of the application?
What type of testing will be done in installation testing or system testing?
What is meant by CMMI? What are the different CMM levels?
Explain the components involved in CMM Level 4.
Explain performance testing.
What is a traceability matrix and how is it done?
How can you differentiate severity and priority from technical and business points of view?
What is the difference between the test life cycle and the defect life cycle?
How will you ensure that you have covered all the functionality while writing test cases if there is no functional spec and no KT about the application?
Important: Study Unix and SQL commands. Nowadays in each and every interview they ask questions related to SQL and Unix.

Monday, June 18, 2007

Test Plan Outline

TEST PLAN OUTLINE
(IEEE 829 Format)

1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

IEEE TEST PLAN TEMPLATE

Test Plan Identifier

Some type of unique company generated number to identify this test plan, its level and the level of software that it is related to. Preferably the test plan level will be the same as the related software level. The number may also identify whether the test plan is a Master plan, a Level plan, an integration plan or whichever plan level it represents. This is to assist in coordinating software and testware versions within configuration management.

Keep in mind that test plans are like other software documentation, they are dynamic in nature and must be kept up to date. Therefore, they will have revision numbers.

You may want to include author and contact information, including the revision history, as part of either the identifier section or the introduction.

References

List all documents that support this test plan. Refer to the actual version/release number of the document as stored in the configuration management system. Do not duplicate the text from other documents as this will reduce the viability of this document and increase the maintenance effort. Documents that can be referenced include:

  • Project Plan
  • Requirements specifications
  • High Level design document
  • Detail design document
  • Development and Test process standards
  • Methodology guidelines and examples
  • Corporate standards and guidelines

Introduction

State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive summary part of the plan.

You may want to include any references to other plans, documents or items that contain information relevant to this project/process. If preferable, you can create a references section to contain all reference documents.

Identify the scope of the plan in relation to the software project plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and communication and coordination of key activities.

As this is the "Executive Summary" keep information brief and to the point.

Test Items (Functions)

These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.

This can be controlled and defined by your local Configuration Management (CM) process if you have one. This information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.

Remember, what you are testing is what you intend to deliver to the Client.

This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.

Software Risk Issues

Identify what software is to be tested and what the critical areas are, such as:

    1. Delivery of a third party product.
    2. New version of interfacing software
    3. Ability to use and understand a new package/tool, etc.
    4. Extremely complex functions
    5. Modifications to components with a past history of failure
    6. Poorly documented modules or change requests

There are some inherent software risks such as complexity; these need to be identified.

    1. Safety
    2. Multiple interfaces
    3. Impacts on Client
    4. Government regulations and rules

Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.

The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.

One good approach to define where the risks are is to have several brainstorming sessions.

  • Start with ideas, such as, what worries me about this project/application.

Features to be Tested

This is a listing of what is to be tested from the USERS viewpoint of what the system does. This is not a technical description of the software, but a USERS view of the functions.

Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.

It should be noted that Section 4 and Section 6 are very similar. The only true difference is the point of view. Section 4 is a technical type description including version numbers and other technical information and Section 6 is from the User’s viewpoint. Users do not understand technical software terminology; they understand functions and processes as they relate to their jobs.

Features not to be Tested

This is a listing of what is NOT to be tested from both the Users viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a USERS view of the functions.

Identify WHY the feature is not to be tested; there can be any number of reasons.

  • Not to be included in this release of the Software.
  • Low risk, has been used before and is considered stable.
  • Will be released but not tested or documented as a functional part of the release of this version of the software.

Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.

Approach (Strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.

  • Are any special tools to be used and what are they?
  • Will the tool require special training?
  • What metrics will be collected?
  • Which level is each metric to be collected at?
  • How is Configuration Management to be handled?
  • How many different configurations will be tested?
    • Hardware
    • Software
    • Combinations of HW, SW and other vendor packages
  • What levels of regression testing will be done and how much at each test level?
  • Will regression testing be based on severity of defects detected?
  • How will elements in the requirements and design that do not make sense or are untestable be processed?

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.

Specify if there are special requirements for the testing.

  • Only the full component will be tested.
  • A specified segment or grouping of features/components must be tested together.

Other information that may be useful in setting the approach includes:

  • MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
  • SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
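MTBF itself is a simple formula: total operating time divided by the number of failures observed in that time. A worked example (the hours and failure count below are hypothetical):

```python
# MTBF = total operating time / number of failures observed.
# The figures below are made up for illustration.
operating_hours = 1200.0
failures = 4

mtbf = operating_hours / failures
print(f"MTBF: {mtbf} hours")  # → MTBF: 300.0 hours
```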

How will meetings and other organizational processes be handled?

Item Pass/Fail Criteria

What are the Completion criteria for this plan? This is a critical aspect of any test plan and should be appropriate to the level of the plan.

  • At the Unit test level this could be items such as:
    • All test cases completed.
    • A specified percentage of cases completed with a percentage containing some number of minor defects.
    • Code coverage tool indicates all code covered.
  • At the Master test plan level this could be items such as:
    • All lower level plans completed.
    • A specified number of plans completed without errors and a percentage with minor defects.

This could be an individual test-case-level criterion for a unit-level plan, or it can be general functional requirements for higher-level plans.

What is the number and severity of defects located?

  • Is it possible to compare this to the total number of defects? This may be impossible, as some defects are never detected.
    • A defect is something that may cause a failure, and may be acceptable to leave in the application.
    • A failure is the result of a defect as seen by the User, the system crashes, etc.
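A completion criterion such as "a specified percentage of cases completed, with no more than some number of minor defects" can be expressed as a simple check. This is only a sketch; the function name and thresholds are hypothetical:

```python
# Hypothetical completion criteria for a unit-level plan:
# at least min_pct of cases run, and minor defects within a limit.
def criteria_met(cases_run, cases_total, minor_defects,
                 min_pct=95.0, max_minor=5):
    """Pass when enough cases ran and minor defects stay under the limit."""
    pct_complete = 100.0 * cases_run / cases_total
    return pct_complete >= min_pct and minor_defects <= max_minor

print(criteria_met(cases_run=98, cases_total=100, minor_defects=3))  # True
print(criteria_met(cases_run=90, cases_total=100, minor_defects=3))  # False
```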

Suspension Criteria and Resumption Requirements

Know when to pause in a series of tests.

  • If the number or type of defects reaches a point where the follow-on testing has no value, it makes no sense to continue the test; you are just wasting resources.

Specify what constitutes stoppage for a test or series of tests and what is the acceptable level of defects that will allow the testing to proceed past the defects.

Testing after a truly fatal error will generate conditions that may be identified as defects but are in fact ghost errors caused by the earlier defects that were ignored.
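A suspension rule of this kind, stopping when blocking defects make further results meaningless, can be sketched as follows (the severity labels and thresholds are hypothetical):

```python
# Hypothetical suspension rule: suspend testing when any critical
# defect is open, or when open major defects exceed a threshold.
def suspend_testing(open_defects, max_major=3):
    critical = sum(1 for d in open_defects if d == "critical")
    major = sum(1 for d in open_defects if d == "major")
    return critical > 0 or major > max_major

print(suspend_testing(["major", "minor"]))  # False: below thresholds
print(suspend_testing(["critical"]))        # True: a critical defect blocks
print(suspend_testing(["major"] * 4))       # True: too many major defects
```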

Test Deliverables

What is to be delivered as part of this plan?

  • Test plan document.
  • Test cases.
  • Test design specifications.
  • Tools and their outputs.
  • Simulators.
  • Static and dynamic generators.
  • Error logs and execution logs.
  • Problem reports and corrective actions.

One thing that is not a test deliverable is the software itself; that is listed under test items and is delivered by development.

Remaining Test Tasks

If this is a multi-phase process or if the application is to be released in increments there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.

If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them and to avoid wasting resources tracking defects that do not relate to this plan.

When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internal groups and the external groups.

Environmental Needs

Are there any special requirements for this test plan, such as:

  • Special hardware such as simulators, static generators etc.
  • How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
  • How much testing will be done on each component of a multi-part feature?
  • Special power requirements.
  • Specific versions of other supporting software.
  • Restricted use of the system during testing.

Staffing and Training needs

Training on the application/system.

Training for any test tools to be used.

Section 4 and Section 15 also affect this section: what is to be tested determines who is responsible for the testing and training.

Responsibilities

Who is in charge?

This issue includes all areas of the plan. Here are some examples:

  • Setting risks.
  • Selecting features to be tested and not tested.
  • Setting overall strategy for this level of plan.
  • Ensuring all required elements are in place for testing.
  • Providing for resolution of scheduling conflicts, especially if testing is done on the production system.
  • Who provides the required training?
  • Who makes the critical go/no go decisions for items not covered in the test plans?

Schedule

The schedule should be based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan will slip, and the testing is part of the overall project plan.

  • As we all know, the first area of a project plan to get cut when it comes to crunch time at the end of a project is the testing. It usually comes down to the decision, ‘Let’s put something out even if it does not really work all that well’. And, as we all know, this is usually the worst possible decision.

How slippage in the schedule is to be handled should also be addressed here.

  • If the users know in advance that a slippage in the development will cause a slippage in the test and the overall delivery of the system, they just may be a little more tolerant, if they know it’s in their interest to get a better tested application.
  • By spelling out the effects here you have a chance to discuss them in advance of their actual occurrence. You may even get the users to agree to a few defects in advance, if the schedule slips.

At this point, all relevant milestones should be identified with their relationship to the development process identified. This will also help in identifying and tracking potential slippage in the schedule caused by the test process.

It is always best to tie all test dates directly to their related development activity dates. This prevents the test team from being perceived as the cause of a delay. For example, if system testing is to begin after delivery of the final build, then system testing begins the day after delivery. If the delivery is late, system testing starts from the day of delivery, not on a specific date. This is called dependent or relative dating.
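The relative-dating idea can be sketched with Python's datetime module; the delivery date below is hypothetical:

```python
from datetime import date, timedelta

# Relative dating: system testing starts the day after the final build
# is delivered, whatever that delivery date turns out to be.
def system_test_start(delivery: date) -> date:
    return delivery + timedelta(days=1)

planned_delivery = date(2007, 6, 1)                       # hypothetical date
late_delivery = planned_delivery + timedelta(days=10)     # delivery slips 10 days

print(system_test_start(planned_delivery))  # 2007-06-02
print(system_test_start(late_delivery))     # 2007-06-12
```

Because the test date is derived from the delivery date rather than fixed, a slip in delivery moves the test start automatically and the test team is not blamed for the delay.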

Planning Risks and Contingencies

What are the overall risks to the project with an emphasis on the testing process?

  • Lack of personnel resources when testing is to begin.
  • Lack of availability of required hardware, software, data or tools.
  • Late delivery of the software, hardware or tools.
  • Delays in training on the application and/or tools.
  • Changes to the original requirements or designs.

Specify what will be done for various events, for example:

Requirements definition will be complete by January 1, 19XX, and, if the requirements change after that date, the following actions will be taken:

  • The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
  • The number of tests performed will be reduced.
  • The number of acceptable defects will be increased.
    • These two items could lower the overall quality of the delivered product.
  • Resources will be added to the test team.
  • The test team will work overtime (this could affect team morale).
  • The scope of the plan may be changed.
  • There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
  • You could just QUIT. A rather extreme option to say the least.

Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in the past.

The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.

Approvals

Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?

At the master test plan level, this may be all involved parties.

When determining the approval process, keep in mind who the audience is:

  • The audience for a unit test level plan is different than that of an integration, system or master level plan.
  • The levels and type of knowledge at the various levels will be different as well.
  • Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
  • Users may have varying levels of business acumen and very little technical skills.
  • Always be wary of users who claim high levels of technical skills and programmers that claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.

Glossary

Used to define terms and acronyms used in the document, and testing in general, to eliminate confusion and promote consistent communications.

Tuesday, June 12, 2007

Web Application Security Testing

Web applications and client-server applications: are they the same? This question is very common in software testing interviews; if you are part of e-groups related to testing, you may have heard it many times from different people. There are numerous differences between client-server and Web application architecture. As a tester of Web applications, it is important to understand what client-server architecture is and how the Web differs from traditional client-server architecture.
The Web is a specialized version of the client-server network, but it has noticeable differences. In a client-server network, computing resources are conserved by delegating complex and time-consuming tasks to powerful, expensive computers called servers. These server machines are much more powerful in terms of storage and computing power. They do all the computing and deliver results back to machines called clients over a communication path. Thus client-server architecture comprises the server, the client and the communication path connecting them.
At a lower level, client-server architecture is not that simple. To connect two computers, you need a network-level protocol, and you need proper software on the client side and the server side to send and receive data over the network. You need to take care of data loss during transmission, bandwidth issues, dropped connectivity, etc. Most of these issues are already addressed by protocols like TCP/IP, UDP and ARP, so developers face very little difficulty implementing them. These protocols are the backbone of client-server architecture.
The WWW was developed on top of the existing client-server architecture. It came into existence as a replacement for FTP and email as a mechanism for sharing files and data. New development in servers to handle more requests, new client software to connect to and browse resources on the server, and new standards like HTTP and HTML fueled the growth of the Web. The main component of the Web architecture is the Web server, which can serve requests from any client. Initially the Web served only static content, and it was soon explored for the possibility of doing much more.
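To make the protocol layering concrete, here is a sketch of the raw HTTP text that travels over the TCP connection between client and server. No network connection is made; the hostname and response body are illustrative:

```python
# The raw HTTP/1.1 request a browser sends for a page.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# An illustrative response the server might send back.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# The client parses the status line to learn whether the request succeeded.
status_line = response.split("\r\n", 1)[0]
status_code = int(status_line.split()[1])
print(status_code)  # 200
```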
Even though the Web is built on top of client-server, there are noticeable differences. For example:

  • The Web is a special case of client-server architecture in which thin clients (browsers) communicate with the server using a variety of protocols and standards such as HTTP, HTML, XML and SOAP.
  • In client-server architecture, both client and server exist within the walls of a single company and thus operate in a protected environment; clients in that case become trusted users. The Web is different: clients can connect to the server from anywhere, so no connection can be treated as trusted.
  • Because client-server systems typically sit within a company's firewall, security issues are not as pressing as in Web applications.
  • In client-server architecture, clients are controlled: who can access the server, and how clients communicate with and use the server's resources. On the Web, mostly anyone with a browser can connect.
  • In client-server architecture, every client is known; every request received by the server carries information on who originated it. On the Web, users are anonymous and thus pose a greater security risk.
  • The Web gives malicious users more opportunity to tamper with data at the client side as well as at the network level. The chances of data being tampered with in traditional client-server architecture are much lower than on the Web.
  • The number of clients connected to the server is predictable and can be controlled in traditional client-server, but cannot be controlled on the Web.
  • Clients are much more controlled in client-server: which OS they use, which platform they run on, and which browser is used can all be controlled. In comparison, almost nothing can be controlled on the Web.

Because the two are different, testing applications in client-server and Web environments will also differ. The main areas where testing is affected can be summarized as:

  • Business Logic: In client-server applications, client-side business logic needs to be tested, which is mostly not needed for Web-based applications.
  • Platform/OS Dependence: Web-based applications are OS independent; they just need to be tested on different browsers. Client-server applications depend on the platform/OS used, which necessitates testing them on different platforms and operating systems.
  • Scalability: Web-based applications have to be tested for performance against thousands of simultaneous users. This number will be considerably less for a client-server application.
  • Security: This forms an integral part of Web-based application testing, but it might be relaxed a bit for client-server applications. The reason is that client-server interaction mostly takes place between trusted/known sources, which is not the case for Web-based applications.

In a nutshell, although Web-based applications are a special case of client-server applications, their testing differs in many areas. All the areas identified above need to be addressed adequately in your testing, especially security, since every client connected in the Web environment is a potential threat to the system.
Hopefully this article helps you appreciate the difference between client-server architecture and Web application architecture, how testing applications based on these architectures differs, and the importance of security testing in Web application testing.

Handy MySQL Commands

Each entry below gives a description followed by the corresponding command.

To login (from unix shell) use -h only if needed.

[mysql dir]/bin/mysql -h hostname -u root -p

Create a database on the sql server.

create database [databasename];

List all databases on the sql server.

show databases;

Switch to a database.

use [db name];

To see all the tables in the db.

show tables;

To see a table's field formats.

describe [table name];

To delete a db.

drop database [database name];

To delete a table.

drop table [table name];

Show all data in a table.

SELECT * FROM [table name];

Returns the columns and column information pertaining to the designated table.

show columns from [table name];

Show certain selected rows with the value "whatever".

SELECT * FROM [table name] WHERE [field name] = "whatever";

Show all records containing the name "Bob" AND the phone number '3444444'.

SELECT * FROM [table name] WHERE name = "Bob" AND phone_number = '3444444';

Show all records not containing the name "Bob" AND the phone number '3444444' order by the phone_number field.

SELECT * FROM [table name] WHERE name != "Bob" AND phone_number = '3444444' order by phone_number;

Show all records starting with the letters 'bob' AND the phone number '3444444'.

SELECT * FROM [table name] WHERE name like "Bob%" AND phone_number = '3444444';

Use a regular expression to find records. Use "REGEXP BINARY" to force case-sensitivity. This finds any record beginning with 'a'.

SELECT * FROM [table name] WHERE rec RLIKE "^a";

Show unique records.

SELECT DISTINCT [column name] FROM [table name];

Show selected records sorted in ascending (asc) or descending (desc) order.

SELECT [col1],[col2] FROM [table name] ORDER BY [col2] DESC;

Return number of rows.

SELECT COUNT(*) FROM [table name];

Sum a column.

SELECT SUM([column name]) FROM [table name];

Join tables on common columns (here, joining the birthday in the person table to the illustration id in the lookup table).

select lookup.illustrationid, lookup.personid, person.birthday from lookup left join person on lookup.personid = person.personid;
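The LEFT JOIN pattern above can be tried without a MySQL server using Python's built-in sqlite3 module, which accepts the same join syntax; the table and column names follow the example, and the data is made up:

```python
import sqlite3

# In-memory database mimicking the lookup/person tables in the example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lookup (illustrationid INTEGER, personid INTEGER)")
con.execute("CREATE TABLE person (personid INTEGER, birthday TEXT)")
con.execute("INSERT INTO lookup VALUES (1, 10), (2, 20)")
con.execute("INSERT INTO person VALUES (10, '1970-01-01')")

# LEFT JOIN keeps every lookup row; unmatched person columns come back NULL.
rows = con.execute(
    "SELECT lookup.illustrationid, lookup.personid, person.birthday "
    "FROM lookup LEFT JOIN person ON lookup.personid = person.personid "
    "ORDER BY lookup.illustrationid"
).fetchall()
print(rows)  # [(1, 10, '1970-01-01'), (2, 20, None)]
```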

Switch to the mysql db. Create a new user.

INSERT INTO [table name] (Host,User,Password) VALUES('%','user',PASSWORD('password'));

Change a user's password (from unix shell).

[mysql dir]/bin/mysqladmin -u root -h hostname.blah.org -p password 'new-password'

Change a user's password (from the MySQL prompt).

SET PASSWORD FOR 'user'@'hostname' = PASSWORD('passwordhere');

Switch to the mysql db. Give a user privileges for a db.

INSERT INTO [table name] (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,Create_priv,Drop_priv) VALUES ('%','db','user','Y','Y','Y','Y','Y','N');

To update info already in a table.

UPDATE [table name] SET Select_priv = 'Y',Insert_priv = 'Y',Update_priv = 'Y' where [field name] = 'user';

Delete a row(s) from a table.

DELETE from [table name] where [field name] = 'whatever';

Update database permissions/privileges.

FLUSH PRIVILEGES;

Delete a column.

alter table [table name] drop column [column name];

Add a new column to a table.

alter table [table name] add column [new column name] varchar (20);

Change column name.

alter table [table name] change [old column name] [new column name] varchar (50);

Make a unique column so you get no dupes.

alter table [table name] add unique ([column name]);

Make a column bigger (modify sets the new size, e.g. widening to 255 characters).

alter table [table name] modify [column name] VARCHAR(255);

Delete unique from table.

alter table [table name] drop index [column name];

Load a CSV file into a table.

LOAD DATA INFILE '/tmp/filename.csv' replace INTO TABLE [table name] FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (field1,field2,field3);

Dump all databases for backup. Backup file is sql commands to recreate all db's.

[mysql dir]/bin/mysqldump -u root -ppassword --opt >/tmp/alldatabases.sql

Dump one database for backup.

[mysql dir]/bin/mysqldump -u username -ppassword --databases databasename >/tmp/databasename.sql

Dump a table from a database.

[mysql dir]/bin/mysqldump -c -u username -ppassword databasename tablename > /tmp/databasename.tablename.sql

Restore database (or database table) from backup.

[mysql dir]/bin/mysql -u username -ppassword databasename < /tmp/databasename.sql

Create Table Example 1.

CREATE TABLE [table name] (firstname VARCHAR(20), middleinitial VARCHAR(3), lastname VARCHAR(35),suffix VARCHAR(3),
officeid VARCHAR(10),userid VARCHAR(15),username VARCHAR(8),email VARCHAR(35),phone VARCHAR(25), groups
VARCHAR(15),datestamp DATE,timestamp time,pgpemail VARCHAR(255));

Create Table Example 2.

create table [table name] (personid int(50) not null auto_increment primary key,firstname varchar(35),middlename varchar(50),lastname varchar(50) default 'bato');

Monday, June 11, 2007

Testing Skills

Essential Testing Skills needed for Testers:


Test Planning: Analyzing a project to determine the kinds of testing needed, the kinds of people needed, the scope of testing (including what should and should not be tested), the time available for testing activities, the initiation criteria for testing, the completion criteria and the critical success factors of testing.
Test Tool Usage: Knowing which tools are most appropriate in a given testing situation, how to apply the tools to solve testing problems effectively, how to organize automated testing, and how to integrate test tools into an organization.
Test Execution: Performing various kinds of tests, such as unit testing, system testing, UAT, stress testing and regression testing. This also includes how to determine which conditions to test and how to evaluate whether the system under test passes or fails. Test execution can often depend on your unique environment and project needs, although basic testing principles can be adapted to test most projects.
Defect Management: Understanding the nature of defects, how to report defects, how to track defects and how to use the information gained from defects to improve the development and testing processes.
Risk Analysis: Understanding the nature of risk, how to assess project and software risks, how to use the results of a risk assessment to prioritize and plan testing, and how to use risk analysis to prevent defects and project failure.
Test Measurement: Knowing what to measure during a test, how to use the measurements to reach meaningful conclusions and how to use measurements to improve the testing and development processes.