Friday, August 10, 2007

Major Portions in SQL Server testing

SQL Server testing is quite different from other types of testing. It can be divided into the following major portions:

a) Code compliance testing.

b) Performance Testing.

c) Integration Testing.

Code Compliance Testing:

Under code compliance testing, the code is checked for compliance with the requirements. A developer can implement a requirement in different ways.

e.g.: A typical developer will divide the code into different functions/procedures for reuse, while a careless developer will put all the functionality in the same block of code. As a tester you cannot say that this is wrong, but you can recommend which approach is better.

When a piece of code arrives, the tester has to look into the following areas of SQL Server code compliance testing:

a) Parameter passage.

b) Code blocking (Begin / End).

c) Conditions / loops.

d) Exception handling.

e) Code commenting / alignment.

f) Return values.

Parameter passage: Parameters play a very important role; check their data types, lengths, and direction (input/output, etc.).
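As a minimal sketch of what to check (the procedure, table, and column names here are hypothetical), a tester would verify that each parameter's data type, length, and direction match the requirement:

```sql
-- Hypothetical procedure: note the declared length and the OUTPUT direction.
CREATE PROCEDURE dbo.GetCustomerName
    @CustomerId   INT,                   -- input: must match the key column's data type
    @CustomerName NVARCHAR(100) OUTPUT   -- output: length must cover the source column
AS
BEGIN
    SELECT @CustomerName = Name
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
END
```

A mismatch here (e.g. a varchar(50) parameter against an nvarchar(100) column) silently truncates data, which is exactly the kind of defect compliance testing should catch.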

Code Blocking: Fast coders will leave out BEGIN and END for single statements, and when they modify the code later they will forget to add them, so a piece of code will execute even though the condition fails. As a tester you need to make sure of which statements should be inside a block and which should be outside.
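A minimal sketch of the trap described above (table and variable names are hypothetical, and the variables are assumed to be declared parameters): the statement added later always runs because BEGIN/END was never introduced:

```sql
-- Buggy: only the first statement is guarded by the IF.
IF @Qty > 0
    UPDATE dbo.Stock SET Qty = Qty - @Qty WHERE ItemId = @ItemId;
    PRINT 'Stock updated';   -- indentation lies: this runs even when @Qty <= 0

-- Fixed: both statements live inside an explicit block.
IF @Qty > 0
BEGIN
    UPDATE dbo.Stock SET Qty = Qty - @Qty WHERE ItemId = @ItemId;
    PRINT 'Stock updated';
END
```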

Conditions / loops: A simple condition is easy to understand, but a long condition can confuse the developer; a double negation ("not of not") is a common example. The meaning of a loop also changes between "do while" and "while" usage. As a tester you need to make sure everything is properly conditioned and looped. We can put this under "branch testing".
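For instance, a "not of not" condition like the following (the variable is hypothetical) is logically equivalent to the simple form below it, but far easier to misread during a later change:

```sql
-- Double negation: easy to misread when modifying the code.
IF NOT (@Status <> 'ACTIVE')
    PRINT 'Active';

-- Equivalent simple condition: what the reviewer should recommend.
IF @Status = 'ACTIVE'
    PRINT 'Active';
```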

Exception handling: This is a very important area, which comes under "range testing". Everything has to be reported back to the caller, whether it is the requested result or no result at all. Out-of-bound and in-bound values, failure of transactions, etc. are the highlights of exception handling.
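A hedged sketch of transaction-failure handling (available from SQL Server 2005 onward; the table and variable names are hypothetical and the variables are assumed to be declared parameters): a failed transfer is rolled back and the failure is reported back to the caller instead of being swallowed:

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Account SET Balance = Balance - @Amount WHERE AccountId = @From;
    UPDATE dbo.Account SET Balance = Balance + @Amount WHERE AccountId = @To;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- Report the failure back to the caller, as the article recommends.
    DECLARE @Msg NVARCHAR(2048);
    SELECT @Msg = ERROR_MESSAGE();
    RAISERROR(@Msg, 16, 1);
END CATCH
```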

Code commenting / alignment: This is a lower-priority task compared to the others, but it plays a very important role in future versions and maintenance of the code. There should be a proper comment for each item that is confusing or that the developer thinks "should have a proper explanation"; alignment makes the code readable.

Return values: For some of the code, return values are mandatory, and the return values should be in the context of the requirement, both when there is a result and when there is none. Here boundary value checking plays a major role, and this comes under boundary and range value checking.
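A small sketch of the "result and no result" cases (the procedure, table, and return-code convention are hypothetical): the caller is told explicitly when nothing matched:

```sql
-- Hypothetical convention: 0 = success, 1 = no matching row.
CREATE PROCEDURE dbo.DeleteOrder
    @OrderId INT
AS
BEGIN
    DELETE FROM dbo.[Order] WHERE OrderId = @OrderId;
    IF @@ROWCOUNT = 0
        RETURN 1;  -- no result: the caller must be informed, not left guessing
    RETURN 0;      -- requested result achieved
END
```

Boundary values for @OrderId (0, negative values, the maximum key in use) are the natural test inputs here.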

Performance Testing:

This overlaps with the DBA's (Database Administrator's) role. The performance of a piece of code plays a major role in network-based projects, once the code is in compliance with the requirement.

Please do remember that if the code is not in compliance with the requirements, then there is no point in measuring its performance.

Code performance has to be checked in following area(s):

a) SQL Queries.
b) Indexing.
c) Disabling / enabling of items – bulk insertions.
d) Clustering.
e) Database designing.
f) Data types.
g) Locking / unlocking.

SQL Queries: This is the closest area in which to look for optimization of the code. Queries are the means of interacting with data on SQL Server. SQL queries can be written in different forms, and some of them can be differentiated only with the "Execution Plan" to identify the optimized one. Most of the time the tester has to look into "JOIN" conditions and another well-known area, the "IN" condition, where the developer can use "EXISTS" to make the code perform better.
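A sketch of the IN-versus-EXISTS pattern mentioned above (table names are hypothetical; on modern optimizers the two often produce the same plan, so comparing the actual execution plans is the real test):

```sql
-- IN with a subquery: may evaluate the whole subquery result.
SELECT c.Name
FROM dbo.Customer AS c
WHERE c.CustomerId IN (SELECT o.CustomerId FROM dbo.[Order] AS o);

-- EXISTS: can stop probing as soon as one matching row is found.
SELECT c.Name
FROM dbo.Customer AS c
WHERE EXISTS (SELECT 1 FROM dbo.[Order] AS o
              WHERE o.CustomerId = c.CustomerId);
```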

Indexing: This is the area where the speed/performance of the requested queries can be increased. The database designer will identify where indexing (clustered / non-clustered) may be required, and as a tester you need to verify that all the indexes are properly configured and actually used by the query list.
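As a minimal sketch (index and table names are hypothetical), a non-clustered index supporting the lookup column of a frequent query:

```sql
-- Hypothetical: supports frequent lookups of orders by customer.
CREATE NONCLUSTERED INDEX IX_Order_CustomerId
    ON dbo.[Order] (CustomerId);
```

The tester's verification is to run the query list with "Execution Plan" enabled and confirm an index seek is used rather than a table scan.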

Bulk Transactions (Disabling / Enabling of items): When there is a bulk data transaction, we must make sure that certain elements/objects are disabled; triggers, for example. When millions of records have to be processed, it will take an enormous amount of time if triggers are enabled. Make sure that the required objects are enabled/disabled.
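A hedged sketch of the disable-load-enable pattern (the table, trigger, and file path are hypothetical):

```sql
-- Disable the audit trigger before the bulk load...
ALTER TABLE dbo.SalesHistory DISABLE TRIGGER trg_Audit;

BULK INSERT dbo.SalesHistory
FROM 'C:\load\sales.dat';

-- ...and re-enable it afterwards; the tester verifies both steps happened.
ALTER TABLE dbo.SalesHistory ENABLE TRIGGER trg_Audit;
```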

Clustering: SQL Server clustering can provide fault tolerance for many aspects of a SQL Server, such as hardware, network, operating system, and application failure. Make sure that it does not affect performance and that the database is clustered on the required portions.

Database designing: This is the basic area in which to look for optimization; too much normalization will affect performance. As a tester you have to identify which areas (reports, query results, etc.) require normalization, and which other areas of data manipulation (insertion/update, etc.) work better with de-normalization.

Data Types: Lazy designers/developers always use oversized data types. For example, when a true/false value is required, they will keep constants such as "VALID" or "INVALID" and use varchar(8). These data types consume more memory than the code actually uses, which in turn increases paging.
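The article's own example can be sketched as two hypothetical table definitions; the second stores the same two-state fact in a single BIT instead of up to eight characters per row:

```sql
-- Oversized: up to 8 bytes per row for a two-state flag.
CREATE TABLE dbo.Ticket_Wide   (TicketId INT, Status VARCHAR(8));  -- 'VALID' / 'INVALID'

-- Right-sized: BIT carries the same information.
CREATE TABLE dbo.Ticket_Narrow (TicketId INT, IsValid BIT);        -- 1 / 0
```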

Locking / unlocking: Some code will lock certain portions, and until the lock is released, other operations cannot be performed. As a tester you need to make sure that the correct portion is locked instead of using global locking; e.g., when row-level locking is available you may not require table-level locking.
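A sketch of the row-level versus table-level contrast using SQL Server table hints (table name and values are hypothetical; hints only request a granularity, the engine may still escalate):

```sql
-- Row-level: other rows in dbo.Account remain available to other sessions.
UPDATE dbo.Account WITH (ROWLOCK)
SET Balance = Balance - 100
WHERE AccountId = 42;

-- Global/table-level: an exclusive table lock blocks every other writer.
UPDATE dbo.Account WITH (TABLOCKX)
SET Balance = Balance - 100
WHERE AccountId = 42;
```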
Integration Testing:

In order to get certain portions of data, we may require the integration of different servers. The following are the major areas to look at from the integration aspect:

a) Calling of External Stored procedure.

b) Usage of multiple Servers.

Extended stored procedures are the most complex items to test, since they can call any Windows component; a discussion of extended stored procedures is a vast area and out of the scope of this article. Validation of multiple-server usage (linked servers) should be done on exception-throwing cases, such as when one of the servers is down.
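A hedged sketch of a linked-server query with the "server is down" case guarded (the linked server [REMOTESRV] and database SalesDb are hypothetical; note that some connection errors are severe enough to abort the batch before the CATCH block runs, which is itself worth testing):

```sql
-- Four-part name: server.database.schema.object
BEGIN TRY
    SELECT COUNT(*) AS RemoteOrders
    FROM [REMOTESRV].SalesDb.dbo.[Order];
END TRY
BEGIN CATCH
    -- The exception-throwing case to validate: the remote server is unreachable.
    PRINT 'Linked server unreachable: ' + ERROR_MESSAGE();
END CATCH
```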

Other focus areas: As a SQL Server tester, you also have to know general items like:

a) Writing test plan.

b) Writing test cases.

c) Usage of bug track system.

d) Differentiating bugs with triage items.

e) Documenting required items.

To become a good tester you need to invest your valuable time in different aspects. Henceforth, if you are a tester, make yourself a creative tester.

Tuesday, July 10, 2007

Testing Tech' questions

1. What Technical Environments have you worked with?
Ans) Java, .NET; include the testing environments that you have worked with.
2. Have you ever converted Test Scenarios into Test Cases?
Ans) Yes: define a suite of test scenarios, develop test cases that will validate the scenarios, and create test data to support the test cases.
3. What is the ONE key element of 'test case'?
4. What is the ONE key element of a Test Plan?
5. What is SQA testing? tell us steps of SQA testing
Ans) SQA is used to remove defects from software. Steps: create a test strategy, test plan, test scenarios, and test cases; then test execution, test reporting, and a test log.
6. How do you promote the concept of phase containment and defect prevention?
Ans) Both are metrics, and both provide insight into the strengths and weaknesses of the error and defect detection processes in each phase of the SDLC.
7. Which Methodology you follow in your Testcase?
Ans) Equivalence partitioning, Boundary value analysis, Error Guessing
8. Specify the tools used by MNC companies
Ans) It depends on which tools the company can afford to buy.
9. What are the test cases prepared by the testing team
Ans) Integration, system, acceptance (user), performance.
10. During the start of the project, how will the company come to a conclusion on whether a tool is required for testing or not?
Ans) In my company, they decide based on the client's request; it also depends on cost, time, and resources.
11. Define Bug Life Cycle? What is Metrics
12. What is a Test procedure?
13. What is the difference between SYSTEM TESTING and END-TO-END TESTING?
14. What is a Traceability Matrix? Is there any interchangeable term for Traceability Matrix? Are Traceability Matrix and Test Matrix the same or different?
Ans) They are different.
A traceability matrix traces IDs from the requirement specification ID through to the defect ID, e.g. tracking a specific defect back to the specific requirement it belongs to.
A test metric is a mathematical formula derived from various factors (defects raised, test cases passed, etc.).
15. What is the difference between an exception and an error?
Ans) An exception is a runtime condition that the program can catch and handle, whereas an error is a more serious failure that normally cannot be recovered from.
16. Correct bug tracking process - Reporting, Re-testing, Debugging, ...?
Ans) Reporting, debugging, re-testing.
17. What is the difference between bug and defect?
Ans) Due to mistakes in coding, testers find mismatches in the application and post them as defects. In the development environment, if the developer accepts that the posted defect is a bug, then he will fix it.
Note: A bug deviates from the expected result.
18. How much time is/should be allocated for testing out of total development time, based on industry standards?
Ans) The time will be included in the test plan, and it also depends on the project; as a minimum, 25%-35% of development time (time taken for analysis and coding) should be allocated to the testing process.
19. What are test bugs?
Ans) Bugs found while running test cases are ordinary code or design bugs; a "test bug" properly refers to a defect in the test artifacts themselves, e.g. a wrong test case or script that reports a false failure.
20. Define Quality - bug free, Functionality working or both?
Ans) Quality is somewhat dependent on requirements. If the product is released as per user requirements and doesn't have complications, I would say it is a quality product. Bug-free: if the product has reached 80% of the user requirements and there are no high-severity, high-priority bugs, the product could be considered bug-free.
21. If you have an application, but you do not have any requirements available, then how would you perform the testing?
Ans) By exploratory or ad-hoc testing. By going through the database design, we will get an idea of how to approach testing; or first understand the functionality of the application by asking the business analyst or PM, learn the business requirement behind it, and get justification from them.
22. How can you know if a test case is necessary?
Ans) if the requirement upon an application is not satisfied then we write testcase for that particular requirement. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement unless a requirement has sub requirements. In that situation, each sub requirement must have at least one test case.
23. What is peer review in practical terms?
Ans) A peer review is a one-to-one meeting; usually it will be done by the team lead.
24. How do you know when you have enough test cases to adequately test a software system or module?
Ans) make sure the test cases covers the entire functionality to be tested and traceable with requirements.
25. Who approved your test cases?
Ans) test lead
26. What will you do when you find a bug?
Ans) Post the bug in bug tracking tool
27. What test plans have you written?
28. What is QA? What is Testing? Are they both same or different?
29. How do you write a negative test case? Give an example.
30. In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?
Ans) If time is short, then we test only the modified module of code; otherwise, if that particular module affects other modules' functionality, then we have to test both modules' functionality (what we call impact analysis).
31. What is included in test strategy? What is overall process of testing step by step and what are various documnets used testing during process?
Ans) A test strategy is a procedure for how to test the software: a plan of what is to be tested (screens, processes, modules, ...) and the time limits for the testing process (automated or manual). Everything has to be planned and implemented. The overall testing procedure: the software tester's duty is to go through the requirement documents and functional specification and, based on those documents, focus on writing test cases which cover all the functionality to be tested. The tester should carry out these procedures while the application is under development, so that once the build is ready for testing we know what to test and how to proceed.
32. What is the most challenging situation you had during testing?
Ans) Tell your own challenging situation that you have faced (a good answer is working at the client site and handling client-side testing).
33. What are you going to do if there is no Functional Spec or any documents related to the system and developer who wrote the code does not work in the company anymore, but you have system and need to test?
Ans) Refer to existing test cases if the functionality of the application is known; otherwise refer to the database, so you will gain knowledge of the application.
34. What is the major problem you resolved during the testing process?
Ans) Testing is aimed at showing where the software does not work; describe a problem you found and resolved.
35. What are the types of functional testing?
36.
1. How will you write integration test cases
Ans) Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from one component to the other.

So the Integration Test cases should typically focus on scenarios where one component is being called from another. Also the overall application functionality should be tested to make sure the app works when the different components are brought together.

2. How will you track bugs from WinRunner?
3. How will you customise the bugs as pass/fail?
4. You find a bug; how will you repair it?
5. In test cases, do you have a bug or not?
6. What is a use case? What does it contain?
37. What is the difference between smoke testing and sanity testing
38. What is Random Testing?
39. What is smoke testing?
40. What is stage containment in testing?
Ans) The goal of stage containment is to identify defects in the system during development before they are passed to the next stage. This helps build quality into the system. Finding problems or errors in the stage they occur in is important because problems become more expensive and difficult to fix later in the project life cycle.
Apply stage containment to all project development stages using the following standard practices:
Entry criteria
Exit criteria
Entry/Exit Criteria: Entry and exit criteria are sets of conditions that must be satisfied before entering or exiting a project stage.

I have added a few more questions:

What is the difference between the spiral, waterfall and prototype models?
Waterfall model: used when the customer requirements are clear and complete.
Prototype model: used when the customer requirements are unclear or ambiguous, or when the project team builds a sample first.
Spiral model: used when the requirements of the customer keep growing (improving and extending).
What is the difference between a web-based and a client-server application?
A client is defined as a requester of services and a server is defined as the provider of services. A single machine can be both a client and a server, depending on the software configuration.
Web applications are popular because the browser acts as the client, sometimes called a thin client. The ability to update and maintain web applications without distributing and installing software on potentially thousands of client computers is a key reason for their popularity.
Which model is best among Waterfall, Prototype and Spiral, and why?
The waterfall model is good for smaller projects, although it takes longer to complete since each phase must be completed in its entirety before the next phase can begin. The spiral model is good for larger projects: a high amount of risk is analyzed in each phase of development, but it is an expensive model to use.
What are the components of a defect?
A defect is a variance from a desired product attribute:
1. Variance from the product specification.
2. Variance from customer/user expectations.
The categories of defects are generally: wrong (implemented incorrectly), missing (a specified requirement is not in the built product), and extra (something additionally incorporated into the product).
Can anybody tell me about Non-Functional Requirements, with a suitable example?
Non-functional requirements are properties the product must have, such as the desired look and feel, usability, performance, cultural aspects and so on.
The graphical display of the application is a non-functional requirement. E.g., that an image needs to be displayed is a functional requirement, but how that image is presented, when it is not linked with any other object/link, is non-functional; so the image's presentation would be considered a non-functional requirement.

Thursday, June 28, 2007

30 WinRunner Interview Questions

Which scripting language is used by WinRunner ?

WinRunner uses TSL-Test Script Language (Similar to C)

What's WinRunner ?

WinRunner is Mercury Interactive's functional testing tool.

How many types of Run Modes are available in WinRunner ?

WinRunner provides three types of Run Modes.
Verify Mode
Debug Mode
Update Mode

What's the Verify Mode ?

In Verify Mode, WinRunner compares the current results of the application to its expected results.

What's the Debug Mode ?

In Debug Mode, WinRunner tracks the defects in a test script.

What's the Update Mode?


In Update Mode, WinRunner updates the expected results of the test script.

How many types of recording modes available in WinRunner ?

WinRunner provides two types of Recording Mode:
Context Sensitive
Analog

What's the Context Sensitive recording ?


WinRunner captures and records the GUI objects, windows, keyboard inputs, and mouse click activities through Context Sensitive Recording.

When Context Sensitive mode is to be chosen ?

a. The application contains GUI objects
b. Does not require exact mouse movements.


What's the Analog recording ?

It captures and records keyboard inputs, mouse clicks and mouse movements. It does not capture GUI objects and windows.

When Analog mode is to be chosen ?

a. The application contains bitmap areas.
b. Does require exact mouse movements.


What are the components of WinRunner ?

a. Test Window: the window where the TSL script is generated/programmed.
b. GUI Spy tool: WinRunner lets you spy on GUI objects by recording their properties.

Where are Debug Results stored ?

Debug Results are always saved in the debug folder.

What's WinRunner testing process ?

WinRunner involves six main steps in testing process.
Create GUI map
Create Test
Debug Test
Run Test
View Results
Report Defects

What's the GUI SPY ?


You can view the physical properties of objects and windows through GUI SPY.

How many types of modes for organizing GUI map files ?

WinRunner provides two types of modes-
Global GUI map files
Per Test GUI map files

What is contained in GUI map files ?

GUI map files store the information WinRunner learns about the GUI objects and windows.

How does WinRunner recognize objects on the application ?


WinRunner recognize objects on the application through GUI map files.

What's the difference between GUI map and GUI map files ?

The GUI map is actually the sum of one or more GUI map files.

How do you view the GUI map content ?

We can view the GUI map content through GUI map editor.

What's a checkpoint ?

A checkpoint enables you to check your application by comparing its expected results to the actual results.

What's the Execution Arrow ?

Execution Arrow indicates the line of script being executed.

What's the Insertion Point ?

Insertion point indicates the line of script where you can edit and insert the text.

What's the Synchronization ?

Synchronization enables you to solve anticipated timing problems between the test and the application.

What's the Function Generator ?

The Function Generator provides a quick and error-free way to add TSL functions to the test script.

How many types of checkpoints are available in WinRunner ?

WinRunner provides four types of checkpoints-
GUI Checkpoint
Bitmap Checkpoint
Database Checkpoint
Text Checkpoint

What's contained in the Test Script ?

The Test Script contains Test Script Language (TSL) statements.

How do you modify the logical name or the physical description of the objects in GUI map ?

We can modify the logical name or the physical description of the objects through GUI map editor.

What are the Data Driven Test ?

When you test your application, you may want to check how it performs the same operations with multiple sets of data.

How do you record a Data Driven Test ?

We can create a Data Driven Test through Flat Files, Data Tables, and Database.

How do you clear a GUI map files ?

We can clear the GUI map files through "CLEAR ALL" option.

What are the steps of creating a Data Driven Test ?

Data Driven Testing have four steps-
Creating test
Converting into Data Driven Test
Run Test
Analyze test

What is Rapid Test Script Wizard ?

It performs two tasks.
a. It systematically opens the windows in your application and learns a description of every GUI object. The wizard stores this information in a GUI map file.
b. It automatically generates tests based on the information it learned as it navigated through the application.

What are the different modes in learning an application under Rapid test script wizard ?

a. Express
b. Comprehensive.

What's the extension of GUI map files ?

GUI map files extension is ".gui".

What statement is generated by WinRunner when you check any object ?

The obj_check_gui statement.

What statement is generated by WinRunner when you check any window ?

The win_check_gui statement.

What statement is generated by WinRunner when you check a bitmap image over an object ?

The obj_check_bitmap statement.

What statement is generated by WinRunner when you check a bitmap image over a window ?

The win_check_bitmap statement.

What statement is used by WinRunner in Batch Testing ?

The "call" statement.

Which shortcut key is used to freeze the GUI Spy ?

"Ctrl+F3"

How many types of parameter used by WinRunner ?

WinRunner provides three types of Parameter-
Test
Data Driven
Dynamic

How many types of Merging used by WinRunner ?

WinRunner used two types of Merging-
Auto
Manual

What's the Virtual Object Wizard ?

Whenever WinRunner is not able to recognize an object as an object, it uses the Virtual Object Wizard.

How do you handle unexpected events and errors ?

WinRunner uses the Exception Handling function to handle unexpected events and errors.

How do you comment your script ?

We comment script or line of the script by inserting "#" at the beginning of script line.

What's the purpose of the set_window command ?

The set_window command sets the focus to the specified window.

How did you create your test script ?

By programming.

What's the command to invoke an application ?

invoke_application

What do you mean by the logical name of an object ?

The logical name of an object is determined by its class, but in most cases the logical name is the label that appears on the object.

How many types of GUI checkpoints are there ?

In WinRunner, there are three types of GUI checkpoints-
For Single Properties
For Objects/Windows
For Multiple Objects

How many types of Bitmap Checkpoints are there ?

In WinRunner, there are two types of Bitmap Checkpoints-
For Objects/Windows
For Screen Area

How many types of Database Checkpoints are there ?

In WinRunner, there are three types of Database Checkpoints-
Default Check
Custom Check
Runtime Record Check

How many types of Text Checkpoints are there ?

In WinRunner, there are four types of Text Checkpoints-
For Objects/Windows
From Screen Area
From Selection (Web Only)
Web text Checkpoints

What add-ins are available for WinRunner ?

Add-ins are available for Java, ActiveX, WebTest, Siebel, Baan, Stingray, Delphi, Terminal Emulator, Forte, NSDK/Natstar, Oracle and PowerBuilder.

Notes:

* WinRunner generates a menu_select_item statement whenever you select a menu item.
* WinRunner generates a set_window statement whenever you begin working in a new window.
* WinRunner generates an edit_set statement whenever you enter keyboard input.
* WinRunner generates an obj_mouse_click statement whenever you click an object with the mouse pointer.
* WinRunner generates obj_wait_bitmap or win_wait_bitmap statements whenever you synchronize the script on objects or windows.
* The ddt_open statement opens the table.
* The ddt_close statement closes the table.
* WinRunner inserts a win_get_text or obj_get_text statement in the script for checking text.
* The button_press statement presses buttons.
* WinRunner generates a list_item_select statement whenever you select a value in a drop-down menu.
* We can compare two files in WinRunner using the file_compare function.
* The tl_step statement is used to determine whether a section of a test passes or fails.
* The call_close statement closes the test when the test is completed.

32 QTP Interview Questions

Full form of QTP ?

Quick Test Professional

What's the QTP ?

QTP is Mercury Interactive's functional testing tool.

Which scripting language used by QTP ?

QTP uses VB scripting.

What's the basic concept of QTP ?

QTP is based on two concepts-
* Recording
* Playback

How many types of recording facilities are available in QTP ?

QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

How many types of Parameters are available in QTP ?

QTP provides three types of Parameters-
* Method Argument
* Data Driven
* Dynamic

What's the QTP testing process ?


The QTP testing process consists of seven steps-
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

What's the Active Screen ?

It provides a snapshot of your application as it appeared when you performed a certain step during the recording session.

What's the Test Pane ?

The Test Pane contains the Tree View and Expert View tabs.

What's Data Table ?

It assists you in parameterizing the test.

What's the Test Tree ?

It provides a graphical representation of the operations you have performed on your application.

Which environments does QTP support ?

ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL

How can you view the Test Tree ?

The Test Tree is displayed in the Tree View tab.

What's the Expert View ?

The Expert View displays the test script.

Which key is used for Normal Recording ?

F3

Which key is used to run the test script ?

F5

Which key is used to stop recording ?

F4

Which key is used for Analog Recording ?

Ctrl+Shift+F4

Which key is used for Low Level Recording ?

Ctrl+Shift+F3

Which key is used to switch between Tree View and Expert View ?

Ctrl+Tab

What's the Transaction ?

You can measure how long it takes to run a section of your test by defining transactions.
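QTP marks the start and end of a transaction in the script itself; as a rough, tool-agnostic illustration of the same idea (this is plain Python, not QTP syntax, and the "section" being timed is invented), timing a section of a test looks like this:

```python
import time

# Hypothetical stand-in for a "section of your test" whose duration we
# measure, analogous to wrapping it in a QTP transaction.
def section_under_test():
    return sum(range(100_000))

start = time.perf_counter()            # analogous to "start transaction"
result = section_under_test()
elapsed = time.perf_counter() - start  # analogous to "end transaction"

print(f"section returned {result}, took {elapsed:.4f} seconds")
```

The elapsed time is what a transaction report would show for that section.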

Where you can view the results of the checkpoint ?

You can view the results of the checkpoints in the Test Result Window.

What's the Standard Checkpoint ?

A Standard Checkpoint checks the property values of an object in your application or web page.

Which environments are supported by Standard Checkpoints ?

Standard Checkpoints are supported in all add-in environments.

What's the Image Checkpoint ?

An Image Checkpoint checks the value of an image in your application or web page.

Which environments are supported by Image Checkpoints ?

Image Checkpoints are supported only in the Web environment.

What's the Bitmap Checkpoint ?

A Bitmap Checkpoint checks bitmap images in your web page or application.

Which environments are supported by Bitmap Checkpoints ?

Bitmap Checkpoints are supported in all add-in environments.

What's the Table Checkpoint ?

A Table Checkpoint checks the information within a table.

Which environments are supported by Table Checkpoints ?

Table Checkpoints are supported only in the ActiveX environment.

What's the Text Checkpoint ?

A Text Checkpoint checks that a text string is displayed in the appropriate place in your application or web page.

Which environments are supported by Text Checkpoints ?

Text Checkpoints are supported in all add-in environments.

Note:


* QTP records each step you perform and generates a test tree and test script.

* QTP records in normal recording mode by default.

* If you are creating a test on a web object, you can record your test on one browser and run it on another browser.

* Analog Recording and Low Level Recording require more disk space than normal recording mode.

Pros & Cons of the V-Model and Waterfall Model

Pros & Cons of the Waterfall Model

Pros:
* Enforced discipline through documents.
* No phase is complete until the docs are done and checked by the SQA group.
* Concrete evidence of progress.
* Testing is inherent in every phase.

Cons:
* No fair division of phases in the life cycle.
* A phase cannot start until the previous phase has finished.
* Document-driven model; as a result, customers cannot understand these documents.
* Re-design is problematic.

Pros & Cons of the V-Model

Pros:
* Simple and easy to use.
* Each phase has specific deliverables.
* Higher chance of success over the waterfall model due to the development of test plans early in the life cycle.
* Works well for small projects where requirements are easily understood.

Cons:
* Very rigid, like the waterfall model.
* Little flexibility; adjusting scope is difficult and expensive.
* Software is developed during the implementation phase, so no early prototypes are produced.
* The model doesn't provide a clear path for problems found during testing phases.

Wednesday, June 27, 2007

Difference between testing a CLIENT-SERVER application and a WEB application

The main difference is:
In both we perform load and performance testing. Testing an application on an intranet is an example of client-server testing.

Testing an application on the Internet (through a browser) is called web testing.

Web Server vs. Application Server

* A web server serves pages for viewing in a web browser; an application server exposes business logic to client applications through various protocols.
* A web server exclusively handles HTTP requests; an application server serves business logic to application programs through any number of protocols.
* The web server delegation model is fairly simple: when a request comes in, the web server simply passes it to the program best able to handle it (a server-side program). It may not support transactions or database connection pooling.
* An application server is more capable of dynamic behaviour than a web server. You can also configure an application server to work as a web server; simply put, an application server is a superset of a web server.
* A web server serves static HTML pages, gifs, jpegs, etc., and can also run code written in CGI, JSP, etc. A web server handles the HTTP protocol. Examples of web servers are IIS and Apache.
* An application server is used to run business logic or dynamically generated presentation code. It can be either .NET based or J2EE based (BEA WebLogic Server, IBM WebSphere, JBoss).

A J2EE application server runs servlets and JSPs (in fact, a part of the app server called the web container is responsible for running servlets and JSPs) that are used to create HTML pages dynamically. In addition, a J2EE application server can run EJBs, which are used to execute business logic.


Saturday, June 23, 2007

Writing Test Case Using Use cases

Writing Test Case Using Use cases

Use case: Tag Notification to the Group
Confidence:Y
Exec Method: M
TestPlan:
Build #:
Pass/Fail:
TestScript:
CR#:
Comment:
TestCase:
Based On: UC

TestData:
Specific Preconditions
· Configure the HMI and ensure that the alarm has been generated
· Configure three operators in the “ON CALL” list
· Every operator in the group should be configured with a voice phone number
Basic course

Input Specifications
Output Specifications
1. The configured HMI generates an alarm and SCADAlarm calls the configured operator.
2. Verify the number of retries without answering the call

o The number of retries equals the value of “Number of retries before moving on to the next call” configured under Configuration >> System Parameter >> Retrying tab
3. Verify that the call is diverted to the next person in the call list/schedule after exceeding the value of “Number of retries before moving on to the next call” under Configuration >> System Parameter >> Retrying tab
o The application should divert the call to the next person as per the call list/schedule, depending on the alarm condition
4. The second operator receives the call on the configured number
o The second operator is greeted if the greeting file is configured
5. Enter a valid Operator ID and PIN as requested by the SCADAlarm TUI

o Upon authentication the operator should be logged in to the SCADAlarm application
6. Press the “0” key on the telephone to log out from SCADAlarm
o A confirmation message is played
7. Press the “9” key to exit the application without acknowledging the alarm
o SCADAlarm logs the operator's information, and the times the operator logged in and out, to the SCADAlarm logger
o The operator is told the current system time and greeted with “Good Bye”
8. Verify the call has moved on to the third operator's configured number
o The operator receives the call on the configured number

Specific Post conditions

Second Example:
Add Item to Cart

The actor for this alternative flow is the Authenticated Customer. The flow begins with the user on the Product Description page.
1. The user enters a product quantity to order.
2. The user clicks on the "Add to Cart" button.
3. The system validates the product order information.
4. If the product order information is invalid, the system displays an error message and the use case ends.
If the product order information is valid, the system populates (but does not display) the shopping cart, displays a confirmation message, and the use case ends. The system also populates the Mini Shopping Cart and displays it.
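The flow above (validate, then either reject with an error or populate the cart and confirm) can be sketched as a small Python function. All names here are hypothetical illustrations, not part of any real cart API:

```python
def add_item_to_cart(cart, product_id, quantity):
    """Hypothetical sketch of the 'Add Item to Cart' flow.

    Validates the product order information; if invalid, returns an error
    message and the flow ends. If valid, populates the cart and returns a
    confirmation message.
    """
    if not isinstance(quantity, int) or quantity < 1:
        return {"ok": False, "message": "Invalid product order information"}
    cart[product_id] = cart.get(product_id, 0) + quantity
    return {"ok": True, "message": "Item added to cart"}

# Usage: one valid and one invalid order against the same cart.
cart = {}
print(add_item_to_cart(cart, "SKU-1", 2))   # valid quantity -> confirmation
print(add_item_to_cart(cart, "SKU-1", 0))   # invalid quantity -> error
```

A test case for each branch of the validation (valid input, invalid input, cart populated or not) follows directly from a sketch like this.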



Columns: Test Case ID | Test Conditions | Actions to Perform Test | Expected Results | Actual Result | Pass/Fail (the last two are filled in at execution time).

1. Validation of View Cart button
Action: The authenticated user clicks the “View Cart” button.
Expected: The system displays the Shopping Cart.

2. Screen validation of the Shopping Cart window
Action: Check the following attributes of the above screen:
o Aesthetic conditions
o Validation conditions
o Navigation conditions
o Usability conditions
o Data Integrity conditions
o Specific field tests mentioned in the appendix, as applicable
Expected: All applicable parameters mentioned in the appendix should be verified.

3. Validation for Add Item to Cart
Action: On the Product Description page, the authenticated user enters the quantity to order, then clicks the “Add to Cart” button.
Expected: The quantity entered should be displayed and the product order information validated.

4. Validation for Add Item to Cart
Expected: If the product order information is invalid, the system displays an error and a message box stating the same should pop up.

5. Validation for Add Item to Cart
Expected: If the product order information is valid, the shopping cart should be populated and a confirmation message stating the same displayed. A Mini Shopping Cart should be displayed after the confirmation.

Questions asked in various company interviews...

Questions asked in the COVANSYS interview
What will the GUI map contain?
After recording the script, if I change the logical name in the script, will it run or not? For example, in edit_set (“Enter the user name”, “siva”); I changed the logical name “Enter the user name” to “Enter ID”.
If my object's physical description has no attached_text property, what will happen?
What are the run modes present in WinRunner?
What is the use of Debug mode?
What does Update mode do?
Have you ever heard about exception handling?
What are the types of exception handling?
What does a pop-up exception do?
What are the default actions for a pop-up exception?
Have you ever used a user-defined exception?
What is the function for exception handling?
Give an example of a web exception.
What are the types of checkpoints?
What is a GUI checkpoint? A bitmap checkpoint?
What is the difference between RDBMS and DBMS?
What is an RDBMS?
What are DDL and DML?
Which command will you use to erase a table: DROP or DELETE?
What are the types of joins?
What are an equi join and an outer join?
What is a compiled module?
What are the types of class?
What are public, static, auto and extern?
Give an example of a variable.
Why do we use a double backslash (C:\\win runner…..) instead of a single backslash when declaring a path?
What are the parameters for exception handling?
Is there any possibility of loading two GUI map files into the GUI Map Editor?
How do you add GUI map files to the GUI Map Editor without recording?
How does WinRunner recognize objects?
There is a window containing only one object. I performed an action on it; what will the GUI map contain?
There is a window containing 12 objects. I performed an action on the first object; what script will be generated and what will it contain?
What general properties are available for an object?
There is a default property that WinRunner always learns when you perform an action on an object. What is that property?
When is it necessary to change the logical name?
Is the RapidTest Script wizard always available?
What is the difference between adding checkpoints from the toolbar, from the function generator, and manually?
What is the purpose of the function generator?
What is the use of a compiled module?
What is the difference between SilkTest and WinRunner?
What is the use of the GUI Spy and the GUI Map Editor?
What is the purpose of GUI map merging?
What is the extension of the expected results file?
How can you see the actual and expected results for checkpoints in the results window?
What folders are present in WinRunner?
How do you know that a specified checkpoint failed in WinRunner?
What is the purpose of the Virtual Object wizard?
If I created a script using the Virtual Object wizard on one system, will that same script run on a different system with a different resolution?
What is the use of GUI map configuration?
I want to create a pop-up exception on a pop-up by clicking the close button (the X at the top right corner). What will the parameters of that exception be?
What will the checklist file contain?
What will the expected file contain?
What is a compilation error?
Is WinRunner an interpreter or a compiler?
Is a variable always a string, or can it be anything else?
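The DROP-versus-DELETE question above comes up constantly. As a quick sketch with Python's built-in sqlite3 module (the table name is invented for illustration): DELETE removes rows but keeps the table, while DROP removes the table definition itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.execute("INSERT INTO t VALUES (1), (2)")

# DELETE removes rows; the (now empty) table still exists.
cur.execute("DELETE FROM t")
print(cur.execute("SELECT COUNT(*) FROM t").fetchone())  # (0,)

# DROP removes the table definition itself; selecting from it now fails.
cur.execute("DROP TABLE t")
try:
    cur.execute("SELECT COUNT(*) FROM t")
except sqlite3.OperationalError as e:
    print("table gone:", e)
```

DELETE is DML (and can be rolled back inside a transaction); DROP is DDL.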


1. Isoft
What should be done after writing test cases?

2. Covansys
Testing
What is bidirectional traceability, and how is it implemented?
What is an automation test framework?
Define the components present in a test strategy.
Define the components present in a test plan.
Define database testing.
What is the difference between QA and QC?
What is the difference between verification and validation (V&V)?
What are the different types of test cases that you have written in your project?
Have you written a test plan?
SQL
What are joins? Define all the joins.
What is a foreign key?
Write an SQL query if you want to select the data from one block which in turn reflects in another block.
Unix
Which command is used to run an interface?
How will you see hidden files?
What is the command used to set the date and time?
Some basic commands like copy, move, delete?
Which command is used to go back to the home directory?
Which command is used to view the current directory?
3. Virtusa
Testing
Tell me about yourself.
Testing process followed in your company.
Testing methodology.
Where do you maintain the repositories?
What is CVS?
Bug tool used?
How will you prepare a traceability matrix if there is no business doc and functional doc?
How will you validate the functionality of the test cases if there is no business requirement document or user requirement document as such?
Testing process followed in your company?
Tell me about CMM Level 4. What steps are to be followed to achieve the CMM Level 4 standards?
What is back-end testing?
What is unit testing?
How will you write test cases for a given scenario, i.e. main page, login screen, transaction, report verification?
How will you write a traceability matrix?
What is CVS and why is it used?
What will be specified in the defect report?
What is a test summary report?
What is a test closure report?
Explain the defect life cycle.
What will be specified in the test case?
What are the testing methodologies that you have followed in your project?
What kind of testing have you been involved in? Explain it.
What is UAT testing?
What are joins, and what are the different types of joins in SQL? Explain them.
What is a foreign key in SQL?
KLA Tencor

Bug life cycle?
Explain about the project, and draw the architecture of your project.
What are the different types of severity?
Defect tracking tools used?
What are the responsibilities of a tester?
Give an example of how you would write test cases for a scenario involving a login screen.
Aztec
What are the different types of testing followed?
What are the different levels of testing used during testing of the
application?
What type of testing will be done in installation testing or system testing?
What is meant by CMMI? What are the different CMM levels?
Explain the components involved in CMM Level 4.
Explain performance testing.
What is a traceability matrix and how is it done?
How can you differentiate severity and priority from technical and business points of view?
What is the difference between the test life cycle and the defect life cycle?
How will you ensure that you have covered all the functionality while writing test cases if there is no functional spec and no KT (knowledge transfer) about the application?
Important: Study UNIX and SQL commands. Nowadays in almost every interview they ask questions related to SQL and UNIX.
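Since joins appear in several of the interview lists above, here is a minimal worked example using Python's built-in sqlite3 module; the table and column names are invented for illustration. An equi (inner) join returns only matching rows, while a left outer join also keeps unmatched rows from the left table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE emp  (name TEXT, dept_id INTEGER);
    CREATE TABLE dept (id INTEGER, dname TEXT);
    INSERT INTO emp  VALUES ('asha', 10), ('ravi', 20);
    INSERT INTO dept VALUES (10, 'QA'), (20, 'Dev'), (30, 'HR');
""")

# Equi (inner) join: only departments that have a matching employee.
inner = cur.execute(
    "SELECT e.name, d.dname FROM emp e JOIN dept d "
    "ON e.dept_id = d.id ORDER BY e.name"
).fetchall()
print(inner)  # [('asha', 'QA'), ('ravi', 'Dev')]

# Left outer join: every department, with NULL where no employee matches.
outer = cur.execute(
    "SELECT d.dname, e.name FROM dept d LEFT JOIN emp e "
    "ON e.dept_id = d.id ORDER BY d.id"
).fetchall()
print(outer)  # [('QA', 'asha'), ('Dev', 'ravi'), ('HR', None)]
```

Note the HR department survives the left outer join with a NULL employee; that unmatched row is exactly what distinguishes an outer join from an equi join.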

Monday, June 18, 2007

Test Plan Outline

TEST PLAN OUTLINE
(IEEE 829 Format)

1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

IEEE TEST PLAN TEMPLATE

Test Plan Identifier

Some type of unique company generated number to identify this test plan, its level and the level of software that it is related to. Preferably the test plan level will be the same as the related software level. The number may also identify whether the test plan is a Master plan, a Level plan, an integration plan or whichever plan level it represents. This is to assist in coordinating software and testware versions within configuration management.

Keep in mind that test plans are like other software documentation: they are dynamic in nature and must be kept up to date. Therefore, they will have revision numbers.

You may want to include author and contact information, including the revision history, as part of either the identifier section or the introduction.

References

List all documents that support this test plan. Refer to the actual version/release number of the document as stored in the configuration management system. Do not duplicate the text from other documents as this will reduce the viability of this document and increase the maintenance effort. Documents that can be referenced include:

  • Project Plan
  • Requirements specifications
  • High Level design document
  • Detail design document
  • Development and Test process standards
  • Methodology guidelines and examples
  • Corporate standards and guidelines

Introduction

State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive summary part of the plan.

You may want to include any references to other plans, documents or items that contain information relevant to this project/process. If preferable, you can create a references section to contain all reference documents.

Identify the scope of the plan in relation to the software project plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and for communication and coordination of key activities.

As this is the "Executive Summary" keep information brief and to the point.

Test Items (Functions)

These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.

This can be controlled and defined by your local Configuration Management (CM) process if you have one. This information includes version numbers, configuration requirements where needed, (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.

Remember, what you are testing is what you intend to deliver to the Client.

This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.

Software Risk Issues

Identify what software is to be tested and what the critical areas are, such as:

    1. Delivery of a third party product.
    2. New version of interfacing software
    3. Ability to use and understand a new package/tool, etc.
    4. Extremely complex functions
    5. Modifications to components with a past history of failure
    6. Poorly documented modules or change requests

There are some inherent software risks such as complexity; these need to be identified.

    1. Safety
    2. Multiple interfaces
    3. Impacts on Client
    4. Government regulations and rules

Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.

The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.

One good approach to define where the risks are is to have several brainstorming sessions.

  • Start with ideas, such as, what worries me about this project/application.

Features to be Tested

This is a listing of what is to be tested from the USERS viewpoint of what the system does. This is not a technical description of the software, but a USERS view of the functions.

Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.

It should be noted that Section 4 and Section 6 are very similar. The only true difference is the point of view. Section 4 is a technical type description including version numbers and other technical information and Section 6 is from the User’s viewpoint. Users do not understand technical software terminology; they understand functions and processes as they relate to their jobs.

Features not to be Tested

This is a listing of what is NOT to be tested from both the Users viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a USERS view of the functions.

Identify WHY the feature is not to be tested; there can be any number of reasons.

  • Not to be included in this release of the Software.
  • Low risk, has been used before and is considered stable.
  • Will be released but not tested or documented as a functional part of the release of this version of the software.

Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.

Approach (Strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.

  • Are any special tools to be used and what are they?
  • Will the tool require special training?
  • What metrics will be collected?
  • Which level is each metric to be collected at?
  • How is Configuration Management to be handled?
  • How many different configurations will be tested?
  • Hardware
  • Software
  • Combinations of HW, SW and other vendor packages
  • What levels of regression testing will be done and how much at each test level?
  • Will regression testing be based on severity of defects detected?
  • How will elements in the requirements and design that do not make sense or are untestable be processed?

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.

Specify if there are special requirements for the testing.

  • Only the full component will be tested.
  • A specified segment of grouping of features/components must be tested together.

Other information that may be useful in setting the approach are:

  • MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
  • SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
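MTBF, mentioned above, is commonly computed as total operating time divided by the number of failures observed in that time. A one-line illustration with made-up figures:

```python
# MTBF = total operating time / number of failures (standard definition).
# The figures below are invented purely for illustration.
operating_hours = 1200.0
failures = 4
mtbf = operating_hours / failures
print(mtbf)  # 300.0 hours between failures, on average
```

Whether this is a meaningful measure depends, as the bullet above says, on whether failure data is actually being collected for the system under test.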

How will meetings and other organizational processes be handled?

Item Pass/Fail Criteria

What are the Completion criteria for this plan? This is a critical aspect of any test plan and should be appropriate to the level of the plan.

  • At the Unit test level this could be items such as:
    • All test cases completed.
    • A specified percentage of cases completed with a percentage containing some number of minor defects.
    • Code coverage tool indicates all code covered.
  • At the Master test plan level this could be items such as:
    • All lower level plans completed.
    • A specified number of plans completed without errors and a percentage with minor defects.

This could be an individual test case level criterion or a unit level plan or it can be general functional requirements for higher level plans.

What is the number and severity of defects located?

  • Is it possible to compare this to the total number of defects? This may be impossible, as some defects are never detected.
    • A defect is something that may cause a failure, and may be acceptable to leave in the application.
    • A failure is the result of a defect as seen by the User, the system crashes, etc.

Suspension Criteria and Resumption Requirements

Know when to pause in a series of tests.

  • If the number or type of defects reaches a point where the follow on testing has no value, it makes no sense to continue the test; you are just wasting resources.

Specify what constitutes stoppage for a test or series of tests and what is the acceptable level of defects that will allow the testing to proceed past the defects.

Testing after a truly fatal error will generate conditions that may be identified as defects but are in fact ghost errors caused by the earlier defects that were ignored.

Test Deliverables

What is to be delivered as part of this plan?

  • Test plan document.
  • Test cases.
  • Test design specifications.
  • Tools and their outputs.
  • Simulators.
  • Static and dynamic generators.
  • Error logs and execution logs.
  • Problem reports and corrective actions.

One thing that is not a test deliverable is the software itself that is listed under test items and is delivered by development.

Remaining Test Tasks

If this is a multi-phase process or if the application is to be released in increments there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.

If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them and to avoid wasting resources tracking defects that do not relate to this plan.

When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internal groups and the external groups.

Environmental Needs

Are there any special requirements for this test plan, such as:

  • Special hardware such as simulators, static generators etc.
  • How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
  • How much testing will be done on each component of a multi-part feature?
  • Special power requirements.
  • Specific versions of other supporting software.
  • Restricted use of the system during testing.

Staffing and Training needs

Training on the application/system.

Training for any test tools to be used.

Sections 4 and 15 also affect this section: what is to be tested, and who is responsible for the testing and training.

Responsibilities

Who is in charge?

This issue includes all areas of the plan. Here are some examples:

  • Setting risks.
  • Selecting features to be tested and not tested.
  • Setting overall strategy for this level of plan.
  • Ensuring all required elements are in place for testing.
  • Providing for resolution of scheduling conflicts, especially, if testing is done on the production system.
  • Who provides the required training?
  • Who makes the critical go/no go decisions for items not covered in the test plans?

Schedule

Should be based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan will slip and the testing is part of the overall project plan.

  • As we all know, the first area of a project plan to get cut when it comes to crunch time at the end of a project is the testing. It usually comes down to the decision, ‘Let’s put something out even if it does not really work all that well’. And, as we all know, this is usually the worst possible decision.

How slippage in the schedule is to be handled should also be addressed here.

  • If the users know in advance that a slippage in the development will cause a slippage in the test and the overall delivery of the system, they just may be a little more tolerant, if they know it’s in their interest to get a better tested application.
  • By spelling out the effects here you have a chance to discuss them in advance of their actual occurrence. You may even get the users to agree to a few defects in advance, if the schedule slips.

At this point, all relevant milestones should be identified with their relationship to the development process identified. This will also help in identifying and tracking potential slippage in the schedule caused by the test process.

It is always best to tie all test dates directly to their related development activity dates. This prevents the test team from being perceived as the cause of a delay. For example, if system testing is to begin after delivery of the final build, then system testing begins the day after delivery. If the delivery is late, system testing starts from the day of delivery, not on a specific date. This is called dependent or relative dating.

Planning Risks and Contingencies

What are the overall risks to the project with an emphasis on the testing process?

  • Lack of personnel resources when testing is to begin.
  • Lack of availability of required hardware, software, data or tools.
  • Late delivery of the software, hardware or tools.
  • Delays in training on the application and/or tools.
  • Changes to the original requirements or designs.

Specify what will be done for various events, for example:

Requirements definition will be complete by January 1, 19XX, and, if the requirements change after that date, the following actions will be taken:

  • The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
  • The number of tests performed will be reduced.
  • The number of acceptable defects will be increased.
    • These two items could lower the overall quality of the delivered product.
  • Resources will be added to the test team.
  • The test team will work overtime (this could affect team morale).
  • The scope of the plan may be changed.
  • There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
  • You could just QUIT. A rather extreme option to say the least.

Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in the past.

The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.

Approvals

Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?

At the master test plan level, this may be all involved parties.

When determining the approval process, keep in mind who the audience is:

  • The audience for a unit test level plan is different than that of an integration, system or master level plan.
  • The levels and type of knowledge at the various levels will be different as well.
  • Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
  • Users may have varying levels of business acumen and very little technical skills.
  • Always be wary of users who claim high levels of technical skills and programmers that claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.

Glossary

Used to define terms and acronyms used in the document, and testing in general, to eliminate confusion and promote consistent communications.