Wednesday, November 23, 2011

Severity vs Priority!

Severity :- The impact of the bug; it describes the bug in terms of functionality. [Low, Medium, High, Critical]
Priority :- The importance of fixing the bug; it describes the bug in terms of the customer. [Low, Medium, High]

High Severity (HS) & Low Priority (LP) :-
  • An application generates banking reports weekly, monthly, quarterly and yearly by doing some calculation. An error in the yearly report calculation is high severity but low priority, because the fix can wait for the next release.
  • A scenario in which the application crashes, but the scenario occurs only rarely.

Low Severity & High Priority :-
  • The company logo is not displayed properly on the front page.
  • A misspelling in the URL, e.g. "gogle" displayed instead of "google".

High Severity & High Priority :-
  • Suppose you are doing online shopping and have filled in the payment information, but after submitting the form you get a message like "order has been canceled".
  • A bug which is a show stopper, because we are unable to proceed with our testing.
  • No error message to prevent a wrong operation.

Low Severity & Low Priority :-
  • A spelling mistake such as "you have registered success" instead of "successfully".
  • An error message whose wording is confusing.
  • Cosmetic bugs.
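The four combinations above can be captured directly on a bug record. Below is a minimal sketch (the enum and field names are hypothetical, not a prescribed tracker schema):

from dataclasses import dataclass
from enum import Enum

class Severity(Enum):      # impact of the bug on functionality
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

class Priority(Enum):      # importance of fixing the bug for the customer
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Bug:
    title: str
    severity: Severity
    priority: Priority

# One example per quadrant described above
bugs = [
    Bug("Yearly banking report miscalculates totals", Severity.HIGH, Priority.LOW),
    Bug("Company logo distorted on the front page", Severity.LOW, Priority.HIGH),
    Bug("Order cancelled after payment form is submitted", Severity.HIGH, Priority.HIGH),
    Bug("'success' shown instead of 'successfully'", Severity.LOW, Priority.LOW),
]
for b in bugs:
    print(f"{b.title}: severity={b.severity.name}, priority={b.priority.name}")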

Sunday, November 20, 2011

Defect Injection Phase!!


  • IC - Interface Capability
  • IS - Interface Specification
  • ID - Interface Description
  • II - Interface Implementation
  • IU - Interface Use
  • MD - Missing Design
  • MI - Missing Implementation
  • ME - Missing Error Handling
  • MA - Missing Assignment
  • MC - Missing Call
  • WA - Wrong Algorithm
  • WE - Wrong Expression
  • WC - Wrong Condition
  • WN - Wrong Name
  • WT - Wrong Type

Tuesday, July 5, 2011

TC Document Format !


  1. Test Identification Particulars
  2. Test Case History 
  3. Traced Requirements
  4. Software Requirements 
  5. Hardware Requirements 
  6. Test Setup
  7. Inputs [test data]
  8. Outputs [Expected results] 
  9. Procedures 
  10. Evaluation
  11.  Test Execution
  12. Person List [executed test in past]

Enhance the test cases!

1 :- Priority / Risk Analysis
2 :- Pareto Principle
3 :- Requirement Traceability 
4 :- Equivalence Partitioning
5 :- Boundary Value Analysis (techniques 4 and 5 are sketched after this list)
6 :- Cause Effect Graphing
7 :- Orthogonal Array Testing
8 :- Error Guessing
9 :- State Machine
10:- Decision Table
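A minimal sketch of techniques 4 and 5, assuming a hypothetical "age" field that must accept whole numbers from 18 to 60 (the field and its range are made up for illustration):

# Equivalence partitioning and boundary value analysis for an assumed
# "age" field that must accept whole numbers from 18 to 60.
VALID_MIN, VALID_MAX = 18, 60

def is_valid_age(age: int) -> bool:
    """Stand-in for the system under test: accepts ages in [18, 60]."""
    return VALID_MIN <= age <= VALID_MAX

# Equivalence partitioning: one representative value per class is enough.
partitions = {
    "below range (invalid)": 10,
    "in range (valid)": 35,
    "above range (invalid)": 70,
}

# Boundary value analysis: test just outside, on, and just inside each boundary.
boundaries = [VALID_MIN - 1, VALID_MIN, VALID_MIN + 1,
              VALID_MAX - 1, VALID_MAX, VALID_MAX + 1]

for name, value in partitions.items():
    print(f"partition {name}: age={value} -> valid={is_valid_age(value)}")
for value in boundaries:
    print(f"boundary: age={value} -> valid={is_valid_age(value)}")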

Desk-Checking


Desk checking is a method of verifying portions of the code for correctness. The verification is done by comparing the written code with its specifications.

Walk-Through

A walk-through is a form of software peer review "in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems".

Inspection


Inspection refers to a peer review of any work product by trained individuals who look for defects using a well-defined process.
The stages in the inspections process are: 
  • Planning: The inspection is planned by the moderator.
  • Overview meeting: The author describes the background of the work product.
  • Preparation: Each inspector examines the work product to identify possible defects.
  • Inspection meeting: During this meeting the reader reads through the work product, part by part, and the inspectors point out the defects in each part.
  • Rework: The author makes changes to the work product according to the action plans from the inspection meeting.
  • Follow-up: The changes by the author are checked to make sure everything is correct.

Testing Types (Not Enough)


1. Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. It is usually performed by the customer.
2. Accessibility Testing: Type of testing which determines the usability of a product to the people having disabilities (deaf, blind, mentally disabled etc). The evaluation process is conducted by persons having disabilities.
3. Agile Testing: Software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of customers who will utilize the system. It is usually performed by the QA teams.
4. Ad-hoc Testing: Testing performed without planning and documentation - the tester tries to 'break' the system by randomly trying the system's functionality. It is performed by the testing teams.
5. Alpha Testing: Type of testing a software product or system conducted at the developer's site. Usually it is performed by the end user.
6. API Testing: Testing technique similar to unit testing in that it targets the code level. API Testing differs from unit testing in that it is typically a QA task and not a developer task.
7. Automated Testing: Testing technique that uses automation testing tools to control the environment set-up, test execution and results reporting. It is performed by a computer and is used inside the testing teams.
8. Basis Path Testing: A testing mechanism which derives a logical complexity measure of a procedural design and uses it as a guide for defining a basic set of execution paths. It is used by testing teams when defining test cases.
9. Beta Testing: Final testing before releasing application for commercial purpose. It is typically done by end-users or others.
10. Backward Compatibility Testing: Testing method which verifies the behavior of the developed software with older versions of the test environment. It is performed by testing teams.
11. Benchmark Testing: Testing technique that uses representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration. It is performed by testing teams.
12. Big Bang Integration Testing: Testing technique which integrates individual program modules only when everything is ready. It is performed by the testing teams.
13. Binary Portability Testing: Technique that tests an executable application for portability across system platforms and environments, usually for conformation to an ABI specification. It is performed by the testing teams.
13. Boundary Value Testing: Software testing technique in which tests are designed to include representatives of boundary values. It is performed by the QA testing teams.
14. Bottom Up Integration Testing: In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules which go towards the 'main' program are integrated and tested one at a time. It is usually performed by the testing teams.
15. Branch Testing: Testing technique in which all branches in the program source code are tested at least once. This is done by the developer.
16. Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail. It is performed by testing teams.
17. Black box Testing: A method of software testing that verifies the functionality of an application without having specific knowledge of the application's code/internal structure. Tests are based on requirements and functionality. It is performed by QA teams.
18. Compatibility Testing: Testing technique that validates how well a software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.
19. Comparison Testing: Testing technique which compares the product strengths and weaknesses with previous versions or other similar products. Can be performed by tester, developers, product managers or product owners.
20. Component Testing: Testing technique similar to unit testing but with a higher level of integration - testing is done in the context of the application instead of just directly testing a specific method. Can be performed by testing or development teams.
21. Configuration Testing: Testing technique which determines minimal and optimal configuration of hardware and software, and the effect of adding or modifying resources such as memory, disk drives and CPU. Usually it is performed by the performance testing engineers.
22. Condition Coverage Testing: Type of software testing where each condition is executed by making it true and false, in each of the ways at least once. It is typically made by the automation testing teams.
23. Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. It is usually performed by testing teams.
24. Decision Coverage Testing: Type of software testing where each condition/decision is executed by setting it on true/false. It is typically made by the automation testing teams.
25. Dynamic Testing: Term used in software engineering to describe the testing of the dynamic behavior of code. It is typically performed by testing teams.
26. Domain Testing: White box testing technique which checks that the program accepts only valid input. It is usually done by software development teams and occasionally by automation testing teams.
27. Error-Handling Testing: Software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams.
28. End-to-end Testing: Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.
29. Exploratory Testing: Black box testing technique performed without planning and documentation. It is usually performed by manual testers.
30. Equivalence Partitioning Testing: Software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. it is usually performed by the QA teams.
31. Formal verification Testing: The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. It is usually performed by QA teams.
32. Functional Testing: Type of black box testing that bases its test cases on the specifications of the software component under test. It is performed by testing teams.
33. Gorilla Testing: Software testing technique which focuses on heavily testing of one particular module. It is performed by quality assurance teams, usually when running full testing.
34. Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. It can be performed by either development or testing teams.
35. Glass box Testing: Similar to white box testing, based on knowledge of the internal logic of an application’s code. It is performed by development teams.
36. GUI software Testing: The process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done by the testing teams.
37. Globalization Testing: Testing method that checks proper functionality of the product with any of the culture/locale settings using every type of international input possible. It is performed by the testing team.
38. Integration Testing: The phase in software testing in which individual software modules are combined and tested as a group. It is usually conducted by testing teams.
39. Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to one another. It is usually performed by both testing and development teams.
40. Install/uninstall Testing: Quality assurance work that focuses on what customers will need to do to install and set up the new software successfully. It may involve full, partial or upgrades install/uninstall processes and is typically done by the software testing engineer in conjunction with the configuration manager.
41. Keyword-driven Testing: Also known as table-driven testing or action-word testing, is a software testing methodology for automated testing that separates the test creation process into two distinct stages: a Planning Stage and an Implementation Stage. It can be used by either manual or automation testing teams.
42. Load Testing: Testing technique that puts demand on a system or device and measures its response. It is usually conducted by the performance engineers.
43. Loop Testing: A white box testing technique that exercises program loops. It is performed by the development teams.
44. Manual Scripted Testing: Testing method in which the test cases are designed and reviewed by the team before executing it. It is done by manual testing teams.
45. Manual-Support Testing: Testing technique that involves testing of all the functions performed by people while preparing data for, and using data from, the automated system. It is conducted by testing teams.
46. Model-Based Testing: The application of Model based design for designing and executing the necessary artifacts to perform software testing. It is usually performed by testing teams.
47. Mutation Testing: Method of software testing which involves modifying programs' source code or byte code in small ways in order to test sections of the code that are seldom or never accessed during normal tests execution. It is normally conducted by testers.
48. Modularity-driven Testing: Software testing technique which requires the creation of small, independent scripts that represent modules, sections, and functions of the application under test. It is usually performed by the testing team.
49. Non-functional Testing: Testing technique which focuses on testing of a software application for its non-functional requirements. Can be conducted by the performance engineers or by manual testing teams.
50. Negative Testing: Also known as "test to fail" - testing method where the tests' aim is showing that a component or system does not work. It is performed by manual or automation testers.
51. Operational Testing: Testing technique conducted to evaluate a system or component in its operational environment. Usually it is performed by testing teams.
52. Orthogonal array Testing: Systematic, statistical way of testing which can be applied in user interface testing, system testing, regression testing, configuration testing and performance testing. It is performed by the testing team.
53. Pair Testing: Software development technique in which two team members work together at one keyboard to test the software application. One does the testing and the other analyzes or reviews the testing. This can be done between one Tester and Developer or Business Analyst or between two testers with both participants taking turns at driving the keyboard.
54. Parallel Testing: Testing technique which has the purpose to ensure that a new application which has replaced its older version has been installed and is running correctly. It is conducted by the testing team.
55. Path Testing: Typical white box testing which has the goal to satisfy coverage criteria for each logical path through the program. It is usually performed by the development team.
57. Penetration Testing: Testing method which evaluates the security of a computer system or network by simulating an attack from a malicious source. Usually they are conducted by specialized penetration testing companies.
58. Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. It is usually conducted by the performance engineer.
59. Ramp Testing: Type of testing consisting in raising an input signal continuously until the system breaks down. It may be conducted by the testing team or the performance engineer.
60. Regression Testing: Type of software testing that seeks to uncover software errors after changes to the program (e.g. bug fixes or new functionality) have been made, by retesting the program. It is performed by the testing teams.
61. Recovery Testing: Testing technique which evaluates how well a system recovers from crashes, hardware failures, or other catastrophic problems. It is performed by the testing teams.
62. Requirements Testing: Testing technique which validates that the requirements are correct, complete, unambiguous, and logically consistent and allows designing a necessary and sufficient set of test cases from those requirements. It is performed by QA teams.
63. Security Testing: A process to determine that an information system protects data and maintains functionality as intended. It can be performed by testing teams or by specialized security-testing companies.
64. Sanity Testing: Testing technique which determines if a new software version is performing well enough to accept it for a major testing effort. It is performed by the testing teams.
65. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story to help a person think through a complex problem or system for a testing environment. It is performed by the testing teams.
66. Scalability Testing: Part of the battery of non-functional tests which tests a software application for measuring its capability to scale up - be it the user load supported, the number of transactions, the data volume etc. It is conducted by the performance engineer.
67. Statement Testing: White box testing which satisfies the criterion that each statement in a program is executed at least once during program testing. It is usually performed by the development team.
68. Static Testing: A form of software testing where the software isn't actually executed; it checks mainly the sanity of the code, algorithm, or documents. It is used by the developer who wrote the code.
69. Stability Testing: Testing technique which attempts to determine if an application will crash. It is usually conducted by the performance engineer.
70. Smoke Testing: Testing technique which examines all the basic components of a software system to ensure that they work properly. Typically, smoke testing is conducted by the testing team, immediately after a software build is made .
71. Storage Testing: Testing type that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. It is usually performed by the testing team.
72. Stress Testing: Testing technique which evaluates a system or component at or beyond the limits of its specified requirements. It is usually conducted by the performance engineer.
73. Structural Testing: White box testing technique which takes into account the internal structure of a system or component and ensures that each program statement performs its intended function. It is usually performed by the software developers.
74. System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It is conducted by the testing teams in both development and target environment.
75. System integration Testing: Testing process that exercises a software system's coexistence with others. It is usually performed by the testing teams.
76. Top Down Integration Testing: Testing technique that involves starting at the top of the system hierarchy, at the user interface, and using stubs to test from the top down until the entire system has been implemented (see the stub sketch after this list). It is conducted by the testing teams.
77. Unit Testing: Software verification and validation method in which a programmer tests if individual units of source code are fit for use. It is usually conducted by the development team.
78. User Interface Testing: Type of testing which is performed to check how user-friendly the application is. It is performed by testing teams.
 79. Usability Testing: Testing technique which verifies the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. It is usually performed by end users.
80. Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. It is usually conducted by the performance engineer.
81. White box Testing: Testing technique based on knowledge of the internal logic of an application’s code and includes tests like coverage of code statements, branches, paths, conditions. It is performed by software developers.
82. Informal Testing: Testing done informally by the developer after completing a segment of code.
83. Audit Testing: Testing where the user reconciles the output of the new system with the output of the current system to verify that the new system performs the operations correctly.
84. Closed box Testing: The same as black box testing - a type of testing that considers only the functionality of the application.
85. Database Testing: Verifies that methods, processes and data rules function as expected when the database is accessed, and ensures that data is not corrupted or unexpectedly deleted, updated or created.
86. Exhaustive Testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
87. Gamma Testing: Testing of software that has all the required features but has not gone through all the in-house quality checks.
88. Pilot Testing: Testing that involves the users just before the actual release, to ensure that the users become familiar with the release content and ultimately accept it.
89. Incremental Testing: Partial testing of an incomplete product; the goal of incremental testing is to provide early feedback to software developers.
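To make one of these concrete, here is a minimal sketch of top-down integration testing (entry 76): a higher-level order module is exercised against a stub that stands in for a lower-level payment module that is not integrated yet. All class and method names are hypothetical.

class PaymentGatewayStub:
    """Stub for a lower-level module that is not integrated yet."""
    def charge(self, amount: float) -> bool:
        return True   # always succeeds, so the higher-level flow can be exercised

class OrderService:
    """Higher-level module under test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "canceled"

# Top-down test: the high-level OrderService is tested against the stub.
service = OrderService(PaymentGatewayStub())
assert service.place_order(49.99) == "confirmed"
print("order flow verified against the payment stub")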



Sunday, February 13, 2011

Validation?


  • Dynamic, validating the actual product.
  • Involves executing the product.
  • Computer based execution of program.
  • Target
  1. Actual product
  2. A Unit
  3. A module
  4. Set of integrated modules
  5. Final product
  • Methods
  1. Black Box
  2. White Box
  3. Grey Box   

Verification?


  1. Static; verifies documents, design and code.
  2. Does not involve executing the code.
  3. Human-based checking.
  4. Target
  1. Requirement specification
  2. Application architecture
  3. High level and detailed design
  4. Database design
  • Methods
  1. Inspection
  2. Walk through
  3. Desk-Checking

Quality Control Vs Quality Assurance ?

QA:- A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure that a system will meet its objectives.
  • Process related.
  • Focuses on the processes used to develop a product.
  • Quality of the processes.
  • Preventive control.
  • Allegiance is to development.

QC:- A set of activities designed to evaluate a developed work product.

  • Product related.
  • Testing of a product under development.
  • Quality of the product.
  • Detective control.
  • Allegiance is not to development.

Software Development Life Cycle [SDLC]

How to write a Test Case?


Purpose:-
A short sentence or two about the aspect of the system that is being tested. If this gets too long, break the test case up or put more information into the feature descriptions.
Prerequisite:-
Assumptions that must be met before the test case can be run. E.g., "logged in", "guest login allowed", "user test user exists".
Test Data:-
List of variables and their possible values used in the test case. You can list specific values or describe value ranges. The test case should be performed once for each combination of values. These values are written in set notation, one per line. E.g.:
loginID = {Valid loginID, invalid loginID, valid email, invalid email, empty}
password = {valid, invalid, empty}

Steps:-

Steps to carry out the test. See step formatting rules below.
  1. visit LoginPage
  2. enter userID
  3. enter password
  4. click login
  5. see the terms of use page
  6. click agree radio button at page bottom
  7. click submit button
  8. see PersonalPage
  9. verify that welcome message is correct username
Expected Results:-
The result you expect to get after running/executing the test case.
Actual Results:-
The result actually obtained after running the test case, manually or automatically. If the actual result matches the expected result, the test case is marked as passed; otherwise it is marked as failed.
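A minimal executable sketch of the login test case above, written with pytest: the loginID and password sets are expanded into all combinations, and login_attempt() is only a stand-in for the real steps (visit LoginPage, enter credentials, submit); the names and behaviour are assumptions made for illustration.

import itertools
import pytest

LOGIN_IDS = ["valid_loginID", "invalid_loginID", "valid_email", "invalid_email", ""]
PASSWORDS = ["valid", "invalid", ""]

def login_attempt(login_id: str, password: str) -> str:
    """Stand-in for the real steps: visit LoginPage, enter credentials, submit."""
    if login_id.startswith("valid") and password == "valid":
        return "PersonalPage"
    return "LoginPage"

@pytest.mark.parametrize("login_id,password",
                         list(itertools.product(LOGIN_IDS, PASSWORDS)))
def test_login(login_id, password):
    result = login_attempt(login_id, password)
    if login_id.startswith("valid") and password == "valid":
        assert result == "PersonalPage"   # expected: welcome/personal page shown
    else:
        assert result == "LoginPage"      # expected: login rejected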

Software Testing Life Cycle - STLC ?

Software testing life cycle identifies what test activities to carry out and when (what is the best time) to accomplish those test activities. Even though testing differs between organizations, there is a testing life cycle.

Most common Errors in software?


  1. User Interface Errors:- Missing/wrong functions that do not do what the user expects, missing information, inappropriate error messages, performance issues, can't redirect output.
  2. Error Handling:- Overflow, data comparison, error recovery, aborting errors, recovery from hardware problems.
  3. Boundary Related Errors:- Boundaries in loops, space, time, memory; mishandling of cases outside the boundary (see the off-by-one sketch below).
  4. Calculation Errors:- Bad logic, bad arithmetic, outdated constants, calculation errors, wrong formulas, incorrect approximation.
  5. Hardware.
  6. Source, Version & ID Control.
  7. Testing Errors.
  8. Load Conditions.
  9. Control Flow Errors.
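A small sketch of error category 3 (boundary-related): an off-by-one mistake inside a loop that a boundary-value test exposes. The function is hypothetical.

def sum_first_n(values, n):
    total = 0
    for i in range(n - 1):   # BUG: should be range(n); the last value is skipped
        total += values[i]
    return total

# Boundary-value check: n equal to the full length of the list exposes the defect.
data = [10, 20, 30]
expected, actual = 60, sum_first_n(data, 3)
print(f"boundary test n=len(data): expected {expected}, got {actual}")  # got 30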


  

Wednesday, February 2, 2011

QTP Framework


A framework is the model or structure we follow for a
project. It is the approach that we follow to automate the
project. There are several types of framework in QTP.

1. Data-driven framework
2. Keyword-driven framework
3. Module-driven framework
4. Hybrid framework

The framework you choose depends on the model of your
project. Usually most projects fall under the hybrid
framework.

Data-driven framework:
Here we divide the entire project into modules and start
automation by writing data-driven scripts for each of them.
We keep the test data in an Excel sheet, flat files or a
database; we pass that test data into the script and
perform data-driven testing.
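A rough sketch of the idea (in QTP itself this would be VBScript reading the DataTable or an Excel sheet; the Python below, the login_data.csv file and its userID/password/expected columns are all hypothetical):

import csv

def run_login(user_id: str, password: str) -> str:
    """Placeholder for the automated steps driven against the application."""
    return "pass" if user_id and password == "secret" else "fail"

# Each row of the data file drives one execution of the same script.
with open("login_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        outcome = run_login(row["userID"], row["password"])
        status = "OK" if outcome == row["expected"] else "MISMATCH"
        print(f"{row['userID']}: expected={row['expected']} actual={outcome} [{status}]")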

Keyword-driven framework:
First we add the objects to the repository (a shared
repository is preferable).
Then we generate/write scripts for each functionality
and save them as library files (.vbs files).
Then we associate all these library files (which are
written to test different functionalities) with QuickTest.
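A rough sketch of the keyword-driven idea (in QTP the keywords would map to functions in the associated .vbs library files; the Python below and every keyword, URL and control name are only illustrative):

def open_browser(url):      print(f"opening {url}")
def enter_text(field, val): print(f"typing '{val}' into {field}")
def click(control):         print(f"clicking {control}")
def verify_title(expected): print(f"verifying title == '{expected}'")

# Keywords (action words) map to library functions.
KEYWORDS = {
    "OpenBrowser": open_browser,
    "EnterText":   enter_text,
    "Click":       click,
    "VerifyTitle": verify_title,
}

# The test itself is just a table of (keyword, arguments) rows.
test_table = [
    ("OpenBrowser", ["http://example.com/login"]),
    ("EnterText",   ["userID", "testuser"]),
    ("EnterText",   ["password", "secret"]),
    ("Click",       ["loginButton"]),
    ("VerifyTitle", ["Personal Page"]),
]

for keyword, args in test_table:
    KEYWORDS[keyword](*args)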

Module-driven framework:
We divide the entire project into modules, write functions
or procedures for each and every module, and automate the
project.

Hybrid framework:
It is a combination of the data-driven and module-driven frameworks.

Date Field Test Cases


1)  Check the format as per the requirement, e.g. mm-dd-yy.
2)  Check day, month and year (day should not be > 31, month should be <= 12, and year as per the requirement).
3)  Also try characters and special characters in the date if the textbox is editable.
4)  Try dates such as 30-02-2004, i.e. validation for the month of February.
5)  Check, as per the requirement, whether all parts are separated with a / or - or . sign.
6)  Click the calendar icon in front of the date field; the calendar should open.
7)  In the calendar, the current date should be selected.
8)  We should be able to select the desired date in the calendar.
9)  After clicking the selected date, the date should appear in the box and the calendar should disappear.
10) Should not accept 000000 as a date.
11) Should accept 01 or 1 as the month value; the same holds true for the day value.
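Several of the checks above can be automated with strict date parsing. A minimal sketch, assuming the required format is mm-dd-yy:

from datetime import datetime

def is_valid_date(text: str, fmt: str = "%m-%d-%y") -> bool:
    """Reject anything that does not parse as a real date in the given format."""
    try:
        datetime.strptime(text, fmt)
        return True
    except ValueError:
        return False

cases = {
    "11-23-11": True,    # correct format (check 1)
    "13-01-11": False,   # month > 12 (check 2)
    "ab-cd-ef": False,   # characters rejected (check 3)
    "02-30-04": False,   # 30th of February does not exist (check 4)
    "00-00-00": False,   # all zeros rejected (check 10)
    "1-5-11":   True,    # single-digit month/day accepted (check 11)
}
for text, expected in cases.items():
    print(f"{text!r}: valid={is_valid_date(text)} (expected {expected})")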

Saturday, January 8, 2011

Manual Testing Working Flow

As per an existing project (but beware: do not tell them you only worked on an existing project), whenever new functionality or a new module is added to the project, we do the following things.

1. We get a mock (requirements documents from the client).
2. First, go through the documents thoroughly and write test cases.
3. Execute the test cases created from the client's requirements.
4. If a bug is found, log it in a bug tracking tool (Jira, TicketBuilder, Quality Center).
5. When the bug gets fixed, verify it again, etc.
6. Whenever a new build arrives, do smoke testing as a quick check of the major functionality, and after that do regression testing of the whole site.
7. On a daily basis, report the work done by the team to the client.
8. Assist newbies (new members) in getting familiar with the project.
9. Provide technical support to team members to set up the environment to carry out testing, etc.
10. Chat with off-shore QA members to exchange the latest information about the project.

Important: You must remember how to log a bug in bug tracking tools, as follows:

1. A bug has a title (summary or heading).
2. Priority (Critical, High, Medium, Low).
3. Severity (Show Stopper, High, Medium, Low).
4. Status (Open, Alpha & Beta verified, Inquiry, Build hold, Re-assigned, Assigned, Closed, In progress, etc.).
5. Ticket type (Bug, Re-work, Release, Maintenance, New functionality, Support).
6. Steps to reproduce.
7. Environment (in my case qa2www.... or production).
8. OS / Browser (XP/Vista, Win 7, Mac; IE 7, 8, FF 3.6, Safari, etc.).
9. Assignee (either assign to a developer or to a senior QA member for review).
10. Description (here you write what you actually see in the bug).
11. Comments (generally added by QA members and developers).
12. File attachments (screenshot, video file) as proof of the bug.
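Purely as an illustration (the field names and values are examples, not any particular tracker's schema), a logged bug collected into one record might look like this:

bug_ticket = {
    "title":        "Order cancelled after submitting the payment form",
    "priority":     "High",              # Critical / High / Medium / Low
    "severity":     "Show Stopper",      # Show Stopper / High / Medium / Low
    "status":       "Open",
    "ticket_type":  "Bug",
    "steps":        [
        "Add an item to the cart",
        "Fill in the payment information",
        "Click Submit",
        "Observe the 'order has been canceled' message",
    ],
    "environment":  "QA server",         # or production
    "os_browser":   "Windows 7 / Firefox 3.6",
    "assignee":     "developer or senior QA reviewer",
    "description":  "What you actually observe when the bug occurs",
    "comments":     [],
    "attachments":  ["screenshot.png"],  # proof: screenshot or video file
}
print(bug_ticket["title"], "-", bug_ticket["severity"], "/", bug_ticket["priority"])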