Testing Interview Questions


Showing Questions 1241 - 1249 of 1249 Questions
  •  

    How do you test pop-up menus? Please give a detailed description.

  •  

    Difference between WinRunner versions 7.0 and 7.6?

    Suganthi Ramachandran

    • Nov 3rd, 2005

    Hi, this is Suganthi from India.  Please answer the following questions: 1) Is WinRunner 7.6 used only for web-based testing? 2) What features were added in WinRunner 7.1, 7.2, 7.3, 7.4, 7....

  •  

    Handle Bugs in Live / Production

    Suppose a bug has been produced in live for the piece of functionality you have carried out testing. How will you give explanation to PM/Manager?


    Editorial / Best Answer

    kurtz182  

    • Member Since Nov-2009 | Nov 10th, 2009


    The question is:  Suppose a bug has been produced in live for the piece of functionality you have carried out testing. How will you give explanation to PM/Manager?  

    This issue cannot be considered 'Out of Scope,' and there must be a test case for it, because it is 'a piece of functionality you have carried out testing'.

    In this scenario, the tester will need to research to determine whether the issue was caused by 1) a faulty test case, 2) a difference between Production and Test environments, or 3) by the tester's mistake or lack of follow-through.  Whatever the case may be, it is the tester's responsibility to isolate the problem and take steps to correct it.  Once this has been accomplished, then the particulars of the issue should be fully disclosed to the appropriate individuals (Project Manager, Test Manager, etc.).

    1. Faulty Test Case: 
    a) Does the test case accurately map to the proper business requirement?  If not, then perhaps the business requirement was missed and this becomes the source of the problem.
    b) Is the business requirement incorrect?  If so, then the requirement needs to be rewritten and new test case(s) produced from this new requirement.
    c) Was the test case authored improperly?  That is, did the tester misunderstand the business requirement and create an improper test case?  If so, then the test case(s) need to be re-authored based on this newly corrected understanding.

    2. Difference between Production and Test Environment:
    Does the defect occur only in the Production environment but not in the Test environment?  If so, then this must be made perfectly clear to management.  The tester may need to work with other functional groups to figure out how to bring the Test environment into alignment with Production in order to prevent this issue from recurring.

    3. Tester oversight or lack of follow-through.  As humans, we sometimes make mistakes.  There are situations when the amount of time a test team is allowed to test becomes constricted and testers feel they must hurry to finish their test runs.  In these situations, testers inadvertently miss test steps or even entire test cases.  And it is Murphy's Law that the overlooked test case will be the one that could have uncovered a significant defect!  If this happens, the tester must own up to the error and inform management.  I have made my share of mistakes; we all do.  It is best to admit the blunder and take personal measures to ensure it doesn't happen again.  The most important aspect of ANY relationship, work or otherwise, is trust.  And if you try to cover up your mistakes, you will quickly lose the trust of your management and cohorts.  Honesty is truly the best policy in any circumstance!
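The faulty-test-case check in point 1 above (does every requirement map to at least one test case?) can be sketched as a small traceability check. This is an illustrative sketch only; the requirement IDs and test-case records below are hypothetical.

```python
# Minimal requirements-to-test-case traceability check.
# Requirement IDs and test cases are invented for illustration.

def find_uncovered_requirements(requirements, test_cases):
    """Return requirement IDs that no test case claims to cover."""
    covered = set()
    for case in test_cases:
        covered.update(case["covers"])
    return sorted(set(requirements) - covered)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = [
    {"id": "TC-10", "covers": ["REQ-1"]},
    {"id": "TC-11", "covers": ["REQ-1", "REQ-3"]},
]

print(find_uncovered_requirements(requirements, test_cases))  # ['REQ-2']
```

Any requirement that comes back uncovered is a candidate source of the production defect, per point 1(a).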

    trramai

    • Jan 18th, 2010

    We will check compatibility and the environment in which they are running the software. If it is a compatibility issue, then the fault is not the tester's. We will look into the root cause of the issue and...

  •  

    Prevent Defects

    How can the Testing Organization help prevent defects from occurring?


    Editorial / Best Answer

    kurtz182  

    • Member Since Nov-2009 | Nov 20th, 2009


    The earlier QA gets involved in the software development process, and the greater its presence in that process, the more it will help prevent defects.

    QA can review software design documents (cross-functional peer reviews) before software engineers begin developing their code.  QA can maintain its presence and continue to offer feedback throughout the development process until the initial release of the program to test.

    QA will not prevent defects from occurring, but can minimize the quantity and severity of defects by:
    1) Fully understanding the scope of the software development project,
    2) Getting involved at the earliest possible stage in the development cycle,
    3) Reviewing the project plan for development and offering feedback, and
    4) Maintaining a presence in the development process before the first release to test.


  •  

    Testing Effort Estimation

    How do you estimate testing effort in the following cases? 1. The client has only high-level requirements (e.g., 100 high-level requirements). 2. The client has only a prototype of the application. 3. The client has only use cases. 4. The client has requirements and use cases.


    Editorial / Best Answer

    kurtz182  

    • Member Since Nov-2009 | Nov 8th, 2009


    I'm not sure I fully understand the question, but I will take a stab at this:

    In our group, the testing effort is generally considered to be 30 percent of the total development effort in terms of resources when all of the deliverables are properly and thoroughly provided to the test team.   These deliverables include requirements and specifications.

    "Requirements" refers to the business requirements of the program.
    "Specifications" refers to the technical specifications of the program.

    In our group, it is a bonus if we receive use cases.  Typically, these aren't necessary when a complete and thorough listing of requirements has been provided.

    1.  If the client only has high-level requirements, then it depends on how "high" the requirements truly are.  If the requirements are so high-level that all of the necessary test cases cannot reasonably be produced from the information given, then extra effort will be needed to query for more complete requirements.  If testers must ask specific questions because details are not fully explained in the requirements, this will necessarily incur an increase in test time and resources.

    Likewise, the test team has not been given technical specifications for the program.  This will incur even more test time and resources when questions are raised such as, "What is the maximum number of characters users can enter in this text box?" or "Are users required to enter a phone number in any particular format?"   

    In scenario 1, the testing effort estimation is much greater than 30 percent and its specific value depends on information that has not been divulged.

    2.  If the client only has a prototype, the test team will make it clear that it cannot verify whether the program meets company business needs because there are no requirements.  Without requirements, the test team can only ensure that the program is stable and user-friendly.  The test team may push back and let the project manager/marketing/engineering know that it does not recommend an appreciable amount of test effort be applied to the project until requirements are provided.  If the test team is compelled to devote a full test run to the project, then it must be made clear that test will not endorse (sign off on) the project and will have to test the program again when the requirements are furnished to the test team.  This being the case, significantly more test time and resources will need to be applied to the project.

    As we saw earlier, even more test time and effort will be needed when questions arise due to the lack of technical specifications.

    In scenario 2, the testing effort estimation is much greater than 30 percent and its specific value depends on information that has not been divulged.

    3.  If the client furnishes use cases, and if the use cases were based on business requirements, then test will need to verify whether all of the requirements are covered by the use cases.  If not, then more test time and effort will be needed to ask the appropriate questions to fill the gap of missing requirements.  If the use cases cover all the requirements, and if we can get the project manager/marketing/engineering to confirm and sign off on this, then test can begin authoring test cases based on the use cases.

    Yet, as we saw earlier, more test time and effort will be needed when questions arise due to the lack of technical specifications.

    In scenario 3, the testing effort estimation is greater than 30 percent but less than scenarios 1 and 2.  Still, its specific value depends on information that has not been divulged. 

    4. If the client provides requirements and use cases, then we are still missing the technical specifications.  Yet this scenario gets us closest to the test team's resources being 30 percent of total development cost.  Nevertheless, it is still over 30 percent.
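The reasoning in the four scenarios above can be sketched as a rough back-of-the-envelope calculation. The 30 percent baseline comes from the answer itself; the per-gap surcharges below are invented placeholders for illustration, not industry figures.

```python
# Illustrative test-effort estimate. The 30% baseline is from the answer
# above; the surcharge values are hypothetical placeholders.

BASELINE_RATIO = 0.30  # test effort as a fraction of development effort

# Extra effort incurred when a deliverable is missing or incomplete
SURCHARGES = {
    "high_level_requirements_only": 0.15,
    "prototype_only": 0.25,
    "use_cases_only": 0.10,
    "missing_specifications": 0.10,
}

def estimate_test_effort(dev_effort_days, gaps):
    """Estimate testing effort in person-days, given deliverable gaps."""
    ratio = BASELINE_RATIO + sum(SURCHARGES[g] for g in gaps)
    return round(dev_effort_days * ratio, 1)

# Scenario 1: high-level requirements only, no technical specifications
print(estimate_test_effort(100, ["high_level_requirements_only",
                                 "missing_specifications"]))  # 55.0
```

The point the calculation makes concrete is the same as the answer's: every missing deliverable pushes the estimate above the 30 percent baseline.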

  •  

    Test Design

    What is Test Design? Is it preparing the test documents, or is it test identification? If "Test Design" refers only to test identification and "Test Execution" refers to validation, then where does "test document preparation" fit?


    Editorial / Best Answer

    kurtz182  

    • Member Since Nov-2009 | Dec 29th, 2009


    Test Design includes all of the steps leading up to Test Execution.  The preparation of test documents and the identification of tests are performed simultaneously.  Consequently, test document preparation is included in Test Design. 


  •  

    Handle Changes Before Ship Date

    How would you deal with changes being made a week or so before the ship date?


    Editorial / Best Answer

    kurtz182  

    • Member Since Nov-2009 | Dec 28th, 2009


    I would determine the feasibility of meeting the ship date by evaluating the:

    1) necessity of the change; [ why do we need it? ]

    2) urgency of the change; [ can we defer it? ]

    3) complexity of the change; [ how much time and effort will be required to test it? ]

    4) risks involved with not including the change; [ what happens if we don't do it? ]

    5) additional resources required to test the change. [ do I have the resources? ]

    Then I would either rally the resources to test the change or I would deliver a (hopefully compelling) case against releasing on the target ship date based on my evaluations. 
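The five-point evaluation above can be expressed as a simple scoring checklist. The criteria come from the answer; the 1-5 scoring scale and the acceptance threshold are invented for illustration (note that complexity is scored inversely, so a trivial-to-test change scores high).

```python
# Hypothetical scoring checklist for a late change request.
# Criteria are from the answer above; scores and threshold are invented.

CRITERIA = ["necessity", "urgency", "complexity",
            "risk_if_skipped", "resources_available"]

def should_take_change(scores, threshold=15):
    """Each criterion is scored 1-5 (complexity inversely: 5 = trivial).
    Accept the late change only if the total meets the threshold."""
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"unscored criteria: {missing}")
    return sum(scores[c] for c in CRITERIA) >= threshold

decision = should_take_change({
    "necessity": 5, "urgency": 4, "complexity": 2,
    "risk_if_skipped": 5, "resources_available": 3,
})
print(decision)  # True (total 19 >= 15)
```

A low total is the "(hopefully compelling) case against releasing on the target ship date" in numeric form; a high total supports rallying the resources to test the change.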
