Testing Interview Questions


Showing Questions 1 - 20 of 24 Questions

    SharePoint Application Testing

    How is SharePoint application testing done? I need a detailed analysis of how the test team validates the application, what the test scenarios would be, and the test coverage points.
    If anyone has done SharePoint testing, please share your inputs.


    • Dec 21st, 2018

    I am new to this. Can you please tell me what "restore/import/export site" and managed paths are?


    • Dec 21st, 2018

    Thanks a lot. I have one question here: can you list one test case for a negative scenario?


    What is the difference between an error, a defect, a fault, a failure, and a mistake?


    • Aug 19th, 2018

    The explanation is really good, thank you so much!

    Pradeep Singh

    • Jul 1st, 2016

    A mistake in coding is called an error. An error found by a tester is called a defect. A defect accepted by the developer is called a bug. If the build does not meet the requirements, it is a failure.


    Examples of Severity and priority of all combination

    Could anybody please give me good examples of:

    High severity and Low priority
    Low severity and High priority
    Low severity and Low priority
    High severity and High priority
    Med severity and Med priority

    ... in terms of functionality. Also, please tell me who assigns the priority.


    • Apr 4th, 2018

    A logo issue would be high priority and low severity, as it does not affect the functionality.


    • Mar 18th, 2018

    The logo and company name are the identity of the company or organisation, so how could it be low severity? It is high priority and high severity.


    Why did you choose software testing?...

    If an interviewer asks me "Why did you choose software testing?", what should I answer?


    • Sep 15th, 2017

    In my view, giving a true answer is best, because interviewers will see through a fake answer like "making the customer happy makes me happy". Keep it simple and give your genuine reasons; that will work in your favour.

    ankit p patel

    • Jun 28th, 2017

    In my view, the best answer to this question is: I chose QA because I want to see the customer so happy that they will always remember us, and that is only possible when we deliver an error-free application.
    That's it!


    What are Testing Techniques?


    Editorial / Best Answer


    • Member Since Jan-2006 | Jan 6th, 2006

     Black Box and White Box are testing types, not testing techniques.

    Testing techniques are as follows:

    The most popular black-box testing techniques are:

    • Equivalence Partitioning.
    • Boundary Value Analysis.
    • Cause-Effect Graphing.
    • Error-Guessing.

    The white-box testing techniques are:

    • Statement coverage
    • Decision coverage
    • Condition coverage         
    • Decision-condition coverage
    • Multiple condition coverage
    • Basis Path Testing
    • Loop testing
    • Data flow testing
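    The black-box techniques listed above can be illustrated with a short sketch. Boundary value analysis, for example, picks test values at, just below, and just above each boundary. The `classify_age` function and its 0-120 range below are invented purely for illustration:

    ```python
    def classify_age(age):
        """Toy function under test: categorize an age value."""
        if age < 0 or age > 120:
            raise ValueError("age out of range")
        return "minor" if age < 18 else "adult"

    # Boundary value analysis: test at, just below, and just above each
    # boundary. Assumed boundaries: 0 and 120 (valid range) and 18
    # (behaviour change).
    boundary_cases = [-1, 0, 1, 17, 18, 19, 119, 120, 121]

    for age in boundary_cases:
        try:
            print(age, classify_age(age))
        except ValueError as exc:
            print(age, "rejected:", exc)
    ```

    Values in the middle of a partition (say, 50) would belong to equivalence partitioning instead; boundary value analysis deliberately concentrates on the edges, where off-by-one defects cluster.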

    Aakash Bhavsar

    • Aug 7th, 2017

    Follow this: Methods of testing: 1) Black box testing 2) White box testing 3) Gray box testing 1) Black box testing techniques: Equivalence class partitioning, Boundary value analysis, Cause effec...

    Mayuri Padhye

    • Jul 19th, 2017

    If black box and white box testing are types of testing, then what are the techniques of testing?


    What is RTM? How is it useful in testing?

    Payal ugale

    • Jun 10th, 2017

    RTM is the Requirement Traceability Matrix; it traces each and every deliverable right from initiation to the final stage.


    • Mar 21st, 2017

    The Requirement Traceability Matrix (RTM) captures all requirements proposed by the client or development team. It is used to check that all test cases are covered, so that no functionality is missed.
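    The coverage check an RTM supports can be sketched in a few lines. This is a minimal illustration only; the requirement and test case IDs below are hypothetical:

    ```python
    # A traceability matrix as a mapping from requirement IDs to the
    # test cases that exercise them (IDs are made up for this example).
    rtm = {
        "REQ-001": ["TC-01", "TC-02"],
        "REQ-002": ["TC-03"],
        "REQ-003": [],  # no mapped test case -> coverage gap
    }

    # Any requirement with an empty test case list is untested.
    uncovered = [req for req, cases in rtm.items() if not cases]
    print("Requirements without test coverage:", uncovered)
    ```

    In practice the matrix is usually a spreadsheet or a view in a test management tool, but the underlying check is exactly this: every requirement must map to at least one test case.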


    Exact difference between alpha and beta testing


    • Jun 7th, 2017

    Alpha testing takes place at the developer's site and is carried out by internal teams, before release to external customers. This testing is performed without the involvement of the development teams. Beta testing ...


    • Dec 4th, 2014

    Alpha testing: pre-release testing by end-user representatives at the developer's site.

    Beta testing: testing performed by potential customers at their own location.


    What is traceability matrix??


    Editorial / Best Answer

    Answered by: Sahithi

    • Nov 25th, 2005

     This matrix defines the mapping between customer requirements and the test cases prepared by test engineers. It is called the requirements traceability matrix (or requirements validation matrix). It is used by the testing team to verify how far the prepared test cases cover the requirements of the functionalities to be tested.


    • Aug 28th, 2015

    The mapping between test cases and requirements is called a traceability matrix.

    Bhuvaneshwari K

    • Oct 10th, 2014

    A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents that require a many-to-many relationship to determine the completeness of the relations...


    How would you define a bug?


    • Nov 11th, 2013

    A bug is nothing but a mismatch between the actual result and the expected result.

    Bipasha Banerjee

    • Nov 8th, 2013

    Any functionality not working as per the customer's requirement makes the product a failure. The words bug, incident, issue, defect, failure, and fault are often used interchangeably. A computer bug is an error, flaw, or mistake in a computer program that produces an incorrect result.
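    The expected-vs-actual comparison at the heart of any defect report can be shown in a couple of lines. The `add` function below is deliberately buggy, purely for illustration:

    ```python
    def add(a, b):
        # Deliberately buggy implementation: subtracts instead of adding,
        # so the actual result will not match the expected result.
        return a - b

    expected = 5
    actual = add(2, 3)

    if actual != expected:
        print(f"Defect found: expected {expected}, got {actual}")
    ```

    A real test case would also record the steps to reproduce, the environment, and the requirement the expectation came from, but the core check is always this comparison.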


    What are Entry and Exit criteria in Test Plan?

    Shahuraj Patil

    • Oct 14th, 2012

    Could you explain quality control and quality assurance and their processes, with examples?


    Conflict with Developer

    On what issues will a user have conflict with a developer?


    Editorial / Best Answer


    • Member Since Nov-2009 | Nov 10th, 2009

    Conflicts with developers arise from a lack of patience and empathy, when a tester and developer forget that they are ultimately striving to reach the same goal.

    Conflicts generally arise through misunderstanding due to miscommunication. In my organization, this typically occurs when a developer closes a defect stating that it can't be reproduced. If this situation is not handled carefully, it can lead to conflict.

    Conflict initiators are:
    1)  Tester did not provide a clear and/or accurate description of the defect.  For example, the tester may report the error name or text and the developer might be looking for an error number. 

    2)  Tester did not provide a thorough description of the defect.  For example, tester may not mention that the defect was found in a particular test environment and the developer assumed it was caused in a different one.

    3)  Developer misinterprets the defect even though it was authored accurately, clearly and thoroughly.  Devs aren't perfect.  Under time constraints to reach milestones, they sometimes overlook key points in defect reports and end up closing bugs that should get fixed. 

    Conflict negators are:
    1)  Establish a standard defect reporting process and get Test and Development to fully understand and adopt it. 

    2) Empathy: Understand that everyone is under pressure to meet deadlines and we are all on the same team to reach a common goal.

    3) Patience: Understand that nobody's perfect.


    • Sep 20th, 2012

    Tester-developer conflict occurs when the developer says "could not see the reported bug". This can happen when the tester's test environment differs from the developer's, or when the bug is not reproducible, occurri...


    Give an example for High severity and Low Priority ?


    Editorial / Best Answer


    • Member Since Jan-2006 | Jan 9th, 2006

    1] High severity and low priority: if an application crashes after multiple uses of a functionality (for example, the Save button used 200 times crashes the application), the severity is high because the application crashed, but the priority is low because there is no need to debug it right now; it can be fixed in a few days.

    2] High priority and low severity: if the logo of a website, say "Yahoo", is spelled "Yho", the priority is high but the severity is low.

    It affects the name of the site, so it is important to fix quickly (high priority), but a spelling mistake is not going to crash anything (low severity).

    Amit Hambarde


    What is a job of a tester in SDLC?


    Editorial / Best Answer


    • Member Since Mar-2007 | Jul 29th, 2007

    Overall Business Requirements Phase:
                  No tester is required, but the User Acceptance Test (UAT) cases need to be defined by the customer.

    Requirements Analysis Phase:
                  The tester can go through the requirements document and write the system testing test cases.

    High-Level Designing Phase:
                  The tester needs to write the integration test cases.

    Low-Level Designing Phase:
                  The component testing test cases are documented.

           Unit testing (done by developers only).

    All the test case writing activities start at the end of the corresponding phase.

    Once the coding has been done, testing starts phase by phase.


    HP Mercury's Quality Center and Performance Center

    What is the difference between HP Mercury's Quality Center and Performance Center?


    • Dec 6th, 2010

    HP Quality Center is the enterprise repository tool where all the functional test cases are created and executed and defects are logged. Performance Center is the enterprise performance testing tool where the performance test cases/scripts are created and performance test scenarios are executed.


    What is Software Testing Principle?


    • Dec 14th, 2009

    Besides the more fundamental principles supplied by AN_QA from the syllabus for Certified Tester (which are darn good principles, by the way), many testing principles depend on the software developmen...


    What is the difference between DATA VALIDITY and DATA INTEGRITY?


    • Nov 17th, 2009

    The difference between data validity and data integrity is simply this: data validity deals with data that is input into a system (e.g. a database), while data integrity deals with the maintenance...


    • Apr 26th, 2007

    Data Integrity: important data stored in the database includes the catalog, pricing, shipping tables, tax tables, order database, and customer information. Testing must verify the correctness of the stor...
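    One way to picture the distinction, as a rough sketch: validity checks happen on input, before data enters the system, while integrity is a rule the store itself enforces. The table and column names below are hypothetical, and SQLite is used only because it ships with Python:

    ```python
    import sqlite3

    # Data validity: reject bad input before it reaches the database.
    def is_valid_price(value):
        return isinstance(value, (int, float)) and value >= 0

    # Data integrity: the database itself enforces the rule via a
    # CHECK constraint, regardless of what the application sends.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE catalog (sku TEXT PRIMARY KEY,"
        " price REAL CHECK (price >= 0))"
    )

    conn.execute("INSERT INTO catalog VALUES ('A1', 9.99)")
    try:
        conn.execute("INSERT INTO catalog VALUES ('A2', -5.0)")
    except sqlite3.IntegrityError:
        print("integrity constraint rejected the invalid row")
    ```

    Testing both layers matters: a validity bug lets bad data be submitted, while an integrity bug lets bad data persist even when it sneaks past the application.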


    Handle Bugs in Live / Production

    Suppose a bug has been found in live/production in a piece of functionality you have tested. How will you explain this to the PM/Manager?


    Editorial / Best Answer


    • Member Since Nov-2009 | Nov 10th, 2009


    This issue cannot be considered 'out of scope', and there must be a test case for it, because it is 'a piece of functionality you have carried out testing'.

    In this scenario, the tester will need to research whether the issue was caused by 1) a faulty test case, 2) a difference between the Production and Test environments, or 3) the tester's own mistake or lack of follow-through. Whatever the case may be, it is the tester's responsibility to isolate the problem and take steps to correct it. Once this has been accomplished, the particulars of the issue should be fully disclosed to the appropriate individuals (Project Manager, Test Manager, etc.).

    1. Faulty Test Case:
    a) Does the test case accurately map to the proper business requirement? If not, then perhaps the business requirement was missed, and this becomes the source of the problem.
    b) Is the business requirement incorrect? If so, then the requirement needs to be rewritten and new test case(s) produced from the new requirement.
    c) Was the test case authored improperly? That is, did the tester misunderstand the business requirement and create an improper test case? If so, then the test case(s) need to be reauthored based on the newly corrected understanding.

    2. Difference between the Production and Test Environments:
    Does the defect occur only in the Production environment and not in the Test environment? If so, then this must be made perfectly clear to management. The tester may need to work with other functional groups to figure out how to bring the Test environment into alignment with Production in order to prevent the issue from recurring.

    3. Tester oversight or lack of follow-through: As humans, we sometimes make mistakes. There are situations when the time a test team is allowed to test becomes constricted and testers feel they must hurry to finish their test runs. In these situations, testers inadvertently miss test steps or even entire test cases. And it is Murphy's Law that the overlooked test case will be the one that could have uncovered a significant defect! If this happens, the tester must own up to the error and inform management. I have made my share of mistakes; we all do. It is best to admit the blunder and take personal measures to ensure it doesn't happen again. The most important aspect of ANY relationship, work or otherwise, is trust, and if you try to cover up your mistakes, you will quickly lose the trust of your management and cohorts. Honesty is truly the best policy in any circumstance!


    • Jan 18th, 2010

    We will check the compatibility and the environment in which they are running the software. If it is a compatibility issue, then the fault is not the tester's. We will go into the root cause of the issue and...
