Hi friends, this article gives an overview of software testing, the different types of testing that can be done, and the activities involved in manual software testing.
1.1 What is Software Testing?
The IEEE definition states: “Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements.” There are many published definitions of software testing; however, all of them boil down to essentially the same thing:
Software testing is the process of executing software in a controlled manner, in order to answer the question “Does the software behave as specified?” On the whole, the objectives of testing can be summarized as:
• Testing is a process of executing a program with the intent of finding an error.
• A good test is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
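To make these objectives concrete, here is a minimal sketch of a test designed to find an error. The `is_leap_year` function and its deliberate bug are hypothetical examples, not taken from any real code base; the test targets the century-year boundary, which is exactly the input a developer is most likely to get wrong.

```python
def is_leap_year(year):
    # Hypothetical buggy implementation: it ignores the
    # "divisible by 400" exception for century years.
    return year % 4 == 0 and year % 100 != 0

# A good test probes the boundary most likely to be wrong.
# Year 2000 is a leap year, so this suite uncovers the bug.
for year, expected in [(1996, True), (1900, False), (2000, True)]:
    actual = is_leap_year(year)
    print(year, "PASS" if actual == expected else "FAIL")
```

The third case is the “successful test” in the sense above: it uncovers a previously undiscovered error rather than merely confirming behaviour that already works.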
1.2 Why Test?
• Developers are not infallible
• Bugs can exist in compilers, languages, databases and operating systems
• Certain bugs are easier to find in testing
• We do not want customers to find bugs
• Post-release debugging is expensive
• Good test design is challenging and rewarding
1.3 Principles of Testing
• All tests should be traceable to requirements
• Tests should be planned ahead of execution
• Pareto principle – isolate suspect components for rigorous testing (80% of defects can be traced back to 20% of components)
• Testing should proceed in an outward manner (unit -> system)
1.4 Can Testing Be Replaced?
Though there are other approaches to producing good software (inspections, reviews, design style, static analysis, language checks and so on), software testing cannot be done away with. Review, inspect, read and walk through, but always test.
1.5 Test Strategy Definition
A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used.
1.6 Test Plan
The next task is the preparation of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment. Different test plans are prepared based on the level of testing.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfills the requirements or design statements of the appropriate software specification.
Unit test plan: a unit test plan describes the plan for testing individual units / components / modules of the software.
Integration test plan: an integration test plan describes the plan for testing integrated software components.
System test plan: a system test plan describes the plan for testing the system as a whole.
Acceptance test plan: an acceptance test plan describes the plan for acceptance testing of the software. Normally the acceptance test plan is prepared by the customer.
1.7 Test Design
Once the test plan for a level of testing has been written, the next stage is to specify a set of test cases or test paths for each item to be tested at that level. A number of test cases will be identified for each item to be tested at each level of testing.
Each test case will specify how the implementation of a particular requirement or design is to be tested and the criteria for success of each test.
• A unit test specification will detail the test cases for testing individual units of the software.
• An integration test specification will detail the test cases for each stage of integration of tested software components.
• A system test specification will detail the test cases for system testing of the software.
• An acceptance test specification will detail the test cases for acceptance testing of the software.
It is important to design test cases for both positive and negative testing. Positive testing checks that the software does what it is supposed to do; negative testing checks that the software does not do what it is not supposed to do.
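As a minimal sketch of this distinction, the `divide` function below is a hypothetical example: the positive test checks the specified behaviour, while the negative test checks that invalid input is rejected rather than silently accepted.

```python
def divide(a, b):
    # Hypothetical function under test: specified to raise
    # ValueError on division by zero instead of crashing.
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Positive test: the software does what it is supposed to do.
assert divide(10, 2) == 5

# Negative test: the software does not do what it is not supposed
# to do, i.e. bad input is rejected, not processed.
try:
    divide(1, 0)
except ValueError:
    print("negative test passed: invalid input was rejected")
else:
    raise AssertionError("negative test failed: bad input was accepted")
```

A test suite that contains only positive cases can look healthy while the software still misbehaves on invalid input, which is why both kinds of case belong in every test specification.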
1.8 Test Execution and Reporting
The next stage is performing the testing itself. The output of test execution is recorded in a test results file, normally called the test log. The actual results are then compared with the expected results in the test specification to determine whether each test case has been successful or not, and a “pass / fail” verdict is marked against the respective test case. If a test case fails, it is re-run and the results are noted.
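This execute-compare-record loop can be sketched as follows. The `add` function and the test cases here are hypothetical; the point is the comparison of actual against expected results and the pass/fail verdict recorded in the test log.

```python
def add(a, b):
    # Hypothetical function under test.
    return a + b

# Test cases taken, in this sketch, from a test specification:
# each has an identifier, inputs, and an expected result.
test_cases = [
    {"id": "TC-01", "inputs": (2, 3), "expected": 5},
    {"id": "TC-02", "inputs": (-1, 1), "expected": 0},
]

# Execute each case, compare actual with expected, and record
# the verdict in the test log.
test_log = []
for tc in test_cases:
    actual = add(*tc["inputs"])
    verdict = "PASS" if actual == tc["expected"] else "FAIL"
    test_log.append((tc["id"], actual, tc["expected"], verdict))

for case_id, actual, expected, verdict in test_log:
    print("%s: actual=%s expected=%s -> %s" % (case_id, actual, expected, verdict))
```

Failed cases would then be re-run after a fix, with the new results appended to the same log.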
A flowchart depicting the testing process is attached.
1.9 When Do You Stop Testing?
Before we ask this question, let us try to answer another one:
“Is complete testing possible?”
Obviously, the answer is no. Proving that a program is completely free of bugs is practically impossible and theoretically a mammoth exercise.
Therefore the aim of testing is to provide a suitable, convincing demonstration that the program has been tested enough. Some of the stopping criteria are:
• Time runs out (a poor criterion!)
• Resources run out
• All test cases execute without producing any error
• A certain number of errors have been found
• All statements and all branches have been executed and all test cases execute without failure
• Testing becomes unproductive (the number of errors found per person per day drops)
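The statement-and-branch criterion above is stricter than it may look: a single test case can execute every statement while still leaving a branch untaken. The `clamp` function below is a hypothetical example illustrating the gap.

```python
def clamp(x, limit):
    # Hypothetical function: caps x at limit.
    if x > limit:
        x = limit
    return x

# clamp(10, 5) alone executes every statement (the if-body runs),
# yet the path where the condition is false is never exercised.
# Branch coverage additionally requires a case like clamp(3, 5).
assert clamp(10, 5) == 5   # takes the True branch
assert clamp(3, 5) == 3    # takes the False branch
```

This is why “all statements executed” alone is a weaker stopping criterion than “all statements and all branches executed”.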
2.0 Useful Tips
• Manage testing as seriously as development projects are managed
• Foster a quality culture that wants to find and prevent problems
• Set clear directions and expectations
• Delegate responsibility and accountability to good people
• Invest in team tooling and support infrastructure
2.1 Common Pitfalls
Some of the attitudes that hinder testing itself are:
• Optimism
• Belief that the system works
• Negative attitude towards effective testing
• Ego
• Not wanting to fail
• Conflict between testers and developers
• Testing is the least structured activity
• Testing is expensive
• Delivery commitments
Some of the pitfalls of manual testing are:
• Testing speed cannot match development speed
• Each build is not fully tested
• Test coverage decreases, and more bugs are left undetected