Please find the attached PDF; it may be useful for understanding this.
Testing client/server applications requires some additional techniques to handle the new effects introduced by the client/server architecture.
Testing client/server systems is definitely different, but it's not "from another planet" different.
We're still testing software, so the fundamentals still apply.
This class includes all the core testing techniques that you'll need to test any system, including systems that have a client/server design, plus the special techniques needed for client/server.
Even if you're an experienced tester we think you'll find some interesting and useful new ideas for testing all types of systems.
So what's different about client/server?
Testing client/server applications is more challenging than testing traditional systems because:
1. New kinds of things can go wrong.
(Example: Data and messages can get lost in the network)
2. It’s harder to set up, execute, and check the test cases.
(Example: Testing for proper responses to timeouts)
3. Regression testing is harder to automate.
(Example: It’s not easy to create an automated ‘server is busy’ condition)
4. Predicting performance and scalability becomes critical.
(Example: It seems to work fine with 10 users. But what about with 1,000 users? 10,000 users?)
Obviously some new techniques are needed to test client/server systems. Most of these apply to distributed systems of all kinds, not just the special case of a client/server architecture. So as you encounter various design permutations, “three-tiered” and so on, you’ll be able to put together an effective test plan for them.
The key to understanding how to test these systems is to understand exactly how and why each type of potential problem arises. With this insight, the testing solution will usually be obvious. So we’ll take a look at how things work, and from that we’ll develop the new testing techniques we need.
First, let’s define “client/server”.
We mean it very broadly: one program requests a service from another program. Often the service is ‘providing data’, but we won’t make any assumptions about what the service is.
Here’s an example of the new kinds of things that can go wrong:
Suppose Program A (a client program, because it makes a request) asks Program B, a server program, to update some fields in a database.
Program B is on another computer.
Program A expects Program B to report that either:
(1) the operation was successfully completed,
or
(2) the operation was unsuccessful (for example, because the requested record was locked).
However, time passes and A hears nothing.
What should A do?
Depending on the speed and reliability of the network connecting A and B, there comes a time when A must conclude that something has probably gone wrong. But what?
Some possibilities are:
1. The request got lost and it never reached B.
2. B received the request, but is too busy to respond yet.
3. B got the request, but crashed before it could begin processing it.
B may or may not have been able to store the request before it crashed.
If B did store the request, it might try to service it upon awakening.
4. B started to process the request, but crashed while processing it.
5. B finished processing the request (successfully or not), but crashed before it could report the result.
6. B reported the result, but the result got lost and was never received by A.
There are more possibilities, but you get the idea.
So what should A do?
The problem: A can’t tell the difference between any of the above cases (without taking some further action).
Dealing with this problem involves complex schemes such as two-phase commit.
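To make the client's dilemma concrete, here is a minimal sketch in Python. The transport is simulated with a queue, and the names (`Client`, `update_record`, the timeout value) are illustrative assumptions, not part of any real framework. The point is that when no reply arrives, the only honest answer the client can give is "unknown":

```python
import queue

class Client:
    """Minimal client sketch: it cannot tell *why* the server went silent."""

    def __init__(self, reply_queue, timeout_s=0.1):
        self.replies = reply_queue   # simulated transport from Program B
        self.timeout_s = timeout_s   # illustrative timeout value

    def update_record(self, send_request):
        send_request({"op": "update", "record": 42})  # may be lost in transit
        try:
            reply = self.replies.get(timeout=self.timeout_s)
        except queue.Empty:
            # Scenarios 1-6 above are indistinguishable from here: the
            # request, the processing, or the reply may have been lost.
            return "unknown"
        return "ok" if reply.get("status") == "ok" else "failed"

# A healthy server replies; a crashed or overloaded one leaves the queue empty.
replies = queue.Queue()
replies.put({"status": "ok"})
assert Client(replies).update_record(lambda req: None) == "ok"
assert Client(queue.Queue()).update_record(lambda req: None) == "unknown"
```

A robust Program A must map that "unknown" result to something intelligent: retry, query the server's state, or alert the user.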
When we test the client program A, we need to see whether it’s robust enough to at least do something intelligent for each of the above scenarios. Otherwise we can’t say that A “works right”.
This example illustrates one new type of testing that we have to do for client/server systems – testing the client program for correct behavior in the face of uncertainty. To do that we're going to have to create or simulate various conditions, such as "no response from server".
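One way to create that "no response from server" condition in a test harness is to run a mock server that accepts the connection but deliberately never replies. A sketch, assuming a plain TCP exchange (the port choice, timeout, and message format are invented for illustration):

```python
import socket
import threading

def silent_server(host="127.0.0.1"):
    """A mock server that accepts a connection but never replies --
    one way to create the 'no response from server' condition."""
    srv = socket.socket()
    srv.bind((host, 0))              # let the OS pick a free port
    srv.listen(1)

    def run():
        conn, _ = srv.accept()
        conn.recv(1024)              # read the request, say nothing
        threading.Event().wait(5)    # hold the connection open, silently

    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()[1]      # port the client should contact

def query(port, timeout_s=0.2):
    """Client under test: must handle the silent server gracefully."""
    with socket.create_connection(("127.0.0.1", port), timeout=timeout_s) as s:
        s.sendall(b"update record 42\n")
        try:
            return s.recv(1024)      # blocks until a reply or the timeout
        except socket.timeout:
            return None              # 'no response' detected
```

Calling `query(silent_server())` returns `None`, which is exactly the condition the client program's error handling has to be tested against.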
Later, we’ll look at the other new type of testing, this time on the server side: performance testing and scalability issues.
For now let's look at some major reasons why client/server systems cause new effects:
(1) Most of these systems are event-driven.
Basically, this means:
"Nothing Happens Until Something Happens"
Most program action is triggered by an event, such as the user hitting a key, some I/O being completed, or a clock timer expiring.
The event is intercepted by an "event handler" or "interrupt handler" piece of code, which generates an internal message (or lots of messages) about what it detected.
This means that it's harder to set up test cases than it is, say, to define a test case for a traditional system that prints a check.
To set up a test case you need to create events, or to simulate them. That's not always easy, especially because you have to generate these events when the system is in the proper state - but there are ways to do it.
We have a whole lesson coming up on States and Events, and how to use them for testing.
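As a minimal sketch of what "creating or simulating events" can look like, here is a toy event-driven skeleton; the event names and handlers are made up for illustration. A test case becomes a scripted sequence of injected events:

```python
from collections import deque

class EventDrivenApp:
    """Tiny event-driven skeleton: nothing happens until an event arrives."""

    def __init__(self):
        self.queue = deque()
        self.log = []
        self.handlers = {
            "key_press":     lambda e: self.log.append(f"typed {e['key']}"),
            "io_complete":   lambda e: self.log.append("io done"),
            "timer_expired": lambda e: self.log.append("timeout!"),
        }

    def post(self, event):
        """A test can inject synthetic events here."""
        self.queue.append(event)

    def run_until_idle(self):
        """The event loop: dispatch each queued event to its handler."""
        while self.queue:
            event = self.queue.popleft()
            self.handlers[event["type"]](event)

# A test case is just a scripted sequence of injected events:
app = EventDrivenApp()
app.post({"type": "key_press", "key": "A"})
app.post({"type": "timer_expired"})
app.run_until_idle()
assert app.log == ["typed A", "timeout!"]
```

In a real system the injection point is usually a test hook, a driver, or a simulator, but the shape of the test is the same: put the system in the right state, feed it events, and check what the handlers did.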
(2) The systems never stop.
Many client/server systems are set up to never stop running unless something goes really wrong.
It's true for the servers, and in many cases, it's true for the client machines.
Traditional systems complete an action, such as printing a report, and then turn in for the night. When you restart the program it's a whole fresh new day for it.
In systems that don't stop (on purpose), things are different.
Errors accumulate.
As someone put it, "Sludge builds up on the walls of the operating system."
So defects like memory leaks, which probably wouldn't affect most traditional systems, will eventually bring a non-stop system down if they aren't detected and corrected.
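A soak-test sketch of the idea: run the same workload repeatedly and check whether allocated memory keeps growing. The workload, iteration count, and 100 KB growth threshold are all illustrative assumptions; Python's standard `tracemalloc` module does the measuring.

```python
import tracemalloc

_cache = []  # simulated defect: a reference kept forever

def handle_transaction(leaky):
    data = list(range(1000))       # per-transaction working memory
    if leaky:
        _cache.append(data)        # the 'sludge' that builds up

def leaks(workload, iterations=200):
    """Return True if memory grows well beyond the post-warm-up baseline."""
    tracemalloc.start()
    workload()                                     # warm-up allocation
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        workload()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current - baseline > 100_000            # ~100 KB threshold

assert leaks(lambda: handle_transaction(leaky=True)) is True
assert leaks(lambda: handle_transaction(leaky=False)) is False
```

A single run of the leaky transaction looks fine; only repetition over time exposes it, which is exactly why non-stop systems need this kind of test.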
One good way to minimize these effects is to use something called SOW and HILT variables in testing, which we'll cover in detail in a few lessons.
(3) The system contains multiple computers and processors which can act (and fail) independently.
Worse, they communicate with each other over less than perfectly reliable communication lines.
These two factors are the root cause of the problem detailed above, where Program A makes a request of Program B, but gets no response.
We'll cover several techniques for testing the robustness of programs in the kind of circumstances that client/server systems can generate.
First, we want to look at several concepts that form the bedrock for all of testing.
Summary:
Client/server systems can create conditions that require new testing techniques.
Despite the added complexity, there are methods available to deal with most of the testing problems.
Client/server systems aren't that different from traditional systems. We're going to need to use basic testing techniques as well as more exotic ones to get the job done.
Can anyone explain to me the difference between web testing and client/server testing? I have done only manual web functionality testing. What problems (or defects) could be faced in integration and system testing? Is client/server testing the same as intranet testing? Thanks in advance.
A few things need to be kept in mind while testing c/s applications and web applications, such as the protocols involved. So:
1. Know what protocols are being used by your web app or c/s application.
2. On a web app, we generally do URL testing and hyperlink testing.
3. The server architecture will be different for web and c/s apps, so you need to look into this as well.
4. The levels of security and penetration testing are different for each.
5. Performance testing also takes a different shape for each, which you need to consider.
Please let me know if you have any doubts.
Thanks for the reply, but I still have doubts. I was trying to understand what the difference in test cases will be. Of course, there will not be URL testing. Considering the server architecture, say for a simple two-tier system, what will the test cases/test plan look like? Or do the functionality cases remain the same, with the difference being in the responses? I am really not clear...
I also read somewhere that integration testing is especially relevant to c/s or distributed systems... why?
Hi,
The main aspect that differentiates C/S testing from web-based application testing is System Testing, or end-to-end testing.
Functional integration testing would more or less remain along the same lines, as it focuses on testing the integration between individual modules, whether on the client side of a C/S system or in a web-based application.
But when testing an n-tier C/S system, system testing would actually encompass testing the data flow and navigation progressing from:
Client - AppServer - DBServer.
Based on the tier architecture, more entities would be added to the above flow.
But generally we test web-based applications with scope limited to the website alone, and do not focus on testing the WebServer part.
So your system testing test cases would take into consideration testing:
- an input supplied at the client side
- the response from the server side
- the changes effected at the DB server side
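That three-point checklist can be sketched as a toy end-to-end test, with each tier reduced to an in-memory function (all the names and the data format here are invented for illustration):

```python
# Three in-memory 'tiers' standing in for Client -> AppServer -> DBServer.
db = {}

def db_server(op, key, value=None):             # DB tier
    if op == "write":
        db[key] = value
        return "stored"
    return db.get(key)

def app_server(request):                         # middle tier
    if request["action"] == "save":
        return db_server("write", request["key"], request["value"])
    return db_server("read", request["key"])

def client(action, key, value=None):             # client tier
    return app_server({"action": action, "key": key, "value": value})

# One end-to-end test case checks all three points named above:
reply = client("save", "order-1", "2 widgets")   # input supplied at the client
assert reply == "stored"                         # response from the server side
assert db["order-1"] == "2 widgets"              # change effected at the DB side
```

A real system test does the same thing through the actual tiers, which is why its cases must assert at every tier, not just on what the client screen shows.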
Cheers....
Thanks...! Makes sense to me.
Hi,
The following will explain the difference between client/server and web-based applications.
In a client/server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine. You will test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used on intranet networks. You are aware of the number of clients and servers, and of their locations, in the test scenario.
A web application is a bit different and more complex to test, as the tester doesn't have that much control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it in different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing, and load testing.
Testing will happen based on the above explanations.
Regards,
Ganesan
How do I identify whether a project is client/server or web-based?
Hi,
If you are accessing the application using a browser, then it's a client/server application (thin client).
A client/server application can also be non-browser based (thick client).
For example: the SAP client, Yahoo Messenger, Gtalk, etc. (Sometimes it's hard to tell whether a thick client is part of a client/server system or standalone.)
If you still can't find out, you can use the Ethereal packet sniffer (now known as Wireshark) to see whether the application is a client/server application. It will capture the communication if there is any. (You can download it for free.)
Thanks,
J7
Explained the basic concept very nicely
Web-based application:
1) It is accessible to the entire world. 2) Clients are unlimited. 3) The server doesn't have any information about the clients.
Client/server application:
1) It is not accessible to the entire world. 2) Clients are limited. 3) The server knows the configuration of each and every client.
Very simply: web-based applications are examples of 3-tier client/server architecture, client/server applications are 2-tier, and Windows (desktop) applications are 1-tier.
thank you for the answer !!
In the case of a desktop application, the application itself maintains state (a session) for the user. A web application runs over HTTP, which is stateless, so the user's identity must be re-established on every request (for example, via a session cookie or token).
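A minimal sketch of that statelessness, using an in-memory session store; the credential check, token scheme, and response strings are all invented for illustration. Every request must carry the token back, because HTTP itself remembers nothing between requests:

```python
import secrets

sessions = {}  # server-side session store (in-memory sketch)

def login(user, password):
    """Issue a token the client must send back with every later request."""
    if password != "hunter2":                # stand-in credential check
        return None
    token = secrets.token_hex(8)
    sessions[token] = user
    return token

def handle_request(path, token=None):
    """Each request is judged only by what it carries with it."""
    user = sessions.get(token)
    if user is None:
        return "401 please authenticate"     # no valid token in this request
    return f"200 hello {user}, here is {path}"

token = login("alice", "hunter2")
assert handle_request("/inbox") == "401 please authenticate"
assert handle_request("/inbox", token) == "200 hello alice, here is /inbox"
```

A desktop application can keep `user` in a variable for the whole run; a web application has to rebuild that context from the token on every single request.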
Very Nice Explanation, Thank You very much
Client/server appn:
The server is in one place and we access it remotely.
Fewer issues.
Web-based appn:
A browser is needed, and it's a 3-tier appn. There are more issues with web appns.
Hi, I need example test cases for a web application...
Last edited by admin; 12-13-2012 at 08:05 AM.