-
Moderator
Testing for Agile Software Development
In this tutorial you will learn about testing for agile software development: background, understanding agile software development, how the testing approach differs in an agile development scenario, what to test, typical bugs found when doing agile testing, steps taken to test effectively in an agile development methodology, ensuring software test coverage, and a summary.
-
Junior Member
Re: Testing for Agile Software Development
This article gives a good, brief introduction to testing for agile software development. A very interesting read.
-
Expert Member
Re: Testing for Agile Software Development
Very good article; do read it.
-
Junior Member
Re: Testing for Agile Software Development
A Metric Leading to Agility

Nearly every metric can be perverted, since up-ticks and down-ticks in the metric can come from good or bad causes. Teams driven by metrics often game the metrics rather than deliver useful software. Ask the team instead to deliver and measure running tested features, week in and week out, over the course of the entire project. Keeping this single metric looking good demands that a team become both agile and productive.

Can metrics lead to agility?

Camille Bell, on the agile project management mailing list, asked about using metrics to produce agility. In the ensuing discussion, I offered this: what is the point of the project? I'm just guessing, but I think the point of most software development projects is software that works, and that has the most features possible per dollar of investment. I call that notion running tested features (RTF), and in fact it can be measured, to a degree. Imagine the following definition of RTF:

1. The desired software is broken down into named features (requirements, stories) which are part of what it means to deliver the desired system.
2. For each named feature, there are one or more automated acceptance tests which, when they work, will show that the feature in question is implemented.
3. The RTF metric shows, at every moment in the project, how many features are passing all their acceptance tests.

How many customer-defined features are known, through independently defined testing, to be working? Now there's a metric I could live with.
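To make that definition concrete, here is a minimal Python sketch of how RTF might be computed. The Feature class and the sample data are hypothetical illustrations, not part of any particular tool; the only rule that matters is the one above: a feature counts only when it has acceptance tests and all of them pass.

# Minimal sketch of computing RTF, assuming each named feature
# carries the pass/fail results of its own automated acceptance tests.
# Feature and the sample data below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str                 # customer-facing feature (story) name
    test_results: list[bool]  # pass/fail of its acceptance tests

def rtf(features: list[Feature]) -> int:
    """Count features whose acceptance tests all pass.

    A feature with no tests yet is not counted: untested code
    is not a running tested feature.
    """
    return sum(
        1 for f in features
        if f.test_results and all(f.test_results)
    )

features = [
    Feature("customer can place an order", [True, True]),
    Feature("customer can cancel an order", [True, False]),  # one test failing
    Feature("customer can view order history", []),           # no tests yet
]

print(rtf(features))  # -> 1: only the fully tested, fully passing feature counts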
What should the RTF curve look like?

With any metric, we need to be clear about what the curve should look like. Is the metric defects? We want them to go down. Is it lines of code? Then, presumably, we want them to go up. Number of tests? Up. RTF should increase essentially linearly from day one through to the end of the project. To accomplish that, the team will have to learn to become agile. From day one, until the project is finished, we want to see smooth, consistent growth in the number of running, tested features.

• Running means that the features are shipped in a single integrated product.
• Tested means that the features are continuously passing tests provided by the requirements givers -- the customers, in XP parlance.
• Features? We mean real end-user features, pieces of the customer-given requirements, not techno-features like "install the database" or "get the web server running".

Running tested features should start showing up in the first week or two of the project, and should be produced in a steady stream from that day forward. Wide swings in the number of features produced (typically downward) are reason to look further into the project to see what is wrong. But our purpose here is to provide a metric which, if the team can produce to the graph, will induce them to become agile. A team producing running tested features in a straight line from day zero to day N will have to become agile to do it. They can't do big design up front -- if they do, they'll ship no features. They can't skip acceptance testing -- if they do, their features aren't tested. They can't skip refactoring -- if they do, their curve will drop. And so on.

It might be possible to game RTF. My guess, however, is that it would be easier just to become agile. And the thing is, it would take a truly evil team to respond to demands for features with lies, and I don't think there are many evil teams out there. The reason that XP and Scrum and Crystal Clear work is that they demand the regular delivery of real software. That's what RTF is about.

What should RTF not look like?

The RTF curve isn't right if it starts flat for a few months. RTF metrics are right when features start coming out right away. BDUF (big design up front) projects and projects that focus on infrastructure first need not apply. RTF isn't right if it starts fast with features and then slows down. Projects that don't refactor need not apply. RTF isn't right unless it starts at day one of the project and delivers a consistent number of features right along. Let's look at how this metric might look on an agile project and, in comparison, on a waterfall project.

Agile projects: early features

An agile project really does focus on delivering completed features from the beginning. Assuming that the features are tested as you go, the RTF metric for an agile project is a simple rising line, often called a "burn-up" chart: real live features, done, tested, delivered consistently from the very beginning of the project to the very end. Demand this from any project, and they must become agile in response to the demand.
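For illustration, here is a small Python sketch (using matplotlib) of the contrast the article draws between the two curves. The weekly numbers are invented for the picture, not data from any real project.

# Sketch of the two RTF curves, with invented weekly numbers:
# an agile project delivering features steadily from week one, versus
# a waterfall project whose features appear only after the analysis,
# design, and coding phases (and sag again when testing finds defects).
import matplotlib.pyplot as plt

weeks = list(range(1, 25))
agile_rtf = [2 * w for w in weeks]  # steady burn-up: ~2 features per week
waterfall_rtf = (
    [0] * 12                        # requirements and design: no RTF at all
    + [5, 12, 20, 28, 34, 40]       # coding: apparent (untested) feature progress
    + [30, 26, 29, 33, 38, 42]      # testing and rework: the curve corrects itself
)

plt.plot(weeks, agile_rtf, label="agile project")
plt.plot(weeks, waterfall_rtf, label="waterfall project")
plt.xlabel("week")
plt.ylabel("running tested features")
plt.title("RTF burn-up: agile vs. waterfall (illustrative)")
plt.legend()
plt.show()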
Waterfall: features later

Waterfall-style projects "deliver" things other than RTF early on. They deliver analysis documents, requirements documents, design documents, and the like, for a long time before they start delivering features. The theory is that when they start delivering features, they will be the right features, done right, because they have the right requirements and the right design. Then they code, which might be considered delivered software, but not running tested features, because the features aren't tested, and often don't really run at all. Finally they test, which mostly discovers that the RTF progress wasn't what they thought.

Picture that curve, and look at all the fuzzy stuff on it. There's fuzzy, unpredictable overhead in requirements and design and testing. We can time-box those phases, and often do, but we don't know how much work is really done, or how good it is. And the apparent feature progress itself may also be fuzzy, if the project is planning to do post-hoc testing. We think that we have features done, but there is some fuzzy amount of defect generation going on. After a bout of testing -- itself vaguely defined -- we get a solid defect list, and do some rework. Only then do we really know what the RTF curve looked like. The bottom line is that a non-agile project simply cannot produce meaningful, consistent RTF metrics from day one until the end. The result, if we demand them, is that a non-agile project will look bad. Everyone will know it, and will push back against the metric. If we hold firm, they'll have no choice but to become more agile, so as to produce decent RTF.

Ron, you're being unfair!

One might argue that I'm being unfair in drawing these pictures this way. Surely there are hidden and vague things going on in an agile project as well. Surely there are ways to measure requirements quality and design quality. There might even be ways to predict testing and to estimate the number of defects. And surely, it can't all be that simple! Well, as Judge Roy Bean said, if you wanted fair you should have gone somewhere else. My job here is to shoot waterfall in the back. But in all honesty, that's the fate non-agile processes deserve. The waterfall chart shows graphically why: the things we do in non-agile style are difficult to measure, and do not measure what we really care about.

The metric I'm calling "RTF", running tested features, can be made as concrete as we wish. All we have to do is define our feature stories in terms of concrete, independent acceptance tests, which must pass (a sketch of one such test appears at the end of this post). The RTF metric is very hard to game, and it will show just what's happening on the project.

As an example, consider another project's RTF curve. Early on, we see a sudden drop in RTF. What could that indicate? We don't know, but we know it probably isn't good. Perhaps the programmers have suddenly made a change that breaks a bunch of tests. We had better hunt them down and hold their feet to the fire. But wait: maybe the requirements have changed, the tests have been changed to reflect the new requirements, and the software has taken a setback. What we know is that there's something to learn, but as yet we don't know what. What should we do? Ask the team! Then, later in the project, progress flattens out. Again, clearly trouble. But what is the cause? It might be that the team isn't doing decent refactoring to keep the design clean, and they have slowed down. It might be that we've taken all the programmers off the project. It might be that the customer has stopped defining requirements, or stopped writing tests. Again, we don't know. Ask the team!

How does RTF cause agility?

• RTF requires the feature count to grow from day one. Therefore the team must focus on features, not design or infrastructure.
• RTF requires the feature count to grow continuously. Therefore the team must integrate early and often.
• RTF requires features to be tested by independent tests. Therefore the team has comprehensive contact with a customer.
• RTF requires that tests continue to pass. Therefore the tests will be automated.
• RTF requires that the curve grow smoothly. Therefore the design will need to be kept clean.
• With RTF, more features is better. Therefore the team will learn how to deliver features in a continuous, improving stream. They'll have to invent agility to do it.

The agile methods Scrum and Crystal Clear implement RTF. "All" they ask is that the team ship ready-to-go software every iteration. Everything else follows. XP gives a leg up, in that you get a free list of things that will help.

RTF demands agility

If an organization decides to measure project progress by running tested features, demanding continuous delivery of real features, verified to work using independent acceptance tests, the RTF metric, all by itself, will tell management almost everything it needs to know about what's going on. More important, since this metric is really difficult to game, the metric itself will cause the team to manage itself to do what management really wants. Becoming more agile isn't easy. It requires that the whole organization, and particularly the software team, commit itself to learning a new way of working. Agility is demanded by the metric. The metric just requires that the team deliver running tested features. And that, after all, is the real point of software development.
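As promised above, here is a sketch of what one such independent acceptance test might look like, written in pytest style. The shop module and its API (register_customer, place_order, orders_for, EmptyOrderError) are hypothetical stand-ins for whatever system the team actually builds; the point is that the test is automated, expresses customer-visible behaviour for a named feature, and reports a plain pass/fail that feeds straight into RTF.

# Sketch of independent acceptance tests for the named feature
# "customer can place an order". The `shop` module is a hypothetical
# stand-in for the system under test, not a real library.
import pytest

import shop  # hypothetical system under test

def test_customer_can_place_an_order():
    # Customer-visible behaviour, stated in the customer's terms.
    customer = shop.register_customer("alice@example.com")
    order = shop.place_order(customer, items=["widget"])
    assert order.status == "confirmed"
    assert order in shop.orders_for(customer)

def test_placing_an_empty_order_is_rejected():
    # The feature is only "done" when its edge cases pass too.
    customer = shop.register_customer("bob@example.com")
    with pytest.raises(shop.EmptyOrderError):
        shop.place_order(customer, items=[])

When every story carries tests like these, the nightly RTF number is simply the count of stories whose test files are fully green, which is what makes the metric so hard to game.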
-
Junior Member
Re: Testing for Agile Software Development
Hi lokesh,
Nice article. Now I have a basic idea of agile software development.
Thank you.