Agile “Defects” Explained
You won’t find a definition of “Defect” in the Agile Manifesto, Lean Software Development or Kanban, but we all know we encounter problems and bugs in Software Development Projects. “Defects,” however, are optional. So what is my definition of a Defect in Agile Software Development?
Generally, defect, bug and problem are synonyms: at some point we think a certain amount of functionality should be working, and then it doesn’t. I’m going to take the term “defect” a bit farther, though. In Traditional Development a “defect” starts as just a bug or problem, a marring of functionality that has been deemed “done” by the Developers who created it. But each of these “defects” is usually tracked in a system such as Quality Center; a Developer is assigned; it is scheduled to be analyzed, fixed and retested; and in some larger organizations it is cataloged in a defect database that proves problems were found, decisions made and issues resolved prior to deploying to production. So in Traditional Software Development a “defect” is really a much bigger deal than just a bug or problem.
So how does Agile deal with software Defects?
The short version:
A Minimal Defect Management Practice: It’s only a defect if we discover the problem AFTER the User Story (functionality) has been accepted by the Product Owner (PO).
A Better Defect Management Practice: Fix any broken functionality as soon as it is found in the Iteration it is discovered. No Tolerance. No Defects.
(If you don’t want to read the “longer more detailed version” you can skip down to the section beginning with “In Agile Software Development there are generally two Problem/Defect Management Practices.”)
The longer more detailed version:
In Traditional (aka “Waterfall”) development, after the system is Analyzed, Designed and Coded it is considered “done,” so it is passed on to the Quality Assurance (QA) Testing Phase while Developers (usually) move on to their next endeavor. Thus all problems discovered during the QA Testing Phase are created as “defects” because Development is “done.” These defects need to be researched and fixed by Development and then re-tested (possibly over several cycles) before going to Production. This QA Testing Phase is normally a drawn-out, painful and stressful time for everyone involved.
Agile’s view on this method of Traditional Development Defect Management is that it is foundationally flawed. Here are just some of the reasons:
1) In the traditional QA Testing Phase, QA Testers are Defect Finders instead of Defect Preventers. Their one and only job is to break the software and report on the results. (Agile Note: A much better primary use for QA Testers is to help define the functional requirements and to work with Developers to ensure those requirements are developed correctly before any Acceptance Testing is done. This changes the definition of when functionality is “done” as compared to traditional development.)
2) Closely related to #1 above is the fact that traditional QA Phase defects surface because the QA team’s interpretation of the Analysis and Design artifacts can, and usually does, differ from the Developers’ interpretation during the Coding Phase. In my experience this happens no matter how much effort is put into the Analysis and Design Phases. And considering the amount of coding needed to fix the defects found in a traditional QA Testing Phase, should we really be saying coding is “done” at the end of the Coding Phase? Probably not.
3) The defects discovered in the traditional QA Testing Phase often need to be fixed by Developers who have moved on to other projects, which delays those projects’ in-flight activities and adds stress and uncertainty.
Agile Software Development takes a totally different perspective on traditional defects. First, it’s important to know that Agile prefers a focus on Quality over Speed. It’s also important to understand that when you do this you generally become faster over time, but it takes a while to get there. If you are moving to Agile expecting to go faster right at the beginning, you are starting from a flawed premise.
It’s also important to understand that in Agile Software Development a User Story is not marked “done” until it has been accepted by the Product Owner (PO). A Story travels through its different process states (such as Ready for Development, to Development, to QA Testing, to PO Acceptance), and only when it is pronounced “Accepted” by the PO is there any parallel between Traditional development and Agile for a specific piece of functionality being considered “done.”
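The state flow described above can be sketched in code. This is a minimal, hypothetical model, assuming the state names used in this article; real boards (Jira, Rally, etc.) name and order their states differently. The key point it captures is that a failed test sends a Story back to Development rather than creating a Defect, because the Story is not “done” yet.

```python
from enum import Enum, auto

class StoryState(Enum):
    READY_FOR_DEVELOPMENT = auto()
    DEVELOPMENT = auto()
    QA_TESTING = auto()
    PO_ACCEPTANCE = auto()
    ACCEPTED = auto()

# Allowed transitions (illustrative): a problem found in QA Testing or
# PO Acceptance moves the Story BACK to Development, not into a defect log.
TRANSITIONS = {
    StoryState.READY_FOR_DEVELOPMENT: {StoryState.DEVELOPMENT},
    StoryState.DEVELOPMENT: {StoryState.QA_TESTING},
    StoryState.QA_TESTING: {StoryState.PO_ACCEPTANCE, StoryState.DEVELOPMENT},
    StoryState.PO_ACCEPTANCE: {StoryState.ACCEPTED, StoryState.DEVELOPMENT},
    StoryState.ACCEPTED: set(),
}

def is_done(state: StoryState) -> bool:
    """A Story counts as 'done' only once the PO has accepted it."""
    return state is StoryState.ACCEPTED
```

Only after `ACCEPTED`, with no outgoing transitions left, does a later breakage become a Defect in the sense this article uses.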
This is one of the significant differences between Traditional and Agile Development. If a “problem” such as a failed Test Scenario is found in the Agile QA Test state, the Developers and QA work together to fix it immediately. They can do this because the Story is not considered “done” yet and they are one cross-functional team, not separate teams in different “phases” as in Traditional development. However, if a Story that has been accepted as “working software” later breaks, that is deemed a defect and is created and tracked as such.
Traditional and Agile Software Development are so different that the reasoning we apply to Defects in Traditional Development simply doesn’t apply to Agile. An example: in Traditional Development, when a Defect is found it is documented to the hilt, and the fix is also fully documented, including the resolution steps, so that if it occurs again we know how to fix it. This may make sense for Traditional Development because Coding is separate from Testing, and thus the same problem could be encountered many times at different points in the Testing Phase. Different Developers could be engaged each time, meaning time spent reading the Analysis and Design documentation, understanding code written by someone else months ago, and then fixing and testing that code makes fixing Traditional Defects VERY expensive. (See the Defect Cost Curve in Figure 1 of this IBM page: http://www.ibm.com/developerworks/rational/library/08/0429_gutz1/) This is a long, drawn-out process made even longer by having to write so much documentation and track everything. All this documentation is rarely if ever looked at again, making it a vast waste.
This is one of the ways Agile projects generally cost less than similar Traditional projects. In Agile projects we fix problems sooner in the cycle, so the developer who wrote the code is the one who fixes it, and the sooner they find and fix it the cheaper it is to do so. And we don’t need to document everything; we only document what we feel is necessary and will retain its value over the long run.
In Agile Software Development there are generally two Problem/Defect Management Practices (my terminology).
The Minimal Problem Management Practice:
You’ll want to use this Problem Management Practice at a minimum. There are four circumstances in which Defects are recognized in this method:
1) A defect can be found while QA or the PO is acceptance testing a User Story, specifically while performing “satellite regression testing.”
We know that the earlier a problem is detected the quicker and cheaper it is to fix, so it is most efficient for any problem found in QA to be fixed immediately; however, in some instances this may not happen. Say a Developer has called Story G “Dev finished.” While QA is testing Story G, they run both Story G’s Test Scenarios and “satellite regression testing” of the functionality of other Stories that have already been accepted. So they retest some of the Test Scenarios associated with Story C, which was previously accepted by the PO and whose functionality is near Story G’s, and in doing so the QA person finds and surfaces a problem with one of Story C’s Test Scenarios. If the Story C Test Scenario cannot be immediately and efficiently fixed by a Developer, then a Defect (Story) is created, handed over to the PO to prioritize its resolution, and tracked.
Note: It’s better to fix the regression problem with Story C and NOT create a Defect to track.
Note: “Satellite regression testing” is my terminology for when a team member regression tests “around” the target User Story’s functionality, covering features that have already been accepted by the Product Owner. I use this term to differentiate this type of limited regression testing from full-on Regression Testing.
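A small sketch of how a team might select scenarios for satellite regression testing. The data structures and the `neighbors` map are illustrative assumptions, not from any real tool: the idea is simply “run the target Story’s scenarios, plus the scenarios of nearby Stories, but only if those Stories have already been accepted.”

```python
def satellite_scenarios(target, stories, neighbors):
    """Return the target Story's scenarios plus those of nearby accepted Stories.

    stories:   {story_id: {"accepted": bool, "scenarios": [...]}}
    neighbors: {story_id: [nearby story_ids]}  # e.g. shared screens or modules
    """
    selected = list(stories[target]["scenarios"])
    for other in neighbors.get(target, []):
        if stories[other]["accepted"]:  # only regress already-accepted work
            selected.extend(stories[other]["scenarios"])
    return selected

# Story G is in QA; Story C is accepted and nearby; Story A is nearby
# but not yet accepted, so its scenarios are skipped.
stories = {
    "G": {"accepted": False, "scenarios": ["G-1", "G-2"]},
    "C": {"accepted": True, "scenarios": ["C-1", "C-2"]},
    "A": {"accepted": False, "scenarios": ["A-1"]},
}
neighbors = {"G": ["C", "A"]}
to_run = satellite_scenarios("G", stories, neighbors)
```

If scenario C-2 then fails, that is the Story C regression the article describes: best fixed on the spot rather than logged as a Defect.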
2) Periodically a team may choose to do Regression Testing on all functionality to date.
Since a team only does Regression Testing on Stories that have already been Accepted by the PO, any problem found here is a candidate for a defect to be created, tracked and resolved.
Again, you could create a Defect for these found problems, but it would be better to fix the regression problems as soon as they are found and NOT create a Defect to track.
3) A defect can be found while demonstrating a User Story
User Stories are normally demonstrated to Stakeholders and others every two weeks or so during a “Demo.” Since we usually only Demo Accepted Stories, anything that breaks in a Demo gets a Defect (Story) created for it to be prioritized, fixed and tracked.
IF we Demo unaccepted Stories (an exception to the rule that should only be contemplated when particular feedback is desired for a Story in mid-development), any problem encountered is not considered a defect because the Story is not considered complete yet. [Note: Mike Cohn has a great blog on this topic at http://www.mountaingoatsoftware.com/blog/unfinished-work-at-the-end-of-a-sprint-is-not-evil]
If you focus on circumstances 1 and 2 above and fix the problems as soon as they are found, you will find that circumstance 3 problems rarely occur.
4) A defect can be found in post-development UAT/End-to-End Testing (IF you are using it; not all products do).
If a system is purposefully designated to go through a formal post-development UAT, CAT or End-to-End process run by a separate team (this is very situationally and organizationally dependent), then any breakage found by that testing in functionality covered by Accepted User Stories is a legitimate “defect” to be created, fixed and tracked.
However, any feature gap or problem not identified by an Accepted User Story that is discovered or suggested by the post-development testing group needs to be considered on a case-by-case basis. A significant release-dependent gap discovered and surfaced at this time could also be construed as a “defect” to be created, fixed and tracked after it has been reviewed by Project or Program authorities. I’m not sure this is technically a defect, but when you are this close to a production implementation you need one thing: to get it done.
So, in these four instances we could create, fix and track “defects.” Some refer to these as “Defect Stories,” but the name really is of no consequence. The defect is a placeholder to remind the team to fix a problem in their product under construction, wherever the PO places its priority.
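As a sketch, the placeholder role of a “Defect Story” might carry no more than the information below. The field names are my assumptions for illustration, not any tool’s standard schema; the essential idea is that it points at the Accepted Story and failed scenario, and waits for the PO to set its priority like any other Story.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DefectStory:
    title: str                       # what broke, in the PO's language
    broken_story: str                # the Accepted Story whose functionality failed
    failed_scenario: str             # the Test Scenario that surfaced the problem
    priority: Optional[int] = None   # set by the PO, like any other Story
    found_on: date = field(default_factory=date.today)
    resolved: bool = False

# Example: the Story C regression surfaced during satellite regression testing
defect = DefectStory(
    title="Saved filter lost after re-login",
    broken_story="Story C",
    failed_scenario="C-2",
)
```

Note how little is recorded compared with the Traditional, fully documented defect described earlier: the placeholder only has to remind the team what to fix.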
The Better Problem Management Practice:
This is the preferred Problem Management Practice. It is very simple and is encouraged to be your primary Problem Management Practice: discover and fix your problems so they don’t become Defects.
What? No Defects?
Yes. As soon as a problem arises, be it in the Story at hand or in any kind of regression testing, apply a No Tolerance Problem Management Practice: crush it as soon as you find it.
Using this practice you will never experience a Defect in circumstances 1 and 2, and by doing so you dramatically reduce the chances of any defects cropping up in circumstances 3 and 4.
One of the natural results of handling problems this way (with cross-functional teams, evolutionary requirements and incremental development of working software) is that the number of actual “defects” discovered should be dramatically reduced (or eliminated) as compared to Traditional development.