
Friday, September 5, 2008

Let's look at QA!

A personal note: I've formally left my BA position and am transitioning to a Project Manager (PM) role in another group at the bank I work for. Not to fear, though. As a PM, I will have oversight of all the activities going on, including BA, QA and development.

I've been thinking about quality for the last few weeks, and I've realized that we really aren't doing a good job at QA. At least not on the first few tries! We release software that is riddled with bugs, has data integrity issues, and crashes as soon as the user tries something new.

First of all, where I am, QA is given short shrift. We don't think of them till the very end. They're out of the loop for the most part and then get brought into Mission Impossible: make sure this application is ready to go in 3 days!

The thing they say about quality is true: You never have enough time to do it right, but you always have enough time to do it again!

How do we ensure quality? Or, more realistically, how do we aim for the highest quality we can get? Quality is elusive. Heck, Pirsig wrote a whole book on it (Zen and the Art of Motorcycle Maintenance).

Well, I'm no quality expert but I have learned a few things in the last few months.
  1. You need to have a separation of concerns. The team pushing out the software (BAs, PMs, Devs) should not be responsible for quality. You need a Congress to push back on the President. The mindset for driving out and releasing software is at odds with ensuring quality. I've signed off on software to be released just so I could get it out there, only to find that it crashed a day later. My goal as a BA is to satisfy the user and get the software out, not to find fault and hold it up. As long as the former is my motivation, I cannot do an adequate job of QA'ing anything.
  2. We need to have some sort of up-to-date specification to hand off to QA. I've been a cowboy for the last year or so, BA'ing by the seat of my pants, embracing agile and iterative development. Which really translates to: not writing any specification documents. Instead I would create prototypes and mockups, write up notes in Excel, and have plenty of phone conversations and netmeetings with my team. This was swell, and we produced some good results that were very close to what I required. However, QA suffered. The way I handled QA was to do the same netmeetings and demos and then let them play with the software. QA could write up their own test cases based on what I showed them. Voila! No spec needed. After all, development created the software without a spec either. However, and here's the kicker: I realize now that I only show QA what *I* think is important. I demo the features that I think should be tested, so when they go away, that's what they concentrate on. Of course, it's the features I don't show them (because I don't think they're important enough to demo) that wind up being used by the user and then fail! It's really remarkable (and arrogant of me) to think that I know what QA should do. Again, as a BA, or anyone else whose motivation is driving software into the marketplace, I am severely hampered by that motivation from ensuring as high a level of quality as the software (and the user) deserve. Which translates to: have a spec ready that highlights what the software does and let QA work off that, not off what I show them. Guess specification documents are a *good* thing! :)
What led me to post this about QA was some recent experiences (or rather, failures) with our tools. I did an analysis, and here's what I've come up with.

1) Bug because of a hidden feature that wasn't tested.
Occurred because (1) I didn't demo that feature to QA (see point #2 above!), so QA never even knew it existed or to look for it; (2) test cases from v1 of the tool were not reviewed and re-used for v2, and those test cases may well have covered this hidden feature; in essence, no regression testing! And (3) there was no written spec given to QA that would have detailed this feature.

2) Bug because a common case was not tested.
Occurred because, again, (1) I didn't demo this to QA or stress it at all as a point to be tested. They saw what I showed them and mimicked it. (2) Test cases from v1 were not reviewed and re-used.

There were a few other things that contributed to the breakdown:
  • Inadequate training of QA; a junior QA person with not much experience signed off on the project.
  • QA was not brought in early enough, i.e., during the specification phase of the project.
  • The QA spec/test cases should be developed and reviewed ahead of time; QA needs to have this as a mandatory deliverable. Even if they are brought in dead-last on a project, their efforts should result in a test-case deliverable that can be re-used for the next go-around.
So in summary, here are some essentials:
  • Always get QA to hand over a test-case or QA deliverable that can be re-used or handed over in case resources get shifted (QA seems to get 're-deployed' more than any other group these days)
  • Use the above deliverable as the basis for modifications and for regression testing in major releases
  • BAs need to write a spec describing what the system does as a deliverable to QA (and to developers, although in my case I don't seem to have much trouble getting development to deliver what I want, since we're both aligned towards the same goals) so they can poke holes and create their own deliverable.
QA will either use your spec (or any other documents, literature, conversations, screenshots, etc.) or, if brought in really late, will use the actual working software itself to create test cases. This deliverable is vitally important. (Implementation-wise, whether QA keeps it in a doc or in a tool like Quality Center is irrelevant.)
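
As an aside, here's a minimal sketch of what a re-usable, re-runnable test-case deliverable could look like if a team ever moved it out of a Word doc or Quality Center and into code. Everything in it (the parse_amount function and the specific cases) is made up purely for illustration; the point is just that the v1 cases survive as something you can run against v2, instead of relying on whatever I happened to demo.

```python
# Hypothetical regression suite for a fictional "amount entry" feature.
# The cases below stand in for the v1 test-case deliverable: write them
# down once, then re-run them against every later version of the tool.

import unittest


def parse_amount(text):
    """Toy stand-in for a feature shipped in v1: parse a user-entered amount."""
    cleaned = text.replace(",", "").strip()
    return round(float(cleaned), 2)


class RegressionTestsFromV1(unittest.TestCase):
    # The common case -- the kind of thing that slipped through above
    # because nobody wrote it down the first time around.
    def test_plain_amount(self):
        self.assertEqual(parse_amount("100.50"), 100.50)

    # A "hidden" feature the demo never showed: thousands separators.
    def test_amount_with_commas(self):
        self.assertEqual(parse_amount("1,250,000"), 1250000.00)

    # An edge case captured once, re-run forever.
    def test_leading_and_trailing_whitespace(self):
        self.assertEqual(parse_amount("  42 "), 42.00)


if __name__ == "__main__":
    unittest.main()
```

Whether the deliverable lives in code like this, in Quality Center, or in a spreadsheet, the design goal is the same: the test cases outlive the person who wrote them and get re-run on the next release.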
