Saturday, February 27, 2010

Where the heck have I been?

Well, I've taken quite the break there! Where have I been? I'm still at the same firm, just extremely busy. Plus, I just saw "Julie & Julia" and it brought this blog back to mind. I've also been soaking in a lot of new experiences. In a way, I feel I've graduated. Joining my 'new' group (in late 2008) after my last BA position was like being hazed: like going from the minor leagues to the pros. We handle IT for all groups: Front Office (traders), risk, product control, operations, financial accounting, etc. I became a PM in the Architecture arena and had to manage database migrations and upgrades of various software components: work that could only be done on weekends and evenings, in a timely, controlled manner, with lots of testing beforehand. It was unlike anything I had done before.

What was different?
  • As a BA I was responsible for only one part of the project and getting that right. As a PM, I am responsible for everything. Talk about pressure, especially on projects where the DB had better be back up on Sunday night (Monday AM in Asia).
  • I had to get people to do things, yet I was not an official manager; no one reported to me. I had to keep pushing. There was no time to sit still and coast after I got one thing done; there was always something else to look at. PMs can never sit still!
  • I find a satisfaction in being a PM that I'm not sure I had as a BA. Not to say that one is better than the other. But there is something to be said for utter and complete accountability and responsibility for a project from start to finish. It all lies with you. You make or break the project.
There are tons more differences; not everything can be listed. But my experience as a BA has helped vastly. I can break down a requirements document, find holes in it very quickly, and then have the authority as a PM to get them addressed. I don't mean to imply that I don't trust my BAs, but I've seen some pretty bad requirements specifications, usually accompanied by no test-case documents. "Trust but verify" is an adage I like.

Friday, September 5, 2008

Let's look at QA!

A personal note: I've formally left my BA position and am transitioning to a Project Manager (PM) role in another group at the bank where I work. Not to fear, though. As a PM, I will have oversight of all the activities going on, including BA, QA, and development.

I've been thinking about quality for the last few weeks and I've realized that we really aren't doing a good job at QA. At least not on the first few tries! We release software that is riddled with bugs, has data-integrity issues, and crashes as soon as the user tries something new.

First of all, where I am, QA is given short shrift. We don't think of them till the very end. They're out of the loop for the most part and then are brought into Mission Impossible: make sure this application is ready to go in 3 days!

The thing they say about quality is true: You never have enough time to do it right, but you always have enough time to do it again!

How do we ensure quality? or more realistically, how do we aim for as high quality as we can get? Quality is elusive. Heck, Pirsig wrote a whole book on it (Zen & the Art of Motorcycle Maintenance).

Well, I'm no quality expert but I have learned a few things in the last few months.
  1. You need to have a separation of concerns. The team pushing out the software (BAs, PMs, Devs) should not be responsible for quality. You need a Congress to push back on the President. The mindset for pushing out and releasing software is at odds with ensuring quality. I've signed off on software just so I could get it out there, only to see a crash a day later. My goal as a BA is to satisfy the user and get the software out, not to find fault and hold it up. As long as the former is my motivation, I cannot do an adequate job of QA'ing anything.
  2. We need to have some sort of up-to-date specification to hand off to QA. I've been a cowboy for the last year or so, BA'ing by the seat of my pants, embracing agile, iterative development. Which really translates to: not writing any specification documents. I would instead create prototypes and mockups, write up notes in Excel, and have plenty of phone conversations and netmeetings with my team. This was swell and we produced some good results that were very close to the mark. However, QA suffered. The way I handled QA was to do the same netmeetings and demos and then let them play with the software; QA could write up their own test cases based on what I showed them. Voila! No spec needed. After all, development created the software without a spec either. But here's the kicker: I realize now that I only show QA what *I* think is important. I demo features that I think should be tested, so when they go away, they concentrate on those. Of course it's the features I don't show them (because I don't think they're important to demo) that wind up being used by the user and then fail! It's really remarkable (and arrogant) of me to think that I know what QA should do. Again, as a BA, or as anyone whose motivation is driving software into the marketplace, I am severely hampered by that same motivation from ensuring as high a level of quality as the software (and user) deserve. Which translates to: have a spec ready that documents what the software does and let QA work off that, not off what I show them. Guess specification documents are a *good* thing! :)
What led me to posting this about QA was some recent experiences (or failures) with our tools. I did an analysis and here's what I've come up with.

1) Bug because of a hidden feature that wasn't tested.
Occurred because (1) I didn't demo that feature to QA (see point #2 above!), so QA never even knew it existed or to look for it; (2) test cases from v1 of the tool were not reviewed and reused for v2 (the v1 cases may have covered this hidden feature); in essence, no regression testing! And (3) there was no written spec given to QA that would have detailed this feature.

2) Bug because a common case was not tested.
Occurred because, again, (1) I didn't demo this to QA or stress it at all as a point to be tested. They saw what I showed them and mimicked it. (2) Test cases from v1 were not reviewed and reused.

There were a few other things that contributed to the breakdown:
  • Inadequate training: a junior QA person without much experience signed off on the project.
  • QA was not brought in early enough, during the specification phase of the project.
  • QA spec/test cases should be developed ahead of time and be prepared/reviewed. QA needs to have this as a mandatory deliverable. Even if they are brought in dead last on a project, their efforts should result in a test-case deliverable that can be re-used the next go-around.
So in summary, here are some essentials:
  • Always get QA to hand over a test-case or QA deliverable that can be re-used or handed over in case resources get shifted (QA seems to get 're-deployed' more than any other group these days)
  • Use above deliverable as basis for modifications and to use for regression-testing in major releases
  • BAs need to write a spec specifying what the system does as a deliverable to QA (and to developers, although in my instance, I don't seem to have all that much trouble getting development to deliver what I want since we're both aligned towards the same goals) so they can poke holes and create their deliverable.
QA will either use your spec (or any other documents, literature, conversations, screenshots, etc.) or, if brought in really late, will use the actual working software itself to create test cases. This deliverable is vitally important. (Implementation-wise, whether QA keeps it in a doc or in a tool like Quality Center is irrelevant.)
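To make the idea concrete, here's a minimal sketch of what a reusable test-case deliverable might look like if kept as code rather than only in someone's head. Everything here is hypothetical (the `price_report` feature, the case names, the field layout are all made up for illustration); the point is simply that cases recorded as data, not demos, survive from release to release and can be re-run against v2 as a regression pass.

```python
def price_report(trades, hide_zero_positions=True):
    """Stand-in for the application feature under test (hypothetical)."""
    rows = [t for t in trades if not (hide_zero_positions and t["qty"] == 0)]
    return {"rows": len(rows), "total_qty": sum(t["qty"] for t in rows)}

# The deliverable: each case records scenario, inputs, and expected output,
# so nobody has to remember which features were demoed to QA.
TEST_CASES_V1 = [
    {
        "name": "common case: two live positions",
        "input": [{"qty": 10}, {"qty": 5}],
        "kwargs": {},
        "expected": {"rows": 2, "total_qty": 15},
    },
    {
        # The 'hidden feature' that never got demoed to QA.
        "name": "hidden feature: show zero positions",
        "input": [{"qty": 0}, {"qty": 5}],
        "kwargs": {"hide_zero_positions": False},
        "expected": {"rows": 2, "total_qty": 5},
    },
]

def run_suite(cases, feature):
    """Run every case; collect failures instead of stopping at the first."""
    failures = []
    for case in cases:
        actual = feature(case["input"], **case["kwargs"])
        if actual != case["expected"]:
            failures.append(case["name"])
    return failures

if __name__ == "__main__":
    # For v2, re-run the v1 suite unchanged: that's the regression pass.
    failed = run_suite(TEST_CASES_V1, price_report)
    print("failures:", failed or "none")
```

Whether the cases live in a script like this, a spreadsheet, or Quality Center matters less than that they exist as an artifact someone can pick up when QA resources get re-deployed.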

Saturday, June 14, 2008

A PM's utility

Warning: this post is an experimental one; I'm just playing with some thoughts about the PM function. It's not my intention to offend, but the question is:

So how useful are PMs (Project Managers just in case you didn't know)?

Frankly I'm not very sure!

Of course it depends on the project. Not every project needs a BA or PM.

Everyone on a team should be thoroughly invested in 'caring' about the product, not just task-mastering so that things get done. PM activities are actually essential. But is a separate PM truly necessary? I think the best approach is to hire either BAs with PM experience or Development Leads with PM experience. That way PM tasks are kept to a minimum but also reside with personnel who are invested in the product.

In Six Sigma terms, the most useful people to have on a project are those who are adding value to the product.

A BA talks to the business and therefore represents and speaks for the business to the dev team. The BA represents the business so s/he wants to ensure a quality product.

The developers are developing the product and so will put care and precision into crafting something of quality.

In my own (and certainly limited) experience, PMs engage in many non-value-add activities. Is the customer willing to pay for communication, or for chasing down people to get tasks done? These tasks are necessary overhead but don't add value; in Six Sigma terms they are NVA (non-value-add) activities and should be minimized. Hence my call to eliminate the PM position where you are able, and to place only the least amount of PM activity with other members of the team who also engage in value-add activities.

Let's look at wikipedia's definition of the activities a PM engages in:
  1. Analysis & design of objectives and events
  2. Planning the work according to the objectives
  3. Assessing and controlling risk (or Risk Management)
  4. Estimating resources
  5. Allocation of resources
  6. Organizing the work
  7. Acquiring human and material resources
  8. Assigning tasks
  9. Directing activities
  10. Controlling project execution
  11. Tracking and reporting progress (Management information system)
  12. Analyzing the results based on the facts achieved
  13. Defining the products of the project
  14. Forecasting future trends in the project
  15. Quality Management
  16. Issues management
  17. Issue solving
  18. Defect prevention
  19. Identifying, managing & controlling changes
  20. Project closure (and project debrief)
  21. Communicating to stakeholders
  22. Increasing/decreasing a company's workers ...
In particular contexts, these activities make sense and in others they don't. If you're running an agile or lean software development project then a lot of these can be eliminated entirely and others can be offloaded to other personnel.

The gist of this post is just to get you thinking about the type of projects you're running, and not to blindly throw a PM/BA/Dev Lead team together and expect to get the job done. You could actually be slowing the project down. The BA and dev team could run with the project a whole lot quicker than with a PM overseeing everything, trying to get updates.

In my environment, what seems to truly work is:
1) Hire BAs who are comfortable with and can talk to both the business & development on an everyday basis.
2) Hire BAs with PM knowledge
3) Hire Dev leads with PM knowledge

Saturday, April 5, 2008

Lean Sigma Thoughts

I've recently been going through some training on the Lean Sigma methodology. It's all about process improvement and it employs the DMAIC (or DMAEC) steps: Define, Measure, Analyze, Improve (or Engineer), and finally Control.

It's a very sequential model (not unlike a waterfall software life cycle) with tollgates at each step. It's pretty interesting and I'm learning a lot of academic material. I'm doing my Green Belt training and applying the textbook material to an actual project that is causing my boss pain. However, as I've been digging into the project, I've discovered that a lot of the steps were carried out 2 years ago on much the same pain points. In fact, there are reams of documentation all the way to the Engineer phase. The solution was never implemented, however. Now there's a big project underway (of which I'm working on a very small chunk) that is looking at the same issues again. So my question is: why? Why will the result be different from last time? Why wasn't the solution brought to the table 2 years ago carried out?
Heck, we can do a Lean Sigma project on this question alone!

And furthermore, as I was digging into the items, I started getting innovative ideas. What if we could do this or that, and implement it with a small group? If it worked, we could spread it to the other groups. And then it hit me: the Lean Sigma process is, to some extent, devoid of innovation! That flash of inspiration, that 'hmm, what if we tried this?', is somehow lost along the way. A practitioner can concentrate so hard on one step at a time that s/he never sees the issues holistically. The emphasis on only defining a problem, or measuring, or analyzing precludes the 'what if we did this?' The problem is engineered to death.

In this project I'm on, the solutions proposed the last go-around made sense. I have a feeling that the stakeholders involved were too change-averse to implement some of the bigger changes and didn't want to rock the boat. I also feel that our Lean Sigma group is seen as 'outsiders' or consultants who come in, do their thing, drop you the results, and leave. Their work is good, but once they're done they're off to the next thing. Stakeholders might like the analysis and agree with some of the solutions, but they take a risk when implementing. If the proposed solutions don't work out in the stakeholder's favor, the project manager is left holding the bag. When the Lean Sigma or management consultants leave, the project manager (who may have contributed little to developing the solution) is not invested in it, and so is less likely to implement it (at least from the examples I've seen).

Again, on this particular project, I saw comments from one of the stakeholders that more or less shot down the proposed solution. The PowerPoint slides depicting the proposed changes were passed around, and the stakeholder forwarded them to his managers with many more criticisms than favorable remarks. It's no wonder the project stalled. What top-level manager would give the go-ahead if their employees thought there were unresolved issues?
Questions:
1) Why wasn't the solution developed jointly with the stakeholders?
2) Was there a chance to rectify/address some of the concerns of this particular stakeholder?

There've been tons of issues with this particular product line; everyone agrees it is broken. The product line manager is definitely risk-averse and doesn't want to do anything drastic.