Here's a link to a little blurb on being an effective software manager:
Effective Software Manager Blurb
and here's a link to the article the blurb also references:
What Makes an Effective Software Manager?
The author in the first link writes about his manager getting rid of ineffective people. This is really interesting because it's exactly the first thing Jim Collins writes about in "Good to Great": First, get the right people on the bus!
I also feel "Making the Call" [referenced in the two articles above] is incredibly important. I've found 'making the call' and 'taking ownership' go hand in hand. I'm seeing real-life examples where people who don't do so lead their employees in circles, running just to stand still and never with any sense of completion.
An example:
There is a service provided to us by an outside party for which the contract was never signed. The service was delivered nevertheless, and the first-quarter invoice for 2007 still has not been paid. Myriad attempts to get this invoice paid have led nowhere. I'm not sure who was responsible for dropping the ball, but I do know IT tried many times to pay the invoice, and the unsigned contract has now been outstanding for almost a year. An email I sent about the matter was never answered. For a change, it would be amazing to hear someone say, "OK, great. Let me look at that contract, sign it and hand it right back to you," and then actually do it. Instead, unreturned emails and phone calls are the norm. If my business user would 'make the call' or 'take ownership', it would make life easier for me and for many other employees. Instead, the issue gets buried until the next go-round.
I think in an environment like the bank, where IT serves its business users and has traditionally acted subservient to the business, making the call is especially important. Many IT people I've seen don't want to challenge their business users and state their terms. Instead, they're led astray by business users who have no IT knowledge. I've actually found that when I grab the reins and state unequivocally what the options are and what the challenges of each option are, the business will usually comply. After all, we serve their interests. Why shouldn't they trust us? In fact, IT can be in a position to actually LEAD the business. However, when we are too eager to please or afraid to be clear about our intent, communication breaks down and we (IT) can be sent off on wild goose chases; chases we may well resent, because we weren't clear from the get-go and were afraid to be straight with our users.
This blog is about my experiences as a Business Analyst (BA) & Project Manager (PM), as well as forays into Quality Assurance (QA), in an investment banking environment, and includes thoughts, lessons learned, best practices, insights, predictions, foolish assertions, and outlandish statements.
Saturday, October 6, 2007
Metrics
We use MQC (Mercury Quality Center) to store our defects and enhancements. We have about a year's worth of data in there, and no one had thought to look and see what it tells us. I recently undertook such an effort: I exported ALL defects/enhancements from MQC into an Excel file, parsed that file into an Access database, and then wrote various queries to examine the data.
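By way of illustration only, here is a minimal sketch of that export-and-parse step done in Python with pandas rather than Excel/Access; the file name and the column names (Severity, Status, Detected in Version and the two date fields) are assumptions about what the export might contain, not the actual MQC schema.

```python
import pandas as pd

# Load the raw MQC export. The file name and column names are assumed for
# illustration; a real export's fields may differ.
items = pd.read_excel("mqc_export.xlsx")

# Normalize the fields the metrics below rely on.
items["Detected on Date"] = pd.to_datetime(items["Detected on Date"])
items["Closing Date"] = pd.to_datetime(items["Closing Date"])

# Days from detection to closure -- the basis for the time-to-fix metrics.
items["Days to Fix"] = (items["Closing Date"] - items["Detected on Date"]).dt.days

# Quick sanity check of what the export actually contains.
print(items[["Severity", "Status", "Detected in Version", "Days to Fix"]].head())
```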
Here are some metrics that I want to examine after each release of our software to see how we are doing:
1) # of Outstanding Defects as of completion of this release; broken down by severity and what version they were found in [to see how long they've been lying around]
2) # of Outstanding Enhancements to be implemented; broken down by priority and when they were requested [to see how long they've been lying around]
3) Breakdown of Defects/Enhancements for this latest release by Component to see which component of the software was touched the most.
4) Average [Mean, Median, & Mode] Time to Fix a Defect [Critical, High, Medium, Low] to date
5) Average [Mean, Median, & Mode] Time to Implement a New Request [Critical, High, Medium, Low] to date
NOTE: If we apply Metrics 4 & 5 after each release, we can see whether the averages decrease over time, i.e., whether we're getting faster at delivering fixes and enhancements.
These metrics are pretty simple but will give us terrific information on where we are and what we need to work on to improve our process. A rough sketch of how metrics 1 and 4 could be pulled out of the exported data follows below.
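As a hedged sketch of how metrics 1 and 4 might be computed (reusing the assumed file and column names from the loading example above, plus an assumed Type field to separate defects from enhancements):

```python
import pandas as pd

# Re-load and prepare the export as in the loading sketch (assumed columns).
items = pd.read_excel("mqc_export.xlsx")
items["Detected on Date"] = pd.to_datetime(items["Detected on Date"])
items["Closing Date"] = pd.to_datetime(items["Closing Date"])
items["Days to Fix"] = (items["Closing Date"] - items["Detected on Date"]).dt.days

defects = items[items["Type"] == "Defect"]

# Metric 1: outstanding defects, broken down by severity and by the version
# they were found in.
open_defects = defects[~defects["Status"].isin(["Closed", "Rejected"])]
print(open_defects.groupby(["Severity", "Detected in Version"]).size())

# Metric 4: mean, median and mode of time-to-fix for closed defects, by severity.
closed = defects[defects["Status"] == "Closed"]
stats = closed.groupby("Severity")["Days to Fix"].agg(["mean", "median"])
stats["mode"] = closed.groupby("Severity")["Days to Fix"].agg(
    lambda s: s.mode().iloc[0] if not s.mode().empty else None)
print(stats)
```

Run after each release (filtered on that release's version), the same two queries give the release-over-release trend described in the NOTE above.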
Friday, October 5, 2007
Support from the Top
If Bosses, Line Managers & PMs made it clear they want to see:
1) Burndown charts where they can see progress to date vs. what was originally planned, and that they expect to see progress every day. Even if the manager doesn't have time for a status meeting today, s/he should still be able to pull up the current picture at any moment, so the chart should be updated daily to reflect what was accomplished. Management needs to catch errors early and see whether a project is going off course in order to be proactive. (A tiny numeric sketch of the planned-vs-actual comparison follows after this list.)
As with daily scrum meetings, why not have a scrum-like managerial meeting where the burndown chart is presented and progress to date, progress to be made, and obstacles are discussed?
2) Working software after each iteration! This is the only true measure of progress! Managers should ask that the software be demo'ed after each iteration.
then more projects would be delivered on time and on budget! More users would be happy.
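To make the burndown comparison concrete, here is a tiny, hypothetical Python sketch; the iteration length, the straight-line plan and the daily actuals are all made-up numbers.

```python
# Hypothetical burndown comparison: planned vs. actual remaining work per day.
total_work = 100        # e.g. task-days or story points planned for the iteration
iteration_days = 10

# Ideal burndown: a straight line from total_work down to zero.
planned = [total_work - total_work * day / iteration_days
           for day in range(iteration_days + 1)]

# Remaining work recorded at each daily stand-up so far (made-up figures).
actual = [100, 96, 93, 91, 85, 80, 78]

for day, remaining in enumerate(actual):
    delta = remaining - planned[day]
    status = "behind plan" if delta > 0 else "on or ahead of plan"
    print(f"Day {day}: {remaining} remaining "
          f"(plan {planned[day]:.0f}, {abs(delta):.0f} {status})")
```

Because earlier days are never overwritten, the manager can see at a glance where the project drifted and by how much, which is exactly the early warning described above.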
Wednesday, October 3, 2007
Some Agile Practices
Agile practices that I'm thinking of implementing, adapted to the environment I work in:
1) Have a daily meeting for 10 minutes. Being agile requires open communication; everyone should know what's going on all the time.
Each person states:
a. What s/he accomplished yesterday
b. What s/he's going to do today
c. Any pain points/issues/obstacles
2) Have a process improvement meeting every month, or even every two weeks in the beginning
a. Figure out what's working/not working in the process and make adjustments
3) The BA should be available as development proceeds in each iteration to answer questions that come up and to test. This happens naturally in practice anyway. Much as we prefer a complete spec up-front (which few people read anyway), we should be honest about what the actual process is. Do we want to spend time developing the best spec ever up-front, or proceed with just-good-enough requirements and start coding so we get working software?
These questions can be asked on a one-off basis (developer calls BA) or during the short, daily status meeting.
4) Use Burndown charts rather than project plans to track progress. Burndown charts are effective and especially look good to management because you can compare your original estimate to how you're actually doing. You can absorb all kinds of info including your 'velocity', where you hit bumps in the road, etc. Traditional project plans do not offer this because you just keep overwriting the same plan over and over with the latest dates. The n-th plan, after it is updated, looks like it was supposed to happen by design when in reality, you made it conform to the current status of the project. There is no way to compare the n-th plan to the 1st plan to show you how close you were to your estimates. And if you can't do this, how are you going to improve your process? Process Improvement depends on measurement and re-measurement! [I know people will object and say MS Project allows you to do such and such; and this might be true but to date, I haven't found anyone who uses this feature.]
5) Iterative development and scheduling by feature delivery rather than by phase (Initiation, Definition, Design, etc.)
6) Get estimates based on the time to implement features rather than by phase (Initiation, Definition, Design, etc.). These estimates would likely be more accurate and are something developers can give and live up to. (A small sketch of tracking feature estimates against actuals follows after this list.)
7) Deploy prototypes and let developers have a go at them to give feedback, so you can make them even more bulletproof and also advertise what's coming. Prototypes help developers get familiar with upcoming changes and new features, and playing with a prototype will enable them to give better estimates of implementation time. Prototypes also act as requirements specifications in and of themselves. Rather than wade through a huge specification document, a developer can fire up the prototype and see the actual behavior demonstrated! A prototype is worth a thousand pages [of requirements documents]!
8) After incorporating feedback from the Dev team, I would go show my users. The prototype needn't do everything. It's a good idea to storyboard a typical scenario, walk the user through it, and paint a picture. Picture a child enraptured by a parent telling a story. You want to fully engage the user with a prototype and have him or her contribute meaningful feedback.
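As a hedged illustration of points 4 and 6, here is a small Python sketch that keeps the original per-feature estimates alongside the actuals, so the first plan is never overwritten and estimation accuracy can be tracked iteration by iteration; the feature names and numbers are invented.

```python
# Hypothetical per-feature tracking (made-up names and figures): keep the
# original estimate next to the actual so early plans are never overwritten.
features = [
    # (feature, iteration, estimated_days, actual_days)
    ("Trade blotter filter", 1, 3, 4),
    ("Export to Excel",      1, 2, 2),
    ("Limit breach alerts",  2, 5, 8),
    ("Audit trail screen",   2, 3, 3),
]

for iteration in sorted({it for _, it, _, _ in features}):
    delivered = [f for f in features if f[1] == iteration]
    estimated = sum(est for _, _, est, _ in delivered)
    actual = sum(act for _, _, _, act in delivered)
    print(f"Iteration {iteration}: {len(delivered)} features delivered, "
          f"{estimated} days estimated vs. {actual} days actual "
          f"({actual - estimated:+d} days)")
```

Comparing estimates to actuals iteration after iteration is exactly the measurement and re-measurement that process improvement depends on.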