Getting better at getting better

On Saturday I attended my first-ever Product Camp Boston. This event, an unconference devoted to product management and product marketing, was massive in both attendance (over 500) and content covered (some 58 sessions). I was fortunate enough to nab a speaking slot. I debated what to speak about, and ultimately gave a talk on applying agile scrum to the work of product managers to help a team improve its PM craft.

About now my non-engineering friends and family are looking at me with a little white showing in their eyes, and my engineering-savvy readers may be skeptical as well. But I’ve written about this idea before in the context of agile marketing: by committing to work up front for a limited period of time, documenting what we work on, publishing what we achieved, and being purposefully retrospective (what went well, what didn’t, what will we change), we can improve our effectiveness as individuals and teams.

For PMs the big payoff is in slowly transitioning out of firefighting mode and into bigger-picture thinking. It’s too easy to succumb to the steady pull of today’s emergency and tomorrow’s engineering release and lose strategic focus. Our kaizen has given me the ability to think farther ahead and be more purposeful about the work I take in.

I’ve posted the slides for the talk, and will write a little more about this topic soon.

Roadmaps in Agile, part 1

As a product manager in an agile development model, one of the most difficult things to do is build a roadmap. Making feature commitments six to nine months out feels contrary to the spirit of being “agile” and to maintaining the flexibility to change course to support the needs of the business.

Why is having a roadmap so hard when you’re agile? One word: sales. It’s relatively easy (provided you know how to do it) to move requirements around when the only people you’re communicating with are internal stakeholders. It’s much harder when a sales guy has already told the Big Prospect that the frimfram feature is coming in the third quarter, based on a roadmap he saw six months ago. Sales cycles have their own momentum and their own unforeseen requirements, and they’re very hard to sync up with a roadmap that keeps moving to respond to the current and future needs of the company and its customers.

So how do you do it? Any roadmap has three important dimensions: priorities, cost/benefit, and time. Getting the first two properly defined is critically important; once you have that, distributing the roadmap across time is more of a mechanical exercise (but not without its hazards). First, though, you have to know:

What do we need to do? If you’re a product manager who documents every requirement you ever unearth from a customer, prospect, sales guy, internal operations team, or executive, stores it in a backlog, and periodically revisits the backlog to organize and categorize it, this step might actually be relatively easy. If not, it will require some legwork: talk with each stakeholder, make sure customer voices are represented through input from the sales force (and/or SalesForce), write everything down, then send the list around and make sure that nothing got missed.

What do we do first? Prioritization has to be done as a conversation; there’s no way around it. You can prioritize in a vacuum if you’re using a static set of priorities (high, medium, low, for example), but if you’re stack ranking your requirements (which I firmly believe is the best way to go), you need the appropriate stakeholders to get together and make the tradeoffs. To do a good job of stack ranking, it helps to set some ground rules about what a requirement contributes (e.g. it drives current revenue, builds groundwork for future revenue, or reduces operational costs) and about how those contributions translate into ranks (e.g. current revenue outranks future revenue). There are some finer dimensions to the problem: generally there are different types of requirements and different types of people who will work on them, so if you slice the prioritized list by those dimensions, do the priorities still make sense? (They should.)
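To make the mechanics concrete, here’s a minimal sketch of tiered stack ranking. The categories, tie-breaker, and example requirements are all hypothetical; the point is simply that the ground rules, not individual opinions, determine the forced order.

```python
# Hypothetical sketch: forced stack ranking with tiered ground rules.
# Categories and tie-breakers are illustrative, not a prescription.
from dataclasses import dataclass

# Ground rule: current revenue outranks future revenue outranks cost reduction.
CATEGORY_RANK = {"current_revenue": 0, "future_revenue": 1, "cost_reduction": 2}

@dataclass
class Requirement:
    name: str
    category: str           # one of CATEGORY_RANK's keys
    stakeholder_votes: int  # tie-breaker within a category

def stack_rank(backlog):
    """Return a single forced ranking: no two items share a rank."""
    return sorted(
        backlog,
        key=lambda r: (CATEGORY_RANK[r.category], -r.stakeholder_votes),
    )

backlog = [
    Requirement("frimfram feature", "future_revenue", 5),
    Requirement("checkout bug fix", "current_revenue", 3),
    Requirement("log archiving", "cost_reduction", 8),
]
for i, req in enumerate(stack_rank(backlog), start=1):
    print(f"#{i}: {req.name}")
```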

What’s the cost and benefit? Ideally this should be done before prioritization, since it informs it, particularly on the benefit side. If you have a major initiative that’s supposed to drive new business, someone should be able to estimate how much new business it will drive. Engineering can estimate each requirement’s cost. Combining the two can help drive prioritization. It’s important to know the details of the business model behind the requirement, too: if the revenue plan assumes the requirement will help drive revenue for two quarters, it had better be released by the middle of the year.
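As an illustration (with entirely made-up numbers), a crude benefit/cost ratio is one way to turn those two estimates into a single input for the prioritization conversation:

```python
# Illustrative sketch: rough benefit/cost scoring to inform prioritization.
# All figures are invented; real ones come from sales forecasts and
# engineering estimates.
def benefit_cost_ratio(est_revenue, est_eng_weeks, cost_per_week=10_000):
    """Crude ratio of projected benefit to engineering cost."""
    return est_revenue / (est_eng_weeks * cost_per_week)

initiatives = {
    "frimfram feature": (250_000, 12),  # (projected revenue, eng-weeks)
    "reporting revamp": (90_000, 4),
}
for name, (revenue, weeks) in sorted(
    initiatives.items(), key=lambda kv: -benefit_cost_ratio(*kv[1])
):
    print(f"{name}: benefit/cost ratio {benefit_cost_ratio(revenue, weeks):.1f}")
```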

When do we do what? Here’s where the rubber hits the road. Up until this point it’s better to deal with requirements as high-level objects: epics of work that can span multiple teams and releases. But to actually assign features to releases, you need to be able to at least guess at a high-level division of work (we’ll work on this for three release cycles before it becomes available to the public) and of responsibility (both teams A and C contribute, so we need to put something in both teams’ work plans).
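A sketch of what that guess might look like as data; the epic, teams, and release labels here are all hypothetical:

```python
# Hypothetical sketch: an epic's division of work across release cycles
# and teams, before any feature is committed to a date.
epics = {
    "frimfram feature": {
        "teams": ["A", "C"],   # both teams contribute
        "release_cycles": 3,   # built over three cycles, then public
        "first_release": "R1",
    },
}

# Each contributing team gets a line item in its own work plan.
for name, epic in epics.items():
    for team in epic["teams"]:
        print(f"team {team}: plan '{name}' from release {epic['first_release']} "
              f"for {epic['release_cycles']} cycles")
```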

Time-based planning is also trickier because it has to make explicit assumptions about certainty. You can make a pretty concrete plan for releases early in the calendar year, because you know the business requirements won’t change too much between planning time and the release date. But the back half of the year is far trickier. So each release during the year needs an increasing “change reserve”: unallocated capacity that can take on new requirements. Alternatively, management has to be comfortable with the proposition that the out quarters will be highly subject to change.
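The arithmetic behind a change reserve is simple; here’s a toy version with invented capacity and reserve numbers:

```python
# Hypothetical sketch: holding back a growing "change reserve" for
# releases later in the planning horizon. Numbers are illustrative.
RELEASES = ["Q1", "Q2", "Q3", "Q4"]
TEAM_CAPACITY = 100   # story points per release
BASE_RESERVE = 0.10   # 10% reserve for the nearest release
RESERVE_STEP = 0.10   # each later release holds back 10% more

for i, release in enumerate(RELEASES):
    reserve = BASE_RESERVE + i * RESERVE_STEP
    plannable = TEAM_CAPACITY * (1 - reserve)
    print(f"{release}: plan {plannable:.0f} pts, hold {reserve:.0%} in reserve")
```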

Once you’ve done the basic blocking and tackling, the real fun begins: how do you communicate this nuanced plan in a consumable format to management and sales? Well, that’s part 2.

Release planning: How you prioritize matters

I hope I have the time to come back to this thought tomorrow (along with some overdue Thanksgiving blogging). But I had the opportunity to meet up with an old colleague for lunch today and to discuss, among other things, two different agile project cycles. One project ships every four to five months, runs seven or eight two-week iterations inside each release cycle, and uses MoSCoW-style prioritization (that is, Must, Should, Could, Won’t) for feature stories and backlog. The other ships every six weeks, has a single iteration per release cycle, and uses forced stack ranking for feature stories and backlog.

Which of the differences (iterations per release, release length, prioritization) is most important between the two projects? Which has the greatest impact on the release?

I’m going to give away the answer when I say I think there’s a stack rank of impact:

  1. Prioritization method
  2. Release length
  3. Iteration frequency

Why is prioritization so important? And which method is better, forced stack ranking or must, should, could, won’t?

The problem with any bounded priority system, whether it’s MoSCoW, Very High/High/Medium/Low, or simply 1, 2, 3, 4, is that it leads to “priority inflation.” When I was selling ITIL-compatible software, we had a feature in our system that used a two-factor method and customizable business logic to set the priority of customer IT incidents. It was necessary to go to that length because, left to their own devices, customers push everything inexorably to the highest priority. Why? Because they learn, over time, that that’s all that ever gets fixed.
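As a hypothetical illustration of a two-factor method, consider an ITIL-style impact × urgency matrix (the matrix values below are invented, not the actual product’s logic):

```python
# Hedged sketch of a two-factor priority method, in the spirit of an
# ITIL-style impact x urgency matrix. Matrix values are illustrative;
# the point is that priority is computed, not chosen by the reporter.
IMPACT = ["high", "medium", "low"]
URGENCY = ["high", "medium", "low"]

# 1 is the most severe priority; rows = impact, columns = urgency.
PRIORITY_MATRIX = [
    [1, 2, 3],
    [2, 3, 4],
    [3, 4, 5],
]

def incident_priority(impact: str, urgency: str) -> int:
    return PRIORITY_MATRIX[IMPACT.index(impact)][URGENCY.index(urgency)]

print(incident_priority("high", "medium"))  # -> 2
```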

It’s true in software development too. I can’t count the number of features that were ranked as “must-haves” on the project that used MoSCoW. It was very difficult to defend phasing the work, because everything was a must.

The project that uses forced stack ranking doesn’t have the problem of too many “must-haves,” because there can be only one #1 priority, only one #2, and so on. Developers can work down the list of priorities through a release. If there’s been an error in estimation and the team has overcommitted for the release, it’s the lower-priority items that slip.
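A toy sketch of that mechanic (the stories, estimates, and capacity are all invented):

```python
# Illustrative sketch: committing work in strict stack-rank order until
# release capacity runs out; everything below the cut slips.
stack_rank = [
    ("checkout bug fix", 5),   # (story, estimated points)
    ("frimfram feature", 13),
    ("reporting revamp", 8),
    ("log archiving", 3),
]
CAPACITY = 21  # points the team can actually deliver this release

committed, remaining = [], CAPACITY
for story, points in stack_rank:
    if points > remaining:
        break  # this rank and everything below it slips
    committed.append(story)
    remaining -= points

slipped = [story for story, _ in stack_rank if story not in committed]
print("ships:", committed)
print("slips:", slipped)
```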

Forced stack ranking works with stakeholders outside engineering, too, because it forces them to evaluate requirements against each other in a systematic way. Rather than saying “everything is a must,” stakeholders can give answers about whether requirement A or requirement B is more important within the scope of the release.

Release length and iteration frequency matter, too, because they provide mechanisms for market-driven and internally driven course correction. But in my experience, as long as the release length and iteration frequency aren’t too far out of whack, the right prioritization method is a crucial ingredient for delivering software that meets stakeholder expectations and for defining feature lists that have a reasonable shot at getting completed within a single release.