I’ve been teaching and mentoring for several years, and over that time I’ve found that classes respond well to worksheets and ‘tools’: forms you fill in that spit out a result or a decision, almost as if pressing a button makes the correct answer pop out at the other end.
I find this to be a form of delegated thinking: individuals hiding behind a tool to make decisions for them. The reality is that tools make decisions more systematic and easier to reach; they don’t replace judgement.
I digress.
I was in a debate with a colleague about Kano. In simple terms, Kano is a model that helps prioritise feature development based on how likely features are to satisfy customers. It defines three categories – excitement features, performance features and basic features – as measured by two factors: degree of customer delight and implementation investment. A feature with high customer delight and high implementation investment falls under the excitement category. The focus of the model is on how much a feature will satisfy users.
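For concreteness: in practice, a feature’s Kano category is usually derived from a paired survey – a ‘functional’ question asking how the user feels if the feature is present, and a ‘dysfunctional’ question asking how they feel if it’s absent – rather than scored directly. Here’s a minimal sketch of the standard evaluation table, where Attractive roughly maps to excitement, One-dimensional to performance and Must-be to basic:

```python
# Rows: answer to the functional question ("how do you feel if the
# feature is present?"); columns: answer to the dysfunctional question
# ("...if it is absent?").
# A=Attractive, O=One-dimensional, M=Must-be,
# I=Indifferent, R=Reverse, Q=Questionable (contradictory response).
EVALUATION_TABLE = {
    "like":      {"like": "Q", "must-be": "A", "neutral": "A", "live-with": "A", "dislike": "O"},
    "must-be":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "live-with": {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "must-be": "R", "neutral": "R", "live-with": "R", "dislike": "Q"},
}

def classify_response(functional: str, dysfunctional: str) -> str:
    """Classify one respondent's answer pair into a Kano category."""
    return EVALUATION_TABLE[functional][dysfunctional]

# Example: a respondent who would like the feature present and could
# live with it absent -> "A" (Attractive, i.e. an excitement feature).
print(classify_response("like", "live-with"))  # A
```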
The issue I have with Kano is that it’s very subjective on a small sample set. It also seems to measure the effect after the fact: you have to develop and release the feature before you can get the feedback, which defeats the purpose. Sure, you could argue that you can use a prototype, or describe the feature and ask hypothetical questions, but the reality is that users tend to be biased and give inaccurate responses even when looking at a real, tangible, released feature, let alone a hypothetical one.
So, I believe Kano has limited applications as a standalone tool; there are alternatives that provide a better decision-making framework for product managers.
One of the tools I particularly like is the ICE model, granted I probably use it a little differently. ICE stands for Impact, Confidence, Ease: Impact is the effect the feature is likely to have on the product, Confidence is how sure you are that it will have that effect, and Ease is how easy the feature is to develop.
I like it as a framework because it’s simple enough to be understood by all, but it allows for factualness: we can attach data as evidence for the score we give to each of I, C and E.
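To make that concrete, here’s a minimal sketch of how ICE scores might be recorded alongside their evidence. The 1–10 scale and the multiply-to-rank convention are common but not the only way to do it, and the features and numbers below are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class IceScore:
    """ICE inputs, each scored 1-10. The evidence field is the 'data as
    evidence' mentioned above -- a note on where each score came from."""
    feature: str
    impact: int       # expected effect on the product (revenue, conversion, ...)
    confidence: int   # how sure we are the impact will materialise
    ease: int         # how cheap/quick the feature is to build
    evidence: str = ""

    def score(self) -> int:
        # Multiplying (rather than averaging) punishes a weak link:
        # a feature that is easy but low-impact still ranks low.
        return self.impact * self.confidence * self.ease

# Hypothetical backlog, for illustration only.
backlog = [
    IceScore("one-click checkout", impact=8, confidence=6, ease=4,
             evidence="past checkout-funnel A/B tests; conversion lift data"),
    IceScore("dark mode", impact=3, confidence=7, ease=8,
             evidence="support tickets; low expected revenue effect"),
]

for item in sorted(backlog, key=lambda i: i.score(), reverse=True):
    print(f"{item.feature}: {item.score()}")
```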
With Kano, it’s hard to apply the same factualness: delighting the user is subjective, and the model also neglects the ‘Ease’ component of determining the cost of developing a feature. The situation we’re trying to avoid is one where Kano says a feature would be amazingly delightful to users based on a limited, biased sample, the feature takes a significant amount of time and effort to develop, and then, when released, it falls flat.
Of the ICE model’s three components, the one I had the most reservations about was the ‘Confidence’ measure. Impact we can measure by looking at signals such as how much more we could charge or how much it would improve conversion %, and Ease can be measured via the time-cost of development, but Confidence I find challenging.
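One way to ground Impact and Ease in those signals is to normalise them onto the 1–10 scale. A rough sketch follows; the calibration points (a 5-point conversion lift as the ‘best case’, 12 weeks as the longest build worth scoring) are purely illustrative assumptions you would tune to your own product’s history:

```python
def impact_from_conversion_lift(expected_lift_pct: float, max_lift_pct: float = 5.0) -> int:
    """Map an expected conversion-rate lift (percentage points) onto a
    1-10 Impact score. max_lift_pct is an assumed best-case calibration."""
    scaled = 10 * min(expected_lift_pct, max_lift_pct) / max_lift_pct
    return max(1, round(scaled))

def ease_from_dev_weeks(estimated_weeks: float, max_weeks: float = 12.0) -> int:
    """Map an engineering estimate onto a 1-10 Ease score: the longer
    the build, the lower the score. max_weeks is again an assumption."""
    scaled = 10 * (1 - min(estimated_weeks, max_weeks) / max_weeks)
    return max(1, round(scaled))

print(impact_from_conversion_lift(2.0))  # -> 4
print(ease_from_dev_weeks(3.0))          # -> 8
```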
This is where I see Kano and ICE working together. Kano can give us some indication of whether the feature we’re planning to develop will have its intended effect, and the more we survey, the more data we have to affirm or disprove our assumptions, and the more confidently we can place a Confidence score into ICE.
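Here’s a minimal sketch of that idea (my own construction, not a standard method): count the respondents whose Kano classification supports the feature, then discount the proportion for small samples using a Wilson score lower bound, so Confidence only climbs as the survey grows:

```python
from math import sqrt

def confidence_from_kano(positive: int, total: int, z: float = 1.96) -> int:
    """Turn Kano survey results into a 1-10 ICE Confidence score.

    positive: respondents classified as Attractive or One-dimensional
    total:    all valid respondents
    Uses the Wilson score lower bound, so a 4/5 result scores well below
    an 80/100 result despite the same raw proportion.
    """
    if total == 0:
        return 1  # no data, minimum confidence
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return max(1, round(10 * (centre - margin) / denom))

print(confidence_from_kano(4, 5))     # small sample -> modest score (4)
print(confidence_from_kano(80, 100))  # same proportion, bigger sample -> 7
```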
So, this is how we can think about combining the two tools (Kano and ICE) to get a better signal on our feature prioritisation.