JBREC Prioritization Practices

The prioritization process at JBREC is still in its early stages and will continue to evolve as we develop new ways to articulate the business strategy and align the initiatives that matter most to it. Our first pass at prioritization takes two main factors into consideration: the intended outcome (new revenue, improved value for an existing product, etc.) and a score. The score is based on a common framework in the product management community called RICE, which stands for:
Reach - how many customers will this impact
Impact - how much will the initiative impact each customer
Confidence - how confident are we in our estimates
Effort - how many uninterrupted person-months will the project take

Here’s how you calculate the RICE score:
RICE = (Reach × Impact × Confidence) / Effort
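As a quick sketch, the same formula in Python (the rice_score function is a hypothetical name, not part of any JBREC tooling):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score: (Reach × Impact × Confidence) / Effort.

    reach      -- number of customers impacted (e.g., 225)
    impact     -- multiplier: 3 (massive), 2 (high), 1 (medium), 0.5 (low), 0.25 (minimal)
    confidence -- 1.0 (high), 0.8 (medium), 0.5 (low)
    effort     -- uninterrupted person-months; must be greater than zero
    """
    return (reach * impact * confidence) / effort
```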

What are we trying to achieve?

At the end of the day, our goal isn't just to assign a number to an initiative that declares its importance. Rather, it's to start with a quantifiable understanding of each initiative that can drive discussion, giving us a better opportunity to compare and contrast opportunities. A 1-N ranking of initiatives lets us spend our time on the areas that matter most, sharpening focus and enabling cross-team collaboration.

How to Put RICE into Practice

Reach

Reach is typically based on the number of customers impacted by the initiative. In our case, we can determine reach from the products the initiative affects (e.g., MAF, RAF) and use the rough number of customers for each product to calculate a total. We can update these values quarterly as customers are added to or removed from each product.
For example: SFR (200) + RCAF (25) = 225
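As a sketch of how this could work, the quarterly counts might live in a small lookup (CUSTOMERS_PER_PRODUCT and the reach helper are hypothetical names; only the SFR and RCAF numbers come from the example above):

```python
# Rough customer counts per product, refreshed quarterly.
# Only the SFR and RCAF values come from the example above; other products would be added the same way.
CUSTOMERS_PER_PRODUCT = {"SFR": 200, "RCAF": 25}

def reach(products: list[str]) -> int:
    """Sum customer counts across every product the initiative touches."""
    return sum(CUSTOMERS_PER_PRODUCT[p] for p in products)

print(reach(["SFR", "RCAF"]))  # 225
```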

Impact

Assigned one of the following:
Massive = 3x, High = 2x, Medium = 1x, Low = 0.5x, Minimal = 0.25x
Impact is a more subjective measure, but we can put some guardrails around it to create consistency. Impact acts as a scalar multiplier in the RICE score. It should represent our best guess at the effect an initiative will have on the aligned outcome for the users who experience the feature or update.
What not to do
Impact should not factor in the number of customers that will be impacted, since that's already accounted for with reach.
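To keep the labels consistent from one initiative to the next, the scale could be captured as a simple lookup (a sketch; the IMPACT name is hypothetical):

```python
# Impact multipliers from the scale above.
IMPACT = {"massive": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5, "minimal": 0.25}
```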

Confidence

Confidence is defined as a percentage:
100% is “high confidence”, 80% is “medium”, 50% is “low”. Anything below that is “total moonshot”.
The confidence rating is another input that adjusts for the level of evidence we have to support the reach, impact, and effort expectations. If we have strong quantitative evidence for all three, that likely represents high confidence (100%). If we're internally conflicted on impact and the level of effort is tricky to determine, that might be a good candidate for low (50%).
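The same pattern works for confidence (again a sketch; CONFIDENCE is a hypothetical name):

```python
# Confidence percentages from the scale above, expressed as multipliers.
CONFIDENCE = {"high": 1.0, "medium": 0.8, "low": 0.5}
# Anything below 0.5 is a "total moonshot" and probably needs more evidence before scoring.
```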

Effort

Effort is estimated as a number of person-months.
An initiative can have a huge impact, reach every user, and carry 100% confidence, but if it takes 100 person-months to complete, that may be a sign the initiative is either too big or that we should hold off until we can land some short-term wins.
Estimating might look something like this:
The initiative will take about a week of planning, 1-2 weeks of design, and 2-4 weeks of engineering time. That's roughly 4-7 person-weeks, so I’ll give it an effort score of 2 person-months.
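Putting the pieces together with the hypothetical helpers sketched above, that example initiative might score like this:

```python
# ~1 week planning + 1-2 weeks design + 2-4 weeks engineering ≈ 4-7 person-weeks,
# which rounds up to about 2 person-months.
effort = 2

score = rice_score(
    reach=reach(["SFR", "RCAF"]),   # 225 customers
    impact=IMPACT["medium"],        # 1x
    confidence=CONFIDENCE["high"],  # 100%
    effort=effort,
)
print(score)  # 112.5
```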

Want to learn more?

Here's a link if you're looking to learn more or want to read some examples of each RICE input.