
Metricizing Key Results

The art of crafting clear, measurable results

🖼️ Background

Like many companies, Coda uses OKRs to set goals and measure progress. Teams plan around 6-week sprints, set goals for the sprint, and evaluate success at the end. Crafting clear, measurable key results can be tricky, so we developed this simple guide of best practices. Copy this doc to make it your own and use it with your team.

⛳ Goals

Improve our collective muscle for understanding the growth levers of our business. Get better at anticipating how each project impacts metrics, at explaining project objectives as clearly as possible, and at connecting our work back to company goals.

To help achieve these, we have a Metric column per Key Result. In many cases Key Results already had metrics, just not always consistently structured. The Metric column captures what metric a Key Result seeks to move, and by how much. Note that these metrics generally shouldn’t be top-level company metrics, but rather something more directly driven by the project.
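For illustration only, here’s a minimal sketch of that structure as a record. The field names are hypothetical; in our doc this is simply a Metric column on the Key Results table, not code:

```python
from dataclasses import dataclass

# Hypothetical shape only: in practice this is a Metric column
# on a Key Results table, not a class anyone maintains.
@dataclass
class KeyResult:
    description: str  # the project's goal, e.g. "Launch Slack integration"
    metric: str       # what metric the KR seeks to move (project-level, not company-level)
    target: str       # by how much, e.g. "> 20 views/wk per dashboard"
```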

📐 What metrics for Key Results look like

There’s a hierarchy of Metrics: Impact > Usage > Execution. While we all strive to prove the impact of our work, sometimes it’s hard or impossible. Feature launches should have at least usage metrics (which means logging + dashboards are present). Longer projects that span many sprints are admittedly sometimes hard to graduate to usage or impact metrics before they launch, and tend to stick with execution metrics.
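One way to picture the hierarchy, as a hypothetical sketch (not tooling we actually use): encode the metric types with an ordering so that Impact outranks Usage, which outranks Execution:

```python
from enum import IntEnum

# Hypothetical encoding: a higher value means a stronger class of metric.
class MetricType(IntEnum):
    EXECUTION = 1  # did the planned work happen?
    USAGE = 2      # is anyone actually using it? (needs logging + dashboards)
    IMPACT = 3     # did it move an outcome the business cares about?

# The hierarchy: Impact > Usage > Execution.
assert MetricType.IMPACT > MetricType.USAGE > MetricType.EXECUTION
```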

Here are a few examples of each type of metric, using a few real recent KRs:
Example KR Metric Types

1. Impact Metric: Weekly company stats meeting analysis attributed as justification for staffing >= 1 project
   Usage Metric: Weekly company stats meeting attendee satisfaction survey averages > 4.5/5
   Execution Metric: Company stats meeting occurs every week
   No Metric: Run company stats meeting

2. Impact Metric: New Focus Area dashboards attributed >= 1 time for reprioritizing work
   Usage Metric: 5 new Focus Area dashboards each get > 20 views/wk
   Execution Metric: Build one dashboard per Focus Area
   No Metric: Make sure Focus Areas have data resources

3. Impact Metric: Deploy Coda Slack integration to workspaces representing at least X paid editors, reactivating Y editors
   Usage Metric: Launch Slack: deploy to Coda workspaces representing at least X paid editors
   Execution Metric: Launch Slack integration to general availability
   No Metric: Finish up Slack work

4. Impact Metric: Reduce user complaint volume about bugs in next NPS survey by 25%
   Usage Metric: Reduce bug impression rate by 75%
   Execution Metric: Bug fix rate matches or exceeds incoming rate
   No Metric: Buffer time for bug fixing

5. Impact Metric: Reduce user complaint volume about bugs in next NPS survey by 25%
   Usage Metric: Reduce bug impression rate by 75%
   Execution Metric: All recent regressions fixed within 2 weeks
   No Metric: Fix regressions

6. Impact Metric: (Next sprint hope: Bazel reduces build times by 20%)
   Usage Metric: (Next sprint hope: launch Bazel to 100% of builds)
   Execution Metric: Finish 7 of 10 blockers for Bazel Migration
   No Metric: Continue Bazel Migration

There’s also a hierarchy of clarity: specific usage metrics > ambiguous impact metrics. Similarly, crystal clear execution metrics > impractical usage metrics that are just too hard or time-consuming to collect:
Clear Metrics > Ambiguous or Impractical Metrics

1. Clear metric (Usage): Weekly company stats meeting attendee satisfaction survey averages > 4.8/5
   Ambiguous metric (Impact): Weekly company stats meeting analysis attributed as justification for staffing >= 1 project
   Comment: In practice, it might be very hard to collect attribution information about how some analysis was used as justification. We’re much more likely to be able to collect clean, unambiguous data on meeting satisfaction than fuzzier justification data.

2. Clear metric (Execution): Finish 7 of 10 blockers for Bazel Migration
   Ambiguous metric (Impact): Bazel reduces build times by X%
   Comment: This project is either fully migrated or not for this sprint, so trying to measure any impact this early would be confusing at best. The following sprint, when it goes to launch, we could consider measuring time-savings impact.

3. Clear metric (Execution): All recent regressions fixed within 2 weeks
   Ambiguous metric (Usage): Reduce bug impression rate by 75%
   Comment: While the usage metric around the visibility of bugs helps emphasize the goal of not having users experience bugs, it may be impractical to instrument and tally each bug for its impression count. Like all metrics, this one’s a judgment call, but speed to fix may be significantly easier to use in practice and therefore create more clarity and alignment.
Note that sometimes we may be tempted to add multiple KRs with different metrics for the same project. Exploding the number of KRs can create more confusion instead of providing clarity and focus. Our best practice is to pick the one strongest metric per KR rather than expanding a project’s many metrics into many separate KRs, as in the sketch below.
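One way to picture that selection, as a hypothetical sketch: tag each candidate metric with its type and whether it’s practical to collect, drop the impractical ones first (clarity beats hierarchy, per the table above), then keep the strongest type that remains:

```python
from enum import IntEnum

class MetricType(IntEnum):
    EXECUTION = 1
    USAGE = 2
    IMPACT = 3

# Candidate metrics for one project: (metric, type, practical to collect?)
# Examples adapted from the stats-meeting rows of the tables above.
candidates = [
    ("Analysis attributed as justification for staffing >= 1 project", MetricType.IMPACT, False),
    ("Attendee satisfaction survey averages > 4.5/5", MetricType.USAGE, True),
    ("Meeting occurs every week", MetricType.EXECUTION, True),
]

# Drop impractical metrics first, then keep the single strongest
# type that remains as the KR's one metric.
practical = [c for c in candidates if c[2]]
metric, _, _ = max(practical, key=lambda c: c[1])
print(metric)  # -> "Attendee satisfaction survey averages > 4.5/5"
```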

👯 Data team is here to help!

During planning cycles, the Coda Data team is available to help brainstorm Metrics for KRs. We hold Jam Sessions to chat more during planning week:
Planning Jam Sessions w/ Data Team (all times PST):

November 16, 2020: 3:00 PM, 3:30 PM, 4:00 PM, 4:30 PM
November 17, 2020: 3:30 PM, 4:00 PM, 4:30 PM
November 18, 2020: 10:00 AM, 4:00 PM

💯 The art of grading Key Results

Metrics in Key Results have two main values:
1) Storytelling. To tell as specific and clear a story as possible about the work we’re doing. A great KR and Metric convey the motivation and story of a project to other teams and to our future selves looking back on it.
2) Feedback loops. Metrics and Key Results are one way to hold ourselves accountable. There are two components of that:
[A] Goal setting. It can be hard to predict impact, usage, or execution. But the better we can predict, the better we can prioritize (knowing the costs and ROIs of projects), and the better we can manage dependencies. Grading KRs helps us evaluate how accurate our forecasting was and incorporate learnings into the next planning cycle.
[B] Execution reflection. Did we knock it out of the park unexpectedly? What worked well and what didn’t? What was really challenging? Grading KRs is a great moment to crystallize and share learnings with the full team.
Grading key results can be hard and sometimes emotional. We encourage teams not to get overly focused on the exact score (78% done or 79% done?) but instead to focus on reminding teams of the motivations and story, and on sharing learnings, whether those learnings are about how to set goals better next time or about what worked and didn’t in execution.
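For a “reach at least X” style metric, the arithmetic behind a score like 78% is simple; here’s a hypothetical sketch (capping at 1.0 is one possible convention, not a rule from this guide):

```python
def grade_kr(actual: float, target: float) -> float:
    """Fraction of target achieved, capped at 1.0 (overshooting doesn't add score)."""
    if target <= 0:
        raise ValueError("target must be positive")
    return min(actual / target, 1.0)

# e.g. 5 dashboards were supposed to clear > 20 views/wk, and 4 did:
print(f"{grade_kr(4, 5):.0%}")  # -> 80%
```

However you compute the score, the learnings matter more than the decimal.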



