Product // 1-pager template

A template to provoke the right questions/answers as you work through a feature
This template is structured around the following four stages:
Kick Off
Implementation Review
Release
Impact Assessment
You may want to run these as four sequential stages in a process, but be careful not to slip into a waterfall process.
So use it wisely.

Kick Off

Summary

Brief description of:
What problem does this solve?
Why is it important?
What might you build?

OKR

Which OKR does this relate to?

Supporting insights

Qualitative, quantitative and competitor research that provides context on the problem we’re trying to solve.

Impact estimate

Back-of-envelope calculation or a “what would we need to believe for this to be worth building?” style sensitivity analysis

Expected approach

Current high-level thinking on how we might solve the problem, if known

Success metrics

What are the leading metrics that will drive the OKR? E.g. UV to sub-topic pages

Scope

Roughly how big a solution are we considering? What is in and out of scope?

RACI

Who are the key stakeholders involved?
Responsible: [the team doing the work]
Accountable: [the PM]
Consulted: [stakeholders whose requirements need to be taken into account]
Informed: [stakeholders who need to be kept up to date]

Links

Any links to supporting documents (e.g. designs, Miro boards, standalone research, Trello, etc.)

Next steps

What needs to happen before this is ready for Implementation Review? What is the plan for Discovery?

Expected Implementation Review

How long do we expect to spend on Discovery? When do we expect to bring this to Product Review for Implementation Review?

Implementation Review

Insights

What do we know from user research and deeper analysis that led to the proposed design?

User flows

Optional

Prototype link

Optional

Final designs

What are we going to build? Should include final copy and content

Technical approach

High-level explanation of the technical approach and feasibility. Covers security and scalability concerns if appropriate.

Effort

Engineering estimate in pair sprints, based on the tickets created. The team should also be in a position to give a projected release date that follows from this. We are happy to give a high-confidence estimate at this stage, because any technical investigation needed to remove uncertainty has been completed during Discovery (i.e. before Implementation Review).

Estimated delivery date

A date, please, e.g. "w/c 6th Sep" or "9th Sep"

Reporting and tracking requirements

Does the planned work require additional or new reporting, data collection, or tracking? Are we ensuring that existing collection and tracking are not impeded or stopped? Do we need to involve the Data teams (Engineers, Analysts or Scientists) to ensure current or future ETL, reporting or analyses are taken into account?

Data Privacy and Information Security

Does this feature require any additional personal data collection or a different use of existing personal data? Have we considered if a Data Privacy Impact Assessment (DPIA) is required, what the data retention policy for this new data should be, and if we need to make any changes to the Subject Access Request (SAR) process? Are there any specific information security requirements we need to consider?

Testing strategy

How will we know if this feature has been successful? What is the expected timeline on this?
What experiments will we run (if any) through Optimizely?

Go To Market plans

How will we roll out this feature? What is the product marketing plan, if any?
Client Marketing
Expert Marketing

Expected Impact Assessment

When do we expect to bring this to Product Review for Impact Assessment?

Impact Assessment

Decision

For A/B tests only. Did we roll this out or roll it back? Why?

Success metric

How has this changed since launch? If this wasn't an A/B test, how confident are we that the difference can be attributed to the feature?

Feedback

Any qualitative feedback from users on the feature

Analysis

Any other insights or analysis deep dives that we’ve done to understand the impact in more detail
What did we learn from any experiments we ran, even if they weren't successful?

Next steps

Note whether any follow-up features are planned (and link to relevant documentation)
