Process Improvement Impact on Process (EL & PM)
Why we need to track this:
To understand if process improvements are genuinely enhancing the customer experience. Tracking helps us validate whether changes lead to higher satisfaction and identify which improvements have the most positive impact.
What Data Needed:
Customer Satisfaction Rate – for the specific process
Feedback on the improvement from customers
Who initiated & implemented the improvement
Calculation:
Average rating per week/month
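As a rough illustration, a short script like the one below could compute the weekly average rating from exported feedback data. The field layout (submission date, 1–5 rating) and the weekly grouping are assumptions; adjust them to match how the feedback is actually stored.

```python
from collections import defaultdict
from datetime import date

# Hypothetical export of customer feedback: (date submitted, rating on a 1-5 scale)
feedback = [
    (date(2024, 6, 3), 4),
    (date(2024, 6, 5), 5),
    (date(2024, 6, 12), 3),
]

# Group ratings by ISO week and average them
weekly = defaultdict(list)
for submitted, rating in feedback:
    year, week, _ = submitted.isocalendar()
    weekly[(year, week)].append(rating)

for (year, week), ratings in sorted(weekly.items()):
    print(f"{year}-W{week}: average rating = {sum(ratings) / len(ratings):.2f}")
```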
Process Improvement Impact through Automation (EL & AS)
Why we need to track this:
To quantify the efficiency gains from automation in terms of cost savings, not just time. This helps justify investments in automation and recognize high-impact initiatives that reduce manual work or resource load.
What Data Needed:
FTEs saved – for automation [reflects cost rather than time]
Time saved from the improvement
Who initiated & implemented the improvement
Calculation:
Previous time spent – Current time spent
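One possible way to turn that time difference into an FTE and cost figure is sketched below. The 40-hour work week and the hourly rate are illustrative assumptions, not agreed values.

```python
# Illustrative figures; replace with real measurements
previous_hours_per_week = 25.0   # manual time spent before the automation
current_hours_per_week = 5.0     # time still spent after the automation
hours_per_fte_week = 40.0        # assumed standard work week
hourly_rate = 30.0               # assumed fully loaded hourly cost (USD)

hours_saved = previous_hours_per_week - current_hours_per_week
ftes_saved = hours_saved / hours_per_fte_week
weekly_cost_saved = hours_saved * hourly_rate

print(f"Hours saved per week: {hours_saved:.1f}")
print(f"FTEs saved: {ftes_saved:.2f}")
print(f"Estimated weekly cost saved: ${weekly_cost_saved:,.2f}")
```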
SOP Helpfulness Score (KM)
Why we need to track this:
To gauge the usefulness of SOPs from the team’s perspective. This helps us identify which SOPs are effective and which may need updates or further clarification, ultimately improving self-service and reducing support dependency.
How to track:
Using Coda by adding a “Helpful” button on each SOP
What Data Needed:
Number of people who clicked the “Helpful” button
Calculation:
Total number of clicks per week per SOP
Average Document Usage (KM)
Why we need to track this:
To measure how frequently internal documents are being accessed and used. This helps us understand which resources are valuable, spot outdated or unused docs, and ensure critical information is easily accessible and actually being used by the team.
How to track:
Page views or access logs from Coda (or your document platform)
What Data Needed:
Total number of views per document
Unique users (optional, for deeper insight)
Calculation:
Total document views ÷ Number of documents
(Optional: filter to a specific date range, e.g., per week/month)
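A minimal sketch of the calculation, assuming view counts can be exported per document; the document names and counts below are placeholders.

```python
# Hypothetical view counts exported from the document platform
views_per_document = {
    "Onboarding SOP": 120,
    "Escalation Guide": 45,
    "Tool Setup Checklist": 8,
}

average_usage = sum(views_per_document.values()) / len(views_per_document)
print(f"Average document usage: {average_usage:.1f} views per document")

# Optional: flag documents well below the average as possibly outdated or unused
for doc, views in views_per_document.items():
    if views < 0.25 * average_usage:
        print(f"Low usage: {doc} ({views} views)")
```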
Accessibility Score (KM)
Why we need to track this:
To ensure key information and resources are easy to find, access, and use. A high accessibility score means the team can move faster, reduce repeat questions, and work more independently. It also highlights areas where structure or visibility needs improvement.
How to track:
Quick internal pulse surveys (e.g., “How easy was it to find what you needed?”)
Feedback form linked in SOP
What Data Needed:
User feedback on document/tool navigation
Example searches that led to dead ends or required help
Calculation:
Average rating score from survey (e.g., 1–5 scale)
(Optional: % of people who rated 4 or 5 = “easily accessible”)
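Both figures can be derived from the raw survey responses along these lines; the response list on a 1–5 scale is a placeholder.

```python
# Hypothetical pulse-survey responses (1 = very hard to find, 5 = very easy)
responses = [5, 4, 3, 5, 2, 4, 4]

average_score = sum(responses) / len(responses)
easily_accessible = sum(1 for r in responses if r >= 4) / len(responses) * 100

print(f"Average accessibility score: {average_score:.2f} / 5")
print(f"% rated 4 or 5 (easily accessible): {easily_accessible:.0f}%")
```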
SOP Adoption Rate (KM)
Why we need to track this:
To ensure team members understand and retain key information from SOPs. Tracking knowledge checks helps identify gaps in understanding, validate the clarity of documentation, and reinforce learning for better execution.
How to track:
Use Quizizz to create short quizzes tied to specific SOPs
Monitor completion and scores
What Data Needed:
Quiz completion rate per team member
Individual and average quiz scores
Link to the corresponding SOP
Calculation:
% of team members who passed (based on a defined passing score, e.g., 80%)
Completion rate = (Number of people who took the quiz ÷ Total expected) × 100
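The two formulas could be computed from a Quizizz score export roughly as follows; the names, scores, 80% passing threshold, and expected headcount are illustrative, and the pass rate here is taken over quiz takers rather than the full team.

```python
# Hypothetical quiz results per team member (score as a percentage)
quiz_scores = {"Ana": 92, "Ben": 75, "Cara": 85, "Dion": 88}
total_expected = 6        # team members expected to take the quiz (assumed)
passing_score = 80        # assumed passing threshold

completed = len(quiz_scores)
passed = sum(1 for score in quiz_scores.values() if score >= passing_score)

completion_rate = completed / total_expected * 100
pass_rate = passed / completed * 100 if completed else 0.0

print(f"Completion rate: {completion_rate:.0f}%")
print(f"% of quiz takers who passed: {pass_rate:.0f}%")
```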
Satisfaction Rate for Non-Enablement Request Support (EL & PM)
Why we need to track this:
To measure how effectively the team handles ad-hoc or support requests that fall outside of planned enablement. Tracking satisfaction helps maintain service quality, ensure responsiveness, and identify areas where we can improve how we support the org beyond our core focus.
How to track:
Feedback form or quick rating (e.g., 1–5 stars or thumbs up/down) sent after request resolution
Optionally embedded in follow-up message or ticket closure
What Data Needed:
Requester feedback or rating
Type and volume of support requests
Responder/team who handled the request
Calculation:
Average rating per request
Overall satisfaction rate = (Sum of all ratings ÷ Total responses)
(Optional: % of responses rated 4 or 5 = "satisfied")
Average Number of Workers Participating in Project Training
Why we need to track this:
To assess engagement and reach of training efforts tied to specific projects. This helps ensure the right people are getting trained, identify gaps in participation, and improve planning for future training sessions.
How to track:
Record participants who take each training session
What Data Needed:
List of participants per training session
Associated project/training topic
Total number of workers in the target worker type
Calculation:
Total number of participants ÷ Number of training sessions
(Optional: Participation rate = Actual attendees ÷ Expected attendees × 100)
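A quick sketch of both numbers, assuming an attendance list is kept per session; the session data and expected attendee count are placeholders.

```python
# Hypothetical attendance lists per training session
sessions = {
    "Project X kickoff training": ["Ana", "Ben", "Cara"],
    "Project X tooling deep dive": ["Ana", "Dion"],
}
expected_attendees_per_session = 5  # assumed target worker count per session

total_participants = sum(len(attendees) for attendees in sessions.values())
average_participants = total_participants / len(sessions)
print(f"Average participants per session: {average_participants:.1f}")

for name, attendees in sessions.items():
    rate = len(attendees) / expected_attendees_per_session * 100
    print(f"{name}: participation rate {rate:.0f}%")
```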
% Reduction in Skill Gaps (LD)
Why we need to track this:
To measure the effectiveness of training or upskilling efforts over time. Tracking skill gap reduction helps show progress, guide future learning plans, and ensure the team is gaining the capabilities needed to support business goals.
How to track:
Skills assessment before and after training
Use Quizizz to create short quizzes tied to specific training
What Data Needed:
List of key skills per role or project
Baseline (pre-training) skill score
Post-training skill score
Calculation:
% Reduction = ((Initial Skill Gap – Final Skill Gap) ÷ Initial Skill Gap) × 100
(Skill Gap = Target proficiency – Actual proficiency)
(Can be averaged across individuals or teams for reporting)
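A worked sketch of the formula, assuming pre- and post-training proficiency scores per skill measured against a target level; all figures below are illustrative.

```python
# Hypothetical proficiency on a 1-5 scale: (target, before training, after training)
skills = {
    "Data analysis": (4, 2, 3),
    "Stakeholder comms": (5, 3, 5),
    "Tool administration": (4, 3, 4),
}

# Skill gap = target proficiency - actual proficiency (floored at zero)
initial_gap = sum(max(target - before, 0) for target, before, _ in skills.values())
final_gap = sum(max(target - after, 0) for target, _, after in skills.values())

reduction = (initial_gap - final_gap) / initial_gap * 100 if initial_gap else 0.0
print(f"Initial skill gap: {initial_gap}, final skill gap: {final_gap}")
print(f"% reduction in skill gaps: {reduction:.0f}%")  # 80% with the sample data
```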
% Increase in Speed of Training Program Development
Why we need to track this:
To evaluate how efficiently we’re creating training programs over time. Tracking speed improvements highlights process efficiencies, helps allocate resources better, and ensures we’re meeting enablement needs promptly.
How to track:
Record the time taken (in days or hours) to develop each training program from planning to launch
Compare development timelines over different periods (e.g., quarterly)
What Data Needed:
Start and end date of each training program development
Number of programs developed
Development process notes (optional, for identifying blockers or improvements)
Calculation:
% Increase in speed = ((Previous Avg. Dev Time – Current Avg. Dev Time) ÷ Previous Avg. Dev Time) × 100
(Lower development time = increased speed)
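The calculation could be run over recorded development timelines roughly as follows; the dates are placeholders and the quarterly comparison is just one option.

```python
from datetime import date

# Hypothetical development windows (planning start, launch date) per program
previous_quarter = [(date(2024, 1, 8), date(2024, 2, 2)), (date(2024, 2, 5), date(2024, 3, 1))]
current_quarter = [(date(2024, 4, 1), date(2024, 4, 16)), (date(2024, 5, 6), date(2024, 5, 24))]

def average_dev_days(programs):
    return sum((end - start).days for start, end in programs) / len(programs)

prev_avg = average_dev_days(previous_quarter)
curr_avg = average_dev_days(current_quarter)
speed_increase = (prev_avg - curr_avg) / prev_avg * 100

print(f"Previous avg dev time: {prev_avg:.1f} days, current: {curr_avg:.1f} days")
print(f"% increase in speed: {speed_increase:.0f}%")
```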
System Adoption Rate (AS)
Why we need to track this:
To measure how well team members are adopting new systems or tools. A high adoption rate indicates successful onboarding and training, while lower rates may highlight areas where additional support or follow-up is needed.
How to track:
Use Quizizz to create quizzes that assess knowledge of the system’s features and functionality
Monitor participation and scores to gauge comprehension and adoption
What Data Needed:
Completion rate of the Quizizz quiz
Average score on the quiz
Number of team members who have completed the quiz
Associated system or tool being adopted
Calculation:
Adoption rate = (Number of people who completed the quiz ÷ Total number of people expected to adopt the system) × 100
Average score (optional) = Total score ÷ Number of participants
Number of High Impact Opportunities Identified vs. Delivered (AS)
Why we need to track this:
To measure the effectiveness of identifying high-priority opportunities and ensuring they are successfully delivered. Tracking this ratio helps us assess if we're translating potential value into actionable outcomes and meeting organizational goals.
How to track:
Track the number of identified high-impact opportunities (based on strategic priorities or team input)
Track the number of those opportunities that are actually delivered (completed, implemented, or achieved)
Categorize opportunities by type: Process, Revenue-Generating, Cost-Saving
What Data Needed:
List of identified opportunities with priority status and type (process, revenue-generating, cost-saving)
Status of each opportunity (e.g., planned, in progress, delivered)
Expected vs. actual impact of each delivered opportunity
Calculation:
Overall Delivery Ratio = (Number of opportunities delivered ÷ Number of opportunities identified) × 100
Category Delivery Ratios (separate calculations for Process, Revenue-Generating, and Cost-Saving):
Process Delivery Ratio = (Delivered process opportunities ÷ Identified process opportunities) × 100
Revenue-Generating Delivery Ratio = (Delivered revenue-generating opportunities ÷ Identified revenue-generating opportunities) × 100
Cost-Saving Delivery Ratio = (Delivered cost-saving opportunities ÷ Identified cost-saving opportunities) × 100
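A sketch of the overall and per-category delivery ratios from a simple opportunity log; the categories, statuses, and counts are placeholders.

```python
from collections import Counter

# Hypothetical opportunity log: (category, status)
opportunities = [
    ("Process", "delivered"),
    ("Process", "in progress"),
    ("Revenue-Generating", "delivered"),
    ("Cost-Saving", "planned"),
    ("Cost-Saving", "delivered"),
]

identified = Counter(category for category, _ in opportunities)
delivered = Counter(category for category, status in opportunities if status == "delivered")

overall = sum(delivered.values()) / len(opportunities) * 100
print(f"Overall delivery ratio: {overall:.0f}%")

for category in identified:
    ratio = delivered[category] / identified[category] * 100
    print(f"{category} delivery ratio: {ratio:.0f}%")
```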
Average SLA (Time to Completion)
Why we need to track this:
To ensure timely delivery of tasks or projects, monitor compliance with service level agreements (SLAs), and identify areas where delays can be minimized. Tracking this helps us improve efficiency and meet internal/external expectations consistently.
How to track:
Track the time it takes to complete tasks, projects, or requests from start to finish
Compare against pre-defined SLAs or expected timeframes
What Data Needed:
Task/project start and end dates/times
SLA targets (expected completion time)
Task/project type or priority level
Calculation:
Average SLA = (Total time to completion for all tasks ÷ Number of tasks)
(Optional: Track by task/project type for more detailed insights)
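A minimal sketch of the average completion time plus an optional SLA-compliance check; the task timestamps and SLA targets are placeholders.

```python
from datetime import datetime

# Hypothetical tasks: (start, end, SLA target in hours)
tasks = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 4, 12, 0), 48),
    (datetime(2024, 6, 5, 10, 0), datetime(2024, 6, 10, 10, 0), 72),
]

durations = [(end - start).total_seconds() / 3600 for start, end, _ in tasks]
average_hours = sum(durations) / len(durations)
within_sla = sum(1 for hours, (_, _, target) in zip(durations, tasks) if hours <= target)

print(f"Average time to completion: {average_hours:.1f} hours")
print(f"Within SLA: {within_sla} of {len(tasks)} tasks")
```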
Number of Processes Automated
Why we need to track this:
To measure the scale of automation initiatives and their impact on efficiency. Tracking this metric helps highlight the progress in reducing manual tasks, freeing up resources, and streamlining operations.
How to track:
Track each process or task that has been automated
Record the date and details of the automation implementation
What Data Needed:
List of processes identified for automation
Status of each process (e.g., automated, in progress)
Time saved or resources freed up per automated process (optional)
Calculation:
Total number of processes automated
(Optional: Track the automation rate = Number of automated processes ÷ Total number of processes identified for automation)