Score moderation

Here's what happens in the background!
By now you should know more about our appraisal cycle and why we do it this way.
A major part of the appraisal process is moderation. Here's some info about how the moderation team works through each team member's appraisal.
Moderation team

1. Dan Hully - Founder
2. Sam Wilkinson - Co-founder
3. Jag Gill - Head of People
4. Anuli Vyas - People and Culture Manager


Moderation process
Moderation 📝
We go band by band - we start by looking at FinOps Associates at Band 1, then Band 1.5 and so on...
We discuss:
Feedback commentary
we want to make sure that feedback for everybody is complete and useful, and anything that warrants further discussion can be identified at this stage
Average feedback givers' score
this is the average of the scores that a team member's feedback givers have given to everyone they have scored
e.g. say the average score your feedback giver has given to their peers is 4.5/5 (quite generous!) and the average score across all feedback givers is 3.8/5 - in this case we adjust your score so that people with more 'generous' feedback givers don't end up with an inflated score, and vice versa, so you don't lose out on a good total score just because your feedback givers score lower than the average (see the sketch below)
Mix of feedback givers
we want to ensure that team members are getting feedback from peers across levels
Split between squad and partner work
this is to make sure that all aspects of a team member's work are looked at, and feedback is provided for both partner work and squad work equitably
We look out for:
Exceptions / special cases
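To make the generosity adjustment concrete, here's a minimal sketch in Python. It assumes the adjustment simply rescales each score by the ratio of the overall feedback-giver average to that particular giver's average - the exact method the moderation team uses isn't spelled out here, and the numbers just reuse the example above.

# Illustrative sketch only - assumes a simple rescaling; the moderation team's
# exact adjustment method isn't documented on this page.
def adjust_for_generosity(raw_score, giver_avg, overall_avg):
    """Dampen scores from 'generous' feedback givers and lift scores from stricter ones."""
    return raw_score * (overall_avg / giver_avg)

# A giver who averages 4.5/5 when the overall average is 3.8/5 is 'generous',
# so their scores get pulled down a little...
print(round(adjust_for_generosity(4.2, giver_avg=4.5, overall_avg=3.8), 2))  # 3.55
# ...while a stricter giver's scores get lifted.
print(round(adjust_for_generosity(3.0, giver_avg=3.2, overall_avg=3.8), 2))  # 3.56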
The numbers 🔢
I. Leapsome data we look at to arrive at the final score:
Average scores for technical and behavioural criteria from all feedback givers
Average feedback givers' score
Feedback given by the team member to the team members they've scored
if the feedback you've given is really good and has all the components of good feedback, you get a +0.1; if it could use improvement, a -0.1; and if it's just okay, there's no change
II. From the Finance squad: data to calculate GPM
We look at the margin we make for each role
We look at the billable team mix - that's the split between associate, manager and lead billable time across all our partners
Gross Profit Margin for the business overall - we're aiming to sustain this at a minimum of 50%
Based on the above data, the benchmark score is decided
III. From the Scheduling / Ops / Sales squads
Specifically for movement from FOA → FOIM and FOM → FOL, we want to make sure we have capacity for more leads in the business
Bottom line: anybody with a score equal to or above the benchmark moves up a band (see the sketch below)
The benchmark score is reviewed every quarter to sustain the GPM at a minimum of 50%
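As a rough illustration of how the Leapsome numbers combine with the quarterly benchmark, here's a minimal sketch. The scores, quality labels and benchmark value are made up for the example; the real inputs come from Leapsome and the Finance squad each quarter.

# Illustrative sketch only - example numbers; real scores come from Leapsome
# and the benchmark from the Finance squad's GPM data each quarter.
FEEDBACK_QUALITY_ADJUSTMENT = {"really good": +0.1, "okay": 0.0, "needs improvement": -0.1}

def final_score(avg_feedback_score, own_feedback_quality):
    """Apply the +/-0.1 adjustment for the quality of feedback the team member gave."""
    return avg_feedback_score + FEEDBACK_QUALITY_ADJUSTMENT[own_feedback_quality]

def moves_up_a_band(score, benchmark):
    """Anybody with a score equal to or above the benchmark moves up a band."""
    return score >= benchmark

score = final_score(3.9, "really good")        # 3.9 + 0.1 = 4.0
print(moves_up_a_band(score, benchmark=4.0))   # True - equal to the benchmark counts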
Progression 🔝
How availability is decided:
Based on data and guidance from the Scheduling / Ops / Sales squads, the number of available positions for the FinOps International Manager and FinOps Lead roles is finalised for each quarterly cycle
I. For FinOps Associate → FinOps International Manager (new)
Associates who have expressed interest in the new role will be asked to work on a task; details are shared closer to the appraisal cycle
II. For FinOps Manager → FinOps Lead
FinOps Managers who get a final moderated score higher than the benchmark set for that quarter are eligible for promotion
The moderation team then meets to discuss the feedback for the FinOps Managers who qualify. Essentially, the moderation team looks for FinOps Lead role-model behaviour to promote the right team member(s)
Discussion 🤼
We have a final leadership discussion to look at special cases:
This is to cover any team member at the last band for their role - FinOps Associate Level 3, FOM Level 2.5 and Lead Level 3
It also covers any non-FinOps team members
It also covers any team members for whom a case can be made to move them 2 bands instead of 1
And finally, any rare question-mark cases - in case the initial moderation left us with some doubts
Post-discussion, the results are shared with the team, along with a note from PeopleOps explaining the moderation and resulting scores
QT meetings are scheduled for the team by the PeopleOps squad to give everybody an opportunity to explore and interpret their feedback
That's it!
