Rituals for hypergrowth: An inside look at how YouTube scaled
Planning cadence

Setting and achieving meaningful goals with strategic planning every 6 months (26 weeks) and sprint planning every 6 weeks.
Ok, let’s get into the details a bit! This page drills down into how our planning cadence actually operated. As a reminder, we planned on a 6 month / 6 week schedule, focusing the former on aspirations and the latter on commitments.
We initially ran on the same schedule as Google, which is well-known for its quarterly OKR process. While there were some characteristics we really liked (e.g. setting quantifiable but aspirational goals), the OKR-centered quarterly cadence felt awkward to us. We had two challenges:
13 weeks felt like an awkward length: Google naturally planned on quarterly boundaries, but 13 weeks (i.e. a quarter) fell into an awkward middle ground for us. It was too long to really “commit” since we were in hypergrowth and a lot could change over 3 months, but too short to achieve meaningful strategic goals. It didn’t accomplish either objective well.
The aspirational 70% approach: A core characteristic of the Google OKR process is “aspirational goals” - i.e. you are only supposed to hit 70% of your target, and if you hit more, you aimed too low. While this is great in theory, at YouTube we had a heavily interconnected model between our teams, so OKRs were naturally used for cross-team coordination (Team A depends on Team B doing something and verifies that it will happen by ensuring it’s “on their OKRs”). This led to many tough situations: teams inappropriately used OKRs to track dependencies and ended up disappointed when a partner team only hit 70%. The aspirational approach meant that OKRs could not be used as contracts between teams.
So we gradually shifted to our own new model.

Overall: Strategic planning every 6 months (26 weeks) & sprint planning every 6 weeks

Our system focused on two independent processes:
Strategic planning occurred every 6 months (26 weeks) and was called H1 and H2 planning. There were two key outputs: (a) a list of Big Rocks and (b) project allocations. It took a while (generally 2-3 very focused weeks) and was very comprehensive (every team in the company was involved).
Sprint planning occurred much more frequently, on a ~6 week cycle. This was a set of true commitments that every team was making on what they would get done in the next 6 weeks, and it reflected dependencies between teams. It was meant to be a very short planning process (a couple of days), but I’d say we were still working on actually getting it to be that lightweight.
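To make the nesting concrete, here is a minimal sketch of how the two cadences could line up on a calendar (the start date is my own invention for illustration):

```python
from datetime import date, timedelta

# A minimal sketch of how the two cadences nest; the kickoff date is
# hypothetical. Four 6-week sprints cover 24 of a half's 26 weeks,
# leaving ~2 weeks of slack (plausibly absorbed by planning itself).
H1_START = date(2015, 1, 5)   # hypothetical H1 kickoff (a Monday)
SPRINT = timedelta(weeks=6)
HALF = timedelta(weeks=26)

sprints = [(H1_START + i * SPRINT, H1_START + (i + 1) * SPRINT)
           for i in range(4)]
for n, (start, end) in enumerate(sprints, 1):
    print(f"Sprint {n}: {start} -> {end}")
print(f"H2 planning would begin around {H1_START + HALF}")
```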

Strategic planning every 6 months → Big Rocks and project allocations

Every 6 months we did our strategic planning process. At the end of the process, we would publish two things:
Big Rocks: A list of the O(6) Big Rocks. These were meant to be the key focus points for the next 6 months — the goals to prioritize ahead of others. These were prioritized and described in a standard format, and then each Big Rock team would put together a “pitch presentation” that they would present in the kick-off all-hands, along with a summary “poster” outlining their aspirations for that rock.
Project allocations: A big spreadsheet with a row for each team, a link to their “two-pager,” and an allocation of headcount for the next 6 months. For reference, at YouTube, we had ~1000 engineers divided into ~70 teams (grouped into 12 “Investment Areas” which were then grouped into 4 “Focus Areas”). The output was critical to ensuring we actually had alignment, as it showed headcount allocated to teams and Big Rocks and the matrix of how resources were mapped between them. It was open to the whole company and people regularly referred to it.
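To give a feel for the shape of that spreadsheet, here is a toy sketch (team names, numbers, and Rock names are invented, and the real artifact was far richer). Each row reconciles a team's headcount against its Big Rock assignments:

```python
# Toy model of the allocation spreadsheet: a row per team, with headcount
# split between Big Rocks and the team's own Focus Area work. All numbers
# here are invented for illustration.
allocations = {
    "Mobile":    {"headcount": 40, "rocks": {"Music": 8, "Living Room": 4}},
    "Search":    {"headcount": 30, "rocks": {"Music": 3}},
    "ContentID": {"headcount": 25, "rocks": {}},
}

# Sanity check: no team can assign more people to Rocks than it has.
for name, team in allocations.items():
    assert sum(team["rocks"].values()) <= team["headcount"], f"{name} over-allocated"

total = sum(t["headcount"] for t in allocations.values())
on_rocks = sum(sum(t["rocks"].values()) for t in allocations.values())
print(f"{on_rocks}/{total} engineers on Big Rocks ({on_rocks / total:.0%})")
```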

The process of achieving these outputs was done both bottom-up and top-down.
Bottom-up process: Two-pager reviews
At the beginning of the process, we conducted a very intense exercise of comprehensively reviewing every team’s status and goals. Each team wrote a “two-pager” (often more like 4-5 pages), covering what they accomplished in the last six months and what they would like to focus on in the next six months. As we refined the process, I asked everyone to specifically call out three things:
3 things you want to be sure Shishir knows.
3 things that would make you move faster.
The {graph, chart, picture, etc} that you are obsessed with.

The last one (the “obsession” chart) turned out to be a very good mechanism for really understanding what’s driving the team. Sometimes teams would pick a single metric (“we want this line to cross value X”) while others would pick a motivating picture or sentence. As an example, one of my favorites was from our Video Knowledge team, who were tasked with learning everything possible about what is in a video. Their “obsession” was a video with a sentence underneath it that said “This is a video of a lonely gentleman walking alone on a Paris bridge last month, quietly contemplating his thoughts.” Their goal was to be able to use machine learning to get to that level of understanding of a video (sidenote: they did it!).
A key part of making this process work was very fast reviews. We had ~70 teams to get through, and each review could easily have taken 3 hours, but we wanted to get it all done in a week to minimize disruption. We did this by having every team publish their two-pagers a few days in advance, and then I (and the other team leaders) would pre-read all of them carefully. Each team review was ~10-15 minutes (allocated in ~3 hour blocks per Investment Area) and there were no presentations. I generally came to the meeting with a list of a few focused questions I had for the team, and we would just go through that list. I tried to make sure that, at minimum, I completely understood the 3 things they wanted me to know and the 3 things that would make them move faster, but I would often ask questions in other areas of their write-up.
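The back-of-the-envelope math on that week looks roughly like this (the per-review switchover buffer is my assumption):

```python
# Rough feasibility check on the review week. Team counts and review
# lengths come from the text above; the 3-minute switchover buffer is
# an assumption.
teams, minutes_per_review = 70, 12
investment_areas, block_minutes = 12, 180   # one ~3-hour block per area

total_hours = teams * minutes_per_review / 60            # ~14 hours of reviews
teams_per_block = block_minutes // (minutes_per_review + 3)
print(f"{total_hours:.0f} hours of reviews across {investment_areas} blocks")
print(f"up to {teams_per_block} teams fit per 3-hour block")
```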
Teams generally enjoyed these sessions and put a lot of effort into making sure the materials were well done, comprehensive, and concise. The write-ups were also a great forcing function for teams to write their goals and plan — and they were regularly referenced by team members, new employees, etc.
Personally, this was always one of my favorite weeks of the year since it was a great reminder of all the awesome work being done across the teams. Some teams were regularly in the limelight, but most (the vast majority) were not — so this was a good chance to make sure their work was well-understood and on track. The teams often commented on how much they valued the feedback and visibility that this process gave them.
To be fair, the process was a large time commitment, but for me it was well worth it, and regularly got positive reviews from the participating teams as well.
Top-down process
In addition to the bottom-up process, we also did a few things from a top-down perspective, including Big Rock brainstorming and driving alignment on resource allocation. In particular, we book-ended the bottom-up process with a set of early kickoff discussions and then closed with the top-down process making the final decisions.
Kickoff brainstorming meetings with tech-staff: Early in the process (~2 weeks before the two-pager reviews), we would hold a pair of brainstorming meetings with tech-staff.
In the first meeting, we would take stock of how we felt the last 6 months had gone and discuss different options for what to prioritize going forward. Then we’d ask everyone to add ideas for the next 6 months to a shared spreadsheet. We would do a pass of “voting” - everyone got a column in the spreadsheet and allocated their votes across the ideas. I would then quickly go through and pick a subset of the items for us to carry forward to the next meeting. The voting was seen as input, but everyone was comfortable that I could override it, generally by adding a couple of things that didn’t get votes but I still thought were important.
In the second meeting, we would see short one-slide presentations on each of the items we selected from the first meeting. These were generally pretty high-level — reframing a goal that might not have been clear in the last meeting, etc. After these presentations, we would vote again (same technique), and again I would select a set of items to carry forward using the voting as an input but not a rule.
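As a concrete illustration of that voting pass, here is a minimal sketch (people, ideas, and vote counts are all invented). The leader override is modeled explicitly, since the totals were input rather than a rule:

```python
from collections import Counter

# Each person gets a "column" (a ballot) and spreads votes across ideas;
# all names and numbers are invented for illustration.
votes = {
    "alice": {"Music": 3, "Latency": 2},
    "bob":   {"Music": 1, "Living Room": 4},
    "carol": {"Latency": 3, "Creators": 2},
}

tally = Counter()
for ballot in votes.values():
    tally.update(ballot)   # Counter adds the vote counts per idea

shortlist = [idea for idea, _ in tally.most_common(3)]
shortlist.append("Trust & Safety")  # leader override: unvoted but carried forward
print(shortlist)
```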
Cross-functional input meetings: We then held a series of meetings with each of the functions (sales, marketing, etc) to get their input. The leaders of each function came up with an unprompted list of potential priorities, and those priorities would be compared with the results of the tech-staff exercise. Most of the time, the deviations were small, since we tended to have heard this input on an ongoing basis, but occasionally this input resulted in changes.
Ending offsite: After all the two-pager reviews were complete, tech-staff came back together to make final decisions. At this point, a lot of data was clear — what our overall headcount budget should be, how much “organic” growth we needed to account for, etc. Before the meeting, I generally proposed a list of potential Big Rocks to discuss — usually informed by both our initial offsite results and what we saw in the two-pager reviews. For each of these Rocks, we would assign a “Rock Manager.” That Rock Manager’s task was to define the Rock more precisely and to enumerate the team (with names of individuals if possible) they felt they needed to be successful on the Rock. During the meeting, the focus was on aligning the resource allocation: the Focus Areas would each propose where they would like to allocate resources, and then we would look at what each of the Rock Managers requested. Inevitably, we would have too many Rocks to staff, but this helped us see where the true conflicts were. We held successive rounds of group discussion and breakout groups to try to reconcile these. In the end, we usually got to consensus, but if tie-breaks were needed, I would decide.
Final writeup: Even after the offsite where the decisions were made, there was still a round of write-up required to explain all of this to the rest of the organization. I would draft an overall mail covering the Rocks that were selected (and why), each of the teams would fill in the spreadsheet for headcount planning according to our decisions, and each of the Rock Managers would prepare their summary “poster.” Then we would send out the materials by email and have an all-hands meeting the next day where each of the Rock Managers presented their posters and took questions.

Sprint planning every 6 weeks → Commitments and dependencies

The goal of sprint planning was similar to a scrum-of-scrums model. There were two parallel processes:
Team-level sprint planning: Each team would do their own planning exercise and prioritize their list of goals for the 6 weeks. At minimum, each team needed to publish their list of goals (usually 4-5) so that the second process could happen.
Goals here were meant to be concrete and measurable. Because of the fast cadence, they were sometimes intermediate milestones rather than a full ship. Note that because they were also the unit of coordination, they often included coordination milestones — "Team A will get X ready so that Team B can do Y in the next sprint."
Dependency coordination: Then the teams would get together and discuss dependencies on each other. We used a (fairly complex) shared tracker for this.
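I obviously can't reproduce the real tracker, but here is a toy model of the core invariant it needed to enforce (team and goal names invented): every dependency must point at a goal some team has actually committed to.

```python
from dataclasses import dataclass, field

# Toy model of sprint commitments and cross-team dependencies; the real
# tracker was far more elaborate. Names here are invented.
@dataclass
class Goal:
    team: str
    name: str
    depends_on: list = field(default_factory=list)  # (team, goal_name) pairs

goals = [
    Goal("TeamB", "Launch feature Y", depends_on=[("TeamA", "Ship API X")]),
    Goal("TeamA", "Ship API X"),
]

# Every dependency must be backed by a real 100% commitment somewhere.
committed = {(g.team, g.name) for g in goals}
for g in goals:
    for dep in g.depends_on:
        assert dep in committed, f"{g.team}: dependency {dep} is not committed"
print("all dependencies are backed by real commitments")
```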

The key to this process is that these were true “100%” commitments, not the “70% aspirations” that were typical of Google OKRs. That’s it — the process was as lightweight as possible.

FAQ

If you want to contribute comments / thoughts, feel free to open the comment-enabled version of this doc.
Many companies do planning every quarter (13 weeks). What are the implications of splitting into 6 and 26 weeks?
There were some implications of splitting this process:
Strategic planning every 6 months gave us more time to set and achieve meaningful goals—and include every team in the company along the way.
Sprint planning more frequently allowed teams to narrow commitments to what they could realistically get done in 6 weeks and avoid using dependencies as rationale for aspirational goals.
Every team had an in-depth review at least every 6 months. This made sure everyone felt included, and for the leader, it meant that you could be totally plugged into what’s happening across the company.
Less resource angst. Because we only did macro resource allocation every 6 months, there was a lot less “random requests for headcount” outside of that cycle.

Of course, there were some downsides:
Not everything lines up to a six month cycle. So some discussions seemed premature (we weren’t quite ready to decide if X should be a Big Rock), and others seemed like overkill (this effort will only take 1 month, and yes it’s crazy important, but do we really want to spend a Big Rock on it?).
Required vigilant management. These processes could easily balloon into a lot of low value-add overhead if not vigilantly managed, with clear expectations and communication to teams. We tried to refine the process each cycle to cut out unnecessary parts and focus on what really mattered.

Why Big Rocks?
Lots has been written on this; Stephen Covey’s “big rocks in a jar” demonstration is the classic reference, and perhaps a picture is clear enough:
[image: the “big rocks” jar illustration]
How many Big Rocks is the right number?
This is very tricky and we went back and forth on this.
Our rule of thumb: ~6 Big Rocks which represent ~30% of all our resources.
More detail:
Let’s start with a simpler question: Are the Big Rocks comprehensive of what the team is working on? To ask differently, is every person on the team assigned to a Big Rock team?
I think this is one of the natural traps. The typical pattern is that team X’s effort Y gets left off the Big Rocks list. They raise the question “why not just add Y to the list? We’re going to work on it anyways, it doesn’t impact anyone else, etc - what’s the big deal?” I would strongly resist this urge.
So we developed some litmus tests for Big Rocks. There were 3 in particular, and we ordered them like this:
The top priorities - if we do nothing else, we should make sure these N things happen
Resolve conflict for competing resources - choosing the correct Big Rocks would help clarify priorities for some teams that were always caught in the middle of multiple priorities (e.g. the Clients team)
Non-goal: Be comprehensive of what teams were working on - this was least important, and actually antithetical to our model
With those rules in mind, we generally ended up with ~5-7 Big Rocks. Also, since we assigned “Big Rock teams,” we could gauge what % of the company was working on a Big Rock vs. focusing on the priorities for their functional area. This number was generally ~30-40%, meaning that 60-70% of the teams’ work was not covered by the Big Rocks.
Someone also told me: I think some people felt that the list was long enough that it didn't feel sufficiently opinionated -- I think I remember there being 9 one cycle, which made some feel like we couldn't say no. The fact that they only accounted for 30-40% of resources arguably exacerbated this -- because everyone knew there were tons of other things happening that weren't on the list.

Wait, the Big Rocks only covered 30-40% of your resources? Shouldn’t it be more?
No, I don’t think so. It’s very tempting to have “100% of your resources on your top 5 initiatives,” but in my experience this generally backfires. One person told me: “30-40% seems small. I would think you'd reserve 30-40% and commit the rest to the ‘big rocks’ or goals.”
The truth is that there are always components / teams that need to make progress, sometimes just to stay afloat. For example, just because we didn’t put a big emphasis on ContentID in one cycle didn’t mean that team could stand still: just to keep up with organic growth, they were going to have to rebuild the system.
So if you insist on the Big Rocks covering 100% of your initiatives, you’ll begrudgingly be drawn the opposite way: you’ll rephrase your Big Rocks to be comprehensive of things that are really not the top priorities, rather than use them to resolve conflicts.
Was this just tech or was it cross-functional?
We tried to make it cross-functional, but it was definitely tech-driven. This wasn’t desirable IMHO, just a product of how the process was developed and how Google worked. In particular, many of the YouTube cross-functional resources were a virtual matrix — e.g. we didn’t own our own sales team, it was spread throughout a 10k+ Google sales team.
If I were to do this again in a new organization, I would try to be more cross-functional.
Ownership / team model for Big Rocks? Do they get assigned a "Rock Manager”?
TLDR: Yes, and they probably shouldn’t (just) be someone on the leadership team.
For the first couple of cycles, we would pick Big Rocks but just let our normal leadership team (generally members of tech-staff) run them. This had the advantage that someone reporting to me was accountable for each Rock, but it didn’t seem to work well.
Then we all read the classic “Chicken and Pig” fable. It became very clear why this wasn’t working: we were assigning Chickens, but not Pigs.
So we shifted our model to ensure that every Big Rock had a Rock Manager, and in particular, that person was a “Pig” for that effort — i.e. that person didn’t own a lot of other things, and they felt not only invested but committed to making that Rock successful. The Rock Managers were generally the cross-functional leads for the most affected team (e.g. a PM lead, eng lead, UX lead, and sometimes a marketing lead, sales lead, etc).
What about the rest of the team working on the Big Rock if they were spread across teams? (aka “hoodie squad”)
As we got deeper into applying this philosophy, we not only assigned specific Rock Managers to Rocks, we also assigned a “Hoodie Squad” to support them.
The “hoodie squad” was defined as the set of people across multiple teams that were Pigs working on that Rock - it was so-named because of the litmus test “if you were to wear a hoodie with the name of the team you were on, which hoodie would you wear?”
This was tracked in a shared spreadsheet: there was a row per employee and an indicator of whether they were on a Big Rock hoodie squad (and in rare cases, whether they were on multiple). A few implications of this (see the sketch after this list):
No confusion on resource allocation: Before we did this, we would have situations where, for example, the Mobile team would say “sure we’ll sign up to have 2 people work on the Music Big Rock”, but never actually name the people. This forced actually naming them.
Allowed for true accounting: We could see how much effort a Big Rock was costing, and it cleared up FUD in discussions - e.g. “It feels like half the company is working on the Music Big Rock!” → “Nope, actually it’s 5%.” It also allowed us to clearly account for what % of overall resources were on Big Rocks vs. Focus-Area-led efforts — so we could hold ourselves to the litmus test that Big Rocks should account for ~30% of our work.
For individuals, it drove clarity: The worst feeling as an individual is to be torn across multiple teams. We would send these folks actual hoodies so they understood. So someone on the Mobile team would start wearing their Music hoodie, and symbolically / literally take it off when the Big Rock was finished and they returned to the Mobile team.
It identified resource bottlenecks and over-allocation: Our general rule was that people should be on zero or one hoodie team. Of course, there were exceptions. Sometimes there were special skillsets (e.g. UX Research was a common one) that had to stretch across Big Rocks, and in other cases it was specific individuals (there were a few engineers in particular who always seemed to be torn between multiple Big Rocks). In a given planning cycle, the number of YouTubers on multiple Hoodie Squads was generally <10 (out of 1000+ people), so less than 1%.
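A minimal sketch of that row-per-employee tracking (all names and assignments invented) shows how the checks above become mechanical:

```python
# Row-per-employee hoodie tracking: who is on which Big Rock squad, what
# share of the org is on Rocks, and who is stretched across squads.
# All data is invented for illustration.
hoodies = {
    "dana": ["Music"],
    "eli":  [],                        # Focus Area work only
    "femi": ["Music", "Living Room"],  # a rare multi-squad exception
}

on_rocks = sum(1 for squads in hoodies.values() if squads)
stretched = [person for person, squads in hoodies.items() if len(squads) > 1]
print(f"{on_rocks / len(hoodies):.0%} of people are on a Big Rock squad")
print("on multiple squads:", stretched)
```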

Developing this nomenclature was a key breakthrough in making cross-Focus-Area Big Rocks actually work.
Why do resource allocation as part of this process?
Some teams resist doing resource allocation as part of planning, trying to focus on “strategies over resources.” While I initially resisted too, I grew to like having resource allocation be part of this process, for a couple of main reasons:
It forced true prioritization - it was too easy to say that everything was a priority if there was no cost.
By allocating resources every 6 months, it actually prevented a lot of discussion on resource allocation the rest of the time. We kept a holdback of resources that we could allocate but for the most part, major resource requests just happened on the 6 month boundaries.
Drove clarity on ownership and assignment - especially with the Hoodie Squad approach (see previous question), it allowed teams and individuals to know exactly what they were accountable for.

Can you provide examples of what some of the Big Rock aspirations looked like?
I can't provide actuals since those are still at Google. But just to give you a sense of the types of aspirations:
Some were simple quantifiable metrics - "We will achieve a latency of X, or an RPM of Y, or a transcoding-time-frame of Z"
Some were more qualitative - "We will build the product to match this mock", "we will build something that users who like X say Y about it", etc.

And generally the aspiration poster had multiple sections showing different facets of the goal. It was like a science fair poster.
What constituted an investment area and how often did they change? Also, how often did you think about reshuffling resources and who made that decision?
As a reminder, I organized YouTube to have two primary levels of hierarchy:
4 Focus Areas structured around our 4 audiences: Viewer, Creator, Advertiser, Infrastructure
~12 Investment Areas which laddered into those focus areas. For example: The Viewer Focus Area divided into 4 Investment Areas - Clients (building all the frontends), Search & Discovery, Growth (handling onboarding, notifications, accounts, etc), and Knowledge (forming a foundational layer of knowledge about videos, channels, and users).

Investment Areas were very stable - they almost never changed, maybe once a year.

Why was sprint planning done in 6 weeks?
The limiting factor was the Apple app submission process - given its overhead, it was hard to run a cycle shorter than 6 weeks. I would have much preferred 4 if possible.
