Experiment PRD — Session-Level Diversity & Fatigue Control

A mock PRD for a real potential feature on Fortnite Discover.
Hi Epic / Fortnite people, when examining Fortnite Discover and comparing it to other game marketplaces like Roblox and Steam, I noticed a lack of diversity/novelty mechanisms in the recommendation system that would reduce fatiguing exposure to the same (genre of) content over and over (or maybe one is in place and I just can't find documentation or evidence of it online). I think this could be an area worth exploring for Discover, and I have drafted a plan for assessing the feasibility and impact of such a feature.

PRD — Reduce Repetitive Fatigue (Fortnite Discover)

1) Problem statement

Players sometimes get a browsing feed that feels “samey” within a single session (the same island again, many items from the same creator, or very similar themes in a row). This can cause fatigue and early exits. We want to gently increase variety within the session and see if players actually explore more, stay longer, and come back the next day—without harming overall discovery or creator fairness.

2) Background

Discover already optimizes over multiple days for attraction, engagement, and satisfaction.
What’s not clearly addressed is the moment-to-moment experience during a single browsing session.
Large content platforms often add deliberate variety to avoid fatigue. This is a focused test to validate that idea for Fortnite Discover.

3) Goals

Players play more different islands per browsing session.
Players spend more total minutes across the islands they play in that session (or at least do not spend less).
Players are more likely to continue after one play (keep browsing instead of quitting).
Players are as likely or more likely to return the next day.
No harm to overall discovery (how often islands get new players across the system) and no harm to fairness (exposure across genres, regions, and creator sizes).

4) Non-goals

We are not changing multi-day scoring rules.
We are not introducing paid promotions or mixed organic/paid rules.
We are not changing creator tools (thumbnails, titles, etc.) in this test.

5) Users and scenarios

Player: scrolls Discover, decides whether to open an island, sometimes returns to Discover to try another.
Creator: wants fair discovery; does not want their islands hidden by a blunt diversity rule.
Internal teams: PM, engineering, data, quality assurance, and creator ops need reliable logs, clear guardrails, and a quick readout.

6) Key definitions

Impression: an island appears in a player’s feed.
Play: the player clicks into an island and starts it.
Browsing session: a continuous period where the player browses Discover (resets after long idle or app close); one possible boundary rule is sketched after this list.
Bounce: the player starts an island but leaves very quickly (example: under one minute).
Repetitive feed: back-to-back impressions that are the same island, same creator, or very similar in theme.
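For illustration, here is a minimal Python sketch of one possible session-boundary rule, assuming a 30-minute idle timeout. The timeout value and the class and field names are placeholders for this sketch, not Discover's actual implementation.

```python
import time
import uuid
from dataclasses import dataclass, field

# Placeholder value: the real idle timeout is an open design choice.
IDLE_TIMEOUT_SECONDS = 30 * 60

@dataclass
class SessionTracker:
    """Tracks one player's browsing-session id (hypothetical sketch)."""
    session_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    last_event_ts: float = field(default_factory=time.time)

    def touch(self) -> str:
        """Call on every Discover event; returns the current session id,
        starting a fresh one after a long idle (app close would also reset)."""
        now = time.time()
        if now - self.last_event_ts > IDLE_TIMEOUT_SECONDS:
            self.session_id = uuid.uuid4().hex  # long idle => new session
        self.last_event_ts = now
        return self.session_id
```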

7) Key approach

Light touch: nudge variety without blocking truly relevant islands.
Player outcomes first: we ship only if behavior improves (more exploration, longer sessions, healthy return).
Fair to creators: watch exposure balance and do not widen unfair gaps.
Simple to operate: keep logic fast and easy to reason about.

8) Solution concept

We will test two simple controls, alone and together (both are sketched after this list):
Small cap on repeats within a session
In a span of 10 impressions, do not show the same island more than once.
In that same span, show the same creator at most twice.
Soft nudge toward variety
If the last few items were very similar (same creator or similar theme), gently lower their ranking and lift a different choice next.
The nudge is small and bounded so highly relevant items still appear.
Surfaces: start with the main Discover grid; exclude curated or paid rows for this first test.
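To make the two controls concrete, here is a rough Python sketch of both, applied as a re-ranking pass over already-scored candidates. The window size, caps, penalty value, and field names are all assumptions for illustration, not Discover's real parameters.

```python
from collections import Counter

WINDOW = 10            # span of recent impressions the cap looks at
MAX_ISLAND_SHOWS = 1   # same island at most once per span
MAX_CREATOR_SHOWS = 2  # same creator at most twice per span
NUDGE_PENALTY = 0.15   # small, bounded score reduction (placeholder value)

def rerank(candidates, recent):
    """Re-rank scored candidates against the most recent impressions.

    candidates: dicts with 'island_id', 'creator_id', 'theme', 'score'
    recent: the last <= WINDOW impressions already shown this session
    """
    island_counts = Counter(c["island_id"] for c in recent)
    creator_counts = Counter(c["creator_id"] for c in recent)
    recent_themes = {c["theme"] for c in list(recent)[-3:]}  # "last few items"

    reranked = []
    for c in candidates:
        # Hard cap: skip repeats of the same island / over-shown creators.
        if island_counts[c["island_id"]] >= MAX_ISLAND_SHOWS:
            continue
        if creator_counts[c["creator_id"]] >= MAX_CREATOR_SHOWS:
            continue
        # Soft nudge: a bounded penalty when the theme matches the last few
        # items, so a highly relevant island can still rank first.
        score = c["score"]
        if c["theme"] in recent_themes:
            score -= NUDGE_PENALTY
        reranked.append((score, c))

    reranked.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in reranked]
```

Variant 3 (cap plus smaller nudge) would run the same pass with a reduced NUDGE_PENALTY.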

9) Hypotheses

Players will play more different islands in a session with the caps or the nudge.
These sessions will last longer or at least not be shorter; players will more often come back to Discover to try another after one play.
Players who had a more varied session will be as likely or more likely to return the next day.
Overall discovery and fairness will not get worse.

10) Success metrics

Primary

Different islands played per browsing session (higher is better).
Total minutes played per browsing session (higher or flat is good).
Keep-browsing rate after a play (how often they return to Discover and try another) (higher or flat is good).
Next-day return (higher or flat is good).

Guardrails

System-wide new-player rate per impression (zoomed-out discovery health).
Fairness: exposure balance across genres, regions, and creator sizes.
Quality: fast exits under one minute and “not interested” clicks.

Mechanism checks

Within sessions: how often the same island or creator would have been repeated, and how much that repetition fell compared to the control group (see the measurement sketch below).
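As a sketch, the repetition measure could be computed per session from the impression log: run on control sessions it estimates how often the caps would have fired, and run on treated sessions it shows how far repetition actually fell. Field names are assumptions.

```python
def repetition_rate(impressions, window=10):
    """Share of a session's impressions that repeat an island already shown,
    or a creator already shown twice, within the trailing window."""
    repeats = 0
    for i, imp in enumerate(impressions):
        recent = impressions[max(0, i - window):i]
        island_repeat = any(r["island_id"] == imp["island_id"] for r in recent)
        creator_repeat = (
            sum(r["creator_id"] == imp["creator_id"] for r in recent) >= 2
        )
        if island_repeat or creator_repeat:
            repeats += 1
    return repeats / len(impressions) if impressions else 0.0
```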

Example thresholds for a ship decision

+5% or more different islands played per session,
Total minutes per session flat or higher,
Next-day return flat or higher,
No worse on system discovery and fairness.

11) Requirements

Functional

Track session boundaries reliably.
Apply repeat cap and variety nudge only to the chosen rows.
Keep a short memory of the last few impressions in the session.
Avoid “fighting” with editor-curated or paid rows (exclude them here).

Data / logging

Log: session id, impression (island, creator, theme/genre label, row/slot, region, platform, time), plays (start/end, first-time or returning), favorites/ratings, “not interested.”
Daily job to compute: per-session exploration, minutes, keep-browsing, next-day return, system discovery, fairness (a minimal aggregation sketch follows).
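A minimal sketch of the daily aggregation, assuming the logs land in a pandas DataFrame with hypothetical column names (session_id, player_id, island_id, minutes, date):

```python
import pandas as pd

def session_metrics(plays: pd.DataFrame) -> pd.DataFrame:
    """Per-session exploration and minutes from one day of play logs."""
    return plays.groupby("session_id").agg(
        distinct_islands=("island_id", "nunique"),
        total_minutes=("minutes", "sum"),
        total_plays=("island_id", "size"),
    ).reset_index()

def next_day_return_rate(plays: pd.DataFrame) -> float:
    """Fraction of day-one players who appear again on day two,
    given a log covering exactly two consecutive dates."""
    players_by_day = plays.groupby("date")["player_id"].apply(set).sort_index()
    day1, day2 = players_by_day.iloc[0], players_by_day.iloc[1]
    return len(day1 & day2) / len(day1) if day1 else 0.0
```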

Performance

Added ranking work must not noticeably slow the feed (target: negligible added latency).
Creator/theme similarity clusters should be pre-computed offline if needed; see the offline job sketch below.
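For instance, the offline job could reduce theme similarity to a precomputed cluster id per island, so the online nudge is a single dictionary lookup. This assumes island embeddings exist upstream; the library choice and cluster count below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def precompute_theme_clusters(island_ids, embeddings, n_clusters=50):
    """Offline: assign each island a coarse theme cluster id.
    The online ranker then compares cluster ids in O(1) per candidate."""
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(
        np.asarray(embeddings)
    )
    return dict(zip(island_ids, labels.tolist()))  # island_id -> cluster id
```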

12) Experiment design

Groups
Control: no change.
Variant 1: repeat cap only.
Variant 2: soft variety nudge only.
Variant 3: cap + smaller nudge.
Traffic
10% → 25% → 50% if safe (deterministic assignment is sketched at the end of this section).
Duration
At least 14 days (capture weekdays and weekends).
Stratification and monitoring
Watch results by region, platform, and genre.
Keep a small set of islands entirely out of the test to monitor ecosystem stability.
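Group assignment could be a deterministic hash, so a given player keeps the same group across sessions and across ramp steps. The salt and group names below are placeholders.

```python
import hashlib

GROUPS = ["control", "cap_only", "nudge_only", "cap_plus_small_nudge"]

def assign_group(player_id: str, ramp_pct: int,
                 salt: str = "discover_diversity_v1") -> str:
    """Deterministic bucketing: hash into 100 buckets; the first ramp_pct
    buckets are in the experiment, split evenly across the four groups.
    Raising ramp_pct (10 -> 25 -> 50) keeps existing players in place."""
    digest = hashlib.sha256(f"{salt}:{player_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    if bucket >= ramp_pct:
        return "not_in_experiment"
    return GROUPS[bucket % len(GROUPS)]
```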

13) QA plan and acceptance checks

Simulate sessions to confirm caps and nudges behave as intended (see the harness sketch at the end of this section).
Confirm consistent assignment (a given player/session stays in the same group).
Logging completeness checks (impressions, plays, session starts/ends).
Pre-launch dashboards: daily view of the four primary outcomes and the guardrails.
Gate to ramp traffic
All dashboards green for two days at 10% traffic.
No obvious fairness or discovery regressions.
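The cap check could be a small harness that streams synthetic impressions through the re-ranking sketch from section 8 and asserts the window invariants. Everything here is illustrative test scaffolding, not real QA tooling.

```python
import random

def simulate_cap_check(rerank_fn, n_sessions=1000, window=10):
    """Drive synthetic sessions through the re-ranker, then assert that no
    window of `window` impressions repeats an island or shows a creator
    more than twice."""
    for _ in range(n_sessions):
        pool = [{"island_id": f"i{random.randrange(30)}",
                 "creator_id": f"c{random.randrange(8)}",
                 "theme": random.choice(["combat", "parkour", "social"]),
                 "score": random.random()} for _ in range(50)]
        shown = []
        for _ in range(20):  # simulate 20 impressions per session
            ranked = rerank_fn(pool, shown[-window:])
            if not ranked:
                break
            shown.append(ranked[0])
        for i in range(max(0, len(shown) - window + 1)):
            span = shown[i:i + window]
            islands = [s["island_id"] for s in span]
            creators = [s["creator_id"] for s in span]
            assert len(islands) == len(set(islands)), "island cap violated"
            assert max(creators.count(c) for c in set(creators)) <= 2, \
                "creator cap violated"

# Example: simulate_cap_check(rerank) with the rerank sketch from section 8.
```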

14) Risks and mitigations

Hiding great content by mistake → the nudge is small and bounded; monitor system-wide discovery and per-slot diagnostics.
Skewed exposure → monitor exposure across genres/regions/creator sizes; pause if variance widens.
Latency → use a short session memory; pre-compute similarity offline.
Players feel forced → we’re not forcing weird content; we’re preventing back-to-back near-duplicates.

15) Open questions

What is the best span length for the repeat cap (10 impressions vs 12)?
How do we define “very similar” for theme (rule-based labels vs a lightweight embedding)? One embedding-based option is sketched below.
Do we need different settings for short sessions vs long sessions?
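If the embedding route wins out, “very similar” could be as simple as a cosine-similarity threshold on lightweight island embeddings; the 0.8 cutoff below is a placeholder that would need tuning.

```python
import numpy as np

def is_very_similar(emb_a: np.ndarray, emb_b: np.ndarray,
                    threshold: float = 0.8) -> bool:
    """Candidate definition of 'very similar' themes: cosine similarity of
    two island embeddings above a tuned threshold (0.8 is a placeholder)."""
    cos = float(np.dot(emb_a, emb_b) /
                (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return cos >= threshold
```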
