In-App Motion: New GenAI motion variants (MVS, TDMB) show +30% conversion and +30–40% retention vs. older GenAI stills; HQ motion clearly drives higher ARPU and engagement.
Picking 10 shows in Nov for in-app motion (in-house + agency)
50-Hour AutoAI no-QC: Shadow Slave’s AutoAI variant lifted activation (+30%) and ARPU (6×), while Torn Between Alphas saw mixed results.
Allocations are ~200 LDAUs, so we are waiting one more week to conclude;
Hypothesis for drop post H3 is that batch-specific character canvases are needed to avoid repetitive images and maintain relevance.
AutoAI Upgrades: Major revamp of the character-canvas flow (ethnicity, skin tone, aliases, cinematic tone, negative prompts) has improved photorealism and diversity;
Nano Banana + Seedream stabilization and Seedance video gen are under iteration to reach Midjourney-level authenticity and smoother motion.
Talent & New Tools: Film-school workshops yielded 8 strong trainees; external trainings with Google & Bytedance ongoing; ACD training program revamp is next key initiative to upskill our team.
In-app GenAI experiments
Manual GenAI results
MVS 10H (1H Motion + 9H Stills): Conversion is up 31% vs. older GenAI (Var A); retention uptick is between 30–40% (in-house team)
Takeaway: Replacing stills with HQ motion is adding value
10 shows in pipeline for November
WW: TAB, TAM, M3W
Fantasy: MVS, Shadow Slave, FLBM, Princess Ebony
D/R: TRA (beyond ep. 100), TGS, HIH
50H In-app GenAI results
Var A - Thumbnail; Var B - No-QC AutoAI for 50H on Shadow Slave (Fantasy) and Torn Between Alphas (Werewolf)
Shadow Slave: All metrics are up through H3 - activation (+30%), conversion (+18%), ARPU (6x)
Torn Between Alphas: Conversion is up 6%, but activation (-6%) and ARPU (-23%) are dropping; retention is mixed
Takeaways:
Need to increase allocations to build more confidence in results
Manually check episodes beyond 3H to see if user retention is dropping because of repetitive images
Hypothesis: Character canvas needs to be batch-specific (e.g. every 5–10H) to generate the highest-quality images as the story evolves and characters' looks change
AutoAI Updates
Three priorities:
Improve character canvas creation - On Track
Key problem: Characters generated by AutoAI are homogeneous - primarily Caucasian, with limited differentiation among characters (esp. female characters)
Changes we’ve made: Prompts now include ethnicity, skin tone, and cultural context; all aliases are unified under one title; prompts mandate a more cinematic and evocative tone; and richer examples and clearer negative prompts improve consistency.
Next steps: Refine prompts to reach Midjourney-level authenticity; Automate character canvas creation process
Stabilize image gen on Nano Banana + Seedream - Delayed
Key problem: Quality of images generated by Leonardo has hit a ceiling vs Midjourney; exploring Nano Banana + Seedream to generate HQ images at scale
Changes we’ve made: New universal prompt based on guidance from the Google + Bytedance teams; reworked editing flow so we generate once per scene and edit for 100% consistency.
Bytedance will cover best practices for using Seedream on Thursday, October 30th.
Next steps:
Follow-up sessions with Bytedance & Google; in discussions with OpenAI & Leonardo for upcoming training sessions.
Revamped ACD training plan to be ready by next week.
Appraisals: One-time exercise to bring ~25 tenured team members up to the median and retain the top 10% of talent who have spent over a year at Pocket FM
We’ll revisit comp on contract expiry for key team members - aligned with HR.
Next steps:
Growth paths for CDs / ACDs / VEs / SEs
Evaluate variable pay linked to performance during contract renewal (similar to writer contracts)
New Tools
Additions
Veo 3.1 - Promising. One-stop solution for cinematic motion generation featuring multi-shot support, enhanced lip-sync, and an extend-clip function. Currently the top pick, delivering the most consistent and production-ready results.
Kling 2.5 Turbo - Promising. High-quality motion and action generation with prompt-based camera control, multi-step direction, and built-in SFX for dynamic scene creation.
Grok Imagine V9 - Drop. Can only generate 6s motion outputs; works well with Midjourney images, with minimal distortion. Not as powerful as current video-gen models.