29th October, 2025

Summary

In-App Motion: New GenAI motion variants (MVS, TDMB) show +30% conversion and +30–40% retention vs. older GenAI stills; HQ motion clearly drives higher ARPU and engagement.
Picking 10 shows in Nov for in-app motion (in-house + agency)
50-Hour AutoAI no-QC: Shadow Slave’s AutoAI variant lifted activation (+30%) and ARPU (6×), while Torn Between Alphas saw mixed results.
Allocations are only ~200 LDAUs, so we are waiting one more week before drawing conclusions.
Our hypothesis for the post-H3 drop is that batch-specific character canvases are needed to avoid repetitive images and maintain relevance.
AutoAI Upgrades: Major revamp of the character-canvas flow (ethnicity, skin tone, aliases, cinematic tone, negative prompts) has improved photorealism and diversity;
Nano Banana + Seedream stabilization and Seedance video gen are under iteration to reach Midjourney-level authenticity and smoother motion.
Talent & New Tools: Film-school workshops yielded 8 strong trainees; external trainings with Google & Bytedance ongoing; ACD training program revamp is next key initiative to upskill our team.

In-app GenAI experiments

Manual GenAI results

MVS 10H (1H Motion + 9H Stills): Conversion is up 31% vs. older GenAI (Var A); retention uptick is 30-40%. [In-house team]
Next steps: Extend motion to 10H and stills to 40H to match new audio
TDMB XH (10H Motion): The variant with motion from Ep. 6-45 delivers a 20% uplift in ARPU and a 40% uplift in conversion. [Agency team]
Takeaway: Replacing stills with HQ motion is adding value
10 shows in pipeline for November:
WW: TAB, TAM, M3W
Fantasy: MVS, Shadow Slave, FLBM, Princess Ebony
D/R: TRA (beyond ep. 100), TGS, HIH

50H In-app GenAI results

Var A - Thumbnail; Var B - No-QC AutoAI for 50H on Shadow Slave (Fantasy) and Torn Between Alphas (Werewolf)
Shadow Slave: All metrics are up through H3 - activation (+30%), conversion (+18%), ARPU (6x)
Torn Between Alphas: Conversion is up 6%, but activation (-6%) and ARPU (-23%) are down; retention is mixed
Takeaways:
Need to increase allocations to build more confidence in results
Manually check episodes beyond 3H to see if user retention is dropping because of repetitive images
Hypothesis: The character canvas needs to be batch-specific (e.g., regenerated every 5-10H) to keep image quality high as the story evolves and characters change appearance
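The batching hypothesis above can be sketched in a few lines. This is a hypothetical illustration, not our pipeline: the 5-hour batch size and the `batch_for_hour` / `canvas_batches` helpers are assumptions chosen to show how episode hours would map onto per-batch character canvases.

```python
# Hypothetical sketch: split a show's episode hours into canvas batches so each
# 5-hour span of the story gets its own regenerated character canvas.
BATCH_HOURS = 5  # assumed batch size; the memo suggests every 5-10H

def batch_for_hour(hour: int) -> int:
    """Return the canvas batch index for a given episode hour (1-indexed)."""
    return (hour - 1) // BATCH_HOURS

def canvas_batches(total_hours: int) -> dict[int, list[int]]:
    """Group all episode hours into batches that share one character canvas."""
    batches: dict[int, list[int]] = {}
    for hour in range(1, total_hours + 1):
        batches.setdefault(batch_for_hour(hour), []).append(hour)
    return batches
```

For a 12-hour show this yields three batches (hours 1-5, 6-10, 11-12), i.e., three canvas regenerations instead of one static canvas for the whole run.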

AutoAI Updates

Three priorities:
Improve character canvas creation - On Track
Key problem: Characters generated by AutoAI are homogeneous - primarily Caucasian, with limited differentiation among characters (esp. female characters)
Changes we’ve made: Prompts now include ethnicity, skin tone, and cultural context; unify all aliases under one title; mandate a more cinematic, evocative tone; and add richer examples and clearer negative prompts for consistency.
Results: Characters look more photorealistic and diverse
Next steps: Refine prompts to reach Midjourney-level authenticity; Automate character canvas creation process
Stabilize image gen on Nano Banana + Seedream - Delayed
Key problem: Quality of images generated by Leonardo has hit a ceiling vs Midjourney; exploring Nano Banana + Seedream to generate HQ images at scale
Changes we’ve made: A new universal prompt based on guidance from the Google and Bytedance teams; a revised editing flow so we generate once per scene, then edit for 100% consistency
Results: Mixed feedback on AutoAI output vs. manual gen
Shots are homogeneous (mostly medium shots and close-ups); character consistency breaks when source images come from Midjourney; occasional errors where photorealism breaks
Next steps: Iterations to fix these problems
Improve video gen on Seedance - Delayed
Key problem: Video gen is coming across as GIF-like; not yet smooth and seamless
Next steps: Test with a 30 IPM asset; review prompt structure; evaluate first-frame/last-frame integration to generate longer scenes
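The character-canvas fields described under priority 1 (ethnicity, skin tone, cultural context, a single unified alias, negative prompts) could be captured in a small template. A minimal sketch, assuming illustrative field names and an assumed `build_prompt` helper; this is not our actual canvas schema, and the `--no` negative-prompt syntax is just one common convention:

```python
# Hypothetical character-canvas template reflecting the prompt changes above.
# Field names, the example data, and the "--no" syntax are assumptions.
from dataclasses import dataclass, field

@dataclass
class CharacterCanvas:
    canonical_name: str                # single title unifying all aliases
    aliases: list[str]
    ethnicity: str
    skin_tone: str
    cultural_context: str
    negative_prompts: list[str] = field(default_factory=list)

    def build_prompt(self) -> str:
        """Assemble a cinematic, evocative image-gen prompt for this character."""
        positives = (
            f"Photorealistic cinematic portrait of {self.canonical_name}, "
            f"{self.ethnicity} with {self.skin_tone} skin tone, "
            f"{self.cultural_context}, dramatic lighting, film still"
        )
        negatives = ", ".join(self.negative_prompts)
        return f"{positives} --no {negatives}" if negatives else positives
```

Making these attributes explicit, per character, is what prevents the "primarily Caucasian, low-differentiation" failure mode: the model is never left to fill in ethnicity or skin tone by default.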

Talent Density Initiatives - Training, Film School Workshop, Appraisals

2-week filmmaking workshop: Of the 20 recent graduates who joined, 4 had a film-school background and 16 were freelancers.
We liked 8 candidates’ final assignments: 3 from film schools (75% hit rate) and 5 freelancers (~30% hit rate)
Takeaways:
The 2-week program alone doesn’t yet make them ready to deliver on par with folks who have GenAI experience
We need to give them a 1-2 month internship and convert those who adapt to our workflows
External training: Google’s session 1 covered prompting basics; Bytedance will cover best practices for using Seedream on Thursday, October 30th.
Next steps:
Follow-up sessions with Bytedance & Google; In discussions with OpenAI & Leonardo for upcoming training sessions.
Revamped ACD training plan to be ready by next week.
Appraisals: A one-time exercise to bring ~25 longer-tenured team members to median compensation and retain the top 10% of talent who have spent over a year at Pocket FM
We’ll revisit comp on contract expiry for key team members - aligned with HR.
Next steps:
Growth paths for CDs / ACDs / VEs / SEs
Evaluate variable pay linked to performance during contract renewal (similar to writer contracts)

New Tools

Additions
Veo 3.1 - Promising: One-stop solution for cinematic motion generation featuring multi-shot support, enhanced lip-sync, and an extend-clip function. Currently the top pick, delivering the most consistent and production-ready results.
Kling 2.5 Turbo - Promising: High-quality motion and action generation with prompt-based camera control, multi-step direction, and built-in SFX for dynamic scene creation.
Sora 2 - Mid: Instant power-start motion generation; highly efficient and scalable, but there are concerns over cost-effectiveness and character consistency.
Wan 2.5 - Mid: Produces solid motion quality but requires heavy, detailed prompting.
Grok Imagine V9 - Drop: Can only generate 6-second motion outputs; works well with Midjourney images with minimal distortion, but not as powerful as current video-gen models.
Detailed Reviews
Next up for testing
Image gen: Meta AI vs Midjourney (image quality)
Video gen: Higgsfield Popcorn (video workflows)
Video editing: Shotstack (timeline automation), JSON2Video (JSON/CSV video rendering), Plainly (template-based editing), Creatomate (CSV-driven stitching & overlays)
