You want fancy?
→ As you will see in the chapters below, most clients would rather identify their most pressing business questions and get answers instantly than spend money on pretty dashboards ;). But fancy is possible, in Tableau for example, and I have built such designs with client data before. Domo is more limited in its design options, and Looker Studio sits somewhere in between. The most critical driver for choosing a visualization tool should always be which business questions you need to answer and how much data volume will flow through it.
But what about PowerBI, Looker, AgencyAnalytics...
... Apache Superset, Dash, Grow, Chart, Google Sheets, Excel, QlikSense, Sisense, ThoughtSpot, Metabase, GoodData, Klipfolio, ... and the other 50 tools on the market?
They all work fairly similarly from a data engineer's and a visualization expert's perspective:
We gather user stories, define dimensions and metrics, apply best-practice chart types, and consult on design. The real difference lies less in what we build and more in the ecosystem the tool plugs into.
It often boils down to preference. For Microsoft-heavy enterprises, PowerBI integrates naturally, though we’ve seen use cases where Tableau was the preferred choice. Outside the Microsoft bubble, it tends to be a less cost-effective, less flexible option. Looker Studio is the best free tool on the market, with some surprisingly strong features, but it quickly hits limits with governance and data scale. Open-source and all-in-one platforms can look attractive, but they increase lock-in risks, which is especially critical for regulated industries like healthcare and wellness e-commerce.
At LazyAnalyst, we have worked with many of them and concluded that even with new players on the market all the time, the tools that consistently deliver reliable results across our client base are Looker Studio for e-commerce businesses up to $3M gross annual revenue, Tableau for $3-20M, and Domo for anybody above that.
Sales Performance
What’s our revenue year over year and month over month?
Domo version
Executive Summary
Year-to-date revenue is $130.4K, which is 195.8% higher than last year: very strong year-over-year growth. Monthly revenue trends show that while some months significantly outperform last year, there are still dips in certain periods.
For the current month, revenue is $26.4K so far, which is down 10.5% compared to last month. Daily revenue trends show that while cumulative growth continues steadily, day-to-day revenue has been a bit weaker than the prior month.
Recommendations
- Celebrate year-over-year growth: Nearly doubling revenue year-on-year is a strong result and signals that the overall strategy is working.
- Address month-to-month dips: Investigate why this month is slightly down. Check campaign timing, seasonality, or product availability.
- Smooth daily revenue swings: Explore how to reduce volatility (e.g., via promotions spread across the month instead of clustered).
- Plan for sustainability: With such strong annual growth, focus on maintaining momentum without over-relying on a few peak months.
How well does our sales team do?
Tableau version
Executive Summary
The sales team made 120 outbound calls in July, down 22% from the previous period, but nearly every call connected (98% contact rate). This led to 118 live conversations with a 30% close rate, slightly down compared to before. In total, 36 deals were closed, but that’s 20% fewer than last time and only 36% of the monthly goal (100 sales).
Performance by rep shows clear differences: Jasmine is a star performer, making more calls and closing at a strong 39% rate in June, while Caleb makes fewer calls and closes at a lower rate (~6–7%). The coaching matrix places Jasmine in the top-right quadrant (high effort + high success), while Caleb falls in the low effort + low success quadrant, signaling the need for coaching and support.
Recommendations
- Recognize and retain top talent: Jasmine is a high performer. Ensure recognition, rewards, and knowledge sharing with the team.
- Coach underperformers: Caleb needs focused training on call techniques, motivation, or role alignment to improve both call volume and close rate.
- Boost overall call activity: Call volume is trending down. Scaling outreach while maintaining quality could lift results.
- Tighten pipeline management: With strong contact rates, the opportunity is there; focus on converting more conversations into sales.
- Revisit targets: If volume continues to dip, either adjust goals realistically or invest in new tools/resources to help the team scale.
How well did each event perform?
Looker Studio version
→ How is our event performance increasing over time?
→ How well are we converting event attendees?
Executive Summary
Event performance has been mixed across the year. Revenue peaked in June 2022 at $166K, with both strong attendance and conversion. Other months delivered significantly less ($6K–$38K). Attendee rates remain high, with 92% attendance in March and 74% in September, showing strong event engagement. Conversion from attendees to sales, however, has declined over time: 34% in March vs. 28% in June and 33% in September. The interview-to-sales conversion also dipped from 90% in March to 65% in September, suggesting reduced effectiveness in later events.
Recommendations
- Replicate high-performing events: June’s revenue spike indicates best practices worth repeating (content, offers, follow-up strategy).
- Tighten conversion process: While attendance is strong, fewer attendees are turning into buyers. Strengthen sales follow-up, improve offers, or refine targeting.
- Focus on quality over quantity: September had healthy attendance but weaker conversions; prioritize deeper engagement with fewer high-quality leads instead of just filling seats.
- Track interview quality: Declining interview-to-sales conversions suggest the need to reassess how sales conversations are being conducted or how qualified attendees are before interviews.
Are we hitting our sales call targets?
Looker Studio version
Executive Summary
Between Sept 1–20, 2022, the team made 192 calls, which led to 34 meaningful conversations and 13 new bookings. That’s a call conversion rate of 18% and a booking conversion rate of 38%. On average, reps are making 5.6 calls per hour, generating about 1 conversation per hour. While calls and conversations are happening, very few are translating into verified or confirmed bookings, and no sales have been closed yet.
From Typeform submissions, 19 total forms were completed, with 17 qualified (89%), showing strong lead quality. However, almost none progressed into booked sessions or sales. Paid Typeforms contributed the majority of leads, while organic Typeforms had fewer completions but a slightly higher qualification rate.
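For reference, the two conversion rates quoted above are simple ratios of the dashboard's raw counts; a minimal sketch:

```python
# Figures from the dashboard: calls -> meaningful conversations -> bookings.
calls, conversations, bookings = 192, 34, 13

call_conv = conversations / calls        # share of calls that became conversations
booking_conv = bookings / conversations  # share of conversations that became bookings

print(f"call -> conversation: {call_conv:.0%}")     # 18%
print(f"conversation -> booking: {booking_conv:.0%}")  # 38%
```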
Recommendations
- Improve handoff from calls to bookings: Strong qualification rates mean leads are solid, but the drop-off before booking and sale suggests gaps in scheduling or follow-up.
- Tighten the booking process: Simplify confirmations and reduce friction so more qualified leads actually commit to appointments.
- Review rep efficiency: With only 18% call-to-conversation conversion, scripts or targeting may need refinement to increase efficiency.
- Bridge form-to-sale gap: Leads from Typeforms are high quality; align sales follow-up more closely with these submissions to convert them into revenue.
- Track reasons for no-shows: Capture data on why leads fail to book or cancel. These insights can shape better retention tactics.
How do our two bestselling products fare across the regions?
Tableau version
Executive Summary
Neither of the two bestselling products - Turmeric Chai Latte and Cacao & Cardamom - consistently hits the weekly sales target of 12 units in any region. The strongest performance is in Greater London, where both products sell above average compared to other regions, followed by the South West and Scotland. Other regions like Wales, East Midlands, and Yorkshire & Humber show much lower sales volumes.
Overall, sales are spread thin across regions, with no standout market consistently driving the highest volumes.
Recommendations
- Strengthen top-performing regions: Focus marketing and promotions in Greater London, South West, and Scotland to amplify sales where there is already stronger traction.
- Boost weak regions: Investigate why regions like Wales and the East Midlands underperform. Is it distribution, awareness, or local preferences?
- Test regional campaigns: Tailor promotions by region to lift sales, rather than applying one-size-fits-all offers.
- Reassess sales targets: If no region is close to the 12-unit weekly goal, either adjust the target realistically or design strategies to aggressively push volume in high-potential areas.
Where do we sell the most?
Tableau version
Executive Summary
Sales are concentrated heavily in a few key states, led by New York ($1.29M) and New Jersey ($782K). Other notable states include Florida, California, Ohio, and Pennsylvania, but their sales volumes are much smaller in comparison.
At the city level, Brooklyn, NY alone generates $732K, making it the single biggest city contributor. Other strong cities include Lakewood, NJ and Monsey, NY, though their totals are far lower. Many other states and cities contribute only small amounts, showing a sharp concentration of sales in the Northeast.
Recommendations
- Double down on the Northeast: With New York and New Jersey driving the bulk of revenue, invest more in marketing, partnerships, and distribution in this region.
- Explore expansion in top secondary states: Florida, California, and Ohio show traction and could be scaled further with localized campaigns.
- Broaden geographic reach: Sales are highly concentrated. Test growth strategies in underperforming states to reduce over-reliance on NY/NJ.
- Target city hotspots: Brooklyn is an outlier performer; replicate what works there (channels, promotions, or demographics) in other high-potential urban areas.
What’s our sales trend among all products?
Tableau version
Executive Summary
Sales have grown steadily from 2008 through 2017, with peaks in later years. The overall trend shows a consistent upward trajectory, though there are noticeable seasonal spikes, likely tied to specific campaigns or seasonal demand.
Breaking sales down by product category, a few categories (such as Category1, Category2, and Category3) show consistent sales momentum over the years, while others have more sporadic or declining activity. This suggests that while the portfolio overall is healthy, not every product is contributing equally to growth.
Recommendations
- Lean into consistent performers: Continue investing in product categories with stable growth. These are the backbone of long-term revenue.
- Revisit underperforming products: For categories with flat or declining trends, review marketing, pricing, or consider phasing them out if ROI is weak.
- Capitalize on seasonality: Plan inventory, campaigns, and promotions to take advantage of the recurring sales spikes.
- Diversify growth sources: Explore how to replicate the success of top categories across weaker ones, potentially through bundling or cross-promotion.
What’s our average order value over time?
Domo version
Executive Summary
The average order value over the tracked months is $19.75 overall. AOV peaked between Nov and Jan at around $24–$26 per order, before declining sharply by March. Customer counts follow a similar seasonal trend, with the largest number of customers in December, which coincides with the highest spending per order.
This indicates that holiday or seasonal peaks (Nov–Dec) not only bring in more customers but also higher-value orders, while early spring sees both order volume and value decline.
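The AOV metric itself is just revenue divided by orders per period. A minimal sketch, with made-up monthly figures (only the formula mirrors the dashboard):

```python
# AOV per month = revenue / orders. The numbers below are invented
# for illustration; they are not the client's figures.
monthly = {
    "Nov": {"revenue": 52_000, "orders": 2_000},
    "Dec": {"revenue": 78_000, "orders": 3_000},
    "Mar": {"revenue": 24_000, "orders": 1_500},
}

aov = {m: v["revenue"] / v["orders"] for m, v in monthly.items()}
for month, value in aov.items():
    print(f"{month}: ${value:.2f} per order")
```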
Recommendations
- Capitalize on seasonal peaks: Since AOV and customer count are strongest in Nov-Dec, build campaigns that maximize this seasonal momentum.
- Lift off-season performance: Target promotions or bundles in low months (Feb-Mar) to sustain AOV outside peak season.
- Segment customer behavior: Analyze whether holiday shoppers differ from year-round customers. Tailor offers to retain high-value seasonal buyers.
- Monitor early signals: If AOV starts declining earlier than usual, adjust pricing or promotional strategies to maintain margins.
Paid Ads Performance
What is our live monthly ad budget pacing?
Looker Studio version
Executive Summary
Ad spend is far behind planned pacing. Against a monthly budget of £30,000, only £1,420 has been spent so far (5%), leaving £28,580 unspent. The daily spend target was £1,071, but the actual daily spend is averaging only £68, putting pacing at -94% behind plan.
Looking at the current month’s budget in dollars, the situation is inverted: $8,215 budgeted, with $1,474 already spent (118%), leaving the campaign overspent by $1,515. Average daily pacing is more than 1,100% above plan, signaling a mismatch between planned and actual campaign delivery.
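Straight-line budget pacing, as used above, compares actual spend to the fraction of the budget that should have been spent by now. A sketch with the GBP figures; the day counts are assumed for illustration:

```python
# Straight-line pacing: how far ahead/behind plan is spend to date?
budget = 30_000                         # monthly budget (GBP)
days_elapsed, days_in_month = 21, 28    # assumed position in the month

planned_to_date = budget * days_elapsed / days_in_month
actual_to_date = 1_420                  # spend so far, from the dashboard

pacing = actual_to_date / planned_to_date - 1
print(f"Pacing vs plan: {pacing:.0%}")  # roughly -94% with these day counts
```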
Recommendations
- Fix pacing controls: Spending is swinging between extreme under-delivery and overspend. Adjust campaign pacing rules or daily caps to keep delivery aligned.
- Audit campaign setup: Double-check campaign dates, budget allocation, and whether dummy/test data is skewing results (noted in the dashboard).
- Reallocate budget: If some ad groups or platforms are underspending while others are overspending, redistribute funds to balance performance.
- Set automated safeguards: Implement alerts or automated budget controls to avoid future overspend situations.
How much engagement are our campaigns driving?
Looker Studio version
→ Are our campaigns converting?
Executive Summary
Campaigns generated 29,231 impressions with a cost per thousand (CPM) of £49. This resulted in 251 clicks, giving a click-through rate of 1%, which is on the lower side for typical ad benchmarks.
However, despite impressions and clicks, no conversions were recorded. This means cost per conversion, conversion value, and return on ad spend (ROAS) all remain at zero. The campaigns are not currently generating measurable sales or leads.
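The ratios above follow the standard paid-media formulas. A sketch, deriving total spend from the reported £49 CPM (an assumption, since total spend isn't shown on the page):

```python
# Standard paid-media ratios from raw impressions/clicks/conversions.
impressions, clicks, conversions = 29_231, 251, 0
spend = impressions / 1000 * 49   # implied by the GBP 49 CPM (assumption)

ctr = clicks / impressions        # click-through rate
cpm = spend / impressions * 1000  # cost per thousand impressions
cpc = spend / clicks              # cost per click
print(f"CTR {ctr:.2%}, CPM GBP {cpm:.0f}, CPC GBP {cpc:.2f}")
# With zero conversions, cost per conversion and ROAS are undefined/zero.
```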
Recommendations
- Review targeting and messaging: A low CTR suggests ads may not be compelling or reaching the right audience. Test new creatives, audiences, or offers.
- Check landing pages: With clicks but no conversions, the issue may be poor landing page experience, unclear CTAs, or technical tracking errors.
- Audit tracking setup: Ensure conversion tracking pixels/tags are properly firing; the zero conversions may reflect setup issues rather than actual performance.
- Reallocate spend: If fixes don’t improve performance quickly, pause underperforming campaigns and redirect spend to proven channels.
What are our best and worst performing campaigns?
Looker Studio version
Executive Summary
The Facebook ad campaigns for Centre Open From 12th April are performing well based on click-through rate.
The first ad delivered 6,004 impressions, generating 447 clicks, with a CTR of 7.4%. The second ad delivered 12,903 impressions, generating 875 clicks, with a CTR of 6.8%. Both campaigns achieved a very low cost per click of just £0.04, showing they are cost-effective at driving engagement.
Recommendations
- Double down on winning creatives: Both ads have strong CTRs above typical benchmarks; scale spend while maintaining CPC efficiency.
- Test variations: Since performance is similar, experiment with creative tweaks (imagery, copy, audience) to see if CTR can be pushed even higher.
- Track downstream conversions: Engagement is strong, but ensure these clicks are actually turning into sales or sign-ups, not just cheap traffic.
- Optimize budget allocation: With such low CPC, these campaigns could be prioritized in the budget over less efficient ones.
Which of our video ads performed best across all KPIs?
Tableau version
Executive Summary
Among the top 20 videos ranked by minutes watched, the clear leader is “ImiFMab4-eY” with 1,164 minutes watched and over 6,000 views, making it the strongest performer overall. Other notable videos include nvGxwMkZKnk (429 minutes, 10,068 views) and pWrquJV2G3G (415 minutes, 9,391 views).
Engagement quality varies: while some videos attract high views, their watch duration per view stays short (~1 minute average). Playlist activity also highlights which videos are being saved for rewatching: “a2otfuakFVQ” and “tla3ditC0dA” had thousands of playlist adds, signaling strong long-term interest.
Recommendations
- Scale top performers: Promote high-retention videos like “ImiFMab4-eY” more widely since they combine both strong views and watch time.
- Analyze playlist trends: Videos with many playlist adds (e.g., “a2otfuakFVQ”) are highly valued by audiences. Reuse their style or content themes.
- Lift underperformers: Many videos show 0 minutes watched despite thousands of views. These may have weak intros or misleading thumbnails. Rework opening hooks to hold attention.
- Experiment with formats: Test whether shorter or longer content impacts watch time, since most videos average just 1 minute per view.
Subscription Performance
What are our net subscriptions this month?
Looker Studio version
→ Have we improved subscriptions over the last 90 days?
→ When do people typically cancel in their cycle?
Executive Summary
For October 2020, there were 578 new sales and 577 cancellations, leaving net subscriptions flat for the month. Subscription sales generated £141,387, up 72.7% from last year, contributing to a YTD total of £2M.
Membership levels currently sit at 16,763, down 20.7% year-over-year, with about 4,501 accounts frozen. On average, customers stay subscribed for 18.3 months, and cancellations show noticeable spikes mid-month. Average yield per customer in the last 30 days is £19.75.
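The flat-month call rests on simple arithmetic; the prior-year revenue figure below is implied by the reported +72.7% and is not itself on the dashboard:

```python
# Net change = new sales - cancellations (October 2020 figures).
new_sales, cancellations = 578, 577
net = new_sales - cancellations
print(f"Net subscriptions this month: {net:+d}")  # +1, effectively flat

# Prior-year revenue implied by "GBP 141,387, up 72.7%" (assumption).
this_year = 141_387
last_year = this_year / 1.727
yoy_growth = this_year / last_year - 1
print(f"YoY growth: {yoy_growth:.1%}")
```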
Recommendations
- Address churn: With cancellations nearly equal to new sales, focus on retention programs (loyalty perks, re-engagement campaigns) to stabilize net growth.
- Analyze cancellation spikes: Peaks mid-month suggest billing or service-cycle related drop-offs. Investigate to reduce predictable churn.
- Thaw frozen accounts: With over 4,500 frozen memberships, targeting reactivation campaigns could quickly boost active membership numbers.
- Build on sales growth: While sales revenue is strong versus last year, the benefits are being offset by cancellations; balancing the two is key.
When do customers cancel in their lifecycle?
Domo version
Executive Summary
Most cancellations happen very early in the customer lifecycle. The highest spikes occur within the first few days after joining, with over 100 cancellations on day 1 alone. A second noticeable peak appears around day 20–30, before tapering off gradually over the following months.
This suggests that cancellations are clustered at the very start of a subscription (possibly linked to trial periods or onboarding issues) and again shortly after the first billing cycle. After that, cancellations continue but at much lower, steadier levels.
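This kind of lifecycle chart is built by bucketing each cancellation by days since signup. A minimal sketch with invented records:

```python
from collections import Counter
from datetime import date

# (signup, cancellation) pairs; invented for illustration only.
subs = [
    (date(2020, 1, 1), date(2020, 1, 2)),    # cancelled on day 1
    (date(2020, 1, 3), date(2020, 1, 4)),    # cancelled on day 1
    (date(2020, 1, 5), date(2020, 2, 1)),    # ~day 27, first-billing window
    (date(2020, 1, 10), date(2020, 2, 5)),   # ~day 26
]

# Histogram: cancellations per "days since joining" bucket.
days_to_cancel = Counter((end - start).days for start, end in subs)
for day in sorted(days_to_cancel):
    print(f"day {day}: {days_to_cancel[day]} cancellation(s)")
```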
Recommendations
- Improve early onboarding: Since cancellations are highest at day 1, focus on immediate value delivery and clearer communication of benefits to reduce “buyer’s remorse.”
- Target first-billing window: With another spike around 20–30 days, use proactive engagement (emails, incentives, check-ins) to encourage customers to stay past the first month.
- Identify churn signals: Track behavior in the first 30 days to flag high-risk customers early and intervene with retention tactics.
- Refine trial or guarantee policies: If cancellations align with trial expirations, adjust offers or add stickiness features to encourage longer stays.
What’s our net subscription change?
Domo version
Executive Summary
Net subscriptions increased by 231 during the period shown. The green bars indicate new subscriptions, while the red bars represent cancellations. The black line shows the overall net growth — starting small in early January 2019, spiking around mid-January with a surge in sign-ups, and then steadily climbing through February.
Although cancellations continue throughout the period, the volume of new sign-ups consistently outpaces them, keeping the overall trend positive.
Recommendations
- Capitalize on spikes: The big surge in mid-January suggests campaigns or promotions are effective at driving growth. Replicate or expand on what worked during that window.
- Reduce steady churn: Although outweighed by new sign-ups, cancellations remain high. Target retention strategies to slow the outflow and strengthen net growth.
- Sustain momentum: The upward trajectory through February shows strong performance. Reinforce acquisition tactics while doubling down on early retention to lock in gains.
How quickly do our cohorts repurchase with us?
Domo version
Executive Summary
This chart shows how quickly different customer cohorts (by first purchase month) return for repeat orders. Most cohorts repurchase within 20–30 days after their first order. For example:
- Customers from June 2018 placed their second order in about 23 days, their third in another 27 days, and their fourth in 27 days, staying engaged for ~77 days total.
- Later cohorts (e.g., Jan-Feb 2019) show longer gaps between orders and smaller customer counts, suggesting weaker repeat engagement.
- Overall, the earlier cohorts repurchased more frequently and consistently than the later ones.
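Repurchase speed per cohort comes from the gaps between consecutive orders. A sketch with invented order histories:

```python
from datetime import date
from statistics import mean

# Order dates per customer; invented for illustration only.
orders = {
    "cust_a": [date(2018, 6, 1), date(2018, 6, 24), date(2018, 7, 21)],
    "cust_b": [date(2018, 6, 5), date(2018, 6, 28)],
}

# Days between each consecutive pair of orders, across all customers.
gaps = []
for history in orders.values():
    history.sort()
    gaps += [(b - a).days for a, b in zip(history, history[1:])]

print(f"average days between orders: {mean(gaps):.1f}")  # 24.3 for this sample
```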
Recommendations
- Reinforce early engagement: Since repurchase speed is fastest in the first 20–30 days, focus loyalty campaigns, upsells, or reminders during this window.
- Analyze declining cohorts: Later groups take longer to place repeat orders. Review acquisition sources or onboarding differences that may explain the slowdown.
- Encourage third/fourth orders: Many customers stop after 2-3 purchases. Incentives (e.g., discounts, exclusive offers) at this stage could push them further along the lifecycle.
- Benchmark best cohorts: June-October cohorts show stronger retention. Replicate the campaigns or conditions from those periods.
What’s our cohort retention rate? How quickly does each cohort re-order?
Domo version
Executive Summary
This chart shows how many customers from each monthly cohort return to make another purchase, and how quickly they do so.
Retention rates hover between 24% and 39% for most cohorts. For example, in July, 34% of customers placed another order within 31-60 days of their first purchase. Cohorts from late 2018 into early 2019 show slightly stronger retention, with December hitting 39% and January at 37%. The most recent cohort shows 0% so far, likely because not enough time has passed for repeat purchases. Overall, retention is relatively steady but not high, with about one-third of customers repurchasing within two months.
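A retention figure for a window like 31-60 days is the share of a cohort whose first reorder falls inside that window. A sketch with an invented July cohort:

```python
def retention_31_60(reorder_days, cohort_size):
    """Share of a cohort whose first reorder lands 31-60 days after order #1."""
    hits = sum(1 for d in reorder_days if 31 <= d <= 60)
    return hits / cohort_size

# Invented July cohort: 100 customers, of whom 34 reorder in the 31-60 day
# window, 10 reorder later, 5 reorder earlier, and the rest never reorder.
july_days = [45] * 34 + [75] * 10 + [20] * 5
print(f"31-60 day retention: {retention_31_60(july_days, 100):.0%}")  # 34%
```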
Recommendations
- Boost early repeat orders: Since most reorders happen within 31-60 days, targeted follow-ups (emails, discounts, loyalty points) should focus on this window.
- Replicate stronger cohorts: December and January cohorts retained more customers. Analyze what drove better engagement during these months (e.g., promotions, product mix, seasonality).
- Experiment with incentives: Consider testing personalized offers or bundles aimed at pushing the ~60-90 day group to repurchase more consistently.
- Track recent cohorts: Keep monitoring newer months like February as more data comes in to confirm whether retention patterns are improving or declining.
Which cohorts stay subscribed longer? What’s the drop-off rate?
Domo version
Executive Summary
This table shows how long customers from each starting month stay subscribed and when they drop off.
December 2018 had the strongest early retention, with 72% still active at 15 days and 42% at 60 days. November 2018 also held up well, with 67% at 15 days and 35% at 60 days, plus 30% lasting to 120 days. October 2018 dropped sharply. Only 25% lasted to 60 days and almost none beyond 120 days. February and March 2019 show steep fall-off, with 0% retention by 90 days. Overall, most cohorts lose half their subscribers by 60 days, with only the stronger months (Nov-Dec 2018) showing meaningful retention past 90 days.
Recommendations
- Study November–December 2018: These cohorts lasted the longest. Identify what offers, onboarding, or seasonal campaigns contributed to stronger retention.
- Focus on 30–60 day retention: Since most drop-offs occur here, interventions (personalized outreach, value reminders, loyalty perks) should target this period.
- Reduce early churn: Some cohorts lose 40% of subscribers by day 30. Improve the first-month experience to keep customers engaged longer.
- Track newer cohorts: February and March 2019 results suggest worsening retention. Test whether acquisition sources or customer expectations have shifted.
Funnel Performance
How is our funnel performing?
Looker Studio version
→ Did we increase top and/or bottom of the funnel in the last month?
Executive Summary
This page shows how well your sales funnel is moving leads through each stage: from new leads to paying customers.
- Top of funnel (Leads): You brought in 389 new leads, which is a 151% increase compared to before. A strong improvement!
- Marketing Qualified Leads: Only 88 leads became MQLs, a 5% drop. This suggests weaker marketing qualification or nurturing.
- Sales Qualified Leads: A big jump here: 1,049 SQLs, up 24%, showing sales are picking up many leads.
- Opportunities: Dropped slightly to 39, down 2.5%, showing fewer SQLs are turning into opportunities.
- Customers: The end result is 51 new customers, meaning conversions are happening even if the opportunity stage is weaker.
Recommendations
- Tighten marketing qualification: The gap between Leads and MQLs is too wide. Review how leads are scored or nurtured so more qualify.
- Fix the SQL → Opportunity handoff: Many SQLs are not turning into opportunities. Align sales and marketing on what “qualified” really means.
- Double down on what’s working at the top: Lead generation is strong right now. Build on the channels that are driving the 151% increase.
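The stage-to-stage conversion behind this funnel can be computed directly from the dashboard's counts. Note that the MQL-to-SQL ratio exceeds 100% because, in this dataset, SQLs outnumber MQLs (leads apparently enter the sales-qualified stage without passing through MQL):

```python
# Funnel counts as reported on the dashboard, top to bottom.
funnel = [
    ("Leads", 389), ("MQLs", 88), ("SQLs", 1049),
    ("Opportunities", 39), ("Customers", 51),
]

# Conversion rate from each stage to the next.
rates = {}
for (a, n_a), (b, n_b) in zip(funnel, funnel[1:]):
    rates[f"{a} -> {b}"] = n_b / n_a
    print(f"{a} -> {b}: {n_b / n_a:.0%}")
```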
Customer LTV & Retention
Which channels are driving LTV and retention?
Looker Studio version
Executive Summary
Retention this month sits at 9%, down 7% compared to last month, with 9K returning users (a -15.9% drop). Overall traffic also declined slightly (-5.8%), showing both fewer total users and fewer repeat visits.
When it comes to channels, Organic Search is the clear leader, bringing in nearly half of all users (47%) and driving the highest revenue overall ($24B). Direct traffic ranks second with $7B in revenue, while Paid Search delivers fewer users (10K) but very high value per user ($350K each, $3B total). Social, referral, and affiliates contribute smaller shares with lower revenue per user.
Recommendations
- Double down on Organic Search: Continue investing in SEO since it drives both volume and revenue at scale.
- Leverage Paid Search efficiency: Although smaller in volume, these users deliver the highest value per head. It’s worth exploring how to scale this channel without losing efficiency.
- Investigate retention decline: Dig into what’s changed in user experience, campaigns, or competitive landscape to address the drop in returning users.
- Boost underperforming channels: Social and referral show weak revenue per user; refine targeting or reallocate budget toward better-performing channels.
Who is churning in which order?
Tableau version
Executive Summary
Most of our customers are churning quickly, around 89–95% drop-off each quarter across the years shown. Only a small portion reactivate, and even fewer are retained consistently from one period to the next. For example, in Q1, while 17,990 customers churned, only 534 were retained and 301 reactivated. The pattern is consistent across years: churn dominates, retention is small, and reactivation is even smaller.
Recommendations
- Focus on churn prevention first: Since the majority of customers leave after their initial purchase periods, early engagement and onboarding are critical to reduce drop-off.
- Build a reactivation strategy: The reactivated group is small but meaningful. Personalized win-back campaigns (discounts, targeted emails, or reminders) could expand this segment.
- Segment by cohorts: Different customer groups may churn at different rates; analyzing by product line, acquisition source, or geography could reveal where interventions are most effective.
- Test loyalty drivers: Retained customers are rare but valuable. Studying their behavior and replicating what keeps them active (bundles, subscription models, exclusive offers) can help grow this segment.
What does LTV look like per discount that brought our customers in?
Domo version
Executive Summary
Customer lifetime value varies significantly depending on the discount that brought them in. Customers who used no discount code or specific product coupons (Product-B, Product-C) delivered the highest LTV, averaging up to $100+ per customer. In contrast, deep discounts like 50JAVA or FREESHIPPING produce very low or even negative LTV (around -$15 to -$40). This shows that not all promotions attract valuable customers. Some drive one-time bargain hunters who don’t return.
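LTV-by-discount-code is a group-by over customers' acquisition codes. A sketch with invented customer rows; only the grouping logic mirrors the chart:

```python
from collections import defaultdict

# One row per customer: acquisition discount code and lifetime value.
# All rows invented for illustration.
customers = [
    {"code": "NONE", "ltv": 110}, {"code": "NONE", "ltv": 95},
    {"code": "Product-B", "ltv": 102},
    {"code": "50JAVA", "ltv": -15}, {"code": "50JAVA", "ltv": -40},
]

# Group LTVs by code, then average each group.
by_code = defaultdict(list)
for c in customers:
    by_code[c["code"]].append(c["ltv"])

avg_ltv = {code: sum(vals) / len(vals) for code, vals in by_code.items()}
print(avg_ltv)  # {'NONE': 102.5, 'Product-B': 102.0, '50JAVA': -27.5}
```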
Recommendations
- Prioritize high-LTV discounts: Invest more in campaigns using targeted product coupons or limited promotions that historically bring in quality customers.
- Reevaluate deep discount codes: Heavy offers like 50% off or free shipping may not justify the long-term loss. Test scaling them back or replacing them with smaller incentives.
- Segment by acquisition type: Different codes attract different customer profiles; tailoring post-purchase nurturing by code could improve retention.
- Run ROI analysis on discounts: Tie each campaign to LTV, not just first-purchase conversion, to ensure promotions are profitable in the long run.
What does the LTV look like for our different cohorts?
Executive Summary
Customer lifetime value varies depending on when a customer first joined. Cohorts from mid-2018 show relatively stable LTV, ranging from $41 to $52 after 6 months, with September 2018 being the strongest cohort ($52). Later cohorts (end of 2018 into early 2019) brought in more customers overall, but their LTV growth is more modest over the same timeframes. This suggests that while acquisition volume increased, the average quality or spend per customer may have dropped slightly.
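Cohort LTV at 6 months is the cohort's cumulative 180-day revenue divided by its size. A sketch with illustrative numbers (not the client's figures):

```python
# Cumulative revenue earned in each cohort's first 180 days, and cohort sizes.
# Both dicts invented for illustration.
cohort_revenue_180d = {"2018-09": 52_000, "2018-12": 43_000}
cohort_size = {"2018-09": 1_000, "2018-12": 1_000}

# 6-month LTV per cohort = revenue / customers acquired that month.
ltv_6m = {c: cohort_revenue_180d[c] / cohort_size[c] for c in cohort_size}
print(ltv_6m)  # {'2018-09': 52.0, '2018-12': 43.0}
```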
Recommendations
- Replicate successful months: September 2018 produced the highest 6-month LTV; review campaigns, channels, or promotions used during this period and consider reapplying them.
- Balance growth and value: Larger cohorts in late 2018 drove scale but not higher spend per customer. Adjust targeting to prioritize quality alongside volume.
- Track early signals: Monitoring LTV at 30 and 60 days can help predict which cohorts will underperform, allowing earlier intervention.
- Segment further: Break down cohorts by acquisition channel, product purchased, or discount used to identify why some groups deliver higher LTV.
Product Performance
→ What is our repeat purchase timing?
→ Which products are bought in the same cart?
Promote or bundle these products to increase average order value.
→ Which products are typically bought in the next order?
Include these products in post-purchase flows to drive customer lifetime value.
What’s our product sentiment on Amazon?
Tableau version
→ How’s the sentiment trend developing over time?
Executive Summary
This dashboard shows how customers feel about your products on Amazon based on reviews.
- Strong performers: Products 7, 5, 6, and 1 stand out with average ratings above 4.3 and mostly positive sentiment, supported by thousands of reviews.
- Weaker products: Products 2 and 3 are rated lower (around 3.6), with a higher share of negative sentiment.
- Critical concern: Product 4 has an average rating of just 1.42 from 138 reviews. This could harm brand reputation if not addressed.
- Trend over time: Positive reviews increased sharply around 2015-2017, then declined after 2017, suggesting either fewer sales, weaker product quality, or more critical buyers in recent years.
Recommendations
- Leverage high-rated products (7, 5, 6, 1): Use their reviews in marketing and bundle them with weaker products to lift perception.
- Fix or retire Product 4: With such low ratings, it risks pulling down your brand average. Consider quality fixes, a relaunch, or discontinuation.
- Investigate the 2017–2018 decline: Understand whether the dip was due to product issues, stronger competition, or review policy changes.
What do customers say about our product?
Tableau version
Executive Summary
This word cloud highlights the most common themes customers mention in reviews.
- Positive themes: Customers often describe the product as “cheap,” “great,” “friendly,” “memorable,” and “good value.” Wifi and central facilities are also seen as positives.
- Negative themes: A significant number of reviews highlight serious concerns such as “mould,” “unsafe,” “drugs,” “cockroach,” “broken,” “theft,” and “infection.” Words like “bathroom,” “noise,” and “airconditioning” also appear frequently in a negative context.
- Mixed: Wifi appears strongly in both positive and negative contexts, suggesting inconsistent performance.

Recommendations
- Address safety and cleanliness issues immediately: Mentions of theft, mould, cockroaches, and infection can cause major reputational harm and drive churn.
- Promote value messaging in marketing: Positive mentions of cheap, great, memorable, and friendly should be reinforced in ads and customer communications.
- Investigate inconsistent services (e.g., Wifi): Strong mentions indicate importance. Resolving reliability issues could significantly boost satisfaction.
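The word cloud itself boils down to filtered term frequencies. A minimal sketch, with illustrative reviews and a toy stopword list (a real build would use a full stopword set and the actual review export):

```python
import re
from collections import Counter

# Illustrative review texts, not actual client data
reviews = [
    "Great value, cheap and friendly staff",
    "Wifi was broken and the bathroom had mould",
    "Cheap but the wifi kept dropping",
]

# Toy stopword list; a production version would use a full set
STOPWORDS = {"the", "and", "was", "had", "but", "kept", "a"}

# Count lowercase word tokens, skipping stopwords
words = Counter()
for review in reviews:
    for token in re.findall(r"[a-z]+", review.lower()):
        if token not in STOPWORDS:
            words[token] += 1

print(words.most_common(5))
```

Word size in the cloud maps to these counts; the positive/negative coloring comes from a separate sentiment lookup per term.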
RFM Analysis
Executive Summary
Our top customers are those who buy often, spend significantly, and have purchased recently. A handful stand out, spending between $4K–$9K with 40-60+ purchases each. At the other end, we see customers who buy often but spend very little, and a group of once-valuable customers who haven’t purchased in over 1.5 years (churned best customers).
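Under the hood, R, F, and M are per-customer aggregates turned into quantile scores. A minimal pandas sketch, with an illustrative transactions table (the column names and the 1–4 quartile scale are assumptions):

```python
import pandas as pd

# Illustrative transactions: customer_id, order_date, amount
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3, 4],
    "order_date": pd.to_datetime(
        ["2019-01-02", "2019-03-01", "2018-06-10",
         "2019-02-15", "2019-02-20", "2019-03-05", "2017-08-01"]),
    "amount": [100, 150, 50, 300, 200, 400, 80],
})
today = pd.Timestamp("2019-03-31")  # analysis cut-off date

# Per-customer recency (days), frequency (orders), monetary (total spend)
rfm = tx.groupby("customer_id").agg(
    recency=("order_date", lambda d: (today - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)

# Quartile scores 1-4; lower recency is better, so its labels are inverted.
# rank(method="first") breaks frequency ties so qcut gets distinct values.
rfm["R"] = pd.qcut(rfm["recency"], 4, labels=[4, 3, 2, 1]).astype(int)
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 4,
                   labels=[1, 2, 3, 4]).astype(int)
rfm["M"] = pd.qcut(rfm["monetary"], 4, labels=[1, 2, 3, 4]).astype(int)
print(rfm)
```

A "4-4-4" customer is a current, frequent, high-spending VIP; a "1-4-4" pattern is exactly the churned best customer described above.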
The RFM scores show:
- Recency: The average customer hasn’t purchased in ~340 days, but a strong group is still active within the last 30 days.
- Frequency: Most customers buy 7–12 times, but a small number purchase more than 50 times.
- Monetary: Average lifetime value is $1,200 (median $800), though our very best reach nearly $9,000.

Recommendations
- Nurture best customers: Continue engaging the top spenders with VIP treatment, early access, or loyalty perks to secure their loyalty.
- Re-engage churned best customers: Personalized win-back campaigns could bring back those who previously spent heavily but lapsed over a year ago.
- Upsell low-spending active customers: Since they buy frequently but spend little, targeted bundles or minimum-order incentives could increase their average order value.
- Monitor frequency tiers: Use the insights to spot early signals of churn (e.g., those slipping from frequent to occasional) and intervene with retention offers.

Email Performance
How do our Klaviyo email campaigns perform in improving our onboarding process?
Looker Studio & Domo version
→ Is our Klaviyo event tracking set up properly?
Executive Overview
- Activation Flow is excellent: 95–99% completion across touchpoints. Emails are sequenced well and sustain engagement.
- Verification & Account Activation happen same-day for 93–94% of customers, with minimal slippage.
- Entry Questionnaire is the weak link: only 82–89% complete, with ~15% delaying beyond Day 1.
- Delays Analysis confirms the bottleneck: the questionnaire lags behind verification and activation, pulling down onboarding completion.

Recommendations
- Shorten Questionnaire: Test fewer fields or progressive profiling to reduce friction.
- Accelerate Completion: Add urgency with “finish in 2 mins” banners or incentives within 24h.
- Automate Nudges: Trigger SMS/push/email reminders specifically for stalled questionnaire users.
- Segment Drop-offs: Identify whether delays cluster by device, region, or acquisition channel, then personalize follow-ups.
- Track Impact to Retention: Connect questionnaire completion data with LTV/repurchase cohorts to quantify lost revenue from incomplete onboarding.
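The touchpoint completion rates above come from a simple funnel count over event data. A minimal pandas sketch (the step names and events table are illustrative, not Klaviyo's actual export format):

```python
import pandas as pd

# Illustrative onboarding events: one row per (customer, step reached)
events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step": ["signup", "verified", "questionnaire",
             "signup", "verified",
             "signup", "verified", "questionnaire",
             "signup"],
})

# Funnel order is an assumption matching the onboarding described above
FUNNEL = ["signup", "verified", "questionnaire"]

# Unique customers reaching each step, as a share of those who signed up
reached = events.groupby("step")["customer_id"].nunique()
base = reached["signup"]
rates = (reached.reindex(FUNNEL) / base).round(2)
print(rates)
```

A drop at one step of this series (as with the questionnaire here) is what flags the bottleneck worth fixing first.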
Looker Studio
Domo
Customer Segmentation Analysis
Tableau version
What is the age and gender of our audience?
Executive Summary
Our audience is primarily young adult men (ages 25–34), with the strongest viewership in India and the US. While engagement is high, we’re also seeing notable subscriber losses in the US and Indonesia, signaling areas to review our content strategy.
Recommendations
We should tailor more content toward young adult male interests while testing localized campaigns for India and the US to deepen engagement. At the same time, reviewing the types of videos causing drop-offs in the US and Indonesia can help us adjust tone, format, or frequency to reduce churn. Expanding female-targeted or younger audience content could also diversify our reach and open new growth opportunities.
Who are our most profitable customers?
Executive Summary
Our most profitable customers are repeat buyers, with a handful generating significantly higher profit than the rest. The darker and larger blocks on the chart highlight these “VIP” customers who not only spend more but also order more frequently. The list below shows the top repeat customers by profit, with most of them being male.
Recommendations
We should consider building a loyalty or exclusive perks program to strengthen retention with these high-value customers. At the same time, identifying patterns in what these customers buy and how often they purchase can help us design marketing campaigns to grow more customers like them. Finally, outreach from sales or customer success could ensure these top customers feel valued and remain loyal.
Which of our products are more loved by male/female customers?
Executive Summary
Our product preferences differ noticeably by gender. Female customers show a clear favorite in Category 2, which strongly dominates their purchases. Male customers, in contrast, spread their purchases more evenly, with Categories 6, 10, and 4 being their top choices. Overall sales per customer are similar between genders, but the split by product category highlights important differences in buying behavior.
Recommendations
We should double down on Category 2 with female-focused marketing, bundles, or loyalty campaigns to further grow that strong preference. For male customers, positioning a “top 3 favorites” package or tailored cross-selling strategy could increase basket size. Finally, using these insights to refine product messaging, promotions, and segmentation can help us maximize sales by gender-specific appeal.
Customer Support Performance
Are we hitting our support KPIs?
Tableau version
Executive Overview
Support volume is healthy, with 429 conversations in the observed period, of which 427 are closed and only 2 remain open. Most customers (393) needed only one interaction, and repeat contacts are rare (14 chatted twice, 3 chatted 3–5 times). Cases average 10 replies, which suggests conversations sometimes require lengthy back-and-forth.
Recommendation
Customer support is resolving issues promptly, but efficiency could improve by reducing the number of replies needed per case. Invest in clearer first responses (templates, knowledge base links) and proactive communication to cut down conversation length. This will improve both agent productivity and customer satisfaction.
Professional Services Performance
Are we on target in our projects?
Looker Studio version
Executive Overview
Professional Services projects show an 82% remaining budget across fixed fee engagements, with 465 hours logged out of 2,531 total. Most projects are operating under blended rates, which allows for flexible resource allocation and billing. While this indicates good control over budget utilization, the relatively low percentage of hours logged suggests that project delivery is either early-stage or delayed in execution pacing.
Recommendations
Closely monitor under-utilized project hours to avoid last-minute overrun risks. Align resource allocation with project timelines to ensure steady budget burn. Track progress milestones more tightly against budgeted hours to spot delivery delays early.
Which projects are costing us the most time?
Looker Studio version
→ Are project efforts aligned with our original estimates?
Executive Overview
Customer support handled 429 conversations, with 427 successfully closed and only 2 left open. Most customers (393) needed just one chat to resolve their issue, while a small number required repeat follow-ups. The team averaged 10 replies per case, suggesting that while issues are generally resolved, some require extended back-and-forth.
Recommendations
Focus on reducing average replies by creating clearer support scripts and FAQs to streamline resolutions. Track repeat-contact cases more closely to identify root causes and prevent recurring issues. Consider adding automation (chatbots or guided flows) for common issues to lower agent workload and shorten resolution time.
How do our services score?
Tableau version
→ What’s the feedback we’re receiving?
→ Did we encounter any outliers in the team?
Executive Overview
The service score sits at 48, trailing behind peers with mixed results across criteria. Some strengths show up in Criteria 1, 5, and 8 (all scoring above 60), but weaknesses are evident in Criteria 3 and 4, pulling the overall score down. The trend shows slight improvement over time (from 43 in Q2 to 55 in Q4), though the overall slope remains modest at 18%. The boxplot highlights wide variation across criteria, with multiple outliers, suggesting inconsistent service quality within the team.
Recommendations
Focus efforts on the weaker service elements, particularly Criteria 3 and 4, to lift the baseline. Outlier analysis should be prioritized: teams with extremely low scores need targeted coaching, while consistently strong performers can share best practices. Finally, embed a quarterly review process to reduce variability across criteria and improve overall stability in service delivery.
Are we meeting our monthly work pacing per project?
Looker Studio version
Executive Overview
The retainer projects are under-pacing this month. Against a budget of 1,064 hours, only 799 hours have been logged (75%), leaving 265 hours (25%) unused. The pacing is currently at –13%, meaning the team is behind schedule in meeting the allocated workload. Some projects (e.g., Campaign Ops, Marketing Automation) are trending closer to expectations, but others are significantly under-utilized, pulling down overall pacing.
Recommendations
Re-evaluate whether workload distribution matches the contracted retainer hours. Shift additional tasks into under-utilized projects to balance pacing. Increase monitoring mid-month to adjust earlier instead of waiting until month-end. If chronic under-pacing continues, renegotiate contract hours to better align with actual effort.
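The dashboard does not spell out how the pacing figure is computed, but a common definition compares logged hours with the budget prorated by elapsed time. A sketch under that assumption (the elapsed-month fraction below is illustrative):

```python
# Pacing sketch: how far logged hours deviate from where the budget
# "should" be at this point in the month. The formula is an assumption,
# not the dashboard's documented definition:
#   pacing = logged / (budget * elapsed_fraction) - 1

def pacing(logged_hours: float, budget_hours: float,
           elapsed_fraction: float) -> float:
    expected_so_far = budget_hours * elapsed_fraction
    return logged_hours / expected_so_far - 1

# Figures from the overview: 799 of 1,064 budgeted hours logged.
# Assuming ~86% of the month has elapsed, pacing lands near -13%.
print(f"{pacing(799, 1064, 0.863):+.0%}")
```

Negative pacing means the team is burning hours slower than the retainer plan; the same function flags over-servicing when it turns positive.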
What are our average turnaround times for specific tasks and who is outperforming those?
Tableau version
Executive Overview
Turnaround times vary significantly across task types. Management Information production tasks take the largest share of time (around 25%), followed by Management Information design (24%) and root cause analysis (22%). Rule review and amendments are quicker at 11%. Performance outliers are evident, with some individuals completing tasks much faster or slower than the team average. This uneven spread suggests both efficiency opportunities and potential bottlenecks.
Recommendations
Identify high performers in each task category and extract best practices for broader training. Standardize processes for root cause analysis and design tasks, where time variance is highest. Reallocate work where possible to team members with consistently faster turnaround times. Monitor outliers for recurring delays to spot where additional coaching or resource support is required.
HR Performance
Have we hired according to our nationality diversity rules?
Tableau version
→ Which of our departments has the highest hiring needs?
Executive Overview
Hiring peaked in 2016 (63 hires) and has since declined to 48 in 2018. Diversity goals show room for improvement, as the majority of hires are from the United States, with smaller representation from Australia, Singapore, China, France, and Russia. Faculty roles dominate across Elementary, Middle, and High Schools (83% in Middle and High), while Support and Administrative roles are limited. Job titles are heavily weighted toward Teachers (50%), with the next largest group being Instructional Assistants (13%).
Recommendations
Broaden hiring pipelines to increase nationality diversity, particularly beyond the U.S. and English-speaking countries. Balance recruitment efforts by addressing underrepresented support and administrative roles, which currently make up only 8-20% of hires. Consider focusing on departments with faculty-heavy hiring patterns (Middle and High School) to diversify job functions and ensure adequate non-instructional support. Monitor hiring trends closely to prevent further decline and align staffing levels with institutional growth needs.
How well are we doing on male versus female diversity?
Tableau version
Executive Overview
The organization shows a clear gender imbalance across different levels and roles, with female staff heavily concentrated in Elementary School, Instructional Assistant, and Support roles, while male staff are more visible in departments such as Science, Math, and Community Sports/Activities. High School faculty and some grade levels display a closer gender balance, but overall staffing still leans female. When looking at job titles, Teachers and Instructional Assistants are predominantly female, whereas IT Support Engineers and Baseball Coaches skew male. A few departments, including Learning Support and Counseling, display near parity, but the overall trend highlights traditional gender clustering in both instructional and non-instructional roles.
Recommendations
- Set role-specific balance goals (e.g., attract more male applicants for Elementary/IA roles, more female applicants for STEM/IT/sports).
- Broaden sourcing: use gender-diverse job boards, adjust job ad wording, and require mixed-gender shortlists.
- Track funnel metrics by gender (applications, interviews, offers, hires) to pinpoint where the imbalance occurs.
- Develop internal mobility pathways, such as mentoring Instructional Assistants into STEM teaching roles or encouraging male staff to move into student-support functions.
- Review quarterly and share a simple scorecard by school level, department, and title to maintain accountability.
Statistical Analysis
Hotel Risk vs. Customer Satisfaction Matrix
Tableau version
Executive Overview
The portfolio of properties is distributed across four risk quadrants, with several high-risk outliers.
Hostal Blanes La Barca and Hostel Mon Vie fall in the “Delist” quadrant, combining both high risk and high customer dissatisfaction, making them critical concerns. A cluster of properties, including Ocean View Motel, Prince Myshkin, and Alicante Hostal, sit close to the delist threshold and may warrant proactive intervention.
On the other side, properties such as BLC Design Hotel and Burj Al Arab appear in the “Low Risk” quadrant with low risk scores and relatively strong or neutral customer satisfaction. Some, like Amazon Clipper Premium and Cultural House, score poorly on risk while showing mixed customer sentiment, placing them in the “Monitor” category.
The overall distribution suggests a small number of urgent problem properties dragging portfolio performance, while most assets cluster in low-to-moderate risk zones.
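The quadrant assignment behind this matrix reduces to two thresholds. A sketch with illustrative cut-offs (the actual thresholds, and the label for the low-risk/low-satisfaction quadrant, are assumptions; the chart names only Delist, Monitor, and Low Risk):

```python
# Illustrative cut-offs, not the dashboard's actual threshold values
RISK_CUTOFF = 50           # risk score at or above this counts as high risk
SATISFACTION_CUTOFF = 3.5  # rating below this counts as dissatisfied

def quadrant(risk_score: float, satisfaction: float) -> str:
    if risk_score >= RISK_CUTOFF and satisfaction < SATISFACTION_CUTOFF:
        return "Delist"        # high risk, unhappy guests
    if risk_score >= RISK_CUTOFF:
        return "Monitor"       # high risk, guests still satisfied
    if satisfaction < SATISFACTION_CUTOFF:
        return "Investigate"   # assumed label: low risk, unhappy guests
    return "Low Risk"          # low risk, satisfied guests

print(quadrant(80, 2.1))  # a Delist-style outlier
print(quadrant(20, 4.6))  # a Low Risk property
```

Plotting each property's (risk, satisfaction) pair against these two lines reproduces the four-quadrant scatter described above.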
Recommendations
- Immediate Action: Prioritize remediation or exit strategies for Hostal Blanes La Barca and Hostel Mon Vie given their placement in the Delist quadrant.
- Targeted Audits: Investigate the moderate-risk cluster (Ocean View Motel, Prince Myshkin, Alicante Hostal) to identify operational or compliance issues driving risk.
- Protect Strong Performers: Highlight and market low-risk, positive performers (BLC Design Hotel, Burj Al Arab) as benchmarks and use their practices to inform improvements elsewhere.
- Monitoring Protocol: Establish quarterly reviews for properties in the Monitor quadrant (Amazon Clipper Premium, Cultural House) to ensure risks do not escalate further.
- Strategic Rebalancing: Consider whether divesting or repositioning high-risk assets could free up resources for growth in stable, low-risk segments.