Truebill Product Manager Take Home Assignment


1. Smart Savings works by users selecting an amount and frequency to save. For example, $10/week. Once activated, it autosaves until the user stops it. You notice that smart savings balances appear to plateau at $100. Why do you think this is happening and what would you do to increase people’s savings over time?
Assumptions (for the sake of the exercise):
Bugs/defects are not at fault here. Users are setting the limit at $100 on average
Student Loans can be tracked within Truebill
Saving Goals can be applied within Truebill
Steps to investigate why this is happening:
Look at the following metrics:
How many users are capping their savings at $100?
Identify the following attributes/segments for those users who are plateauing at $100:
How much is the user spending per month, on average?
What is the average dollar amount of subscriptions they have?
What are the top spending categories for these users? Any correlation?
How much interest are they paying toward credit cards/debt, on average?
Does this cohort have student loans? If so, what is the average amount owed?
Based on the results/answers above, let’s assume we can segment our users on a few key insights: a large group of users who plateau at $100 are still paying off a lot of interest, have student loans, and spend a good amount eating at restaurants (a rough sketch of this segmentation follows).
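As an illustration only, here is a minimal sketch of how that segmentation could be pulled, assuming a hypothetical user_metrics.csv export with columns like smart_savings_balance, monthly_spend, subscription_spend, credit_card_interest, student_loan_balance, and dining_spend (these names are assumptions, not Truebill’s actual data model):

```python
import pandas as pd

# Hypothetical export of user-level metrics (column names are assumptions,
# not Truebill's real schema).
users = pd.read_csv("user_metrics.csv")

# Users whose Smart Savings balance has stalled around the $100 mark.
plateaued = users[users["smart_savings_balance"].between(95, 105)]

attributes = ["monthly_spend", "subscription_spend", "credit_card_interest",
              "student_loan_balance", "dining_spend"]

# Compare the plateaued cohort against all users on the attributes above.
cohort_summary = pd.DataFrame({
    "plateaued": plateaued[attributes].mean(),
    "all_users": users[attributes].mean(),
})
print(cohort_summary)

# Top spending categories within the plateaued cohort.
print(plateaued["top_spending_category"].value_counts().head())
```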
Suggestions to improve savings:
Expand Smart Savings to indicate areas of improvement by calculating a better minimum payment for debt/loans so more can go toward savings
Prompt the user to save more by rounding transactions up to the next dollar and putting the difference toward savings
Build a ‘savings bonus’ every few months to reward users
Reward these users for paying off their student loans every x months through a reward program
Offer a ‘Truebill’ savings account that grows over time and links directly to a direct deposit
Display an automated Savings Calculator
Leverage an A.I.-powered financial savings bot to highlight and make recommendations to save beyond the $100 mark
Suggest a higher savings amount in correlation with their current spending habits, especially as Smart Savings gets closer to $100

2. Marketing sends you a slack message stating that ROAS numbers are looking lower than normal, and are wondering if something happened on the product side that is causing this. What do you do?
Assumptions:
Let’s assume this was for the past month
Steps to take:
Ask why they feel that this is an issue; specifically, ask for the past month’s total ad costs compared to the revenue attributed to those ads
If the response remains the same, look at the campaign platform (assuming we have one) for the # and cost of ads run in the past month, divide the attributed revenue by that ad spend to recompute ROAS, and compare against average/previous months (a quick sketch of this check follows these steps)
If the average is indeed lower, confirm with marketing that they are indeed correct, then look at the usage:revenue picture for the past month and dig into what was occurring.
The next action would be to look at the specific costs:
Partner/vendor costs
Affiliate Commissions
Clicks and Impressions (was there a higher bid/price per ad?)
Look at current usage to see where possible drop-off is occurring, or whether CTR on ads is low, to see if ad efficacy is weaker there
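As a quick illustration (the figures below are made up, not from any real campaign platform), ROAS is just attributed revenue divided by ad spend, so the month-over-month comparison is straightforward:

```python
# Hypothetical monthly figures; in practice these would come from the
# campaign platform and revenue reporting.
monthly = {
    "June":   {"ad_spend": 50_000, "attributed_revenue": 175_000},
    "July":   {"ad_spend": 55_000, "attributed_revenue": 170_000},
    "August": {"ad_spend": 60_000, "attributed_revenue": 150_000},  # month in question
}

for month, figures in monthly.items():
    roas = figures["attributed_revenue"] / figures["ad_spend"]
    print(f"{month}: ROAS = {roas:.2f}")

# If the latest month's ROAS is clearly below prior months, the next step is
# to split spend vs. revenue: did costs rise (bids, impressions, partner fees)
# or did downstream conversion/revenue fall (a possible product-side change)?
```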

3. You run a new AB test raising the prices of the premium upgrade. The test only applies to new users (old users are grandfathered into the old prices). The results show a decrease in premium conversion rate by 15% but an increase in ARPU (of premium users) by 15%. Do you roll out the new pricing to all users or revert to control or something else? Why?
Assumptions:
If the A/B test was set up correctly, I imagine there would be a hypothesis with measurable goals/outcomes to evaluate against. I assume for this exercise there was none.

I would calculate the amount of revenue generated by the ARPU increase and compare it to the potential revenue lost from the lower premium conversion rate (a back-of-the-envelope version of this is sketched below). If I HAD (unlikely) to make a call, I would go with the approach that generated the most revenue.
In reality, I would probably look to improve the conversion rate a bit more to get more users converted through more experimentation. For example, A/B test a ‘trial’ period with the premium rate as another experiment to see if users would convert more, and increase the time-value of the premium upgrade so that they are retained.
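As a rough sketch of that comparison (illustrative numbers only, not real Truebill figures): expected revenue per new user is conversion rate × ARPU, and a 15% drop in conversion paired with a 15% lift in ARPU nets out slightly negative:

```python
# Illustrative baseline numbers (assumptions, not real Truebill figures).
baseline_conversion = 0.10   # 10% of new users convert to premium
baseline_arpu = 100.0        # $ per premium user

test_conversion = baseline_conversion * (1 - 0.15)  # -15% conversion
test_arpu = baseline_arpu * (1 + 0.15)              # +15% ARPU

control_rev_per_new_user = baseline_conversion * baseline_arpu  # 10.00
test_rev_per_new_user = test_conversion * test_arpu             # 0.085 * 115 = 9.78

print(f"Control: ${control_rev_per_new_user:.2f} per new user")
print(f"Test:    ${test_rev_per_new_user:.2f} per new user")
# 0.85 * 1.15 = 0.9775, i.e. roughly a 2% revenue decrease per new user,
# before considering longer-term retention/LTV effects of a smaller premium base.
```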
4. You communicated to the leadership team that a certain new product would be rolled out by the 30th. When you check in with the dev team, they tell you it looks like things are going to take several weeks longer. What do you do?
Assumptions:
Development hasn’t begun
Grooming is finished at the user story level (vs epic) and everything is sized
GTM activities have not been sent out
Steps to take:
Understand why it would take so long. Ask if there is anything we could do to cut scope to meet the MVE (minimum viable experience), possibly bringing in XD if that is an option worth considering. If there is scope that can be cut, deliver the cut items as a fast follow post-launch
If scope cannot be cut, I would take ownership of the miscommunication, lay out the reasons why we cannot launch on time, and come up with a proposed plan to make sure we have accounted for implementation time and bring that to leadership

5. The Cancellations team has launched a new bot to automatically cancel subscriptions. The bot takes 1 minute or more to give the user feedback on whether their credentials for the service they would like to cancel are valid. The bot is also failing 40% of the time because credentials are invalid, and only a small portion of users are submitting their credentials for a second time. What do you do?
Assumptions:
The bot is feature-toggled
Steps to take:
Look at the 1 minute+ response time and ask the devs why it’s taking so long and whether there are ways to improve it.
I would assume there could be quite a bit of drop-off
Calculate the drop-off rate or amount of exited/closed sessions
I would look at the 40% failure rate to understand what is occurring around credential validation (probably dev work)
I would work with the dev team to see what the effort would be to fix it and how likely the fix would be to succeed
I would also explore whether we can keep a user logged in so the bot does not have to re-authenticate to cancel
Determine if it’s happening with certain services or not
Take the above information, toggle the bot off, take steps to improve authentication timing and accuracy, and conduct proper testing (a rough funnel calculation follows)
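As a hedged illustration of how much the failure rate and low retry rate cost us (the retry rate and volume below are assumptions; the 40% failure rate comes from the prompt):

```python
# Illustrative funnel math for the cancellation bot.
attempts = 1_000        # users who submit credentials (assumed volume)
failure_rate = 0.40     # credentials invalid on the first try (from the prompt)
retry_rate = 0.10       # "small portion" who resubmit -- assumed

first_try_success = attempts * (1 - failure_rate)   # 600
retries = attempts * failure_rate * retry_rate       # 40
retry_success = retries * (1 - failure_rate)         # 24, assuming the same failure rate

completed = first_try_success + retry_success
print(f"Completed cancellations: {completed:.0f} of {attempts} ({completed / attempts:.0%})")
# ~62% -- nearly 4 in 10 users who try to cancel never get through, which frames
# how much is at stake in fixing validation speed and accuracy.
```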

6. You roll out a new AB test where half the people get a normal premium upsell and the other half get a 7-day free trial before billing begins (everything else is the same except messaging about the 7 day trial). How would you compare results and decide whether the test or control is the winner?
Assumptions:
If the A/B test was set up correctly, I imagine there would be a hypothesis with measurable goals/outcomes to evaluate against. I assume for this exercise there was none.
Let’s assume this was done over the course of 2 months with a population of at least 1000 people (split equally)
Let’s assume that no one in the experiment had expired credit cards and had to update the card when up-selling or change payment methods
Steps to take:
Look at the CTR for those that decided to take the upsell
Success is at least 80-95%
Look at the MRR for the 2 months
Success is tied to your (or Sales) OKR
Look at the Churn/Retention rate over 2 months
Determine the point at which users churned (e.g. day, time)
Look at how often the app was used by users that did get up-sold vs. those that did not
Difference in avg. session time
Difference in DAU
If the majority of the above (MRR, usage, CTR, and retention) was generally positive for the test group, then I would say it’s a success (a simple significance check on the conversion difference is sketched below)
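As a hedged sketch (the counts below are made up), one way to check that the conversion difference between the two arms is real rather than noise is a two-proportion z-test, alongside the metrics above:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts after the 2-month run (assumed numbers, not real results).
control_n, control_conversions = 500, 40    # normal premium upsell
test_n, test_conversions = 500, 55          # 7-day free trial

p1 = control_conversions / control_n
p2 = test_conversions / test_n
p_pool = (control_conversions + test_conversions) / (control_n + test_n)

# Two-proportion z-test on the conversion-rate difference.
se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / test_n))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"control {p1:.1%} vs test {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
# Only call the trial variant the winner if the lift holds up statistically AND
# the downstream metrics (MRR, retention once billing starts) stay healthy.
```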

7. Open the truebill app and go to the main welcome screen (if you are logged in already, simply logout). Browse through the welcome screens, tap get started, and look at the signup page. Create wireframes for a new user experience from initial app open through signup (you must gather user name, email address, and password at a minimum) that you think would improve our install -> signup rate. (these can be low fidelity)
Assumptions:
Marketing/Sales is totally fine with dropping/swapping certain pages
Signup is considered once a user has created a password
A user’s name (not username), first and last, is still required for certain features within the app

Goals:
Increase sign-up rate
[Wireframe: initial sign-up screen]
Include popular sign-on methods
If user selects FB/Apple ID/ Google, they simply have to create a password and they will be ‘signed up’
Include some value prop on initial signup page
Place sign up flow before ‘Choose Your Top Goals’ screen
Not sure if this would work, would like to test this
[Wireframe: sign up with email screen]
Sign up with Email option: include FE error validation; right now Truebill requires the user to hit the CTA before indicating invalid inputs
Updated password requirements, so it’s clearer and more likely that the user gives valid input on the first go. As the user types, password requirements are checked off (sketched below)
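To illustrate the check-off behavior (the specific rules here are assumptions, and the real logic would live in the app’s front end, not Python), a minimal sketch:

```python
import re

# Assumed password requirements for the sketch (Truebill's real rules may differ).
REQUIREMENTS = {
    "at least 8 characters": lambda pw: len(pw) >= 8,
    "contains a number": lambda pw: bool(re.search(r"\d", pw)),
    "contains an uppercase letter": lambda pw: bool(re.search(r"[A-Z]", pw)),
}

def check_password(pw: str) -> dict:
    """Return which requirements are met, to drive the checked-off UI state as the user types."""
    return {label: rule(pw) for label, rule in REQUIREMENTS.items()}

print(check_password("true"))       # nothing satisfied yet
print(check_password("Truebill1"))  # all requirements satisfied
```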

8. The Bill Negotiations team is receiving complaints from angry customers who are upset that Truebill is charging them 40% of the first year savings for Bill Negotiation. These are a small subset of customers, but they are leaving poor (1 star) app reviews. What do you do next?
Assumptions:
Let’s assume this is a bug, and not an expected behavior
Let’s assume the reviews have not been responded to
Steps to take:
Refund the appropriate amount back, or the full amount (given the legal/financial authority to do so)
Possibly offer them an extended trial of premium membership
Partner with the team and immediately respond to the app store reviews to let those individuals know
After an extended amount of time, reach out to those users in-app to ask them to leave a review again
Log the defect as a High priority (not critical), and prioritize it accordingly
