Pecan Goals, Drivers, Bets: Focus Groups Q2 2021

Group E Session: Q2 Goals, Drivers, Bets

Here is how this session is going to look:

1. 15 min: Go over our goals, drivers, and current bets
2. 10 min: Each participant suggests their perspective on priorities
3. 20 min: Each participant suggests new features and ideas we have not thought about
4. 45 min: Discuss the suggestions


Part one: Understand and give feedback on current drivers and priorities

Our Current Drivers and Priorities

Each driver lists its description, its current priority, and the vote counts from the Higher / Lower / Just right poll columns.

Goal: 10 Self-Built Active Projects (2 drivers)

- Understandability & Conviction (Priority: Low; votes: 19)
  Users have understanding of and conviction in their self-built projects. Action owners and builders need to understand and trust model output: improve dashboard output prediction; flexible benchmark against random.

- Easy Activation (Priority: Medium; votes: 10, 3)
  Users can easily integrate model output with their business process. Builders should be able to easily utilise predictions, either by direct integration or by post-processing.

Goal: 75 Active Projects (1 driver)

- Monitored Predictions (Priority: Low; votes: 4, 3)
  Users have a clear way to monitor prediction performance on an ongoing basis. The platform should ensure prediction performance over time and create transparency over performance: clear separation of test vs. ongoing performance, proactive alerting on model drift, feature importance over time, and data-drift comparison between time intervals.

Goal: 35 Builders (3 drivers)

- Easy & Simple Building (Priority: High; votes: 6, 1)
  Users can easily translate a business problem into a Pecan model. Builders can easily translate their business problem into 'Entity, Target and Attributes' queries (this includes a better UI, user training, etc.).

- Independent & Streamlined Model-Training (Priority: High; votes: 2, 4)
  Users have a quick and straightforward way to identify model issues and gaps and resolve them. Users should be able to self-resolve data and model discrepancies: explainable and self-resolvable model/data issues, and independent improvement, so users know whether their model is good enough and how to improve it if it is wrong.

- Scalable & Maintainable Training Pipeline (Priority: High; votes: 9)
  Assuming all queries were defined correctly, the training pipeline should finish processing rapidly and without errors, despite large data sets and complex computation.

Goal: 5M ARR (3 drivers)

- Flexible Data Sharing (Priority: Low; votes: 5, 1)
  Pecan should be able to easily add any new connector in a timely manner (up to two weeks).

- Reduce Prediction Cost by 50% (Priority: Low; votes: 1, 3)

- Accurate Regressions (Priority: Medium; votes: 11, 1)
  Significant improvement in regression-based models. Currently both LTV and demand fail to meet expectations in many cases.

Goal: Increase Engineering Velocity (1 driver)

- Engineering Velocity and Quality (Priority: High; votes: 4, 4)
  Reduce the lead time: the number of days from the moment a feature enters the development process until it is successfully used by users.

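The "Monitored Predictions" driver above asks for data-drift comparison between time intervals and proactive alerting. As an illustrative sketch only (function name, bin count, and thresholds are hypothetical, not Pecan's implementation), a population stability index (PSI) check is one common way to quantify that kind of drift:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline feature distribution and a recent one.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 major drift worth alerting on.
    """
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    eps = 1e-6  # avoid log(0) on empty bins
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)   # feature at training time
stable = rng.normal(0, 1, 10_000)     # same distribution later on
shifted = rng.normal(0.8, 1, 10_000)  # distribution after drift

print(population_stability_index(baseline, stable))   # small, well under 0.1
print(population_stability_index(baseline, shifted))  # large, well over 0.25
```

Computing this per feature on each time interval and alerting when the score crosses a threshold would cover the "compare between time intervals" part of the driver.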

Part two: Suggest Bets/Features that can push these drivers higher

Topic: What features are we missing, and for which driver/goal?

Each idea lists its related driver(s), the author's initials, and a comment.

1. Write idea here (LS)

2. Clearer overfit / leakage indications (Understandability & Conviction; YK)
   Adding clearer indications of modelling problems (such as overfit, leakage, disparity in feature importance, dropped features) to the builder.

3. Incorporate raw-data analyser script / raw-data quality score (Independent & Streamlined Model-Training; YK)
   Also related to "Flexible Data Sharing". A raw-data quality analyzer (or a check of conformity to Pecan requirements) might help indicate data issues early and prevent running into errors during training.

4. Slow but robust Training → Fast Prediction (Scalable & Maintainable Training Pipeline; YK)
   Also affects "Accurate Regressions". Training that includes smart DS data-handling strategies (population sampling strategies, gradual sampling, feature selection, time-series cross-validation) might yield better results and speed up future predictions. IMO, to customers, predictions are far more time-sensitive than training.

5. Transparency of the training pipeline (Understandability & Conviction; YK)
   Showing statistics about the training pipeline somewhere (how much data was crunched, samplings tried, models fit, etc.) might better convince customers of model strength.
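Idea 2 above asks for clearer overfit/leakage indications in the builder. A minimal sketch of what such a check could look like, with illustrative (untuned) thresholds and a hypothetical function name:

```python
def diagnose_fit(train_score, val_score, gap_threshold=0.10, leak_threshold=0.99):
    """Flag common modelling problems from train/validation scores (e.g. AUC).

    Thresholds are illustrative defaults, not tuned values.
    """
    flags = []
    if val_score >= leak_threshold:
        # A near-perfect holdout score usually means a leaky feature,
        # not a miracle model.
        flags.append("possible target leakage: validation score suspiciously high")
    if train_score - val_score > gap_threshold:
        flags.append("possible overfit: large train/validation gap")
    return flags

print(diagnose_fit(0.97, 0.71))    # flags a possible overfit
print(diagnose_fit(0.999, 0.998))  # flags possible target leakage
print(diagnose_fit(0.82, 0.79))    # no flags
```

Surfacing messages like these next to the model results, rather than raw metrics alone, is the kind of "clearer indication" the idea describes.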

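Idea 3 proposes a raw-data quality score computed before training. A toy sketch of the shape such a scorer could take; the checks, weights, and function name are all assumptions for illustration, not an existing Pecan script:

```python
def raw_data_quality(rows, key):
    """Score a raw table (list of dicts) on a few cheap checks.

    Returns a 0-100 score plus a list of issues; checks and weights
    are illustrative only.
    """
    n = len(rows)
    issues, score = [], 100.0
    # Null rate across all cells.
    cells = [v for r in rows for v in r.values()]
    null_rate = sum(v is None for v in cells) / len(cells)
    if null_rate > 0.05:
        issues.append(f"high null rate: {null_rate:.0%}")
        score -= 30
    # Duplicate entity keys break 'Entity, Target and Attributes' queries.
    if len({r[key] for r in rows}) < n:
        issues.append("duplicate keys")
        score -= 40
    # Constant columns carry no signal.
    for col in rows[0]:
        if len({r[col] for r in rows}) == 1:
            issues.append(f"constant column: {col}")
            score -= 10
    return max(score, 0.0), issues

rows = [
    {"id": 1, "spend": 10.0, "country": "US"},
    {"id": 2, "spend": None, "country": "US"},
    {"id": 2, "spend": 7.5, "country": "US"},
]
score, issues = raw_data_quality(rows, key="id")
print(score, issues)  # low score: nulls, duplicate ids, constant country
```

Running something like this at connection time would surface data issues before the training pipeline fails on them, which is the early-indication benefit the comment describes.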

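Idea 4 mentions time-series cross-validation as one of the data-handling strategies. The key property is that validation folds always come later in time than the data the model trains on. A small sketch with a hypothetical helper (an expanding-window splitter, similar in spirit to scikit-learn's TimeSeriesSplit):

```python
def expanding_window_splits(n, n_splits=3, test_size=2):
    """Yield (train_idx, test_idx) pairs over n time-ordered rows.

    Each test fold lies strictly after its training window, so the
    model never peeks ahead in time.
    """
    splits = []
    for k in range(n_splits):
        test_end = n - (n_splits - 1 - k) * test_size
        test_start = test_end - test_size
        splits.append((list(range(test_start)),
                       list(range(test_start, test_end))))
    return splits

for train, test in expanding_window_splits(10):
    print(train, test)
# Each successive fold trains on a longer history and validates
# on the next slice of time.
```

Evaluating regression models (LTV, demand) this way instead of with random splits is also relevant to the "Accurate Regressions" driver, since random splits let future rows leak into training.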
