Objectives
- Client can choose from an off-the-shelf audience
- Client can see an enriched overview of survey responses
- Can clean data to remove nonsensical data (after surveys and throughout)
- Can group open-text responses into categories
- Can communicate with the product team to advise on how to enrich data or better construct questions
- Can add tags to variables to associate them with various categories. Variables can have many associated tags (e.g. Finance → lending). Should we have a tag hierarchy?
- Variables can be mapped onto one another so that answers to the same question from different surveys can be matched (a minimal sketch of this tagging and mapping model follows the checks below)

Survey answer checks system
Once a survey has been completed, the following checks occur, which impact the participant's (see how it works in the page):
- Scramble check of the survey responses
- Database automatically soft deletes the user if there is a duplicate of the same profile
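Below is a minimal Python sketch, under stated assumptions, of how the variable tagging and cross-survey mapping described above could be modelled. The names `Tag`, `Variable`, and `VariableMapping`, and all of their fields, are illustrative assumptions rather than an existing schema; a second tag level is shown only as an optional field, since the tag-hierarchy question is still open.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Tag:
    # e.g. category="Finance", sub_category="lending"
    category: str
    sub_category: str | None = None  # optional second level if a tag hierarchy is adopted

@dataclass
class Variable:
    survey_id: str
    question_code: str
    tags: set[Tag] = field(default_factory=set)  # a variable can carry many tags

@dataclass
class VariableMapping:
    # Links the same underlying question across two surveys
    # so their answers can be matched in reporting.
    source: Variable
    target: Variable

# Example: the same income question asked in two different surveys
income_2023 = Variable("survey_2023", "q_income", {Tag("Finance", "lending")})
income_2024 = Variable("survey_2024", "q12_income")
income_mapping = VariableMapping(source=income_2023, target=income_2024)
```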
Survey Rules
- Only participants who meet the demographic criteria can do surveys
- Only participants with a Bad Actor Score < 2 can do surveys
- User can send a template message to participants in the target audience who have opted in to receive template WhatsApp messages (a minimal sketch of these rules follows this list)
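A minimal Python sketch of the eligibility rules above. The `Participant` record, its field names, and the shape of `criteria` are assumptions for illustration, not the real schema; only the Bad Actor Score threshold (< 2) and the opt-in requirement come from the rules themselves.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    age: int
    province: str
    bad_actor_score: int
    whatsapp_opt_in: bool  # has opted in to receive template WhatsApp messages

def meets_demographics(p: Participant, criteria: dict) -> bool:
    # Illustrative criteria shape, e.g. {"min_age": 18, "provinces": {"Gauteng"}}
    provinces = criteria.get("provinces")
    return p.age >= criteria.get("min_age", 0) and (provinces is None or p.province in provinces)

def can_do_survey(p: Participant, criteria: dict) -> bool:
    # Only participants who meet the demographic criteria
    # and have a Bad Actor Score < 2 may take the survey.
    return meets_demographics(p, criteria) and p.bad_actor_score < 2

def can_receive_template_message(p: Participant, criteria: dict) -> bool:
    # Template WhatsApp messages only go to the target audience who have opted in.
    return meets_demographics(p, criteria) and p.whatsapp_opt_in
```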
Paul Brittan says
“The best option is to take the data from your App DB (marketing) and run it through ELT packages to clean the data into a new DB. This new DB with clean data is what we will then use to report off of.
We then have control of what info we are displaying to the reporting layer, as well as keeping our source data intact.”
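A minimal sketch of the flow Paul describes: extract from the app (marketing) DB, clean the data, and load it into a separate DB that the reporting layer reads from, leaving the source data untouched. The connection strings, table names, and cleaning rules here are illustrative assumptions, and pandas/SQLAlchemy stand in for whatever ELT package is chosen.

```python
import pandas as pd
from sqlalchemy import create_engine

# Assumed connection strings; the actual databases are not named in this doc.
app_db = create_engine("postgresql://user:pass@app-db/marketing")
reporting_db = create_engine("postgresql://user:pass@reporting-db/clean")

def run_elt() -> None:
    # Extract: pull raw survey responses from the app DB (source data stays intact).
    raw = pd.read_sql("SELECT * FROM survey_responses", app_db)

    # Transform: drop nonsensical rows (illustrative rules only).
    clean = raw.dropna(subset=["participant_id", "answer"])
    clean = clean[clean["answer"].astype(str).str.strip() != ""]

    # Load: write the cleaned data into the reporting DB that the
    # analytics tool (e.g. Data Studio) would report off of.
    clean.to_sql("survey_responses_clean", reporting_db, if_exists="replace", index=False)
```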
Questions
- If we use Data Studio as our analytics tool, which DB should we use?

Notable Points
Paul found 74 participants who had the same phone number, and sometimes the same name, but different Landbot IDs. After looking through various chat histories, we attributed this to the following reasons (a sketch of a duplicate check follows the list):
- Fraud: one person using 2 phones/WhatsApp accounts
- Connected to one another (married/dating)
- One has a non-SA number, so they are using a friend who has an SA number
- Participants got new phones and had to re-sign up
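The kind of check that surfaces these cases could look like the pandas sketch below, which flags profiles sharing a phone number across different Landbot IDs for manual review before any soft delete. The DataFrame and its column names (`landbot_id`, `phone_number`, `name`) are assumptions for illustration.

```python
import pandas as pd

def find_shared_numbers(participants: pd.DataFrame) -> pd.DataFrame:
    # participants is assumed to have columns: landbot_id, phone_number, name
    ids_per_number = participants.groupby("phone_number")["landbot_id"].nunique()
    shared = ids_per_number[ids_per_number > 1].index

    # Return every profile attached to a phone number that appears under more
    # than one Landbot ID, so the chat histories can be reviewed by hand.
    dupes = participants[participants["phone_number"].isin(shared)]
    return dupes.sort_values(["phone_number", "landbot_id"])
```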