Refine Call Topics/Categories

Author:
@Li, Bolin
Contextual Points

Roadmap

Ground Truth Topics

What call topics can be considered as representations of ground-truth?
We consider two types of topics:
During a call, an agent either chooses call topics from a dropdown list or enters topics manually. Topics derived from these agent-generated call topics were further refined with domain knowledge. These refined topics can be summarized based on managers’ expectations and domain knowledge, or linked to frequently occurring words, phrases, or sentences commonly found in historical transcripts.
Topics that are newly identified and relevant to the content of recent transcripts. These new topics can only be extracted from corresponding words, phrases, or sentences in the most recent transcripts.
Evaluating the above call topics is important. Previously refined topics and newly identified call topics are used as labels of the target variable for exploratory analysis and topic modeling. External changes can impact the accuracy of topic assignment. Given that call topics are influenced by subjective interpretations of transcripts and domain knowledge, we use validation and visualization techniques for iterative assessment and enhancement of the chosen topics. Topics of either type that successfully pass the validation process and demonstrate meaningful connections with transcripts in visualizations are assumed to be representations of ground truth.

What call topics cannot be considered as representations of ground-truth?
Call topics that are nonsensical, filled with irrelevant noise, or unlikely to provide meaningful insights for the business team.
A topic may not be considered an accurate representation of ground truth if most of its transcripts show inconsistent patterns.

What are the relationships between ground-truth topics, refined topics and new topics?
Utilizing ground-truth topics is crucial for evaluating topics generated by machine learning techniques and GPT prompt engineering. These include:
Refined topics: topics previously identified and refined by business teams.
New topics: topics newly identified as meaningful to business teams.
Representations of the ground-truth topics: topics that are refined and subsequently validated through evaluation can be considered representations of the ground-truth topics.
The initial refined topics may not be fully complete, since we found that some are too general and some are overly constrained. Identifying and extracting new meaningful topics also requires domain expertise, subjective understanding, and judgement, especially when differentiating actual emergent topics from unexpected noise. Therefore, the processes of finalizing refined topics and identifying new topics require iterative improvement with human involvement (e.g. domain knowledge, proper evaluation metrics, reliable visualizations) to guarantee the quality of the final outcomes.

Screenshot 2023-12-12 at 1.51.46 PM.png

Our development and experimental focus is mainly on accurately predicting two types of topics:
(1) Evaluate the quality of refined topics and build ML models with representations of ground-truth to assign topics for input transcripts,
(2) Among new topics that differ from previous representations of ground-truth, identify topics that represent emerging trends rather than meaningless noise.
Specifically:
First, we evaluate the quality and validate previously refined call topics. With validated topic labels or representations of ground-truth, we utilize machine learning models to predict topics based on input transcripts.
Evaluating and validating refined topics ensure that data exploration, text mining, and model building are proceeding correctly. We also need to be cautious that recently added transcripts may include some new emergent topics.
Second, we aim to deliver solutions that can generate new topics. Topic evaluation and refinement ensure that new emergent topics are valuable to business teams and not merely noise.

Objectives and Approaches


What are long-term objectives?
In the long run, we aim to develop a system that combines custom rules, embeddings, machine learning, and GPT to automatically generate accurate call topics through iterative improvements. This will involve creating a tailored machine learning model for topic prediction, implementing a validation layer to evaluate the quality of assigned topics combined with human knowledge, and fine-tuning GPT to effectively identify previously validated topics and newly emerging ones.

For short-term objectives, we are focusing on two aspects:
1.1 In the first aspect, we engage in data preprocessing and cleaning to assess previously identified topics using selected evaluation metrics and visualizations. The objective is to establish a validation layer that incorporates chosen visual methods and evaluation metrics for effective human evaluation.
1.2 Following this, we delve into experimenting with text embeddings, dimensionality reduction techniques, and soft-clustering algorithms. The aim is to build a machine learning system that leverages transcript data and previously refined topics to predict a range of topics for each transcript. The final output will enable us to rank topics for each transcript, allowing us to categorize them as primary, secondary, and tertiary, among others (a minimal ranking sketch follows this list).
2. Upon completing the first aspect, we work on prompt engineering and fine-tuning to ensure GPT produces relatively stable topics. Most GPT-generated results should be similar to previously refined topics.
In summary, exploring various evaluation methods is essential to identify ground-truth topics for a set of transcripts. Machine learning solutions can be customized to provide stable and reliable outputs. GPT introduces the possibility of identifying new topics that reflect emerging trends. By assessing and integrating each component, we aim for more comprehensive and stable outcomes. Through continuous iterations, we plan to further refine the capabilities of both customized machine learning and GPT, aligning with our long-term objectives.
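To illustrate the primary/secondary/tertiary ranking described in 1.2, here is a minimal sketch; the topic names and probability matrix are hypothetical stand-ins for the output of a fuzzy/soft clustering model, not real results.

```python
# Minimal ranking sketch; topic names and probabilities are illustrative placeholders.
import numpy as np

topic_names = ["billing", "withdrawal", "address change", "beneficiary", "other"]

# Hypothetical soft-clustering output: one row per transcript, one column per topic.
probs = np.array([
    [0.70, 0.15, 0.05, 0.05, 0.05],
    [0.10, 0.40, 0.35, 0.10, 0.05],
])

def rank_topics(row, names, k=3):
    """Return the k most probable topics as (name, probability) pairs."""
    top_idx = np.argsort(row)[::-1][:k]
    return [(names[i], float(row[i])) for i in top_idx]

for i, row in enumerate(probs):
    primary, secondary, tertiary = rank_topics(row, topic_names)
    print(f"transcript {i}: primary={primary}, secondary={secondary}, tertiary={tertiary}")
```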

How do we plan to approach the problem?


Screenshot 2024-01-15 at 3.36.19 PM.png


Build topic modeling solution with agent-generated topics:
After preprocessing and cleaning transcripts, topics and other variables, we work on exploring visualization methods and evaluation metrics to estimate the quality of refined call topics. Our approach is to synthesize qualitative analysis, visualizations, and quantitative evaluation metrics to validate and refine topics for machine learning experiments. This step may incorporate additional variables and data sources, such as raw and simplified GPT-generated topics, to derive deeper insights and enhance visualization capabilities.
Having validated call topic labels, we further delve into the historical transcripts to learn the associations between the validated topics and relevant key words, phrases, or sentences in cleaned transcripts. We then experiment and optimize machine learning models to categorize transcripts into topic clusters. Our objective is to develop a machine learning system capable of assigning multiple topics to each transcript and identifying topics that are different from previously known topics.
Optimize GPT-generated topics:
To ensure GPT produces outputs that are stable, reliable, and of high quality, we implement prompt engineering and fine-tuning of GPT's hyperparameters. A considerable portion of GPT's topics should correspond with our validated refined topics or representations of ground-truth.
Furthermore, GPT's newly generated topics must be clear and distinct to be practical. An output refinement layer can be used to evaluate the quality of GPT-generated results. With this output refinement layer and the evaluation functions developed for topic modeling, we can visualize and assess GPT's output. This refinement layer also helps detect and categorize meaningless and nonsensical patterns, using both qualitative and quantitative metrics. Outputs classified as noise are preserved for enhancing future noise-pattern detection. Conversely, outputs that differ from existing refined topics yet are coherent are considered potential new topics. These emergent topics can be reviewed and then annotated to refine machine learning methods.
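As a rough illustration of such a refinement layer, the sketch below scores GPT-generated topics against refined topics with embedding cosine similarity. The sentence-transformers model name, topic lists, and thresholds are assumptions for demonstration, not production choices.

```python
# Minimal refinement-layer sketch, assuming sentence-transformers is installed.
# Model name, topic lists, and thresholds below are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

refined_topics = ["address change", "beneficiary update", "withdrawal request"]      # hypothetical
gpt_topics = ["change of address", "asdf 123 qwerty", "policy surrender inquiry"]    # hypothetical GPT outputs

refined_emb = model.encode(refined_topics, convert_to_tensor=True)
gpt_emb = model.encode(gpt_topics, convert_to_tensor=True)

# Cosine similarity between each GPT topic and every refined topic.
sims = util.cos_sim(gpt_emb, refined_emb)

MATCH_THRESHOLD = 0.75   # above: treat as an existing refined topic (assumed cut-off)
NOISE_THRESHOLD = 0.30   # below: likely noise; in between: candidate new topic (assumed cut-off)

for topic, row in zip(gpt_topics, sims):
    best = float(row.max())
    if best >= MATCH_THRESHOLD:
        label = f"matches refined topic '{refined_topics[int(row.argmax())]}'"
    elif best <= NOISE_THRESHOLD:
        label = "likely noise"
    else:
        label = "candidate new topic (needs review)"
    print(f"{topic!r}: {label} (max similarity {best:.2f})")
```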

High-level Tasks


Screenshot 2024-01-15 at 11.02.01 PM.png

Step 1
In step 1, we extract and merge datasets for data exploration and visualizations. We then build data preprocessing and cleaning functions for the following topic modeling experiments. An important part of this step is to explore visualization methods and evaluation metrics to assess topic-based transcripts. Subtasks include converting text data into different vector representations, building visualizations that aid in qualitative analysis, and selecting metrics that measure the coherence and relevance of the topic clusters.
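As one example of the vector representations mentioned above, the following sketch builds bag-of-words and TF-IDF matrices with scikit-learn; the sample transcripts are placeholders.

```python
# Minimal vectorization sketch, assuming scikit-learn; sample transcripts are placeholders.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

transcripts = [
    "caller asked to update the beneficiary on the policy",
    "agent processed a partial withdrawal request",
]

# Bag-of-words counts (a suitable input for LDA / Guided LDA experiments).
count_vec = CountVectorizer(ngram_range=(1, 2), stop_words="english", min_df=1)
X_counts = count_vec.fit_transform(transcripts)

# TF-IDF weights (useful for keyword extraction and similarity checks).
tfidf_vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english", min_df=1)
X_tfidf = tfidf_vec.fit_transform(transcripts)

print(X_counts.shape, X_tfidf.shape)
print(count_vec.get_feature_names_out()[:10])
```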

Step 2
In step 2, we first focus on delving into pattern discovery within transcripts that have assigned topics. We extract insights using both visual and evaluation functions, and then carry out a series of experiments with machine learning solutions (e.g. BERTopic, Top2vec, LDA, Guided LDA, NMF, etc.). Our goal is to capture connections between the frequency of specific keywords, phrases, or structural elements within a given cluster or topic of transcripts, as well as to examine their relationship with associated variables such as call-related data, the refined outputs from GPT, and so on.

Steps 3 & 4
In steps 3 & 4, we focus on improving and refining the performance of ChatGPT and collaborating with the infrastructure team to conduct prompt engineering, fine-tune hyperparameters, and construct the output refinement layer.

Step 5
In step 5, we analyze and weigh the strengths and weaknesses of rule-based methods, ML models, and ChatGPT. We will collaborate with other teams to develop an integrated framework that combines rule-based functions, ML predictive models, input prompts, the output refinement layer, and GPT-generated results. The objective is to effectively combine the unique advantages of each approach, ensuring both stability and optimal performance in topic generation.
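Purely as an illustration of how the integration could look (step 5 is still being designed), the sketch below combines rule-based, ML, and GPT-refined topics under an assumed precedence order; all three source functions are hypothetical stubs standing in for the real components.

```python
# Illustrative sketch only (step 5 is work in progress): one possible precedence scheme
# for combining rule-based, ML, and GPT-refined topics. All three source functions are
# hypothetical stubs, not real components.
def rule_based_topics(transcript: str) -> list[str]:
    return ["withdrawal request"] if "withdrawal" in transcript.lower() else []

def ml_model_topics(transcript: str) -> list[str]:
    return ["account maintenance"]          # stand-in for the optimized ML model

def refined_gpt_topics(transcript: str) -> list[str]:
    return ["partial withdrawal"]           # stand-in for GPT plus the output refinement layer

def assign_topics(transcript: str, k: int = 3) -> list[str]:
    # Assumed precedence: rules first, then ML, then GPT; de-duplicate and keep the top k.
    combined: list[str] = []
    for topic in rule_based_topics(transcript) + ml_model_topics(transcript) + refined_gpt_topics(transcript):
        if topic not in combined:
            combined.append(topic)
    return combined[:k]

print(assign_topics("Client called about a partial withdrawal from the annuity."))
```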

Agent Topics Modeling (Step 1 & 2)



Screenshot 2024-01-15 at 3.49.17 PM.png


Code Review and Data Extraction

This step involves reviewing earlier notebooks and Python code created for Zowie, with the aim to identify, extract, and integrate datasets for subsequent exploration.
High-level Tasks
Tasks
Detailed Tasks
1
Code Review and Data Extraction
Review previous notebooks and run code built for Zowie
Spike
• Review and run code, check previous insights, check useful variables, visualizations, etc.
• Review functions used for data extractions and manipulations
• Identify reusable preprocessing and cleaning code
• Comprehend exploration and visualization insights
• Understand previous modeling results
• Identify and save other reusable code for future tasks
2
Code Review and Data Extraction
Identify, extract and merge datasets for data exploration
Discuss with the team to extract and merge datasets: • Locate datasets that contain raw transcripts, agents' initial and refined topics, GPT-generated topics, and simplified GPT topics, along with relevant variables • Filter based on specified timeframes to extract valid and complete observations for the selected datasets • Identify the specific time when refined topics in the dropdown menu were updated • Include datasets that have GPT topics and simplified GPT topics • Merge with other relevant variables

Data Preprocessing and Cleaning

This step involves preprocessing and cleaning the merged dataset for topic modeling, refining the dataset for tokenization, and processing transcripts for sentence transformers (a minimal cleaning sketch follows the task table below).
High-level Tasks
Tasks
Detailed Tasks
1
Data Preprocessing and Cleaning
Preprocess and clean the merged dataset for topic modeling
Transform the merged dataset • Convert data types including datetime, string, numerical, categorical, and text columns • Check the quality of topics and transcripts • Detect frequently occurring symbols, characters, and words in each transcript and each topic • Identify words that are either highly repeated or extremely rare • Detect call timestamps and identify irrelevant information in each transcript • Check for unnecessary characters, repeated signs, and noise • Save transcripts and topics after preprocessing and initial cleaning
2
Data Preprocessing and Cleaning
Preprocess and clean the merged dataset for topic modeling (words, n-grams, etc.)
Clean, process, and explore transcripts at the level of individual words and n-grams • Tokenize transcripts into individual words and apply lowercasing • Remove unnecessary characters, punctuation, stop words, etc. • Apply stemming and lemmatization for word-frequency analysis • Run frequency analysis for bigrams, trigrams, and quadgrams • Save cleaning functions, transcripts, and word embeddings that can be used for LDA, Guided LDA, etc.
3
Data Preprocessing and Cleaning
Preprocess and clean the merged dataset for topic modeling (sentence transformers)
Clean and process sentence-level transcripts for sentence transformers • Convert selected components to lowercase where appropriate • Remove unnecessary elements like encrypted strings and special characters • Double-check removal of unnecessary parts while maintaining the complete sentence structure • Experiment with several pre-trained sentence transformers to choose one efficient pre-trained model • Save cleaning functions and transcripts for the sentence transformers used in BERTopic
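A minimal cleaning sketch for the word/n-gram track, assuming NLTK is installed and its stopword and WordNet corpora have been downloaded; the sample transcript and the exact cleaning choices are illustrative.

```python
# Minimal transcript cleaning sketch for the word/n-gram track, assuming NLTK.
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def clean_transcript(text: str) -> list[str]:
    """Lowercase, strip non-alphabetic characters, drop stop words and short tokens, lemmatize."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)          # remove digits, punctuation, symbols
    tokens = [t for t in text.split() if t not in STOP_WORDS and len(t) > 2]
    return [lemmatizer.lemmatize(t) for t in tokens]

print(clean_transcript("Agent: The caller asked about updating beneficiaries on 01/05."))
```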

Topics Visualization and Evaluation

This step includes exploring and building visualization methods for evaluating topic-based transcripts. It also involves developing metrics to assess previous agents' topics, refined topics, GPT topics, simplified GPT topics, and other topic modeling results (a minimal coherence sketch follows the task table below).
High-level Tasks
Tasks
Detailed Tasks
1
Topics Visualization and Evaluation
Explore visualization methods to evaluate topic-based transcripts
Spike • Explore visualization methods used in LDA and BERTopic topic models and other visualizations used in network analysis for community detection
2
Topics Visualization and Evaluation
Build visualization methods to evaluate topic-based transcripts
Build visualization functions to • Visualize topics with extracted words from transcripts • Visualize topics with bigrams and trigrams and explore related insights • Visualize all transcripts in one plot with their topic names • Visualize heatmaps of the topic similarity matrix • Visualize topics, their sizes, and their corresponding words
3
Topics Visualization and Evaluation
Explore evaluation metrics to assess previous topics, LDA and BERTopic.
Spike • Explore and understand theories behind key evaluation metrics in LDA- and BERTopic-based topic modeling (e.g. topic coherence, perplexity, topic diversity, etc.)
4
Topics Visualization and Evaluation
Build evaluation metrics to assess previous topics, LDA and BERTopic.
Evaluate previously generated agents' topics, refined topics, GPT topics, and simplified GPT topics • Develop functions for extracting top words, keywords, and top n-grams as topic representations • Utilize visualizations to display the performance of topic clustering • Apply chosen evaluation metrics to assess the quality and relevance of the topics • Develop functions to measure similarities between generated topic representations
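As a concrete example of one such metric, the sketch below computes c_v topic coherence with gensim; the tokenized transcripts and topic word lists are placeholders for the real cleaned data and extracted topic representations.

```python
# Minimal coherence sketch, assuming gensim; data below is placeholder content.
from gensim.corpora import Dictionary
from gensim.models.coherencemodel import CoherenceModel

tokenized_transcripts = [
    ["beneficiary", "update", "policy", "form"],
    ["withdrawal", "request", "annuity", "check"],
    ["address", "change", "policy", "mail"],
]
topics = [
    ["beneficiary", "update", "form"],
    ["withdrawal", "annuity", "check"],
]

dictionary = Dictionary(tokenized_transcripts)
coherence = CoherenceModel(
    topics=topics,
    texts=tokenized_transcripts,
    dictionary=dictionary,
    coherence="c_v",
).get_coherence()
print(f"c_v coherence: {coherence:.3f}")
```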

Machine Learning Experiments

In this step, two distinct topic models are trained and optimized: BERTopic and Guided LDA. BERTopic utilizes transformers, dimensionality reduction, fuzzy/soft clustering, and topic representations. Guided LDA is based on the Dirichlet distribution and incorporates the use of pre-defined seed words (a minimal BERTopic sketch follows the task table below).
For additional information, please refer to the details provided in the following link:

High-level Tasks
Tasks
Detailed Tasks
1
Machine Learning Experiments
Build BERTopic (Main Components: Sentence Transformers, Dimensionality Reduction, Fuzzy/Soft Clustering, Topic Representations, Fine-tune Topics)
Spike • Review BERTopic theory and documentation for topic modeling • Investigate semi-supervised BERTopic to leverage previous labels in code • Explore the setup and methods to apply fuzzy clustering algorithms with BERTopic • Explore fine-tuning of topic representations in BERTopic for the generated results
2
Machine Learning Experiments
Build BERTopic
Implement and evaluate BERTopic • Choose embedding method, dimensionality reduction, and clustering algorithm • Implement c-TF-IDF for topic representations and visualize the representations • Update and apply visualization methods and evaluation metrics specific to BERTopic
3
Machine Learning Experiments
Build BERTopic
Develop topic extraction logic and build a function to select the 3 most relevant topics. Handle special edge cases and integrate clustering results.
4
Machine Learning Experiments
Build BERTopic
Optimize the BERTopic model for best performance • Experiment with different embedding and dimensionality reduction methods • Identify key hyperparameters in the dimensionality reduction and clustering algorithms • Build functions to systematically explore and optimize BERTopic, and record results • Choose the set of hyperparameters that yields the best balance of performance and efficiency • Evaluate performance with updated visualization methods and evaluation metrics
5
Machine Learning Experiments
Build Guided LDA (One semi-supervised learning method that generates fuzzy clustering results)
Spike ​• Understand theories in Guided LDA to set up keyword seeds • Examine Guided LDA approach and available package
6
Machine Learning Experiments
Build Guided LDA
Keyword selection for implementation of Guided LDA • Develop functions to extract pertinent keywords and n-grams for assigned topics • Examine and select keywords and n-grams to guide LDA • Choose parameters for the implementation of Guided LDA
7
Machine Learning Experiments
Build Guided LDA
Execute and evaluate Guided LDA • Implement the Guided LDA model with the selected parameters, previous topics, and keywords • Establish rules to identify the three most relevant topics for each transcript • Evaluate the performance of Guided LDA and benchmark against BERTopic • Update and apply visualization methods and evaluation metrics specific to Guided LDA
8
Machine Learning Experiments
Build Guided LDA
Hyperparameter optimization for Guided LDA • Determine key hyperparameters in the Guided LDA model that significantly affect performance • Build functions to systematically explore and optimize these hyperparameters • Document the visualization outcomes and modeling performance • Choose the optimal set of hyperparameters and compare with the optimized BERTopic performance • Apply visualizations to compare and evaluate previous agents' topics, GPT topics, BERTopic results, etc.
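The following sketch shows the shape of a BERTopic pipeline with the components named above; the embedding model, the UMAP/HDBSCAN parameters, and the transcript loader are assumptions rather than the tuned configuration. The probability matrix it returns can be ranked with the same argsort logic shown earlier to pick primary, secondary, and tertiary topics.

```python
# Minimal BERTopic pipeline sketch, assuming bertopic, sentence-transformers, umap-learn,
# and hdbscan are installed. Parameters and model choices are illustrative only.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

docs = load_cleaned_transcripts()  # hypothetical helper returning a list of cleaned transcript strings

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
umap_model = UMAP(n_neighbors=15, n_components=5, metric="cosine", random_state=42)
hdbscan_model = HDBSCAN(min_cluster_size=25, prediction_data=True)

topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
    calculate_probabilities=True,   # soft assignments, needed for top-3 ranking
)
topics, probs = topic_model.fit_transform(docs)

# Inspect topic representations (c-TF-IDF keywords per topic).
print(topic_model.get_topic_info().head())
# `probs` is a document-by-topic matrix; apply the earlier ranking sketch to it.
```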

GPT Topics Generation (Step 3 & 4)


Step 3. GPT Prompt engineering — refined topics in historical transcripts
3.1 Set up a development environment using the same or a similar model API for valid experimentation with various prompts. Ensure the transcripts dataset exclusively contains historically refined topics, excluding any new ones.
Set up a development environment to test GPT correctly
3.2 Explore a range of prompt inputs and adjust hyperparameter values in GPT to reduce randomness, with the goal of generating topics identical to or closely resembling the refined topics (a minimal sketch follows step 3.3). Note that the input prompts include these previously refined topics for selection purposes.
Develop various prompts and fine-tune GPT’s hyperparameters
3.3 Utilize similarity metrics, coherence measures, and other measurements to evaluate the accuracy of matching between GPT-generated topics and the previously assigned refined topics. Continuously optimize this matching accuracy by experimenting with different prompts and adjusting hyperparameter values.
Evaluate GPT-generated topics and productize the most effective prompts and hyperparameters settings
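A minimal sketch of step 3.2, assuming the openai v1 Python client; the model name, prompt wording, and topic list are placeholders rather than the production prompt, and the low temperature is one way to reduce randomness across repeated calls.

```python
# Illustrative low-randomness prompt sketch, assuming the openai v1 client.
# Model, prompt, and topic list are placeholders, not the production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

refined_topics = ["address change", "beneficiary update", "withdrawal request"]  # hypothetical list
transcript = "..."  # a cleaned historical transcript (placeholder)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0.0,  # reduce randomness so repeated calls stay close to the refined topics
    top_p=1.0,
    messages=[
        {"role": "system", "content": "Select up to 3 call topics, using only the provided list."},
        {"role": "user", "content": f"Topics: {refined_topics}\nTranscript: {transcript}"},
    ],
)
print(response.choices[0].message.content)
```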

Step 4. GPT and output refinement layer — new topics in recent transcripts
4.1 As in step 3.1, set up an identical development environment. However, for a valid evaluation of newly generated topics, it's important to use recent transcript data that actually includes new emergent topics; ideally, these new emergent topics can be identified with the business team.
Extract recent transcripts that contain previously refined topics and new topics
4.2 Develop an output refinement layer utilizing similarity metrics, coherence measures, and domain knowledge to categorize GPT-generated topics. This includes classifying them as similar to refined topics, exact matches, or distinct topics that could be either noisy or new emergent topics.
Develop the output refinement layer to classify outputs as previously refined, new, or noisy topics.
4.3 Experiment with rules and thresholds of similarity metrics or coherence measures to more accurately distinguish between noisy topics and actual new emergent topics. Apply text mining techniques to extract patterns in noisy topics. Compared with noisy topics, new topics are expected to show more similarity to previously refined topics.
Test output refinement layer and store the optimized rules and thresholds for production.
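One simple way to run the threshold experiment in 4.3 is to sweep a similarity cut-off against a small hand-labeled sample; the scores and labels below are made up for illustration.

```python
# Minimal threshold-sweep sketch for 4.3; similarity scores and labels are placeholders.
import numpy as np
from sklearn.metrics import f1_score

# Max cosine similarity of each GPT topic to the refined topic list (placeholder values),
# plus hand labels: 1 = genuine refined/new topic, 0 = noise.
max_sims = np.array([0.82, 0.12, 0.55, 0.07, 0.64, 0.31])
labels = np.array([1, 0, 1, 0, 1, 0])

best = (None, -1.0)
for threshold in np.arange(0.1, 0.9, 0.05):
    preds = (max_sims >= threshold).astype(int)
    score = f1_score(labels, preds)
    if score > best[1]:
        best = (round(float(threshold), 2), score)

print(f"best threshold={best[0]}, f1={best[1]:.2f}")
```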

Step 5. Design an architecture in PROD to Integrate rules, ML model and GPT outcomes
Prior to outlining the items in step 5, it’s important to address the following questions:
Do the previously refined topics encompass all aspects of the corresponding historical data, and are they valid for building ML models?
Is GPT capable of consistently generating accurate topics that match or closely resemble the previously refined topics?
Does GPT consistently generate new topics, with very little noise, that are relevant to business teams?
Are rule-based functions accurately mapping transcripts to the refined topics?
Does the optimized ML model accurately map transcripts to the refined topics?
How can we make sure previously refined topics and unknown new topics are systematically generated through a refinement layer?
How can we separately measure the matching accuracy of known refined topics and the matching accuracy of unknown new topics?
How can we optimize matching accuracy by synergistically combining rules, the ML model, and GPT outcomes to complement each other?
(Work in Progress for step 5)

Background

Contextual-Points
Subject
Context
Date
1
Quality of call topics selected or entered by agents
For a long time, agents have been able to choose call topics from a dropdown list or enter a different topic while talking with clients.
However, the selected topics might not always represent the main point of the call, especially if chosen early by agents. Agents' inputs could also contain noise, and some entries may be subtopics of topics already in the pool.
12/8/2023
2
Is an agent adhering to the prescribed script?
Whether or not an agent is adhering to the prescribed script may not be a significant factor in assessing the quality of their calls. Adherence to the script does not necessarily indicate whether the outcome of the call is positive or negative.
12/8/2023
3
Process and clean agents’ raw call topics text data
In the historical dataset, there might be 100+ call topics selected by agents. Kit collaborated with Ali to correct spelling errors and preprocess the text data. Kit then mapped raw call topics to simplified topics relying on Ali’s expertise.
Review Kit’s notebooks: there should be one cell that maps agents’ call topics to a list of simplified topics.
12/8/2023
4
Standardize call topics with domain knowledge
In June 2023, the Operations team decided to standardize call topics. Janet and Chris worked on standardizing the list of topics and narrowing it down to 36 unique call topics.
Based on the MECE (mutually exclusive and collectively exhaustive) principle, Kit made some modifications to make sure every single one of those topics is a non-overlapping and truly unique category.
6/1/2023
5
GPT3.5 generates unstable and messy call topics
The Zowie team passed a list of topics (possibly with 3 or so different topics added?) to GPT3.5, asking “Based on this call transcript, can you select up to 3 call topics from this list?” (Zowie contact: Harry Cam)
However, GPT3.5 could not consistently follow prompts to generate the expected results. Some new topics were generated, and it could return messy values, unexpected results, echoed input prompts, and more.
Over several weeks of calls, GPT3.5 generated thousands of call topics. As more and more data came in, more and more new call topics were generated by GPT3.5. We were considering applying algorithms to simplify the topics generated by GPT.
6/1/2023
6
Preliminary Topic Modeling and Sentiment Analysis
Kit and Zac have conducted preliminary analysis using LDA for topic modeling and VADER for sentiment analysis. The formats of call center datasets have changed after moving them into MongoDB. New data uploading and cleaning functions need to be built based on current tables in BigQuery.
Preliminary topic clusters were generated. We tried to add names to them based on words in the clusters, but the performance was not convincing.
9/1/2023
7
Zowie insights and transcriptions datasets
There are two datasets in Zowie (pg-zinnia-data-production-v1). Currently there is a small number of simple prompts, so these prompts can be further improved. Audio files are not utilized for insights generation. Input prompts consist of refined topics, but ChatGPT could return messy strings, unexpected values, repeated input prompts, and more. Confirm with the Zowie team (contact: Harry Cam).
12/1/2023
8
Notebooks in Zowie Github repository
There are 3 notebooks in the repository, and each contains one block of SQL code to download a dataset from BigQuery. These notebooks were made prior to having production data available for this analysis. Zac suggested applying a similar strategy to the download block in advisors-excel-text-analysis.ipynb, but we now have the pg-zinnia-data-production-v1.call_center_v1 datasets and the raw datasets that those figures are built from.
Using the production data is probably going to reflect the correct data most closely. Especially for the topics selected, the pg-zinnia-data-production-v1.call_center_v1.call_logger_all_call_entries dataset will have all of the topics from orion/call logger.
12/28/2023
9
Where to find the refined topics and previous raw call topics?
After June-2023, the options of call topics have changed in the dropdown menu.
The refined topics and previous raw call topics were all in the same single column, so it’s more a feature of the data over time than separately organized data. After June 2023, we changed the dropdown menu for the call representatives.
In order to partition and compare old raw topics with new refined topics, try using call_logger_all_call_entries.InitiatedDate if we work on call logger, or se2ivr.FIVE9_Call_Log.DATE if we work on se2ivr.
12/28/2023
10
Which table columns can we use to examine refined call topics, raw agent call topics, raw transcripts, and clean transcripts?
Raw Agent Topics: Call Logger has the CallTypeID column, which each call logger database decodes in CallTypes. So I would join the call_logger_all_call_entries table to one of the call_logger_<xyz>.CallTypes table to retrieve the CSR-selected call type.
Refined Call Topics: we need to partition the same CallTypeID column before and after June 2023, and use that to compare old vs. new.
GPT Call Topics: If we want to use GPT ones, pg-zinnia-data-production-v1.call_center_v1.call_insights WHERE prompt_key like '%Call%Type%' ( the prompt key might be wrong here, but it should be something like “call type”)
Transcripts: the transcripts are in pg-zinnia-data-production-v1.call_center_v1.call_transcripts. We only really have the encrypted ones right now, but are planning for the unencrypted ones in the future.
transcript_df: clean_transcripts_extra and clean_transcripts in transcript_df are previously saved data that had stop words removed
12/29/2023
11
Two sources may be used as inputs to evaluate and validate refined call topics
gpt_topic_mappings:
gpt_topic (gpt results)
gpt_topic_simplified (refined topics but for gpt results in this case)
It’s important to evaluate and validate previous agents’ topics, refined topics, GPT topics and simplified GPT topics with visualizations and metrics.
1/2/2024
12
Starting year-month-date 2022-08-08
SessionId doesn’t appear in the call logger data until 2022-08-08. We currently cannot look back past 2022-08-08 with Zowie/Five9 and Call Logger datasets.
1/3/2024
13
Discuss data extraction using pg-zinnia-data-production-v1
Document the relationships among the three principal datasets (call_insights, call_transcriptions, and call_logger_all_call_entries) in the section Join Production Datasets.

1/24/2024

Call Center Datasets

pg-zinnia-data-production-v1

call-logger datasets

pasted image 0.png
Call logger databases contain observations collected from the user interface used by agents during the call. There are 20+ carriers, and each carrier has one call logger table that has been synced into BigQuery. These call logger datasets contain client ID, phone number, agents’ typed notes, timestamps, contract, call transcript summary, etc. At this point in time, the transcripts themselves are not yet integrated into the call logger system.
Contract column: for most calls, there is a contract number to help identify which policy was discussed during the call.
Session_ID is important for joining call_logger datasets with other datasets.

pasted image 0.png

pasted image 0.png


We can join CallEntries dataset in call_logger carrier database with Five9_Call_Log in se2ivr database using Session_ID.

se2ivr Initial Voice Routing

Databases in se2ivr can be considered the metadata for calls, offering more abstract information. This includes details like the caller's name, phone number, the call's origin, and the agent handling the call, among other things.
Once a day in the morning, datasets in se2ivr are filled by a report coming out of FIVE9.
pasted image 0.png
Screenshot 2023-12-04 at 10.31.17_PM.png
Screenshot 2023-12-04 at 10.28.24_PM.png

The FIVE9-ACD-Detail encompasses the automatic call distribution system, detailing the routing mechanisms and identifying which agent received each call.
The FIVE9-Call-Log serves as a repository for metadata related to calls, which includes information about the caller, their phone number, the origin of the call, and the identity of the agent who handled it.

five9 datasets

Five9 operates as the core servicing platform for the call center, acting as a downstream monitoring source system. It archives audio files of calls and provides a comprehensive summary of the agent's state, detailing the duration an agent spends in each state during a call.
These datasets contain the length of time a call rings, the talk time, and the duration for which a call is on hold, offering a detailed overview of call handling and agent activity. Call service agents, on the other hand, use Call Logger as their primary interface for interacting with Five9.
Screenshot 2023-12-04 at 10.35.48_PM.png

Screenshot 2023-12-04 at 10.47.28_PM.png

Zowie MongoDB datasets

To obtain the source data for Zowie, we can execute a data join by matching the 'CTI_CALL_NUMBER' field in Zowie with the 'Session_ID' field in the Call Logger datasets.
Zowie integrates with audio downloaded from the Five9 API. Zowie features two main datasets: an Insights Dataset and a Transcriptions Dataset.
pasted image 0.png
pasted image 0.png

The Transcriptions dataset utilizes diarization to distinguish between the agent and the client during conversations. It also includes other functionality, for example identifying events occurring a few seconds before each call. The transcription process is handled by WhisperX. Zowie is configured to synchronize this data every hour. Additionally, it pulls certain metadata from the se2ivr and call logger datasets, incorporating this information into its metadata column.
pasted image 0.png
Example prompt: “Given the context and input list, what are the top 3 themes can be generated?”
There are a small number of prompts in the dataset.

Join Production Datasets

pg-zinnia-data-production-v1
call_logger_all_call_entries
CallTime
CallTypeID — Call Topics selected from the fixed list.
CallerTypeID — Who is the caller?
InitiatedDate/CreatedDate — the date a call started on
SessionID: one stream covering one call, or multiple calls if some of them were transferred; can be used as a join key for agent information.
A call transfer might be related to a change of call topic, so it’s better to assign topics for each call transcript even though they share the same SessionID.
Agents assign/choose topics for each record. We can do topic assignment for each record rather than across the session ID, which is consistent with the GPT setup.
carrier - CallEntryID (composite primary key): within each carrier, CallEntryID should/may be used as a unique identifier for calls. Between carriers, calls might share the same CallEntryID.
CallSummary: shorter version of transcripts
A GPT output summary, generated using a prompt like “summarize the whole thing in 250 words or less.”
BARTScore can be used to monitor the summary performance.

Agent Topics & Role — call_logger_xyz

Join with each relevant call_logger_xyz tables
for each carrier - CallEntryID
left join on CallTypeID in call_logger_xyz to get CallTypes
can also get CallerTypes from relevant call_logger_xyz table

Identify call_logger_all_call_entries as the base dataset to join with other tables in pg-zinnia-data-production-v1.
se2ivr.FIVE9_Call_Log.
SESSION_ID
Agent info:
Screenshot 2024-01-24 at 9.58.38 PM.png

Screenshot 2024-01-24 at 9.57.25 PM.png

Transcripts — call_transcriptions

Screenshot 2024-01-24 at 10.08.55 PM.png
Screenshot 2024-01-24 at 10.10.45 PM.png
Join call_center_v1.call_transcriptions that is sourced from Zowie_mongodb.transcriptions
The Zowie_mongodb.transcriptions table stores the actual transcripts.
call_center_v1.call_transcriptions
ctiCallNumber same as SessionID
call_center_v1.call_transcriptions transcription column contains the transcript text
actual timestamps of the call: startTime, endTime
general call statistics:
agentId is just the email address

GPT Topics — call_insights

Join call_center_v1.call_insights that is sourced from Zowie_mongodb
Call Types: required minimum distribution — RMD, being all caps, can be encrypted and look like noise.
Join using callId, which is like SessionID and is not unique to a call.
callId is the same for multiple transferred calls
Cross join — it’s hard to identify whether these insight objects were generated for this part of the call. There is no way to uniquely link them.
When prompt_key is summary, the result column will be the actual text of the summary.
Use callId in call_insights to join call_transcriptions
from call_transcriptions, take ctiCallNumber
use ctiCallNumber to join with SessionID in call_logger_all_call_entries
take result/summary from call_insights to join with CallSummary in call_logger_all_call_entries
CallSummary is also from GPT summary
One issue is that callId is similar to SessionID: neither is unique for each call. We have duplicated callId and SessionID values for transferred calls. There is no second key for call_insights.
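For reference, a join along the keys described above might look like the following, assuming the google-cloud-bigquery client; the fully qualified table and column names are taken from the notes above but should be verified, and the prompt_key pattern is the possibly-wrong placeholder mentioned earlier. Because callId and SessionID repeat for transferred calls, this join can fan out, so de-duplication rules still need to be decided.

```python
# Illustrative BigQuery join sketch, assuming google-cloud-bigquery is installed
# and the table/column names above are correct; verify before relying on it.
from google.cloud import bigquery

client = bigquery.Client(project="pg-zinnia-data-production-v1")

sql = """
SELECT
  e.SessionID,
  e.CallTypeID,
  e.CallSummary,
  t.transcription,
  i.result AS gpt_result
FROM `pg-zinnia-data-production-v1.call_center_v1.call_logger_all_call_entries` AS e
JOIN `pg-zinnia-data-production-v1.call_center_v1.call_transcriptions` AS t
  ON t.ctiCallNumber = e.SessionID
LEFT JOIN `pg-zinnia-data-production-v1.call_center_v1.call_insights` AS i
  ON i.callId = t.callId
  AND i.prompt_key LIKE '%Call%Type%'  -- placeholder pattern; the exact prompt key may differ
"""
df = client.query(sql).to_dataframe()
print(df.shape)
```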
Screenshot 2024-01-24 at 10.20.55 PM.png

Caller Role — CallerTypes

Join call_logger_xyz.CallerTypeID with CallerTypeID in call_logger_all_call_entries
left join on CallerTypeID in call_logger_xyz to get CallerTypes

Call Agents & Policy Advisors

Use call_logger_all_call_entries to join with Se2ivr.FIVE9_Call_Log on SESSION_ID/Session_ID
Se2ivr.FIVE9_Call_Log contains AGENT_LAST_NAME, DEST_AGENT_NAME
Se2ivr.FIVE9_Call_Log and FIVE9_ACD_Detail provide skill, service_level, advisor names, advisor department, etc.
Policy advisors are the people who sell the policy to customers. Oftentimes, advisors call into the call center; they are not the internal representatives or agents who take calls.
Call center agents can receive calls from both clients and policy advisors.
The process of generating records in FIVE9 is heavily manual. Sometimes the call center agents were supposed to generate a new session ID, but they did not.

Policy Information — Contract Number

The Contract column in call_logger_all_call_entries is the policy number pertaining to the call and is manually typed in by the caller. It’s the key to join other policy-related information.
Annuity policy — lifecad_prod
T_LIPO_POLICY.PO_POL_NUM can be linked with contract number.
Life insurance policy — FAST database
FAST is not in the warehouse yet.
Screenshot 2024-01-25 at 2.03.29 PM.png
Screenshot 2024-01-25 at 2.04.02 PM.png

Previous Development Datasets

pg-zinnia-data-development-v1

Simplified GPT Topics

zowie_stop_gap_analytics:
Previous agent_call_type and newest_agent_call_type pg-zinnia-data-development-v1.zowie_stop_gap_analytics.gpt_call_topics
zowie_stop_gap_analytics.call_types
Screenshot 2024-01-25 at 2.35.03 PM.png
Screenshot 2024-01-25 at 2.43.11 PM.png

Refined Agent Topics

Refined topics and previous agent topics: pg-zinnia-data-development-v1.zowie_stop_gap_analytics.call_types
newest_agent_call_type contains the merged topics and the rest of the previous agent topics from the Excel file.
agent_call_type contains all original topics
Screenshot 2024-01-25 at 2.53.12 PM.png

