Testing

User Manual Documentation of Testing

What is Testing?

Automated Test Scenario Assessment: let AI take the wheel. Testing is an essential step that ensures your AI solutions meet your business goals and perform as intended. It encompasses crafting customized tests, either one by one or in bulk using CSV imports. Automated scenario assessment simplifies the process by letting AI handle much of the evaluation, and the data collected from these tests offers valuable insights that can be used to enhance the AI's performance.
The key functionalities of testing within Feedloop AI include:
Scenario Management: Users can create, configure, and manage testing scenarios to assess the behavior of their AI agents.
Scenario Creation: You can create new testing scenarios manually or import them from CSV files. These scenarios typically consist of an expected question and answer that serve as the criteria for evaluating the AI responses.
Scenario Execution: You can initiate scenario testing, which involves multi-step testing, where each step is evaluated to determine if the AI responses meet expectations.
Scenario Overview: You can view detailed information about each testing scenario, including its name, creator, status (draft or published), and historical run information.
Testing Overview: This feature displays the expected question and answer, along with the pass/fail status of the scenario based on the most recent execution. In case of a failed scenario, the system provides reasons for the failure.
Testing is an essential part of ensuring the effectiveness and reliability of your AI applications and chatbots, helping you improve their performance and meet your specific needs.

User Manual

Getting Started

Navigate to your Feedloop AI project settings.
Within the project settings, find and select the desired agent that you want to test.
look for the "Testing" tab or section. This section is dedicated to managing and conducting testing scenarios for the selected agent.

Scenario Management

Creating Testing Scenarios

You can create new testing scenarios manually or import them from CSV files.
To create a scenario manually:
Click the "Create Scenario" button.
Choose "Manual" and enter the Scenario Name and choose Agent
Then Input Question, and Expected Answer.and Expected Answer.
Question: the user input or query that you expect the chatbot to receive. It should simulate a real user's question.
Expected Answer: guidance or rules on how the chatbot should respond when it encounters the expected question. The expected answer should be a helpful and informative response to the anticipated question.
Repeat this process by clicking the "Plus" button to create more scenarios one by one.
Click "Create Scenario."

To import scenarios using CSV:
Click the "Create Scenario" button.
Choose "Import by CSV."
Choose "Manual" and enter the Scenario Name and choose Agent
upload the CSV file containing your testing scenarios.

The CSV file format should have headers for "question" and "answer," and the fields should be separated by commas (","), which is the standard delimiter for CSV files.
Example CSV file: Scenario Example 1.csv
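For reference, a minimal illustration of the expected file contents might look like the lines below; the questions and answers are placeholders for illustration, not taken from the attached example file.

question,answer
"What are your opening hours?","We are open Monday to Friday, 9 AM to 5 PM."
"How do I reset my password?","Click 'Forgot password' on the login page and follow the instructions sent to your email."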

The system validates the CSV format. If the format is correct, an example CSV is displayed for reference; click "Create Scenario" to import the scenarios. If the format is incorrect, an error message is shown.

Editing Testing Scenarios

To edit a scenario:
Select the scenario from the list.
Modify the scenario details.

Deleting Testing Scenarios

To delete a scenario:
In the scenario list, select the scenario to delete.
Confirm the deletion when prompted.

Initiating Scenario Testing

Select the Scenario: From the list of scenarios you've created, choose the specific scenario that you want to test. The list typically includes scenarios you've manually created or imported from a CSV file.
Click the "Run Scenario" Button: Once you've selected the scenario you want to test, click the "Run Scenario" button. This action triggers the scenario evaluation process.
After running scenarios, you can analyze the results and gather insights on your chatbot's performance. The multi-step testing approach helps you identify areas for improvement and fine-tuning.

Understanding Multi-Step Testing

Scenario testing uses multi-step evaluation. This means that during the testing process, the system performs a series of steps, comparing the question and answer generated by the AI at each step with the expected question and answer specified for the scenario.
Here's a deeper understanding of this multi-step evaluation:
Step Comparison: At each step, the system compares the current question generated by the AI with the expected question specified in the scenario. Similarly, it compares the current answer generated by the chatbot with the expected answer provided in the scenario.
Pass or Fail: If a step comparison passes, it means that the chatbot's response matches the expected response for that step. In this case, the system proceeds to the next step in the scenario.
Failure Analysis: If a step comparison fails, the system may provide reasons for the failure. These reasons typically help in diagnosing why the chatbot's response didn't align with expectations. It could be due to issues in the chatbot's training data, dialogue flow, or understanding of the user's query.
Iterative Process: The multi-step testing process is often iterative. If one step fails, you can review the reasons for the failure and make necessary adjustments to improve the chatbot's performance. After making improvements, you can rerun the scenario to ensure the issues have been addressed.
This multi-step testing approach is essential for thoroughly evaluating and fine-tuning chatbot responses to ensure they align with the expected user interactions. It helps in enhancing the chatbot's performance and user experience.
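To make the flow concrete, the sketch below outlines how such a step-by-step evaluation could work in principle. It is an illustrative assumption only: the names ScenarioStep, evaluate_scenario, ask_agent, and answers_match are placeholders, not Feedloop AI's actual implementation or API.

# Illustrative sketch of a multi-step scenario evaluation loop.
# All names here are hypothetical and used only to show the control flow.
from dataclasses import dataclass

@dataclass
class ScenarioStep:
    expected_question: str
    expected_answer: str

def evaluate_scenario(steps, ask_agent, answers_match):
    # Run each step in order; stop at the first failing comparison.
    for index, step in enumerate(steps, start=1):
        current_answer = ask_agent(step.expected_question)
        if answers_match(current_answer, step.expected_answer):
            continue  # step passed; proceed to the next step
        return {
            "status": "fail",
            "failed_step": index,
            "reason": f"Answer did not match the expected answer: {current_answer!r}",
        }
    return {"status": "pass", "steps_run": len(steps)}

In the product, the comparison itself is performed by the AI-driven assessment described above rather than a simple string match; the sketch only illustrates the step-by-step control flow and where a failure reason would be captured.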

Viewing Testing Steps

When you click on a specific testing step, the system will display the following information:
Question: The user input or query that was expected during the testing.
Expected Answer: The predefined answer that your chatbot should provide when encountering the expected question.
Current Answer: The actual response given by your chatbot during the test.
Reason if Fail: If the test step fails, the system may provide a reason for the failure. This helps you pinpoint issues in your chatbot's responses.
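As a purely hypothetical illustration (not taken from the product), a failed step might read along these lines:

Question: What are your opening hours?
Expected Answer: We are open Monday to Friday, 9 AM to 5 PM.
Current Answer: Our team is available around the clock.
Reason if Fail: The current answer contradicts the expected business hours.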

Inspect Chat

By clicking the "Inspect Chat" option, you can access additional information to help you improve your chatbot's performance: ​
Facts: This section highlights the facts or information that were relevant during the conversation. It provides insights into what the user is looking for.
Relevant Training: The training that was relevant to the conversation. Understanding which training influenced the chat can help you fine-tune your AI.
Context: The context of the conversation is displayed, helping you understand how the AI interpreted and maintained context throughout the interaction.
Reason for Fail: If a testing step fails, you will receive insights into why the failure occurred. This information is valuable for enhancing your AI agent's responses and capabilities.