This product guide will help you discover the NIMA platform's functionalities and make sure you get the best value from them.
Throughout this product guide, and as you dive into using the NIMA product, if you have any feedback, want to suggest a feature improvement, or just want to discuss what's cooking on our product roadmap, please reach out to us @
We hope you enjoy the read, and we look forward to extensive product conversations with you.
Accessing the NIMA platform
Signing in to the NIMA platform
Signing in to the NIMA platform via SAML SSO
Setting up user accounts in NIMA
How to create user groups?
How to create manual moderation queues in NIMA?
Viewing the number of cases/flags pending review for each moderation queue, based on SLAs
How to develop a risk-based approach via our case prioritisation engine?
Viewing and browsing through cases as an admin for quick audits and investigation purposes
Walkthrough of the Flag Details screen's information architecture
How to view reporter analytics to inform product decisions?
Automated Workflow: Setting up an acknowledgement message for incoming User Reports
Automated Workflow: Setting up statement of reasons notifications to be sent to Policy Guidelines violators upon content moderation decisions
Automated Workflow: How can users appeal a content moderation decision (aka the Complaint Mechanism workflow)?
Configuring content moderation actions and connecting them to your platform server via webhooks
How to support LEA requests via the NIMA platform?
1. Accessing the NIMA platform
Our product and tech team will provide you with a domain-specific production environment so you can start using our tool NIMA. This production environment is a dedicated space for your company's Trust and Safety needs, requirements, and ecosystem: your company's data is scoped to a single, dedicated cloud environment and is neither shared with nor exposed to Tremau's other clients.
Together with the domain URL, the Tremau team will provide you with admin access and credentials so that you can get started.
2. Signing in to the NIMA platform
In your preferred browser, go to the following URL:
Enter the admin credentials provided by the Tremau team, and voilà!
To change your initial password, please follow the Forgot Password link.
3. Signing in to the NIMA platform via SAML SSO
The NIMA platform is SAML SSO ready; please reach out to our product and tech team so that we can set up your integration.
4. Setting up user accounts in NIMA
Once you are signed in to the NIMA platform, you can set up new user accounts for your team and colleagues via the User Management screen.
Instructions
Click on the Add New User CTA
Specify First Name, Last Name, Email, Role, Password and Password Confirmation, as well as Status, then click on the Create User CTA
Share with your newly created team member the email and password you have set up
Your newly created user can reset their password by clicking on the “Forgot Password“ link on the sign-in form
5. Creating User Groups in NIMA
NIMA gives you the ability to create user groups and assign users to them.
Creating user groups for your Trust and Safety space enables you to streamline your content moderation processes.
By creating groups of users, you can assign specific content moderation queues to each group, leveraging your team's expertise to tackle the reports they are specialised in.
For example, create a Hateful Speech Experts group and assign it only the flags that pertain to hateful speech.
Please refer to the “Create moderation queues“ section for further details.
Instructions
On the User Management Screen, go to the User Groups tab
Click on the Add New User Group CTA
Add a User Group Name and click on the Create User Group CTA
Go to an already created user and assign them one or more of the groups you have previously created
6. Creating Manual Moderation Queues in NIMA
Flexibility is at the core of the NIMA platform's design, to serve your Trust and Safety needs as best we can. Creating moderation queues is yet another way to streamline your content moderation processes.
We provide you with a query-based builder so that you can create moderation queues on the fly and autonomously, based on your specific needs and a set of predefined criteria.
Once you have tried it, please reach out to us via
The Query-Based Builder for creating moderation queues exposes a set of criteria you can use to redirect incoming flags to specific moderation queues.
These criteria are:
Channel - The incoming source of the report or flag. Examples: User Report, LEA, Trusted Flagger, a specific AI detection tool, …
Label - Provided by the reporter in the case of User Reports, the label(s) specify a potential category of harmful content (Pornography, Hateful Content, …); labels can also be provided by AI detection tools
Content Type - The NIMA platform is content agnostic, which means we support as many content types as your online platform needs (Image, Chat, Livestream Videos, Audio, …)
Priority - A priority tag is attached to each case based on a prioritisation score. Configuring the prioritisation score is discussed in the case prioritisation section of this document.
The Query-Based Builder UI lets you add as many conditions as you need to filter incoming flags and redirect them to the moderation queue you are creating.
In the above screenshot, we have created a moderation queue that contains flags:
Having Channel equals User Report
Having Label that contains CSAM and Nudity
Having Content Type that equals Video Stream
Having Priority that equals High
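To make these semantics concrete, here is a minimal sketch of how such a queue's conditions could be represented and evaluated. The field names and structure are illustrative assumptions, not NIMA's actual schema.

```python
# A minimal sketch of the example queue's conditions; field names are
# illustrative assumptions, not NIMA's actual schema.
live_stream_queue = {
    "name": "High-priority live stream reports",
    "conditions": [
        {"criterion": "channel", "operator": "equals", "value": "User Report"},
        {"criterion": "label", "operator": "contains", "value": ["CSAM", "Nudity"]},
        {"criterion": "content_type", "operator": "equals", "value": "Video Stream"},
        {"criterion": "priority", "operator": "equals", "value": "High"},
    ],
}

def matches(flag: dict, conditions: list[dict]) -> bool:
    """Return True when a flag satisfies every condition of the queue."""
    for cond in conditions:
        actual = flag.get(cond["criterion"])
        if cond["operator"] == "equals" and actual != cond["value"]:
            return False
        if cond["operator"] == "contains" and not set(cond["value"]) <= set(actual or []):
            return False
    return True
```

A flag is routed to the queue only when every condition holds, which mirrors the AND semantics of the conditions listed above.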
Attach specific actions to specific moderation queues:
NIMA enables you to attach specific actions to a specific moderation queue.
While creating a moderation queue, select every action you believe is relevant to the queue you are creating.
Assign a specific group of content moderators
Assign a specific group of content moderators to the newly created moderation queue; streamline your process and ensure your best team members process specific types of flags.
Content moderators who sign in to the NIMA platform will see only the queues that are assigned to them.
7. Viewing the number of cases/flags pending review for each moderation queue, based on SLAs
NIMA provides information on the number of cases pending review, based on three SLA buckets:
Cases pending review for < 24 hours
Cases pending review for < 48 hours
Cases pending review for > 48 hours
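For reference, here is a minimal sketch of how a case could be assigned to one of these three bands; the created_at field name is an assumption, not NIMA's actual schema.

```python
from datetime import datetime, timezone

# Illustrative helper bucketing a case into the three SLA bands above;
# "created_at" is an assumed field name, not NIMA's actual schema.
def sla_bucket(created_at: datetime, now: datetime | None = None) -> str:
    now = now or datetime.now(timezone.utc)
    pending_hours = (now - created_at).total_seconds() / 3600
    if pending_hours < 24:
        return "< 24 hours"
    if pending_hours < 48:
        return "< 48 hours"
    return "> 48 hours"
```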
Content moderators and Trust and Safety leaders need to understand how flags and reports are evolving and being processed on a day-to-day basis.
Based on how flags and cases are ramping up in your system, you can increase the number of individuals in specific content moderator user groups accordingly.
Viewing pending cases via the Dashboard screen
The “Pending cases in queues“ analytics component enables you to view at a glance how your content moderation queues are performing.
In the above example:
The Live Streaming Queue has a large number of flags that have been pending review for more than 48 hours
There are no pending cases in the CSAM and Hateful Speech queues
The default case view has a fair amount of pending cases within each SLA.
Viewing pending cases via the “My Queues“ screen
After a content moderator with User-level access has signed in to the NIMA platform, they are redirected to the “My Queues“ screen.
The “My Queues“ screen lists all queues assigned to the currently signed-in user and shows how many cases are pending review per SLA bucket, enabling the content moderator to set the right goals and aims for the day!
8. How to develop a risk-based approach via our case prioritisation engine
Case prioritisation depends on a few factors that can change over time. Depending on your business domain or the particular situation your online community may face at a given point in time, NIMA enables you to configure prioritisation rules based on a set of criteria.
Our case prioritisation engine enables your content moderation team to always address what you believe is most urgent. Never leave a report pending for more than x hours, and make sure threats to your community's users are treated as early as possible.
The prioritisation engine takes as input:
a criterion and a weight associated with it
for each criterion, the number of hours within which the case must be treated by your content moderators
a time factor
Cases can be time-sensitive: the longer an item sits in a moderation queue, the higher its priority should be.
Hence, our system uses two functions:
The time for which the case has been sitting in the moderation queue. This function increases as the time spent increases.
The time left until the deadline, if a deadline has been specified. This increases as the time left decreases, and increases exponentially once the time left is negative (i.e., the deadline has passed).
The maximum of the above results can then be used as a multiplier to compute the prioritisation score of the case/flag.
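The exact functions NIMA uses are internal; the sketch below simply illustrates the logic described above, with function shapes chosen to satisfy the stated properties (growing with time in queue, growing as the deadline approaches, exponential once it has passed).

```python
import math

# Illustrative sketch of the prioritisation logic described above;
# the function shapes and the 24-hour scale are assumptions.
def urgency_multiplier(hours_in_queue: float, hours_to_deadline: float | None) -> float:
    time_in_queue = 1.0 + hours_in_queue / 24.0  # increases with time spent in queue
    if hours_to_deadline is None:
        return time_in_queue
    # Increases as the time left shrinks; grows exponentially once negative.
    deadline_pressure = math.exp(-hours_to_deadline / 24.0)
    return max(time_in_queue, deadline_pressure)

def priority_score(criteria_weights: list[float],
                   hours_in_queue: float,
                   hours_to_deadline: float | None) -> float:
    base = sum(criteria_weights)  # weighted criteria score
    return base * urgency_multiplier(hours_in_queue, hours_to_deadline)
```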
You can add as many criteria as you need.
Based on the criteria and rules you create, a priority score is associated with each flag entering the system.
When content moderators start moderating a specific queue, flags are displayed to them in descending order of priority score.
9. Viewing and browsing through cases as an admin for quick audits and investigation purposes
NIMA offers two screens for your Trust and Safety Admins and Team Leaders to get detailed insights on cases being processed.
Instructions
In the menu items list, click on “Manage Screens“
Choose to query the “Processed“ cases list or the “Pending Review“ cases list
Visibility into the system is critical. Moreover, Trust and Safety team leaders and admins need to query the system to understand who in their team has done what for particular flags.
The Manage Cases screens enable you to query the system by:
- Filtering cases by priority
- Filtering cases by channel
- Filtering cases within a date range
- Searching cases by content_id, reporter_id, or reportee_id
- Searching for cases processed by a specific content moderator within a date range
Enable your team leaders to find out, with minimal effort, what action was taken by whom for a particular case.
👉 Surface the number of cases based on your search criteria
👉 View action taken for displayed cases based on your search criteria
Search cases by the user ids involved in the reports; we provide both reporter ids and reportee ids
View in which queue a particular case is contained
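As a rough illustration of these filter semantics, here is a minimal in-memory sketch; the field names are assumptions for the example, not NIMA's actual schema.

```python
# Illustrative in-memory sketch of the Manage Cases filters; field
# names such as "processed_on" are assumptions, not NIMA's schema.
def search_cases(cases, *, priority=None, channel=None,
                 date_from=None, date_to=None, moderator_id=None):
    results = []
    for case in cases:
        if priority and case["priority"] != priority:
            continue
        if channel and case["channel"] != channel:
            continue
        if date_from and case["processed_on"] < date_from:
            continue
        if date_to and case["processed_on"] > date_to:
            continue
        if moderator_id and case["moderator_id"] != moderator_id:
            continue
        results.append(case)
    return results
```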
10. Walkthrough of the Flag Details Screen’s Information Architecture
11. How to view reporter analytics to inform product decisions?
Our API accepts user-specific metadata that can inform content moderators' decisions.
In particular, our API requires you to send NIMA the user ids as they appear in your database. This enables us to derive valuable information, such as the total number of reports from a specific user.
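As a sketch of what this can look like client-side, here is a hypothetical report submission including those ids. The endpoint URL and field names are illustrative assumptions, not NIMA's documented API.

```python
import requests

# Illustrative report payload; the endpoint and field names below are
# assumptions for this sketch, not NIMA's documented API contract.
report = {
    "content_id": "post_8675309",
    "content_type": "Image",
    "label": ["Hateful Content"],
    "reporter_id": "user_42",    # id exactly as it appears in your database
    "reportee_id": "user_1337",  # id of the user being reported
}

response = requests.post("https://<your-domain>.nima.example/api/reports",
                         json=report, timeout=10)
response.raise_for_status()
```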
While processing a specific case, the content moderator can access this information by clicking on the reporter section card, as illustrated below:
12. Automated Workflow: Setting up an acknowledgement message for incoming User Reports
13. Automated Workflow: Setting up statement of reasons notifications to be sent to Policy Guidelines violators upon content moderation decisions
14. Automated Workflow: How can users appeal a content moderation decision (aka the Complaint Mechanism workflow)?
15. Configuring content moderation actions and connecting them to your platform server via webhooks
Actions in NIMA are essentially references to endpoints on your server. These API endpoints are called every time a content moderator clicks the corresponding action button.
NIMA enables you to configure actions and endpoints via two tabs: User and Post.
Configuring an action requires two mandatory inputs:
An action name
A URL that NIMA will use to call an API endpoint on your server when the action is triggered.
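On your side, this means exposing an endpoint that receives the action call. Here is a minimal sketch of such a receiver using Flask; the route and payload fields are assumptions for the example, not NIMA's documented webhook contract.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Illustrative receiver for a hypothetical "Ban User" action; the route
# and payload fields are assumptions, not NIMA's documented contract.
@app.post("/webhooks/nima/ban-user")
def ban_user():
    payload = request.get_json(force=True)
    user_id = payload.get("reportee_id")  # assumed field name
    case_id = payload.get("case_id")      # assumed field name
    # Apply the moderation decision in your own system here,
    # e.g. flag the account as banned in your database.
    print(f"Banning user {user_id} for case {case_id}")
    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```

The URL you register in NIMA would then point at this route on your server, so that each click of the action button triggers the corresponding update on your platform.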