
Source of Truth

About CACCA

Purpose and scope

What problem does CACCA exist to solve, and for whom, in one sentence?
CACCA helps organizations maintain compliance by continuously monitoring organizational risk against standards such as ISO 27001 and PCI DSS.
What is in scope vs. explicitly out of scope for the application (now vs. future)?
The entire policy lifecycle is in scope: defining policies, implementing them against each asset, automatic task generation, auditing, and continuous risk evaluation.
What does “continuous” mean in our context (data freshness, checks cadence, evidence updates) without referencing KPIs?
Continuous monitoring means real-time evaluation of the risk associated with the policies implemented on each asset.
What are the core promises CACCA makes to a user (qualitative outcomes only, not numeric metrics)?
Peace of mind about organizational risk, growing compliance maturity, real-time visibility into organizational risk, and ease of defining and implementing IT policies.

Core concepts and glossary

What does “Compliance Score” mean conceptually (inputs, dimensions) without formulas?
The score indicates the organization's risk level based on the impact of open risks on the organization.
What constitutes “audit readiness” in qualitative terms?
Because policies are implemented, maintained, and monitored on an ongoing basis, the organization is ready for an audit at any point in time.
How do we define relationships among entities (e.g., Policy → Controls → Tests → Evidence → Findings/Tickets → Risks → Assets)?
We start by defining and approving policies. Once approved, each policy is implemented against each asset. Once implemented, the tool automatically generates policy adherence tasks on the schedule defined in the policy; timely completion of these tasks is monitored, and SLA violations are reflected on the risk dashboard. Audits are likewise generated automatically on the schedule defined in the policies; the auditor performs the system-driven audit and records the findings in the system, and each finding is monitored until closure to ensure compliance. Any non-compliance or incomplete implementation automatically raises a risk in the risk register.
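The chain above (policy, asset, adherence task, auto-generated risk) can be sketched as a minimal data model. All class, field, and function names here are illustrative assumptions, not CACCA's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    status: str = "Draft"  # Draft -> In Review -> Waiting for Approval -> Approved

@dataclass
class Asset:
    name: str
    category: str  # e.g. "End user device", "Computing", "Network"

@dataclass
class Task:
    policy: Policy
    asset: Asset
    completed_on_time: bool = False  # missed completion counts as an SLA violation

@dataclass
class Risk:
    description: str
    level: str           # Critical / High / Medium / Low
    status: str = "Open"

def raise_risks(tasks: list[Task]) -> list[Risk]:
    """Non-compliance (a task not completed on time) auto-creates an open risk.

    The 'High' level here is an arbitrary placeholder, not CACCA's actual rating.
    """
    return [
        Risk(f"SLA violation: {t.policy.name} on {t.asset.name}", level="High")
        for t in tasks
        if not t.completed_on_time
    ]
```

For example, a missed patch-management task on a laptop would yield one open risk in the register, while a task completed on time yields none.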

Modules and capabilities (v1)

Organizational Dashboard (Main)
Items are as follows
Organizational Risk Score
Referenced from the Risk Module
The highest level among open risks is the overall organizational risk level; the pointer indicates the highest risk level among all open risks.
Representation could be a gauge, with red towards the right indicating criticality.
From the right: Critical 1, High 4, Medium 3, Low 40. The number beside each level is the count of risks at that level; the area of each color band could be proportional to that count.
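The gauge logic above can be sketched as follows. The level names and ordering come from the list above; the function names and the "no open risks defaults to Low" rule are assumptions:

```python
from collections import Counter

LEVELS = ["Critical", "High", "Medium", "Low"]  # ordered highest to lowest

def open_risk_counts(open_risk_levels: list[str]) -> dict[str, int]:
    """Count open risks per level; sizes the gauge's color bands."""
    counts = Counter(open_risk_levels)
    return {level: counts.get(level, 0) for level in LEVELS}

def overall_risk_level(open_risk_levels: list[str]) -> str:
    """The pointer indicates the highest level with at least one open risk."""
    for level in LEVELS:
        if level in open_risk_levels:
            return level
    return "Low"  # assumption: no open risks defaults to the lowest level
```

With the example counts from the mock-up (1 Critical, 4 High, 3 Medium, 40 Low), the pointer would sit at Critical even though most risks are Low.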
Risk Level Status
Risk Module
Total and open risks by level and status (the gauge above shows level only). X axis: level name (Critical, High, Medium, Low); Y axis: count by status.
Policy Status
Policy module - status of the policy
Counts by status: Approved, Draft, In Review, Waiting for Approval
Policy Implementation Status
About 24-30 policies may appear on the x axis.
x axis: policy name / y axis: count by implementation status
Implementation statuses
Implemented
Partially Implemented
Not Implemented
Audit Findings
Counts by category and closure status, from the audit findings tab
Status Values
Open
Closed
Category Values
Major NC
Minor NC
Observation
Policy Implementation Status - Asset
Asset Category - x axis
Categories
End user device
Computing
Network
Application
General
y axis - implementation status (count)
Statuses
Compliant
Non Compliant
Partially Compliant
SLA breached tickets
Number - count
Running Risk
Number of SLA-violated tickets against each policy
X - policy name
y - SLA violated tickets count
Bar chart with a trend line overlay
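The data behind the "Running Risk" chart above is a simple aggregation. The ticket shape here (a dict with a policy name and an SLA flag) is an assumption for illustration, not the real ticket schema:

```python
from collections import Counter

def sla_violations_by_policy(tickets: list[dict]) -> dict[str, int]:
    """Count SLA-violated tickets per policy: the bar chart's y-axis values,
    keyed by the x-axis policy name."""
    return dict(Counter(t["policy"] for t in tickets if t["sla_breached"]))
```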
Policy Development & Workflow
What is the policy lifecycle (draft → review → approval → publish → deprecate)?
The application holds a repository of 30+ predefined policy templates that can be imported directly as drafts and configured further to each client organization's requirements. Once a policy is approved in the system, it is published and ready for execution and monitoring. Any future change to a policy follows the same review and approval lifecycle.
What templates, versioning, and approval mechanics are supported?
About 30+ policy templates are available. Versioning can start from a custom base number, from which further versions are numbered. Approval is entirely system driven with a maker-checker model: one person configures the policy, a second reviews it, and a third approves it.
How are policies linked to standards, controls, assets, risks, and tickets?
Each policy statement is mapped to specific standards such as ISO 27001 and PCI DSS, and to specific asset categories. Each asset in turn is mapped to its relevant policy statements, so the implementation of each statement can be tracked. Tickets are generated for all activities configured in the policies; late closure of these tickets is monitored and reported as an SLA violation. Incomplete implementation of a policy is automatically reported in the risk register.
Policy Implementation
How do we track coverage and status across departments/assets?
Within the tool, the implementation status and the implementation procedure are documented both against each policy statement and against each asset.
What qualifies as “implemented,” “partially implemented,” or “not-implemented”?
Implemented: the actions defined in the policy are adhered to for all applicable assets.
Partially implemented: the actions are adhered to for some, but not all, applicable assets.
Not implemented: the actions are not performed at all.
How are gaps surfaced and assigned?
If a policy is not implemented or partially implemented, a risk is automatically generated in the risk register.
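The three statuses and the gap-to-risk rule can be sketched as below. The function names and the per-asset boolean adherence flags are illustrative assumptions:

```python
def implementation_status(adhered: list[bool]) -> str:
    """Derive a policy's status from per-asset adherence flags."""
    if adhered and all(adhered):
        return "Implemented"          # adhered to for all assets
    if any(adhered):
        return "Partially Implemented"  # adhered to for only some assets
    return "Not Implemented"          # not performed at all

def gap_risks(policy_name: str, adhered: list[bool]) -> list[str]:
    """Any status other than 'Implemented' surfaces a risk in the register.

    Returns risk descriptions only; a real register entry carries more fields.
    """
    status = implementation_status(adhered)
    return [] if status == "Implemented" else [f"{status}: {policy_name}"]
```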
Audit Module
How do users plan and scope audits?
Audits are generated automatically as defined in the policies, and the auditor performs and records them within the system. The audit is entirely system driven: the system generates questions against all approved policies for the auditor. Once the initial audit is complete, a peer auditor reviews it and, if any evidence is unsatisfactory, can return the audit for further information capture.
How are findings captured, rated, and linked to evidence and entities?
All audit findings are captured in a centralized audit register and can be assigned to their respective owners for closure. Findings from external audits performed by a compliance authority or a third party can also be registered.
What artifacts/reports can be generated from within the module?
Two reports can be generated: an audit findings report (for a single audit or across all audits) and a complete audit report.
Ticket Module
What events create tickets (violations, failed tests, manual)?
Tickets are created on the schedules defined in the policies against each asset category, or on schedules configured by the user.
What statuses and transitions exist (new, triage, in-progress, resolved, verified)?
All tickets generated in the ticket module are automatically pushed to the external ticketing system. The ticket module only fetches the status back, and the only statuses we map are open and closed.
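The status mapping described above collapses whatever the external ticketing system reports into the two statuses CACCA tracks. The external status names below are examples; each integration would supply its own mapping:

```python
# External statuses treated as closed; illustrative, not a fixed list.
CLOSED_STATES = {"closed", "resolved", "done", "cancelled"}

def cacca_status(external_status: str) -> str:
    """Map an external ticketing-system status to CACCA's Open/Closed."""
    return "Closed" if external_status.strip().lower() in CLOSED_STATES else "Open"
```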
How are escalations and ownership handled inside CACCA?
Ownership of each ticket is assigned based on the organizational roles created within the system.
Risk Register
What risk taxonomy do we support (threats, vulnerabilities, controls, processes)?
The risk register follows the complete risk lifecycle: identification, analysis, treatment, post-treatment analysis, acceptance, and closure.
How are risks identified, assessed, and treated?
Risks are identified either:
Automatically, based on non-compliance with a policy, or
Manually, by the organization.
Each risk has a specific owner, whose responsibility is to perform impact analysis, risk treatment, implementation of the identified treatment, residual risk analysis, and closure.
How are risks tied to policies, assets, tickets, and audits?
Each risk is mapped to the respective assets; any policy implementation failure automatically generates a risk.
Asset Register
What asset types are represented (apps, services, cloud resources, data stores, identities, endpoints)?
The asset types are:
End user devices (laptops, desktops, mobiles, etc.)
Computing (servers, storage, etc.)
Networking (switches, routers, access points, etc.)
Security devices (firewalls, web applications, etc.)
Infrastructure applications (Active Directory, DNS, databases, application servers, etc.)
Business applications (ERP, CRM, etc.)
General (websites, domain names, SSL certificates, etc.)
Components (keyboards, mice, monitors, etc.)
How are assets discovered/ingested and classified?
Assets are either ingested and classified manually, or the system can be integrated directly with external asset management tools to capture assets automatically.
How do assets relate to policies, controls, risks, and findings?
Based on its asset category, each asset is mapped to the respective policy statements. The asset owner updates the implementation status of each policy statement. The respective assets are also mapped against each risk, and this mapping can be viewed from the individual asset.
The owner and the custodian of an asset are recorded in the asset module along with other asset details such as asset status, hardware/software details, purchase details, and asset value.
Each asset is further mapped to the users who have access to it and their privileges, to associated assets, and to risks.

Integrations and data sources (definition-level)

Which source categories does CACCA integrate with conceptually (cloud providers, IAM, SIEM/logs, ITSM, CMDB, code repos)?
Ticketing systems and asset management systems today; future updates will add integrations with cloud providers, SIEM/log platforms, ITSM, CMDB, code repositories, IAM, etc.
For each category, what types of signals/configs/events are relevant to compliance and evidence?
The policy statements mapped to each asset category.
What connection modes do we support in principle (API, webhooks, agentless, agent-based)?
API based.

For later

Control library and framework mapping
Which compliance frameworks are represented conceptually (e.g., SOC 2, ISO 27001, GDPR) and how is mapping handled?
How are controls defined, grouped, and versioned?
What types of control tests exist (automated, semi-automated, manual attestation)?
How do controls link to evidence, assets, policies, tickets, and audit findings?
Evidence management and auditability
What counts as acceptable evidence types (logs, configs, screenshots, attestations, exports, API snapshots)?
How is evidence collected (auto-capture, upload, API snapshot) and associated with tests/findings?
How is provenance handled (timestamps, source, collector, immutability)?
How are redaction, retention, and chain-of-custody handled within the app?
How can users assemble “audit packets” or scoped data rooms?
Automation, rules, and exceptions
What triggers exist for automation (policy update, failed control, configuration drift, missed attestation)?
What rule types are supported (conditions, thresholds, schedules, owners)?
How are exceptions and compensating controls modeled, approved, and expired?
How is auto-remediation represented (conceptually) vs. human-in-the-loop workflows?
User roles, access, and permissions
What default roles/personas exist (Admin, Compliance Manager, Auditor, Control Owner, IT Ops, Executive)?
What permissions and separation-of-duties expectations do we enforce?
How are scoped views supported (per business unit, asset group, framework, project)?
How do external auditors access the platform (read-only, scoped, time-bound)?
Dashboards, analytics, and reporting (structure, not metrics)
What standard dashboards exist per persona and module?
What drill-down and pivot capabilities are expected?
What standard reports exist (board, operational, audit) and what can be customized?
What export and sharing options are supported (CSV, PDF, APIs, links)?
Security and privacy posture (application-level)
How do we isolate tenants conceptually (data, compute, keys)?
How is data protected in transit and at rest (encryption approach)?
How do we manage secrets and credentials for integrations?
What sensitive data classes may appear (PII, secrets, configs) and how are they classified and protected?
What internal audit logs does the application maintain (admin actions, access, exports)?
Architecture and platform characteristics
What are the major subsystems (ingestion, processing, storage, analytics, UI, integration layer)?
What is the intended deployment model (multi-tenant SaaS; any dedicated options conceptually)?
What data stores are used conceptually (transactional store, object store for evidence, search index)?
How do we design for scalability and bursty ingestion at a high level?
What is the high-level approach to backup/restore and disaster recovery?
Performance, reliability, and operability (definitions)
What does “near real-time” mean for different signal types (definitions, not targets)?
How are rate limits, quotas, and back-pressure handled conceptually?
What operational health indicators are exposed to admins (integration health, job status, evidence capture status)?
What is our approach to multi-region resilience and failover conceptually?
Configuration and customization
What can customers configure (frameworks enabled, policies/templates, fields, workflows, access scopes)?
How is the compliance score model customizable at a high level (weights, dimensions, guardrails)?
What theming/branding and localization options are available?
How are configuration changes tracked and auditable?
APIs, webhooks, and extensibility
What APIs are exposed (entities, search, events) and authentication model (conceptual)?
What webhooks/notifications exist for key events?
What SDKs or integration patterns do we envision for partners and customers?
How do we support scripted or low-code automation safely?
Notifications and communication
What notification channels exist (in-app, email, chat integrations)?
How are notification preferences, digesting, and noise controls handled?
How do we highlight “next best action” to users?
Internationalization and localization
What time zones, locales, and date/number formats are supported?
What languages are planned and how is translation handled?
How are region-specific frameworks and data residency represented in the app?
Trust, transparency, and governance (in-product)
What in-product trust signals exist (status page links, subprocessor list, security docs)?
What configurable data retention and deletion controls are exposed to tenants?
How are access reviews and periodic attestations handled within CACCA?
Versioning and change management
How are policy, control, and framework updates versioned and communicated?
How are breaking changes avoided or migrated (deprecations, compatibility windows)?
How are release notes and feature toggles handled in-product?
Known constraints and assumptions
What minimum customer prerequisites exist (identity source, ticketing system, cloud provider)?
What data volume and scale assumptions guide the design?
What limitations exist in v1 that customers should be aware of?

Product and Positioning

Category and one‑line positioning

What category do we intentionally claim: “Continuous Assurance,” “Next‑gen GRC,” or something else?
What is the exact one‑line positioning statement we want prospects to repeat verbatim?
What short tagline best reinforces it (5–7 words, memorable)?

Core value narrative (before/after)

In plain language, what is the customer’s “life before CACCA” vs. “life after CACCA”?
Which 2–3 moments of chaos we eliminate should anchor this story (e.g., audit scramble, undefined ownership, evidence hunting)?
What’s the simplest metaphor that explains CACCA (e.g., “autopilot for compliance operations”)?

Differentiators and “only CACCA” moments

What are the top 3–5 product differentiators versus spreadsheets/status‑quo and well‑known platforms?
Which “only CACCA does this” moments must we highlight (e.g., policy‑driven auto‑audits, auto‑risk creation from implementation gaps)?
Which differentiators are provable in a 10‑minute demo without integrations?

Product pillars and proof

What 3–4 product pillars will we message consistently (e.g., policy‑to‑evidence traceability, real‑time risk view, system‑driven audits)?
For each pillar, what in‑product proof point or screen validates it instantly?
What language do we use to tie each pillar to a buyer’s pain (no metrics, just outcomes)?

Ideal buyer and “job” we’re hired for

Who is the primary buyer and primary champion we design copy for (titles)?
What single “job‑to‑be‑done” does the buyer expect CACCA to own end‑to‑end?
What secondary jobs do we acknowledge but deprioritize in positioning to avoid scope creep?

Competitive framing (positioning, not feature list)

Which alternatives do we want to be compared against—and which do we want to avoid?
What short “why we win” statement applies in our best‑fit scenarios?
What disarming line reframes common objections (“We already have a GRC tool…”) without going feature‑by‑feature?

Trust and assurance signals

What in‑product trust signals do we emphasize in positioning (immutable evidence, system audit logs, maker‑checker approvals)?
How do we describe external‑auditor access/read‑only views as a confidence booster?
What privacy/security phrasing belongs on product pages (without diving into certifications)?

Language, taxonomy, and terms to standardize

Which core terms do we standardize in outward copy (Policy, Control, Test, Evidence, Finding, Ticket, Risk, Asset, Exception)?
Are there terms we will avoid or replace because customers find them confusing?
How do we explain “Compliance Score” conceptually in one sentence for non‑experts?

Hero demo storyline (positioning lens)
