How is data being consumed from our internal sources (e.g., RDS)?
A lightweight 5X‑managed connector container (Airbyte‑compatible) pulls the authorised rows/columns over a secure private link and streams them into the Kubernetes cluster.
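To make this concrete, here is a minimal, illustrative sketch of how such a source connector might read only the authorised columns from an RDS Postgres instance over the private link and stream them in batches; the host, credentials, table, and column names are placeholders, not the actual connector code.

```python
# Hedged sketch: pull only the allow-listed columns and stream rows in batches,
# so nothing is buffered or persisted outside the running job.
import psycopg2

AUTHORISED_COLUMNS = ["order_id", "created_at", "amount"]  # assumed allow-list

def stream_rows(batch_size=10_000):
    conn = psycopg2.connect(
        host="rds.internal.example",  # reached via the private link, not the public internet
        dbname="orders",
        user="readonly_sync",
        password="...",               # injected from a secret store in practice
        sslmode="require",
    )
    try:
        # Server-side (named) cursor: rows are fetched incrementally, never fully buffered.
        with conn.cursor(name="sync_cursor") as cur:
            cur.execute(f"SELECT {', '.join(AUTHORISED_COLUMNS)} FROM orders")
            while True:
                batch = cur.fetchmany(batch_size)
                if not batch:
                    break
                yield batch           # handed to the in-cluster pipeline, not written to disk here
    finally:
        conn.close()
```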
How is data being processed (ETL) on your side?
Ingestion pipelines and transformation models run in a Kubernetes (k8s) cluster. Other platform workloads, including modeling, data apps, the semantic layer, and BI, also run in an isolated k8s cluster.
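As an illustration of how a single ingestion run is executed in that cluster, the sketch below launches one run as a Kubernetes Job using the official kubernetes Python client; the namespace, image, and environment variable names are assumptions for the example, not our exact manifests.

```python
# Hedged sketch: one ingestion run as a short-lived Kubernetes Job.
from kubernetes import client, config

def launch_ingestion_job(pipeline_id: str):
    config.load_incluster_config()  # the orchestrator itself runs inside the cluster
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"ingest-{pipeline_id}"),
        spec=client.V1JobSpec(
            ttl_seconds_after_finished=600,  # finished pod and its ephemeral volume are garbage-collected
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="connector",
                            image="registry.internal/connector:latest",  # placeholder image
                            env=[client.V1EnvVar(name="PIPELINE_ID", value=pipeline_id)],
                        )
                    ],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="ingestion", body=job)
```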
How is data (temporarily) stored on your side (caching, etc.)?
Rows that are ingested as part of ETL pipelines exist only in ephemeral pod volumes for the duration of a job. We persist only metadata (job status, column stats, lineage) in the Control Plane’s Metadata DB.
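The retention split can be sketched as follows: row data lives only in a per-job temporary directory, while the record persisted to the Control Plane's Metadata DB contains metadata only. The endpoint URL and field names below are illustrative assumptions, not the real Control Plane API.

```python
# Hedged sketch: working row data is ephemeral; only job metadata is persisted.
import shutil
import tempfile
import requests

def run_job(job_id: str, batches):
    workdir = tempfile.mkdtemp(prefix=f"job-{job_id}-")  # stands in for the ephemeral pod volume
    row_count = 0
    try:
        for batch in batches:
            row_count += len(batch)
            # ... transform and forward the batch; any files stay under workdir ...
        requests.post(
            "https://control-plane.internal/api/jobs",   # placeholder endpoint
            json={
                "job_id": job_id,
                "status": "succeeded",
                "row_count": row_count,                  # stats and lineage only, never row values
            },
            timeout=10,
        )
    finally:
        shutil.rmtree(workdir, ignore_errors=True)       # working data removed when the job ends
```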
How do you feed data back into our warehouse (e.g., Redshift)?
The paired destination connector writes batch files or streaming inserts directly into your Redshift cluster or S3 bucket; once the write succeeds, the pipeline deletes its working data.
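A simplified sketch of that destination step is shown below: a batch file is staged to your S3 bucket, loaded into Redshift with a COPY, and the local working file is deleted once the load succeeds. The bucket, table, cluster host, and IAM role names are placeholders for the example.

```python
# Hedged sketch: stage to the customer's S3 bucket, COPY into Redshift, then clean up.
import os
import boto3
import psycopg2

def load_batch(local_path: str, batch_id: str):
    bucket, key = "customer-staging-bucket", f"5x/{batch_id}.csv.gz"
    boto3.client("s3").upload_file(local_path, bucket, key)  # batch file lands in *your* bucket

    conn = psycopg2.connect(host="redshift.internal.example", port=5439,
                            dbname="analytics", user="loader", password="...")
    with conn, conn.cursor() as cur:  # commits on successful exit
        cur.execute(
            f"""COPY analytics.orders
                FROM 's3://{bucket}/{key}'
                IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
                GZIP CSV"""
        )
    conn.close()

    os.remove(local_path)  # working data deleted after the write succeeds
```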