Support for initiating workflows via webhook calls, enabling integration with external systems and event-driven architectures.
11/12/2024
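If implemented, an external system could kick off a run with a simple authenticated HTTP POST. The sketch below is purely illustrative; the endpoint path, auth header, and payload fields are assumptions, not an existing API.

```python
import requests

# Hypothetical endpoint and token; the real URL, auth scheme, and payload
# shape would be defined by the platform once webhook triggers exist.
WEBHOOK_URL = "https://orchestration.example.com/api/v1/workflows/1234/trigger"
API_TOKEN = "replace-with-your-token"

response = requests.post(
    WEBHOOK_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"triggered_by": "external-system", "run_params": {"full_refresh": False}},
    timeout=30,
)
response.raise_for_status()
print("Workflow run started:", response.json())
```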
Stream logs for running jobs in real time instead of waiting for the entire job to finish

11/30/2024
Implementation of Slim CI (Continuous Integration) practices for dbt, enabling faster and more efficient CI processes by selectively running only the models and tests affected by recent changes.
11/12/2024
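dbt supports this selection pattern natively through state comparison, so a CI block could look roughly like the sketch below. The `--select state:modified+`, `--state`, and `--defer` flags are standard dbt CLI options; the artifacts path and the subprocess wrapper are assumptions about how an orchestration block might call them.

```python
import subprocess

# Path to the production run's artifacts (manifest.json) used as the
# comparison baseline; this location is an assumption.
PROD_ARTIFACTS = "./prod-run-artifacts"

subprocess.run(
    [
        "dbt", "build",
        "--select", "state:modified+",   # changed nodes plus everything downstream
        "--state", PROD_ARTIFACTS,       # baseline manifest to diff against
        "--defer",                       # resolve unbuilt upstream refs to production
    ],
    check=True,
)
```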
Add a 'Next Run' timestamp to the Job listing page
11/7/2024
Block - Refresh a Dashboard
Functionality to automatically refresh connected dashboards upon completion of relevant data processing tasks.
10/16/2024
Conditional logic within workflows, allowing different execution paths based on specified conditions or data states.
10/10/2024
Export data from warehouse to cloud storage
Run downstream block if one upstream dependency has failed
Option to continue partial workflow execution even if one upstream dependency fails, providing flexibility in error handling.
10/9/2024
Import Dagster repo directly and run in Orchestration as a block
Feature to directly import and execute Dagster repositories within the orchestration platform.
10/3/2024
Functionality to re-run historical data through the workflow, useful for processing retroactive data or fixing past errors.
10/16/2024
Wait for callback actions
Ability to pause workflow execution at specific points, waiting for external actions or human interventions before proceeding.
10/16/2024
Add functionality to duplicate pipelines.
Multiple HH and MM combinations on scheduled trigger option
For the scheduled node's "Custom Time" configuration, enable the selection of multiple HH and MM combinations (currently limited to just one).
To build custom connectors using the Custom Node, introduce the ability to send and receive the last run state of the code. This state, typically a Python dictionary, keeps track of the last run's time offset and can be accessed on the next run.
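A minimal sketch of how such a state hand-off could work inside a Custom Node, assuming a hypothetical entry point that receives the previous run's state and returns the new one; the function signature and the source-API helper are illustrative, not an existing interface.

```python
from datetime import datetime, timezone


def fetch_from_source_api(since: str) -> list[dict]:
    # Placeholder for the connector's real API call; returns records
    # changed since the given timestamp.
    return [{"id": 1, "updated_at": since}]


def run(state: dict | None) -> tuple[list[dict], dict]:
    """Hypothetical Custom Node entry point: receives the previous run's
    state and returns (records, new_state) so the next run can resume
    from the saved offset."""
    state = state or {}
    last_offset = state.get("last_run_at", "1970-01-01T00:00:00Z")

    # Incremental pull: fetch only records newer than the saved offset.
    records = fetch_from_source_api(since=last_offset)

    new_state = {"last_run_at": datetime.now(timezone.utc).isoformat()}
    return records, new_state
```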
One of the biggest challenges when using custom code is efficiently loading data into the warehouse. Typically, this is handled through either a warehouse streaming API or database toolkit libraries like SQLAlchemy or Psycopg. However, to streamline this process, it would be beneficial to introduce a node capable of reading output data (in formats like data frames or JSON) from the previous state. By incorporating schema evolution and a data injection node, this feature could automatically push data into the warehouse. It should offer the flexibility to:
- Select merge keys for combining new data with existing records.
- Choose a snapshotting option to manage historical versions of data.
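A minimal sketch of the merge behaviour such a node could offer, using pandas purely for illustration; the function name, column names, and the snapshot flag are assumptions rather than an existing feature.

```python
import pandas as pd


def upsert(existing: pd.DataFrame, incoming: pd.DataFrame,
           merge_keys: list[str], snapshot: bool = False) -> pd.DataFrame:
    """Combine new output data with existing warehouse records.

    merge_keys: columns identifying a record (drive the upsert).
    snapshot:   if True, keep superseded rows instead of overwriting them,
                preserving historical versions.
    """
    if snapshot:
        # Append everything and tag each load so old versions stay queryable.
        incoming = incoming.assign(_loaded_at=pd.Timestamp.now(tz="UTC"))
        return pd.concat([existing, incoming], ignore_index=True)

    # Plain upsert: incoming rows replace existing rows with the same keys.
    combined = pd.concat([existing, incoming], ignore_index=True)
    return combined.drop_duplicates(subset=merge_keys, keep="last")


# Example usage with hypothetical columns:
existing = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
incoming = pd.DataFrame({"id": [2, 3], "value": ["b2", "c"]})
print(upsert(existing, incoming, merge_keys=["id"]))
```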
Average run duration for a job on the Job Overview Page
Showing the average duration is helpful for quickly getting an overview of all jobs and understanding the overall orchestration.
Ability to start workflows based on specific events or conditions in connected systems (e.g., S3, Azure Blob Storage, Google Cloud Storage), enhancing automation.
11/20/2024