Compare commits


2 Commits

Author SHA1 Message Date
6b9d7d6cf2 still working on workflows. 2026-02-27 16:57:10 -06:00
daeff10f18 [WIP] Workflows 2026-02-27 16:34:17 -06:00
96 changed files with 5893 additions and 2098 deletions


@@ -54,7 +54,7 @@ attune/
## Service Architecture (Distributed Microservices)
1. **attune-api**: REST API gateway, JWT auth, all client interactions
2. **attune-executor**: Manages execution lifecycle, scheduling, policy enforcement, workflow orchestration
3. **attune-worker**: Executes actions in multiple runtimes (Python/Node.js/containers)
4. **attune-sensor**: Monitors triggers, generates events
5. **attune-notifier**: Real-time notifications via PostgreSQL LISTEN/NOTIFY + WebSocket
@@ -126,6 +126,11 @@ docker compose logs -f <svc> # View logs
```
Sensor → Trigger fires → Event created → Rule evaluates →
Enforcement created → Execution scheduled → Worker executes Action
For workflows:
Execution requested → Scheduler detects workflow_def → Loads definition →
Creates workflow_execution record → Dispatches entry-point tasks as child executions →
Completion listener advances workflow → Schedules successor tasks → Completes workflow
```
**Key Entities** (all in `public` schema, IDs are `i64`):
@@ -210,14 +215,30 @@ Enforcement created → Execution scheduled → Worker executes Action
- **JSON Fields**: Use `serde_json::Value` for flexible attributes/parameters, including `execution.workflow_task` JSONB
- **Enums**: PostgreSQL enum types mapped with `#[sqlx(type_name = "...")]`
- **Workflow Tasks**: Stored as JSONB in `execution.workflow_task` (consolidated from separate table 2026-01-27)
- **FK ON DELETE Policy**: Historical records (executions) use `ON DELETE SET NULL` so they survive entity deletion while preserving text ref fields (`action_ref`, `trigger_ref`, etc.) for auditing. The `event`, `enforcement`, and `execution` tables are TimescaleDB hypertables, so they **cannot be the target of FK constraints**: `enforcement.event`, `execution.enforcement`, `inquiry.execution`, `workflow_execution.execution`, `execution.parent`, and `execution.original_execution` are plain BIGINT columns (no FK) and may become dangling references if the referenced row is deleted. Pack-owned entities (actions, triggers, sensors, rules, runtimes) use `ON DELETE CASCADE` from pack. Workflow executions cascade-delete with their workflow definition.
- **Event Table (TimescaleDB Hypertable)**: The `event` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Events are **immutable after insert** — there is no `updated` column, no update trigger, and no `Update` repository impl. The `Event` model has no `updated` field. Compression is segmented by `trigger_ref` (after 7 days) and retention is 90 days. The `event_volume_hourly` continuous aggregate queries the `event` table directly.
- **Enforcement Table (TimescaleDB Hypertable)**: The `enforcement` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Enforcements are updated **exactly once** — the executor sets `status` from `created` to `processed` or `disabled` within ~1 second of creation, well before the 7-day compression window. The `resolved_at` column (nullable `TIMESTAMPTZ`) records when this transition occurred; it is `NULL` while status is `created`. There is no `updated` column. Compression is segmented by `rule_ref` (after 7 days) and retention is 90 days. The `enforcement_volume_hourly` continuous aggregate queries the `enforcement` table directly.
- **Execution Table (TimescaleDB Hypertable)**: The `execution` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Executions are updated **~4 times** during their lifecycle (requested → scheduled → running → completed/failed), completing within at most ~1 day — well before the 7-day compression window. The `updated` column and its BEFORE UPDATE trigger are preserved (used by timeout monitor and UI). Compression is segmented by `action_ref` (after 7 days) and retention is 90 days. The `execution_volume_hourly` continuous aggregate queries the execution hypertable directly. The `execution_history` hypertable (field-level diffs) and its continuous aggregates (`execution_status_hourly`, `execution_throughput_hourly`) are preserved alongside — they serve complementary purposes (change tracking vs. volume monitoring).
- **Entity History Tracking (TimescaleDB)**: Append-only `<table>_history` hypertables track field-level changes to `execution` and `worker` tables. Populated by PostgreSQL `AFTER INSERT OR UPDATE OR DELETE` triggers — no Rust code changes needed for recording. Uses JSONB diff format (`old_values`/`new_values`) with a `changed_fields TEXT[]` column for efficient filtering. Worker heartbeat-only updates are excluded. There are **no `event_history` or `enforcement_history` tables** — events are immutable and enforcements have a single deterministic status transition, so both tables are hypertables themselves. See `docs/plans/timescaledb-entity-history.md` for full design.
- **History Large-Field Guardrails**: The `execution` history trigger stores a compact **digest summary** instead of the full value for the `result` column (which can be arbitrarily large). The digest is produced by the `_jsonb_digest_summary(JSONB)` helper function and has the shape `{"digest": "md5:<hex>", "size": <bytes>, "type": "<jsonb_typeof>"}`. This preserves change-detection semantics while avoiding history table bloat. The full result is always available on the live `execution` row. When adding new large JSONB columns to history triggers, use `_jsonb_digest_summary()` instead of storing the raw value.
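The digest shape above can be sketched in Rust. This is a minimal illustration only: the real `_jsonb_digest_summary` runs inside PostgreSQL and uses md5, whereas this sketch substitutes std's `DefaultHasher` just to keep the example dependency-free while showing the `{digest, size, type}` shape.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative stand-in for the SQL `_jsonb_digest_summary(JSONB)` helper.
// Assumption: the real function hashes the JSONB text with md5; DefaultHasher
// is used here only so the sketch has no external dependencies.
fn jsonb_digest_summary(raw: &str, json_type: &str) -> String {
    let mut hasher = DefaultHasher::new();
    raw.hash(&mut hasher);
    format!(
        "{{\"digest\": \"md5:{:016x}\", \"size\": {}, \"type\": \"{}\"}}",
        hasher.finish(),
        raw.len(),
        json_type
    )
}
```

The point of the shape is that change detection (`IS DISTINCT FROM` on the digest) still works without storing the arbitrarily large `result` value in the history row.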
- **Nullable FK Fields**: `rule.action` and `rule.trigger` are nullable (`Option<Id>` in Rust) — a rule with NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, and `event.source` are also nullable. `enforcement.event` is nullable but has no FK constraint (event is a hypertable). `execution.enforcement` is nullable but has no FK constraint (enforcement is a hypertable). All FK columns on the execution table (`action`, `parent`, `original_execution`, `enforcement`, `executor`, `workflow_def`) have no FK constraints (execution is a hypertable). `inquiry.execution` and `workflow_execution.execution` also have no FK constraints. `enforcement.resolved_at` is nullable — `None` while status is `created`, set when resolved.
**Table Count**: 20 tables total in the schema (including `runtime_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, and `execution` hypertables)
**Migration Count**: 9 migrations (`000001` through `000009`) — see `migrations/` directory
- **Pack Component Loading Order**: Runtimes → Triggers → Actions → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order.
### Workflow Execution Orchestration
- **Detection**: The `ExecutionScheduler` checks `action.workflow_def.is_some()` before dispatching to a worker. Workflow actions are orchestrated by the executor, not sent to workers.
- **Orchestration Flow**: Scheduler loads the `WorkflowDefinition`, builds a `TaskGraph`, creates a `workflow_execution` record, marks the parent execution as Running, builds an initial `WorkflowContext` from execution parameters and workflow vars, then dispatches entry-point tasks as child executions via MQ with rendered inputs.
- **Template Resolution**: Task inputs are rendered through `WorkflowContext.render_json()` before dispatching. Supports `{{ parameters.x }}`, `{{ item }}`, `{{ index }}`, `{{ number_list }}` (direct variable), `{{ task.task_name.field }}`, and function expressions. **Type-preserving**: pure template expressions like `"{{ item }}"` preserve the JSON type (integer `5` stays as `5`, not string `"5"`). Mixed expressions like `"Sleeping for {{ item }} seconds"` remain strings.
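The type-preserving rule above can be sketched with a toy renderer. Names and behavior here are inferred from the description, not the real `WorkflowContext.render_json()` API, and a tiny `Val` enum stands in for `serde_json::Value`:

```rust
use std::collections::HashMap;

// Minimal value type standing in for serde_json::Value in this sketch.
#[derive(Clone, Debug, PartialEq)]
enum Val {
    Int(i64),
    Str(String),
}

impl Val {
    fn to_display(&self) -> String {
        match self {
            Val::Int(n) => n.to_string(),
            Val::Str(s) => s.clone(),
        }
    }
}

// Hedged sketch of type-preserving rendering: a *pure* expression like
// "{{ item }}" returns the variable's value unchanged (integer stays an
// integer); mixed text renders each expression to a string and concatenates.
fn render(template: &str, vars: &HashMap<&str, Val>) -> Val {
    let trimmed = template.trim();
    // Pure expression: exactly "{{ name }}" and nothing else.
    if trimmed.starts_with("{{") && trimmed.ends_with("}}") {
        let inner = trimmed[2..trimmed.len() - 2].trim();
        if !inner.contains("{{") {
            if let Some(v) = vars.get(inner) {
                return v.clone();
            }
        }
    }
    // Mixed expression: substitute each "{{ name }}" as text.
    let mut out = template.to_string();
    for (k, v) in vars {
        out = out.replace(&format!("{{{{ {} }}}}", k), &v.to_display());
    }
    Val::Str(out)
}
```

So `"{{ item }}"` with `item = 5` yields the integer `5`, while `"Sleeping for {{ item }} seconds"` yields a string, matching the two cases described above.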
- **Function Expressions**: `{{ result() }}` returns the last completed task's result. `{{ result().field.subfield }}` navigates into it. `{{ succeeded() }}`, `{{ failed() }}`, `{{ timed_out() }}` return booleans. These are evaluated by `WorkflowContext.try_evaluate_function_call()`.
- **Publish Directives**: Transition `publish` directives (e.g., `number_list: "{{ result().data.items }}"`) are evaluated when a transition fires. Published variables are persisted to the `workflow_execution.variables` column and available to subsequent tasks. Uses type-preserving rendering so arrays/numbers/booleans retain their types.
- **Child Task Dispatch**: Each workflow task becomes a child execution with the task's actual action ref (e.g., `core.echo`), `workflow_task` metadata linking it to the `workflow_execution` record, and a parent reference to the workflow execution. Child executions re-enter the normal scheduling pipeline, so nested workflows work recursively.
- **with_items Expansion**: Tasks declaring `with_items: "{{ expr }}"` are expanded into child executions. The expression is resolved via the `WorkflowContext` to produce a JSON array, then each item gets its own child execution with `item`/`index` set on the context and `task_index` in `WorkflowTaskMetadata`. Completion tracking waits for ALL sibling items to finish before marking the task as completed/failed and advancing the workflow.
- **with_items Concurrency Limiting**: When a task declares `concurrency: N`, ALL child execution records are created in the database up front (with fully-rendered inputs), but only the first `N` are published to the message queue. The remaining children stay at `Requested` status in the DB. As each item completes, `advance_workflow` counts in-flight siblings (`scheduling`/`scheduled`/`running`), calculates free slots (`concurrency - in_flight`), and calls `publish_pending_with_items_children()` which queries for `Requested`-status siblings ordered by `task_index` and publishes them. The DB `status = 'requested'` query is the authoritative source of undispatched items — no auxiliary state in workflow variables needed. The task is only marked complete when all siblings reach a terminal state. Without a `concurrency` value, all items are dispatched at once (previous behavior).
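The sliding-window math above can be sketched as follows. Function names are illustrative, not the real scheduler API; in the actual flow the pending set comes from a DB query for `Requested`-status siblings:

```rust
// Free dispatch slots: concurrency limit minus siblings currently in flight
// (scheduling/scheduled/running). saturating_sub avoids underflow when more
// items are in flight than the limit allows.
fn free_slots(concurrency: usize, in_flight: usize) -> usize {
    concurrency.saturating_sub(in_flight)
}

// Select the next batch of pending items in task_index order, standing in for
// the `Requested`-status DB query in publish_pending_with_items_children().
fn next_batch(pending_task_indices: &mut Vec<usize>, slots: usize) -> Vec<usize> {
    pending_task_indices.sort_unstable();
    let n = slots.min(pending_task_indices.len());
    pending_task_indices.drain(..n).collect()
}
```

Because the DB row status is the source of truth, a crash between completions loses no dispatch state: the next `advance_workflow` call recomputes free slots and re-queries the pending set.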
- **Advancement**: The `CompletionListener` detects when a completed execution has `workflow_task` metadata and calls `ExecutionScheduler::advance_workflow()`. The scheduler rebuilds the `WorkflowContext` from persisted `workflow_execution.variables` plus all completed child execution results, sets `last_task_outcome`, evaluates transitions (succeeded/failed/always/timed_out/custom with context-based condition evaluation), processes publish directives, schedules successor tasks with rendered inputs, and completes the workflow when all tasks are done.
- **Transition Evaluation**: `succeeded()`, `failed()`, `timed_out()`, and `always` (no condition) are supported. Custom conditions are evaluated via `WorkflowContext.evaluate_condition()` with fallback to fire-on-success if evaluation fails.
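The firing rules above can be sketched as a small match. This is an assumption-laden simplification: real custom conditions go through `WorkflowContext.evaluate_condition()`, which this sketch replaces with the documented fire-on-success fallback:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Outcome {
    Succeeded,
    Failed,
    TimedOut,
}

// Hedged sketch of transition firing: `when = None` models an `always`
// transition; the built-in predicates match their outcome; any other string
// is a custom condition, shown here only as the fallback behavior.
fn transition_fires(when: Option<&str>, outcome: Outcome) -> bool {
    match when {
        None => true, // `always`: no condition, fires on any outcome
        Some("succeeded()") => outcome == Outcome::Succeeded,
        Some("failed()") => outcome == Outcome::Failed,
        Some("timed_out()") => outcome == Outcome::TimedOut,
        // Custom conditions: evaluated via the context in the real code;
        // fallback is fire-on-success if evaluation fails.
        Some(_custom) => outcome == Outcome::Succeeded,
    }
}
```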
- **Legacy Coordinator**: The prototype `WorkflowCoordinator` in `crates/executor/src/workflow/coordinator.rs` is bypassed — it has hardcoded schema prefixes and is not integrated with the MQ pipeline.
### Pack File Loading & Action Execution
- **Pack Base Directory**: Configured via `packs_base_dir` in config (defaults to `/opt/attune/packs`, development uses `./packs`)
- **Pack Volume Strategy**: Packs are mounted as volumes (NOT copied into Docker images)
@@ -343,6 +364,7 @@ Rule `action_params` support Jinja2-style `{{ source.path }}` templates resolved
- Multi-segment paths use Catmull-Rom → cubic Bezier conversion for smooth curves through waypoints (`buildSmoothPath` in `WorkflowEdges.tsx`)
- **Orquesta-style `next` transitions**: Tasks use a `next: TaskTransition[]` array instead of flat `on_success`/`on_failure` fields. Each transition has `when` (condition), `publish` (variables), `do` (target tasks), plus optional `label`, `color`, `edge_waypoints`, and `label_positions`. See "Task Transition Model" above.
- **No task type or task-level condition**: The UI does not expose task `type` or task-level `when` — all tasks are actions (workflows are also actions), and conditions belong on transitions. Parallelism is implicit via multiple `do` targets.
- **Ref immutability**: When editing an existing workflow, the pack selector and workflow name fields are disabled — the ref cannot be changed after creation.
## Development Workflow
@@ -483,8 +505,10 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
14. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
15. **REMEMBER** packs are volumes - update with restart, not rebuild
16. **REMEMBER** to build pack binaries separately: `./scripts/build-pack-binaries.sh`
17. **REMEMBER** when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable with ~4 updates per row).
18. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history`
19. **NEVER** use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow`, `execution.workflow_def` exist in SQL but not in the `Execution` model). Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`). Column mismatches cause runtime deserialization failures.
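The `SELECT_COLUMNS` pattern in point 19 can be sketched as below. The column list is illustrative, not the real `execution` schema:

```rust
// Hedged sketch of the SELECT_COLUMNS pattern: name exactly the columns the
// Rust FromRow struct expects, and splice this constant into every query
// instead of `SELECT *`. Column names here are placeholders for illustration.
pub const SELECT_COLUMNS: &str = "id, action, status, result, created, updated";

// Build a query by splicing the shared column list. The status is left as a
// bind parameter ($1), as it would be in real sqlx code.
fn list_by_status_query() -> String {
    format!("SELECT {SELECT_COLUMNS} FROM execution WHERE status = $1 ORDER BY created DESC")
}
```

Keeping the constant in one place means a new DB-only column never silently enters query results that the `FromRow` struct cannot deserialize.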
20. **REMEMBER** `execution`, `event`, and `enforcement` are all TimescaleDB hypertables — they **cannot be the target of FK constraints**. Any column referencing them (e.g., `inquiry.execution`, `workflow_execution.execution`, `execution.parent`) is a plain BIGINT with no FK and may become a dangling reference.
## Deployment
- **Target**: Distributed deployment with separate service instances
@@ -495,8 +519,8 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
- **Web UI**: Static files served separately or via API service
## Current Development Status
- ✅ **Complete**: Database migrations (20 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch of pending `Requested`-status child executions), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`)
- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management
- 📋 **Planned**: Notifier service, execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
## Quick Reference


@@ -137,6 +137,11 @@ pub struct ActionResponse {
#[schema(value_type = Object, nullable = true)]
pub out_schema: Option<JsonValue>,
/// Workflow definition ID (non-null if this action is a workflow)
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(example = 42, nullable = true)]
pub workflow_def: Option<i64>,
/// Whether this is an ad-hoc action (not from pack installation)
#[schema(example = false)]
pub is_adhoc: bool,
@@ -186,6 +191,11 @@ pub struct ActionSummary {
#[schema(example = ">=3.12", nullable = true)]
pub runtime_version_constraint: Option<String>,
/// Workflow definition ID (non-null if this action is a workflow)
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(example = 42, nullable = true)]
pub workflow_def: Option<i64>,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -210,6 +220,7 @@ impl From<attune_common::models::action::Action> for ActionResponse {
runtime_version_constraint: action.runtime_version_constraint,
param_schema: action.param_schema,
out_schema: action.out_schema,
workflow_def: action.workflow_def,
is_adhoc: action.is_adhoc,
created: action.created,
updated: action.updated,
@@ -229,6 +240,7 @@ impl From<attune_common::models::action::Action> for ActionSummary {
entrypoint: action.entrypoint,
runtime: action.runtime,
runtime_version_constraint: action.runtime_version_constraint,
workflow_def: action.workflow_def,
created: action.created,
updated: action.updated,
}


@@ -53,10 +53,6 @@ pub struct EventResponse {
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
}
impl From<Event> for EventResponse {
@@ -72,7 +68,6 @@ impl From<Event> for EventResponse {
rule: event.rule,
rule_ref: event.rule_ref,
created: event.created,
}
}
}
@@ -230,9 +225,9 @@ pub struct EnforcementResponse {
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
/// Timestamp when the enforcement was resolved (status changed from created to processed/disabled)
#[schema(example = "2024-01-13T10:30:01Z", nullable = true)]
pub resolved_at: Option<DateTime<Utc>>,
}
impl From<Enforcement> for EnforcementResponse {
@@ -249,7 +244,7 @@ impl From<Enforcement> for EnforcementResponse {
condition: enforcement.condition,
conditions: enforcement.conditions,
created: enforcement.created,
resolved_at: enforcement.resolved_at,
}
}
}


@@ -6,6 +6,7 @@ use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};
use attune_common::models::enums::ExecutionStatus;
use attune_common::models::execution::WorkflowTaskMetadata;
/// Request DTO for creating a manual execution
#[derive(Debug, Clone, Deserialize, ToSchema)]
@@ -62,6 +63,11 @@ pub struct ExecutionResponse {
#[schema(value_type = Object, example = json!({"message_id": "1234567890.123456"}))]
pub result: Option<JsonValue>,
/// Workflow task metadata (only populated for workflow task executions)
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Option<Object>, nullable = true)]
pub workflow_task: Option<WorkflowTaskMetadata>,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -102,6 +108,11 @@ pub struct ExecutionSummary {
#[schema(example = "core.timer")]
pub trigger_ref: Option<String>,
/// Workflow task metadata (only populated for workflow task executions)
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Option<Object>, nullable = true)]
pub workflow_task: Option<WorkflowTaskMetadata>,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -150,6 +161,12 @@ pub struct ExecutionQueryParams {
#[param(example = 1)]
pub parent: Option<i64>,
/// If true, only return top-level executions (those without a parent).
/// Useful for the "By Workflow" view where child tasks are loaded separately.
#[serde(default)]
#[param(example = false)]
pub top_level_only: Option<bool>,
/// Page number (for pagination)
#[serde(default = "default_page")]
#[param(example = 1, minimum = 1)]
@@ -190,6 +207,7 @@ impl From<attune_common::models::execution::Execution> for ExecutionResponse {
result: execution
.result
.map(|r| serde_json::to_value(r).unwrap_or(JsonValue::Null)),
workflow_task: execution.workflow_task,
created: execution.created,
updated: execution.updated,
}
@@ -207,6 +225,7 @@ impl From<attune_common::models::execution::Execution> for ExecutionSummary {
enforcement: execution.enforcement,
rule_ref: None, // Populated separately via enforcement lookup
trigger_ref: None, // Populated separately via enforcement lookup
workflow_task: execution.workflow_task,
created: execution.created,
updated: execution.updated,
}
@@ -256,6 +275,7 @@ mod tests {
action_ref: None,
enforcement: None,
parent: None,
top_level_only: None,
pack_name: None,
rule_ref: None,
trigger_ref: None,
@@ -274,6 +294,7 @@ mod tests {
action_ref: None,
enforcement: None,
parent: None,
top_level_only: None,
pack_name: None,
rule_ref: None,
trigger_ref: None,


@@ -126,7 +126,7 @@ impl HistoryQueryParams {
/// Path parameter for the entity type segment.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct HistoryEntityTypePath {
/// Entity type: `execution` or `worker`
pub entity_type: String,
}


@@ -168,6 +168,10 @@ pub async fn list_executions(
filtered_executions.retain(|e| e.parent == Some(parent_id));
}
if query.top_level_only == Some(true) {
filtered_executions.retain(|e| e.parent.is_none());
}
if let Some(executor_id) = query.executor {
filtered_executions.retain(|e| e.executor == Some(executor_id));
}


@@ -27,14 +27,14 @@ use crate::{
/// List history records for a given entity type.
///
/// Supported entity types: `execution`, `worker`.
/// Returns a paginated list of change records ordered by time descending.
#[utoipa::path(
get,
path = "/api/v1/history/{entity_type}",
tag = "history",
params(
("entity_type" = String, Path, description = "Entity type: execution or worker"),
HistoryQueryParams,
),
responses(
@@ -127,56 +127,6 @@ pub async fn get_worker_history(
get_entity_history_by_id(&state, HistoryEntityType::Worker, id, query).await get_entity_history_by_id(&state, HistoryEntityType::Worker, id, query).await
} }
/// Get history for a specific enforcement by ID.
///
/// Returns all change records for the given enforcement, ordered by time descending.
#[utoipa::path(
get,
path = "/api/v1/enforcements/{id}/history",
tag = "history",
params(
("id" = i64, Path, description = "Enforcement ID"),
HistoryQueryParams,
),
responses(
(status = 200, description = "History records for the enforcement", body = PaginatedResponse<HistoryRecordResponse>),
),
security(("bearer_auth" = []))
)]
pub async fn get_enforcement_history(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(id): Path<i64>,
Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
get_entity_history_by_id(&state, HistoryEntityType::Enforcement, id, query).await
}
/// Get history for a specific event by ID.
///
/// Returns all change records for the given event, ordered by time descending.
#[utoipa::path(
get,
path = "/api/v1/events/{id}/history",
tag = "history",
params(
("id" = i64, Path, description = "Event ID"),
HistoryQueryParams,
),
responses(
(status = 200, description = "History records for the event", body = PaginatedResponse<HistoryRecordResponse>),
),
security(("bearer_auth" = []))
)]
pub async fn get_event_history(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(id): Path<i64>,
Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
get_entity_history_by_id(&state, HistoryEntityType::Event, id, query).await
}
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
// Shared helpers // Shared helpers
// --------------------------------------------------------------------------- // ---------------------------------------------------------------------------
@@ -231,8 +181,6 @@ async fn get_entity_history_by_id(
/// - `GET /history/:entity_type` — generic history query /// - `GET /history/:entity_type` — generic history query
/// - `GET /executions/:id/history` — execution-specific history /// - `GET /executions/:id/history` — execution-specific history
/// - `GET /workers/:id/history` — worker-specific history (note: currently no /workers base route exists) /// - `GET /workers/:id/history` — worker-specific history (note: currently no /workers base route exists)
/// - `GET /enforcements/:id/history` — enforcement-specific history
/// - `GET /events/:id/history` — event-specific history
pub fn routes() -> Router<Arc<AppState>> { pub fn routes() -> Router<Arc<AppState>> {
Router::new() Router::new()
// Generic history endpoint // Generic history endpoint
@@ -240,6 +188,4 @@ pub fn routes() -> Router<Arc<AppState>> {
// Entity-specific convenience endpoints // Entity-specific convenience endpoints
.route("/executions/{id}/history", get(get_execution_history)) .route("/executions/{id}/history", get(get_execution_history))
.route("/workers/{id}/history", get(get_worker_history)) .route("/workers/{id}/history", get(get_worker_history))
.route("/enforcements/{id}/history", get(get_enforcement_history))
.route("/events/{id}/history", get(get_event_history))
} }
View File
@@ -601,8 +601,8 @@ async fn write_workflow_yaml(
 /// Create a companion action record for a workflow definition.
 ///
 /// This ensures the workflow appears in action lists and the action palette in the
-/// workflow builder. The action is created with `is_workflow = true` and linked to
-/// the workflow definition via the `workflow_def` FK.
+/// workflow builder. The action is linked to the workflow definition via the
+/// `workflow_def` FK.
 async fn create_companion_action(
     db: &sqlx::PgPool,
     workflow_ref: &str,
@@ -643,7 +643,7 @@ async fn create_companion_action(
         ))
     })?;
-    // Link the action to the workflow definition (sets is_workflow = true and workflow_def)
+    // Link the action to the workflow definition (sets workflow_def FK)
     ActionRepository::link_workflow_def(db, action.id, workflow_def_id)
         .await
         .map_err(|e| {
View File
@@ -368,7 +368,6 @@ mod tests {
     runtime_version_constraint: None,
     param_schema: schema,
     out_schema: None,
-    is_workflow: false,
     workflow_def: None,
     is_adhoc: false,
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
View File
@@ -120,23 +120,21 @@ async fn test_sse_stream_receives_execution_updates() -> Result<()> {
         println!("Updating execution {} to 'running' status", execution_id);
         // Update execution status - this should trigger PostgreSQL NOTIFY
-        let _ = sqlx::query(
-            "UPDATE execution SET status = 'running', start_time = NOW() WHERE id = $1",
-        )
-        .bind(execution_id)
-        .execute(&pool_clone)
-        .await;
+        let _ =
+            sqlx::query("UPDATE execution SET status = 'running', updated = NOW() WHERE id = $1")
+                .bind(execution_id)
+                .execute(&pool_clone)
+                .await;
         println!("Update executed, waiting before setting to succeeded");
         tokio::time::sleep(Duration::from_millis(500)).await;
         // Update to succeeded
-        let _ = sqlx::query(
-            "UPDATE execution SET status = 'succeeded', end_time = NOW() WHERE id = $1",
-        )
-        .bind(execution_id)
-        .execute(&pool_clone)
-        .await;
+        let _ =
+            sqlx::query("UPDATE execution SET status = 'succeeded', updated = NOW() WHERE id = $1")
+                .bind(execution_id)
+                .execute(&pool_clone)
+                .await;
         println!("Execution {} updated to 'succeeded'", execution_id);
     });
View File
@@ -896,7 +896,6 @@ pub mod action {
     pub runtime_version_constraint: Option<String>,
     pub param_schema: Option<JsonSchema>,
     pub out_schema: Option<JsonSchema>,
-    pub is_workflow: bool,
     pub workflow_def: Option<Id>,
     pub is_adhoc: bool,
     #[sqlx(default)]
@@ -988,7 +987,6 @@ pub mod event {
     pub source: Option<Id>,
     pub source_ref: Option<String>,
     pub created: DateTime<Utc>,
-    pub updated: DateTime<Utc>,
     pub rule: Option<Id>,
     pub rule_ref: Option<String>,
 }
@@ -1006,7 +1004,7 @@ pub mod event {
     pub condition: EnforcementCondition,
     pub conditions: JsonValue,
     pub created: DateTime<Utc>,
-    pub updated: DateTime<Utc>,
+    pub resolved_at: Option<DateTime<Utc>>,
 }
 }
@@ -1484,8 +1482,6 @@ pub mod entity_history {
 pub enum HistoryEntityType {
     Execution,
     Worker,
-    Enforcement,
-    Event,
 }
 impl HistoryEntityType {
@@ -1494,8 +1490,6 @@ pub mod entity_history {
         match self {
             Self::Execution => "execution_history",
             Self::Worker => "worker_history",
-            Self::Enforcement => "enforcement_history",
-            Self::Event => "event_history",
         }
     }
 }
@@ -1505,8 +1499,6 @@ pub mod entity_history {
         match self {
             Self::Execution => write!(f, "execution"),
             Self::Worker => write!(f, "worker"),
-            Self::Enforcement => write!(f, "enforcement"),
-            Self::Event => write!(f, "event"),
         }
     }
 }
@@ -1518,10 +1510,8 @@ pub mod entity_history {
         match s.to_lowercase().as_str() {
             "execution" => Ok(Self::Execution),
             "worker" => Ok(Self::Worker),
-            "enforcement" => Ok(Self::Enforcement),
-            "event" => Ok(Self::Event),
             other => Err(format!(
-                "unknown history entity type '{}'; expected one of: execution, worker, enforcement, event",
+                "unknown history entity type '{}'; expected one of: execution, worker",
                 other
             )),
         }
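The pared-down entity-type mapping above can be exercised in isolation. A minimal, self-contained sketch of the surviving variants with their `table_name`, `Display`, and `FromStr` behavior, mirroring the new code in the diff (standalone reimplementation, not the `attune_common` module itself):

```rust
use std::fmt;
use std::str::FromStr;

// Standalone mirror of the pared-down HistoryEntityType:
// only Execution and Worker survive this change.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HistoryEntityType {
    Execution,
    Worker,
}

impl HistoryEntityType {
    /// Backing history table for each entity type.
    pub fn table_name(&self) -> &'static str {
        match self {
            Self::Execution => "execution_history",
            Self::Worker => "worker_history",
        }
    }
}

impl fmt::Display for HistoryEntityType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::Execution => write!(f, "execution"),
            Self::Worker => write!(f, "worker"),
        }
    }
}

impl FromStr for HistoryEntityType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Case-insensitive, as in the real impl; removed variants now error.
        match s.to_lowercase().as_str() {
            "execution" => Ok(Self::Execution),
            "worker" => Ok(Self::Worker),
            other => Err(format!(
                "unknown history entity type '{}'; expected one of: execution, worker",
                other
            )),
        }
    }
}

fn main() {
    assert_eq!(
        "Worker".parse::<HistoryEntityType>().unwrap(),
        HistoryEntityType::Worker
    );
    // Enforcement and Event no longer parse.
    assert!("enforcement".parse::<HistoryEntityType>().is_err());
    println!("{}", HistoryEntityType::Execution.table_name());
}
```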
View File
@@ -57,7 +57,7 @@ impl FindById for ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
             WHERE id = $1
             "#,
@@ -80,7 +80,7 @@ impl FindByRef for ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
             WHERE ref = $1
             "#,
@@ -103,7 +103,7 @@ impl List for ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
             ORDER BY ref ASC
             "#,
@@ -142,7 +142,7 @@ impl Create for ActionRepository {
             VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
             RETURNING id, ref, pack, pack_ref, label, description, entrypoint,
                       runtime, runtime_version_constraint,
-                      param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                      param_schema, out_schema, workflow_def, is_adhoc, created, updated
             "#,
         )
         .bind(&input.r#ref)
@@ -256,7 +256,7 @@ impl Update for ActionRepository {
         query.push(", updated = NOW() WHERE id = ");
         query.push_bind(id);
-        query.push(" RETURNING id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated");
+        query.push(" RETURNING id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, param_schema, out_schema, workflow_def, is_adhoc, created, updated");
         let action = query
             .build_query_as::<Action>()
@@ -296,7 +296,7 @@ impl ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
             WHERE pack = $1
             ORDER BY ref ASC
@@ -318,7 +318,7 @@ impl ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
             WHERE runtime = $1
             ORDER BY ref ASC
@@ -341,7 +341,7 @@ impl ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
             WHERE LOWER(ref) LIKE $1 OR LOWER(label) LIKE $1 OR LOWER(description) LIKE $1
             ORDER BY ref ASC
@@ -354,7 +354,7 @@ impl ActionRepository {
         Ok(actions)
     }
-    /// Find all workflow actions (actions where is_workflow = true)
+    /// Find all workflow actions (actions linked to a workflow definition)
     pub async fn find_workflows<'e, E>(executor: E) -> Result<Vec<Action>>
     where
         E: Executor<'e, Database = Postgres> + 'e,
@@ -363,9 +363,9 @@ impl ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
-            WHERE is_workflow = true
+            WHERE workflow_def IS NOT NULL
             ORDER BY ref ASC
             "#,
         )
@@ -387,7 +387,7 @@ impl ActionRepository {
             r#"
             SELECT id, ref, pack, pack_ref, label, description, entrypoint,
                    runtime, runtime_version_constraint,
-                   param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
             FROM action
             WHERE workflow_def = $1
             "#,
@@ -411,11 +411,11 @@ impl ActionRepository {
         let action = sqlx::query_as::<_, Action>(
             r#"
             UPDATE action
-            SET is_workflow = true, workflow_def = $2, updated = NOW()
+            SET workflow_def = $2, updated = NOW()
             WHERE id = $1
             RETURNING id, ref, pack, pack_ref, label, description, entrypoint,
                       runtime, runtime_version_constraint,
-                      param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated
+                      param_schema, out_schema, workflow_def, is_adhoc, created, updated
             "#,
         )
         .bind(action_id)
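The pattern in this file is worth noting: the stored `is_workflow` boolean is dropped entirely, and "is this action a workflow?" becomes a derived fact (`workflow_def IS NOT NULL`), so the flag can never drift out of sync with the FK. A minimal sketch of that invariant (the `is_workflow()` helper is hypothetical, not part of the diff):

```rust
// Hypothetical slice of the Action model after this change: instead of a
// separate `is_workflow: bool` column that must be kept consistent with
// `workflow_def`, workflow-ness is derived from the FK being present.
struct Action {
    workflow_def: Option<i64>,
}

impl Action {
    // Mirrors the SQL predicate `workflow_def IS NOT NULL`.
    fn is_workflow(&self) -> bool {
        self.workflow_def.is_some()
    }
}

fn main() {
    let plain = Action { workflow_def: None };
    let wf = Action { workflow_def: Some(42) };
    assert!(!plain.is_workflow());
    assert!(wf.is_workflow());
    println!("ok");
}
```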
View File
@@ -80,6 +80,19 @@ pub struct EnforcementVolumeBucket {
     pub enforcement_count: i64,
 }
+/// A single hourly bucket of execution volume (from execution hypertable directly).
+#[derive(Debug, Clone, Serialize, FromRow)]
+pub struct ExecutionVolumeBucket {
+    /// Start of the 1-hour bucket
+    pub bucket: DateTime<Utc>,
+    /// Action ref; NULL when grouped across all actions
+    pub action_ref: Option<String>,
+    /// The initial status at creation time
+    pub initial_status: Option<String>,
+    /// Number of executions created in this bucket
+    pub execution_count: i64,
+}
 /// Aggregated failure rate over a time range.
 #[derive(Debug, Clone, Serialize)]
 pub struct FailureRateSummary {
@@ -454,6 +467,69 @@ impl AnalyticsRepository {
         Ok(rows)
     }
+    // =======================================================================
+    // Execution volume (from execution hypertable directly)
+    // =======================================================================
+
+    /// Query the `execution_volume_hourly` continuous aggregate for execution
+    /// creation volume across all actions.
+    pub async fn execution_volume_hourly<'e, E>(
+        executor: E,
+        range: &AnalyticsTimeRange,
+    ) -> Result<Vec<ExecutionVolumeBucket>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        sqlx::query_as::<_, ExecutionVolumeBucket>(
+            r#"
+            SELECT
+                bucket,
+                NULL::text AS action_ref,
+                initial_status::text AS initial_status,
+                SUM(execution_count)::bigint AS execution_count
+            FROM execution_volume_hourly
+            WHERE bucket >= $1 AND bucket <= $2
+            GROUP BY bucket, initial_status
+            ORDER BY bucket ASC, initial_status
+            "#,
+        )
+        .bind(range.since)
+        .bind(range.until)
+        .fetch_all(executor)
+        .await
+        .map_err(Into::into)
+    }
+
+    /// Query the `execution_volume_hourly` continuous aggregate filtered by
+    /// a specific action ref.
+    pub async fn execution_volume_hourly_by_action<'e, E>(
+        executor: E,
+        range: &AnalyticsTimeRange,
+        action_ref: &str,
+    ) -> Result<Vec<ExecutionVolumeBucket>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        sqlx::query_as::<_, ExecutionVolumeBucket>(
+            r#"
+            SELECT
+                bucket,
+                action_ref,
+                initial_status::text AS initial_status,
+                execution_count
+            FROM execution_volume_hourly
+            WHERE bucket >= $1 AND bucket <= $2 AND action_ref = $3
+            ORDER BY bucket ASC, initial_status
+            "#,
+        )
+        .bind(range.since)
+        .bind(range.until)
+        .bind(action_ref)
+        .fetch_all(executor)
+        .await
+        .map_err(Into::into)
+    }
+
     // =======================================================================
     // Derived analytics
     // =======================================================================
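The all-actions query above collapses the per-action rows of the continuous aggregate by summing within each (bucket, initial_status) pair, which is why `action_ref` comes back NULL there. The same roll-up can be sketched in plain Rust (hypothetical in-memory rows standing in for aggregate output, not the repository API):

```rust
use std::collections::BTreeMap;

// Hypothetical in-memory row mirroring one line of the
// `execution_volume_hourly` continuous aggregate.
struct Row {
    bucket_hour: i64,             // hour index standing in for the timestamp bucket
    #[allow(dead_code)]
    action_ref: &'static str,     // per-action dimension, dropped by the roll-up
    initial_status: &'static str,
    execution_count: i64,
}

/// Collapse per-action rows into (bucket, status) totals, like
/// `SUM(execution_count) ... GROUP BY bucket, initial_status`.
fn roll_up(rows: &[Row]) -> BTreeMap<(i64, &'static str), i64> {
    let mut out = BTreeMap::new();
    for r in rows {
        *out.entry((r.bucket_hour, r.initial_status)).or_insert(0) += r.execution_count;
    }
    out
}

fn main() {
    let rows = [
        Row { bucket_hour: 0, action_ref: "pack.a", initial_status: "requested", execution_count: 3 },
        Row { bucket_hour: 0, action_ref: "pack.b", initial_status: "requested", execution_count: 2 },
        Row { bucket_hour: 1, action_ref: "pack.a", initial_status: "scheduled", execution_count: 1 },
    ];
    let totals = roll_up(&rows);
    assert_eq!(totals[&(0, "requested")], 5); // 3 + 2 summed across actions
    assert_eq!(totals[&(1, "scheduled")], 1);
}
```

The by-action variant simply skips this grouping and passes `execution_count` through unchanged, since each row is already unique per (bucket, action, status).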
View File
@@ -263,11 +263,6 @@ mod tests {
             "execution_history"
         );
         assert_eq!(HistoryEntityType::Worker.table_name(), "worker_history");
-        assert_eq!(
-            HistoryEntityType::Enforcement.table_name(),
-            "enforcement_history"
-        );
-        assert_eq!(HistoryEntityType::Event.table_name(), "event_history");
     }
     #[test]
@@ -280,14 +275,8 @@ mod tests {
             "Worker".parse::<HistoryEntityType>().unwrap(),
             HistoryEntityType::Worker
         );
-        assert_eq!(
-            "ENFORCEMENT".parse::<HistoryEntityType>().unwrap(),
-            HistoryEntityType::Enforcement
-        );
-        assert_eq!(
-            "event".parse::<HistoryEntityType>().unwrap(),
-            HistoryEntityType::Event
-        );
+        assert!("enforcement".parse::<HistoryEntityType>().is_err());
+        assert!("event".parse::<HistoryEntityType>().is_err());
         assert!("unknown".parse::<HistoryEntityType>().is_err());
     }
@@ -295,7 +284,5 @@ mod tests {
     fn test_history_entity_type_display() {
         assert_eq!(HistoryEntityType::Execution.to_string(), "execution");
         assert_eq!(HistoryEntityType::Worker.to_string(), "worker");
-        assert_eq!(HistoryEntityType::Enforcement.to_string(), "enforcement");
-        assert_eq!(HistoryEntityType::Event.to_string(), "event");
     }
 }
View File
@@ -1,6 +1,9 @@
 //! Event and Enforcement repository for database operations
 //!
 //! This module provides CRUD operations and queries for Event and Enforcement entities.
+//! Note: Events are immutable time-series data — there is no Update impl for EventRepository.
+
+use chrono::{DateTime, Utc};
 use crate::models::{
     enums::{EnforcementCondition, EnforcementStatus},
@@ -36,13 +39,6 @@ pub struct CreateEventInput {
     pub rule_ref: Option<String>,
 }
-/// Input for updating an event
-#[derive(Debug, Clone, Default)]
-pub struct UpdateEventInput {
-    pub config: Option<JsonDict>,
-    pub payload: Option<JsonDict>,
-}
 #[async_trait::async_trait]
 impl FindById for EventRepository {
     async fn find_by_id<'e, E>(executor: E, id: i64) -> Result<Option<Self::Entity>>
@@ -52,7 +48,7 @@ impl FindById for EventRepository {
         let event = sqlx::query_as::<_, Event>(
             r#"
             SELECT id, trigger, trigger_ref, config, payload, source, source_ref,
-                   rule, rule_ref, created, updated
+                   rule, rule_ref, created
             FROM event
             WHERE id = $1
             "#,
@@ -74,7 +70,7 @@ impl List for EventRepository {
         let events = sqlx::query_as::<_, Event>(
             r#"
             SELECT id, trigger, trigger_ref, config, payload, source, source_ref,
-                   rule, rule_ref, created, updated
+                   rule, rule_ref, created
             FROM event
             ORDER BY created DESC
             LIMIT 1000
@@ -100,7 +96,7 @@ impl Create for EventRepository {
             INSERT INTO event (trigger, trigger_ref, config, payload, source, source_ref, rule, rule_ref)
             VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
             RETURNING id, trigger, trigger_ref, config, payload, source, source_ref,
-                      rule, rule_ref, created, updated
+                      rule, rule_ref, created
             "#,
         )
         .bind(input.trigger)
@@ -118,49 +114,6 @@ impl Create for EventRepository {
     }
 }
-#[async_trait::async_trait]
-impl Update for EventRepository {
-    type UpdateInput = UpdateEventInput;
-
-    async fn update<'e, E>(executor: E, id: i64, input: Self::UpdateInput) -> Result<Self::Entity>
-    where
-        E: Executor<'e, Database = Postgres> + 'e,
-    {
-        // Build update query
-        let mut query = QueryBuilder::new("UPDATE event SET ");
-        let mut has_updates = false;
-
-        if let Some(config) = &input.config {
-            query.push("config = ");
-            query.push_bind(config);
-            has_updates = true;
-        }
-
-        if let Some(payload) = &input.payload {
-            if has_updates {
-                query.push(", ");
-            }
-            query.push("payload = ");
-            query.push_bind(payload);
-            has_updates = true;
-        }
-
-        if !has_updates {
-            // No updates requested, fetch and return existing entity
-            return Self::get_by_id(executor, id).await;
-        }
-
-        query.push(", updated = NOW() WHERE id = ");
-        query.push_bind(id);
-        query.push(" RETURNING id, trigger, trigger_ref, config, payload, source, source_ref, rule, rule_ref, created, updated");
-
-        let event = query.build_query_as::<Event>().fetch_one(executor).await?;
-        Ok(event)
-    }
-}
 #[async_trait::async_trait]
 impl Delete for EventRepository {
     async fn delete<'e, E>(executor: E, id: i64) -> Result<bool>
@@ -185,7 +138,7 @@ impl EventRepository {
         let events = sqlx::query_as::<_, Event>(
             r#"
             SELECT id, trigger, trigger_ref, config, payload, source, source_ref,
-                   rule, rule_ref, created, updated
+                   rule, rule_ref, created
             FROM event
             WHERE trigger = $1
             ORDER BY created DESC
@@ -207,7 +160,7 @@ impl EventRepository {
         let events = sqlx::query_as::<_, Event>(
             r#"
             SELECT id, trigger, trigger_ref, config, payload, source, source_ref,
-                   rule, rule_ref, created, updated
+                   rule, rule_ref, created
             FROM event
             WHERE trigger_ref = $1
             ORDER BY created DESC
@@ -256,6 +209,7 @@ pub struct CreateEnforcementInput {
 pub struct UpdateEnforcementInput {
     pub status: Option<EnforcementStatus>,
     pub payload: Option<JsonDict>,
+    pub resolved_at: Option<DateTime<Utc>>,
 }
@@ -267,7 +221,7 @@ impl FindById for EnforcementRepository {
         let enforcement = sqlx::query_as::<_, Enforcement>(
             r#"
             SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload,
-                   condition, conditions, created, updated
+                   condition, conditions, created, resolved_at
             FROM enforcement
             WHERE id = $1
             "#,
@@ -289,7 +243,7 @@ impl List for EnforcementRepository {
         let enforcements = sqlx::query_as::<_, Enforcement>(
             r#"
             SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload,
-                   condition, conditions, created, updated
+                   condition, conditions, created, resolved_at
             FROM enforcement
             ORDER BY created DESC
             LIMIT 1000
@@ -316,7 +270,7 @@ impl Create for EnforcementRepository {
                                      payload, condition, conditions)
             VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)
             RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload,
-                      condition, conditions, created, updated
+                      condition, conditions, created, resolved_at
             "#,
         )
         .bind(input.rule)
@@ -363,14 +317,23 @@ impl Update for EnforcementRepository {
             has_updates = true;
         }
+        if let Some(resolved_at) = input.resolved_at {
+            if has_updates {
+                query.push(", ");
+            }
+            query.push("resolved_at = ");
+            query.push_bind(resolved_at);
+            has_updates = true;
+        }
         if !has_updates {
             // No updates requested, fetch and return existing entity
             return Self::get_by_id(executor, id).await;
         }
-        query.push(", updated = NOW() WHERE id = ");
+        query.push(" WHERE id = ");
         query.push_bind(id);
-        query.push(" RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, condition, conditions, created, updated");
+        query.push(" RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, condition, conditions, created, resolved_at");
         let enforcement = query
             .build_query_as::<Enforcement>()
@@ -405,7 +368,7 @@ impl EnforcementRepository {
         let enforcements = sqlx::query_as::<_, Enforcement>(
             r#"
             SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload,
-                   condition, conditions, created, updated
+                   condition, conditions, created, resolved_at
             FROM enforcement
             WHERE rule = $1
             ORDER BY created DESC
@@ -429,7 +392,7 @@ impl EnforcementRepository {
         let enforcements = sqlx::query_as::<_, Enforcement>(
             r#"
             SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload,
-                   condition, conditions, created, updated
+                   condition, conditions, created, resolved_at
             FROM enforcement
             WHERE status = $1
             ORDER BY created DESC
@@ -450,7 +413,7 @@ impl EnforcementRepository {
         let enforcements = sqlx::query_as::<_, Enforcement>(
             r#"
             SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload,
-                   condition, conditions, created, updated
+                   condition, conditions, created, resolved_at
             FROM enforcement
             WHERE event = $1
             ORDER BY created DESC
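The enforcement `update` above builds its SET clause dynamically from whichever optional fields are present, and after this change no longer appends `updated = NOW()` (the column is gone, replaced by the explicit `resolved_at`). A toy sketch of that composition, building only the SQL text (sqlx's `QueryBuilder` also tracks bind parameters; the string-valued fields here are a simplification, not the real input types):

```rust
// Toy version of the dynamic-UPDATE pattern: optional fields become SET
// entries, an all-None input short-circuits (the repo falls back to a
// plain fetch), and no implicit `updated = NOW()` is appended anymore.
struct UpdateEnforcementInput {
    status: Option<&'static str>,
    payload: Option<&'static str>,
    resolved_at: Option<&'static str>,
}

fn build_update_sql(input: &UpdateEnforcementInput) -> Option<String> {
    let mut fields = Vec::new();
    if let Some(s) = input.status {
        fields.push(format!("status = '{s}'"));
    }
    if let Some(p) = input.payload {
        fields.push(format!("payload = '{p}'"));
    }
    if let Some(r) = input.resolved_at {
        fields.push(format!("resolved_at = '{r}'"));
    }
    if fields.is_empty() {
        return None; // caller falls back to get_by_id, as in the repo
    }
    let mut sql = String::from("UPDATE enforcement SET ");
    sql.push_str(&fields.join(", "));
    sql.push_str(" WHERE id = $1");
    Some(sql)
}

fn main() {
    let input = UpdateEnforcementInput {
        status: Some("resolved"),
        payload: None,
        resolved_at: Some("2026-02-27T00:00:00Z"),
    };
    let sql = build_update_sql(&input).unwrap();
    assert_eq!(
        sql,
        "UPDATE enforcement SET status = 'resolved', resolved_at = '2026-02-27T00:00:00Z' WHERE id = $1"
    );
    println!("{sql}");
}
```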
View File
@@ -6,6 +6,15 @@ use sqlx::{Executor, Postgres, QueryBuilder};
use super::{Create, Delete, FindById, List, Repository, Update}; use super::{Create, Delete, FindById, List, Repository, Update};
/// Column list for SELECT queries on the execution table.
///
/// Defined once to avoid drift between queries and the `Execution` model.
/// The execution table has DB-only columns (`is_workflow`, `workflow_def`) that
/// are NOT in the Rust struct, so `SELECT *` must never be used.
pub const SELECT_COLUMNS: &str = "\
id, action, action_ref, config, env_vars, parent, enforcement, \
executor, status, result, workflow_task, created, updated";
pub struct ExecutionRepository; pub struct ExecutionRepository;
impl Repository for ExecutionRepository { impl Repository for ExecutionRepository {
@@ -54,9 +63,12 @@ impl FindById for ExecutionRepository {
where where
E: Executor<'e, Database = Postgres> + 'e, E: Executor<'e, Database = Postgres> + 'e,
{ {
sqlx::query_as::<_, Execution>( let sql = format!("SELECT {SELECT_COLUMNS} FROM execution WHERE id = $1");
"SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution WHERE id = $1" sqlx::query_as::<_, Execution>(&sql)
).bind(id).fetch_optional(executor).await.map_err(Into::into) .bind(id)
.fetch_optional(executor)
.await
.map_err(Into::into)
} }
} }
@@ -66,9 +78,12 @@ impl List for ExecutionRepository {
where where
E: Executor<'e, Database = Postgres> + 'e, E: Executor<'e, Database = Postgres> + 'e,
{ {
sqlx::query_as::<_, Execution>( let sql =
"SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution ORDER BY created DESC LIMIT 1000" format!("SELECT {SELECT_COLUMNS} FROM execution ORDER BY created DESC LIMIT 1000");
).fetch_all(executor).await.map_err(Into::into) sqlx::query_as::<_, Execution>(&sql)
.fetch_all(executor)
.await
.map_err(Into::into)
} }
} }
@@ -79,9 +94,26 @@ impl Create for ExecutionRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Execution>(
-            "INSERT INTO execution (action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) RETURNING id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated"
-        ).bind(input.action).bind(&input.action_ref).bind(&input.config).bind(&input.env_vars).bind(input.parent).bind(input.enforcement).bind(input.executor).bind(input.status).bind(&input.result).bind(sqlx::types::Json(&input.workflow_task)).fetch_one(executor).await.map_err(Into::into)
+        let sql = format!(
+            "INSERT INTO execution \
+             (action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task) \
+             VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) \
+             RETURNING {SELECT_COLUMNS}"
+        );
+        sqlx::query_as::<_, Execution>(&sql)
+            .bind(input.action)
+            .bind(&input.action_ref)
+            .bind(&input.config)
+            .bind(&input.env_vars)
+            .bind(input.parent)
+            .bind(input.enforcement)
+            .bind(input.executor)
+            .bind(input.status)
+            .bind(&input.result)
+            .bind(sqlx::types::Json(&input.workflow_task))
+            .fetch_one(executor)
+            .await
+            .map_err(Into::into)
     }
 }
@@ -130,7 +162,8 @@ impl Update for ExecutionRepository {
         }
         query.push(", updated = NOW() WHERE id = ").push_bind(id);
-        query.push(" RETURNING id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated");
+        query.push(" RETURNING ");
+        query.push(SELECT_COLUMNS);
         query
             .build_query_as::<Execution>()
@@ -162,9 +195,14 @@ impl ExecutionRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Execution>(
-            "SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution WHERE status = $1 ORDER BY created DESC"
-        ).bind(status).fetch_all(executor).await.map_err(Into::into)
+        let sql = format!(
+            "SELECT {SELECT_COLUMNS} FROM execution WHERE status = $1 ORDER BY created DESC"
+        );
+        sqlx::query_as::<_, Execution>(&sql)
+            .bind(status)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
     }
     pub async fn find_by_enforcement<'e, E>(
@@ -174,8 +212,31 @@ impl ExecutionRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Execution>(
-            "SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution WHERE enforcement = $1 ORDER BY created DESC"
-        ).bind(enforcement_id).fetch_all(executor).await.map_err(Into::into)
+        let sql = format!(
+            "SELECT {SELECT_COLUMNS} FROM execution WHERE enforcement = $1 ORDER BY created DESC"
+        );
+        sqlx::query_as::<_, Execution>(&sql)
+            .bind(enforcement_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Find all child executions for a given parent execution ID.
+    ///
+    /// Returns child executions ordered by creation time (ascending),
+    /// which is the natural task execution order for workflows.
+    pub async fn find_by_parent<'e, E>(executor: E, parent_id: Id) -> Result<Vec<Execution>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let sql = format!(
+            "SELECT {SELECT_COLUMNS} FROM execution WHERE parent = $1 ORDER BY created ASC"
+        );
+        sqlx::query_as::<_, Execution>(&sql)
+            .bind(parent_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
     }
 }
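The queries above all interpolate a shared `SELECT_COLUMNS` constant instead of repeating the column list. A minimal sketch of what that constant presumably looks like (the column names are taken from the old inline SQL; the exact declaration in the repository module is not shown in this diff):

```rust
// Assumed shape of the shared column list: every SELECT / RETURNING clause
// interpolates this one constant, so the list stays in sync with the
// `Execution` struct's fields in a single place.
pub const SELECT_COLUMNS: &str = "id, action, action_ref, config, env_vars, parent, \
    enforcement, executor, status, result, workflow_task, created, updated";

fn main() {
    // Building a query the same way the refactored repository methods do.
    let by_parent =
        format!("SELECT {SELECT_COLUMNS} FROM execution WHERE parent = $1 ORDER BY created ASC");
    assert!(by_parent.contains("workflow_task"));
    println!("{by_parent}");
}
```

The trade-off versus `sqlx::query_as!` macros is that these `format!`-built strings are not compile-time checked, but adding a column now means touching exactly one constant.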


@@ -194,7 +194,7 @@ impl WorkflowRegistrar {
     ///
     /// This ensures the workflow appears in action lists and the action palette
     /// in the workflow builder. The action is linked to the workflow definition
-    /// via `is_workflow = true` and `workflow_def` FK.
+    /// via the `workflow_def` FK.
     async fn create_companion_action(
         &self,
         workflow_def_id: i64,
@@ -221,7 +221,7 @@ impl WorkflowRegistrar {
         let action = ActionRepository::create(&self.pool, action_input).await?;

-        // Link the action to the workflow definition (sets is_workflow = true and workflow_def)
+        // Link the action to the workflow definition (sets workflow_def FK)
         ActionRepository::link_workflow_def(&self.pool, action.id, workflow_def_id).await?;

         info!(


@@ -54,8 +54,8 @@ async fn test_create_enforcement_minimal() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -89,7 +89,7 @@ async fn test_create_enforcement_minimal() {
     assert_eq!(enforcement.condition, EnforcementCondition::All);
     assert_eq!(enforcement.conditions, json!([]));
     assert!(enforcement.created.timestamp() > 0);
-    assert!(enforcement.updated.timestamp() > 0);
+    assert_eq!(enforcement.resolved_at, None); // Not yet resolved
 }

 #[tokio::test]
@@ -125,8 +125,8 @@ async fn test_create_enforcement_with_event() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -192,8 +192,8 @@ async fn test_create_enforcement_with_conditions() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -257,8 +257,8 @@ async fn test_create_enforcement_with_any_condition() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -333,10 +333,12 @@ async fn test_create_enforcement_with_invalid_rule_fails() {
 }

 #[tokio::test]
-async fn test_create_enforcement_with_invalid_event_fails() {
+async fn test_create_enforcement_with_nonexistent_event_succeeds() {
     let pool = create_test_pool().await.unwrap();

-    // Try to create enforcement with non-existent event ID
+    // The enforcement.event column has no FK constraint (event is a hypertable
+    // and hypertables cannot be FK targets). A non-existent event ID is accepted
+    // as a dangling reference.
     let input = CreateEnforcementInput {
         rule: None,
         rule_ref: "some.rule".to_string(),
@@ -351,8 +353,9 @@ async fn test_create_enforcement_with_invalid_event_fails() {
     let result = EnforcementRepository::create(&pool, input).await;

-    assert!(result.is_err());
-    // Foreign key constraint violation
+    assert!(result.is_ok());
+    let enforcement = result.unwrap();
+    assert_eq!(enforcement.event, Some(99999));
 }

 // ============================================================================
@@ -392,8 +395,8 @@ async fn test_find_enforcement_by_id() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -464,8 +467,8 @@ async fn test_get_enforcement_by_id() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -542,8 +545,8 @@ async fn test_list_enforcements() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -613,8 +616,8 @@ async fn test_update_enforcement_status() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -628,9 +631,11 @@ async fn test_update_enforcement_status() {
     .await
     .unwrap();

+    let now = chrono::Utc::now();
     let input = UpdateEnforcementInput {
         status: Some(EnforcementStatus::Processed),
         payload: None,
+        resolved_at: Some(now),
     };

     let updated = EnforcementRepository::update(&pool, enforcement.id, input)
@@ -639,7 +644,8 @@ async fn test_update_enforcement_status() {
     assert_eq!(updated.id, enforcement.id);
     assert_eq!(updated.status, EnforcementStatus::Processed);
-    assert!(updated.updated > enforcement.updated);
+    assert!(updated.resolved_at.is_some());
+    assert!(updated.resolved_at.unwrap() >= enforcement.created);
 }

 #[tokio::test]
@@ -675,8 +681,8 @@ async fn test_update_enforcement_status_transitions() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -689,26 +695,30 @@ async fn test_update_enforcement_status_transitions() {
     .await
     .unwrap();

-    // Test status transitions: Created -> Succeeded
+    // Test status transitions: Created -> Processed
+    let now = chrono::Utc::now();
     let updated = EnforcementRepository::update(
         &pool,
         enforcement.id,
         UpdateEnforcementInput {
             status: Some(EnforcementStatus::Processed),
             payload: None,
+            resolved_at: Some(now),
         },
     )
     .await
     .unwrap();

     assert_eq!(updated.status, EnforcementStatus::Processed);
+    assert!(updated.resolved_at.is_some());

-    // Test status transition: Succeeded -> Failed (although unusual)
+    // Test status transition: Processed -> Disabled (although unusual)
     let updated = EnforcementRepository::update(
         &pool,
         enforcement.id,
         UpdateEnforcementInput {
             status: Some(EnforcementStatus::Disabled),
             payload: None,
+            resolved_at: None,
         },
     )
     .await
@@ -749,8 +759,8 @@ async fn test_update_enforcement_payload() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -768,6 +778,7 @@ async fn test_update_enforcement_payload() {
     let input = UpdateEnforcementInput {
         status: None,
         payload: Some(new_payload.clone()),
+        resolved_at: None,
     };

     let updated = EnforcementRepository::update(&pool, enforcement.id, input)
@@ -810,8 +821,8 @@ async fn test_update_enforcement_both_fields() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -824,10 +835,12 @@ async fn test_update_enforcement_both_fields() {
     .await
     .unwrap();

+    let now = chrono::Utc::now();
     let new_payload = json!({"result": "success"});
     let input = UpdateEnforcementInput {
         status: Some(EnforcementStatus::Processed),
         payload: Some(new_payload.clone()),
+        resolved_at: Some(now),
     };

     let updated = EnforcementRepository::update(&pool, enforcement.id, input)
@@ -871,8 +884,8 @@ async fn test_update_enforcement_no_changes() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -889,6 +902,7 @@ async fn test_update_enforcement_no_changes() {
     let input = UpdateEnforcementInput {
         status: None,
         payload: None,
+        resolved_at: None,
     };

     let result = EnforcementRepository::update(&pool, enforcement.id, input)
@@ -907,6 +921,7 @@ async fn test_update_enforcement_not_found() {
     let input = UpdateEnforcementInput {
         status: Some(EnforcementStatus::Processed),
         payload: None,
+        resolved_at: Some(chrono::Utc::now()),
     };

     let result = EnforcementRepository::update(&pool, 99999, input).await;
@@ -952,8 +967,8 @@ async fn test_delete_enforcement() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -1025,8 +1040,8 @@ async fn test_find_enforcements_by_rule() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -1047,8 +1062,8 @@ async fn test_find_enforcements_by_rule() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -1117,8 +1132,8 @@ async fn test_find_enforcements_by_status() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -1206,8 +1221,8 @@ async fn test_find_enforcements_by_event() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -1290,8 +1305,8 @@ async fn test_delete_rule_sets_enforcement_rule_to_null() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -1323,7 +1338,7 @@ async fn test_delete_rule_sets_enforcement_rule_to_null() {
 // ============================================================================

 #[tokio::test]
-async fn test_enforcement_timestamps_auto_managed() {
+async fn test_enforcement_resolved_at_lifecycle() {
     let pool = create_test_pool().await.unwrap();

     let pack = PackFixture::new_unique("timestamp_pack")
@@ -1355,8 +1370,8 @@ async fn test_enforcement_timestamps_auto_managed() {
             trigger: trigger.id,
             trigger_ref: trigger.r#ref.clone(),
             conditions: json!({}),
             action_params: json!({}),
             trigger_params: json!({}),
             enabled: true,
             is_adhoc: false,
         },
@@ -1369,24 +1384,23 @@ async fn test_enforcement_timestamps_auto_managed() {
     .await
     .unwrap();

-    let created_time = enforcement.created;
-    let updated_time = enforcement.updated;
-
-    assert!(created_time.timestamp() > 0);
-    assert_eq!(created_time, updated_time);
-
-    // Update and verify timestamp changed
-    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
+    // Initially, resolved_at is NULL
+    assert!(enforcement.created.timestamp() > 0);
+    assert_eq!(enforcement.resolved_at, None);

+    // Resolve the enforcement and verify resolved_at is set
+    let resolved_time = chrono::Utc::now();
     let input = UpdateEnforcementInput {
         status: Some(EnforcementStatus::Processed),
         payload: None,
+        resolved_at: Some(resolved_time),
     };

     let updated = EnforcementRepository::update(&pool, enforcement.id, input)
         .await
         .unwrap();

-    assert_eq!(updated.created, created_time); // created unchanged
-    assert!(updated.updated > updated_time); // updated changed
+    assert_eq!(updated.created, enforcement.created); // created unchanged
+    assert!(updated.resolved_at.is_some());
+    assert!(updated.resolved_at.unwrap() >= enforcement.created);
 }
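The tests above document a consequence of dropping the FK from `enforcement.event` (hypertables cannot be FK targets): an enforcement can hold an event ID that no longer exists. A hedged sketch of what that means for consumers of the row; the `event_for` helper is hypothetical, illustrating that any lookup must tolerate dangling IDs rather than assume the referenced event row exists:

```rust
// Hypothetical helper: resolve an enforcement's event reference defensively,
// since nothing at the database level guarantees the event row still exists.
fn event_for(enforcement_event: Option<i64>, existing_events: &[i64]) -> Option<i64> {
    enforcement_event.filter(|id| existing_events.contains(id))
}

fn main() {
    // Event 42 was deleted, but the enforcement row still stores 42.
    assert_eq!(event_for(Some(42), &[1, 2, 3]), None);
    // An intact reference resolves normally.
    assert_eq!(event_for(Some(2), &[1, 2, 3]), Some(2));
}
```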


@@ -2,13 +2,14 @@
 //!
 //! These tests verify CRUD operations, queries, and constraints
 //! for the Event repository.
+//! Note: Events are immutable time-series data — there are no update tests.

 mod helpers;

 use attune_common::{
     repositories::{
-        event::{CreateEventInput, EventRepository, UpdateEventInput},
-        Create, Delete, FindById, List, Update,
+        event::{CreateEventInput, EventRepository},
+        Create, Delete, FindById, List,
     },
     Error,
 };
@@ -56,7 +57,6 @@ async fn test_create_event_minimal() {
     assert_eq!(event.source, None);
     assert_eq!(event.source_ref, None);
     assert!(event.created.timestamp() > 0);
-    assert!(event.updated.timestamp() > 0);
 }

 #[tokio::test]
@@ -363,162 +363,6 @@ async fn test_list_events_respects_limit() {
     assert!(events.len() <= 1000);
 }

-// ============================================================================
-// UPDATE Tests
-// ============================================================================
-
-#[tokio::test]
-async fn test_update_event_config() {
-    let pool = create_test_pool().await.unwrap();
-
-    let pack = PackFixture::new_unique("update_pack")
-        .create(&pool)
-        .await
-        .unwrap();
-    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
-        .create(&pool)
-        .await
-        .unwrap();
-    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
-        .with_config(json!({"old": "config"}))
-        .create(&pool)
-        .await
-        .unwrap();
-
-    let new_config = json!({"new": "config", "updated": true});
-    let input = UpdateEventInput {
-        config: Some(new_config.clone()),
-        payload: None,
-    };
-
-    let updated = EventRepository::update(&pool, event.id, input)
-        .await
-        .unwrap();
-
-    assert_eq!(updated.id, event.id);
-    assert_eq!(updated.config, Some(new_config));
-    assert!(updated.updated > event.updated);
-}
-
-#[tokio::test]
-async fn test_update_event_payload() {
-    let pool = create_test_pool().await.unwrap();
-
-    let pack = PackFixture::new_unique("payload_update_pack")
-        .create(&pool)
-        .await
-        .unwrap();
-    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
-        .create(&pool)
-        .await
-        .unwrap();
-    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
-        .with_payload(json!({"initial": "payload"}))
-        .create(&pool)
-        .await
-        .unwrap();
-
-    let new_payload = json!({"updated": "payload", "version": 2});
-    let input = UpdateEventInput {
-        config: None,
-        payload: Some(new_payload.clone()),
-    };
-
-    let updated = EventRepository::update(&pool, event.id, input)
-        .await
-        .unwrap();
-
-    assert_eq!(updated.payload, Some(new_payload));
-    assert!(updated.updated > event.updated);
-}
-
-#[tokio::test]
-async fn test_update_event_both_fields() {
-    let pool = create_test_pool().await.unwrap();
-
-    let pack = PackFixture::new_unique("both_update_pack")
-        .create(&pool)
-        .await
-        .unwrap();
-    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
-        .create(&pool)
-        .await
-        .unwrap();
-    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
-        .create(&pool)
-        .await
-        .unwrap();
-
-    let new_config = json!({"setting": "value"});
-    let new_payload = json!({"data": "value"});
-    let input = UpdateEventInput {
-        config: Some(new_config.clone()),
-        payload: Some(new_payload.clone()),
-    };
-
-    let updated = EventRepository::update(&pool, event.id, input)
-        .await
-        .unwrap();
-
-    assert_eq!(updated.config, Some(new_config));
-    assert_eq!(updated.payload, Some(new_payload));
-}
-
-#[tokio::test]
-async fn test_update_event_no_changes() {
-    let pool = create_test_pool().await.unwrap();
-
-    let pack = PackFixture::new_unique("nochange_pack")
-        .create(&pool)
-        .await
-        .unwrap();
-    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
-        .create(&pool)
-        .await
-        .unwrap();
-    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
-        .with_payload(json!({"test": "data"}))
-        .create(&pool)
-        .await
-        .unwrap();
-
-    let input = UpdateEventInput {
-        config: None,
-        payload: None,
-    };
-
-    let result = EventRepository::update(&pool, event.id, input)
-        .await
-        .unwrap();
-
-    // Should return existing event without updating
-    assert_eq!(result.id, event.id);
-    assert_eq!(result.payload, event.payload);
-}
-
-#[tokio::test]
-async fn test_update_event_not_found() {
-    let pool = create_test_pool().await.unwrap();
-
-    let input = UpdateEventInput {
-        config: Some(json!({"test": "config"})),
-        payload: None,
-    };
-
-    let result = EventRepository::update(&pool, 99999, input).await;
-
-    // When updating non-existent entity with changes, SQLx returns RowNotFound error
-    assert!(result.is_err());
-}
-
 // ============================================================================
 // DELETE Tests
 // ============================================================================
@@ -561,7 +405,7 @@ async fn test_delete_event_not_found() {
 }

 #[tokio::test]
-async fn test_delete_event_sets_enforcement_event_to_null() {
+async fn test_delete_event_enforcement_retains_event_id() {
     let pool = create_test_pool().await.unwrap();

     // Create pack, trigger, action, rule, and event
@@ -616,17 +460,19 @@ async fn test_delete_event_enforcement_retains_event_id() {
     .await
     .unwrap();

-    // Delete the event - enforcement.event should be set to NULL (ON DELETE SET NULL)
+    // Delete the event — since the event table is a TimescaleDB hypertable, the FK
+    // constraint from enforcement.event was dropped (hypertables cannot be FK targets).
+    // The enforcement.event column retains the old ID as a dangling reference.
     EventRepository::delete(&pool, event.id).await.unwrap();

-    // Enforcement should still exist but with NULL event
+    // Enforcement still exists with the original event ID (now a dangling reference)
     use attune_common::repositories::event::EnforcementRepository;
     let found_enforcement = EnforcementRepository::find_by_id(&pool, enforcement.id)
         .await
         .unwrap()
         .unwrap();

-    assert_eq!(found_enforcement.event, None);
+    assert_eq!(found_enforcement.event, Some(event.id));
 }

 // ============================================================================
@@ -756,7 +602,7 @@ async fn test_find_events_by_trigger_ref_preserves_after_trigger_deletion() {
 // ============================================================================

 #[tokio::test]
-async fn test_event_timestamps_auto_managed() {
+async fn test_event_created_timestamp_auto_set() {
     let pool = create_test_pool().await.unwrap();

     let pack = PackFixture::new_unique("timestamp_pack")
@@ -774,24 +620,5 @@ async fn test_event_created_timestamp_auto_set() {
     .await
     .unwrap();

-    let created_time = event.created;
-    let updated_time = event.updated;
-
-    assert!(created_time.timestamp() > 0);
-    assert_eq!(created_time, updated_time);
-
-    // Update and verify timestamp changed
-    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
-
-    let input = UpdateEventInput {
-        config: Some(json!({"updated": true})),
-        payload: None,
-    };
-
-    let updated = EventRepository::update(&pool, event.id, input)
-        .await
-        .unwrap();
-
-    assert_eq!(updated.created, created_time); // created unchanged
-    assert!(updated.updated > updated_time); // updated changed
+    assert!(event.created.timestamp() > 0);
 }


@@ -7,6 +7,7 @@
 //! - Detecting inquiry requests in execution results
 //! - Creating inquiries for human-in-the-loop workflows
 //! - Enabling FIFO execution ordering by notifying waiting executions
+//! - Advancing workflow orchestration when child task executions complete

 use anyhow::Result;
 use attune_common::{
@@ -14,10 +15,14 @@ use attune_common::{
     repositories::{execution::ExecutionRepository, FindById},
 };
 use sqlx::PgPool;
+use std::sync::atomic::AtomicUsize;
 use std::sync::Arc;
 use tracing::{debug, error, info, warn};

-use crate::{inquiry_handler::InquiryHandler, queue_manager::ExecutionQueueManager};
+use crate::{
+    inquiry_handler::InquiryHandler, queue_manager::ExecutionQueueManager,
+    scheduler::ExecutionScheduler,
+};

 /// Completion listener that handles execution completion messages
 pub struct CompletionListener {
@@ -25,6 +30,9 @@ pub struct CompletionListener {
     consumer: Arc<Consumer>,
     publisher: Arc<Publisher>,
     queue_manager: Arc<ExecutionQueueManager>,
+    /// Round-robin counter shared with the scheduler for dispatching workflow
+    /// successor tasks to workers.
+    round_robin_counter: Arc<AtomicUsize>,
 }

 impl CompletionListener {
@@ -40,6 +48,7 @@ impl CompletionListener {
             consumer,
             publisher,
             queue_manager,
+            round_robin_counter: Arc::new(AtomicUsize::new(0)),
         }
     }
@@ -50,6 +59,7 @@ impl CompletionListener {
         let pool = self.pool.clone();
         let publisher = self.publisher.clone();
         let queue_manager = self.queue_manager.clone();
+        let round_robin_counter = self.round_robin_counter.clone();

         // Use the handler pattern to consume messages
         self.consumer
@@ -58,12 +68,14 @@ impl CompletionListener {
                 let pool = pool.clone();
                 let publisher = publisher.clone();
                 let queue_manager = queue_manager.clone();
+                let round_robin_counter = round_robin_counter.clone();
                 async move {
                     if let Err(e) = Self::process_execution_completed(
                         &pool,
                         &publisher,
                         &queue_manager,
+                        &round_robin_counter,
                         &envelope,
                     )
                     .await
@@ -88,6 +100,7 @@ impl CompletionListener {
         pool: &PgPool,
         publisher: &Publisher,
         queue_manager: &ExecutionQueueManager,
+        round_robin_counter: &AtomicUsize,
         envelope: &MessageEnvelope<ExecutionCompletedPayload>,
     ) -> Result<()> {
         debug!("Processing execution completed message: {:?}", envelope);
@@ -115,6 +128,26 @@ impl CompletionListener {
             execution_id, exec.status
         );

+        // Check if this execution is a workflow child task and advance the
+        // workflow orchestration (schedule successor tasks or complete the
+        // workflow).
+        if exec.workflow_task.is_some() {
+            info!(
+                "Execution {} is a workflow task, advancing workflow",
+                execution_id
+            );
+            if let Err(e) =
+                ExecutionScheduler::advance_workflow(pool, publisher, round_robin_counter, exec)
+                    .await
+            {
+                error!(
+                    "Failed to advance workflow for execution {}: {}",
+                    execution_id, e
+                );
+                // Continue processing — don't fail the entire completion
+            }
+        }
+
         // Check if execution result contains an inquiry request
         if let Some(result) = &exec.result {
             if InquiryHandler::has_inquiry_request(result) {
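The round-robin counter added above is shared between the scheduler and the completion listener so both dispatch paths cycle through workers consistently. A minimal sketch of the assumed dispatch semantics (`pick_worker` is a hypothetical helper; the diff only shows the counter being threaded through):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical round-robin selection: fetch_add returns the previous value,
// so successive calls cycle 0, 1, 2, 0, ... across the worker pool.
fn pick_worker(counter: &AtomicUsize, worker_count: usize) -> usize {
    counter.fetch_add(1, Ordering::Relaxed) % worker_count
}

fn main() {
    let counter = AtomicUsize::new(0);
    let picks: Vec<usize> = (0..5).map(|_| pick_worker(&counter, 3)).collect();
    assert_eq!(picks, vec![0, 1, 2, 0, 1]);
}
```

Because the counter is an `Arc<AtomicUsize>`, both the scheduler's initial dispatch of entry-point tasks and the listener's dispatch of successor tasks draw from the same sequence without any locking.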


@@ -152,6 +152,7 @@ impl EnforcementProcessor {
UpdateEnforcementInput {
status: Some(EnforcementStatus::Processed),
payload: None,
resolved_at: Some(chrono::Utc::now()),
},
)
.await?;
@@ -170,6 +171,7 @@ impl EnforcementProcessor {
UpdateEnforcementInput {
status: Some(EnforcementStatus::Disabled),
payload: None,
resolved_at: Some(chrono::Utc::now()),
},
)
.await?;
@@ -356,7 +358,7 @@ mod tests {
condition: attune_common::models::enums::EnforcementCondition::Any,
conditions: json!({}),
created: chrono::Utc::now(),
resolved_at: Some(chrono::Utc::now()),
};
let mut rule = Rule {

View File

@@ -21,6 +21,7 @@ mod scheduler;
mod service;
mod timeout_monitor;
mod worker_health;
mod workflow;
use anyhow::Result;
use attune_common::config::Config;

File diff suppressed because it is too large

View File

@@ -12,6 +12,7 @@ use anyhow::Result;
use attune_common::{
models::{enums::ExecutionStatus, Execution},
mq::{MessageEnvelope, MessageType, Publisher},
repositories::execution::SELECT_COLUMNS as EXECUTION_COLUMNS,
};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
@@ -105,17 +106,16 @@ impl ExecutionTimeoutMonitor {
);
// Find executions stuck in SCHEDULED status
let sql = format!(
"SELECT {EXECUTION_COLUMNS} FROM execution \
WHERE status = $1 AND updated < $2 \
ORDER BY updated ASC LIMIT 100"
);
// Process in batches to avoid overwhelming the system
let stale_executions = sqlx::query_as::<_, Execution>(&sql)
.bind(ExecutionStatus::Scheduled)
.bind(cutoff)
.fetch_all(&self.pool)
.await?;
if stale_executions.is_empty() {
debug!("No stale scheduled executions found");

View File

@@ -2,6 +2,22 @@
//!
//! This module manages workflow execution context, including variables,
//! template rendering, and data flow between tasks.
//!
//! ## Function-call expressions
//!
//! Templates support Orquesta-style function calls:
//! - `{{ result() }}` — the last completed task's result
//! - `{{ result().field }}` — nested access into the result
//! - `{{ succeeded() }}` — `true` if the last task succeeded
//! - `{{ failed() }}` — `true` if the last task failed
//! - `{{ timed_out() }}` — `true` if the last task timed out
//!
//! ## Type-preserving rendering
//!
//! When a JSON string value is a *pure* template expression (the entire value
//! is `{{ expr }}`), `render_json` returns the raw `JsonValue` from the
//! expression instead of stringifying it. This means `"{{ item }}"` resolving
//! to integer `5` stays as `5`, not the string `"5"`.
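A minimal, dependency-free sketch of the pure-expression rule described above (the real implementation operates on `serde_json` values and then evaluates the inner expression; this models only the string check):

```rust
// A string is a "pure" template expression only when the ENTIRE value is
// one `{{ expr }}` with nothing around it and a non-empty inner expression.
fn is_pure_expression(s: &str) -> bool {
    let t = s.trim();
    t.starts_with("{{")
        && t.ends_with("}}")
        && t.matches("{{").count() == 1
        && !t[2..t.len() - 2].trim().is_empty()
}

fn main() {
    assert!(is_pure_expression("{{ item }}"));
    assert!(is_pure_expression("  {{ result().data.items }}  "));
    // Mixed text or multiple expressions fall back to string rendering.
    assert!(!is_pure_expression("Sleeping for {{ item }} seconds"));
    assert!(!is_pure_expression("{{ a }}-{{ b }}"));
    assert!(!is_pure_expression("{{  }}"));
    println!("ok");
}
```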
use dashmap::DashMap;
use serde_json::{json, Value as JsonValue};
@@ -31,6 +47,15 @@ pub enum ContextError {
JsonError(#[from] serde_json::Error),
}
/// The status of the last completed task, used by `succeeded()` / `failed()` /
/// `timed_out()` function expressions.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TaskOutcome {
Succeeded,
Failed,
TimedOut,
}
/// Workflow execution context
///
/// Uses Arc for shared immutable data to enable efficient cloning.
@@ -55,6 +80,12 @@ pub struct WorkflowContext {
/// Current item index (for with-items iteration) - per-item data
current_index: Option<usize>,
/// The result of the last completed task (for `result()` expressions)
last_task_result: Option<JsonValue>,
/// The outcome of the last completed task (for `succeeded()` / `failed()`)
last_task_outcome: Option<TaskOutcome>,
}
impl WorkflowContext {
@@ -75,6 +106,46 @@ impl WorkflowContext {
system: Arc::new(system),
current_item: None,
current_index: None,
last_task_result: None,
last_task_outcome: None,
}
}
/// Rebuild a workflow context from persisted workflow execution state.
///
/// This is used when advancing a workflow after a child task completes —
/// the scheduler reconstructs the context from the `workflow_execution`
/// record's stored `variables` plus the results of all completed child
/// executions.
pub fn rebuild(
parameters: JsonValue,
stored_variables: &JsonValue,
task_results: HashMap<String, JsonValue>,
) -> Self {
let variables = DashMap::new();
if let Some(obj) = stored_variables.as_object() {
for (k, v) in obj {
variables.insert(k.clone(), v.clone());
}
}
let results = DashMap::new();
for (k, v) in task_results {
results.insert(k, v);
}
let system = DashMap::new();
system.insert("workflow_start".to_string(), json!(chrono::Utc::now()));
Self {
variables: Arc::new(variables),
parameters: Arc::new(parameters),
task_results: Arc::new(results),
system: Arc::new(system),
current_item: None,
current_index: None,
last_task_result: None,
last_task_outcome: None,
}
}
@@ -112,7 +183,28 @@ impl WorkflowContext {
self.current_index = None;
}
/// Record the outcome of the last completed task so that `result()`,
/// `succeeded()`, `failed()`, and `timed_out()` expressions resolve
/// correctly.
pub fn set_last_task_outcome(&mut self, result: JsonValue, outcome: TaskOutcome) {
self.last_task_result = Some(result);
self.last_task_outcome = Some(outcome);
}
/// Export workflow variables as a JSON object suitable for persisting
/// back to the `workflow_execution.variables` column.
pub fn export_variables(&self) -> JsonValue {
let map: HashMap<String, JsonValue> = self
.variables
.iter()
.map(|entry| (entry.key().clone(), entry.value().clone()))
.collect();
json!(map)
}
/// Render a template string, always returning a `String`.
///
/// For type-preserving rendering of JSON values use [`render_json`].
pub fn render_template(&self, template: &str) -> ContextResult<String> {
// Simple template rendering (Jinja2-like syntax)
// Supports: {{ variable }}, {{ task.result }}, {{ parameters.key }}
@@ -143,10 +235,49 @@ impl WorkflowContext {
Ok(result)
}
/// Try to evaluate a string as a single pure template expression.
///
/// Returns `Some(JsonValue)` when the **entire** string is exactly
/// `{{ expr }}` (with optional whitespace), preserving the original
/// JSON type of the evaluated expression. Returns `None` if the
/// string contains literal text around the template or multiple
/// template expressions — in that case the caller should fall back
/// to `render_template` which always stringifies.
fn try_evaluate_pure_expression(&self, s: &str) -> Option<ContextResult<JsonValue>> {
let trimmed = s.trim();
if !trimmed.starts_with("{{") || !trimmed.ends_with("}}") {
return None;
}
// Make sure there is only ONE template expression in the string.
// Count `{{` occurrences — if more than one, it's not a pure expr.
if trimmed.matches("{{").count() != 1 {
return None;
}
let expr = trimmed[2..trimmed.len() - 2].trim();
if expr.is_empty() {
return None;
}
Some(self.evaluate_expression(expr))
}
/// Render a JSON value, recursively resolving `{{ }}` templates in
/// strings.
///
/// **Type-preserving**: when a string value is a *pure* template
/// expression (the entire string is `{{ expr }}`), the raw `JsonValue`
/// from the expression is returned. For example, if `item` is `5`
/// (a JSON number), then `"{{ item }}"` resolves to `5` not `"5"`.
pub fn render_json(&self, value: &JsonValue) -> ContextResult<JsonValue> {
match value {
JsonValue::String(s) => {
// Fast path: try as a pure expression to preserve type
if let Some(result) = self.try_evaluate_pure_expression(s) {
return result;
}
// Fallback: render as string (interpolation with surrounding text)
let rendered = self.render_template(s)?;
Ok(JsonValue::String(rendered))
}
@@ -170,6 +301,28 @@ impl WorkflowContext {
/// Evaluate a template expression
fn evaluate_expression(&self, expr: &str) -> ContextResult<JsonValue> {
// ---------------------------------------------------------------
// Function-call expressions: result(), succeeded(), failed(), timed_out()
// ---------------------------------------------------------------
// We handle these *before* splitting on `.` because the function
// name contains parentheses which would confuse the dot-split.
//
// Supported patterns:
// result() → last task result
// result().foo.bar → nested access into result
// result().data.items → nested access into result
// succeeded() → boolean
// failed() → boolean
// timed_out() → boolean
// ---------------------------------------------------------------
if let Some(result_val) = self.try_evaluate_function_call(expr)? {
return Ok(result_val);
}
// ---------------------------------------------------------------
// Dot-path expressions
// ---------------------------------------------------------------
let parts: Vec<&str> = expr.split('.').collect();
if parts.is_empty() {
@@ -244,7 +397,8 @@ impl WorkflowContext {
Err(ContextError::VariableNotFound(format!("system.{}", key)))
}
}
// Direct variable reference (e.g., `number_list` published by a
// previous task's transition)
var_name => {
if let Some(entry) = self.variables.get(var_name) {
let value = entry.value().clone();
@@ -261,6 +415,56 @@ impl WorkflowContext {
}
}
/// Try to evaluate `expr` as a function-call expression.
///
/// Returns `Ok(Some(value))` if the expression starts with a recognised
/// function call, `Ok(None)` if it does not match, or `Err` on failure.
fn try_evaluate_function_call(&self, expr: &str) -> ContextResult<Option<JsonValue>> {
// succeeded()
if expr == "succeeded()" {
let val = self
.last_task_outcome
.map(|o| o == TaskOutcome::Succeeded)
.unwrap_or(false);
return Ok(Some(json!(val)));
}
// failed()
if expr == "failed()" {
let val = self
.last_task_outcome
.map(|o| o == TaskOutcome::Failed)
.unwrap_or(false);
return Ok(Some(json!(val)));
}
// timed_out()
if expr == "timed_out()" {
let val = self
.last_task_outcome
.map(|o| o == TaskOutcome::TimedOut)
.unwrap_or(false);
return Ok(Some(json!(val)));
}
// result() or result().path.to.field
if expr == "result()" || expr.starts_with("result().") {
let base = self.last_task_result.clone().unwrap_or(JsonValue::Null);
if expr == "result()" {
return Ok(Some(base));
}
// Strip "result()." prefix and navigate the remaining path
let rest = &expr["result().".len()..];
let path_parts: Vec<&str> = rest.split('.').collect();
let val = self.get_nested_value(&base, &path_parts)?;
return Ok(Some(val));
}
Ok(None)
}
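The `result()` prefix handling above can be isolated as a small sketch. The helper is hypothetical, not the actual code path, which goes on to call `get_nested_value` on the split path:

```rust
// An empty path means "the whole last-task result"; otherwise the
// remaining dot-path is walked into the result value.
fn split_result_path(expr: &str) -> Option<Vec<&str>> {
    if expr == "result()" {
        return Some(Vec::new());
    }
    expr.strip_prefix("result().")
        .map(|rest| rest.split('.').collect())
}

fn main() {
    assert_eq!(split_result_path("result()"), Some(Vec::new()));
    assert_eq!(
        split_result_path("result().data.items"),
        Some(vec!["data", "items"])
    );
    // Non-result() expressions fall through to the other handlers.
    assert_eq!(split_result_path("succeeded()"), None);
    println!("ok");
}
```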
/// Get nested value from JSON
fn get_nested_value(&self, value: &JsonValue, path: &[&str]) -> ContextResult<JsonValue> {
let mut current = value;
@@ -313,7 +517,12 @@ impl WorkflowContext {
}
}
/// Publish variables from a task result.
///
/// Each publish directive is a `(name, expression)` pair where the
/// expression is a template string like `"{{ result().data.items }}"`.
/// The expression is rendered with `render_json`-style type preservation
/// so that non-string values (arrays, numbers, booleans) keep their type.
pub fn publish_from_result(
&mut self,
result: &JsonValue,
@@ -323,16 +532,11 @@ impl WorkflowContext {
// If publish map is provided, use it
if let Some(map) = publish_map {
for (var_name, template) in map {
// Use type-preserving rendering: if the entire template is a
// single expression like `{{ result().data.items }}`, preserve
// the underlying JsonValue type (e.g. an array stays an array).
let json_value = JsonValue::String(template.clone());
let value = self.render_json(&json_value)?;
self.set_var(var_name, value);
}
} else {
@@ -405,6 +609,8 @@ impl WorkflowContext {
system: Arc::new(system),
current_item: None,
current_index: None,
last_task_result: None,
last_task_outcome: None,
})
}
}
@@ -513,6 +719,122 @@ mod tests {
assert_eq!(result["nested"]["value"], "Name is test");
}
#[test]
fn test_render_json_type_preserving_number() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_current_item(json!(5), 0);
// Pure expression — should preserve the integer type
let input = json!({"seconds": "{{ item }}"});
let result = ctx.render_json(&input).unwrap();
assert_eq!(result["seconds"], json!(5));
assert!(result["seconds"].is_number());
}
#[test]
fn test_render_json_type_preserving_array() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_last_task_outcome(
json!({"data": {"items": [0, 1, 2, 3, 4]}}),
TaskOutcome::Succeeded,
);
// Pure expression into result() — should preserve the array type
let input = json!({"list": "{{ result().data.items }}"});
let result = ctx.render_json(&input).unwrap();
assert_eq!(result["list"], json!([0, 1, 2, 3, 4]));
assert!(result["list"].is_array());
}
#[test]
fn test_render_json_mixed_template_stays_string() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_current_item(json!(5), 0);
// Mixed text + template — must remain a string
let input = json!({"msg": "Sleeping for {{ item }} seconds"});
let result = ctx.render_json(&input).unwrap();
assert_eq!(result["msg"], json!("Sleeping for 5 seconds"));
assert!(result["msg"].is_string());
}
#[test]
fn test_render_json_type_preserving_bool() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_last_task_outcome(json!({}), TaskOutcome::Succeeded);
let input = json!({"ok": "{{ succeeded() }}"});
let result = ctx.render_json(&input).unwrap();
assert_eq!(result["ok"], json!(true));
assert!(result["ok"].is_boolean());
}
#[test]
fn test_result_function() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_last_task_outcome(
json!({"data": {"items": [10, 20]}, "stdout": "hello"}),
TaskOutcome::Succeeded,
);
// result() returns the full last task result
let val = ctx.evaluate_expression("result()").unwrap();
assert_eq!(val["data"]["items"], json!([10, 20]));
// result().stdout returns nested field
let val = ctx.evaluate_expression("result().stdout").unwrap();
assert_eq!(val, json!("hello"));
// result().data.items returns deeper nested field
let val = ctx.evaluate_expression("result().data.items").unwrap();
assert_eq!(val, json!([10, 20]));
}
#[test]
fn test_succeeded_failed_functions() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_last_task_outcome(json!({}), TaskOutcome::Succeeded);
assert_eq!(ctx.evaluate_expression("succeeded()").unwrap(), json!(true));
assert_eq!(ctx.evaluate_expression("failed()").unwrap(), json!(false));
assert_eq!(
ctx.evaluate_expression("timed_out()").unwrap(),
json!(false)
);
ctx.set_last_task_outcome(json!({}), TaskOutcome::Failed);
assert_eq!(
ctx.evaluate_expression("succeeded()").unwrap(),
json!(false)
);
assert_eq!(ctx.evaluate_expression("failed()").unwrap(), json!(true));
ctx.set_last_task_outcome(json!({}), TaskOutcome::TimedOut);
assert_eq!(ctx.evaluate_expression("timed_out()").unwrap(), json!(true));
}
#[test]
fn test_publish_with_result_function() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_last_task_outcome(
json!({"data": {"items": [0, 1, 2]}}),
TaskOutcome::Succeeded,
);
let mut publish_map = HashMap::new();
publish_map.insert(
"number_list".to_string(),
"{{ result().data.items }}".to_string(),
);
ctx.publish_from_result(&json!({}), &[], Some(&publish_map))
.unwrap();
let val = ctx.get_var("number_list").unwrap();
assert_eq!(val, json!([0, 1, 2]));
assert!(val.is_array());
}
#[test]
fn test_publish_variables() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
@@ -524,6 +846,23 @@ mod tests {
assert_eq!(ctx.get_var("my_var").unwrap(), result);
}
#[test]
fn test_rebuild_context() {
let stored_vars = json!({"number_list": [0, 1, 2]});
let mut task_results = HashMap::new();
task_results.insert("task1".to_string(), json!({"data": {"items": [0, 1, 2]}}));
let ctx = WorkflowContext::rebuild(json!({"count": 5}), &stored_vars, task_results);
assert_eq!(ctx.get_var("number_list").unwrap(), json!([0, 1, 2]));
assert_eq!(
ctx.get_task_result("task1").unwrap(),
json!({"data": {"items": [0, 1, 2]}})
);
let rendered = ctx.render_template("{{ parameters.count }}").unwrap();
assert_eq!(rendered, "5");
}
#[test]
fn test_export_import() {
let mut ctx = WorkflowContext::new(json!({"key": "value"}), HashMap::new());
@@ -539,4 +878,28 @@ mod tests {
json!({"result": "ok"})
);
}
#[test]
fn test_with_items_integer_type_preservation() {
// Simulates the sleep_2 task from the hello_workflow:
// input: { seconds: "{{ item }}" }
// with_items: [0, 1, 2, 3, 4]
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_current_item(json!(3), 3);
let input = json!({
"message": "Sleeping for {{ item }} seconds ",
"seconds": "{{item}}"
});
let rendered = ctx.render_json(&input).unwrap();
// seconds should be integer 3, not string "3"
assert_eq!(rendered["seconds"], json!(3));
assert!(rendered["seconds"].is_number());
// message should be a string with the value interpolated
assert_eq!(rendered["message"], json!("Sleeping for 3 seconds "));
assert!(rendered["message"].is_string());
}
}

View File

@@ -196,7 +196,7 @@ impl WorkflowRegistrar {
///
/// This ensures the workflow appears in action lists and the action palette
/// in the workflow builder. The action is linked to the workflow definition
/// via the `workflow_def` FK.
async fn create_companion_action(
&self,
workflow_def_id: i64,
@@ -223,7 +223,7 @@ impl WorkflowRegistrar {
let action = ActionRepository::create(&self.pool, action_input).await?;
// Link the action to the workflow definition (sets workflow_def FK)
ActionRepository::link_workflow_def(&self.pool, action.id, workflow_def_id).await?;
info!(

View File

@@ -67,8 +67,19 @@ History rows are written by `AFTER INSERT OR UPDATE OR DELETE` triggers on the o
|--------|--------------|---------------------|-----------------|
| `execution` | `execution_history` | `action_ref` | *(none)* |
| `worker` | `worker_history` | `name` | `last_heartbeat` (when sole change) |

> **Note:** The `event` and `enforcement` tables do **not** have separate `_history`
> tables. Both are TimescaleDB hypertables partitioned on `created`:
>
> - **Events** are immutable after insert (never updated). Compression and retention
> policies are applied directly. The `event_volume_hourly` continuous aggregate
> queries the `event` table directly.
> - **Enforcements** are updated exactly once (status transitions from `created` to
> `processed` or `disabled` within ~1 second of creation, well before the 7-day
> compression window). The `resolved_at` column records when this transition
> occurred. A separate history table added little value for a single deterministic
> status change. The `enforcement_volume_hourly` continuous aggregate queries the
> `enforcement` table directly.
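The single deterministic transition described in the note can be modeled as a toy state machine. Types and the epoch-seconds timestamp are illustrative only, not the service's actual structs:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum EnforcementStatus {
    Created,
    Processed,
    Disabled,
}

struct Enforcement {
    status: EnforcementStatus,
    resolved_at: Option<u64>, // epoch seconds, stamped once on resolution
}

// Resolve exactly once: the status leaves Created in the same update
// that records resolved_at; later calls are no-ops.
fn resolve(e: &mut Enforcement, matched: bool, now: u64) {
    if e.status == EnforcementStatus::Created {
        e.status = if matched {
            EnforcementStatus::Processed
        } else {
            EnforcementStatus::Disabled
        };
        e.resolved_at = Some(now);
    }
}

fn main() {
    let mut e = Enforcement { status: EnforcementStatus::Created, resolved_at: None };
    resolve(&mut e, true, 1_700_000_000);
    assert_eq!(e.status, EnforcementStatus::Processed);
    assert_eq!(e.resolved_at, Some(1_700_000_000));
    // Second resolve is a no-op: the transition happens exactly once.
    resolve(&mut e, false, 1_700_000_001);
    assert_eq!(e.status, EnforcementStatus::Processed);
    println!("ok");
}
```

Because the row changes exactly once, `resolved_at` fully replaces the generic `updated` column for this table.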
## Table Schema
@@ -100,11 +111,11 @@ Column details:
## Hypertable Configuration

| Table | Chunk Interval | Rationale |
|-------|---------------|-----------|
| `execution_history` | 1 day | Highest expected volume |
| `event` (hypertable) | 1 day | Can be high volume from active sensors |
| `enforcement` (hypertable) | 1 day | Correlated with execution volume |
| `worker_history` | 7 days | Low volume (status changes are infrequent) |

## Indexes
@@ -138,22 +149,22 @@ Each tracked table gets a dedicated trigger function that:
Applied after data leaves the "hot" query window:

| Table | Compress After | `segmentby` | `orderby` |
|-------|---------------|-------------|-----------|
| `execution_history` | 7 days | `entity_id` | `time DESC` |
| `worker_history` | 7 days | `entity_id` | `time DESC` |
| `event` (hypertable) | 7 days | `trigger_ref` | `created DESC` |
| `enforcement` (hypertable) | 7 days | `rule_ref` | `created DESC` |
Segmenting by the entity key (`entity_id` for history tables, `trigger_ref`/`rule_ref` for the hypertables) ensures that "show me history for entity X" queries are fast even on compressed chunks.
## Retention Policies

| Table | Retain For | Rationale |
|-------|-----------|-----------|
| `execution_history` | 90 days | Primary operational data |
| `event` (hypertable) | 90 days | High volume time-series data |
| `enforcement` (hypertable) | 90 days | Tied to execution lifecycle |
| `worker_history` | 180 days | Low volume, useful for capacity trends |
## Continuous Aggregates (Future)
@@ -181,9 +192,8 @@ SELECT
time_bucket('1 hour', time) AS bucket,
entity_ref AS trigger_ref,
COUNT(*) AS event_count
FROM event
GROUP BY bucket, trigger_ref
WITH NO DATA;
```

View File

@@ -2,6 +2,12 @@
-- Description: Creates trigger, sensor, event, enforcement, and action tables
-- with runtime version constraint support. Includes webhook key
-- generation function used by webhook management functions in 000007.
--
-- NOTE: The event and enforcement tables are converted to TimescaleDB
-- hypertables in migration 000009. Hypertables cannot be the target of
-- FK constraints, so enforcement.event is a plain BIGINT with no FK.
-- FKs *from* hypertables to regular tables (e.g., event.trigger → trigger,
-- enforcement.rule → rule) are supported by TimescaleDB 2.x and are kept.
-- Version: 20250101000004
-- ============================================================================
@@ -140,8 +146,7 @@ CREATE TABLE event (
source_ref TEXT,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
rule BIGINT,
rule_ref TEXT
);
-- Indexes
@@ -154,12 +159,6 @@ CREATE INDEX idx_event_trigger_ref_created ON event(trigger_ref, created DESC);
CREATE INDEX idx_event_source_created ON event(source, created DESC);
CREATE INDEX idx_event_payload_gin ON event USING GIN (payload);
-- Comments
COMMENT ON TABLE event IS 'Events are instances of triggers firing';
COMMENT ON COLUMN event.trigger IS 'Trigger that fired (may be null if trigger deleted)';
@@ -178,13 +177,13 @@ CREATE TABLE enforcement (
rule_ref TEXT NOT NULL,
trigger_ref TEXT NOT NULL,
config JSONB,
event BIGINT, -- references event(id); no FK because event becomes a hypertable
status enforcement_status_enum NOT NULL DEFAULT 'created',
payload JSONB NOT NULL,
condition enforcement_condition_enum NOT NULL DEFAULT 'all',
conditions JSONB NOT NULL DEFAULT '[]'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
resolved_at TIMESTAMPTZ,
-- Constraints
CONSTRAINT enforcement_condition_check CHECK (condition IN ('any', 'all'))
@@ -203,18 +202,13 @@ CREATE INDEX idx_enforcement_event_status ON enforcement(event, status);
CREATE INDEX idx_enforcement_payload_gin ON enforcement USING GIN (payload);
CREATE INDEX idx_enforcement_conditions_gin ON enforcement USING GIN (conditions);
-- Comments
COMMENT ON TABLE enforcement IS 'Enforcements represent rule triggering by events';
COMMENT ON COLUMN enforcement.rule IS 'Rule being enforced (may be null if rule deleted)';
COMMENT ON COLUMN enforcement.rule_ref IS 'Rule reference (preserved even if rule deleted)';
COMMENT ON COLUMN enforcement.event IS 'Event that triggered this enforcement (no FK — event is a hypertable)';
COMMENT ON COLUMN enforcement.status IS 'Processing status (created → processed or disabled)';
COMMENT ON COLUMN enforcement.resolved_at IS 'Timestamp when the enforcement was resolved (status changed from created to processed/disabled). NULL while status is created.';
COMMENT ON COLUMN enforcement.payload IS 'Event payload for rule evaluation';
COMMENT ON COLUMN enforcement.condition IS 'Logical operator for conditions (any=OR, all=AND)';
COMMENT ON COLUMN enforcement.conditions IS 'Condition expressions to evaluate';

View File

@@ -3,6 +3,14 @@
-- Includes retry tracking, worker health views, and helper functions.
-- Consolidates former migrations: 000006 (execution_system), 000008
-- (worker_notification), 000014 (worker_table), and 20260209 (phase3).
--
-- NOTE: The execution table is converted to a TimescaleDB hypertable in
-- migration 000009. Hypertables cannot be the target of FK constraints,
-- so columns referencing execution (inquiry.execution, workflow_execution.execution)
-- are plain BIGINT with no FK. Similarly, columns ON the execution table that
-- would self-reference or reference other hypertables (parent, enforcement,
-- original_execution) are plain BIGINT. The action and executor FKs are also
-- omitted since they would need to be dropped during hypertable conversion.
-- Version: 20250101000005
-- ============================================================================
@@ -11,25 +19,25 @@
CREATE TABLE execution (
    id BIGSERIAL PRIMARY KEY,
-    action BIGINT REFERENCES action(id) ON DELETE SET NULL,
+    action BIGINT, -- references action(id); no FK because execution becomes a hypertable
    action_ref TEXT NOT NULL,
    config JSONB,
    env_vars JSONB,
-    parent BIGINT REFERENCES execution(id) ON DELETE SET NULL,
-    enforcement BIGINT REFERENCES enforcement(id) ON DELETE SET NULL,
-    executor BIGINT REFERENCES identity(id) ON DELETE SET NULL,
+    parent BIGINT, -- self-reference; no FK because execution becomes a hypertable
+    enforcement BIGINT, -- references enforcement(id); no FK (both are hypertables)
+    executor BIGINT, -- references identity(id); no FK because execution becomes a hypertable
    status execution_status_enum NOT NULL DEFAULT 'requested',
    result JSONB,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    is_workflow BOOLEAN DEFAULT false NOT NULL,
-    workflow_def BIGINT,
+    workflow_def BIGINT, -- references workflow_definition(id); no FK because execution becomes a hypertable
    workflow_task JSONB,
    -- Retry tracking (baked in from phase 3)
    retry_count INTEGER NOT NULL DEFAULT 0,
    max_retries INTEGER,
    retry_reason TEXT,
-    original_execution BIGINT REFERENCES execution(id) ON DELETE SET NULL,
+    original_execution BIGINT, -- self-reference; no FK because execution becomes a hypertable
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
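Since none of these columns carry FK constraints, deletes in the referenced tables can leave dangling IDs behind; a periodic sweep can surface them. A sketch, assuming only the schema above:

```sql
-- Hypothetical integrity sweep: executions whose action row no longer exists.
-- Without an FK, a DELETE on action can neither cascade nor null this out.
SELECT e.id, e.action, e.action_ref
FROM execution e
LEFT JOIN action a ON a.id = e.action
WHERE e.action IS NOT NULL
  AND a.id IS NULL;
```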
@@ -65,9 +73,9 @@ COMMENT ON COLUMN execution.action IS 'Action being executed (may be null if act
COMMENT ON COLUMN execution.action_ref IS 'Action reference (preserved even if action deleted)';
COMMENT ON COLUMN execution.config IS 'Snapshot of action configuration at execution time';
COMMENT ON COLUMN execution.env_vars IS 'Environment variables for this execution as key-value pairs (string -> string). These are set in the execution environment and are separate from action parameters. Used for execution context, configuration, and non-sensitive metadata.';
-COMMENT ON COLUMN execution.parent IS 'Parent execution ID for workflow hierarchies';
-COMMENT ON COLUMN execution.enforcement IS 'Enforcement that triggered this execution (if rule-driven)';
-COMMENT ON COLUMN execution.executor IS 'Identity that initiated the execution';
+COMMENT ON COLUMN execution.parent IS 'Parent execution ID for workflow hierarchies (no FK — execution is a hypertable)';
+COMMENT ON COLUMN execution.enforcement IS 'Enforcement that triggered this execution (no FK — both are hypertables)';
+COMMENT ON COLUMN execution.executor IS 'Identity that initiated the execution (no FK — execution is a hypertable)';
COMMENT ON COLUMN execution.status IS 'Current execution lifecycle status';
COMMENT ON COLUMN execution.result IS 'Execution output/results';
COMMENT ON COLUMN execution.retry_count IS 'Current retry attempt number (0 = first attempt, 1 = first retry, etc.)';
@@ -83,7 +91,7 @@ COMMENT ON COLUMN execution.original_execution IS 'ID of the original execution
CREATE TABLE inquiry (
    id BIGSERIAL PRIMARY KEY,
-    execution BIGINT NOT NULL REFERENCES execution(id) ON DELETE CASCADE,
+    execution BIGINT NOT NULL, -- references execution(id); no FK because execution is a hypertable
    prompt TEXT NOT NULL,
    response_schema JSONB,
    assigned_to BIGINT REFERENCES identity(id) ON DELETE SET NULL,
@@ -114,7 +122,7 @@ CREATE TRIGGER update_inquiry_updated
-- Comments
COMMENT ON TABLE inquiry IS 'Inquiries enable human-in-the-loop workflows with async user interactions';
-COMMENT ON COLUMN inquiry.execution IS 'Execution that is waiting on this inquiry';
+COMMENT ON COLUMN inquiry.execution IS 'Execution that is waiting on this inquiry (no FK — execution is a hypertable)';
COMMENT ON COLUMN inquiry.prompt IS 'Question or prompt text for the user';
COMMENT ON COLUMN inquiry.response_schema IS 'JSON schema defining expected response format';
COMMENT ON COLUMN inquiry.assigned_to IS 'Identity who should respond to this inquiry';

View File

@@ -1,6 +1,13 @@
-- Migration: Workflow System
-- Description: Creates workflow_definition and workflow_execution tables
-- (workflow_task_execution consolidated into execution.workflow_task JSONB)
+--
+-- NOTE: The execution table is converted to a TimescaleDB hypertable in
+-- migration 000009. Hypertables cannot be the target of FK constraints,
+-- so workflow_execution.execution is a plain BIGINT with no FK.
+-- execution.workflow_def also has no FK (added as plain BIGINT in 000005)
+-- since execution is a hypertable and FKs from hypertables are only
+-- supported for simple cases — we omit it for consistency.
-- Version: 20250101000006
-- ============================================================================
@@ -49,7 +56,7 @@ COMMENT ON COLUMN workflow_definition.out_schema IS 'JSON schema for workflow ou
CREATE TABLE workflow_execution (
    id BIGSERIAL PRIMARY KEY,
-    execution BIGINT NOT NULL REFERENCES execution(id) ON DELETE CASCADE,
+    execution BIGINT NOT NULL, -- references execution(id); no FK because execution is a hypertable
    workflow_def BIGINT NOT NULL REFERENCES workflow_definition(id) ON DELETE CASCADE,
    current_tasks TEXT[] DEFAULT '{}',
    completed_tasks TEXT[] DEFAULT '{}',
@@ -78,7 +85,7 @@ CREATE TRIGGER update_workflow_execution_updated
    EXECUTE FUNCTION update_updated_column();
-- Comments
-COMMENT ON TABLE workflow_execution IS 'Runtime state tracking for workflow executions';
+COMMENT ON TABLE workflow_execution IS 'Runtime state tracking for workflow executions. execution column has no FK — execution is a hypertable.';
COMMENT ON COLUMN workflow_execution.variables IS 'Workflow-scoped variables, updated via publish directives';
COMMENT ON COLUMN workflow_execution.task_graph IS 'Execution graph with dependencies and transitions';
COMMENT ON COLUMN workflow_execution.current_tasks IS 'Array of task names currently executing';
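The completion listener described in the data-flow overview advances this state by moving task names between the arrays. A minimal sketch of such an update; the task names and execution id are made up for illustration:

```sql
-- Hypothetical workflow advance: task 'fetch' completed, successor
-- 'transform' scheduled. Task names and the id 42 are illustrative only.
UPDATE workflow_execution
SET completed_tasks = array_append(completed_tasks, 'fetch'),
    current_tasks   = array_append(array_remove(current_tasks, 'fetch'), 'transform')
WHERE execution = 42;
```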
@@ -89,22 +96,15 @@ COMMENT ON COLUMN workflow_execution.paused IS 'True if workflow execution is pa
-- ============================================================================
ALTER TABLE action
-    ADD COLUMN is_workflow BOOLEAN DEFAULT false NOT NULL,
    ADD COLUMN workflow_def BIGINT REFERENCES workflow_definition(id) ON DELETE CASCADE;
-CREATE INDEX idx_action_is_workflow ON action(is_workflow) WHERE is_workflow = true;
CREATE INDEX idx_action_workflow_def ON action(workflow_def);
-COMMENT ON COLUMN action.is_workflow IS 'True if this action is a workflow (composable action graph)';
-COMMENT ON COLUMN action.workflow_def IS 'Reference to workflow definition if is_workflow=true';
+COMMENT ON COLUMN action.workflow_def IS 'Reference to workflow definition (non-null means this action is a workflow)';
--- ============================================================================
--- ADD FOREIGN KEY CONSTRAINT FOR EXECUTION.WORKFLOW_DEF
--- ============================================================================
-ALTER TABLE execution
-    ADD CONSTRAINT execution_workflow_def_fkey
-    FOREIGN KEY (workflow_def) REFERENCES workflow_definition(id) ON DELETE CASCADE;
+-- NOTE: execution.workflow_def has no FK constraint because execution is a
+-- TimescaleDB hypertable (converted in migration 000009). The column was
+-- created as a plain BIGINT in migration 000005.
-- ============================================================================
-- WORKFLOW VIEWS
@@ -143,6 +143,6 @@ SELECT
    a.pack as pack_id,
    a.pack_ref
FROM workflow_definition wd
-LEFT JOIN action a ON a.workflow_def = wd.id AND a.is_workflow = true;
+LEFT JOIN action a ON a.workflow_def = wd.id;
COMMENT ON VIEW workflow_action_link IS 'Links workflow definitions to their corresponding action records';

View File

@@ -163,7 +163,7 @@ BEGIN
        'config', NEW.config,
        'payload', NEW.payload,
        'created', NEW.created,
-        'updated', NEW.updated
+        'resolved_at', NEW.resolved_at
    );
    PERFORM pg_notify('enforcement_created', payload::text);
@@ -203,7 +203,7 @@ BEGIN
        'config', NEW.config,
        'payload', NEW.payload,
        'created', NEW.created,
-        'updated', NEW.updated
+        'resolved_at', NEW.resolved_at
    );
    PERFORM pg_notify('enforcement_status_changed', payload::text);
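Consumers such as attune-notifier pick these notifications up over LISTEN. A minimal session-level sketch, assuming only the channel names used above:

```sql
-- Hypothetical consumer session (e.g. in psql):
LISTEN enforcement_created;
LISTEN enforcement_status_changed;
-- Each NOTIFY delivers the JSON payload built above, which now carries
-- resolved_at in place of updated.
```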

View File

@@ -1,10 +1,15 @@
-- Migration: TimescaleDB Entity History and Analytics
--- Description: Creates append-only history hypertables for execution, worker, enforcement,
--- and event tables. Uses JSONB diff format to track field-level changes via
--- PostgreSQL triggers. Includes continuous aggregates for dashboard analytics.
--- Consolidates former migrations: 20260226100000 (entity_history_timescaledb),
--- 20260226200000 (continuous_aggregates), and 20260226300000 (fix + result digest).
+-- Description: Creates append-only history hypertables for execution and worker tables.
+-- Uses JSONB diff format to track field-level changes via PostgreSQL triggers.
+-- Converts the event, enforcement, and execution tables into TimescaleDB
+-- hypertables (events are immutable; enforcements are updated exactly once;
+-- executions are updated ~4 times during their lifecycle).
+-- Includes continuous aggregates for dashboard analytics.
-- See docs/plans/timescaledb-entity-history.md for full design.
+--
+-- NOTE: FK constraints that would reference hypertable targets were never
+-- created in earlier migrations (000004, 000005, 000006), so no DROP
+-- CONSTRAINT statements are needed here.
-- Version: 20250101000009
-- ============================================================================
@@ -114,67 +119,76 @@ CREATE INDEX idx_worker_history_changed_fields
COMMENT ON TABLE worker_history IS 'Append-only history of field-level changes to the worker table (TimescaleDB hypertable)';
COMMENT ON COLUMN worker_history.entity_ref IS 'Denormalized worker name for JOIN-free queries';
--- ----------------------------------------------------------------------------
--- enforcement_history
--- ----------------------------------------------------------------------------
-CREATE TABLE enforcement_history (
-    time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-    operation TEXT NOT NULL,
-    entity_id BIGINT NOT NULL,
-    entity_ref TEXT,
-    changed_fields TEXT[] NOT NULL DEFAULT '{}',
-    old_values JSONB,
-    new_values JSONB
-);
-SELECT create_hypertable('enforcement_history', 'time',
-    chunk_time_interval => INTERVAL '1 day');
-CREATE INDEX idx_enforcement_history_entity
-    ON enforcement_history (entity_id, time DESC);
-CREATE INDEX idx_enforcement_history_entity_ref
-    ON enforcement_history (entity_ref, time DESC);
-CREATE INDEX idx_enforcement_history_status_changes
-    ON enforcement_history (time DESC)
-    WHERE 'status' = ANY(changed_fields);
-CREATE INDEX idx_enforcement_history_changed_fields
-    ON enforcement_history USING GIN (changed_fields);
-COMMENT ON TABLE enforcement_history IS 'Append-only history of field-level changes to the enforcement table (TimescaleDB hypertable)';
-COMMENT ON COLUMN enforcement_history.entity_ref IS 'Denormalized rule_ref for JOIN-free queries';
--- ----------------------------------------------------------------------------
--- event_history
--- ----------------------------------------------------------------------------
-CREATE TABLE event_history (
-    time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-    operation TEXT NOT NULL,
-    entity_id BIGINT NOT NULL,
-    entity_ref TEXT,
-    changed_fields TEXT[] NOT NULL DEFAULT '{}',
-    old_values JSONB,
-    new_values JSONB
-);
-SELECT create_hypertable('event_history', 'time',
-    chunk_time_interval => INTERVAL '1 day');
-CREATE INDEX idx_event_history_entity
-    ON event_history (entity_id, time DESC);
-CREATE INDEX idx_event_history_entity_ref
-    ON event_history (entity_ref, time DESC);
-CREATE INDEX idx_event_history_changed_fields
-    ON event_history USING GIN (changed_fields);
-COMMENT ON TABLE event_history IS 'Append-only history of field-level changes to the event table (TimescaleDB hypertable)';
-COMMENT ON COLUMN event_history.entity_ref IS 'Denormalized trigger_ref for JOIN-free queries';
+-- ============================================================================
+-- CONVERT EVENT TABLE TO HYPERTABLE
+-- ============================================================================
+-- Events are immutable after insert — they are never updated. Instead of
+-- maintaining a separate event_history table to track changes that never
+-- happen, we convert the event table itself into a TimescaleDB hypertable
+-- partitioned on `created`. This gives us automatic time-based partitioning,
+-- compression, and retention for free.
+--
+-- No FK constraints reference event(id) — enforcement.event was created as a
+-- plain BIGINT in migration 000004 (hypertables cannot be FK targets).
+-- ----------------------------------------------------------------------------
+-- Replace the single-column PK with a composite PK that includes the
+-- partitioning column (required by TimescaleDB).
+ALTER TABLE event DROP CONSTRAINT event_pkey;
+ALTER TABLE event ADD PRIMARY KEY (id, created);
+SELECT create_hypertable('event', 'created',
+    chunk_time_interval => INTERVAL '1 day',
+    migrate_data => true);
+COMMENT ON TABLE event IS 'Events are instances of triggers firing (TimescaleDB hypertable partitioned on created)';
+-- ============================================================================
+-- CONVERT ENFORCEMENT TABLE TO HYPERTABLE
+-- ============================================================================
+-- Enforcements are created and then updated exactly once (status changes from
+-- `created` to `processed` or `disabled` within ~1 second). This single update
+-- happens well before the 7-day compression window, so UPDATE on uncompressed
+-- chunks works without issues.
+--
+-- No FK constraints reference enforcement(id) — execution.enforcement was
+-- created as a plain BIGINT in migration 000005.
+-- ----------------------------------------------------------------------------
+ALTER TABLE enforcement DROP CONSTRAINT enforcement_pkey;
+ALTER TABLE enforcement ADD PRIMARY KEY (id, created);
+SELECT create_hypertable('enforcement', 'created',
+    chunk_time_interval => INTERVAL '1 day',
+    migrate_data => true);
+COMMENT ON TABLE enforcement IS 'Enforcements represent rule triggering by events (TimescaleDB hypertable partitioned on created)';
+-- ============================================================================
+-- CONVERT EXECUTION TABLE TO HYPERTABLE
+-- ============================================================================
+-- Executions are updated ~4 times during their lifecycle (requested → scheduled
+-- → running → completed/failed), completing within at most ~1 day — well before
+-- the 7-day compression window. The `updated` column and its BEFORE UPDATE
+-- trigger are preserved (used by timeout monitor and UI).
+--
+-- No FK constraints reference execution(id) — inquiry.execution,
+-- workflow_execution.execution, execution.parent, and execution.original_execution
+-- were all created as plain BIGINT columns in migrations 000005 and 000006.
+--
+-- The existing execution_history hypertable and its trigger are preserved —
+-- they track field-level diffs of each update, which remains valuable for
+-- a mutable table.
+-- ----------------------------------------------------------------------------
+ALTER TABLE execution DROP CONSTRAINT execution_pkey;
+ALTER TABLE execution ADD PRIMARY KEY (id, created);
+SELECT create_hypertable('execution', 'created',
+    chunk_time_interval => INTERVAL '1 day',
+    migrate_data => true);
+COMMENT ON TABLE execution IS 'Executions represent action runs with workflow support (TimescaleDB hypertable partitioned on created). Updated ~4 times during lifecycle, completing within ~1 day (well before 7-day compression window).';
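One practical consequence of the composite `(id, created)` primary keys: a lookup by `id` alone has to touch every chunk, since the partitioning column is unconstrained. A sketch of the friendlier pattern; the id and time window are illustrative:

```sql
-- Hypothetical point lookup: constraining the partition column lets
-- TimescaleDB prune chunks instead of scanning all of them.
SELECT *
FROM execution
WHERE id = 12345
  AND created > NOW() - INTERVAL '7 days';
```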
-- ============================================================================ -- ============================================================================
-- TRIGGER FUNCTIONS -- TRIGGER FUNCTIONS
@@ -341,118 +355,6 @@ $$ LANGUAGE plpgsql;
COMMENT ON FUNCTION record_worker_history() IS 'Records field-level changes to worker table in worker_history hypertable. Excludes heartbeat-only updates.';
--- ----------------------------------------------------------------------------
--- enforcement history trigger
--- Tracked fields: status, payload
--- ----------------------------------------------------------------------------
-CREATE OR REPLACE FUNCTION record_enforcement_history()
-RETURNS TRIGGER AS $$
-DECLARE
-    changed TEXT[] := '{}';
-    old_vals JSONB := '{}';
-    new_vals JSONB := '{}';
-BEGIN
-    IF TG_OP = 'INSERT' THEN
-        INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
-        VALUES (NOW(), 'INSERT', NEW.id, NEW.rule_ref, '{}', NULL,
-            jsonb_build_object(
-                'rule_ref', NEW.rule_ref,
-                'trigger_ref', NEW.trigger_ref,
-                'status', NEW.status,
-                'condition', NEW.condition,
-                'event', NEW.event
-            ));
-        RETURN NEW;
-    END IF;
-    IF TG_OP = 'DELETE' THEN
-        INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
-        VALUES (NOW(), 'DELETE', OLD.id, OLD.rule_ref, '{}', NULL, NULL);
-        RETURN OLD;
-    END IF;
-    -- UPDATE: detect which fields changed
-    IF OLD.status IS DISTINCT FROM NEW.status THEN
-        changed := array_append(changed, 'status');
-        old_vals := old_vals || jsonb_build_object('status', OLD.status);
-        new_vals := new_vals || jsonb_build_object('status', NEW.status);
-    END IF;
-    IF OLD.payload IS DISTINCT FROM NEW.payload THEN
-        changed := array_append(changed, 'payload');
-        old_vals := old_vals || jsonb_build_object('payload', OLD.payload);
-        new_vals := new_vals || jsonb_build_object('payload', NEW.payload);
-    END IF;
-    -- Only record if something actually changed
-    IF array_length(changed, 1) > 0 THEN
-        INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
-        VALUES (NOW(), 'UPDATE', NEW.id, NEW.rule_ref, changed, old_vals, new_vals);
-    END IF;
-    RETURN NEW;
-END;
-$$ LANGUAGE plpgsql;
-COMMENT ON FUNCTION record_enforcement_history() IS 'Records field-level changes to enforcement table in enforcement_history hypertable';
--- ----------------------------------------------------------------------------
--- event history trigger
--- Tracked fields: config, payload
--- ----------------------------------------------------------------------------
-CREATE OR REPLACE FUNCTION record_event_history()
-RETURNS TRIGGER AS $$
-DECLARE
-    changed TEXT[] := '{}';
-    old_vals JSONB := '{}';
-    new_vals JSONB := '{}';
-BEGIN
-    IF TG_OP = 'INSERT' THEN
-        INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
-        VALUES (NOW(), 'INSERT', NEW.id, NEW.trigger_ref, '{}', NULL,
-            jsonb_build_object(
-                'trigger_ref', NEW.trigger_ref,
-                'source', NEW.source,
-                'source_ref', NEW.source_ref,
-                'rule', NEW.rule,
-                'rule_ref', NEW.rule_ref
-            ));
-        RETURN NEW;
-    END IF;
-    IF TG_OP = 'DELETE' THEN
-        INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
-        VALUES (NOW(), 'DELETE', OLD.id, OLD.trigger_ref, '{}', NULL, NULL);
-        RETURN OLD;
-    END IF;
-    -- UPDATE: detect which fields changed
-    IF OLD.config IS DISTINCT FROM NEW.config THEN
-        changed := array_append(changed, 'config');
-        old_vals := old_vals || jsonb_build_object('config', OLD.config);
-        new_vals := new_vals || jsonb_build_object('config', NEW.config);
-    END IF;
-    IF OLD.payload IS DISTINCT FROM NEW.payload THEN
-        changed := array_append(changed, 'payload');
-        old_vals := old_vals || jsonb_build_object('payload', OLD.payload);
-        new_vals := new_vals || jsonb_build_object('payload', NEW.payload);
-    END IF;
-    -- Only record if something actually changed
-    IF array_length(changed, 1) > 0 THEN
-        INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
-        VALUES (NOW(), 'UPDATE', NEW.id, NEW.trigger_ref, changed, old_vals, new_vals);
-    END IF;
-    RETURN NEW;
-END;
-$$ LANGUAGE plpgsql;
-COMMENT ON FUNCTION record_event_history() IS 'Records field-level changes to event table in event_history hypertable';
-- ============================================================================
-- ATTACH TRIGGERS TO OPERATIONAL TABLES
-- ============================================================================
@@ -467,20 +369,11 @@ CREATE TRIGGER worker_history_trigger
    FOR EACH ROW
    EXECUTE FUNCTION record_worker_history();
-CREATE TRIGGER enforcement_history_trigger
-    AFTER INSERT OR UPDATE OR DELETE ON enforcement
-    FOR EACH ROW
-    EXECUTE FUNCTION record_enforcement_history();
-CREATE TRIGGER event_history_trigger
-    AFTER INSERT OR UPDATE OR DELETE ON event
-    FOR EACH ROW
-    EXECUTE FUNCTION record_event_history();
-- ============================================================================
-- COMPRESSION POLICIES
-- ============================================================================
+-- History tables
ALTER TABLE execution_history SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'entity_id',
@@ -495,28 +388,39 @@ ALTER TABLE worker_history SET (
);
SELECT add_compression_policy('worker_history', INTERVAL '7 days');
-ALTER TABLE enforcement_history SET (
-    timescaledb.compress,
-    timescaledb.compress_segmentby = 'entity_id',
-    timescaledb.compress_orderby = 'time DESC'
-);
-SELECT add_compression_policy('enforcement_history', INTERVAL '7 days');
-ALTER TABLE event_history SET (
-    timescaledb.compress,
-    timescaledb.compress_segmentby = 'entity_id',
-    timescaledb.compress_orderby = 'time DESC'
-);
-SELECT add_compression_policy('event_history', INTERVAL '7 days');
+-- Event table (hypertable)
+ALTER TABLE event SET (
+    timescaledb.compress,
+    timescaledb.compress_segmentby = 'trigger_ref',
+    timescaledb.compress_orderby = 'created DESC'
+);
+SELECT add_compression_policy('event', INTERVAL '7 days');
+-- Enforcement table (hypertable)
+ALTER TABLE enforcement SET (
+    timescaledb.compress,
+    timescaledb.compress_segmentby = 'rule_ref',
+    timescaledb.compress_orderby = 'created DESC'
+);
+SELECT add_compression_policy('enforcement', INTERVAL '7 days');
+-- Execution table (hypertable)
+ALTER TABLE execution SET (
+    timescaledb.compress,
+    timescaledb.compress_segmentby = 'action_ref',
+    timescaledb.compress_orderby = 'created DESC'
+);
+SELECT add_compression_policy('execution', INTERVAL '7 days');
-- ============================================================================
-- RETENTION POLICIES
-- ============================================================================
SELECT add_retention_policy('execution_history', INTERVAL '90 days');
-SELECT add_retention_policy('enforcement_history', INTERVAL '90 days');
-SELECT add_retention_policy('event_history', INTERVAL '30 days');
SELECT add_retention_policy('worker_history', INTERVAL '180 days');
+SELECT add_retention_policy('event', INTERVAL '90 days');
+SELECT add_retention_policy('enforcement', INTERVAL '90 days');
+SELECT add_retention_policy('execution', INTERVAL '90 days');
-- ============================================================================
-- CONTINUOUS AGGREGATES
@@ -530,6 +434,7 @@ DROP MATERIALIZED VIEW IF EXISTS execution_throughput_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS event_volume_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS worker_status_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS enforcement_volume_hourly CASCADE;
+DROP MATERIALIZED VIEW IF EXISTS execution_volume_hourly CASCADE;
-- ----------------------------------------------------------------------------
-- execution_status_hourly
@@ -582,17 +487,18 @@ SELECT add_continuous_aggregate_policy('execution_throughput_hourly',
-- event_volume_hourly
-- Tracks event creation volume per hour by trigger ref.
-- Powers: event throughput monitoring widget.
+-- NOTE: Queries the event table directly (it is now a hypertable) instead of
+-- a separate event_history table.
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW event_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
-    time_bucket('1 hour', time) AS bucket,
-    entity_ref AS trigger_ref,
+    time_bucket('1 hour', created) AS bucket,
+    trigger_ref,
    COUNT(*) AS event_count
-FROM event_history
-WHERE operation = 'INSERT'
-GROUP BY bucket, entity_ref
+FROM event
+GROUP BY bucket, trigger_ref
WITH NO DATA;
SELECT add_continuous_aggregate_policy('event_volume_hourly',
@@ -629,17 +535,18 @@ SELECT add_continuous_aggregate_policy('worker_status_hourly',
-- enforcement_volume_hourly
-- Tracks enforcement creation volume per hour by rule ref.
-- Powers: rule activation rate monitoring.
+-- NOTE: Queries the enforcement table directly (it is now a hypertable)
+-- instead of a separate enforcement_history table.
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW enforcement_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
-    time_bucket('1 hour', time) AS bucket,
-    entity_ref AS rule_ref,
+    time_bucket('1 hour', created) AS bucket,
+    rule_ref,
    COUNT(*) AS enforcement_count
-FROM enforcement_history
-WHERE operation = 'INSERT'
-GROUP BY bucket, entity_ref
+FROM enforcement
+GROUP BY bucket, rule_ref
WITH NO DATA;
SELECT add_continuous_aggregate_policy('enforcement_volume_hourly',
@@ -648,6 +555,34 @@ SELECT add_continuous_aggregate_policy('enforcement_volume_hourly',
    schedule_interval => INTERVAL '30 minutes'
);
+-- ----------------------------------------------------------------------------
+-- execution_volume_hourly
+-- Tracks execution creation volume per hour by action_ref and status.
+-- This queries the execution hypertable directly (like event_volume_hourly
+-- queries the event table). Complements the existing execution_status_hourly
+-- and execution_throughput_hourly aggregates which query execution_history.
+--
+-- Use case: direct execution volume monitoring without relying on the history
+-- trigger (belt-and-suspenders, plus captures the initial status at creation).
+-- ----------------------------------------------------------------------------
+CREATE MATERIALIZED VIEW execution_volume_hourly
+WITH (timescaledb.continuous) AS
+SELECT
+    time_bucket('1 hour', created) AS bucket,
+    action_ref,
+    status AS initial_status,
+    COUNT(*) AS execution_count
+FROM execution
+GROUP BY bucket, action_ref, status
+WITH NO DATA;
+SELECT add_continuous_aggregate_policy('execution_volume_hourly',
+    start_offset => INTERVAL '7 days',
+    end_offset => INTERVAL '1 hour',
+    schedule_interval => INTERVAL '30 minutes'
+);
-- ============================================================================
-- INITIAL REFRESH NOTE
-- ============================================================================
@@ -664,3 +599,4 @@ SELECT add_continuous_aggregate_policy('enforcement_volume_hourly',
-- CALL refresh_continuous_aggregate('event_volume_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('worker_status_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('enforcement_volume_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('execution_volume_hourly', NULL, NOW());

@@ -1,17 +1,58 @@
#!/bin/sh
# List Example Action
# Demonstrates JSON Lines output format for streaming results
#
# This script uses pure POSIX shell without external dependencies like jq.
# It reads parameters in DOTENV format from stdin until the delimiter.

set -e

# Initialize count with default
count=5

# Read DOTENV-formatted parameters from stdin until delimiter
while IFS= read -r line; do
    case "$line" in
        *"---ATTUNE_PARAMS_END---"*)
            break
            ;;
        count=*)
            # Extract value after count=
            count="${line#count=}"
            # Remove quotes if present (both single and double)
            case "$count" in
                \"*\")
                    count="${count#\"}"
                    count="${count%\"}"
                    ;;
                \'*\')
                    count="${count#\'}"
                    count="${count%\'}"
                    ;;
            esac
            ;;
    esac
done

# Validate count is a positive integer
case "$count" in
    ''|*[!0-9]*)
        count=5
        ;;
esac
if [ "$count" -lt 1 ]; then
    count=1
elif [ "$count" -gt 100 ]; then
    count=100
fi

# Generate JSON Lines output (one JSON object per line)
i=1
while [ "$i" -le "$count" ]; do
    timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
    printf '{"id": %d, "value": "item_%d", "timestamp": "%s"}\n' "$i" "$i" "$timestamp"
    i=$((i + 1))
done

exit 0
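The parameter-handling above can be exercised in isolation. A minimal sketch, assuming only the delimiter string and the `count` semantics from the script above (the here-doc stands in for the worker's stdin pipe):

```shell
# Parse DOTENV params from a here-doc, mirroring the script's logic.
count=5
while IFS= read -r line; do
    case "$line" in
        *"---ATTUNE_PARAMS_END---"*) break ;;
        count=*) count="${line#count=}" ;;
    esac
done <<'EOF'
count="3"
---ATTUNE_PARAMS_END---
count=99
EOF
# Strip surrounding double quotes, as the full script does
case "$count" in
    \"*\") count="${count#\"}"; count="${count%\"}" ;;
esac
# Fall back to the default on non-numeric input, then clamp to [1, 100]
case "$count" in
    ''|*[!0-9]*) count=5 ;;
esac
if [ "$count" -gt 100 ]; then count=100; fi
echo "$count"
```

Note that the `count=99` line never applies: parsing stops at the delimiter, so `count` ends up as `3`.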

@@ -12,9 +12,9 @@ runner_type: shell
# Entry point is the shell script to execute
entry_point: list_example.sh

# Parameter delivery: stdin for secure parameter passing (no env vars)
parameter_delivery: stdin
parameter_format: dotenv

# Output format: jsonl (each line is a JSON object, collected into array)
output_format: jsonl
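Downstream, the worker collects each emitted line into a JSON array. A rough sketch of that collection step, assuming the collector simply comma-joins non-empty lines (the real implementation lives in attune-worker and may differ):

```shell
# Join JSON Lines from stdin into a single JSON array string.
jsonl_to_array() {
    out="["
    first=1
    while IFS= read -r line; do
        if [ -n "$line" ]; then
            if [ "$first" -eq 1 ]; then first=0; else out="$out,"; fi
            out="$out$line"
        fi
    done
    printf '%s]\n' "$out"
}

result=$(printf '{"id": 1}\n{"id": 2}\n' | jsonl_to_array)
echo "$result"
```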

@@ -6,57 +6,64 @@
 * Response DTO for action information
 */
export type ActionResponse = {
  /**
   * Creation timestamp
   */
  created: string;
  /**
   * Action description
   */
  description: string;
  /**
   * Entry point
   */
  entrypoint: string;
  /**
   * Action ID
   */
  id: number;
  /**
   * Whether this is an ad-hoc action (not from pack installation)
   */
  is_adhoc: boolean;
  /**
   * Human-readable label
   */
  label: string;
  /**
   * Output schema
   */
  out_schema: any | null;
  /**
   * Pack ID
   */
  pack: number;
  /**
   * Pack reference
   */
  pack_ref: string;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
  /**
   * Unique reference identifier
   */
  ref: string;
  /**
   * Runtime ID
   */
  runtime?: number | null;
  /**
   * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
   */
  runtime_version_constraint?: string | null;
  /**
   * Last update timestamp
   */
  updated: string;
  /**
   * Workflow definition ID (non-null if this action is a workflow)
   */
  workflow_def?: number | null;
};

@@ -6,41 +6,48 @@
 * Simplified action response (for list endpoints)
 */
export type ActionSummary = {
  /**
   * Creation timestamp
   */
  created: string;
  /**
   * Action description
   */
  description: string;
  /**
   * Entry point
   */
  entrypoint: string;
  /**
   * Action ID
   */
  id: number;
  /**
   * Human-readable label
   */
  label: string;
  /**
   * Pack reference
   */
  pack_ref: string;
  /**
   * Unique reference identifier
   */
  ref: string;
  /**
   * Runtime ID
   */
  runtime?: number | null;
  /**
   * Semver version constraint for the runtime
   */
  runtime_version_constraint?: string | null;
  /**
   * Last update timestamp
   */
  updated: string;
  /**
   * Workflow definition ID (non-null if this action is a workflow)
   */
  workflow_def?: number | null;
};

@@ -6,66 +6,73 @@
 * Standard API response wrapper
 */
export type ApiResponse_ActionResponse = {
  /**
   * Response DTO for action information
   */
  data: {
    /**
     * Creation timestamp
     */
    created: string;
    /**
     * Action description
     */
    description: string;
    /**
     * Entry point
     */
    entrypoint: string;
    /**
     * Action ID
     */
    id: number;
    /**
     * Whether this is an ad-hoc action (not from pack installation)
     */
    is_adhoc: boolean;
    /**
     * Human-readable label
     */
    label: string;
    /**
     * Output schema
     */
    out_schema: any | null;
    /**
     * Pack ID
     */
    pack: number;
    /**
     * Pack reference
     */
    pack_ref: string;
    /**
     * Parameter schema (StackStorm-style with inline required/secret)
     */
    param_schema: any | null;
    /**
     * Unique reference identifier
     */
    ref: string;
    /**
     * Runtime ID
     */
    runtime?: number | null;
    /**
     * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
     */
    runtime_version_constraint?: string | null;
    /**
     * Last update timestamp
     */
    updated: string;
    /**
     * Workflow definition ID (non-null if this action is a workflow)
     */
    workflow_def?: number | null;
  };
  /**
   * Optional message
   */
  message?: string | null;
};

@@ -38,6 +38,10 @@ export type ApiResponse_EnforcementResponse = {
     * Enforcement payload
     */
    payload: Record<string, any>;
    /**
     * Timestamp when the enforcement was resolved (status changed from created to processed/disabled)
     */
    resolved_at?: string | null;
    rule?: (null | i64);
    /**
     * Rule reference
@@ -51,10 +55,6 @@ export type ApiResponse_EnforcementResponse = {
     * Trigger reference
     */
    trigger_ref: string;
  };
  /**
   * Optional message

@@ -42,10 +42,6 @@ export type ApiResponse_EventResponse = {
     * Trigger reference
     */
    trigger_ref: string;
  };
  /**
   * Optional message

@@ -2,63 +2,79 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

import type { ExecutionStatus } from "./ExecutionStatus";

/**
 * Standard API response wrapper
 */
export type ApiResponse_ExecutionResponse = {
  /**
   * Response DTO for execution information
   */
  data: {
    /**
     * Action ID (optional, may be null for ad-hoc executions)
     */
    action?: number | null;
    /**
     * Action reference
     */
    action_ref: string;
    /**
     * Execution configuration/parameters
     */
    config: Record<string, any>;
    /**
     * Creation timestamp
     */
    created: string;
    /**
     * Enforcement ID (rule enforcement that triggered this)
     */
    enforcement?: number | null;
    /**
     * Executor ID (worker/executor that ran this)
     */
    executor?: number | null;
    /**
     * Execution ID
     */
    id: number;
    /**
     * Parent execution ID (for nested/child executions)
     */
    parent?: number | null;
    /**
     * Execution result/output
     */
    result: Record<string, any>;
    /**
     * Execution status
     */
    status: ExecutionStatus;
    /**
     * Last update timestamp
     */
    updated: string;
    /**
     * Workflow task metadata (only populated for workflow task executions)
     */
    workflow_task?: {
      workflow_execution: number;
      task_name: string;
      task_index?: number | null;
      task_batch?: number | null;
      retry_count: number;
      max_retries: number;
      next_retry_at?: string | null;
      timeout_seconds?: number | null;
      timed_out: boolean;
      duration_ms?: number | null;
      started_at?: string | null;
      completed_at?: string | null;
    } | null;
  };
  /**
   * Optional message
   */
  message?: string | null;
};

@@ -22,6 +22,10 @@ export type ApiResponse_PackResponse = {
     * Creation timestamp
     */
    created: string;
    /**
     * Pack dependencies (refs of required packs)
     */
    dependencies: Array<string>;
    /**
     * Pack description
@@ -47,7 +51,7 @@ export type ApiResponse_PackResponse = {
     */
    ref: string;
    /**
     * Runtime dependencies (e.g., shell, python, nodejs)
     */
    runtime_deps: Array<string>;
    /**

@@ -11,9 +11,9 @@ export type ApiResponse_RuleResponse = {
   */
  data: {
    /**
     * Action ID (null if the referenced action has been deleted)
     */
    action?: number | null;
    /**
     * Parameters to pass to the action when rule is triggered
     */
@@ -63,9 +63,9 @@ export type ApiResponse_RuleResponse = {
     */
    ref: string;
    /**
     * Trigger ID (null if the referenced trigger has been deleted)
     */
    trigger?: number | null;
    /**
     * Parameters for trigger configuration and event filtering
     */

@@ -43,7 +43,7 @@ export type ApiResponse_SensorResponse = {
     */
    pack_ref?: string | null;
    /**
     * Parameter schema (StackStorm-style with inline required/secret)
     */
    param_schema: any | null;
    /**

@@ -47,7 +47,7 @@ export type ApiResponse_TriggerResponse = {
     */
    pack_ref?: string | null;
    /**
     * Parameter schema (StackStorm-style with inline required/secret)
     */
    param_schema: any | null;
    /**

@@ -47,7 +47,7 @@ export type ApiResponse_WorkflowResponse = {
     */
    pack_ref: string;
    /**
     * Parameter schema (StackStorm-style with inline required/secret)
     */
    param_schema: any | null;
    /**

@@ -19,7 +19,7 @@ export type CreateActionRequest = {
   */
  label: string;
  /**
   * Output schema (flat format) defining expected outputs with inline required/secret
   */
  out_schema?: any | null;
  /**
@@ -27,7 +27,7 @@ export type CreateActionRequest = {
   */
  pack_ref: string;
  /**
   * Parameter schema (StackStorm-style) defining expected inputs with inline required/secret
   */
  param_schema?: any | null;
  /**
@@ -38,5 +38,9 @@ export type CreateActionRequest = {
   * Optional runtime ID for this action
   */
  runtime?: number | null;
  /**
   * Optional semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
   */
  runtime_version_constraint?: string | null;
};

@@ -17,7 +17,7 @@ export type CreateInquiryRequest = {
   */
  prompt: string;
  /**
   * Optional schema for the expected response format (flat format with inline required/secret)
   */
  response_schema: Record<string, any>;
  /**

@@ -7,13 +7,17 @@
 */
export type CreatePackRequest = {
  /**
   * Configuration schema (flat format with inline required/secret per parameter)
   */
  conf_schema?: Record<string, any>;
  /**
   * Pack configuration values
   */
  config?: Record<string, any>;
  /**
   * Pack dependencies (refs of required packs)
   */
  dependencies?: Array<string>;
  /**
   * Pack description
   */
@@ -35,7 +39,7 @@ export type CreatePackRequest = {
   */
  ref: string;
  /**
   * Runtime dependencies (e.g., shell, python, nodejs)
   */
  runtime_deps?: Array<string>;
  /**

@@ -31,7 +31,7 @@ export type CreateSensorRequest = {
   */
  pack_ref: string;
  /**
   * Parameter schema (flat format) for sensor configuration
   */
  param_schema?: any | null;
  /**

@@ -19,7 +19,7 @@ export type CreateTriggerRequest = {
   */
  label: string;
  /**
   * Output schema (flat format) defining event data structure with inline required/secret
   */
  out_schema?: any | null;
  /**
@@ -27,7 +27,7 @@ export type CreateTriggerRequest = {
   */
  pack_ref?: string | null;
  /**
   * Parameter schema (StackStorm-style) defining trigger configuration with inline required/secret
   */
  param_schema?: any | null;
  /**

@@ -23,7 +23,7 @@ export type CreateWorkflowRequest = {
   */
  label: string;
  /**
   * Output schema (flat format) defining expected outputs with inline required/secret
   */
  out_schema: Record<string, any>;
  /**
@@ -31,7 +31,7 @@ export type CreateWorkflowRequest = {
   */
  pack_ref: string;
  /**
   * Parameter schema (StackStorm-style) defining expected inputs with inline required/secret
   */
  param_schema: Record<string, any>;
  /**

@@ -34,6 +34,10 @@ export type EnforcementResponse = {
   * Enforcement payload
   */
  payload: Record<string, any>;
  /**
   * Timestamp when the enforcement was resolved (status changed from created to processed/disabled)
   */
  resolved_at?: string | null;
  rule?: (null | i64);
  /**
   * Rule reference
@@ -47,9 +51,5 @@
   * Trigger reference
   */
  trigger_ref: string;
};

@@ -38,9 +38,5 @@ export type EventResponse = {
   * Trigger reference
   */
  trigger_ref: string;
};

@@ -2,54 +2,70 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

import type { ExecutionStatus } from "./ExecutionStatus";

/**
 * Response DTO for execution information
 */
export type ExecutionResponse = {
  /**
   * Action ID (optional, may be null for ad-hoc executions)
   */
  action?: number | null;
  /**
   * Action reference
   */
  action_ref: string;
  /**
   * Execution configuration/parameters
   */
  config: Record<string, any>;
  /**
   * Creation timestamp
   */
  created: string;
  /**
   * Enforcement ID (rule enforcement that triggered this)
   */
  enforcement?: number | null;
  /**
   * Executor ID (worker/executor that ran this)
   */
  executor?: number | null;
  /**
   * Execution ID
   */
  id: number;
  /**
   * Parent execution ID (for nested/child executions)
   */
  parent?: number | null;
  /**
   * Execution result/output
   */
  result: Record<string, any>;
  /**
   * Execution status
   */
  status: ExecutionStatus;
  /**
   * Last update timestamp
   */
  updated: string;
  /**
   * Workflow task metadata (only populated for workflow task executions)
   */
  workflow_task?: {
    workflow_execution: number;
    task_name: string;
    task_index?: number | null;
    task_batch?: number | null;
    retry_count: number;
    max_retries: number;
    next_retry_at?: string | null;
    timeout_seconds?: number | null;
    timed_out: boolean;
    duration_ms?: number | null;
    started_at?: string | null;
    completed_at?: string | null;
  } | null;
};
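For orientation, a workflow task execution's `workflow_task` object might look like the literal below. The values are hypothetical, but the field names and which of them are required (`workflow_execution`, `task_name`, `retry_count`, `max_retries`, `timed_out`) follow the type above:

```shell
# Hypothetical workflow_task payload; only the non-optional fields are set.
workflow_task='{
  "workflow_execution": 42,
  "task_name": "notify",
  "retry_count": 0,
  "max_retries": 3,
  "timed_out": false
}'
# Cheap shape check without a JSON parser
if echo "$workflow_task" | grep -q '"task_name": "notify"'; then
    echo "shape ok"
fi
```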

@@ -2,46 +2,62 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

import type { ExecutionStatus } from "./ExecutionStatus";

/**
 * Simplified execution response (for list endpoints)
 */
export type ExecutionSummary = {
  /**
   * Action reference
   */
  action_ref: string;
  /**
   * Creation timestamp
   */
  created: string;
  /**
   * Enforcement ID
   */
  enforcement?: number | null;
  /**
   * Execution ID
   */
  id: number;
  /**
   * Parent execution ID
   */
  parent?: number | null;
  /**
   * Rule reference (if triggered by a rule)
   */
  rule_ref?: string | null;
  /**
   * Execution status
   */
  status: ExecutionStatus;
  /**
   * Trigger reference (if triggered by a trigger)
   */
  trigger_ref?: string | null;
  /**
   * Last update timestamp
   */
  updated: string;
  /**
   * Workflow task metadata (only populated for workflow task executions)
   */
  workflow_task?: {
    workflow_execution: number;
    task_name: string;
    task_index?: number | null;
    task_batch?: number | null;
    retry_count: number;
    max_retries: number;
    next_retry_at?: string | null;
    timeout_seconds?: number | null;
    timed_out: boolean;
    duration_ms?: number | null;
    started_at?: string | null;
    completed_at?: string | null;
  } | null;
};

@@ -6,20 +6,21 @@
 * Request DTO for installing a pack from remote source
 */
export type InstallPackRequest = {
  /**
   * Git branch, tag, or commit reference
   */
  ref_spec?: string | null;
  /**
   * Skip dependency validation (not recommended)
   */
  skip_deps?: boolean;
  /**
   * Skip running pack tests during installation
   */
  skip_tests?: boolean;
  /**
   * Repository URL or source location
   */
  source: string;
};

@@ -18,6 +18,10 @@ export type PackResponse = {
   * Creation timestamp
   */
  created: string;
  /**
   * Pack dependencies (refs of required packs)
   */
  dependencies: Array<string>;
  /**
   * Pack description
   */
@@ -43,7 +47,7 @@ export type PackResponse = {
   */
  ref: string;
  /**
   * Runtime dependencies (e.g., shell, python, nodejs)
   */
  runtime_deps: Array<string>;
  /**

@@ -2,55 +2,62 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

import type { PaginationMeta } from "./PaginationMeta";

/**
 * Paginated response wrapper
 */
export type PaginatedResponse_ActionSummary = {
  /**
   * The data items
   */
  data: Array<{
    /**
     * Creation timestamp
     */
    created: string;
    /**
     * Action description
     */
    description: string;
    /**
     * Entry point
     */
    entrypoint: string;
    /**
     * Action ID
     */
    id: number;
    /**
     * Human-readable label
     */
    label: string;
    /**
     * Pack reference
     */
    pack_ref: string;
    /**
     * Unique reference identifier
     */
    ref: string;
    /**
     * Runtime ID
     */
    runtime?: number | null;
    /**
     * Semver version constraint for the runtime
     */
    runtime_version_constraint?: string | null;
    /**
     * Last update timestamp
     */
    updated: string;
    /**
     * Workflow definition ID (non-null if this action is a workflow)
     */
    workflow_def?: number | null;
  }>;
  /**
   * Pagination metadata
   */
  pagination: PaginationMeta;
};

@@ -2,56 +2,72 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

import type { ExecutionStatus } from "./ExecutionStatus";
import type { PaginationMeta } from "./PaginationMeta";

/**
 * Paginated response wrapper
 */
export type PaginatedResponse_ExecutionSummary = {
  /**
   * The data items
   */
  data: Array<{
    /**
     * Action reference
     */
    action_ref: string;
    /**
     * Creation timestamp
     */
    created: string;
    /**
     * Enforcement ID
     */
    enforcement?: number | null;
    /**
     * Execution ID
     */
    id: number;
    /**
     * Parent execution ID
     */
    parent?: number | null;
    /**
     * Rule reference (if triggered by a rule)
     */
    rule_ref?: string | null;
    /**
     * Execution status
     */
    status: ExecutionStatus;
    /**
     * Trigger reference (if triggered by a trigger)
     */
    trigger_ref?: string | null;
    /**
     * Last update timestamp
     */
    updated: string;
    /**
     * Workflow task metadata (only populated for workflow task executions)
     */
    workflow_task?: {
      workflow_execution: number;
      task_name: string;
      task_index?: number | null;
      task_batch?: number | null;
      retry_count: number;
      max_retries: number;
      next_retry_at?: string | null;
      timeout_seconds?: number | null;
      timed_out: boolean;
      duration_ms?: number | null;
      started_at?: string | null;
      completed_at?: string | null;
    } | null;
  }>;
  /**
   * Pagination metadata
   */
  pagination: PaginationMeta;
};

@@ -7,9 +7,9 @@
 */
export type RuleResponse = {
  /**
   * Action ID (null if the referenced action has been deleted)
   */
  action?: number | null;
  /**
   * Parameters to pass to the action when rule is triggered
   */
@@ -59,9 +59,9 @@ export type RuleResponse = {
   */
  ref: string;
  /**
   * Trigger ID (null if the referenced trigger has been deleted)
   */
  trigger?: number | null;
  /**
   * Parameters for trigger configuration and event filtering
   */

@@ -39,7 +39,7 @@ export type SensorResponse = {
   */
  pack_ref?: string | null;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
  /**

@@ -43,7 +43,7 @@ export type TriggerResponse = {
   */
  pack_ref?: string | null;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
  /**

@@ -23,12 +23,16 @@ export type UpdateActionRequest = {
   */
  out_schema: any | null;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
  /**
   * Runtime ID
   */
  runtime?: number | null;
  /**
   * Optional semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
   */
  runtime_version_constraint?: string | null;
};

@@ -14,6 +14,10 @@ export type UpdatePackRequest = {
   * Pack configuration values
   */
  config: any | null;
  /**
   * Pack dependencies (refs of required packs)
   */
  dependencies?: any[] | null;
  /**
   * Pack description
   */
@@ -31,7 +35,7 @@ export type UpdatePackRequest = {
   */
  meta: any | null;
  /**
   * Runtime dependencies (e.g., shell, python, nodejs)
   */
  runtime_deps?: any[] | null;
  /**

@@ -23,7 +23,7 @@ export type UpdateSensorRequest = {
   */
  label?: string | null;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
};

@@ -23,7 +23,7 @@ export type UpdateTriggerRequest = {
   */
  out_schema: any | null;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
};

@@ -27,7 +27,7 @@ export type UpdateWorkflowRequest = {
   */
  out_schema: any | null;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
  /**

@@ -43,7 +43,7 @@ export type WorkflowResponse = {
   */
  pack_ref: string;
  /**
   * Parameter schema (StackStorm-style with inline required/secret)
   */
  param_schema: any | null;
  /**

@@ -2,432 +2,456 @@
/* istanbul ignore file */
/* tslint:disable */
/* eslint-disable */

import type { CreateActionRequest } from "../models/CreateActionRequest";
import type { PaginatedResponse_ActionSummary } from "../models/PaginatedResponse_ActionSummary";
import type { SuccessResponse } from "../models/SuccessResponse";
import type { UpdateActionRequest } from "../models/UpdateActionRequest";
import type { CancelablePromise } from "../core/CancelablePromise";
import { OpenAPI } from "../core/OpenAPI";
import { request as __request } from "../core/request";

export class ActionsService {
  /**
   * List all actions with pagination
   * @returns PaginatedResponse_ActionSummary List of actions
   * @throws ApiError
   */
  public static listActions({
    page,
    pageSize,
  }: {
    /**
     * Page number (1-based)
     */
    page?: number;
    /**
     * Number of items per page
     */
    pageSize?: number;
  }): CancelablePromise<PaginatedResponse_ActionSummary> {
    return __request(OpenAPI, {
      method: "GET",
      url: "/api/v1/actions",
      query: {
        page: page,
        page_size: pageSize,
      },
    });
  }
  /**
   * Create a new action
   * @returns any Action created successfully
   * @throws ApiError
   */
  public static createAction({
    requestBody,
  }: {
    requestBody: CreateActionRequest;
  }): CancelablePromise<{
    /**
     * Response DTO for action information
     */
    data: {
      /**
       * Creation timestamp
       */
      created: string;
      /**
       * Action description
       */
      description: string;
      /**
       * Entry point
       */
      entrypoint: string;
      /**
       * Action ID
       */
      id: number;
      /**
       * Whether this is an ad-hoc action (not from pack installation)
       */
      is_adhoc: boolean;
      /**
       * Human-readable label
       */
      label: string;
      /**
       * Output schema
       */
      out_schema: any | null;
      /**
       * Pack ID
       */
      pack: number;
      /**
       * Pack reference
       */
      pack_ref: string;
      /**
       * Parameter schema
       */
      param_schema: any | null;
      /**
       * Unique reference identifier
       */
      ref: string;
      /**
       * Runtime ID
       */
      runtime?: number | null;
      /**
       * Last update timestamp
       */
      updated: string;
};
/**
* Optional message
*/
message?: string | null;
}> {
return __request(OpenAPI, {
method: 'POST',
url: '/api/v1/actions',
body: requestBody,
mediaType: 'application/json',
errors: {
400: `Validation error`,
404: `Pack not found`,
409: `Action with same ref already exists`,
},
});
}
/** /**
* Get a single action by reference * Response DTO for action information
* @returns any Action details
* @throws ApiError
*/ */
public static getAction({ data: {
ref, /**
}: { * Creation timestamp
/** */
* Action reference identifier created: string;
*/ /**
ref: string, * Action description
}): CancelablePromise<{ */
/** description: string;
* Response DTO for action information /**
*/ * Entry point
data: { */
/** entrypoint: string;
* Creation timestamp /**
*/ * Action ID
created: string; */
/** id: number;
* Action description /**
*/ * Whether this is an ad-hoc action (not from pack installation)
description: string; */
/** is_adhoc: boolean;
* Entry point /**
*/ * Human-readable label
entrypoint: string; */
/** label: string;
* Action ID /**
*/ * Output schema
id: number; */
/** out_schema: any | null;
* Whether this is an ad-hoc action (not from pack installation) /**
*/ * Pack ID
is_adhoc: boolean; */
/** pack: number;
* Human-readable label /**
*/ * Pack reference
label: string; */
/** pack_ref: string;
* Output schema /**
*/ * Parameter schema (StackStorm-style with inline required/secret)
out_schema: any | null; */
/** param_schema: any | null;
* Pack ID /**
*/ * Unique reference identifier
pack: number; */
/** ref: string;
* Pack reference /**
*/ * Runtime ID
pack_ref: string; */
/** runtime?: number | null;
* Parameter schema /**
*/ * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
param_schema: any | null; */
/** runtime_version_constraint?: string | null;
* Unique reference identifier /**
*/ * Last update timestamp
ref: string; */
/** updated: string;
* Runtime ID /**
*/ * Workflow definition ID (non-null if this action is a workflow)
runtime?: number | null; */
/** workflow_def?: number | null;
* Last update timestamp };
*/
updated: string;
};
/**
* Optional message
*/
message?: string | null;
}> {
return __request(OpenAPI, {
method: 'GET',
url: '/api/v1/actions/{ref}',
path: {
'ref': ref,
},
errors: {
404: `Action not found`,
},
});
}
/** /**
* Update an existing action * Optional message
* @returns any Action updated successfully
* @throws ApiError
*/ */
public static updateAction({ message?: string | null;
ref, }> {
requestBody, return __request(OpenAPI, {
}: { method: "POST",
/** url: "/api/v1/actions",
* Action reference identifier body: requestBody,
*/ mediaType: "application/json",
ref: string, errors: {
requestBody: UpdateActionRequest, 400: `Validation error`,
}): CancelablePromise<{ 404: `Pack not found`,
/** 409: `Action with same ref already exists`,
* Response DTO for action information },
*/ });
data: { }
/** /**
* Creation timestamp * Get a single action by reference
*/ * @returns any Action details
created: string; * @throws ApiError
/** */
* Action description public static getAction({
*/ ref,
description: string; }: {
/**
* Entry point
*/
entrypoint: string;
/**
* Action ID
*/
id: number;
/**
* Whether this is an ad-hoc action (not from pack installation)
*/
is_adhoc: boolean;
/**
* Human-readable label
*/
label: string;
/**
* Output schema
*/
out_schema: any | null;
/**
* Pack ID
*/
pack: number;
/**
* Pack reference
*/
pack_ref: string;
/**
* Parameter schema
*/
param_schema: any | null;
/**
* Unique reference identifier
*/
ref: string;
/**
* Runtime ID
*/
runtime?: number | null;
/**
* Last update timestamp
*/
updated: string;
};
/**
* Optional message
*/
message?: string | null;
}> {
return __request(OpenAPI, {
method: 'PUT',
url: '/api/v1/actions/{ref}',
path: {
'ref': ref,
},
body: requestBody,
mediaType: 'application/json',
errors: {
400: `Validation error`,
404: `Action not found`,
},
});
}
/** /**
* Delete an action * Action reference identifier
* @returns SuccessResponse Action deleted successfully
* @throws ApiError
*/ */
public static deleteAction({ ref: string;
ref, }): CancelablePromise<{
}: {
/**
* Action reference identifier
*/
ref: string,
}): CancelablePromise<SuccessResponse> {
return __request(OpenAPI, {
method: 'DELETE',
url: '/api/v1/actions/{ref}',
path: {
'ref': ref,
},
errors: {
404: `Action not found`,
},
});
}
/** /**
* Get queue statistics for an action * Response DTO for action information
* @returns any Queue statistics
* @throws ApiError
*/ */
public static getQueueStats({ data: {
ref, /**
}: { * Creation timestamp
/** */
* Action reference identifier created: string;
*/ /**
ref: string, * Action description
}): CancelablePromise<{ */
/** description: string;
* Response DTO for queue statistics /**
*/ * Entry point
data: { */
/** entrypoint: string;
* Action ID /**
*/ * Action ID
action_id: number; */
/** id: number;
* Action reference /**
*/ * Whether this is an ad-hoc action (not from pack installation)
action_ref: string; */
/** is_adhoc: boolean;
* Number of currently running executions /**
*/ * Human-readable label
active_count: number; */
/** label: string;
* Timestamp of last statistics update /**
*/ * Output schema
last_updated: string; */
/** out_schema: any | null;
* Maximum concurrent executions allowed /**
*/ * Pack ID
max_concurrent: number; */
/** pack: number;
* Timestamp of oldest queued execution (if any) /**
*/ * Pack reference
oldest_enqueued_at?: string | null; */
/** pack_ref: string;
* Number of executions waiting in queue /**
*/ * Parameter schema (StackStorm-style with inline required/secret)
queue_length: number; */
/** param_schema: any | null;
* Total executions completed since queue creation /**
*/ * Unique reference identifier
total_completed: number; */
/** ref: string;
* Total executions enqueued since queue creation /**
*/ * Runtime ID
total_enqueued: number; */
}; runtime?: number | null;
/** /**
* Optional message * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
*/ */
message?: string | null; runtime_version_constraint?: string | null;
}> { /**
return __request(OpenAPI, { * Last update timestamp
method: 'GET', */
url: '/api/v1/actions/{ref}/queue-stats', updated: string;
path: { /**
'ref': ref, * Workflow definition ID (non-null if this action is a workflow)
}, */
errors: { workflow_def?: number | null;
404: `Action not found or no queue statistics available`, };
},
});
}
/** /**
* List actions by pack reference * Optional message
* @returns PaginatedResponse_ActionSummary List of actions for pack
* @throws ApiError
*/ */
public static listActionsByPack({ message?: string | null;
packRef, }> {
page, return __request(OpenAPI, {
pageSize, method: "GET",
}: { url: "/api/v1/actions/{ref}",
/** path: {
* Pack reference identifier ref: ref,
*/ },
packRef: string, errors: {
/** 404: `Action not found`,
* Page number (1-based) },
*/ });
page?: number, }
/** /**
* Number of items per page * Update an existing action
*/ * @returns any Action updated successfully
pageSize?: number, * @throws ApiError
}): CancelablePromise<PaginatedResponse_ActionSummary> { */
return __request(OpenAPI, { public static updateAction({
method: 'GET', ref,
url: '/api/v1/packs/{pack_ref}/actions', requestBody,
path: { }: {
'pack_ref': packRef, /**
}, * Action reference identifier
query: { */
'page': page, ref: string;
'page_size': pageSize, requestBody: UpdateActionRequest;
}, }): CancelablePromise<{
errors: { /**
404: `Pack not found`, * Response DTO for action information
}, */
}); data: {
} /**
* Creation timestamp
*/
created: string;
/**
* Action description
*/
description: string;
/**
* Entry point
*/
entrypoint: string;
/**
* Action ID
*/
id: number;
/**
* Whether this is an ad-hoc action (not from pack installation)
*/
is_adhoc: boolean;
/**
* Human-readable label
*/
label: string;
/**
* Output schema
*/
out_schema: any | null;
/**
* Pack ID
*/
pack: number;
/**
* Pack reference
*/
pack_ref: string;
/**
* Parameter schema (StackStorm-style with inline required/secret)
*/
param_schema: any | null;
/**
* Unique reference identifier
*/
ref: string;
/**
* Runtime ID
*/
runtime?: number | null;
/**
* Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
*/
runtime_version_constraint?: string | null;
/**
* Last update timestamp
*/
updated: string;
/**
* Workflow definition ID (non-null if this action is a workflow)
*/
workflow_def?: number | null;
};
/**
* Optional message
*/
message?: string | null;
}> {
return __request(OpenAPI, {
method: "PUT",
url: "/api/v1/actions/{ref}",
path: {
ref: ref,
},
body: requestBody,
mediaType: "application/json",
errors: {
400: `Validation error`,
404: `Action not found`,
},
});
}
/**
* Delete an action
* @returns SuccessResponse Action deleted successfully
* @throws ApiError
*/
public static deleteAction({
ref,
}: {
/**
* Action reference identifier
*/
ref: string;
}): CancelablePromise<SuccessResponse> {
return __request(OpenAPI, {
method: "DELETE",
url: "/api/v1/actions/{ref}",
path: {
ref: ref,
},
errors: {
404: `Action not found`,
},
});
}
/**
* Get queue statistics for an action
* @returns any Queue statistics
* @throws ApiError
*/
public static getQueueStats({
ref,
}: {
/**
* Action reference identifier
*/
ref: string;
}): CancelablePromise<{
/**
* Response DTO for queue statistics
*/
data: {
/**
* Action ID
*/
action_id: number;
/**
* Action reference
*/
action_ref: string;
/**
* Number of currently running executions
*/
active_count: number;
/**
* Timestamp of last statistics update
*/
last_updated: string;
/**
* Maximum concurrent executions allowed
*/
max_concurrent: number;
/**
* Timestamp of oldest queued execution (if any)
*/
oldest_enqueued_at?: string | null;
/**
* Number of executions waiting in queue
*/
queue_length: number;
/**
* Total executions completed since queue creation
*/
total_completed: number;
/**
* Total executions enqueued since queue creation
*/
total_enqueued: number;
};
/**
* Optional message
*/
message?: string | null;
}> {
return __request(OpenAPI, {
method: "GET",
url: "/api/v1/actions/{ref}/queue-stats",
path: {
ref: ref,
},
errors: {
404: `Action not found or no queue statistics available`,
},
});
}
/**
* List actions by pack reference
* @returns PaginatedResponse_ActionSummary List of actions for pack
* @throws ApiError
*/
public static listActionsByPack({
packRef,
page,
pageSize,
}: {
/**
* Pack reference identifier
*/
packRef: string;
/**
* Page number (1-based)
*/
page?: number;
/**
* Number of items per page
*/
pageSize?: number;
}): CancelablePromise<PaginatedResponse_ActionSummary> {
return __request(OpenAPI, {
method: "GET",
url: "/api/v1/packs/{pack_ref}/actions",
path: {
pack_ref: packRef,
},
query: {
page: page,
page_size: pageSize,
},
errors: {
404: `Pack not found`,
},
});
}
} }

View File

@@ -2,92 +2,92 @@
/* istanbul ignore file */ /* istanbul ignore file */
/* tslint:disable */ /* tslint:disable */
/* eslint-disable */ /* eslint-disable */
import type { ApiResponse_EventResponse } from "../models/ApiResponse_EventResponse"; import type { ApiResponse_EventResponse } from '../models/ApiResponse_EventResponse';
import type { i64 } from "../models/i64"; import type { i64 } from '../models/i64';
import type { PaginatedResponse_EventSummary } from "../models/PaginatedResponse_EventSummary"; import type { PaginatedResponse_EventSummary } from '../models/PaginatedResponse_EventSummary';
import type { CancelablePromise } from "../core/CancelablePromise"; import type { CancelablePromise } from '../core/CancelablePromise';
import { OpenAPI } from "../core/OpenAPI"; import { OpenAPI } from '../core/OpenAPI';
import { request as __request } from "../core/request"; import { request as __request } from '../core/request';
export class EventsService { export class EventsService {
/**
* List all events with pagination and optional filters
* @returns PaginatedResponse_EventSummary List of events
* @throws ApiError
*/
public static listEvents({
trigger,
triggerRef,
ruleRef,
source,
page,
perPage,
}: {
/** /**
* Filter by trigger ID * List all events with pagination and optional filters
* @returns PaginatedResponse_EventSummary List of events
* @throws ApiError
*/ */
trigger?: null | i64; public static listEvents({
trigger,
triggerRef,
ruleRef,
source,
page,
perPage,
}: {
/**
* Filter by trigger ID
*/
trigger?: (null | i64),
/**
* Filter by trigger reference
*/
triggerRef?: string | null,
/**
* Filter by rule reference
*/
ruleRef?: string | null,
/**
* Filter by source ID
*/
source?: (null | i64),
/**
* Page number (1-indexed)
*/
page?: number,
/**
* Items per page
*/
perPage?: number,
}): CancelablePromise<PaginatedResponse_EventSummary> {
return __request(OpenAPI, {
method: 'GET',
url: '/api/v1/events',
query: {
'trigger': trigger,
'trigger_ref': triggerRef,
'rule_ref': ruleRef,
'source': source,
'page': page,
'per_page': perPage,
},
errors: {
401: `Unauthorized`,
500: `Internal server error`,
},
});
}
/** /**
* Filter by trigger reference * Get a single event by ID
* @returns ApiResponse_EventResponse Event details
* @throws ApiError
*/ */
triggerRef?: string | null; public static getEvent({
/** id,
* Filter by rule reference }: {
*/ /**
ruleRef?: string | null; * Event ID
/** */
* Filter by source ID id: number,
*/ }): CancelablePromise<ApiResponse_EventResponse> {
source?: null | i64; return __request(OpenAPI, {
/** method: 'GET',
* Page number (1-indexed) url: '/api/v1/events/{id}',
*/ path: {
page?: number; 'id': id,
/** },
* Items per page errors: {
*/ 401: `Unauthorized`,
perPage?: number; 404: `Event not found`,
}): CancelablePromise<PaginatedResponse_EventSummary> { 500: `Internal server error`,
return __request(OpenAPI, { },
method: "GET", });
url: "/api/v1/events", }
query: {
trigger: trigger,
trigger_ref: triggerRef,
rule_ref: ruleRef,
source: source,
page: page,
per_page: perPage,
},
errors: {
401: `Unauthorized`,
500: `Internal server error`,
},
});
}
/**
* Get a single event by ID
* @returns ApiResponse_EventResponse Event details
* @throws ApiError
*/
public static getEvent({
id,
}: {
/**
* Event ID
*/
id: number;
}): CancelablePromise<ApiResponse_EventResponse> {
return __request(OpenAPI, {
method: "GET",
url: "/api/v1/events/{id}",
path: {
id: id,
},
errors: {
401: `Unauthorized`,
404: `Event not found`,
500: `Internal server error`,
},
});
}
} }

View File

@@ -2,260 +2,283 @@
/* istanbul ignore file */ /* istanbul ignore file */
/* tslint:disable */ /* tslint:disable */
/* eslint-disable */ /* eslint-disable */
import type { ExecutionStatus } from '../models/ExecutionStatus'; import type { ExecutionStatus } from "../models/ExecutionStatus";
import type { PaginatedResponse_ExecutionSummary } from '../models/PaginatedResponse_ExecutionSummary'; import type { PaginatedResponse_ExecutionSummary } from "../models/PaginatedResponse_ExecutionSummary";
import type { CancelablePromise } from '../core/CancelablePromise'; import type { CancelablePromise } from "../core/CancelablePromise";
import { OpenAPI } from '../core/OpenAPI'; import { OpenAPI } from "../core/OpenAPI";
import { request as __request } from '../core/request'; import { request as __request } from "../core/request";
export class ExecutionsService { export class ExecutionsService {
/**
* List all executions with pagination and optional filters
* @returns PaginatedResponse_ExecutionSummary List of executions
* @throws ApiError
*/
public static listExecutions({
status,
actionRef,
packName,
ruleRef,
triggerRef,
executor,
resultContains,
enforcement,
parent,
topLevelOnly,
page,
perPage,
}: {
/** /**
* List all executions with pagination and optional filters * Filter by execution status
* @returns PaginatedResponse_ExecutionSummary List of executions
* @throws ApiError
*/ */
public static listExecutions({ status?: null | ExecutionStatus;
status,
actionRef,
packName,
ruleRef,
triggerRef,
executor,
resultContains,
enforcement,
parent,
page,
perPage,
}: {
/**
* Filter by execution status
*/
status?: (null | ExecutionStatus),
/**
* Filter by action reference
*/
actionRef?: string | null,
/**
* Filter by pack name
*/
packName?: string | null,
/**
* Filter by rule reference
*/
ruleRef?: string | null,
/**
* Filter by trigger reference
*/
triggerRef?: string | null,
/**
* Filter by executor ID
*/
executor?: number | null,
/**
* Search in result JSON (case-insensitive substring match)
*/
resultContains?: string | null,
/**
* Filter by enforcement ID
*/
enforcement?: number | null,
/**
* Filter by parent execution ID
*/
parent?: number | null,
/**
* Page number (for pagination)
*/
page?: number,
/**
* Items per page (for pagination)
*/
perPage?: number,
}): CancelablePromise<PaginatedResponse_ExecutionSummary> {
return __request(OpenAPI, {
method: 'GET',
url: '/api/v1/executions',
query: {
'status': status,
'action_ref': actionRef,
'pack_name': packName,
'rule_ref': ruleRef,
'trigger_ref': triggerRef,
'executor': executor,
'result_contains': resultContains,
'enforcement': enforcement,
'parent': parent,
'page': page,
'per_page': perPage,
},
});
}
/** /**
* List executions by enforcement ID * Filter by action reference
* @returns PaginatedResponse_ExecutionSummary List of executions for enforcement
* @throws ApiError
*/ */
public static listExecutionsByEnforcement({ actionRef?: string | null;
enforcementId,
page,
pageSize,
}: {
/**
* Enforcement ID
*/
enforcementId: number,
/**
* Page number (1-based)
*/
page?: number,
/**
* Number of items per page
*/
pageSize?: number,
}): CancelablePromise<PaginatedResponse_ExecutionSummary> {
return __request(OpenAPI, {
method: 'GET',
url: '/api/v1/executions/enforcement/{enforcement_id}',
path: {
'enforcement_id': enforcementId,
},
query: {
'page': page,
'page_size': pageSize,
},
errors: {
500: `Internal server error`,
},
});
}
/** /**
* Get execution statistics * Filter by pack name
* @returns any Execution statistics
* @throws ApiError
*/ */
public static getExecutionStats(): CancelablePromise<Record<string, any>> { packName?: string | null;
return __request(OpenAPI, {
method: 'GET',
url: '/api/v1/executions/stats',
errors: {
500: `Internal server error`,
},
});
}
/** /**
* List executions by status * Filter by rule reference
* @returns PaginatedResponse_ExecutionSummary List of executions with specified status
* @throws ApiError
*/ */
public static listExecutionsByStatus({ ruleRef?: string | null;
status,
page,
pageSize,
}: {
/**
* Execution status (requested, scheduling, scheduled, running, completed, failed, canceling, cancelled, timeout, abandoned)
*/
status: string,
/**
* Page number (1-based)
*/
page?: number,
/**
* Number of items per page
*/
pageSize?: number,
}): CancelablePromise<PaginatedResponse_ExecutionSummary> {
return __request(OpenAPI, {
method: 'GET',
url: '/api/v1/executions/status/{status}',
path: {
'status': status,
},
query: {
'page': page,
'page_size': pageSize,
},
errors: {
400: `Invalid status`,
500: `Internal server error`,
},
});
}
/** /**
* Get a single execution by ID * Filter by trigger reference
* @returns any Execution details
* @throws ApiError
*/ */
public static getExecution({ triggerRef?: string | null;
id, /**
}: { * Filter by executor ID
/** */
* Execution ID executor?: number | null;
*/ /**
id: number, * Search in result JSON (case-insensitive substring match)
}): CancelablePromise<{ */
/** resultContains?: string | null;
* Response DTO for execution information /**
*/ * Filter by enforcement ID
data: { */
/** enforcement?: number | null;
* Action ID (optional, may be null for ad-hoc executions) /**
*/ * Filter by parent execution ID
action?: number | null; */
/** parent?: number | null;
* Action reference /**
*/ * If true, only return top-level executions (those without a parent)
action_ref: string; */
/** topLevelOnly?: boolean | null;
* Execution configuration/parameters /**
*/ * Page number (for pagination)
config: Record<string, any>; */
/** page?: number;
* Creation timestamp /**
*/ * Items per page (for pagination)
created: string; */
/** perPage?: number;
* Enforcement ID (rule enforcement that triggered this) }): CancelablePromise<PaginatedResponse_ExecutionSummary> {
*/ return __request(OpenAPI, {
enforcement?: number | null; method: "GET",
/** url: "/api/v1/executions",
* Executor ID (worker/executor that ran this) query: {
*/ status: status,
executor?: number | null; action_ref: actionRef,
/** pack_name: packName,
* Execution ID rule_ref: ruleRef,
*/ trigger_ref: triggerRef,
id: number; executor: executor,
/** result_contains: resultContains,
* Parent execution ID (for nested/child executions) enforcement: enforcement,
*/ parent: parent,
parent?: number | null; top_level_only: topLevelOnly,
/** page: page,
* Execution result/output per_page: perPage,
*/ },
result: Record<string, any>; });
/** }
* Execution status /**
*/ * List executions by enforcement ID
status: ExecutionStatus; * @returns PaginatedResponse_ExecutionSummary List of executions for enforcement
/** * @throws ApiError
* Last update timestamp */
*/ public static listExecutionsByEnforcement({
updated: string; enforcementId,
}; page,
/** pageSize,
* Optional message }: {
*/ /**
message?: string | null; * Enforcement ID
}> { */
return __request(OpenAPI, { enforcementId: number;
method: 'GET', /**
url: '/api/v1/executions/{id}', * Page number (1-based)
path: { */
'id': id, page?: number;
}, /**
errors: { * Number of items per page
404: `Execution not found`, */
}, pageSize?: number;
}); }): CancelablePromise<PaginatedResponse_ExecutionSummary> {
} return __request(OpenAPI, {
method: "GET",
url: "/api/v1/executions/enforcement/{enforcement_id}",
path: {
enforcement_id: enforcementId,
},
query: {
page: page,
page_size: pageSize,
},
errors: {
500: `Internal server error`,
},
});
}
/**
* Get execution statistics
* @returns any Execution statistics
* @throws ApiError
*/
public static getExecutionStats(): CancelablePromise<Record<string, any>> {
return __request(OpenAPI, {
method: "GET",
url: "/api/v1/executions/stats",
errors: {
500: `Internal server error`,
},
});
}
/**
* List executions by status
* @returns PaginatedResponse_ExecutionSummary List of executions with specified status
* @throws ApiError
*/
public static listExecutionsByStatus({
status,
page,
pageSize,
}: {
/**
* Execution status (requested, scheduling, scheduled, running, completed, failed, canceling, cancelled, timeout, abandoned)
*/
status: string;
/**
* Page number (1-based)
*/
page?: number;
/**
* Number of items per page
*/
pageSize?: number;
}): CancelablePromise<PaginatedResponse_ExecutionSummary> {
return __request(OpenAPI, {
method: "GET",
url: "/api/v1/executions/status/{status}",
path: {
status: status,
},
query: {
page: page,
page_size: pageSize,
},
errors: {
400: `Invalid status`,
500: `Internal server error`,
},
});
}
/**
* Get a single execution by ID
* @returns any Execution details
* @throws ApiError
*/
public static getExecution({
id,
}: {
/**
* Execution ID
*/
id: number;
}): CancelablePromise<{
/**
* Response DTO for execution information
*/
data: {
/**
* Action ID (optional, may be null for ad-hoc executions)
*/
action?: number | null;
/**
* Action reference
*/
action_ref: string;
/**
* Execution configuration/parameters
*/
config: Record<string, any>;
/**
* Creation timestamp
*/
created: string;
/**
* Enforcement ID (rule enforcement that triggered this)
*/
enforcement?: number | null;
/**
* Executor ID (worker/executor that ran this)
*/
executor?: number | null;
/**
* Execution ID
*/
id: number;
/**
* Parent execution ID (for nested/child executions)
*/
parent?: number | null;
/**
* Execution result/output
*/
result: Record<string, any>;
/**
* Execution status
*/
status: ExecutionStatus;
/**
* Last update timestamp
*/
updated: string;
/**
* Workflow task metadata (only populated for workflow task executions)
*/
workflow_task?: {
workflow_execution: number;
task_name: string;
task_index?: number | null;
task_batch?: number | null;
retry_count: number;
max_retries: number;
next_retry_at?: string | null;
timeout_seconds?: number | null;
timed_out: boolean;
duration_ms?: number | null;
started_at?: string | null;
completed_at?: string | null;
} | null;
};
/**
* Optional message
*/
message?: string | null;
}> {
return __request(OpenAPI, {
method: "GET",
url: "/api/v1/executions/{id}",
path: {
id: id,
},
errors: {
404: `Execution not found`,
},
});
}
} }

View File

@@ -71,6 +71,10 @@ export class PacksService {
* Creation timestamp * Creation timestamp
*/ */
created: string; created: string;
/**
* Pack dependencies (refs of required packs)
*/
dependencies: Array<string>;
/** /**
* Pack description * Pack description
*/ */
@@ -96,7 +100,7 @@ export class PacksService {
*/ */
ref: string; ref: string;
/** /**
* Runtime dependencies * Runtime dependencies (e.g., shell, python, nodejs)
*/ */
runtime_deps: Array<string>; runtime_deps: Array<string>;
/** /**
@@ -145,7 +149,6 @@ export class PacksService {
mediaType: 'application/json', mediaType: 'application/json',
errors: { errors: {
400: `Invalid request or tests failed`, 400: `Invalid request or tests failed`,
409: `Pack already exists`,
501: `Not implemented yet`, 501: `Not implemented yet`,
}, },
}); });
@@ -200,6 +203,10 @@ export class PacksService {
* Creation timestamp * Creation timestamp
*/ */
created: string; created: string;
/**
* Pack dependencies (refs of required packs)
*/
dependencies: Array<string>;
/** /**
* Pack description * Pack description
*/ */
@@ -225,7 +232,7 @@ export class PacksService {
*/ */
ref: string; ref: string;
/** /**
* Runtime dependencies * Runtime dependencies (e.g., shell, python, nodejs)
*/ */
runtime_deps: Array<string>; runtime_deps: Array<string>;
/** /**
@@ -288,6 +295,10 @@ export class PacksService {
* Creation timestamp * Creation timestamp
*/ */
created: string; created: string;
/**
* Pack dependencies (refs of required packs)
*/
dependencies: Array<string>;
/** /**
* Pack description * Pack description
*/ */
@@ -313,7 +324,7 @@ export class PacksService {
*/ */
ref: string; ref: string;
/** /**
* Runtime dependencies * Runtime dependencies (e.g., shell, python, nodejs)
*/ */
runtime_deps: Array<string>; runtime_deps: Array<string>;
/** /**

View File

@@ -150,7 +150,7 @@ export class WorkflowsService {
*/ */
pack_ref: string; pack_ref: string;
/** /**
* Parameter schema * Parameter schema (StackStorm-style with inline required/secret)
*/ */
param_schema: any | null; param_schema: any | null;
/** /**
@@ -241,7 +241,7 @@ export class WorkflowsService {
*/ */
pack_ref: string; pack_ref: string;
/** /**
* Parameter schema * Parameter schema (StackStorm-style with inline required/secret)
*/ */
param_schema: any | null; param_schema: any | null;
/** /**
@@ -333,7 +333,7 @@ export class WorkflowsService {
*/ */
pack_ref: string; pack_ref: string;
/** /**
* Parameter schema * Parameter schema (StackStorm-style with inline required/secret)
*/ */
param_schema: any | null; param_schema: any | null;
/** /**

View File

@@ -0,0 +1,312 @@
import { useState, useMemo } from "react";
import { Link } from "react-router-dom";
import { formatDistanceToNow } from "date-fns";
import {
ChevronDown,
ChevronRight,
Workflow,
CheckCircle2,
XCircle,
Clock,
Loader2,
AlertTriangle,
Ban,
CircleDot,
RotateCcw,
} from "lucide-react";
import { useChildExecutions } from "@/hooks/useExecutions";
interface WorkflowTasksPanelProps {
/** The parent (workflow) execution ID */
parentExecutionId: number;
/** Whether the panel starts collapsed (default: false — open by default for workflows) */
defaultCollapsed?: boolean;
}
/** Format a duration in ms to a human-readable string. */
function formatDuration(ms: number): string {
if (ms < 1000) return `${ms}ms`;
const secs = ms / 1000;
if (secs < 60) return `${secs.toFixed(1)}s`;
const mins = Math.floor(secs / 60);
const remainSecs = Math.round(secs % 60);
if (mins < 60) return `${mins}m ${remainSecs}s`;
const hrs = Math.floor(mins / 60);
const remainMins = mins % 60;
return `${hrs}h ${remainMins}m`;
}
function getStatusIcon(status: string) {
switch (status) {
case "completed":
return <CheckCircle2 className="h-4 w-4 text-green-500" />;
case "failed":
return <XCircle className="h-4 w-4 text-red-500" />;
case "running":
return <Loader2 className="h-4 w-4 text-blue-500 animate-spin" />;
case "requested":
case "scheduling":
case "scheduled":
return <Clock className="h-4 w-4 text-yellow-500" />;
case "timeout":
return <AlertTriangle className="h-4 w-4 text-orange-500" />;
case "canceling":
case "cancelled":
return <Ban className="h-4 w-4 text-gray-400" />;
case "abandoned":
return <AlertTriangle className="h-4 w-4 text-red-400" />;
default:
return <CircleDot className="h-4 w-4 text-gray-400" />;
}
}
function getStatusBadgeClasses(status: string): string {
switch (status) {
case "completed":
return "bg-green-100 text-green-800";
case "failed":
return "bg-red-100 text-red-800";
case "running":
return "bg-blue-100 text-blue-800";
case "requested":
case "scheduling":
case "scheduled":
return "bg-yellow-100 text-yellow-800";
case "timeout":
return "bg-orange-100 text-orange-800";
case "canceling":
case "cancelled":
return "bg-gray-100 text-gray-800";
case "abandoned":
return "bg-red-100 text-red-600";
default:
return "bg-gray-100 text-gray-800";
}
}
/**
* Panel that displays workflow task (child) executions for a parent
* workflow execution. Shows each task's name, action, status, and timing.
*/
export default function WorkflowTasksPanel({
parentExecutionId,
defaultCollapsed = false,
}: WorkflowTasksPanelProps) {
const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed);
const { data, isLoading, error } = useChildExecutions(parentExecutionId);
const tasks = useMemo(() => {
if (!data?.data) return [];
return data.data;
}, [data]);
const summary = useMemo(() => {
const total = tasks.length;
const completed = tasks.filter((t) => t.status === "completed").length;
const failed = tasks.filter((t) => t.status === "failed").length;
const running = tasks.filter(
(t) =>
t.status === "running" ||
t.status === "requested" ||
t.status === "scheduling" ||
t.status === "scheduled",
).length;
const other = total - completed - failed - running;
return { total, completed, failed, running, other };
}, [tasks]);
if (!isLoading && tasks.length === 0 && !error) {
// No child tasks — nothing to show
return null;
}
return (
<div className="bg-white shadow rounded-lg">
{/* Header */}
<button
onClick={() => setIsCollapsed(!isCollapsed)}
className="w-full flex items-center justify-between p-6 text-left hover:bg-gray-50 rounded-lg transition-colors"
>
<div className="flex items-center gap-3">
{isCollapsed ? (
<ChevronRight className="h-5 w-5 text-gray-400" />
) : (
<ChevronDown className="h-5 w-5 text-gray-400" />
)}
<Workflow className="h-5 w-5 text-indigo-500" />
<h2 className="text-xl font-semibold">Workflow Tasks</h2>
{!isLoading && (
<span className="text-sm text-gray-500">
({summary.total} task{summary.total !== 1 ? "s" : ""})
</span>
)}
</div>
{/* Summary badges */}
{!isCollapsed || !isLoading ? (
<div className="flex items-center gap-2">
{summary.completed > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800">
<CheckCircle2 className="h-3 w-3" />
{summary.completed}
</span>
)}
{summary.running > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800">
<Loader2 className="h-3 w-3 animate-spin" />
{summary.running}
</span>
)}
{summary.failed > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-red-100 text-red-800">
<XCircle className="h-3 w-3" />
{summary.failed}
</span>
)}
{summary.other > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-gray-100 text-gray-700">
{summary.other}
</span>
)}
</div>
) : null}
</button>
{/* Content */}
{!isCollapsed && (
<div className="px-6 pb-6">
{isLoading && (
<div className="flex items-center justify-center py-8">
<Loader2 className="h-5 w-5 animate-spin text-gray-400" />
<span className="ml-2 text-sm text-gray-500">
Loading workflow tasks...
</span>
</div>
)}
{error && (
<div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded text-sm">
Error loading workflow tasks:{" "}
{error instanceof Error ? error.message : "Unknown error"}
</div>
)}
{!isLoading && !error && tasks.length > 0 && (
<div className="space-y-2">
{/* Column headers */}
<div className="grid grid-cols-12 gap-3 px-3 py-2 text-xs font-medium text-gray-500 uppercase tracking-wider border-b border-gray-100">
<div className="col-span-1">#</div>
<div className="col-span-3">Task</div>
<div className="col-span-3">Action</div>
<div className="col-span-2">Status</div>
<div className="col-span-2">Duration</div>
<div className="col-span-1">Retry</div>
</div>
{/* Task rows */}
{tasks.map((task, idx) => {
const wt = task.workflow_task;
const taskName = wt?.task_name ?? `Task ${idx + 1}`;
const retryCount = wt?.retry_count ?? 0;
const maxRetries = wt?.max_retries ?? 0;
const timedOut = wt?.timed_out ?? false;
// Compute duration from created → updated (best available)
const created = new Date(task.created);
const updated = new Date(task.updated);
const durationMs =
wt?.duration_ms ??
(task.status === "completed" ||
task.status === "failed" ||
task.status === "timeout"
? updated.getTime() - created.getTime()
: null);
return (
<Link
key={task.id}
to={`/executions/${task.id}`}
className="grid grid-cols-12 gap-3 px-3 py-3 rounded-lg hover:bg-gray-50 transition-colors items-center group"
>
{/* Index */}
<div className="col-span-1 text-sm text-gray-400 font-mono">
{idx + 1}
</div>
{/* Task name */}
<div className="col-span-3 flex items-center gap-2 min-w-0">
{getStatusIcon(task.status)}
<span
className="text-sm font-medium text-gray-900 truncate group-hover:text-blue-600"
title={taskName}
>
{taskName}
</span>
{wt?.task_index != null && (
<span className="text-xs text-gray-400 flex-shrink-0">
[{wt.task_index}]
</span>
)}
</div>
{/* Action ref */}
<div className="col-span-3 min-w-0">
<span
className="text-sm text-gray-600 truncate block"
title={task.action_ref}
>
{task.action_ref}
</span>
</div>
{/* Status badge */}
<div className="col-span-2 flex items-center gap-1.5">
<span
className={`inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium ${getStatusBadgeClasses(task.status)}`}
>
{task.status}
</span>
{timedOut && (
<span title="Timed out">
<AlertTriangle className="h-3.5 w-3.5 text-orange-500" />
</span>
)}
</div>
{/* Duration */}
<div className="col-span-2 text-sm text-gray-500">
{task.status === "running" ? (
<span className="text-blue-600">
{formatDistanceToNow(created, { addSuffix: false })}
</span>
) : durationMs != null && durationMs > 0 ? (
formatDuration(durationMs)
) : (
<span className="text-gray-300">&mdash;</span>
)}
</div>
{/* Retry info */}
<div className="col-span-1 text-sm text-gray-500">
{maxRetries > 0 ? (
<span
className="inline-flex items-center gap-0.5"
title={`Attempt ${retryCount + 1} of ${maxRetries + 1}`}
>
<RotateCcw className="h-3 w-3" />
{retryCount}/{maxRetries}
</span>
) : (
<span className="text-gray-300">&mdash;</span>
)}
</div>
</Link>
);
})}
</div>
)}
</div>
)}
</div>
);
}
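
The panel's summary buckets can be exercised in isolation. This is a hypothetical standalone extraction of the same counting logic for illustration; the component itself computes it inline with `useMemo`, and the function name here is not part of the codebase:

```typescript
// Standalone sketch of WorkflowTasksPanel's summary bucketing (hypothetical
// extraction; the component computes this inline over task objects).
const PENDING_STATUSES = new Set(["running", "requested", "scheduling", "scheduled"]);

function summarizeTasks(statuses: string[]) {
  const total = statuses.length;
  const completed = statuses.filter((s) => s === "completed").length;
  const failed = statuses.filter((s) => s === "failed").length;
  const running = statuses.filter((s) => PENDING_STATUSES.has(s)).length;
  // Everything else (timeout, cancelled, abandoned, ...) lands in "other"
  return { total, completed, failed, running, other: total - completed - failed - running };
}
```

Note that `timeout` counts as neither `failed` nor `running` here, which matches the panel: a timed-out task shows up only in the unlabeled gray "other" badge.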


@@ -0,0 +1,297 @@
import { memo, useEffect } from "react";
import { Link } from "react-router-dom";
import { X, ExternalLink, Loader2 } from "lucide-react";
import { useExecution } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import { formatDistanceToNow } from "date-fns";
import type { ExecutionStatus } from "@/api";
function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  // Round to whole seconds before splitting so a 59.5s remainder
  // can't render as "1m 60s"
  const totalSecs = Math.round(secs);
  const mins = Math.floor(totalSecs / 60);
  const remainSecs = totalSecs % 60;
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}
const getStatusColor = (status: string) => {
switch (status) {
case "succeeded":
case "completed":
return "bg-green-100 text-green-800";
case "failed":
case "timeout":
return "bg-red-100 text-red-800";
case "running":
return "bg-blue-100 text-blue-800";
case "scheduled":
case "scheduling":
case "requested":
return "bg-yellow-100 text-yellow-800";
case "canceling":
case "cancelled":
return "bg-gray-100 text-gray-600";
default:
return "bg-gray-100 text-gray-800";
}
};
interface ExecutionPreviewPanelProps {
executionId: number;
onClose: () => void;
}
const ExecutionPreviewPanel = memo(function ExecutionPreviewPanel({
executionId,
onClose,
}: ExecutionPreviewPanelProps) {
const { data, isLoading, error } = useExecution(executionId);
const execution = data?.data;
// Subscribe to real-time updates for this execution
useExecutionStream({ executionId, enabled: true });
// Close on Escape key
useEffect(() => {
const handleKeyDown = (e: KeyboardEvent) => {
if (e.key === "Escape") onClose();
};
window.addEventListener("keydown", handleKeyDown);
return () => window.removeEventListener("keydown", handleKeyDown);
}, [onClose]);
const isRunning =
execution?.status === "running" ||
execution?.status === "scheduling" ||
execution?.status === "scheduled" ||
execution?.status === "requested";
const created = execution ? new Date(execution.created) : null;
const updated = execution ? new Date(execution.updated) : null;
const durationMs =
created && updated && !isRunning
? updated.getTime() - created.getTime()
: null;
return (
<div className="border-l border-gray-200 bg-white flex flex-col h-full overflow-hidden">
{/* Header */}
<div className="flex items-center justify-between px-4 py-3 border-b border-gray-200 bg-gray-50 flex-shrink-0">
<div className="flex items-center gap-2 min-w-0">
<h3 className="text-sm font-semibold text-gray-900 truncate">
Execution #{executionId}
</h3>
{execution && (
<span
className={`px-2 py-0.5 text-xs rounded-full font-medium flex-shrink-0 ${getStatusColor(execution.status)}`}
>
{execution.status}
</span>
)}
{isRunning && (
<Loader2 className="h-3.5 w-3.5 text-blue-500 animate-spin flex-shrink-0" />
)}
</div>
<div className="flex items-center gap-1 flex-shrink-0">
<Link
to={`/executions/${executionId}`}
className="p-1.5 text-gray-400 hover:text-blue-600 rounded hover:bg-gray-100 transition-colors"
title="Open full detail page"
>
<ExternalLink className="h-4 w-4" />
</Link>
<button
onClick={onClose}
className="p-1.5 text-gray-400 hover:text-gray-600 rounded hover:bg-gray-100 transition-colors"
title="Close preview (Esc)"
>
<X className="h-4 w-4" />
</button>
</div>
</div>
{/* Body */}
<div className="flex-1 overflow-y-auto">
{isLoading && (
<div className="flex items-center justify-center h-32">
<Loader2 className="h-6 w-6 animate-spin text-gray-400" />
</div>
)}
{error && !execution && (
<div className="p-4">
<div className="bg-red-50 border border-red-200 text-red-700 px-3 py-2 rounded text-sm">
Error: {(error as Error).message}
</div>
</div>
)}
{execution && (
<div className="divide-y divide-gray-100">
{/* Action */}
<div className="px-4 py-3">
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Action
</dt>
<dd className="mt-1">
<Link
to={`/actions/${execution.action_ref}`}
className="text-sm text-blue-600 hover:text-blue-800 font-medium"
>
{execution.action_ref}
</Link>
</dd>
</div>
{/* Timing */}
<div className="px-4 py-3 space-y-2">
<div>
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Created
</dt>
<dd className="mt-0.5 text-sm text-gray-900">
{created!.toLocaleString()}
<span className="text-gray-400 ml-1.5 text-xs">
{formatDistanceToNow(created!, { addSuffix: true })}
</span>
</dd>
</div>
{durationMs != null && durationMs > 0 && (
<div>
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Duration
</dt>
<dd className="mt-0.5 text-sm text-gray-900">
{formatDuration(durationMs)}
</dd>
</div>
)}
{isRunning && (
<div>
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Elapsed
</dt>
<dd className="mt-0.5 text-sm text-blue-600 flex items-center gap-1.5">
<Loader2 className="h-3 w-3 animate-spin" />
{formatDistanceToNow(created!)}
</dd>
</div>
)}
</div>
{/* References */}
<div className="px-4 py-3 space-y-2">
{execution.parent && (
<div>
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Parent Execution
</dt>
<dd className="mt-0.5 text-sm">
<Link
to={`/executions/${execution.parent}`}
className="text-blue-600 hover:text-blue-800 font-mono"
>
#{execution.parent}
</Link>
</dd>
</div>
)}
{execution.enforcement && (
<div>
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Enforcement
</dt>
<dd className="mt-0.5 text-sm text-gray-900 font-mono">
#{execution.enforcement}
</dd>
</div>
)}
{execution.executor && (
<div>
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Executor
</dt>
<dd className="mt-0.5 text-sm text-gray-900 font-mono">
#{execution.executor}
</dd>
</div>
)}
{execution.workflow_task && (
<div>
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide">
Workflow Task
</dt>
<dd className="mt-0.5 text-sm text-gray-900">
<span className="font-medium">
{execution.workflow_task.task_name}
</span>
{execution.workflow_task.task_index != null && (
<span className="text-gray-400 ml-1">
[{execution.workflow_task.task_index}]
</span>
)}
</dd>
</div>
)}
</div>
{/* Config / Parameters */}
{execution.config &&
Object.keys(execution.config).length > 0 && (
<div className="px-4 py-3">
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide mb-1.5">
Parameters
</dt>
<dd>
<pre className="bg-gray-50 border border-gray-200 rounded p-3 text-xs overflow-x-auto max-h-48 overflow-y-auto">
{JSON.stringify(execution.config, null, 2)}
</pre>
</dd>
</div>
)}
{/* Result */}
{execution.result &&
Object.keys(execution.result).length > 0 && (
<div className="px-4 py-3">
<dt className="text-xs font-medium text-gray-500 uppercase tracking-wide mb-1.5">
Result
</dt>
<dd>
<pre
className={`border rounded p-3 text-xs overflow-x-auto max-h-64 overflow-y-auto ${
execution.status === ("failed" as ExecutionStatus) ||
execution.status === ("timeout" as ExecutionStatus)
? "bg-red-50 border-red-200"
: "bg-gray-50 border-gray-200"
}`}
>
{JSON.stringify(execution.result, null, 2)}
</pre>
</dd>
</div>
)}
</div>
)}
</div>
{/* Footer */}
{execution && (
<div className="px-4 py-3 border-t border-gray-200 bg-gray-50 flex-shrink-0">
<Link
to={`/executions/${executionId}`}
className="block w-full text-center px-3 py-2 text-sm font-medium text-blue-700 bg-blue-50 hover:bg-blue-100 rounded-md transition-colors"
>
Open Full Details
</Link>
</div>
)}
</div>
);
});
export default ExecutionPreviewPanel;


@@ -0,0 +1,78 @@
import { memo } from "react";
interface PaginationProps {
page: number;
setPage: (page: number) => void;
pageSize: number;
total: number;
}
function computeRange(page: number, pageSize: number, total: number) {
const start = (page - 1) * pageSize + 1;
const end = Math.min(page * pageSize, total);
return { start, end };
}
const Pagination = memo(function Pagination({
page,
setPage,
pageSize,
total,
}: PaginationProps) {
const totalPages = Math.ceil(total / pageSize);
if (totalPages <= 1) return null;
const { start, end } = computeRange(page, pageSize, total);
return (
<div className="bg-gray-50 px-6 py-4 flex items-center justify-between border-t border-gray-200">
<div className="flex-1 flex justify-between sm:hidden">
<button
onClick={() => setPage(page - 1)}
disabled={page === 1}
className="relative inline-flex items-center px-4 py-2 border border-gray-300 text-sm font-medium rounded-md text-gray-700 bg-white hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
>
Previous
</button>
<button
onClick={() => setPage(page + 1)}
disabled={page === totalPages}
className="ml-3 relative inline-flex items-center px-4 py-2 border border-gray-300 text-sm font-medium rounded-md text-gray-700 bg-white hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
>
Next
</button>
</div>
<div className="hidden sm:flex-1 sm:flex sm:items-center sm:justify-between">
<div>
<p className="text-sm text-gray-700">
Showing <span className="font-medium">{start}</span> to{" "}
<span className="font-medium">{end}</span> of{" "}
<span className="font-medium">{total}</span> executions
</p>
</div>
<div>
<nav className="relative z-0 inline-flex rounded-md shadow-sm -space-x-px">
<button
onClick={() => setPage(page - 1)}
disabled={page === 1}
className="relative inline-flex items-center px-2 py-2 rounded-l-md border border-gray-300 bg-white text-sm font-medium text-gray-500 hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
>
Previous
</button>
<button
onClick={() => setPage(page + 1)}
disabled={page === totalPages}
className="relative inline-flex items-center px-2 py-2 rounded-r-md border border-gray-300 bg-white text-sm font-medium text-gray-500 hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
>
Next
</button>
</nav>
</div>
</div>
</div>
);
});
Pagination.displayName = "Pagination";
export default Pagination;
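
The range math in `computeRange` clamps `end` so a partially filled last page never over-reports; the helper is repeated here verbatim so the example is self-contained:

```typescript
// Same range math as Pagination's computeRange helper: 1-based start,
// end clamped to the true total on the last page.
function computeRange(page: number, pageSize: number, total: number) {
  const start = (page - 1) * pageSize + 1;
  const end = Math.min(page * pageSize, total);
  return { start, end };
}

// computeRange(3, 20, 45) → { start: 41, end: 45 } ("Showing 41 to 45 of 45")
```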


@@ -0,0 +1,622 @@
import { useState, useMemo, memo } from "react";
import { Link } from "react-router-dom";
import {
ChevronRight,
ChevronDown,
Workflow,
Loader2,
CheckCircle2,
XCircle,
Clock,
AlertTriangle,
Ban,
CircleDot,
RotateCcw,
} from "lucide-react";
import { useChildExecutions } from "@/hooks/useExecutions";
import type { ExecutionSummary } from "@/api";
import Pagination from "./Pagination";
// ─── Helpers ────────────────────────────────────────────────────────────────
function getStatusColor(status: string) {
switch (status) {
case "completed":
return "bg-green-100 text-green-800";
case "failed":
case "timeout":
return "bg-red-100 text-red-800";
case "running":
return "bg-blue-100 text-blue-800";
case "requested":
case "scheduling":
case "scheduled":
return "bg-yellow-100 text-yellow-800";
case "canceling":
case "cancelled":
return "bg-gray-100 text-gray-600";
default:
return "bg-gray-100 text-gray-800";
}
}
function getStatusIcon(status: string) {
switch (status) {
case "completed":
return <CheckCircle2 className="h-4 w-4 text-green-500" />;
case "failed":
return <XCircle className="h-4 w-4 text-red-500" />;
case "running":
return <Loader2 className="h-4 w-4 text-blue-500 animate-spin" />;
case "requested":
case "scheduling":
case "scheduled":
return <Clock className="h-4 w-4 text-yellow-500" />;
case "timeout":
return <AlertTriangle className="h-4 w-4 text-orange-500" />;
case "canceling":
case "cancelled":
return <Ban className="h-4 w-4 text-gray-400" />;
case "abandoned":
return <AlertTriangle className="h-4 w-4 text-red-400" />;
default:
return <CircleDot className="h-4 w-4 text-gray-400" />;
}
}
function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  // Round to whole seconds before splitting so a 59.5s remainder
  // can't render as "1m 60s"
  const totalSecs = Math.round(secs);
  const mins = Math.floor(totalSecs / 60);
  const remainSecs = totalSecs % 60;
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}
// ─── Child execution row (recursive) ────────────────────────────────────────
interface ChildExecutionRowProps {
execution: ExecutionSummary;
depth: number;
selectedExecutionId: number | null;
onSelectExecution: (id: number) => void;
workflowActionRefs: Set<string>;
}
/**
* A single child-execution row inside the accordion. If it has its own
* children (nested workflow), it can be expanded recursively.
*/
const ChildExecutionRow = memo(function ChildExecutionRow({
execution,
depth,
selectedExecutionId,
onSelectExecution,
workflowActionRefs,
}: ChildExecutionRowProps) {
const isWorkflow = workflowActionRefs.has(execution.action_ref);
const [expanded, setExpanded] = useState(false);
// Only fetch children when expanded and this is a workflow action
const { data, isLoading } = useChildExecutions(
expanded && isWorkflow ? execution.id : undefined,
);
const children = useMemo(() => data?.data ?? [], [data]);
const hasChildren = expanded && children.length > 0;
const wt = execution.workflow_task;
const taskName = wt?.task_name;
const retryCount = wt?.retry_count ?? 0;
const maxRetries = wt?.max_retries ?? 0;
const created = new Date(execution.created);
const updated = new Date(execution.updated);
const durationMs =
wt?.duration_ms ??
(execution.status === "completed" ||
execution.status === "failed" ||
execution.status === "timeout"
? updated.getTime() - created.getTime()
: null);
const indent = 16 + depth * 24;
return (
<>
<tr
className={`hover:bg-gray-50/80 group border-t border-gray-100 cursor-pointer ${
selectedExecutionId === execution.id
? "bg-blue-50 hover:bg-blue-50"
: ""
}`}
onClick={() => onSelectExecution(execution.id)}
>
{/* Task name / expand toggle */}
<td className="py-3 pr-2" style={{ paddingLeft: indent }}>
<div className="flex items-center gap-1.5 min-w-0">
{isWorkflow ? (
<button
onClick={(e) => {
e.preventDefault();
e.stopPropagation();
setExpanded((prev) => !prev);
}}
className={`flex-shrink-0 p-0.5 rounded hover:bg-gray-200 transition-colors ${
expanded || isLoading
? "visible"
: "invisible group-hover:visible"
}`}
title={expanded ? "Collapse" : "Expand"}
>
{isLoading ? (
<Loader2 className="h-3.5 w-3.5 text-gray-400 animate-spin" />
) : expanded ? (
<ChevronDown className="h-3.5 w-3.5 text-gray-400" />
) : (
<ChevronRight className="h-3.5 w-3.5 text-gray-400" />
)}
</button>
) : (
<span className="flex-shrink-0 w-[18px]" />
)}
{getStatusIcon(execution.status)}
{taskName && (
<span
className="text-sm font-medium text-gray-700 truncate"
title={taskName}
>
{taskName}
</span>
)}
{wt?.task_index != null && (
<span className="text-xs text-gray-400 flex-shrink-0">
[{wt.task_index}]
</span>
)}
</div>
</td>
{/* Exec ID */}
<td className="px-4 py-3 font-mono text-xs">
<Link
to={`/executions/${execution.id}`}
className="text-blue-600 hover:text-blue-800"
onClick={(e) => e.stopPropagation()}
>
#{execution.id}
</Link>
</td>
{/* Action */}
<td className="px-4 py-3">
<Link
to={`/executions/${execution.id}`}
className="text-sm text-blue-600 hover:text-blue-800 hover:underline truncate block"
title={execution.action_ref}
onClick={(e) => e.stopPropagation()}
>
{execution.action_ref}
</Link>
</td>
{/* Status */}
<td className="px-4 py-3">
<span
className={`px-2 py-0.5 text-xs rounded-full font-medium ${getStatusColor(execution.status)}`}
>
{execution.status}
</span>
</td>
{/* Duration */}
<td className="px-4 py-3 text-sm text-gray-500">
{execution.status === "running" ? (
<span className="text-blue-600 flex items-center gap-1">
<Loader2 className="h-3 w-3 animate-spin" />
running
</span>
) : durationMs != null && durationMs > 0 ? (
formatDuration(durationMs)
) : (
<span className="text-gray-300">&mdash;</span>
)}
</td>
{/* Retry */}
<td className="px-4 py-3 text-sm text-gray-500">
{maxRetries > 0 ? (
<span
className="inline-flex items-center gap-0.5"
title={`Attempt ${retryCount + 1} of ${maxRetries + 1}`}
>
<RotateCcw className="h-3 w-3" />
{retryCount}/{maxRetries}
</span>
) : (
<span className="text-gray-300">&mdash;</span>
)}
</td>
</tr>
{/* Nested children */}
{expanded &&
!isLoading &&
hasChildren &&
children.map((child: ExecutionSummary) => (
<ChildExecutionRow
key={child.id}
execution={child}
depth={depth + 1}
selectedExecutionId={selectedExecutionId}
onSelectExecution={onSelectExecution}
workflowActionRefs={workflowActionRefs}
/>
))}
</>
);
});
// ─── Top-level workflow row (accordion) ─────────────────────────────────────
interface WorkflowExecutionRowProps {
execution: ExecutionSummary;
workflowActionRefs: Set<string>;
selectedExecutionId: number | null;
onSelectExecution: (id: number) => void;
}
/**
* A top-level execution row with an expandable accordion for child tasks.
*/
const WorkflowExecutionRow = memo(function WorkflowExecutionRow({
execution,
workflowActionRefs,
selectedExecutionId,
onSelectExecution,
}: WorkflowExecutionRowProps) {
const isWorkflow = workflowActionRefs.has(execution.action_ref);
const [expanded, setExpanded] = useState(false);
const { data, isLoading } = useChildExecutions(
expanded && isWorkflow ? execution.id : undefined,
);
const children = useMemo(() => data?.data ?? [], [data]);
const summary = useMemo(() => {
const total = children.length;
const completed = children.filter(
(t: ExecutionSummary) => t.status === "completed",
).length;
const failed = children.filter(
(t: ExecutionSummary) => t.status === "failed" || t.status === "timeout",
).length;
const running = children.filter(
(t: ExecutionSummary) =>
t.status === "running" ||
t.status === "requested" ||
t.status === "scheduling" ||
t.status === "scheduled",
).length;
return { total, completed, failed, running };
}, [children]);
const hasWorkflowChildren = expanded && children.length > 0;
return (
<>
{/* Main execution row */}
<tr
className={`hover:bg-gray-50 border-b border-gray-200 cursor-pointer ${
selectedExecutionId === execution.id
? "bg-blue-50 hover:bg-blue-50"
: ""
}`}
onClick={() => onSelectExecution(execution.id)}
>
<td className="px-6 py-4">
<div className="flex items-center gap-2">
{isWorkflow ? (
<button
onClick={(e) => {
e.stopPropagation();
setExpanded((prev) => !prev);
}}
className="flex-shrink-0 p-0.5 rounded hover:bg-gray-200 transition-colors"
title={
expanded ? "Collapse workflow tasks" : "Expand workflow tasks"
}
>
{isLoading ? (
<Loader2 className="h-4 w-4 text-gray-400 animate-spin" />
) : expanded ? (
<ChevronDown className="h-4 w-4 text-gray-500" />
) : (
<ChevronRight className="h-4 w-4 text-gray-500" />
)}
</button>
) : (
<span className="flex-shrink-0 w-[20px]" />
)}
<Link
to={`/executions/${execution.id}`}
className="text-blue-600 hover:text-blue-800 font-mono text-sm"
onClick={(e) => e.stopPropagation()}
>
#{execution.id}
</Link>
</div>
</td>
<td className="px-6 py-4">
<span className="text-sm text-gray-900">{execution.action_ref}</span>
</td>
<td className="px-6 py-4">
{execution.rule_ref ? (
<span className="text-sm text-gray-700">{execution.rule_ref}</span>
) : (
<span className="text-sm text-gray-400 italic">-</span>
)}
</td>
<td className="px-6 py-4">
{execution.trigger_ref ? (
<span className="text-sm text-gray-700">
{execution.trigger_ref}
</span>
) : (
<span className="text-sm text-gray-400 italic">-</span>
)}
</td>
<td className="px-6 py-4">
<span
className={`px-2 py-1 text-xs rounded ${getStatusColor(execution.status)}`}
>
{execution.status}
</span>
</td>
<td className="px-6 py-4 text-sm text-gray-500">
{new Date(execution.created).toLocaleString()}
</td>
</tr>
{/* Expanded child-task section */}
{expanded && (
<tr>
<td colSpan={6} className="p-0">
<div className="bg-gray-50 border-b border-gray-200">
{/* Summary bar */}
{hasWorkflowChildren && (
<div className="flex items-center gap-3 px-8 py-2 border-b border-gray-200 bg-gray-100/60">
<Workflow className="h-4 w-4 text-indigo-500" />
<span className="text-xs font-medium text-gray-600">
{summary.total} task{summary.total !== 1 ? "s" : ""}
</span>
{summary.completed > 0 && (
<span className="inline-flex items-center gap-1 px-1.5 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-700">
<CheckCircle2 className="h-3 w-3" />
{summary.completed}
</span>
)}
{summary.running > 0 && (
<span className="inline-flex items-center gap-1 px-1.5 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-700">
<Loader2 className="h-3 w-3 animate-spin" />
{summary.running}
</span>
)}
{summary.failed > 0 && (
<span className="inline-flex items-center gap-1 px-1.5 py-0.5 rounded-full text-xs font-medium bg-red-100 text-red-700">
<XCircle className="h-3 w-3" />
{summary.failed}
</span>
)}
</div>
)}
{/* Loading state */}
{isLoading && (
<div className="flex items-center gap-2 px-8 py-4">
<Loader2 className="h-4 w-4 animate-spin text-gray-400" />
<span className="text-sm text-gray-500">
Loading workflow tasks...
</span>
</div>
)}
{/* No children yet (workflow still starting) */}
{!isLoading && children.length === 0 && (
<div className="px-8 py-3 text-sm text-gray-400 italic">
No child tasks yet.
</div>
)}
{/* Children table */}
{hasWorkflowChildren && (
<table className="w-full">
<thead>
<tr className="text-xs font-medium text-gray-500 uppercase tracking-wider">
<th
className="py-2 pr-2 text-left"
style={{ paddingLeft: 40 }}
>
Task
</th>
<th className="px-4 py-2 text-left">ID</th>
<th className="px-4 py-2 text-left">Action</th>
<th className="px-4 py-2 text-left">Status</th>
<th className="px-4 py-2 text-left">Duration</th>
<th className="px-4 py-2 text-left">Retry</th>
</tr>
</thead>
<tbody>
{children.map((child: ExecutionSummary) => (
<ChildExecutionRow
key={child.id}
execution={child}
depth={0}
selectedExecutionId={selectedExecutionId}
onSelectExecution={onSelectExecution}
workflowActionRefs={workflowActionRefs}
/>
))}
</tbody>
</table>
)}
</div>
</td>
</tr>
)}
</>
);
});
// ─── Main tree table ────────────────────────────────────────────────────────
interface WorkflowExecutionTreeProps {
executions: ExecutionSummary[];
isLoading: boolean;
isFetching: boolean;
error: Error | null;
hasActiveFilters: boolean;
clearFilters: () => void;
page: number;
setPage: (page: number) => void;
pageSize: number;
total: number;
workflowActionRefs: Set<string>;
selectedExecutionId: number | null;
onSelectExecution: (id: number) => void;
}
/**
* Renders the executions list in "By Workflow" mode. Top-level executions
* are shown with the same columns as the "All" view, but each row is
* expandable to reveal the workflow's child task executions in an accordion.
* Nested workflows can be drilled into recursively.
*/
const WorkflowExecutionTree = memo(function WorkflowExecutionTree({
executions,
isLoading,
isFetching,
error,
hasActiveFilters,
clearFilters,
page,
setPage,
pageSize,
total,
workflowActionRefs,
selectedExecutionId,
onSelectExecution,
}: WorkflowExecutionTreeProps) {
// Initial load
if (isLoading && executions.length === 0) {
return (
<div className="bg-white shadow rounded-lg">
<div className="flex items-center justify-center h-64">
<div className="animate-spin rounded-full h-12 w-12 border-b-2 border-blue-600" />
</div>
</div>
);
}
// Error with no cached data
if (error && executions.length === 0) {
return (
<div className="bg-white shadow rounded-lg">
<div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded">
<p>Error: {error.message}</p>
</div>
</div>
);
}
// Empty
if (executions.length === 0) {
return (
<div className="bg-white p-12 text-center rounded-lg shadow">
<p>No executions found</p>
{hasActiveFilters && (
<button
onClick={clearFilters}
className="mt-3 text-sm text-blue-600 hover:text-blue-800"
>
Clear filters
</button>
)}
</div>
);
}
return (
<div className="relative">
{/* Loading overlay */}
{isFetching && (
<div className="absolute inset-0 bg-white/60 z-10 flex items-center justify-center rounded-lg">
<div className="animate-spin rounded-full h-8 w-8 border-b-2 border-blue-600" />
</div>
)}
{/* Non-fatal error banner */}
{error && (
<div className="mb-4 bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded">
<p>Error refreshing: {error.message}</p>
</div>
)}
<div className="bg-white shadow rounded-lg overflow-hidden">
<table className="min-w-full">
<thead className="bg-gray-50">
<tr>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">
ID
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">
Action
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">
Rule
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">
Trigger
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">
Status
</th>
<th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">
Created
</th>
</tr>
</thead>
<tbody className="bg-white">
{executions.map((exec: ExecutionSummary) => (
<WorkflowExecutionRow
key={exec.id}
execution={exec}
workflowActionRefs={workflowActionRefs}
selectedExecutionId={selectedExecutionId}
onSelectExecution={onSelectExecution}
/>
))}
</tbody>
</table>
</div>
<Pagination
page={page}
setPage={setPage}
pageSize={pageSize}
total={total}
/>
</div>
);
});
WorkflowExecutionTree.displayName = "WorkflowExecutionTree";
export default WorkflowExecutionTree;
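
The tree decides which rows are expandable via the `workflowActionRefs` prop. A caller might derive that set from the actions list by checking for a workflow definition; the sketch below is a hypothetical derivation (the `ActionLike` shape and function name are assumptions for illustration, not the project's actual `Action` type):

```typescript
// Hypothetical derivation of the `workflowActionRefs` prop: collect the refs
// of actions that carry a workflow definition. `ActionLike` is an assumed
// shape, mirroring only the workflow_def field mentioned in the docs.
interface ActionLike {
  ref: string;
  workflow_def?: unknown;
}

function toWorkflowActionRefs(actions: ActionLike[]): Set<string> {
  return new Set(
    actions.filter((a) => a.workflow_def != null).map((a) => a.ref),
  );
}
```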


@@ -90,12 +90,6 @@ export function useEnforcementStream(
   // Extract enforcement data from notification payload (flat structure)
   const enforcementData = notification.payload as any;
-  // Invalidate history queries so the EntityHistoryPanel picks up new records
-  // (e.g. status changes recorded by the enforcement_history trigger)
-  queryClient.invalidateQueries({
-    queryKey: ["history", "enforcement", notification.entity_id],
-  });
   // Update specific enforcement query if it exists
   queryClient.setQueryData(
     ["enforcements", notification.entity_id],


@@ -48,6 +48,22 @@ function stripNotificationMeta(payload: any): any {
function executionMatchesParams(execution: any, params: any): boolean { function executionMatchesParams(execution: any, params: any): boolean {
if (!params) return true; if (!params) return true;
// Check topLevelOnly filter — child executions (with a parent) must not
// appear in top-level list queries.
if (params.topLevelOnly && execution.parent != null) {
return false;
}
// Check parent filter — child execution queries (keyed by { parent: id })
// should only receive notifications for executions belonging to that parent.
// Without this, every execution notification would match child queries since
// they have no other filter fields.
if (params.parent !== undefined) {
if (execution.parent !== params.parent) {
return false;
}
}
// Check status filter (from API query parameters)
if (params.status && execution.status !== params.status) {
return false;

View File

@@ -11,6 +11,7 @@ interface ExecutionsQueryParams {
ruleRef?: string;
triggerRef?: string;
executor?: number;
topLevelOnly?: boolean;
}
export function useExecutions(params?: ExecutionsQueryParams) {
@@ -21,7 +22,8 @@ export function useExecutions(params?: ExecutionsQueryParams) {
params?.packName ||
params?.ruleRef ||
params?.triggerRef ||
params?.executor;
params?.executor ||
params?.topLevelOnly;
return useQuery({
queryKey: ["executions", params],
@@ -35,6 +37,7 @@ export function useExecutions(params?: ExecutionsQueryParams) {
ruleRef: params?.ruleRef,
triggerRef: params?.triggerRef,
executor: params?.executor,
topLevelOnly: params?.topLevelOnly,
});
return response;
},
@@ -59,3 +62,37 @@ export function useExecution(id: number) {
staleTime: 30000, // 30 seconds - SSE handles real-time updates
});
}
/**
* Fetch child executions (workflow tasks) for a given parent execution ID.
*
* Enabled only when `parentId` is provided. Polls every 5 seconds while any
* child execution is still in a running/pending state so the UI stays current.
*/
export function useChildExecutions(parentId: number | undefined) {
return useQuery({
queryKey: ["executions", { parent: parentId }],
queryFn: async () => {
const response = await ExecutionsService.listExecutions({
parent: parentId,
perPage: 100,
});
return response;
},
enabled: !!parentId,
staleTime: 5000,
// Re-fetch periodically so in-progress tasks update
refetchInterval: (query) => {
const data = query.state.data;
if (!data) return false;
const hasActive = data.data.some(
(e) =>
e.status === "requested" ||
e.status === "scheduling" ||
e.status === "scheduled" ||
e.status === "running",
);
return hasActive ? 5000 : false;
},
});
}
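The `refetchInterval` callback above is effectively a pure decision over child statuses. Factored out as a standalone helper (the helper name and the terminal status names are illustrative, not part of the codebase):

```typescript
// Poll again in `intervalMs` while any child is still active; stop otherwise.
type ExecStatus =
  | "requested"
  | "scheduling"
  | "scheduled"
  | "running"
  | "succeeded"
  | "failed";

// The four non-terminal states from the hook above.
const ACTIVE_STATUSES: ReadonlySet<ExecStatus> = new Set([
  "requested",
  "scheduling",
  "scheduled",
  "running",
]);

function nextRefetchInterval(
  statuses: ExecStatus[],
  intervalMs = 5000,
): number | false {
  return statuses.some((s) => ACTIVE_STATUSES.has(s)) ? intervalMs : false;
}
```

Returning `false` is how TanStack Query's `refetchInterval` disables polling, so the query goes quiet once every child task has settled.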

View File

@@ -61,12 +61,20 @@ export function useFilterSuggestions() {
return [...new Set(refs)].sort();
}, [actionsData]);
const workflowActionRefs = useMemo(() => {
const refs =
actionsData?.data
?.filter((a) => a.workflow_def != null)
.map((a) => a.ref) || [];
return new Set(refs);
}, [actionsData]);
const triggerRefs = useMemo(() => {
const refs = triggersData?.data?.map((t) => t.ref) || [];
return [...new Set(refs)].sort();
}, [triggersData]);
return { packNames, ruleRefs, actionRefs, triggerRefs };
return { packNames, ruleRefs, actionRefs, triggerRefs, workflowActionRefs };
}
/**

View File

@@ -5,11 +5,7 @@ import { apiClient } from "@/lib/api-client";
* Supported entity types for history queries.
* Maps to the TimescaleDB history hypertables.
*/
export type HistoryEntityType =
| "execution"
| "worker"
| "enforcement"
| "event";
export type HistoryEntityType = "execution" | "worker";
/**
* A single history record from the API.
@@ -68,8 +64,6 @@ export interface HistoryQueryParams {
* Uses the entity-specific endpoints:
* - GET /api/v1/executions/:id/history
* - GET /api/v1/workers/:id/history
* - GET /api/v1/enforcements/:id/history
* - GET /api/v1/events/:id/history
*/
async function fetchEntityHistory(
entityType: HistoryEntityType,
@@ -79,8 +73,6 @@ async function fetchEntityHistory(
const pluralMap: Record<HistoryEntityType, string> = {
execution: "executions",
worker: "workers",
enforcement: "enforcements",
event: "events",
};
const queryParams: Record<string, string | number> = {};
@@ -143,23 +135,3 @@ export function useWorkerHistory(
) {
return useEntityHistory("worker", workerId, params);
}
/**
* Convenience hook for enforcement history.
*/
export function useEnforcementHistory(
enforcementId: number,
params: HistoryQueryParams = {},
) {
return useEntityHistory("enforcement", enforcementId, params);
}
/**
* Convenience hook for event history.
*/
export function useEventHistory(
eventId: number,
params: HistoryQueryParams = {},
) {
return useEntityHistory("event", eventId, params);
}

View File

@@ -2,7 +2,16 @@ import { Link, useParams, useNavigate } from "react-router-dom";
import { useActions, useAction, useDeleteAction } from "@/hooks/useActions";
import { useExecutions } from "@/hooks/useExecutions";
import { useState, useMemo } from "react";
import { ChevronDown, ChevronRight, Search, X, Play, Plus } from "lucide-react";
import {
ChevronDown,
ChevronRight,
Search,
X,
Play,
Plus,
GitBranch,
Pencil,
} from "lucide-react";
import ExecuteActionModal from "@/components/common/ExecuteActionModal";
import ErrorDisplay from "@/components/common/ErrorDisplay";
import { extractProperties } from "@/components/common/ParamSchemaForm";
@@ -177,7 +186,12 @@ export default function ActionsPage() {
: "border-2 border-transparent hover:bg-gray-50"
}`}
>
<div className="font-medium text-sm text-gray-900 truncate">
<div className="font-medium text-sm text-gray-900 truncate flex items-center gap-1.5">
{action.workflow_def && (
<span title="Workflow">
<GitBranch className="w-3.5 h-3.5 text-purple-500 flex-shrink-0" />
</span>
)}
{action.label}
</div>
<div className="font-mono text-xs text-gray-500 mt-1 truncate">
@@ -236,6 +250,7 @@ export default function ActionsPage() {
}
function ActionDetail({ actionRef }: { actionRef: string }) {
const navigate = useNavigate();
const { data: action, isLoading, error } = useAction(actionRef);
const { data: executionsData } = useExecutions({
actionRef: actionRef,
@@ -290,6 +305,17 @@ function ActionDetail({ actionRef }: { actionRef: string }) {
</h1>
</div>
<div className="flex gap-2">
{action.data?.workflow_def && (
<button
onClick={() =>
navigate(`/actions/workflows/${action.data!.ref}/edit`)
}
className="px-4 py-2 bg-purple-600 text-white rounded hover:bg-purple-700 flex items-center gap-2"
>
<Pencil className="h-4 w-4" />
Edit Workflow
</button>
)}
<button
onClick={() => setShowExecuteModal(true)}
className="px-4 py-2 bg-green-600 text-white rounded hover:bg-green-700 flex items-center gap-2"

View File

@@ -457,7 +457,7 @@ export default function WorkflowBuilderPage() {
},
});
} else {
await saveWorkflowFile.mutateAsync({
const fileData = {
name: state.name,
label: state.label,
description: state.description || undefined,
@@ -472,7 +472,30 @@ export default function WorkflowBuilderPage() {
Object.keys(state.output).length > 0 ? state.output : undefined,
tags: state.tags.length > 0 ? state.tags : undefined,
enabled: state.enabled,
});
};
try {
await saveWorkflowFile.mutateAsync(fileData);
} catch (createErr: unknown) {
const apiErr = createErr as { status?: number };
if (apiErr?.status === 409) {
// Workflow already exists — fall back to update
const workflowRef = `${state.packRef}.${state.name}`;
await updateWorkflowFile.mutateAsync({
workflowRef,
data: fileData,
});
} else {
throw createErr;
}
}
}
// After a successful first save, navigate to the edit URL so the
// page transitions into edit mode (locks ref, uses update on next save).
if (!isEditing) {
const newRef = `${state.packRef}.${state.name}`;
navigate(`/actions/workflows/${newRef}/edit`, { replace: true });
return;
}
setSaveSuccess(true);
@@ -490,6 +513,7 @@ export default function WorkflowBuilderPage() {
saveWorkflowFile,
updateWorkflowFile,
actionSchemaMap,
navigate,
]);
const handleSave = useCallback(() => {
@@ -540,9 +564,11 @@ export default function WorkflowBuilderPage() {
{/* Left section: Back + metadata */}
<div className="flex items-center gap-3 flex-1 min-w-0">
<button
onClick={() => navigate("/actions")}
onClick={() =>
navigate(isEditing ? `/actions/${editRef}` : "/actions")
}
className="p-1.5 rounded hover:bg-gray-100 text-gray-500 hover:text-gray-700 transition-colors flex-shrink-0"
title="Back to Actions"
title={isEditing ? "Back to Workflow" : "Back to Actions"}
>
<ArrowLeft className="w-5 h-5" />
</button>
@@ -558,6 +584,7 @@ export default function WorkflowBuilderPage() {
}))}
placeholder="Pack..."
className="max-w-[140px]"
disabled={isEditing}
/>
<span className="text-gray-400 text-lg font-light">/</span>
@@ -571,8 +598,9 @@ export default function WorkflowBuilderPage() {
name: e.target.value.replace(/[^a-zA-Z0-9_-]/g, "_"),
})
}
className="px-2 py-1.5 border border-gray-300 rounded text-sm font-mono focus:ring-2 focus:ring-blue-500 focus:border-blue-500 w-48"
className={`px-2 py-1.5 border border-gray-300 rounded text-sm font-mono w-48 ${isEditing ? "bg-gray-100 cursor-not-allowed text-gray-500" : "focus:ring-2 focus:ring-blue-500 focus:border-blue-500"}`}
placeholder="workflow_name"
disabled={isEditing}
/>
<span className="text-gray-400 text-lg font-light"></span>

View File

@@ -1,7 +1,6 @@
import { useParams, Link } from "react-router-dom";
import { useEnforcement } from "@/hooks/useEvents";
import { EnforcementStatus, EnforcementCondition } from "@/api";
import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";
export default function EnforcementDetailPage() {
const { id } = useParams<{ id: string }>();
@@ -189,6 +188,18 @@ export default function EnforcementDetailPage() {
{formatDate(enforcement.created)}
</dd>
</div>
<div>
<dt className="text-sm font-medium text-gray-500">
Resolved At
</dt>
<dd className="mt-1 text-gray-900">
{enforcement.resolved_at ? (
formatDate(enforcement.resolved_at)
) : (
<span className="text-gray-500">Pending</span>
)}
</dd>
</div>
</dl>
</div>
</div>
@@ -331,6 +342,14 @@ export default function EnforcementDetailPage() {
{formatDate(enforcement.created)}
</dd>
</div>
{enforcement.resolved_at && (
<div>
<dt className="text-gray-500">Resolved</dt>
<dd className="text-gray-900">
{formatDate(enforcement.resolved_at)}
</dd>
</div>
)}
</dl>
</div>
</div>
@@ -377,15 +396,6 @@ export default function EnforcementDetailPage() {
</div>
</div>
</div>
{/* Change History */}
<div className="mt-6">
<EntityHistoryPanel
entityType="enforcement"
entityId={enforcement.id}
title="Enforcement History"
/>
</div>
</div>
);
}

View File

@@ -1,6 +1,5 @@
import { useParams, Link } from "react-router-dom";
import { useEvent } from "@/hooks/useEvents";
import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";
export default function EventDetailPage() {
const { id } = useParams<{ id: string }>();
@@ -259,15 +258,6 @@ export default function EventDetailPage() {
</div>
</div>
</div>
{/* Change History */}
<div className="mt-6">
<EntityHistoryPanel
entityType="event"
entityId={event.id}
title="Event History"
/>
</div>
</div>
);
}

View File

@@ -22,6 +22,7 @@ import { useState, useMemo } from "react";
import { RotateCcw, Loader2 } from "lucide-react";
import ExecuteActionModal from "@/components/common/ExecuteActionModal";
import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";
import WorkflowTasksPanel from "@/components/common/WorkflowTasksPanel";
const getStatusColor = (status: string) => {
switch (status) {
@@ -116,6 +117,9 @@ export default function ExecutionDetailPage() {
// Fetch the action so we can get param_schema for the re-run modal
const { data: actionData } = useAction(execution?.action_ref || "");
// Determine if this execution is a workflow (action has workflow_def)
const isWorkflow = !!actionData?.data?.workflow_def;
const [showRerunModal, setShowRerunModal] = useState(false);
// Fetch status history for the timeline
@@ -207,6 +211,11 @@ export default function ExecutionDetailPage() {
<div className="flex items-center justify-between">
<div className="flex items-center gap-4">
<h1 className="text-3xl font-bold">Execution #{execution.id}</h1>
{isWorkflow && (
<span className="px-3 py-1 text-sm rounded-full bg-indigo-100 text-indigo-800">
Workflow
</span>
)}
<span
className={`px-3 py-1 text-sm rounded-full ${getStatusColor(execution.status)}`}
>
@@ -247,6 +256,25 @@ export default function ExecutionDetailPage() {
{execution.action_ref}
</Link>
</p>
{execution.workflow_task && (
<p className="text-sm text-indigo-600 mt-1 flex items-center gap-1.5">
<span className="text-gray-500">Task</span>{" "}
<span className="font-medium">
{execution.workflow_task.task_name}
</span>
{execution.parent && (
<>
<span className="text-gray-500">in workflow</span>
<Link
to={`/executions/${execution.parent}`}
className="text-indigo-600 hover:text-indigo-800 font-medium"
>
Execution #{execution.parent}
</Link>
</>
)}
</p>
)}
</div>
{/* Re-Run Modal */}
@@ -504,6 +532,13 @@ export default function ExecutionDetailPage() {
</div>
</div>
{/* Workflow Tasks (shown only for workflow executions) */}
{isWorkflow && (
<div className="mt-6">
<WorkflowTasksPanel parentExecutionId={execution.id} />
</div>
)}
{/* Change History */}
<div className="mt-6">
<EntityHistoryPanel

View File

@@ -3,13 +3,19 @@ import { useExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import { ExecutionStatus } from "@/api";
import { useState, useMemo, memo, useCallback, useEffect } from "react";
import { Search, X } from "lucide-react";
import { Search, X, List, GitBranch } from "lucide-react";
import MultiSelect from "@/components/common/MultiSelect";
import AutocompleteInput from "@/components/common/AutocompleteInput";
import {
useFilterSuggestions,
useMergedSuggestions,
} from "@/hooks/useFilterSuggestions";
import WorkflowExecutionTree from "@/components/executions/WorkflowExecutionTree";
import ExecutionPreviewPanel from "@/components/executions/ExecutionPreviewPanel";
type ViewMode = "all" | "workflow";
const VIEW_MODE_STORAGE_KEY = "attune:executions:viewMode";
// Memoized filter input component for non-ref fields (e.g. Executor ID)
const FilterInput = memo(
@@ -87,6 +93,8 @@ const ExecutionsResultsTable = memo(
setPage,
pageSize,
total,
selectedExecutionId,
onSelectExecution,
}: {
executions: any[];
isLoading: boolean;
@@ -98,6 +106,8 @@ const ExecutionsResultsTable = memo(
setPage: (page: number) => void;
pageSize: number;
total: number;
selectedExecutionId: number | null;
onSelectExecution: (id: number) => void;
}) => {
const totalPages = Math.ceil(total / pageSize);
@@ -182,11 +192,20 @@ const ExecutionsResultsTable = memo(
</thead>
<tbody className="bg-white divide-y divide-gray-200">
{executions.map((exec: any) => (
<tr key={exec.id} className="hover:bg-gray-50">
<tr
key={exec.id}
className={`hover:bg-gray-50 cursor-pointer ${
selectedExecutionId === exec.id
? "bg-blue-50 hover:bg-blue-50"
: ""
}`}
onClick={() => onSelectExecution(exec.id)}
>
<td className="px-6 py-4 font-mono text-sm">
<Link
to={`/executions/${exec.id}`}
className="text-blue-600 hover:text-blue-800"
onClick={(e) => e.stopPropagation()}
>
#{exec.id}
</Link>
@@ -294,6 +313,15 @@ ExecutionsResultsTable.displayName = "ExecutionsResultsTable";
export default function ExecutionsPage() {
const [searchParams] = useSearchParams();
// --- View mode toggle ---
const [viewMode, setViewMode] = useState<ViewMode>(() => {
const stored = localStorage.getItem(VIEW_MODE_STORAGE_KEY);
if (stored === "all" || stored === "workflow") return stored;
const param = searchParams.get("view");
if (param === "all" || param === "workflow") return param;
return "all";
});
// --- Filter input state (updates immediately on keystroke) ---
const [page, setPage] = useState(1);
const pageSize = 50;
@@ -342,8 +370,11 @@ export default function ExecutionsPage() {
if (debouncedStatuses.length === 1) {
params.status = debouncedStatuses[0] as ExecutionStatus;
}
if (viewMode === "workflow") {
params.topLevelOnly = true;
}
return params;
}, [page, pageSize, debouncedFilters, debouncedStatuses]);
}, [page, pageSize, debouncedFilters, debouncedStatuses, viewMode]);
const { data, isLoading, isFetching, error } = useExecutions(queryParams);
const { isConnected } = useExecutionStream({ enabled: true });
@@ -423,103 +454,181 @@ export default function ExecutionsPage() {
Object.values(searchFilters).some((v) => v !== "") ||
selectedStatuses.length > 0;
const [selectedExecutionId, setSelectedExecutionId] = useState<number | null>(
null,
);
const handleSelectExecution = useCallback((id: number) => {
setSelectedExecutionId((prev) => (prev === id ? null : id));
}, []);
const handleClosePreview = useCallback(() => {
setSelectedExecutionId(null);
}, []);
const handleViewModeChange = useCallback((mode: ViewMode) => {
setViewMode(mode);
localStorage.setItem(VIEW_MODE_STORAGE_KEY, mode);
setPage(1);
}, []);
return (
    <div className="p-6">
      {/* Header - always visible */}
      <div className="flex items-center justify-between mb-6">
        <div>
          <h1 className="text-3xl font-bold">Executions</h1>
          {isFetching && hasActiveFilters && (
            <p className="text-sm text-gray-500 mt-1">
              Searching executions...
            </p>
          )}
        </div>
        {isConnected && (
          <div className="flex items-center gap-2 text-sm text-green-600">
            <div className="h-2 w-2 rounded-full bg-green-600 animate-pulse" />
            <span>Live Updates</span>
          </div>
        )}
      </div>
      {/* Filter section - always mounted, never unmounts during loading */}
      <div className="bg-white shadow rounded-lg p-4 mb-6">
        <div className="flex items-center justify-between mb-4">
          <div className="flex items-center gap-2">
            <Search className="h-5 w-5 text-gray-400" />
            <h2 className="text-lg font-semibold">Filter Executions</h2>
          </div>
          {hasActiveFilters && (
            <button
              onClick={clearFilters}
              className="flex items-center gap-1 text-sm text-gray-600 hover:text-gray-900"
            >
              <X className="h-4 w-4" />
              Clear Filters
            </button>
          )}
        </div>
        <div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-6 gap-4">
          <AutocompleteInput
            label="Pack"
            value={searchFilters.pack}
            onChange={(value) => handleFilterChange("pack", value)}
            suggestions={packSuggestions}
            placeholder="e.g., core"
          />
          <AutocompleteInput
            label="Rule"
            value={searchFilters.rule}
            onChange={(value) => handleFilterChange("rule", value)}
            suggestions={ruleSuggestions}
            placeholder="e.g., core.on_timer"
          />
          <AutocompleteInput
            label="Action"
            value={searchFilters.action}
            onChange={(value) => handleFilterChange("action", value)}
            suggestions={actionSuggestions}
            placeholder="e.g., core.echo"
          />
          <AutocompleteInput
            label="Trigger"
            value={searchFilters.trigger}
            onChange={(value) => handleFilterChange("trigger", value)}
            suggestions={triggerSuggestions}
            placeholder="e.g., core.timer"
          />
          <FilterInput
            label="Executor ID"
            value={searchFilters.executor}
            onChange={(value) => handleFilterChange("executor", value)}
            placeholder="e.g., 1"
          />
          <div>
            <MultiSelect
              label="Status"
              options={STATUS_OPTIONS}
              value={selectedStatuses}
              onChange={setSelectedStatuses}
              placeholder="All Statuses"
            />
          </div>
        </div>
      </div>
      {/* Results section - isolated from filter state, only depends on query results */}
      <ExecutionsResultsTable
        executions={filteredExecutions}
        isLoading={isLoading}
        isFetching={isFetching}
        error={error as Error | null}
        hasActiveFilters={hasActiveFilters}
        clearFilters={clearFilters}
        page={page}
        setPage={setPage}
        pageSize={pageSize}
        total={total}
      />
    </div>
    <div className="flex h-[calc(100vh-4rem)]">
      {/* Main content area */}
      <div
        className={`flex-1 min-w-0 overflow-y-auto p-6 ${selectedExecutionId ? "mr-0" : ""}`}
      >
        {/* Header - always visible */}
        <div className="flex items-center justify-between mb-6">
          <div className="flex items-center gap-3">
            <h1 className="text-3xl font-bold">Executions</h1>
            {isConnected && (
              <div className="flex items-center gap-1.5 text-xs text-green-600 bg-green-50 border border-green-200 rounded-full px-2.5 py-1">
                <div className="h-1.5 w-1.5 rounded-full bg-green-500 animate-pulse" />
                <span>Live</span>
              </div>
            )}
            {isFetching && hasActiveFilters && (
              <p className="text-sm text-gray-500">Searching executions...</p>
            )}
          </div>
          <div className="flex items-center gap-4">
            {/* View mode toggle */}
            <div className="inline-flex rounded-lg border border-gray-300 bg-white shadow-sm">
              <button
                onClick={() => handleViewModeChange("all")}
                className={`inline-flex items-center gap-1.5 px-3 py-1.5 text-sm font-medium rounded-l-lg transition-colors ${
                  viewMode === "all"
                    ? "bg-blue-600 text-white"
                    : "text-gray-600 hover:bg-gray-50"
                }`}
              >
                <List className="h-4 w-4" />
                All
              </button>
              <button
                onClick={() => handleViewModeChange("workflow")}
                className={`inline-flex items-center gap-1.5 px-3 py-1.5 text-sm font-medium rounded-r-lg transition-colors ${
                  viewMode === "workflow"
                    ? "bg-blue-600 text-white"
                    : "text-gray-600 hover:bg-gray-50"
                }`}
              >
                <GitBranch className="h-4 w-4" />
                By Workflow
              </button>
            </div>
          </div>
        </div>
        {/* Filter section - always mounted, never unmounts during loading */}
        <div className="bg-white shadow rounded-lg p-4 mb-6">
          <div className="flex items-center justify-between mb-4">
            <div className="flex items-center gap-2">
              <Search className="h-5 w-5 text-gray-400" />
              <h2 className="text-lg font-semibold">Filter Executions</h2>
            </div>
            {hasActiveFilters && (
              <button
                onClick={clearFilters}
                className="flex items-center gap-1 text-sm text-gray-600 hover:text-gray-900"
              >
                <X className="h-4 w-4" />
                Clear Filters
              </button>
            )}
          </div>
          <div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-6 gap-4">
            <AutocompleteInput
              label="Pack"
              value={searchFilters.pack}
              onChange={(value) => handleFilterChange("pack", value)}
              suggestions={packSuggestions}
              placeholder="e.g., core"
            />
            <AutocompleteInput
              label="Rule"
              value={searchFilters.rule}
              onChange={(value) => handleFilterChange("rule", value)}
              suggestions={ruleSuggestions}
              placeholder="e.g., core.on_timer"
            />
            <AutocompleteInput
              label="Action"
              value={searchFilters.action}
              onChange={(value) => handleFilterChange("action", value)}
              suggestions={actionSuggestions}
              placeholder="e.g., core.echo"
            />
            <AutocompleteInput
              label="Trigger"
              value={searchFilters.trigger}
              onChange={(value) => handleFilterChange("trigger", value)}
              suggestions={triggerSuggestions}
              placeholder="e.g., core.timer"
            />
            <FilterInput
              label="Executor ID"
              value={searchFilters.executor}
              onChange={(value) => handleFilterChange("executor", value)}
              placeholder="e.g., 1"
            />
            <div>
              <MultiSelect
                label="Status"
                options={STATUS_OPTIONS}
                value={selectedStatuses}
                onChange={setSelectedStatuses}
                placeholder="All Statuses"
              />
            </div>
          </div>
        </div>
        {/* Results section - isolated from filter state, only depends on query results */}
        {viewMode === "all" ? (
          <ExecutionsResultsTable
            executions={filteredExecutions}
            isLoading={isLoading}
            isFetching={isFetching}
            error={error as Error | null}
            hasActiveFilters={hasActiveFilters}
            clearFilters={clearFilters}
            page={page}
            setPage={setPage}
            pageSize={pageSize}
            total={total}
            selectedExecutionId={selectedExecutionId}
            onSelectExecution={handleSelectExecution}
          />
        ) : (
          <WorkflowExecutionTree
            executions={filteredExecutions}
            isLoading={isLoading}
            isFetching={isFetching}
            error={error as Error | null}
            hasActiveFilters={hasActiveFilters}
            clearFilters={clearFilters}
            page={page}
            setPage={setPage}
            pageSize={pageSize}
            total={total}
            workflowActionRefs={baseSuggestions.workflowActionRefs}
            selectedExecutionId={selectedExecutionId}
            onSelectExecution={handleSelectExecution}
          />
        )}
      </div>
      {/* Right-side preview panel */}
      {selectedExecutionId && (
        <div className="w-[400px] flex-shrink-0 h-full">
          <ExecutionPreviewPanel
            executionId={selectedExecutionId}
            onClose={handleClosePreview}
          />
        </div>
      )}
    </div>
  );
}

View File

@@ -0,0 +1,59 @@
# Execution Table → TimescaleDB Hypertable Conversion
**Date**: 2026-02-27
**Scope**: Database migration, Rust code fixes, AGENTS.md updates
## Summary
Converted the `execution` table from a regular PostgreSQL table to a TimescaleDB hypertable partitioned on `created` (1-day chunks), consistent with the existing `event` and `enforcement` hypertable conversions. This enables automatic time-based partitioning, compression, and retention for execution data.
## Key Design Decisions
- **`updated` column preserved**: Unlike `event` (immutable) and `enforcement` (single update), executions are updated ~4 times during their lifecycle. The `updated` column and its BEFORE UPDATE trigger are kept because the timeout monitor and UI depend on them.
- **`execution_history` preserved**: The execution_history hypertable tracks field-level diffs which remain valuable for a mutable table. Its continuous aggregates (`execution_status_hourly`, `execution_throughput_hourly`) are unchanged.
- **7-day compression window is safe**: Executions complete within at most ~1 day, so all updates finish well before compression kicks in.
- **New `execution_volume_hourly` continuous aggregate**: Queries the execution hypertable directly (like `event_volume_hourly` queries event), providing belt-and-suspenders volume monitoring alongside the history-based aggregates.
## Changes
### New Migration: `migrations/20250101000010_execution_hypertable.sql`
- Drops all FK constraints referencing `execution` (inquiry, workflow_execution, self-references, action, executor, workflow_def)
- Changes PK from `(id)` to `(id, created)` (TimescaleDB requirement)
- Converts to hypertable with `create_hypertable('execution', 'created', chunk_time_interval => '1 day')`
- Adds compression policy (segmented by `action_ref`, after 7 days)
- Adds 90-day retention policy
- Adds `execution_volume_hourly` continuous aggregate with 30-minute refresh policy
### Rust Code Fixes
- **`crates/executor/src/timeout_monitor.rs`**: Replaced `SELECT * FROM execution` with explicit column list. `SELECT *` on hypertables is fragile — the execution table has columns (`is_workflow`, `workflow_def`) not present in the Rust `Execution` model.
- **`crates/api/tests/sse_execution_stream_tests.rs`**: Fixed references to non-existent `start_time` and `end_time` columns (replaced with `updated = NOW()`).
- **`crates/common/src/repositories/analytics.rs`**: Added `ExecutionVolumeBucket` struct and `execution_volume_hourly` / `execution_volume_hourly_by_action` repository methods for the new continuous aggregate.
### AGENTS.md Updates
- Added **Execution Table (TimescaleDB Hypertable)** documentation
- Updated FK ON DELETE Policy to reflect execution as hypertable
- Updated Nullable FK Fields to list all dropped FK constraints
- Updated table count (still 20) and migration count (9 → 10)
- Updated continuous aggregate count (5 → 6)
- Updated development status to include execution hypertable
- Added pitfall #19: never use `SELECT *` on hypertable-backed models
- Added pitfall #20: execution/event/enforcement cannot be FK targets
## FK Constraints Dropped
| Source Column | Target | Disposition |
|---|---|---|
| `inquiry.execution` | `execution(id)` | Column kept as plain BIGINT |
| `workflow_execution.execution` | `execution(id)` | Column kept as plain BIGINT |
| `execution.parent` | `execution(id)` | Self-ref, column kept |
| `execution.original_execution` | `execution(id)` | Self-ref, column kept |
| `execution.workflow_def` | `workflow_definition(id)` | Column kept |
| `execution.action` | `action(id)` | Column kept |
| `execution.executor` | `identity(id)` | Column kept |
| `execution.enforcement` | `enforcement(id)` | Already dropped in migration 000009 |
## Verification
- `cargo check --all-targets --workspace`: Zero warnings
- `cargo test --workspace --lib`: All 90 unit tests pass
- Integration test failures are pre-existing (missing `attune_test` database), unrelated to these changes

---

# `with_items` Concurrency Limiting Implementation
**Date**: 2026-02-27
**Scope**: `crates/executor/src/scheduler.rs`
## Problem
Workflow tasks with `with_items` and a `concurrency` limit dispatched all items simultaneously, ignoring the concurrency setting entirely. For example, a task with `concurrency: 3` and 20 items would dispatch all 20 at once instead of running at most 3 in parallel.
## Root Cause
The `dispatch_with_items_task` method iterated over all items in a single loop, creating a child execution and publishing it to the MQ for every item unconditionally. The `task_node.concurrency` value was logged but never used to gate dispatching.
## Solution
### Approach: DB-Based Sliding Window
All child execution records are created in the database up front (with fully-rendered inputs), but only the first `concurrency` items are published to the message queue. The remaining children stay at `Requested` status in the DB. As each item completes, `advance_workflow` queries for `Requested`-status siblings and publishes enough to refill the concurrency window.
This avoids the need for any auxiliary state in workflow variables — the database itself is the single source of truth for which items are pending vs in-flight.
### Initial Attempt: Workflow Variables (Abandoned)
The first implementation stored pending items as JSON metadata in `workflow_execution.variables` under `__pending_items__{task_name}`. This approach suffered from race conditions: when multiple items completed simultaneously, concurrent `advance_workflow` calls would read stale pending lists, pop the same item, and lose others. The result was that only the initial batch ever executed.
### Key Changes
#### 1. `dispatch_with_items_task` — Two-Phase Dispatch
- **Phase 1**: Creates ALL child execution records in the database. Each row has its input already rendered through the `WorkflowContext`, so no re-rendering is needed later.
- **Phase 2**: Publishes only the first `min(total, concurrency)` items to the MQ via `publish_execution_requested`. The rest stay at `Requested` status.
#### 2. `publish_execution_requested` — New Helper
Publishes an `ExecutionRequested` MQ message for an existing execution row. Used both during initial dispatch (Phase 2) and when filling concurrency slots on completion.
#### 3. `publish_pending_with_items_children` — Fill Concurrency Slots
Replaces the old `dispatch_next_pending_with_items`. Queries the database for siblings at `Requested` status (ordered by `task_index`), limited to the number of free slots, and publishes them. No workflow variables involved — the DB query `status = 'requested'` is the authoritative source of undispatched items.
#### 4. `advance_workflow` — Concurrency-Aware Completion
The with_items completion branch now:
1. Counts **in-flight** siblings (`scheduling`, `scheduled`, `running` — NOT `requested`)
2. Reads the `concurrency` limit from the task graph
3. Calculates `free_slots = concurrency - in_flight`
4. Calls `publish_pending_with_items_children(free_slots)` to fill the window
5. Checks **all** non-terminal siblings (including `requested`) to decide whether to advance
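The slot arithmetic in steps 1-3 reduces to a saturating subtraction clamped by the number of pending items. A minimal illustrative sketch (function and parameter names are assumptions, not the actual scheduler API; the real code derives these counts from DB queries):

```rust
/// How many Requested children to publish so that at most `concurrency`
/// items are in flight at once.
fn free_slots(concurrency: usize, in_flight: usize, requested: usize) -> usize {
    concurrency.saturating_sub(in_flight).min(requested)
}

fn main() {
    // concurrency 3, two items still running, two waiting at Requested
    assert_eq!(free_slots(3, 2, 2), 1);
    // window full: nothing to publish even with items pending
    assert_eq!(free_slots(3, 3, 5), 0);
    // more free slots than pending items: publish only what exists
    assert_eq!(free_slots(3, 1, 1), 1);
}
```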
## Concurrency Flow Example
For a task with 5 items and `concurrency: 3`:
```
Initial: Create items 0-4 in DB; publish items 0, 1, 2 to MQ
Items 3, 4 stay at Requested status in DB
Item 0 ✓: in_flight=2 (items 1,2), free_slots=1 → publish item 3
         siblings_remaining=4 (items 1,2,3,4 non-terminal) → return early
Item 1 ✓: in_flight=2 (items 2,3), free_slots=1 → publish item 4
siblings_remaining=3 → return early
Item 2 ✓: in_flight=2 (items 3,4), free_slots=1 → no Requested items left
siblings_remaining=2 → return early
Item 3 ✓: in_flight=1 (item 4), free_slots=2 → no Requested items left
siblings_remaining=1 → return early
Item 4 ✓: in_flight=0, free_slots=3 → no Requested items left
siblings_remaining=0 → advance workflow to successor tasks
```
## Race Condition Handling
When multiple items complete simultaneously, concurrent `advance_workflow` calls may both query `status = 'requested'` and find the same pending items. The worst case is a brief over-dispatch (the same execution published to MQ twice). The scheduler handles this gracefully — the second message finds the execution already at `Scheduled`/`Running` status. This is a benign, self-correcting race that never loses items.
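The benign-race claim rests on the dispatch transition being one-shot: only a `Requested` execution can be claimed, so a duplicate MQ message finds nothing to do. A toy in-memory sketch of that invariant (the real guard is a conditional status update in the database, not in-memory state):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Status {
    Requested,
    Scheduled,
}

/// Claim an execution for dispatch. Returns true only for the first
/// caller; any duplicate message sees a non-Requested status and no-ops.
fn try_claim(status: &mut Status) -> bool {
    if *status == Status::Requested {
        *status = Status::Scheduled;
        true
    } else {
        false
    }
}

fn main() {
    let mut s = Status::Requested;
    assert!(try_claim(&mut s));  // first publish wins
    assert!(!try_claim(&mut s)); // duplicate publish is a no-op
}
```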
## Files Changed
- **`crates/executor/src/scheduler.rs`**:
- Rewrote `dispatch_with_items_task` with two-phase create-then-publish approach
- Added `publish_execution_requested` helper for publishing existing execution rows
- Added `publish_pending_with_items_children` for DB-query-based slot filling
- Rewrote `advance_workflow` with_items branch with in-flight counting and slot calculation
- Updated unit tests for the new approach
## Testing
- All 104 executor tests pass (102 + 2 ignored)
- 2 new unit tests for dispatch count and free slots calculations
- Clean workspace build with no new warnings

---

# Workflow Execution Orchestration & UI Ref-Lock Fix
**Date**: 2026-02-27
## Problem
Two issues were addressed:
### 1. Workflow ref editable during edit mode (UI)
When editing an existing workflow action, the pack selector and workflow name fields were editable, allowing users to change the action's ref — which should be immutable after creation.
### 2. Workflow execution runtime error
Executing a workflow action produced:
```
Action execution failed: Internal error: Runtime not found: No runtime found for action: examples.single_echo (available: node.js, python, shell)
```
**Root cause**: Workflow companion actions are created with `runtime: None` (they aren't scripts — they're orchestration definitions). When the executor's scheduler received an execution request for a workflow action, it dispatched it to a worker like any regular action. The worker then tried to find a runtime to execute it, failed (no runtime matches a `.workflow.yaml` entrypoint), and returned the error.
The `WorkflowCoordinator` in `crates/executor/src/workflow/coordinator.rs` existed as prototype code but was never integrated into the execution pipeline.
## Solution
### UI Fix (`web/src/pages/actions/WorkflowBuilderPage.tsx`)
- Added `disabled={isEditing}` to the `SearchableSelect` pack selector (already supported a `disabled` prop)
- Added `disabled={isEditing}` and conditional disabled styling to the workflow name `<input>`
- Both fields are now locked when editing an existing workflow, preventing ref changes
### Workflow Orchestration (`crates/executor/src/scheduler.rs`)
Added workflow detection and orchestration directly in the `ExecutionScheduler`:
1. **Detection**: `process_execution_requested` checks `action.workflow_def.is_some()` before dispatching to a worker
2. **`process_workflow_execution`**: Loads the workflow definition, parses it into a `WorkflowDefinition`, builds a `TaskGraph`, creates a `workflow_execution` record, and marks the parent execution as Running
3. **`dispatch_workflow_task`**: For each entry-point task in the graph, creates a child execution with the task's actual action ref (e.g., `core.echo` instead of `examples.single_echo`) and publishes an `ExecutionRequested` message. The child execution includes `workflow_task` metadata linking it back to the `workflow_execution` record.
4. **`advance_workflow`** (public): Called by the completion listener when a workflow child task completes. Evaluates transitions from the completed task, schedules successor tasks, checks join barriers, and completes the workflow when all tasks are done.
5. **`complete_workflow`**: Updates both the `workflow_execution` and parent `execution` records to their terminal state.
Key design decisions:
- Child task executions re-enter the normal scheduling pipeline via MQ, so nested workflows (a workflow task that is itself a workflow) are handled recursively
- Transition evaluation supports `succeeded()`, `failed()`, `timed_out()`, `always`, and custom conditions (custom defaults to fire-on-success for now)
- Join barriers are respected — tasks with `join` counts wait for enough predecessors
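The join-barrier check reduces to comparing a completed-predecessor count against the declared `join` value. Illustrative sketch only; the real evaluation walks the `TaskGraph`, and the no-join default shown here (first firing transition schedules the task) is an assumption:

```rust
/// Whether a successor task's join barrier is satisfied.
fn join_satisfied(completed_predecessors: usize, join: Option<usize>) -> bool {
    match join {
        // Explicit barrier: wait until `n` predecessors have completed.
        Some(n) => completed_predecessors >= n,
        // No barrier: a single firing transition is enough.
        None => completed_predecessors >= 1,
    }
}

fn main() {
    assert!(!join_satisfied(1, Some(2))); // barrier of 2 not yet met
    assert!(join_satisfied(2, Some(2)));  // barrier met
    assert!(join_satisfied(1, None));     // no barrier declared
}
```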
### Completion Listener (`crates/executor/src/completion_listener.rs`)
- Added workflow advancement: when a completed execution has `workflow_task` metadata, calls `ExecutionScheduler::advance_workflow` to schedule successor tasks or complete the workflow
- Added an `AtomicUsize` round-robin counter for dispatching successor tasks to workers
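The round-robin counter reduces to an atomic fetch-and-increment taken modulo the worker count. Minimal sketch (names are illustrative):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Next worker index in round-robin order. `fetch_add` wraps on overflow
/// and the modulo keeps the result in range, so this is lock-free and
/// safe to call from concurrent completion handlers.
fn next_worker(counter: &AtomicUsize, worker_count: usize) -> usize {
    counter.fetch_add(1, Ordering::Relaxed) % worker_count
}

fn main() {
    let counter = AtomicUsize::new(0);
    let picks: Vec<usize> = (0..4).map(|_| next_worker(&counter, 3)).collect();
    assert_eq!(picks, vec![0, 1, 2, 0]);
}
```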
### Binary Entry Point (`crates/executor/src/main.rs`)
- Added `mod workflow;` so the binary crate can resolve `crate::workflow::graph::*` paths used in the scheduler
## Files Changed
| File | Change |
|------|--------|
| `web/src/pages/actions/WorkflowBuilderPage.tsx` | Disable pack selector and name input when editing |
| `crates/executor/src/scheduler.rs` | Workflow detection, orchestration, task dispatch, advancement |
| `crates/executor/src/completion_listener.rs` | Workflow advancement on child task completion |
| `crates/executor/src/main.rs` | Added `mod workflow;` |
## Architecture Note
This implementation bypasses the prototype `WorkflowCoordinator` (`crates/executor/src/workflow/coordinator.rs`) which had several issues: hardcoded `attune.` schema prefixes, `SELECT *` on the execution table, duplicate parent execution creation, and no integration with the MQ-based scheduling pipeline. The new implementation works directly within the scheduler and completion listener, using the existing repository layer and message queue infrastructure.
## Testing
- Existing executor unit tests pass
- Workspace compiles with zero errors
- No new warnings introduced (pre-existing warnings from unused prototype workflow code remain)

---

# Workflow Parameter Resolution Fix
**Date**: 2026-02-27
**Scope**: `crates/executor/src/scheduler.rs`
## Problem
Workflow executions triggered via the API failed to resolve `{{ parameters.X }}` template expressions in task inputs. Instead of substituting the actual parameter value, the literal string `"{{ parameters.n }}"` was passed to the child action, causing runtime errors like:
```
ValueError: invalid literal for int() with base 10: '{{ parameters.n }}'
```
## Root Cause
The execution scheduler's `process_workflow_execution` and `advance_workflow` methods extracted workflow parameters from the execution's `config` field using:
```rust
execution.config.as_ref()
.and_then(|c| c.get("parameters").cloned())
.unwrap_or(json!({}))
```
This only handled the **wrapped** format `{"parameters": {"n": 5}}`, which is how child task executions store their config. However, when a workflow is triggered manually via the API, the config is stored in **flat** format `{"n": 5}` — the API places `request.parameters` directly into the execution's `config` column without wrapping it.
Because `config.get("parameters")` returned `None` for the flat format, `workflow_params` was set to `{}` (empty). The `WorkflowContext` was then built with no parameters, so `{{ parameters.n }}` failed to resolve. The error was silently swallowed by the fallback in `dispatch_workflow_task`, which used the raw (unresolved) input when template rendering failed.
## Fix
Added an `extract_workflow_params` helper function that handles both config formats, matching the existing logic in the worker's `ActionExecutor::prepare_execution_context`:
1. If config contains a `"parameters"` key → use that value (wrapped format)
2. Otherwise, if config is a JSON object → use the entire object as parameters (flat format)
3. Otherwise → return empty object
Replaced both extraction sites in the scheduler (`process_workflow_execution` and `advance_workflow`) with calls to this helper.
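The helper's three cases can be sketched as follows. The `Json` enum is a minimal stand-in for `serde_json::Value`, used only to keep the example self-contained; the real helper operates on `serde_json`:

```rust
use std::collections::BTreeMap;

/// Minimal stand-in for serde_json::Value.
#[derive(Clone, Debug, PartialEq)]
enum Json {
    Object(BTreeMap<String, Json>),
    Num(i64),
}

fn extract_workflow_params(config: Option<&Json>) -> Json {
    match config {
        // Wrapped format: {"parameters": {...}} -> use the inner value
        Some(Json::Object(map)) if map.contains_key("parameters") => map["parameters"].clone(),
        // Flat format: the whole object is the parameter map
        Some(Json::Object(map)) => Json::Object(map.clone()),
        // No config, or non-object config -> empty parameters
        _ => Json::Object(BTreeMap::new()),
    }
}

fn main() {
    let mut params = BTreeMap::new();
    params.insert("n".to_string(), Json::Num(5));

    // Flat: {"n": 5}
    let flat = Json::Object(params.clone());
    assert_eq!(extract_workflow_params(Some(&flat)), Json::Object(params.clone()));

    // Wrapped: {"parameters": {"n": 5}}
    let mut outer = BTreeMap::new();
    outer.insert("parameters".to_string(), Json::Object(params.clone()));
    let wrapped = Json::Object(outer);
    assert_eq!(extract_workflow_params(Some(&wrapped)), Json::Object(params));

    // Missing config
    assert_eq!(extract_workflow_params(None), Json::Object(BTreeMap::new()));
}
```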
## Files Changed
- **`crates/executor/src/scheduler.rs`**:
- Added `extract_workflow_params()` helper function
- Updated `process_workflow_execution()` to use the helper
- Updated `advance_workflow()` to use the helper
- Added 6 unit tests covering wrapped, flat, None, non-object, empty, and precedence cases
## Testing
- All 104 existing executor tests pass
- 6 new unit tests added and passing
- No new warnings introduced

---

# Workflow Template Resolution Implementation
**Date**: 2026-02-27
## Problem
Workflow task parameters containing `{{ }}` template expressions were being passed to workers verbatim without resolution. For example, a workflow task with `seconds: "{{item}}"` would send the literal string `"{{item}}"` to `core.sleep`, which rejected it with `"ERROR: seconds must be a positive integer"`.
Three interconnected features were missing from the executor's workflow orchestration:
1. **Template resolution**`{{ item }}`, `{{ parameters.x }}`, `{{ result().data.items }}`, etc. in task inputs were never rendered through the `WorkflowContext` before dispatching child executions.
2. **`with_items` expansion** — Tasks declaring `with_items: "{{ number_list }}"` were not expanded into multiple parallel child executions (one per item).
3. **`publish` variable processing** — Transition `publish` directives like `number_list: "{{ result().data.items }}"` were ignored, so variables never propagated between tasks.
A secondary issue was **type coercion**: `render_json` stringified all template results, so `"{{ item }}"` resolving to integer `5` became the string `"5"`, causing type validation failures in downstream actions.
## Root Cause
The `ExecutionScheduler::dispatch_workflow_task()` method passed `task_node.input` directly into the child execution's config without any template rendering. Neither `process_workflow_execution` (entry-point dispatch) nor `advance_workflow` (successor dispatch) constructed or used a `WorkflowContext`. The `publish` directives on transitions were completely ignored in `advance_workflow`.
## Changes
### `crates/executor/src/workflow/context.rs`
- **Function-call expressions**: Added support for `result()`, `result().path.to.field`, `succeeded()`, `failed()`, and `timed_out()` in the expression evaluator via `try_evaluate_function_call()`.
- **`TaskOutcome` enum**: New enum (`Succeeded`, `Failed`, `TimedOut`) to track the last completed task's status for function expressions.
- **`set_last_task_outcome()`**: Records the result and outcome of the most recently completed task.
- **Type-preserving `render_json`**: When a JSON string value is a pure template expression (the entire string is `{{ expr }}`), `render_json` now returns the raw `JsonValue` from the expression instead of stringifying it. Added `try_evaluate_pure_expression()` helper. This means `"{{ item }}"` resolving to `5` stays as integer `5`, not string `"5"`.
- **`rebuild()` constructor**: Reconstructs a `WorkflowContext` from persisted workflow state (stored variables, parameters, and completed task results). Used by the scheduler when advancing a workflow.
- **`export_variables()`**: Exports workflow variables as a JSON object for persisting back to the `workflow_execution.variables` column.
- **Updated `publish_from_result()`**: Uses type-preserving `render_json` for publish expressions so arrays/numbers/booleans retain their types.
- **18 unit tests**: All passing, including new tests for type preservation, `result()` function, `succeeded()`/`failed()`, publish with result function, rebuild, and the exact `with_items` integer scenario from the failing workflow.
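The pure-expression check can be sketched as a string test (illustrative only; the real `try_evaluate_pure_expression` also evaluates the inner expression through the context):

```rust
/// Returns the inner expression when the entire string is a single
/// `{{ ... }}` block; only such values get type-preserving rendering.
/// Anything with surrounding text or multiple blocks is interpolated
/// into a string as before.
fn pure_expression(s: &str) -> Option<&str> {
    let inner = s.trim().strip_prefix("{{")?.strip_suffix("}}")?;
    // A second template marker means interpolation, not a pure expression.
    if inner.contains("{{") || inner.contains("}}") {
        return None;
    }
    Some(inner.trim())
}

fn main() {
    assert_eq!(pure_expression("{{ item }}"), Some("item")); // keeps JSON type
    assert_eq!(pure_expression("sleep {{ item }}s"), None);  // stringified
    assert_eq!(pure_expression("{{ a }}{{ b }}"), None);     // stringified
}
```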
### `crates/executor/src/scheduler.rs`
- **Template resolution in `dispatch_workflow_task()`**: Now accepts a `WorkflowContext` parameter and renders `task_node.input` through `wf_ctx.render_json()` before wrapping in the execution config.
- **Initial context in `process_workflow_execution()`**: Builds a `WorkflowContext` from the parent execution's parameters and workflow-level vars, passes it to entry-point task dispatch.
- **Context reconstruction in `advance_workflow()`**: Rebuilds the `WorkflowContext` from the `workflow_execution.variables` column plus results of all completed child executions. Sets `last_task_outcome` from the just-completed execution.
- **`publish` processing**: Iterates transition `publish` directives when a transition fires, evaluates expressions through the context, and persists updated variables back to the `workflow_execution` record.
- **`with_items` expansion**: New `dispatch_with_items_task()` method resolves the `with_items` expression to a JSON array, then creates one child execution per item with `item`/`index` set on the context. Each child gets `task_index` set in its `WorkflowTaskMetadata`.
- **`with_items` completion tracking**: In `advance_workflow()`, tasks with `task_index` (indicating `with_items`) are only marked completed/failed when ALL sibling items for that task name are done.
### `packs/examples/actions/list_example.sh` & `list_example.yaml`
- Rewrote the shell script from `bash`+`jq` (unavailable in worker containers) to pure POSIX shell with `dotenv` parameter parsing, matching the core pack pattern.
- Changed `parameter_format` from `json` to `dotenv`.
### `packs.external/python_example/actions/list_numbers.py` & `list_numbers.yaml`
- New action `python_example.list_numbers` that returns `{"items": list(range(start, n+start))}`.
- Parameters: `n` (default 10), `start` (default 0). JSON output format, Python ≥3.9.
## Workflow Flow (After Fix)
For the `examples.hello_workflow`:
```
1. generate_numbers task dispatched with rendered input {count: 5, n: 5}
2. python_example.list_numbers returns {items: [0, 1, 2, 3, 4]}
3. Transition publish: number_list = result().data.items → [0,1,2,3,4]
Variables persisted to workflow_execution record
4. sleep_2 dispatched with with_items: "{{ number_list }}"
→ 5 child executions created, each with item/index context
→ seconds: "{{item}}" renders to 0, 1, 2, 3, 4 (integers, not strings)
5. All sleep items complete → task marked done → echo_3 dispatched
6. Workflow completes
```
## Testing
- All 96 executor unit tests pass (0 failures)
- All 18 workflow context tests pass (including 8 new tests)
- Full workspace compiles with no new warnings (30 pre-existing)

---

# Event & Enforcement Tables → TimescaleDB Hypertable Migration
**Date:** 2026-02
**Scope:** Database migrations, Rust models/repositories/API, Web UI
## Summary
Converted the `event` and `enforcement` tables from regular PostgreSQL tables to TimescaleDB hypertables, and removed the now-unnecessary `event_history` and `enforcement_history` tables.
- **Events** are immutable after insert (never updated), so a separate change-tracking history table added no value.
- **Enforcements** are updated exactly once (~1 second after creation, to set status from `created` to `processed` or `disabled`), well before the 7-day compression window. A history table tracking one deterministic status change per row was unnecessary overhead.
Both tables now benefit from automatic time-based partitioning, compression, and retention directly.
## Motivation
The `event_history` and `enforcement_history` hypertables were created alongside `execution_history` and `worker_history` to track field-level changes. However:
- **Events** are never modified after creation — no code path in the API, executor, worker, or sensor ever updates an event row. The history trigger was recording INSERT operations only, duplicating data already in the `event` table.
- **Enforcements** undergo a single, predictable status transition (created → processed/disabled) within ~1 second. The history table recorded one INSERT and one UPDATE per enforcement — the INSERT was redundant, and the UPDATE only changed `status`. The new `resolved_at` column captures this lifecycle directly on the enforcement row itself.
## Changes
### Database Migrations
**`000004_trigger_sensor_event_rule.sql`**:
- Removed `updated` column from the `event` table
- Removed `update_event_updated` trigger
- Replaced `updated` column with `resolved_at TIMESTAMPTZ` (nullable) on the `enforcement` table
- Removed `update_enforcement_updated` trigger
- Updated column comments for enforcement (status lifecycle, resolved_at semantics)
**`000008_notify_triggers.sql`**:
- Updated enforcement NOTIFY trigger payloads: `updated` → `resolved_at`
**`000009_timescaledb_history.sql`**:
- Removed `event_history` table, all its indexes, trigger function, trigger, compression and retention policies
- Removed `enforcement_history` table, all its indexes, trigger function, trigger, compression and retention policies
- Added hypertable conversion for `event` table:
- Dropped FK constraint from `enforcement.event` → `event(id)`
- Changed PK from `(id)` to `(id, created)`
- Converted to hypertable with 1-day chunk interval
- Compression segmented by `trigger_ref`, retention 90 days
- Added hypertable conversion for `enforcement` table:
- Dropped FK constraint from `execution.enforcement` → `enforcement(id)`
- Changed PK from `(id)` to `(id, created)`
- Converted to hypertable with 1-day chunk interval
- Compression segmented by `rule_ref`, retention 90 days
- Updated `event_volume_hourly` continuous aggregate to query `event` table directly
- Updated `enforcement_volume_hourly` continuous aggregate to query `enforcement` table directly
### Rust Code — Events
**`crates/common/src/models.rs`**:
- Removed `updated` field from `Event` struct
- Removed `Event` variant from `HistoryEntityType` enum
**`crates/common/src/repositories/event.rs`**:
- Removed `UpdateEventInput` struct and `Update` trait implementation for `EventRepository`
- Updated all SELECT queries to remove `updated` column
**`crates/api/src/dto/event.rs`**:
- Removed `updated` field from `EventResponse`
**`crates/common/tests/event_repository_tests.rs`**:
- Removed all update tests
- Renamed timestamp test to `test_event_created_timestamp_auto_set`
- Updated `test_delete_event_enforcement_retains_event_id` (FK dropped, so enforcement.event is now a dangling reference after event deletion)
### Rust Code — Enforcements
**`crates/common/src/models.rs`**:
- Replaced `updated: DateTime<Utc>` with `resolved_at: Option<DateTime<Utc>>` on `Enforcement` struct
- Removed `Enforcement` variant from `HistoryEntityType` enum
- Updated `FromStr`, `Display`, and `table_name()` implementations (only `Execution` and `Worker` remain)
**`crates/common/src/repositories/event.rs`**:
- Added `resolved_at: Option<DateTime<Utc>>` to `UpdateEnforcementInput`
- Updated all SELECT queries to use `resolved_at` instead of `updated`
- Update query no longer appends `, updated = NOW()``resolved_at` is set explicitly by the caller
**`crates/api/src/dto/event.rs`**:
- Replaced `updated` with `resolved_at: Option<DateTime<Utc>>` on `EnforcementResponse`
**`crates/executor/src/enforcement_processor.rs`**:
- Both status update paths (Processed and Disabled) now set `resolved_at: Some(chrono::Utc::now())`
- Updated test mock enforcement struct
**`crates/common/tests/enforcement_repository_tests.rs`**:
- Updated all tests to use `resolved_at` instead of `updated`
- Renamed `test_create_enforcement_with_invalid_event_fails``test_create_enforcement_with_nonexistent_event_succeeds` (FK dropped)
- Renamed `test_enforcement_timestamps_auto_managed``test_enforcement_resolved_at_lifecycle`
- All `UpdateEnforcementInput` usages now include `resolved_at` field
### Rust Code — History Infrastructure
**`crates/api/src/routes/history.rs`**:
- Removed `get_event_history` and `get_enforcement_history` endpoints
- Removed `/events/{id}/history` and `/enforcements/{id}/history` routes
- Updated doc comments to list only `execution` and `worker`
**`crates/api/src/dto/history.rs`**:
- Updated entity type comment
**`crates/common/src/repositories/entity_history.rs`**:
- Updated tests to remove `Event` and `Enforcement` variant assertions
- Both now correctly fail to parse as `HistoryEntityType`
### Web UI
**`web/src/pages/events/EventDetailPage.tsx`**:
- Removed `EntityHistoryPanel` component
**`web/src/pages/enforcements/EnforcementDetailPage.tsx`**:
- Removed `EntityHistoryPanel` component
- Added `resolved_at` display in Overview card ("Resolved At" field, shows "Pending" when null)
- Added `resolved_at` display in Metadata sidebar
**`web/src/hooks/useHistory.ts`**:
- Removed `"event"` and `"enforcement"` from `HistoryEntityType` union and `pluralMap`
- Removed `useEventHistory` and `useEnforcementHistory` convenience hooks
**`web/src/hooks/useEnforcementStream.ts`**:
- Removed history query invalidation (no more enforcement_history table)
### Documentation
- Updated `AGENTS.md`: table counts (22→20), history entity list, FK policy, enforcement lifecycle (resolved_at), pitfall #17
- Updated `docs/plans/timescaledb-entity-history.md`: removed event_history and enforcement_history from all tables, added notes about both hypertables
## Key Design Decisions
1. **Composite PK `(id, created)` on both tables**: Required by TimescaleDB — the partitioning column must be part of the PK. The `id` column retains its `BIGSERIAL` for unique identification; `created` is added for partitioning.
2. **Dropped FKs targeting hypertables**: TimescaleDB hypertables cannot be the target of foreign key constraints. Affected: `enforcement.event → event(id)` and `execution.enforcement → enforcement(id)`. Both columns remain as plain BIGINT for application-level joins. Since the original FKs were `ON DELETE SET NULL` (soft references), this is a minor change — the columns may now become dangling references if the referenced row is deleted.
3. **`resolved_at` instead of `updated`**: The `updated` column was a generic auto-managed timestamp. The new `resolved_at` column is semantically meaningful — it records specifically when the enforcement was resolved (status transitioned away from `created`). It is `NULL` while the enforcement is pending, making it easy to query for unresolved enforcements. The executor sets it explicitly alongside the status change.
4. **Compression segmentation**: Event table segments by `trigger_ref`, enforcement table segments by `rule_ref` — matching the most common query patterns for each table.
5. **90-day retention for both**: Aligned with execution history retention since events and enforcements are primary operational records in the event-driven pipeline.
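Decision 3's lifecycle can be sketched in miniature (`u64` stands in for `DateTime<Utc>`; types and method names are illustrative, not the actual model code):

```rust
#[derive(Debug, PartialEq)]
enum Status {
    Created,
    Processed,
    Disabled,
}

struct Enforcement {
    status: Status,
    resolved_at: Option<u64>, // NULL (None) while pending
}

impl Enforcement {
    fn new() -> Self {
        Self { status: Status::Created, resolved_at: None }
    }

    /// The single deterministic transition: status leaves Created exactly
    /// once, and resolved_at is set in the same update, mirroring the
    /// executor's enforcement processor.
    fn resolve(&mut self, to: Status, now: u64) {
        assert_eq!(self.status, Status::Created, "already resolved");
        self.status = to;
        self.resolved_at = Some(now);
    }

    /// Unresolved enforcements are simply `WHERE resolved_at IS NULL`.
    fn is_pending(&self) -> bool {
        self.resolved_at.is_none()
    }
}

fn main() {
    let mut e = Enforcement::new();
    assert!(e.is_pending());
    e.resolve(Status::Processed, 1_700_000_000);
    assert!(!e.is_pending());
    assert_eq!(e.resolved_at, Some(1_700_000_000));
}
```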

---

# Remove `is_workflow` from Action Table & Add Workflow Edit Button
**Date**: 2026-02
## Summary
Removed the redundant `is_workflow` boolean column from the `action` table throughout the entire stack. An action being a workflow is fully determined by having a non-null `workflow_def` FK — the boolean was unnecessary. Also added a workflow edit button and visual indicator to the Actions page UI.
## Changes
### Backend — Drop `is_workflow` from Action
**`crates/common/src/models.rs`**
- Removed `is_workflow: bool` field from the `Action` struct
**`crates/common/src/repositories/action.rs`**
- Removed `is_workflow` from all SELECT column lists (9 queries)
- Updated `find_workflows()` to use `WHERE workflow_def IS NOT NULL` instead of `WHERE is_workflow = true`
- Updated `link_workflow_def()` to only `SET workflow_def = $2` (no longer sets `is_workflow = true`)
**`crates/api/src/dto/action.rs`**
- Removed `is_workflow` field from `ActionResponse` and `ActionSummary` DTOs
- Added `workflow_def: Option<i64>` field to both DTOs (non-null means this action is a workflow)
- Updated `From<Action>` impls accordingly
**`crates/api/src/validation/params.rs`**
- Removed `is_workflow` from test fixture `make_action()`
**Comments updated in:**
- `crates/api/src/routes/workflows.rs` — companion action helper functions
- `crates/common/src/workflow/registrar.rs` — companion action creation
- `crates/executor/src/workflow/registrar.rs` — companion action creation
### Database Migration
**`migrations/20250101000006_workflow_system.sql`** (modified in-place, no production deployments)
- Removed `ADD COLUMN is_workflow BOOLEAN DEFAULT false NOT NULL` from ALTER TABLE
- Removed `idx_action_is_workflow` partial index
- Updated `workflow_action_link` view to use `LEFT JOIN action a ON a.workflow_def = wd.id` (dropped `AND a.is_workflow = true` filter)
- Updated column comment on `workflow_def`
> Note: `execution.is_workflow` is a separate DB-level column used by PostgreSQL notification triggers and was NOT removed. It exists only in SQL (not in the Rust `Execution` model).
### Frontend — Workflow Edit Button & Indicator
**TypeScript types updated** (4 files):
- `web/src/api/models/ActionResponse.ts` — added `workflow_def?: number | null`
- `web/src/api/models/ActionSummary.ts` — added `workflow_def?: number | null`
- `web/src/api/models/PaginatedResponse_ActionSummary.ts` — added `workflow_def?: number | null`
- `web/src/api/models/ApiResponse_ActionResponse.ts` — added `workflow_def?: number | null`
**`web/src/pages/actions/ActionsPage.tsx`**
- **Action list sidebar**: Workflow actions now show a purple `GitBranch` icon next to their label
- **Action detail view**: Workflow actions show a purple "Edit Workflow" button (with `Pencil` icon) that navigates to `/actions/workflows/:ref/edit`
### Prior Fix — Workflow Save Upsert (same session)
**`web/src/pages/actions/WorkflowBuilderPage.tsx`**
- Fixed workflow save from "new" page when workflow already exists
- On 409 CONFLICT from POST, automatically falls back to PUT (update) with the same data
- Constructs the workflow ref as `{packRef}.{name}` for the fallback PUT call
## Design Rationale
The `is_workflow` boolean on the action table was fully redundant:
- A workflow action always has `workflow_def IS NOT NULL`
- A workflow action's entrypoint always ends in `.workflow.yaml`
- The executor detects workflows by looking up `workflow_definition` by ref, not by checking `is_workflow`
- No runtime code path depended on the boolean that couldn't use `workflow_def IS NOT NULL` instead