diff --git a/AGENTS.md b/AGENTS.md index 06eef20..eb69f35 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -54,7 +54,7 @@ attune/ ## Service Architecture (Distributed Microservices) 1. **attune-api**: REST API gateway, JWT auth, all client interactions -2. **attune-executor**: Manages execution lifecycle, scheduling, policy enforcement +2. **attune-executor**: Manages execution lifecycle, scheduling, policy enforcement, workflow orchestration 3. **attune-worker**: Executes actions in multiple runtimes (Python/Node.js/containers) 4. **attune-sensor**: Monitors triggers, generates events 5. **attune-notifier**: Real-time notifications via PostgreSQL LISTEN/NOTIFY + WebSocket @@ -126,6 +126,11 @@ docker compose logs -f # View logs ``` Sensor → Trigger fires → Event created → Rule evaluates → Enforcement created → Execution scheduled → Worker executes Action + +For workflows: +Execution requested → Scheduler detects workflow_def → Loads definition → +Creates workflow_execution record → Dispatches entry-point tasks as child executions → +Completion listener advances workflow → Schedules successor tasks → Completes workflow ``` **Key Entities** (all in `public` schema, IDs are `i64`): @@ -210,14 +215,30 @@ Enforcement created → Execution scheduled → Worker executes Action - **JSON Fields**: Use `serde_json::Value` for flexible attributes/parameters, including `execution.workflow_task` JSONB - **Enums**: PostgreSQL enum types mapped with `#[sqlx(type_name = "...")]` - **Workflow Tasks**: Stored as JSONB in `execution.workflow_task` (consolidated from separate table 2026-01-27) -- **FK ON DELETE Policy**: Historical records (executions, events, enforcements) use `ON DELETE SET NULL` so they survive entity deletion while preserving text ref fields (`action_ref`, `trigger_ref`, etc.) for auditing. Pack-owned entities (actions, triggers, sensors, rules, runtimes) use `ON DELETE CASCADE` from pack. Workflow executions cascade-delete with their workflow definition. 
-- **Entity History Tracking (TimescaleDB)**: Append-only `_history` hypertables track field-level changes to `execution`, `worker`, `enforcement`, and `event` tables. Populated by PostgreSQL `AFTER INSERT OR UPDATE OR DELETE` triggers — no Rust code changes needed for recording. Uses JSONB diff format (`old_values`/`new_values`) with a `changed_fields TEXT[]` column for efficient filtering. Worker heartbeat-only updates are excluded. See `docs/plans/timescaledb-entity-history.md` for full design. +- **FK ON DELETE Policy**: Historical records (executions) use `ON DELETE SET NULL` so they survive entity deletion while preserving text ref fields (`action_ref`, `trigger_ref`, etc.) for auditing. The `event`, `enforcement`, and `execution` tables are TimescaleDB hypertables, so they **cannot be the target of FK constraints** — `enforcement.event`, `execution.enforcement`, `inquiry.execution`, `workflow_execution.execution`, `execution.parent`, and `execution.original_execution` are plain BIGINT columns (no FK) and may become dangling references if the referenced row is deleted. Pack-owned entities (actions, triggers, sensors, rules, runtimes) use `ON DELETE CASCADE` from pack. Workflow executions cascade-delete with their workflow definition. +- **Event Table (TimescaleDB Hypertable)**: The `event` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Events are **immutable after insert** — there is no `updated` column, no update trigger, and no `Update` repository impl. The `Event` model has no `updated` field. Compression is segmented by `trigger_ref` (after 7 days) and retention is 90 days. The `event_volume_hourly` continuous aggregate queries the `event` table directly. +- **Enforcement Table (TimescaleDB Hypertable)**: The `enforcement` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). 
Enforcements are updated **exactly once** — the executor sets `status` from `created` to `processed` or `disabled` within ~1 second of creation, well before the 7-day compression window. The `resolved_at` column (nullable `TIMESTAMPTZ`) records when this transition occurred; it is `NULL` while status is `created`. There is no `updated` column. Compression is segmented by `rule_ref` (after 7 days) and retention is 90 days. The `enforcement_volume_hourly` continuous aggregate queries the `enforcement` table directly. +- **Execution Table (TimescaleDB Hypertable)**: The `execution` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Executions are updated **~4 times** during their lifecycle (requested → scheduled → running → completed/failed), completing within at most ~1 day — well before the 7-day compression window. The `updated` column and its BEFORE UPDATE trigger are preserved (used by timeout monitor and UI). Compression is segmented by `action_ref` (after 7 days) and retention is 90 days. The `execution_volume_hourly` continuous aggregate queries the execution hypertable directly. The `execution_history` hypertable (field-level diffs) and its continuous aggregates (`execution_status_hourly`, `execution_throughput_hourly`) are preserved alongside — they serve complementary purposes (change tracking vs. volume monitoring). +- **Entity History Tracking (TimescaleDB)**: Append-only `
_history` hypertables track field-level changes to `execution` and `worker` tables. Populated by PostgreSQL `AFTER INSERT OR UPDATE OR DELETE` triggers — no Rust code changes needed for recording. Uses JSONB diff format (`old_values`/`new_values`) with a `changed_fields TEXT[]` column for efficient filtering. Worker heartbeat-only updates are excluded. There are **no `event_history` or `enforcement_history` tables** — events are immutable and enforcements have a single deterministic status transition, so both tables are hypertables themselves. See `docs/plans/timescaledb-entity-history.md` for full design. - **History Large-Field Guardrails**: The `execution` history trigger stores a compact **digest summary** instead of the full value for the `result` column (which can be arbitrarily large). The digest is produced by the `_jsonb_digest_summary(JSONB)` helper function and has the shape `{"digest": "md5:", "size": , "type": ""}`. This preserves change-detection semantics while avoiding history table bloat. The full result is always available on the live `execution` row. When adding new large JSONB columns to history triggers, use `_jsonb_digest_summary()` instead of storing the raw value. -- **Nullable FK Fields**: `rule.action` and `rule.trigger` are nullable (`Option` in Rust) — a rule with NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, and `event.source` are also nullable. -**Table Count**: 22 tables total in the schema (including `runtime_version` and 4 `*_history` hypertables) -**Migration Count**: 9 consolidated migrations (`000001` through `000009`) — see `migrations/` directory +- **Nullable FK Fields**: `rule.action` and `rule.trigger` are nullable (`Option` in Rust) — a rule with NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, and `event.source` are also nullable. 
`enforcement.event` is nullable but has no FK constraint (event is a hypertable). `execution.enforcement` is nullable but has no FK constraint (enforcement is a hypertable). All FK columns on the execution table (`action`, `parent`, `original_execution`, `enforcement`, `executor`, `workflow_def`) have no FK constraints (execution is a hypertable). `inquiry.execution` and `workflow_execution.execution` also have no FK constraints. `enforcement.resolved_at` is nullable — `None` while status is `created`, set when resolved. +**Table Count**: 20 tables total in the schema (including `runtime_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, + `execution` hypertables) +**Migration Count**: 9 migrations (`000001` through `000009`) — see `migrations/` directory - **Pack Component Loading Order**: Runtimes → Triggers → Actions → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order. +### Workflow Execution Orchestration +- **Detection**: The `ExecutionScheduler` checks `action.workflow_def.is_some()` before dispatching to a worker. Workflow actions are orchestrated by the executor, not sent to workers. +- **Orchestration Flow**: Scheduler loads the `WorkflowDefinition`, builds a `TaskGraph`, creates a `workflow_execution` record, marks the parent execution as Running, builds an initial `WorkflowContext` from execution parameters and workflow vars, then dispatches entry-point tasks as child executions via MQ with rendered inputs. +- **Template Resolution**: Task inputs are rendered through `WorkflowContext.render_json()` before dispatching. Supports `{{ parameters.x }}`, `{{ item }}`, `{{ index }}`, `{{ number_list }}` (direct variable), `{{ task.task_name.field }}`, and function expressions. **Type-preserving**: pure template expressions like `"{{ item }}"` preserve the JSON type (integer `5` stays as `5`, not string `"5"`). 
Mixed expressions like `"Sleeping for {{ item }} seconds"` remain strings. +- **Function Expressions**: `{{ result() }}` returns the last completed task's result. `{{ result().field.subfield }}` navigates into it. `{{ succeeded() }}`, `{{ failed() }}`, `{{ timed_out() }}` return booleans. These are evaluated by `WorkflowContext.try_evaluate_function_call()`. +- **Publish Directives**: Transition `publish` directives (e.g., `number_list: "{{ result().data.items }}"`) are evaluated when a transition fires. Published variables are persisted to the `workflow_execution.variables` column and available to subsequent tasks. Uses type-preserving rendering so arrays/numbers/booleans retain their types. +- **Child Task Dispatch**: Each workflow task becomes a child execution with the task's actual action ref (e.g., `core.echo`), `workflow_task` metadata linking it to the `workflow_execution` record, and a parent reference to the workflow execution. Child executions re-enter the normal scheduling pipeline, so nested workflows work recursively. +- **with_items Expansion**: Tasks declaring `with_items: "{{ expr }}"` are expanded into child executions. The expression is resolved via the `WorkflowContext` to produce a JSON array, then each item gets its own child execution with `item`/`index` set on the context and `task_index` in `WorkflowTaskMetadata`. Completion tracking waits for ALL sibling items to finish before marking the task as completed/failed and advancing the workflow. +- **with_items Concurrency Limiting**: When a task declares `concurrency: N`, ALL child execution records are created in the database up front (with fully-rendered inputs), but only the first `N` are published to the message queue. The remaining children stay at `Requested` status in the DB. 
As each item completes, `advance_workflow` counts in-flight siblings (`scheduling`/`scheduled`/`running`), calculates free slots (`concurrency - in_flight`), and calls `publish_pending_with_items_children()` which queries for `Requested`-status siblings ordered by `task_index` and publishes them. The DB `status = 'requested'` query is the authoritative source of undispatched items — no auxiliary state in workflow variables needed. The task is only marked complete when all siblings reach a terminal state. Without a `concurrency` value, all items are dispatched at once (previous behavior). +- **Advancement**: The `CompletionListener` detects when a completed execution has `workflow_task` metadata and calls `ExecutionScheduler::advance_workflow()`. The scheduler rebuilds the `WorkflowContext` from persisted `workflow_execution.variables` plus all completed child execution results, sets `last_task_outcome`, evaluates transitions (succeeded/failed/always/timed_out/custom with context-based condition evaluation), processes publish directives, schedules successor tasks with rendered inputs, and completes the workflow when all tasks are done. +- **Transition Evaluation**: `succeeded()`, `failed()`, `timed_out()`, and `always` (no condition) are supported. Custom conditions are evaluated via `WorkflowContext.evaluate_condition()` with fallback to fire-on-success if evaluation fails. +- **Legacy Coordinator**: The prototype `WorkflowCoordinator` in `crates/executor/src/workflow/coordinator.rs` is bypassed — it has hardcoded schema prefixes and is not integrated with the MQ pipeline. 
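+
+The type-preserving rendering rule above can be sketched as follows. This is a minimal illustration using `serde_json`, not the actual `WorkflowContext::render_json` implementation — the helper name `render_value` and the single-level variable lookup are assumptions for the sketch (the real renderer also handles `task.*` paths and function expressions):
+
+```rust
+use serde_json::{json, Value};
+
+/// Sketch of type-preserving template rendering: a value that is *exactly*
+/// one `{{ var }}` expression resolves to the variable's JSON value (keeping
+/// its type); any mixed expression interpolates into a plain string.
+fn render_value(template: &str, vars: &Value) -> Value {
+    let trimmed = template.trim();
+    // Pure expression like "{{ item }}": return the JSON value itself.
+    if trimmed.starts_with("{{") && trimmed.ends_with("}}") && trimmed.matches("{{").count() == 1 {
+        let key = trimmed.trim_start_matches("{{").trim_end_matches("}}").trim();
+        return vars.get(key).cloned().unwrap_or(Value::Null);
+    }
+    // Mixed expression: substitute each variable's string form.
+    let mut out = template.to_string();
+    if let Some(map) = vars.as_object() {
+        for (k, v) in map {
+            let needle = format!("{{{{ {} }}}}", k);
+            let s = match v {
+                Value::String(s) => s.clone(),
+                other => other.to_string(),
+            };
+            out = out.replace(&needle, &s);
+        }
+    }
+    Value::String(out)
+}
+
+fn main() {
+    let vars = json!({ "item": 5, "index": 0 });
+    // Pure expression keeps the integer type: 5, not "5".
+    assert_eq!(render_value("{{ item }}", &vars), json!(5));
+    // Mixed expression renders to a string.
+    assert_eq!(
+        render_value("Sleeping for {{ item }} seconds", &vars),
+        json!("Sleeping for 5 seconds")
+    );
+}
+```
+
+The key design point is that the pure-expression check happens before any string interpolation, so arrays and numbers published between tasks (e.g. via `publish` directives) survive round-trips without being flattened to strings.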
+ ### Pack File Loading & Action Execution - **Pack Base Directory**: Configured via `packs_base_dir` in config (defaults to `/opt/attune/packs`, development uses `./packs`) - **Pack Volume Strategy**: Packs are mounted as volumes (NOT copied into Docker images) @@ -343,6 +364,7 @@ Rule `action_params` support Jinja2-style `{{ source.path }}` templates resolved - Multi-segment paths use Catmull-Rom → cubic Bezier conversion for smooth curves through waypoints (`buildSmoothPath` in `WorkflowEdges.tsx`) - **Orquesta-style `next` transitions**: Tasks use a `next: TaskTransition[]` array instead of flat `on_success`/`on_failure` fields. Each transition has `when` (condition), `publish` (variables), `do` (target tasks), plus optional `label`, `color`, `edge_waypoints`, and `label_positions`. See "Task Transition Model" above. - **No task type or task-level condition**: The UI does not expose task `type` or task-level `when` — all tasks are actions (workflows are also actions), and conditions belong on transitions. Parallelism is implicit via multiple `do` targets. + - **Ref immutability**: When editing an existing workflow, the pack selector and workflow name fields are disabled — the ref cannot be changed after creation. ## Development Workflow @@ -483,8 +505,10 @@ When reporting, ask: "Should I fix this first or continue with [original task]?" 14. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare` 15. **REMEMBER** packs are volumes - update with restart, not rebuild 16. **REMEMBER** to build pack binaries separately: `./scripts/build-pack-binaries.sh` -17. **REMEMBER** when adding mutable columns to `execution`, `worker`, `enforcement`, or `event`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration +17. 
**REMEMBER** when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable with ~4 updates per row). 18. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history` +19. **NEVER** use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow`, `execution.workflow_def` exist in SQL but not in the `Execution` model). Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`), since `SELECT *` would cause runtime deserialization failures. +20. **REMEMBER** `execution`, `event`, and `enforcement` are all TimescaleDB hypertables — they **cannot be the target of FK constraints**. Any column referencing them (e.g., `inquiry.execution`, `workflow_execution.execution`, `execution.parent`) is a plain BIGINT with no FK and may become a dangling reference. ## Deployment - **Target**: Distributed deployment with separate service instances @@ -495,8 +519,8 @@ When reporting, ask: "Should I fix this first or continue with [original task]?" 
- **Web UI**: Static files served separately or via API service ## Current Development Status -- ✅ **Complete**: Database migrations (22 tables, 9 consolidated migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker, enforcement, event), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution, enforcement, event), TimescaleDB continuous aggregates (5 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector) -- 🔄 **In Progress**: Sensor service, advanced workflow features, Python runtime dependency management, API/UI endpoints for runtime version management +- ✅ **Complete**: Database migrations (20 tables, 9 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific 
with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch, pending items tracked via DB `requested` status), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`) +- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management
false)] pub is_adhoc: bool, @@ -186,6 +191,11 @@ pub struct ActionSummary { #[schema(example = ">=3.12", nullable = true)] pub runtime_version_constraint: Option, + /// Workflow definition ID (non-null if this action is a workflow) + #[serde(skip_serializing_if = "Option::is_none")] + #[schema(example = 42, nullable = true)] + pub workflow_def: Option, + /// Creation timestamp #[schema(example = "2024-01-13T10:30:00Z")] pub created: DateTime, @@ -210,6 +220,7 @@ impl From for ActionResponse { runtime_version_constraint: action.runtime_version_constraint, param_schema: action.param_schema, out_schema: action.out_schema, + workflow_def: action.workflow_def, is_adhoc: action.is_adhoc, created: action.created, updated: action.updated, @@ -229,6 +240,7 @@ impl From for ActionSummary { entrypoint: action.entrypoint, runtime: action.runtime, runtime_version_constraint: action.runtime_version_constraint, + workflow_def: action.workflow_def, created: action.created, updated: action.updated, } diff --git a/crates/api/src/dto/event.rs b/crates/api/src/dto/event.rs index 2042d27..085a2f7 100644 --- a/crates/api/src/dto/event.rs +++ b/crates/api/src/dto/event.rs @@ -53,10 +53,6 @@ pub struct EventResponse { /// Creation timestamp #[schema(example = "2024-01-13T10:30:00Z")] pub created: DateTime, - - /// Last update timestamp - #[schema(example = "2024-01-13T10:30:00Z")] - pub updated: DateTime, } impl From for EventResponse { @@ -72,7 +68,6 @@ impl From for EventResponse { rule: event.rule, rule_ref: event.rule_ref, created: event.created, - updated: event.updated, } } } @@ -230,9 +225,9 @@ pub struct EnforcementResponse { #[schema(example = "2024-01-13T10:30:00Z")] pub created: DateTime, - /// Last update timestamp - #[schema(example = "2024-01-13T10:30:00Z")] - pub updated: DateTime, + /// Timestamp when the enforcement was resolved (status changed from created to processed/disabled) + #[schema(example = "2024-01-13T10:30:01Z", nullable = true)] + pub resolved_at: Option>, } 
impl From for EnforcementResponse { @@ -249,7 +244,7 @@ impl From for EnforcementResponse { condition: enforcement.condition, conditions: enforcement.conditions, created: enforcement.created, - updated: enforcement.updated, + resolved_at: enforcement.resolved_at, } } } diff --git a/crates/api/src/dto/execution.rs b/crates/api/src/dto/execution.rs index beb0ff7..2e0d030 100644 --- a/crates/api/src/dto/execution.rs +++ b/crates/api/src/dto/execution.rs @@ -6,6 +6,7 @@ use serde_json::Value as JsonValue; use utoipa::{IntoParams, ToSchema}; use attune_common::models::enums::ExecutionStatus; +use attune_common::models::execution::WorkflowTaskMetadata; /// Request DTO for creating a manual execution #[derive(Debug, Clone, Deserialize, ToSchema)] @@ -62,6 +63,11 @@ pub struct ExecutionResponse { #[schema(value_type = Object, example = json!({"message_id": "1234567890.123456"}))] pub result: Option, + /// Workflow task metadata (only populated for workflow task executions) + #[serde(skip_serializing_if = "Option::is_none")] + #[schema(value_type = Option, nullable = true)] + pub workflow_task: Option, + /// Creation timestamp #[schema(example = "2024-01-13T10:30:00Z")] pub created: DateTime, @@ -102,6 +108,11 @@ pub struct ExecutionSummary { #[schema(example = "core.timer")] pub trigger_ref: Option, + /// Workflow task metadata (only populated for workflow task executions) + #[serde(skip_serializing_if = "Option::is_none")] + #[schema(value_type = Option, nullable = true)] + pub workflow_task: Option, + /// Creation timestamp #[schema(example = "2024-01-13T10:30:00Z")] pub created: DateTime, @@ -150,6 +161,12 @@ pub struct ExecutionQueryParams { #[param(example = 1)] pub parent: Option, + /// If true, only return top-level executions (those without a parent). + /// Useful for the "By Workflow" view where child tasks are loaded separately. 
+ #[serde(default)] + #[param(example = false)] + pub top_level_only: Option, + /// Page number (for pagination) #[serde(default = "default_page")] #[param(example = 1, minimum = 1)] @@ -190,6 +207,7 @@ impl From for ExecutionResponse { result: execution .result .map(|r| serde_json::to_value(r).unwrap_or(JsonValue::Null)), + workflow_task: execution.workflow_task, created: execution.created, updated: execution.updated, } @@ -207,6 +225,7 @@ impl From for ExecutionSummary { enforcement: execution.enforcement, rule_ref: None, // Populated separately via enforcement lookup trigger_ref: None, // Populated separately via enforcement lookup + workflow_task: execution.workflow_task, created: execution.created, updated: execution.updated, } @@ -256,6 +275,7 @@ mod tests { action_ref: None, enforcement: None, parent: None, + top_level_only: None, pack_name: None, rule_ref: None, trigger_ref: None, @@ -274,6 +294,7 @@ mod tests { action_ref: None, enforcement: None, parent: None, + top_level_only: None, pack_name: None, rule_ref: None, trigger_ref: None, diff --git a/crates/api/src/dto/history.rs b/crates/api/src/dto/history.rs index daace23..b468297 100644 --- a/crates/api/src/dto/history.rs +++ b/crates/api/src/dto/history.rs @@ -126,7 +126,7 @@ impl HistoryQueryParams { /// Path parameter for the entity type segment. 
#[derive(Debug, Clone, Deserialize, IntoParams)] pub struct HistoryEntityTypePath { - /// Entity type: `execution`, `worker`, `enforcement`, or `event` + /// Entity type: `execution` or `worker` pub entity_type: String, } diff --git a/crates/api/src/routes/executions.rs b/crates/api/src/routes/executions.rs index 3f1924c..1b65b1e 100644 --- a/crates/api/src/routes/executions.rs +++ b/crates/api/src/routes/executions.rs @@ -168,6 +168,10 @@ pub async fn list_executions( filtered_executions.retain(|e| e.parent == Some(parent_id)); } + if query.top_level_only == Some(true) { + filtered_executions.retain(|e| e.parent.is_none()); + } + if let Some(executor_id) = query.executor { filtered_executions.retain(|e| e.executor == Some(executor_id)); } diff --git a/crates/api/src/routes/history.rs b/crates/api/src/routes/history.rs index ae963a8..eff459c 100644 --- a/crates/api/src/routes/history.rs +++ b/crates/api/src/routes/history.rs @@ -27,14 +27,14 @@ use crate::{ /// List history records for a given entity type. /// -/// Supported entity types: `execution`, `worker`, `enforcement`, `event`. +/// Supported entity types: `execution`, `worker`. /// Returns a paginated list of change records ordered by time descending. #[utoipa::path( get, path = "/api/v1/history/{entity_type}", tag = "history", params( - ("entity_type" = String, Path, description = "Entity type: execution, worker, enforcement, or event"), + ("entity_type" = String, Path, description = "Entity type: execution or worker"), HistoryQueryParams, ), responses( @@ -127,56 +127,6 @@ pub async fn get_worker_history( get_entity_history_by_id(&state, HistoryEntityType::Worker, id, query).await } -/// Get history for a specific enforcement by ID. -/// -/// Returns all change records for the given enforcement, ordered by time descending. 
-#[utoipa::path( - get, - path = "/api/v1/enforcements/{id}/history", - tag = "history", - params( - ("id" = i64, Path, description = "Enforcement ID"), - HistoryQueryParams, - ), - responses( - (status = 200, description = "History records for the enforcement", body = PaginatedResponse), - ), - security(("bearer_auth" = [])) -)] -pub async fn get_enforcement_history( - State(state): State>, - RequireAuth(_user): RequireAuth, - Path(id): Path, - Query(query): Query, -) -> ApiResult { - get_entity_history_by_id(&state, HistoryEntityType::Enforcement, id, query).await -} - -/// Get history for a specific event by ID. -/// -/// Returns all change records for the given event, ordered by time descending. -#[utoipa::path( - get, - path = "/api/v1/events/{id}/history", - tag = "history", - params( - ("id" = i64, Path, description = "Event ID"), - HistoryQueryParams, - ), - responses( - (status = 200, description = "History records for the event", body = PaginatedResponse), - ), - security(("bearer_auth" = [])) -)] -pub async fn get_event_history( - State(state): State>, - RequireAuth(_user): RequireAuth, - Path(id): Path, - Query(query): Query, -) -> ApiResult { - get_entity_history_by_id(&state, HistoryEntityType::Event, id, query).await -} - // --------------------------------------------------------------------------- // Shared helpers // --------------------------------------------------------------------------- @@ -231,8 +181,6 @@ async fn get_entity_history_by_id( /// - `GET /history/:entity_type` — generic history query /// - `GET /executions/:id/history` — execution-specific history /// - `GET /workers/:id/history` — worker-specific history (note: currently no /workers base route exists) -/// - `GET /enforcements/:id/history` — enforcement-specific history -/// - `GET /events/:id/history` — event-specific history pub fn routes() -> Router> { Router::new() // Generic history endpoint @@ -240,6 +188,4 @@ pub fn routes() -> Router> { // Entity-specific convenience 
endpoints .route("/executions/{id}/history", get(get_execution_history)) .route("/workers/{id}/history", get(get_worker_history)) - .route("/enforcements/{id}/history", get(get_enforcement_history)) - .route("/events/{id}/history", get(get_event_history)) } diff --git a/crates/api/src/routes/workflows.rs b/crates/api/src/routes/workflows.rs index 0157b5d..c200e91 100644 --- a/crates/api/src/routes/workflows.rs +++ b/crates/api/src/routes/workflows.rs @@ -601,8 +601,8 @@ async fn write_workflow_yaml( /// Create a companion action record for a workflow definition. /// /// This ensures the workflow appears in action lists and the action palette in the -/// workflow builder. The action is created with `is_workflow = true` and linked to -/// the workflow definition via the `workflow_def` FK. +/// workflow builder. The action is linked to the workflow definition via the +/// `workflow_def` FK. async fn create_companion_action( db: &sqlx::PgPool, workflow_ref: &str, @@ -643,7 +643,7 @@ async fn create_companion_action( )) })?; - // Link the action to the workflow definition (sets is_workflow = true and workflow_def) + // Link the action to the workflow definition (sets workflow_def FK) ActionRepository::link_workflow_def(db, action.id, workflow_def_id) .await .map_err(|e| { diff --git a/crates/api/src/validation/params.rs b/crates/api/src/validation/params.rs index f1e8b01..75ebf62 100644 --- a/crates/api/src/validation/params.rs +++ b/crates/api/src/validation/params.rs @@ -368,7 +368,6 @@ mod tests { runtime_version_constraint: None, param_schema: schema, out_schema: None, - is_workflow: false, workflow_def: None, is_adhoc: false, parameter_delivery: attune_common::models::ParameterDelivery::default(), diff --git a/crates/api/tests/sse_execution_stream_tests.rs b/crates/api/tests/sse_execution_stream_tests.rs index 3e5be1f..d64b5fc 100644 --- a/crates/api/tests/sse_execution_stream_tests.rs +++ b/crates/api/tests/sse_execution_stream_tests.rs @@ -120,23 +120,21 @@ async 
fn test_sse_stream_receives_execution_updates() -> Result<()> { println!("Updating execution {} to 'running' status", execution_id); // Update execution status - this should trigger PostgreSQL NOTIFY - let _ = sqlx::query( - "UPDATE execution SET status = 'running', start_time = NOW() WHERE id = $1", - ) - .bind(execution_id) - .execute(&pool_clone) - .await; + let _ = + sqlx::query("UPDATE execution SET status = 'running', updated = NOW() WHERE id = $1") + .bind(execution_id) + .execute(&pool_clone) + .await; println!("Update executed, waiting before setting to succeeded"); tokio::time::sleep(Duration::from_millis(500)).await; // Update to succeeded - let _ = sqlx::query( - "UPDATE execution SET status = 'succeeded', end_time = NOW() WHERE id = $1", - ) - .bind(execution_id) - .execute(&pool_clone) - .await; + let _ = + sqlx::query("UPDATE execution SET status = 'succeeded', updated = NOW() WHERE id = $1") + .bind(execution_id) + .execute(&pool_clone) + .await; println!("Execution {} updated to 'succeeded'", execution_id); }); diff --git a/crates/common/src/models.rs b/crates/common/src/models.rs index 39974c4..01cba9a 100644 --- a/crates/common/src/models.rs +++ b/crates/common/src/models.rs @@ -896,7 +896,6 @@ pub mod action { pub runtime_version_constraint: Option, pub param_schema: Option, pub out_schema: Option, - pub is_workflow: bool, pub workflow_def: Option, pub is_adhoc: bool, #[sqlx(default)] @@ -988,7 +987,6 @@ pub mod event { pub source: Option, pub source_ref: Option, pub created: DateTime, - pub updated: DateTime, pub rule: Option, pub rule_ref: Option, } @@ -1006,7 +1004,7 @@ pub mod event { pub condition: EnforcementCondition, pub conditions: JsonValue, pub created: DateTime, - pub updated: DateTime, + pub resolved_at: Option>, } } @@ -1484,8 +1482,6 @@ pub mod entity_history { pub enum HistoryEntityType { Execution, Worker, - Enforcement, - Event, } impl HistoryEntityType { @@ -1494,8 +1490,6 @@ pub mod entity_history { match self { 
Self::Execution => "execution_history", Self::Worker => "worker_history", - Self::Enforcement => "enforcement_history", - Self::Event => "event_history", } } } @@ -1505,8 +1499,6 @@ pub mod entity_history { match self { Self::Execution => write!(f, "execution"), Self::Worker => write!(f, "worker"), - Self::Enforcement => write!(f, "enforcement"), - Self::Event => write!(f, "event"), } } } @@ -1518,10 +1510,8 @@ pub mod entity_history { match s.to_lowercase().as_str() { "execution" => Ok(Self::Execution), "worker" => Ok(Self::Worker), - "enforcement" => Ok(Self::Enforcement), - "event" => Ok(Self::Event), other => Err(format!( - "unknown history entity type '{}'; expected one of: execution, worker, enforcement, event", + "unknown history entity type '{}'; expected one of: execution, worker", other )), } diff --git a/crates/common/src/repositories/action.rs b/crates/common/src/repositories/action.rs index b746b8b..1d87881 100644 --- a/crates/common/src/repositories/action.rs +++ b/crates/common/src/repositories/action.rs @@ -57,7 +57,7 @@ impl FindById for ActionRepository { r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated FROM action WHERE id = $1 "#, @@ -80,7 +80,7 @@ impl FindByRef for ActionRepository { r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated FROM action WHERE ref = $1 "#, @@ -103,7 +103,7 @@ impl List for ActionRepository { r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, 
workflow_def, is_adhoc, created, updated FROM action ORDER BY ref ASC "#, @@ -142,7 +142,7 @@ impl Create for ActionRepository { VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11) RETURNING id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated "#, ) .bind(&input.r#ref) @@ -256,7 +256,7 @@ impl Update for ActionRepository { query.push(", updated = NOW() WHERE id = "); query.push_bind(id); - query.push(" RETURNING id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated"); + query.push(" RETURNING id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, param_schema, out_schema, workflow_def, is_adhoc, created, updated"); let action = query .build_query_as::() @@ -296,7 +296,7 @@ impl ActionRepository { r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated FROM action WHERE pack = $1 ORDER BY ref ASC @@ -318,7 +318,7 @@ impl ActionRepository { r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated FROM action WHERE runtime = $1 ORDER BY ref ASC @@ -341,7 +341,7 @@ impl ActionRepository { r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, 
updated FROM action WHERE LOWER(ref) LIKE $1 OR LOWER(label) LIKE $1 OR LOWER(description) LIKE $1 ORDER BY ref ASC @@ -354,7 +354,7 @@ impl ActionRepository { Ok(actions) } - /// Find all workflow actions (actions where is_workflow = true) + /// Find all workflow actions (actions linked to a workflow definition) pub async fn find_workflows<'e, E>(executor: E) -> Result<Vec<Action>> where E: Executor<'e, Database = Postgres> + 'e, { sqlx::query_as::<_, Action>( r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated FROM action - WHERE is_workflow = true + WHERE workflow_def IS NOT NULL ORDER BY ref ASC "#, ) @@ -387,7 +387,7 @@ impl ActionRepository { r#" SELECT id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated FROM action WHERE workflow_def = $1 "#, @@ -411,11 +411,11 @@ impl ActionRepository { let action = sqlx::query_as::<_, Action>( r#" UPDATE action - SET is_workflow = true, workflow_def = $2, updated = NOW() + SET workflow_def = $2, updated = NOW() WHERE id = $1 RETURNING id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, - param_schema, out_schema, is_workflow, workflow_def, is_adhoc, created, updated + param_schema, out_schema, workflow_def, is_adhoc, created, updated "#, ) .bind(action_id) diff --git a/crates/common/src/repositories/analytics.rs b/crates/common/src/repositories/analytics.rs index a2bdb70..86e90e2 100644 --- a/crates/common/src/repositories/analytics.rs +++ b/crates/common/src/repositories/analytics.rs @@ -80,6 +80,19 @@ pub struct EnforcementVolumeBucket { pub enforcement_count: i64, } +/// A single hourly
bucket of execution volume (from execution hypertable directly). +#[derive(Debug, Clone, Serialize, FromRow)] +pub struct ExecutionVolumeBucket { + /// Start of the 1-hour bucket + pub bucket: DateTime<Utc>, + /// Action ref; NULL when grouped across all actions + pub action_ref: Option<String>, + /// The initial status at creation time + pub initial_status: Option<String>, + /// Number of executions created in this bucket + pub execution_count: i64, +} + /// Aggregated failure rate over a time range. #[derive(Debug, Clone, Serialize)] pub struct FailureRateSummary { @@ -454,6 +467,69 @@ impl AnalyticsRepository { Ok(rows) } + // ======================================================================= + // Execution volume (from execution hypertable directly) + // ======================================================================= + + /// Query the `execution_volume_hourly` continuous aggregate for execution + /// creation volume across all actions. + pub async fn execution_volume_hourly<'e, E>( + executor: E, + range: &AnalyticsTimeRange, + ) -> Result<Vec<ExecutionVolumeBucket>> + where + E: Executor<'e, Database = Postgres> + 'e, + { + sqlx::query_as::<_, ExecutionVolumeBucket>( + r#" + SELECT + bucket, + NULL::text AS action_ref, + initial_status::text AS initial_status, + SUM(execution_count)::bigint AS execution_count + FROM execution_volume_hourly + WHERE bucket >= $1 AND bucket <= $2 + GROUP BY bucket, initial_status + ORDER BY bucket ASC, initial_status + "#, + ) + .bind(range.since) + .bind(range.until) + .fetch_all(executor) + .await + .map_err(Into::into) + } + + /// Query the `execution_volume_hourly` continuous aggregate filtered by + /// a specific action ref.
+ pub async fn execution_volume_hourly_by_action<'e, E>( + executor: E, + range: &AnalyticsTimeRange, + action_ref: &str, + ) -> Result<Vec<ExecutionVolumeBucket>> + where + E: Executor<'e, Database = Postgres> + 'e, + { + sqlx::query_as::<_, ExecutionVolumeBucket>( + r#" + SELECT + bucket, + action_ref, + initial_status::text AS initial_status, + execution_count + FROM execution_volume_hourly + WHERE bucket >= $1 AND bucket <= $2 AND action_ref = $3 + ORDER BY bucket ASC, initial_status + "#, + ) + .bind(range.since) + .bind(range.until) + .bind(action_ref) + .fetch_all(executor) + .await + .map_err(Into::into) + } + // ======================================================================= // Derived analytics // ======================================================================= diff --git a/crates/common/src/repositories/entity_history.rs b/crates/common/src/repositories/entity_history.rs index 4cb389b..a55d25f 100644 --- a/crates/common/src/repositories/entity_history.rs +++ b/crates/common/src/repositories/entity_history.rs @@ -263,11 +263,6 @@ mod tests { "execution_history" ); assert_eq!(HistoryEntityType::Worker.table_name(), "worker_history"); - assert_eq!( - HistoryEntityType::Enforcement.table_name(), - "enforcement_history" - ); - assert_eq!(HistoryEntityType::Event.table_name(), "event_history"); } #[test] @@ -280,14 +275,8 @@ mod tests { "Worker".parse::<HistoryEntityType>().unwrap(), HistoryEntityType::Worker ); - assert_eq!( - "ENFORCEMENT".parse::<HistoryEntityType>().unwrap(), - HistoryEntityType::Enforcement - ); - assert_eq!( - "event".parse::<HistoryEntityType>().unwrap(), - HistoryEntityType::Event - ); + assert!("enforcement".parse::<HistoryEntityType>().is_err()); + assert!("event".parse::<HistoryEntityType>().is_err()); assert!("unknown".parse::<HistoryEntityType>().is_err()); } @@ -295,7 +284,5 @@ mod tests { fn test_history_entity_type_display() { assert_eq!(HistoryEntityType::Execution.to_string(), "execution"); assert_eq!(HistoryEntityType::Worker.to_string(), "worker"); - assert_eq!(HistoryEntityType::Enforcement.to_string(), "enforcement"); -
assert_eq!(HistoryEntityType::Event.to_string(), "event"); } } diff --git a/crates/common/src/repositories/event.rs b/crates/common/src/repositories/event.rs index da0fdaf..20316ad 100644 --- a/crates/common/src/repositories/event.rs +++ b/crates/common/src/repositories/event.rs @@ -1,6 +1,9 @@ //! Event and Enforcement repository for database operations //! //! This module provides CRUD operations and queries for Event and Enforcement entities. +//! Note: Events are immutable time-series data — there is no Update impl for EventRepository. + +use chrono::{DateTime, Utc}; use crate::models::{ enums::{EnforcementCondition, EnforcementStatus}, @@ -36,13 +39,6 @@ pub struct CreateEventInput { pub rule_ref: Option<String>, } -/// Input for updating an event -#[derive(Debug, Clone, Default)] -pub struct UpdateEventInput { - pub config: Option<JsonValue>, - pub payload: Option<JsonValue>, -} - #[async_trait::async_trait] impl FindById for EventRepository { async fn find_by_id<'e, E>(executor: E, id: i64) -> Result<Option<Event>> @@ -52,7 +48,7 @@ impl FindById for EventRepository { let event = sqlx::query_as::<_, Event>( r#" SELECT id, trigger, trigger_ref, config, payload, source, source_ref, - rule, rule_ref, created, updated + rule, rule_ref, created FROM event WHERE id = $1 "#, @@ -74,7 +70,7 @@ impl List for EventRepository { let events = sqlx::query_as::<_, Event>( r#" SELECT id, trigger, trigger_ref, config, payload, source, source_ref, - rule, rule_ref, created, updated + rule, rule_ref, created FROM event ORDER BY created DESC LIMIT 1000 @@ -100,7 +96,7 @@ INSERT INTO event (trigger, trigger_ref, config, payload, source, source_ref, rule, rule_ref) VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING id, trigger, trigger_ref, config, payload, source, source_ref, - rule, rule_ref, created, updated + rule, rule_ref, created "#, ) .bind(input.trigger) @@ -118,49 +114,6 @@ } } -#[async_trait::async_trait] -impl Update for EventRepository { - type
UpdateInput = UpdateEventInput; - - async fn update<'e, E>(executor: E, id: i64, input: Self::UpdateInput) -> Result<Event> - where - E: Executor<'e, Database = Postgres> + 'e, - { - // Build update query - - let mut query = QueryBuilder::new("UPDATE event SET "); - let mut has_updates = false; - - if let Some(config) = &input.config { - query.push("config = "); - query.push_bind(config); - has_updates = true; - } - - if let Some(payload) = &input.payload { - if has_updates { - query.push(", "); - } - query.push("payload = "); - query.push_bind(payload); - has_updates = true; - } - - if !has_updates { - // No updates requested, fetch and return existing entity - return Self::get_by_id(executor, id).await; - } - - query.push(", updated = NOW() WHERE id = "); - query.push_bind(id); - query.push(" RETURNING id, trigger, trigger_ref, config, payload, source, source_ref, rule, rule_ref, created, updated"); - - let event = query.build_query_as::<Event>().fetch_one(executor).await?; - - Ok(event) - } -} - #[async_trait::async_trait] impl Delete for EventRepository { async fn delete<'e, E>(executor: E, id: i64) -> Result @@ -185,7 +138,7 @@ impl EventRepository { let events = sqlx::query_as::<_, Event>( r#" SELECT id, trigger, trigger_ref, config, payload, source, source_ref, - rule, rule_ref, created, updated + rule, rule_ref, created FROM event WHERE trigger = $1 ORDER BY created DESC @@ -207,7 +160,7 @@ impl EventRepository { let events = sqlx::query_as::<_, Event>( r#" SELECT id, trigger, trigger_ref, config, payload, source, source_ref, - rule, rule_ref, created, updated + rule, rule_ref, created FROM event WHERE trigger_ref = $1 ORDER BY created DESC @@ -256,6 +209,7 @@ pub struct CreateEnforcementInput { pub struct UpdateEnforcementInput { pub status: Option<EnforcementStatus>, pub payload: Option<JsonValue>, + pub resolved_at: Option<DateTime<Utc>>, } #[async_trait::async_trait] @@ -267,7 +221,7 @@ impl FindById for EnforcementRepository { let enforcement = sqlx::query_as::<_, Enforcement>( r#" SELECT id, rule, rule_ref,
trigger_ref, config, event, status, payload, - condition, conditions, created, updated + condition, conditions, created, resolved_at FROM enforcement WHERE id = $1 "#, @@ -289,7 +243,7 @@ impl List for EnforcementRepository { let enforcements = sqlx::query_as::<_, Enforcement>( r#" SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload, - condition, conditions, created, updated + condition, conditions, created, resolved_at FROM enforcement ORDER BY created DESC LIMIT 1000 @@ -316,7 +270,7 @@ impl Create for EnforcementRepository { payload, condition, conditions) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9) RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, - condition, conditions, created, updated + condition, conditions, created, resolved_at "#, ) .bind(input.rule) @@ -363,14 +317,23 @@ impl Update for EnforcementRepository { has_updates = true; } + if let Some(resolved_at) = input.resolved_at { + if has_updates { + query.push(", "); + } + query.push("resolved_at = "); + query.push_bind(resolved_at); + has_updates = true; + } + if !has_updates { // No updates requested, fetch and return existing entity return Self::get_by_id(executor, id).await; } - query.push(", updated = NOW() WHERE id = "); + query.push(" WHERE id = "); query.push_bind(id); - query.push(" RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, condition, conditions, created, updated"); + query.push(" RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, condition, conditions, created, resolved_at"); let enforcement = query .build_query_as::() @@ -405,7 +368,7 @@ impl EnforcementRepository { let enforcements = sqlx::query_as::<_, Enforcement>( r#" SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload, - condition, conditions, created, updated + condition, conditions, created, resolved_at FROM enforcement WHERE rule = $1 ORDER BY created DESC @@ -429,7 +392,7 @@ impl EnforcementRepository { let 
enforcements = sqlx::query_as::<_, Enforcement>( r#" SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload, - condition, conditions, created, updated + condition, conditions, created, resolved_at FROM enforcement WHERE status = $1 ORDER BY created DESC @@ -450,7 +413,7 @@ impl EnforcementRepository { let enforcements = sqlx::query_as::<_, Enforcement>( r#" SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload, - condition, conditions, created, updated + condition, conditions, created, resolved_at FROM enforcement WHERE event = $1 ORDER BY created DESC diff --git a/crates/common/src/repositories/execution.rs b/crates/common/src/repositories/execution.rs index da9daac..64bab7f 100644 --- a/crates/common/src/repositories/execution.rs +++ b/crates/common/src/repositories/execution.rs @@ -6,6 +6,15 @@ use sqlx::{Executor, Postgres, QueryBuilder}; use super::{Create, Delete, FindById, List, Repository, Update}; +/// Column list for SELECT queries on the execution table. +/// +/// Defined once to avoid drift between queries and the `Execution` model. +/// The execution table has DB-only columns (`is_workflow`, `workflow_def`) that +/// are NOT in the Rust struct, so `SELECT *` must never be used. 
+pub const SELECT_COLUMNS: &str = "\ + id, action, action_ref, config, env_vars, parent, enforcement, \ + executor, status, result, workflow_task, created, updated"; + pub struct ExecutionRepository; impl Repository for ExecutionRepository { @@ -54,9 +63,12 @@ impl FindById for ExecutionRepository { where E: Executor<'e, Database = Postgres> + 'e, { - sqlx::query_as::<_, Execution>( - "SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution WHERE id = $1" - ).bind(id).fetch_optional(executor).await.map_err(Into::into) + let sql = format!("SELECT {SELECT_COLUMNS} FROM execution WHERE id = $1"); + sqlx::query_as::<_, Execution>(&sql) + .bind(id) + .fetch_optional(executor) + .await + .map_err(Into::into) } } @@ -66,9 +78,12 @@ impl List for ExecutionRepository { where E: Executor<'e, Database = Postgres> + 'e, { - sqlx::query_as::<_, Execution>( - "SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution ORDER BY created DESC LIMIT 1000" - ).fetch_all(executor).await.map_err(Into::into) + let sql = + format!("SELECT {SELECT_COLUMNS} FROM execution ORDER BY created DESC LIMIT 1000"); + sqlx::query_as::<_, Execution>(&sql) + .fetch_all(executor) + .await + .map_err(Into::into) } } @@ -79,9 +94,26 @@ impl Create for ExecutionRepository { where E: Executor<'e, Database = Postgres> + 'e, { - sqlx::query_as::<_, Execution>( - "INSERT INTO execution (action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) RETURNING id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated" - 
).bind(input.action).bind(&input.action_ref).bind(&input.config).bind(&input.env_vars).bind(input.parent).bind(input.enforcement).bind(input.executor).bind(input.status).bind(&input.result).bind(sqlx::types::Json(&input.workflow_task)).fetch_one(executor).await.map_err(Into::into) + let sql = format!( + "INSERT INTO execution \ + (action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task) \ + VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) \ + RETURNING {SELECT_COLUMNS}" + ); + sqlx::query_as::<_, Execution>(&sql) + .bind(input.action) + .bind(&input.action_ref) + .bind(&input.config) + .bind(&input.env_vars) + .bind(input.parent) + .bind(input.enforcement) + .bind(input.executor) + .bind(input.status) + .bind(&input.result) + .bind(sqlx::types::Json(&input.workflow_task)) + .fetch_one(executor) + .await + .map_err(Into::into) } } @@ -130,7 +162,8 @@ impl Update for ExecutionRepository { } query.push(", updated = NOW() WHERE id = ").push_bind(id); - query.push(" RETURNING id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated"); + query.push(" RETURNING "); + query.push(SELECT_COLUMNS); query .build_query_as::() @@ -162,9 +195,14 @@ impl ExecutionRepository { where E: Executor<'e, Database = Postgres> + 'e, { - sqlx::query_as::<_, Execution>( - "SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution WHERE status = $1 ORDER BY created DESC" - ).bind(status).fetch_all(executor).await.map_err(Into::into) + let sql = format!( + "SELECT {SELECT_COLUMNS} FROM execution WHERE status = $1 ORDER BY created DESC" + ); + sqlx::query_as::<_, Execution>(&sql) + .bind(status) + .fetch_all(executor) + .await + .map_err(Into::into) } pub async fn find_by_enforcement<'e, E>( @@ -174,8 +212,31 @@ impl ExecutionRepository { where E: Executor<'e, Database = Postgres> + 'e, { - 
sqlx::query_as::<_, Execution>( - "SELECT id, action, action_ref, config, env_vars, parent, enforcement, executor, status, result, workflow_task, created, updated FROM execution WHERE enforcement = $1 ORDER BY created DESC" - ).bind(enforcement_id).fetch_all(executor).await.map_err(Into::into) + let sql = format!( + "SELECT {SELECT_COLUMNS} FROM execution WHERE enforcement = $1 ORDER BY created DESC" + ); + sqlx::query_as::<_, Execution>(&sql) + .bind(enforcement_id) + .fetch_all(executor) + .await + .map_err(Into::into) + } + + /// Find all child executions for a given parent execution ID. + /// + /// Returns child executions ordered by creation time (ascending), + /// which is the natural task execution order for workflows. + pub async fn find_by_parent<'e, E>(executor: E, parent_id: Id) -> Result<Vec<Execution>> + where + E: Executor<'e, Database = Postgres> + 'e, + { + let sql = format!( + "SELECT {SELECT_COLUMNS} FROM execution WHERE parent = $1 ORDER BY created ASC" + ); + sqlx::query_as::<_, Execution>(&sql) + .bind(parent_id) + .fetch_all(executor) + .await + .map_err(Into::into) + } } diff --git a/crates/common/src/workflow/registrar.rs b/crates/common/src/workflow/registrar.rs index 77c2ca1..7ebe557 100644 --- a/crates/common/src/workflow/registrar.rs +++ b/crates/common/src/workflow/registrar.rs @@ -194,7 +194,7 @@ impl WorkflowRegistrar { /// /// This ensures the workflow appears in action lists and the action palette /// in the workflow builder. The action is linked to the workflow definition - /// via `is_workflow = true` and `workflow_def` FK. + /// via the `workflow_def` FK.
async fn create_companion_action( &self, workflow_def_id: i64, @@ -221,7 +221,7 @@ impl WorkflowRegistrar { let action = ActionRepository::create(&self.pool, action_input).await?; - // Link the action to the workflow definition (sets is_workflow = true and workflow_def) + // Link the action to the workflow definition (sets workflow_def FK) ActionRepository::link_workflow_def(&self.pool, action.id, workflow_def_id).await?; info!( diff --git a/crates/common/tests/enforcement_repository_tests.rs b/crates/common/tests/enforcement_repository_tests.rs index dc61a0b..9c3dfed 100644 --- a/crates/common/tests/enforcement_repository_tests.rs +++ b/crates/common/tests/enforcement_repository_tests.rs @@ -54,8 +54,8 @@ async fn test_create_enforcement_minimal() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -89,7 +89,7 @@ async fn test_create_enforcement_minimal() { assert_eq!(enforcement.condition, EnforcementCondition::All); assert_eq!(enforcement.conditions, json!([])); assert!(enforcement.created.timestamp() > 0); - assert!(enforcement.updated.timestamp() > 0); + assert_eq!(enforcement.resolved_at, None); // Not yet resolved } #[tokio::test] @@ -125,8 +125,8 @@ async fn test_create_enforcement_with_event() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -192,8 +192,8 @@ async fn test_create_enforcement_with_conditions() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -257,8 +257,8 @@ async fn 
test_create_enforcement_with_any_condition() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -333,10 +333,12 @@ async fn test_create_enforcement_with_invalid_rule_fails() { } #[tokio::test] -async fn test_create_enforcement_with_invalid_event_fails() { +async fn test_create_enforcement_with_nonexistent_event_succeeds() { let pool = create_test_pool().await.unwrap(); - // Try to create enforcement with non-existent event ID + // The enforcement.event column has no FK constraint (event is a hypertable + // and hypertables cannot be FK targets). A non-existent event ID is accepted + // as a dangling reference. let input = CreateEnforcementInput { rule: None, rule_ref: "some.rule".to_string(), @@ -351,8 +353,9 @@ async fn test_create_enforcement_with_invalid_event_fails() { let result = EnforcementRepository::create(&pool, input).await; - assert!(result.is_err()); - // Foreign key constraint violation + assert!(result.is_ok()); + let enforcement = result.unwrap(); + assert_eq!(enforcement.event, Some(99999)); } // ============================================================================ @@ -392,8 +395,8 @@ async fn test_find_enforcement_by_id() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -464,8 +467,8 @@ async fn test_get_enforcement_by_id() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -542,8 +545,8 @@ async fn test_list_enforcements() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: 
json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -613,8 +616,8 @@ async fn test_update_enforcement_status() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -628,9 +631,11 @@ async fn test_update_enforcement_status() { .await .unwrap(); + let now = chrono::Utc::now(); let input = UpdateEnforcementInput { status: Some(EnforcementStatus::Processed), payload: None, + resolved_at: Some(now), }; let updated = EnforcementRepository::update(&pool, enforcement.id, input) @@ -639,7 +644,8 @@ async fn test_update_enforcement_status() { assert_eq!(updated.id, enforcement.id); assert_eq!(updated.status, EnforcementStatus::Processed); - assert!(updated.updated > enforcement.updated); + assert!(updated.resolved_at.is_some()); + assert!(updated.resolved_at.unwrap() >= enforcement.created); } #[tokio::test] @@ -675,8 +681,8 @@ async fn test_update_enforcement_status_transitions() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -689,26 +695,30 @@ async fn test_update_enforcement_status_transitions() { .await .unwrap(); - // Test status transitions: Created -> Succeeded + // Test status transitions: Created -> Processed + let now = chrono::Utc::now(); let updated = EnforcementRepository::update( &pool, enforcement.id, UpdateEnforcementInput { status: Some(EnforcementStatus::Processed), payload: None, + resolved_at: Some(now), }, ) .await .unwrap(); assert_eq!(updated.status, EnforcementStatus::Processed); + assert!(updated.resolved_at.is_some()); - // Test status transition: Succeeded -> Failed (although 
unusual) + // Test status transition: Processed -> Disabled (although unusual) let updated = EnforcementRepository::update( &pool, enforcement.id, UpdateEnforcementInput { status: Some(EnforcementStatus::Disabled), payload: None, + resolved_at: None, }, ) .await @@ -749,8 +759,8 @@ async fn test_update_enforcement_payload() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -768,6 +778,7 @@ async fn test_update_enforcement_payload() { let input = UpdateEnforcementInput { status: None, payload: Some(new_payload.clone()), + resolved_at: None, }; let updated = EnforcementRepository::update(&pool, enforcement.id, input) @@ -810,8 +821,8 @@ async fn test_update_enforcement_both_fields() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -824,10 +835,12 @@ async fn test_update_enforcement_both_fields() { .await .unwrap(); + let now = chrono::Utc::now(); let new_payload = json!({"result": "success"}); let input = UpdateEnforcementInput { status: Some(EnforcementStatus::Processed), payload: Some(new_payload.clone()), + resolved_at: Some(now), }; let updated = EnforcementRepository::update(&pool, enforcement.id, input) @@ -871,8 +884,8 @@ async fn test_update_enforcement_no_changes() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -889,6 +902,7 @@ async fn test_update_enforcement_no_changes() { let input = UpdateEnforcementInput { status: None, payload: None, + resolved_at: None, }; let result = EnforcementRepository::update(&pool, 
enforcement.id, input) @@ -907,6 +921,7 @@ async fn test_update_enforcement_not_found() { let input = UpdateEnforcementInput { status: Some(EnforcementStatus::Processed), payload: None, + resolved_at: Some(chrono::Utc::now()), }; let result = EnforcementRepository::update(&pool, 99999, input).await; @@ -952,8 +967,8 @@ async fn test_delete_enforcement() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -1025,8 +1040,8 @@ async fn test_find_enforcements_by_rule() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -1047,8 +1062,8 @@ async fn test_find_enforcements_by_rule() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -1117,8 +1132,8 @@ async fn test_find_enforcements_by_status() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -1206,8 +1221,8 @@ async fn test_find_enforcements_by_event() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -1290,8 +1305,8 @@ async fn test_delete_rule_sets_enforcement_rule_to_null() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + 
action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -1323,7 +1338,7 @@ async fn test_delete_rule_sets_enforcement_rule_to_null() { // ============================================================================ #[tokio::test] -async fn test_enforcement_timestamps_auto_managed() { +async fn test_enforcement_resolved_at_lifecycle() { let pool = create_test_pool().await.unwrap(); let pack = PackFixture::new_unique("timestamp_pack") @@ -1355,8 +1370,8 @@ async fn test_enforcement_timestamps_auto_managed() { trigger: trigger.id, trigger_ref: trigger.r#ref.clone(), conditions: json!({}), - action_params: json!({}), - trigger_params: json!({}), + action_params: json!({}), + trigger_params: json!({}), enabled: true, is_adhoc: false, }, @@ -1369,24 +1384,23 @@ async fn test_enforcement_timestamps_auto_managed() { .await .unwrap(); - let created_time = enforcement.created; - let updated_time = enforcement.updated; - - assert!(created_time.timestamp() > 0); - assert_eq!(created_time, updated_time); - - // Update and verify timestamp changed - tokio::time::sleep(tokio::time::Duration::from_millis(10)).await; + // Initially, resolved_at is NULL + assert!(enforcement.created.timestamp() > 0); + assert_eq!(enforcement.resolved_at, None); + // Resolve the enforcement and verify resolved_at is set + let resolved_time = chrono::Utc::now(); let input = UpdateEnforcementInput { status: Some(EnforcementStatus::Processed), payload: None, + resolved_at: Some(resolved_time), }; let updated = EnforcementRepository::update(&pool, enforcement.id, input) .await .unwrap(); - assert_eq!(updated.created, created_time); // created unchanged - assert!(updated.updated > updated_time); // updated changed + assert_eq!(updated.created, enforcement.created); // created unchanged + assert!(updated.resolved_at.is_some()); + assert!(updated.resolved_at.unwrap() >= enforcement.created); } diff --git a/crates/common/tests/event_repository_tests.rs 
b/crates/common/tests/event_repository_tests.rs index 6a459f5..85cc356 100644 --- a/crates/common/tests/event_repository_tests.rs +++ b/crates/common/tests/event_repository_tests.rs @@ -2,13 +2,14 @@ //! //! These tests verify CRUD operations, queries, and constraints //! for the Event repository. +//! Note: Events are immutable time-series data — there are no update tests. mod helpers; use attune_common::{ repositories::{ - event::{CreateEventInput, EventRepository, UpdateEventInput}, - Create, Delete, FindById, List, Update, + event::{CreateEventInput, EventRepository}, + Create, Delete, FindById, List, }, Error, }; @@ -56,7 +57,6 @@ async fn test_create_event_minimal() { assert_eq!(event.source, None); assert_eq!(event.source_ref, None); assert!(event.created.timestamp() > 0); - assert!(event.updated.timestamp() > 0); } #[tokio::test] @@ -363,162 +363,6 @@ async fn test_list_events_respects_limit() { assert!(events.len() <= 1000); } -// ============================================================================ -// UPDATE Tests -// ============================================================================ - -#[tokio::test] -async fn test_update_event_config() { - let pool = create_test_pool().await.unwrap(); - - let pack = PackFixture::new_unique("update_pack") - .create(&pool) - .await - .unwrap(); - - let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook") - .create(&pool) - .await - .unwrap(); - - let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref) - .with_config(json!({"old": "config"})) - .create(&pool) - .await - .unwrap(); - - let new_config = json!({"new": "config", "updated": true}); - let input = UpdateEventInput { - config: Some(new_config.clone()), - payload: None, - }; - - let updated = EventRepository::update(&pool, event.id, input) - .await - .unwrap(); - - assert_eq!(updated.id, event.id); - assert_eq!(updated.config, Some(new_config)); - assert!(updated.updated > event.updated); -} - 
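Since events are now append-only, the repository surface shrinks to create/read/delete with no update path. As a quick illustration of that shape (a hypothetical in-memory stand-in, not the crate's actual `EventRepository`):

```rust
use std::collections::HashMap;

// Hypothetical append-only store: create, find, delete — and deliberately
// no `update` method, so immutability is enforced by omission.
#[derive(Clone, Debug, PartialEq)]
struct Event {
    id: i64,
    payload: String,
    created: u64,
}

#[derive(Default)]
struct EventStore {
    rows: HashMap<i64, Event>,
    next_id: i64,
}

impl EventStore {
    fn create(&mut self, payload: &str, now: u64) -> Event {
        self.next_id += 1;
        let ev = Event { id: self.next_id, payload: payload.to_string(), created: now };
        self.rows.insert(ev.id, ev.clone());
        ev
    }
    fn find_by_id(&self, id: i64) -> Option<&Event> {
        self.rows.get(&id)
    }
    fn delete(&mut self, id: i64) -> bool {
        self.rows.remove(&id).is_some()
    }
}

fn main() {
    let mut store = EventStore::default();
    let ev = store.create("ping", 1_700_000_000);
    assert_eq!(ev.created, 1_700_000_000);
    assert_eq!(store.find_by_id(ev.id), Some(&ev));
    assert!(store.delete(ev.id));
    assert!(store.find_by_id(ev.id).is_none());
}
```

The real repository makes the same guarantee at the trait level: dropping the `Update` impl means callers cannot even express a mutation.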
-#[tokio::test] -async fn test_update_event_payload() { - let pool = create_test_pool().await.unwrap(); - - let pack = PackFixture::new_unique("payload_update_pack") - .create(&pool) - .await - .unwrap(); - - let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook") - .create(&pool) - .await - .unwrap(); - - let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref) - .with_payload(json!({"initial": "payload"})) - .create(&pool) - .await - .unwrap(); - - let new_payload = json!({"updated": "payload", "version": 2}); - let input = UpdateEventInput { - config: None, - payload: Some(new_payload.clone()), - }; - - let updated = EventRepository::update(&pool, event.id, input) - .await - .unwrap(); - - assert_eq!(updated.payload, Some(new_payload)); - assert!(updated.updated > event.updated); -} - -#[tokio::test] -async fn test_update_event_both_fields() { - let pool = create_test_pool().await.unwrap(); - - let pack = PackFixture::new_unique("both_update_pack") - .create(&pool) - .await - .unwrap(); - - let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook") - .create(&pool) - .await - .unwrap(); - - let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref) - .create(&pool) - .await - .unwrap(); - - let new_config = json!({"setting": "value"}); - let new_payload = json!({"data": "value"}); - let input = UpdateEventInput { - config: Some(new_config.clone()), - payload: Some(new_payload.clone()), - }; - - let updated = EventRepository::update(&pool, event.id, input) - .await - .unwrap(); - - assert_eq!(updated.config, Some(new_config)); - assert_eq!(updated.payload, Some(new_payload)); -} - -#[tokio::test] -async fn test_update_event_no_changes() { - let pool = create_test_pool().await.unwrap(); - - let pack = PackFixture::new_unique("nochange_pack") - .create(&pool) - .await - .unwrap(); - - let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), 
"webhook") - .create(&pool) - .await - .unwrap(); - - let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref) - .with_payload(json!({"test": "data"})) - .create(&pool) - .await - .unwrap(); - - let input = UpdateEventInput { - config: None, - payload: None, - }; - - let result = EventRepository::update(&pool, event.id, input) - .await - .unwrap(); - - // Should return existing event without updating - assert_eq!(result.id, event.id); - assert_eq!(result.payload, event.payload); -} - -#[tokio::test] -async fn test_update_event_not_found() { - let pool = create_test_pool().await.unwrap(); - - let input = UpdateEventInput { - config: Some(json!({"test": "config"})), - payload: None, - }; - - let result = EventRepository::update(&pool, 99999, input).await; - - // When updating non-existent entity with changes, SQLx returns RowNotFound error - assert!(result.is_err()); -} - // ============================================================================ // DELETE Tests // ============================================================================ @@ -561,7 +405,7 @@ async fn test_delete_event_not_found() { } #[tokio::test] -async fn test_delete_event_sets_enforcement_event_to_null() { +async fn test_delete_event_enforcement_retains_event_id() { let pool = create_test_pool().await.unwrap(); // Create pack, trigger, action, rule, and event @@ -616,17 +460,19 @@ async fn test_delete_event_sets_enforcement_event_to_null() { .await .unwrap(); - // Delete the event - enforcement.event should be set to NULL (ON DELETE SET NULL) + // Delete the event — since the event table is a TimescaleDB hypertable, the FK + // constraint from enforcement.event was dropped (hypertables cannot be FK targets). + // The enforcement.event column retains the old ID as a dangling reference. 
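Because hypertables cannot be FK targets, any reader of `enforcement.event` must now tolerate dangling ids. A minimal sketch (pure Rust, hypothetical helper names) of classifying such a stored reference against the set of event ids that still exist:

```rust
use std::collections::HashSet;

// With no FK from enforcement.event to the event hypertable, a stored id
// may point at a row that retention (or an explicit delete) has removed.
#[derive(Debug, PartialEq)]
enum EventRef {
    Missing,        // column was NULL
    Live(i64),      // referenced event still exists
    Dangling(i64),  // id retained, but the event row is gone
}

fn resolve_event_ref(stored: Option<i64>, existing: &HashSet<i64>) -> EventRef {
    match stored {
        None => EventRef::Missing,
        Some(id) if existing.contains(&id) => EventRef::Live(id),
        Some(id) => EventRef::Dangling(id),
    }
}

fn main() {
    let existing: HashSet<i64> = [10, 11].into_iter().collect();
    assert_eq!(resolve_event_ref(Some(10), &existing), EventRef::Live(10));
    // After the event is deleted, the id is retained rather than NULLed out:
    assert_eq!(resolve_event_ref(Some(99), &existing), EventRef::Dangling(99));
    assert_eq!(resolve_event_ref(None, &existing), EventRef::Missing);
}
```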
EventRepository::delete(&pool, event.id).await.unwrap(); - // Enforcement should still exist but with NULL event + // Enforcement still exists with the original event ID (now a dangling reference) use attune_common::repositories::event::EnforcementRepository; let found_enforcement = EnforcementRepository::find_by_id(&pool, enforcement.id) .await .unwrap() .unwrap(); - assert_eq!(found_enforcement.event, None); + assert_eq!(found_enforcement.event, Some(event.id)); } // ============================================================================ @@ -756,7 +602,7 @@ async fn test_find_events_by_trigger_ref_preserves_after_trigger_deletion() { // ============================================================================ #[tokio::test] -async fn test_event_timestamps_auto_managed() { +async fn test_event_created_timestamp_auto_set() { let pool = create_test_pool().await.unwrap(); let pack = PackFixture::new_unique("timestamp_pack") @@ -774,24 +620,5 @@ async fn test_event_timestamps_auto_managed() { .await .unwrap(); - let created_time = event.created; - let updated_time = event.updated; - - assert!(created_time.timestamp() > 0); - assert_eq!(created_time, updated_time); - - // Update and verify timestamp changed - tokio::time::sleep(tokio::time::Duration::from_millis(10)).await; - - let input = UpdateEventInput { - config: Some(json!({"updated": true})), - payload: None, - }; - - let updated = EventRepository::update(&pool, event.id, input) - .await - .unwrap(); - - assert_eq!(updated.created, created_time); // created unchanged - assert!(updated.updated > updated_time); // updated changed + assert!(event.created.timestamp() > 0); } diff --git a/crates/executor/src/completion_listener.rs b/crates/executor/src/completion_listener.rs index 23b0deb..8f923af 100644 --- a/crates/executor/src/completion_listener.rs +++ b/crates/executor/src/completion_listener.rs @@ -7,6 +7,7 @@ //! - Detecting inquiry requests in execution results //! 
- Creating inquiries for human-in-the-loop workflows //! - Enabling FIFO execution ordering by notifying waiting executions +//! - Advancing workflow orchestration when child task executions complete use anyhow::Result; use attune_common::{ @@ -14,10 +15,14 @@ use attune_common::{ repositories::{execution::ExecutionRepository, FindById}, }; use sqlx::PgPool; +use std::sync::atomic::AtomicUsize; use std::sync::Arc; use tracing::{debug, error, info, warn}; -use crate::{inquiry_handler::InquiryHandler, queue_manager::ExecutionQueueManager}; +use crate::{ + inquiry_handler::InquiryHandler, queue_manager::ExecutionQueueManager, + scheduler::ExecutionScheduler, +}; /// Completion listener that handles execution completion messages pub struct CompletionListener { @@ -25,6 +30,9 @@ pub struct CompletionListener { consumer: Arc<Consumer>, publisher: Arc<Publisher>, queue_manager: Arc<ExecutionQueueManager>, + /// Round-robin counter shared with the scheduler for dispatching workflow + /// successor tasks to workers. + round_robin_counter: Arc<AtomicUsize>, } impl CompletionListener { @@ -40,6 +48,7 @@ impl CompletionListener { consumer, publisher, queue_manager, + round_robin_counter: Arc::new(AtomicUsize::new(0)), } } @@ -50,6 +59,7 @@ impl CompletionListener { let pool = self.pool.clone(); let publisher = self.publisher.clone(); let queue_manager = self.queue_manager.clone(); + let round_robin_counter = self.round_robin_counter.clone(); // Use the handler pattern to consume messages self.consumer @@ -58,12 +68,14 @@ impl CompletionListener { let pool = pool.clone(); let publisher = publisher.clone(); let queue_manager = queue_manager.clone(); + let round_robin_counter = round_robin_counter.clone(); async move { if let Err(e) = Self::process_execution_completed( &pool, &publisher, &queue_manager, + &round_robin_counter, &envelope, ) .await @@ -88,6 +100,7 @@ impl CompletionListener { pool: &PgPool, publisher: &Publisher, queue_manager: &ExecutionQueueManager, + round_robin_counter: &AtomicUsize, envelope: &MessageEnvelope, ) ->
Result<()> { debug!("Processing execution completed message: {:?}", envelope); @@ -115,6 +128,26 @@ impl CompletionListener { execution_id, exec.status ); + // Check if this execution is a workflow child task and advance the + // workflow orchestration (schedule successor tasks or complete the + // workflow). + if exec.workflow_task.is_some() { + info!( + "Execution {} is a workflow task, advancing workflow", + execution_id + ); + if let Err(e) = + ExecutionScheduler::advance_workflow(pool, publisher, round_robin_counter, exec) + .await + { + error!( + "Failed to advance workflow for execution {}: {}", + execution_id, e + ); + // Continue processing — don't fail the entire completion + } + } + // Check if execution result contains an inquiry request if let Some(result) = &exec.result { if InquiryHandler::has_inquiry_request(result) { diff --git a/crates/executor/src/enforcement_processor.rs b/crates/executor/src/enforcement_processor.rs index a594c5e..5d56d37 100644 --- a/crates/executor/src/enforcement_processor.rs +++ b/crates/executor/src/enforcement_processor.rs @@ -152,6 +152,7 @@ impl EnforcementProcessor { UpdateEnforcementInput { status: Some(EnforcementStatus::Processed), payload: None, + resolved_at: Some(chrono::Utc::now()), }, ) .await?; @@ -170,6 +171,7 @@ impl EnforcementProcessor { UpdateEnforcementInput { status: Some(EnforcementStatus::Disabled), payload: None, + resolved_at: Some(chrono::Utc::now()), }, ) .await?; @@ -356,7 +358,7 @@ mod tests { condition: attune_common::models::enums::EnforcementCondition::Any, conditions: json!({}), created: chrono::Utc::now(), - updated: chrono::Utc::now(), + resolved_at: Some(chrono::Utc::now()), }; let mut rule = Rule { diff --git a/crates/executor/src/main.rs b/crates/executor/src/main.rs index 01d7033..56d84ec 100644 --- a/crates/executor/src/main.rs +++ b/crates/executor/src/main.rs @@ -21,6 +21,7 @@ mod scheduler; mod service; mod timeout_monitor; mod worker_health; +mod workflow; use anyhow::Result; use 
attune_common::config::Config; diff --git a/crates/executor/src/scheduler.rs b/crates/executor/src/scheduler.rs index a2ea24b..9c21fd4 100644 --- a/crates/executor/src/scheduler.rs +++ b/crates/executor/src/scheduler.rs @@ -6,28 +6,66 @@ //! - Queuing executions to worker-specific queues //! - Updating execution status to Scheduled //! - Handling worker unavailability and retries +//! - Detecting workflow actions and orchestrating them via child task executions +//! - Resolving `{{ }}` template expressions in workflow task inputs +//! - Processing `publish` directives from transitions +//! - Expanding `with_items` into parallel child executions use anyhow::Result; use attune_common::{ - models::{enums::ExecutionStatus, Action, Execution}, + models::{enums::ExecutionStatus, execution::WorkflowTaskMetadata, Action, Execution}, mq::{Consumer, ExecutionRequestedPayload, MessageEnvelope, MessageType, Publisher}, repositories::{ action::ActionRepository, - execution::ExecutionRepository, + execution::{CreateExecutionInput, ExecutionRepository}, runtime::{RuntimeRepository, WorkerRepository}, - FindById, FindByRef, Update, + workflow::{ + CreateWorkflowExecutionInput, WorkflowDefinitionRepository, WorkflowExecutionRepository, + }, + Create, FindById, FindByRef, Update, }, runtime_detection::runtime_matches_filter, + workflow::WorkflowDefinition, }; use chrono::Utc; use serde::{Deserialize, Serialize}; use serde_json::Value as JsonValue; use sqlx::PgPool; +use std::collections::HashMap; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::Arc; use std::time::Duration; use tracing::{debug, error, info, warn}; +use crate::workflow::context::{TaskOutcome, WorkflowContext}; +use crate::workflow::graph::TaskGraph; + +/// Extract workflow parameters from an execution's `config` field. +/// +/// The config may be stored in two formats: +/// 1. Wrapped: `{"parameters": {"n": 5, ...}}` — used by child task executions +/// 2. 
Flat: `{"n": 5, ...}` — used by the API for manual executions +/// +/// This helper checks for a `"parameters"` key first, and if absent treats +/// the entire config object as the parameters (matching the worker's logic +/// in `ActionExecutor::prepare_execution_context`). +fn extract_workflow_params(config: &Option<JsonValue>) -> JsonValue { + match config { + Some(c) => { + // Prefer the wrapped format if present + if let Some(params) = c.get("parameters") { + params.clone() + } else if c.is_object() { + // Flat format — the config itself is the parameters + c.clone() + } else { + serde_json::json!({}) + } + } + None => serde_json::json!({}), + } +} + /// Payload for execution scheduled messages #[derive(Debug, Clone, Serialize, Deserialize)] struct ExecutionScheduledPayload { @@ -118,14 +156,30 @@ impl ExecutionScheduler { info!("Scheduling execution: {}", execution_id); // Fetch execution from database - let mut execution = ExecutionRepository::find_by_id(pool, execution_id) + let execution = ExecutionRepository::find_by_id(pool, execution_id) .await?
.ok_or_else(|| anyhow::anyhow!("Execution not found: {}", execution_id))?; // Fetch action to determine runtime requirements let action = Self::get_action_for_execution(pool, &execution).await?; - // Select appropriate worker (round-robin among compatible workers) + // Check if this action is a workflow (has workflow_def set) + if action.workflow_def.is_some() { + info!( + "Action '{}' is a workflow, orchestrating instead of dispatching to worker", + action.r#ref + ); + return Self::process_workflow_execution( + pool, + publisher, + round_robin_counter, + &execution, + &action, + ) + .await; + } + + // Regular action: select appropriate worker (round-robin among compatible workers) let worker = Self::select_worker(pool, &action, round_robin_counter).await?; info!( @@ -135,8 +189,10 @@ impl ExecutionScheduler { // Update execution status to scheduled let execution_config = execution.config.clone(); - execution.status = ExecutionStatus::Scheduled; - ExecutionRepository::update(pool, execution.id, execution.into()).await?; + let mut execution_for_update = execution; + execution_for_update.status = ExecutionStatus::Scheduled; + ExecutionRepository::update(pool, execution_for_update.id, execution_for_update.into()) + .await?; // Publish message to worker-specific queue Self::queue_to_worker( @@ -157,6 +213,1095 @@ impl ExecutionScheduler { Ok(()) } + // ----------------------------------------------------------------------- + // Workflow orchestration + // ----------------------------------------------------------------------- + + /// Handle a workflow execution by loading its definition, creating a + /// `workflow_execution` record, and dispatching the entry-point tasks as + /// child executions that workers *can* handle. 
+ async fn process_workflow_execution( + pool: &PgPool, + publisher: &Publisher, + round_robin_counter: &AtomicUsize, + execution: &Execution, + action: &Action, + ) -> Result<()> { + let workflow_def_id = action + .workflow_def + .ok_or_else(|| anyhow::anyhow!("Action '{}' has no workflow_def", action.r#ref))?; + + // Load workflow definition + let workflow_def = WorkflowDefinitionRepository::find_by_id(pool, workflow_def_id) + .await? + .ok_or_else(|| { + anyhow::anyhow!( + "Workflow definition {} not found for action '{}'", + workflow_def_id, + action.r#ref + ) + })?; + + if !workflow_def.enabled { + warn!( + "Workflow '{}' is disabled, failing execution {}", + workflow_def.r#ref, execution.id + ); + let mut fail = execution.clone(); + fail.status = ExecutionStatus::Failed; + fail.result = Some(serde_json::json!({ + "error": format!("Workflow '{}' is disabled", workflow_def.r#ref), + "succeeded": false, + })); + ExecutionRepository::update(pool, fail.id, fail.into()).await?; + return Ok(()); + } + + // Parse workflow definition JSON into the strongly-typed struct + let definition: WorkflowDefinition = + serde_json::from_value(workflow_def.definition.clone()).map_err(|e| { + anyhow::anyhow!( + "Invalid workflow definition for '{}': {}", + workflow_def.r#ref, + e + ) + })?; + + // Build the task graph to determine entry points and transitions + let graph = TaskGraph::from_workflow(&definition).map_err(|e| { + anyhow::anyhow!( + "Failed to build task graph for workflow '{}': {}", + workflow_def.r#ref, + e + ) + })?; + + let task_graph_json: JsonValue = serde_json::to_value(&graph).unwrap_or_default(); + + // Gather initial variables from the definition + let initial_vars: JsonValue = + serde_json::to_value(&definition.vars).unwrap_or_else(|_| serde_json::json!({})); + + // Create workflow_execution record + let workflow_execution = WorkflowExecutionRepository::create( + pool, + CreateWorkflowExecutionInput { + execution: execution.id, + workflow_def: 
workflow_def.id, + task_graph: task_graph_json, + variables: initial_vars, + status: ExecutionStatus::Running, + }, + ) + .await?; + + info!( + "Created workflow_execution {} for workflow '{}' (parent execution {})", + workflow_execution.id, workflow_def.r#ref, execution.id + ); + + // Mark the parent execution as Running + let mut running_exec = execution.clone(); + running_exec.status = ExecutionStatus::Running; + ExecutionRepository::update(pool, running_exec.id, running_exec.into()).await?; + + if graph.entry_points.is_empty() { + warn!( + "Workflow '{}' has no entry-point tasks, completing immediately", + workflow_def.r#ref + ); + Self::complete_workflow(pool, execution.id, workflow_execution.id, true, None).await?; + return Ok(()); + } + + // Build initial workflow context from execution parameters and + // workflow-level vars so that entry-point task inputs are rendered. + let workflow_params = extract_workflow_params(&execution.config); + let wf_ctx = WorkflowContext::new( + workflow_params, + definition + .vars + .iter() + .map(|(k, v)| { + let jv: JsonValue = + serde_json::to_value(v).unwrap_or(JsonValue::String(v.to_string())); + (k.clone(), jv) + }) + .collect(), + ); + + // For each entry-point task, create a child execution and dispatch it + for entry_task_name in &graph.entry_points { + if let Some(task_node) = graph.get_task(entry_task_name) { + Self::dispatch_workflow_task( + pool, + publisher, + round_robin_counter, + execution, + &workflow_execution.id, + task_node, + &wf_ctx, + ) + .await?; + } else { + warn!( + "Entry-point task '{}' not found in graph for workflow '{}'", + entry_task_name, workflow_def.r#ref + ); + } + } + + Ok(()) + } + + /// Create a child execution for a single workflow task and dispatch it to + /// a worker. The child execution references the parent workflow execution + /// via `workflow_task` metadata. 
+ async fn dispatch_workflow_task( + pool: &PgPool, + publisher: &Publisher, + _round_robin_counter: &AtomicUsize, + parent_execution: &Execution, + workflow_execution_id: &i64, + task_node: &crate::workflow::graph::TaskNode, + wf_ctx: &WorkflowContext, + ) -> Result<()> { + let action_ref: String = match &task_node.action { + Some(a) => a.clone(), + None => { + warn!( + "Workflow task '{}' has no action reference, skipping", + task_node.name + ); + return Ok(()); + } + }; + + // Resolve the task's action from the database + let task_action = ActionRepository::find_by_ref(pool, &action_ref).await?; + let task_action = match task_action { + Some(a) => a, + None => { + error!( + "Action '{}' not found for workflow task '{}'", + action_ref, task_node.name + ); + return Err(anyhow::anyhow!( + "Action '{}' not found for workflow task '{}'", + action_ref, + task_node.name + )); + } + }; + + // ----------------------------------------------------------------- + // with_items expansion: if the task declares `with_items`, resolve + // the list expression and create one child execution per item. All + // rows are created up front; only the first `concurrency` items are + // published immediately and the rest are deferred until earlier + // items complete (see `dispatch_with_items_task`).
+ // ----------------------------------------------------------------- + if let Some(ref with_items_expr) = task_node.with_items { + return Self::dispatch_with_items_task( + pool, + publisher, + parent_execution, + workflow_execution_id, + task_node, + &task_action, + &action_ref, + with_items_expr, + wf_ctx, + ) + .await; + } + + // ----------------------------------------------------------------- + // Render task input templates through the WorkflowContext + // ----------------------------------------------------------------- + let rendered_input = + if task_node.input.is_object() && !task_node.input.as_object().unwrap().is_empty() { + match wf_ctx.render_json(&task_node.input) { + Ok(rendered) => rendered, + Err(e) => { + warn!( + "Template rendering failed for task '{}': {}. Using raw input.", + task_node.name, e + ); + task_node.input.clone() + } + } + } else { + task_node.input.clone() + }; + + // Build task config from the (rendered) input + let task_config: Option<JsonValue> = + if rendered_input.is_object() && !rendered_input.as_object().unwrap().is_empty() { + Some(serde_json::json!({ + "parameters": rendered_input + })) + } else if let Some(parent_config) = &parent_execution.config { + Some(parent_config.clone()) + } else { + None + }; + + // Build workflow task metadata + let workflow_task = WorkflowTaskMetadata { + workflow_execution: *workflow_execution_id, + task_name: task_node.name.clone(), + task_index: None, + task_batch: None, + retry_count: 0, + max_retries: task_node + .retry + .as_ref() + .map(|r| r.count as i32) + .unwrap_or(0), + next_retry_at: None, + timeout_seconds: task_node.timeout.map(|t| t as i32), + timed_out: false, + duration_ms: None, + started_at: None, + completed_at: None, + }; + + // Create child execution record + let child_execution = ExecutionRepository::create( + pool, + CreateExecutionInput { + action: Some(task_action.id), + action_ref: action_ref.clone(), + config: task_config, + env_vars: parent_execution.env_vars.clone(),
parent: Some(parent_execution.id), + enforcement: parent_execution.enforcement, + executor: None, + status: ExecutionStatus::Requested, + result: None, + workflow_task: Some(workflow_task), + }, + ) + .await?; + + info!( + "Created child execution {} for workflow task '{}' (action '{}', workflow_execution {})", + child_execution.id, task_node.name, action_ref, workflow_execution_id + ); + + // If the task's action is itself a workflow, the recursive + // `process_execution_requested` call will detect that and orchestrate + // it in turn. For regular actions it will be dispatched to a worker. + let payload = ExecutionRequestedPayload { + execution_id: child_execution.id, + action_id: Some(task_action.id), + action_ref: action_ref.clone(), + parent_id: Some(parent_execution.id), + enforcement_id: parent_execution.enforcement, + config: child_execution.config.clone(), + }; + + let envelope = MessageEnvelope::new(MessageType::ExecutionRequested, payload) + .with_source("executor-scheduler"); + + publisher.publish_envelope(&envelope).await?; + + info!( + "Published ExecutionRequested for child execution {} (task '{}')", + child_execution.id, task_node.name + ); + + Ok(()) + } + + /// Expand a `with_items` task into child executions. + /// + /// The `with_items` expression (e.g. `"{{ number_list }}"`) is resolved + /// via the workflow context to produce a JSON array. ALL child execution + /// records are created in the database up front so that the sibling-count + /// query in [`advance_workflow`] sees the complete set. + /// + /// When a `concurrency` limit is set on the task, only the first + /// `concurrency` items are published to the message queue. The remaining + /// children stay at `Requested` status in the database. As each item + /// completes, [`advance_workflow`] publishes the next `Requested` sibling + /// to keep the concurrency window full. 
+ #[allow(clippy::too_many_arguments)] + async fn dispatch_with_items_task( + pool: &PgPool, + publisher: &Publisher, + parent_execution: &Execution, + workflow_execution_id: &i64, + task_node: &crate::workflow::graph::TaskNode, + task_action: &Action, + action_ref: &str, + with_items_expr: &str, + wf_ctx: &WorkflowContext, + ) -> Result<()> { + // Resolve the with_items expression to a JSON array + let items_value = wf_ctx + .render_json(&JsonValue::String(with_items_expr.to_string())) + .map_err(|e| { + anyhow::anyhow!( + "Failed to resolve with_items expression '{}' for task '{}': {}", + with_items_expr, + task_node.name, + e + ) + })?; + + let items = match items_value.as_array() { + Some(arr) => arr.clone(), + None => { + warn!( + "with_items for task '{}' resolved to non-array value: {:?}. \ + Wrapping in single-element array.", + task_node.name, items_value + ); + vec![items_value] + } + }; + + let total = items.len(); + let concurrency_limit = task_node.concurrency.unwrap_or(total); + let dispatch_count = total.min(concurrency_limit); + + info!( + "Expanding with_items for task '{}': {} items (concurrency: {}, dispatching first {})", + task_node.name, total, concurrency_limit, dispatch_count + ); + + // Phase 1: Create ALL child execution records in the database. + // Each row captures the fully-rendered input so we never need to + // re-render templates later when publishing deferred items. + let mut child_ids: Vec<i64> = Vec::with_capacity(total); + + for (index, item) in items.iter().enumerate() { + let mut item_ctx = wf_ctx.clone(); + item_ctx.set_current_item(item.clone(), index); + + let rendered_input = if task_node.input.is_object() + && !task_node.input.as_object().unwrap().is_empty() + { + match item_ctx.render_json(&task_node.input) { + Ok(rendered) => rendered, + Err(e) => { + warn!( + "Template rendering failed for task '{}' item {}: {}.
Using raw input.", + task_node.name, index, e + ); + task_node.input.clone() + } + } + } else { + task_node.input.clone() + }; + + let task_config: Option<JsonValue> = + if rendered_input.is_object() && !rendered_input.as_object().unwrap().is_empty() { + Some(serde_json::json!({ "parameters": rendered_input })) + } else if let Some(parent_config) = &parent_execution.config { + Some(parent_config.clone()) + } else { + None + }; + + let workflow_task = WorkflowTaskMetadata { + workflow_execution: *workflow_execution_id, + task_name: task_node.name.clone(), + task_index: Some(index as i32), + task_batch: None, + retry_count: 0, + max_retries: task_node + .retry + .as_ref() + .map(|r| r.count as i32) + .unwrap_or(0), + next_retry_at: None, + timeout_seconds: task_node.timeout.map(|t| t as i32), + timed_out: false, + duration_ms: None, + started_at: None, + completed_at: None, + }; + + let child_execution = ExecutionRepository::create( + pool, + CreateExecutionInput { + action: Some(task_action.id), + action_ref: action_ref.to_string(), + config: task_config, + env_vars: parent_execution.env_vars.clone(), + parent: Some(parent_execution.id), + enforcement: parent_execution.enforcement, + executor: None, + status: ExecutionStatus::Requested, + result: None, + workflow_task: Some(workflow_task), + }, + ) + .await?; + + info!( + "Created with_items child execution {} for task '{}' item {} \ + (action '{}', workflow_execution {})", + child_execution.id, task_node.name, index, action_ref, workflow_execution_id + ); + + child_ids.push(child_execution.id); + } + + // Phase 2: Publish only the first `dispatch_count` to the MQ. + // The rest stay at Requested status until advance_workflow picks + // them up as earlier items complete.
+ for &child_id in child_ids.iter().take(dispatch_count) { + Self::publish_execution_requested( + pool, + publisher, + child_id, + task_action.id, + action_ref, + parent_execution, + ) + .await?; + } + + info!( + "Dispatched {} of {} with_items child executions for task '{}'", + dispatch_count, total, task_node.name + ); + + Ok(()) + } + + /// Publish an `ExecutionRequested` message for an existing execution row. + /// + /// Used to MQ-publish child executions that were created in the database + /// but not yet dispatched (deferred by concurrency limiting). + async fn publish_execution_requested( + pool: &PgPool, + publisher: &Publisher, + execution_id: i64, + action_id: i64, + action_ref: &str, + parent_execution: &Execution, + ) -> Result<()> { + let child = ExecutionRepository::find_by_id(pool, execution_id) + .await? + .ok_or_else(|| anyhow::anyhow!("Execution {} not found", execution_id))?; + + let payload = ExecutionRequestedPayload { + execution_id: child.id, + action_id: Some(action_id), + action_ref: action_ref.to_string(), + parent_id: Some(parent_execution.id), + enforcement_id: parent_execution.enforcement, + config: child.config.clone(), + }; + + let envelope = MessageEnvelope::new(MessageType::ExecutionRequested, payload) + .with_source("executor-scheduler"); + + publisher.publish_envelope(&envelope).await?; + + debug!( + "Published deferred ExecutionRequested for child execution {}", + execution_id + ); + + Ok(()) + } + + /// Publish the next `Requested`-status with_items siblings to fill freed + /// concurrency slots. + /// + /// When a with_items child completes, this method queries for siblings + /// that are still at `Requested` status (created in DB but never + /// published to MQ) and publishes enough of them to restore the + /// concurrency window. + /// + /// Returns the number of items dispatched. 
+    async fn publish_pending_with_items_children(
+        pool: &PgPool,
+        publisher: &Publisher,
+        parent_execution: &Execution,
+        workflow_execution_id: i64,
+        task_name: &str,
+        slots: usize,
+    ) -> Result<usize> {
+        if slots == 0 {
+            return Ok(0);
+        }
+
+        // Find siblings still at Requested status, ordered by task_index.
+        let pending_rows: Vec<(i64, i64)> = sqlx::query_as(
+            "SELECT id, COALESCE(action, 0) as action_id \
+             FROM execution \
+             WHERE workflow_task->>'workflow_execution' = $1::text \
+               AND workflow_task->>'task_name' = $2 \
+               AND status = 'requested' \
+             ORDER BY (workflow_task->>'task_index')::int ASC \
+             LIMIT $3",
+        )
+        .bind(workflow_execution_id.to_string())
+        .bind(task_name)
+        .bind(slots as i64)
+        .fetch_all(pool)
+        .await?;
+
+        let mut dispatched = 0usize;
+        for (child_id, action_id) in &pending_rows {
+            // Read action_ref from the execution row
+            let child = match ExecutionRepository::find_by_id(pool, *child_id).await? {
+                Some(c) => c,
+                None => continue,
+            };
+
+            if let Err(e) = Self::publish_execution_requested(
+                pool,
+                publisher,
+                *child_id,
+                *action_id,
+                &child.action_ref,
+                parent_execution,
+            )
+            .await
+            {
+                error!(
+                    "Failed to publish pending with_items child {}: {}",
+                    child_id, e
+                );
+            } else {
+                dispatched += 1;
+            }
+        }
+
+        if dispatched > 0 {
+            info!(
+                "Published {} pending with_items children for task '{}' \
+                 (workflow_execution {})",
+                dispatched, task_name, workflow_execution_id
+            );
+        }
+
+        Ok(dispatched)
+    }
+
+    /// Advance a workflow after a child task completes. Called from the
+    /// completion listener when it detects that the completed execution has
+    /// `workflow_task` metadata.
+    ///
+    /// This evaluates transitions from the completed task, schedules successor
+    /// tasks, and completes the workflow when all tasks are done.
+ pub async fn advance_workflow( + pool: &PgPool, + publisher: &Publisher, + round_robin_counter: &AtomicUsize, + execution: &Execution, + ) -> Result<()> { + let workflow_task = match &execution.workflow_task { + Some(wt) => wt, + None => return Ok(()), // Not a workflow task, nothing to do + }; + + let workflow_execution_id = workflow_task.workflow_execution; + let task_name = &workflow_task.task_name; + let task_succeeded = execution.status == ExecutionStatus::Completed; + let task_timed_out = execution.status == ExecutionStatus::Timeout; + + let task_outcome = if task_succeeded { + TaskOutcome::Succeeded + } else if task_timed_out { + TaskOutcome::TimedOut + } else { + TaskOutcome::Failed + }; + + info!( + "Advancing workflow_execution {} after task '{}' {:?} (execution {})", + workflow_execution_id, task_name, task_outcome, execution.id, + ); + + // Load the workflow execution record + let workflow_execution = + WorkflowExecutionRepository::find_by_id(pool, workflow_execution_id) + .await? 
+            .ok_or_else(|| {
+                anyhow::anyhow!("Workflow execution {} not found", workflow_execution_id)
+            })?;
+
+        // Already in a terminal state — nothing to do
+        if matches!(
+            workflow_execution.status,
+            ExecutionStatus::Completed | ExecutionStatus::Failed | ExecutionStatus::Cancelled
+        ) {
+            debug!(
+                "Workflow execution {} already in terminal state {:?}, skipping advance",
+                workflow_execution_id, workflow_execution.status
+            );
+            return Ok(());
+        }
+
+        // Rebuild the task graph from the stored JSON
+        let graph: TaskGraph = serde_json::from_value(workflow_execution.task_graph.clone())
+            .map_err(|e| {
+                anyhow::anyhow!(
+                    "Failed to deserialize task graph for workflow_execution {}: {}",
+                    workflow_execution_id,
+                    e
+                )
+            })?;
+
+        // Update completed/failed task lists
+        let mut completed_tasks: Vec<String> = workflow_execution.completed_tasks.clone();
+        let mut failed_tasks: Vec<String> = workflow_execution.failed_tasks.clone();
+
+        // For with_items tasks, only mark completed/failed when ALL items
+        // for this task are done (no more running children with the same
+        // task_name).
+        let is_with_items = workflow_task.task_index.is_some();
+        if is_with_items {
+            // ---------------------------------------------------------
+            // Concurrency: publish next Requested-status sibling(s) to
+            // fill the slot freed by this completion.
+            // ---------------------------------------------------------
+            let parent_for_pending =
+                ExecutionRepository::find_by_id(pool, workflow_execution.execution)
+                    .await?
+                    .ok_or_else(|| {
+                        anyhow::anyhow!(
+                            "Parent execution {} not found for workflow_execution {}",
+                            workflow_execution.execution,
+                            workflow_execution_id
+                        )
+                    })?;
+
+            // Count siblings that are actively in-flight (Scheduling,
+            // Scheduled, or Running — NOT Requested, which means "created
+            // but not yet published to MQ").
+ let in_flight_count: (i64,) = sqlx::query_as( + "SELECT COUNT(*) \ + FROM execution \ + WHERE workflow_task->>'workflow_execution' = $1::text \ + AND workflow_task->>'task_name' = $2 \ + AND status IN ('scheduling', 'scheduled', 'running') \ + AND id != $3", + ) + .bind(workflow_execution_id.to_string()) + .bind(task_name) + .bind(execution.id) + .fetch_one(pool) + .await?; + + // Determine the concurrency limit from the task graph + let concurrency_limit = graph + .get_task(task_name) + .and_then(|n| n.concurrency) + .unwrap_or(usize::MAX); + + let free_slots = + concurrency_limit.saturating_sub(in_flight_count.0 as usize); + + if free_slots > 0 { + if let Err(e) = Self::publish_pending_with_items_children( + pool, + publisher, + &parent_for_pending, + workflow_execution_id, + task_name, + free_slots, + ) + .await + { + error!( + "Failed to publish pending with_items for task '{}': {}", + task_name, e + ); + } + } + + // Count how many siblings are NOT in a terminal state + // (Requested items are pending, in-flight items are working). 
+ let siblings_remaining: Vec<(String,)> = sqlx::query_as( + "SELECT workflow_task->>'task_name' as task_name \ + FROM execution \ + WHERE workflow_task->>'workflow_execution' = $1::text \ + AND workflow_task->>'task_name' = $2 \ + AND status NOT IN ('completed', 'failed', 'timeout', 'cancelled') \ + AND id != $3", + ) + .bind(workflow_execution_id.to_string()) + .bind(task_name) + .bind(execution.id) + .fetch_all(pool) + .await?; + + if !siblings_remaining.is_empty() { + debug!( + "with_items task '{}' item {} done, but {} siblings remaining — \ + not advancing yet", + task_name, + workflow_task.task_index.unwrap_or(-1), + siblings_remaining.len(), + ); + return Ok(()); + } + + // All items done — check if any failed + let any_failed: Vec<(i64,)> = sqlx::query_as( + "SELECT id \ + FROM execution \ + WHERE workflow_task->>'workflow_execution' = $1::text \ + AND workflow_task->>'task_name' = $2 \ + AND status IN ('failed', 'timeout') \ + LIMIT 1", + ) + .bind(workflow_execution_id.to_string()) + .bind(task_name) + .fetch_all(pool) + .await?; + + if any_failed.is_empty() { + if !completed_tasks.contains(task_name) { + completed_tasks.push(task_name.clone()); + } + } else if !failed_tasks.contains(task_name) { + failed_tasks.push(task_name.clone()); + } + } else { + // Normal (non-with_items) task + if task_succeeded { + if !completed_tasks.contains(task_name) { + completed_tasks.push(task_name.clone()); + } + } else if !failed_tasks.contains(task_name) { + failed_tasks.push(task_name.clone()); + } + } + + // Load the parent execution for context + let parent_execution = ExecutionRepository::find_by_id(pool, workflow_execution.execution) + .await? 
+            .ok_or_else(|| {
+                anyhow::anyhow!(
+                    "Parent execution {} not found for workflow_execution {}",
+                    workflow_execution.execution,
+                    workflow_execution_id
+                )
+            })?;
+
+        // -----------------------------------------------------------------
+        // Rebuild the WorkflowContext from persisted state + completed task
+        // results so that successor task inputs can be rendered.
+        // -----------------------------------------------------------------
+        let workflow_params = extract_workflow_params(&parent_execution.config);
+
+        // Collect results from all completed children of this workflow
+        let child_executions =
+            ExecutionRepository::find_by_parent(pool, parent_execution.id).await?;
+        let mut task_results_map: HashMap<String, serde_json::Value> = HashMap::new();
+        for child in &child_executions {
+            if let Some(ref wt) = child.workflow_task {
+                if wt.workflow_execution == workflow_execution_id {
+                    if matches!(
+                        child.status,
+                        ExecutionStatus::Completed
+                            | ExecutionStatus::Failed
+                            | ExecutionStatus::Timeout
+                    ) {
+                        let result_val = child.result.clone().unwrap_or(serde_json::json!({}));
+                        task_results_map.insert(wt.task_name.clone(), result_val);
+                    }
+                }
+            }
+        }
+
+        let mut wf_ctx = WorkflowContext::rebuild(
+            workflow_params,
+            &workflow_execution.variables,
+            task_results_map,
+        );
+
+        // Set the just-completed task's outcome so that `result()`,
+        // `succeeded()`, `failed()` resolve correctly for publish and
+        // transition conditions.
+        let completed_result = execution.result.clone().unwrap_or(serde_json::json!({}));
+        wf_ctx.set_last_task_outcome(completed_result, task_outcome);
+
+        // -----------------------------------------------------------------
+        // Process transitions: evaluate conditions, process publish
+        // directives, collect successor tasks.
+        // -----------------------------------------------------------------
+        let mut tasks_to_schedule: Vec<String> = Vec::new();
+
+        if let Some(completed_task_node) = graph.get_task(task_name) {
+            for transition in &completed_task_node.transitions {
+                let should_fire = match transition.kind() {
+                    crate::workflow::graph::TransitionKind::Succeeded => task_succeeded,
+                    crate::workflow::graph::TransitionKind::Failed => {
+                        !task_succeeded && !task_timed_out
+                    }
+                    crate::workflow::graph::TransitionKind::Always => true,
+                    crate::workflow::graph::TransitionKind::TimedOut => task_timed_out,
+                    crate::workflow::graph::TransitionKind::Custom => {
+                        // Try to evaluate via the workflow context
+                        if let Some(ref when_expr) = transition.when {
+                            match wf_ctx.evaluate_condition(when_expr) {
+                                Ok(val) => val,
+                                Err(e) => {
+                                    warn!(
+                                        "Custom condition '{}' evaluation failed: {}. \
+                                         Defaulting to fire-on-success.",
+                                        when_expr, e
+                                    );
+                                    task_succeeded
+                                }
+                            }
+                        } else {
+                            task_succeeded
+                        }
+                    }
+                };
+
+                if should_fire {
+                    // Process publish directives from this transition
+                    if !transition.publish.is_empty() {
+                        let publish_map: HashMap<String, String> = transition
+                            .publish
+                            .iter()
+                            .map(|p| (p.name.clone(), p.expression.clone()))
+                            .collect();
+                        if let Err(e) = wf_ctx.publish_from_result(
+                            &serde_json::json!({}),
+                            &[],
+                            Some(&publish_map),
+                        ) {
+                            warn!("Failed to process publish for task '{}': {}", task_name, e);
+                        } else {
+                            debug!(
+                                "Published {} variables from task '{}' transition",
+                                publish_map.len(),
+                                task_name
+                            );
+                        }
+                    }
+
+                    for next_task_name in &transition.do_tasks {
+                        // Skip tasks that are already completed or failed
+                        if completed_tasks.contains(next_task_name)
+                            || failed_tasks.contains(next_task_name)
+                        {
+                            debug!(
+                                "Skipping task '{}' — already completed or failed",
+                                next_task_name
+                            );
+                            continue;
+                        }
+
+                        // Check join barrier: if the task has a `join` count,
+                        // only schedule it when enough predecessors are done.
+                        if let Some(next_node) = graph.get_task(next_task_name) {
+                            if let Some(join_count) = next_node.join {
+                                let inbound_completed = next_node
+                                    .inbound_tasks
+                                    .iter()
+                                    .filter(|t| completed_tasks.contains(*t))
+                                    .count();
+                                if inbound_completed < join_count {
+                                    debug!(
+                                        "Task '{}' join barrier not met ({}/{} predecessors done)",
+                                        next_task_name, inbound_completed, join_count
+                                    );
+                                    continue;
+                                }
+                            }
+                        }
+
+                        if !tasks_to_schedule.contains(next_task_name) {
+                            tasks_to_schedule.push(next_task_name.clone());
+                        }
+                    }
+                }
+            }
+        }
+
+        // Check if any tasks are still running (children of this workflow
+        // that haven't completed yet). We query child executions that have
+        // workflow_task metadata pointing to our workflow_execution.
+        let running_children = Self::count_running_workflow_children(
+            pool,
+            workflow_execution_id,
+            &completed_tasks,
+            &failed_tasks,
+        )
+        .await?;
+
+        // Dispatch successor tasks, passing the updated workflow context
+        for next_task_name in &tasks_to_schedule {
+            if let Some(task_node) = graph.get_task(next_task_name) {
+                if let Err(e) = Self::dispatch_workflow_task(
+                    pool,
+                    publisher,
+                    round_robin_counter,
+                    &parent_execution,
+                    &workflow_execution_id,
+                    task_node,
+                    &wf_ctx,
+                )
+                .await
+                {
+                    error!(
+                        "Failed to dispatch workflow task '{}': {}",
+                        next_task_name, e
+                    );
+                    if !failed_tasks.contains(next_task_name) {
+                        failed_tasks.push(next_task_name.clone());
+                    }
+                }
+            }
+        }
+
+        // Determine current executing tasks (for the workflow_execution record)
+        let current_tasks: Vec<String> = tasks_to_schedule.clone();
+
+        // Persist updated workflow variables (from publish directives) and
+        // completed/failed task lists.
+        let updated_variables = wf_ctx.export_variables();
+        WorkflowExecutionRepository::update(
+            pool,
+            workflow_execution_id,
+            attune_common::repositories::workflow::UpdateWorkflowExecutionInput {
+                current_tasks: Some(current_tasks),
+                completed_tasks: Some(completed_tasks.clone()),
+                failed_tasks: Some(failed_tasks.clone()),
+                skipped_tasks: None,
+                variables: Some(updated_variables),
+                status: None, // Updated below if terminal
+                error_message: None,
+                paused: None,
+                pause_reason: None,
+            },
+        )
+        .await?;
+
+        // Check if workflow is complete: no more tasks to schedule and no
+        // children still running (excluding the ones we just scheduled).
+        let all_done = tasks_to_schedule.is_empty() && running_children == 0;
+
+        if all_done {
+            let has_failures = !failed_tasks.is_empty();
+            let error_msg = if has_failures {
+                Some(format!(
+                    "Workflow failed: {} task(s) failed: {}",
+                    failed_tasks.len(),
+                    failed_tasks.join(", ")
+                ))
+            } else {
+                None
+            };
+            Self::complete_workflow(
+                pool,
+                parent_execution.id,
+                workflow_execution_id,
+                !has_failures,
+                error_msg.as_deref(),
+            )
+            .await?;
+        }
+
+        Ok(())
+    }
+
+    /// Count child executions that are still in progress for a workflow.
+    async fn count_running_workflow_children(
+        pool: &PgPool,
+        workflow_execution_id: i64,
+        completed_tasks: &[String],
+        failed_tasks: &[String],
+    ) -> Result<usize> {
+        // Query child executions that reference this workflow_execution and
+        // are not yet in a terminal state. We use the workflow_task JSONB
+        // field to filter.
+ let rows: Vec<(String,)> = sqlx::query_as( + "SELECT workflow_task->>'task_name' as task_name \ + FROM execution \ + WHERE workflow_task->>'workflow_execution' = $1::text \ + AND status NOT IN ('completed', 'failed', 'timeout', 'cancelled')", + ) + .bind(workflow_execution_id.to_string()) + .fetch_all(pool) + .await?; + + let count = rows + .iter() + .filter(|(tn,)| !completed_tasks.contains(tn) && !failed_tasks.contains(tn)) + .count(); + + Ok(count) + } + + /// Mark a workflow as completed (success or failure) and update both the + /// `workflow_execution` and parent `execution` records. + async fn complete_workflow( + pool: &PgPool, + parent_execution_id: i64, + workflow_execution_id: i64, + success: bool, + error_message: Option<&str>, + ) -> Result<()> { + let status = if success { + ExecutionStatus::Completed + } else { + ExecutionStatus::Failed + }; + + info!( + "Completing workflow_execution {} with status {:?} (parent execution {})", + workflow_execution_id, status, parent_execution_id + ); + + // Update workflow_execution status + WorkflowExecutionRepository::update( + pool, + workflow_execution_id, + attune_common::repositories::workflow::UpdateWorkflowExecutionInput { + current_tasks: Some(vec![]), + completed_tasks: None, + failed_tasks: None, + skipped_tasks: None, + variables: None, + status: Some(status), + error_message: error_message.map(|s| s.to_string()), + paused: None, + pause_reason: None, + }, + ) + .await?; + + // Update parent execution + let parent = ExecutionRepository::find_by_id(pool, parent_execution_id).await?; + if let Some(mut parent) = parent { + parent.status = status; + parent.result = if !success { + Some(serde_json::json!({ + "error": error_message.unwrap_or("Workflow failed"), + "succeeded": false, + })) + } else { + Some(serde_json::json!({ + "succeeded": true, + })) + }; + ExecutionRepository::update(pool, parent.id, parent.into()).await?; + } + + Ok(()) + } + + // 
-----------------------------------------------------------------------
+    // Regular action scheduling helpers
+    // -----------------------------------------------------------------------
+
     /// Get the action associated with an execution
     async fn get_action_for_execution(pool: &PgPool, execution: &Execution) -> Result<Action> {
         // Try to get action by ID first
@@ -464,4 +1609,98 @@ mod tests {
         // Real tests will require database and message queue setup
         assert!(true);
     }
+
+    #[test]
+    fn test_concurrency_limit_dispatch_count() {
+        // Verify the dispatch_count calculation used by dispatch_with_items_task
+        let total = 20usize;
+        let concurrency: Option<usize> = Some(3);
+        let concurrency_limit = concurrency.unwrap_or(total);
+        let dispatch_count = total.min(concurrency_limit);
+        assert_eq!(dispatch_count, 3);
+
+        // No concurrency limit → dispatch all
+        let concurrency: Option<usize> = None;
+        let concurrency_limit = concurrency.unwrap_or(total);
+        let dispatch_count = total.min(concurrency_limit);
+        assert_eq!(dispatch_count, 20);
+
+        // Concurrency exceeds total → dispatch all
+        let concurrency: Option<usize> = Some(50);
+        let concurrency_limit = concurrency.unwrap_or(total);
+        let dispatch_count = total.min(concurrency_limit);
+        assert_eq!(dispatch_count, 20);
+    }
+
+    #[test]
+    fn test_free_slots_calculation() {
+        // Simulates the free-slots logic in advance_workflow
+        let concurrency_limit = 3usize;
+
+        // 2 in-flight → 1 free slot
+        let in_flight = 2usize;
+        let free = concurrency_limit.saturating_sub(in_flight);
+        assert_eq!(free, 1);
+
+        // 0 in-flight → 3 free slots
+        let in_flight = 0usize;
+        let free = concurrency_limit.saturating_sub(in_flight);
+        assert_eq!(free, 3);
+
+        // 3 in-flight → 0 free slots
+        let in_flight = 3usize;
+        let free = concurrency_limit.saturating_sub(in_flight);
+        assert_eq!(free, 0);
+    }
+
+    #[test]
+    fn test_extract_workflow_params_wrapped_format() {
+        // Child task executions store config as {"parameters": {...}}
+        let config = Some(serde_json::json!({
"parameters": {"n": 5, "name": "test"} + })); + let params = extract_workflow_params(&config); + assert_eq!(params, serde_json::json!({"n": 5, "name": "test"})); + } + + #[test] + fn test_extract_workflow_params_flat_format() { + // API manual executions store config as flat {"n": 5, ...} + let config = Some(serde_json::json!({"n": 5, "name": "test"})); + let params = extract_workflow_params(&config); + assert_eq!(params, serde_json::json!({"n": 5, "name": "test"})); + } + + #[test] + fn test_extract_workflow_params_none() { + let params = extract_workflow_params(&None); + assert_eq!(params, serde_json::json!({})); + } + + #[test] + fn test_extract_workflow_params_non_object() { + // Edge case: config is a non-object JSON value + let config = Some(serde_json::json!("not an object")); + let params = extract_workflow_params(&config); + assert_eq!(params, serde_json::json!({})); + } + + #[test] + fn test_extract_workflow_params_empty_object() { + let config = Some(serde_json::json!({})); + let params = extract_workflow_params(&config); + assert_eq!(params, serde_json::json!({})); + } + + #[test] + fn test_extract_workflow_params_wrapped_takes_precedence() { + // If config has a "parameters" key, that value is used even if + // the config object also has other top-level keys + let config = Some(serde_json::json!({ + "parameters": {"n": 5}, + "context": {"rule": "test"} + })); + let params = extract_workflow_params(&config); + assert_eq!(params, serde_json::json!({"n": 5})); + } } diff --git a/crates/executor/src/timeout_monitor.rs b/crates/executor/src/timeout_monitor.rs index d5b0200..4760e13 100644 --- a/crates/executor/src/timeout_monitor.rs +++ b/crates/executor/src/timeout_monitor.rs @@ -12,6 +12,7 @@ use anyhow::Result; use attune_common::{ models::{enums::ExecutionStatus, Execution}, mq::{MessageEnvelope, MessageType, Publisher}, + repositories::execution::SELECT_COLUMNS as EXECUTION_COLUMNS, }; use chrono::{DateTime, Utc}; use serde::{Deserialize, Serialize}; 
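The timeout-monitor hunk that follows replaces `SELECT *` with an explicit column list built from the shared `SELECT_COLUMNS` constant. A hedged standalone sketch of that pattern — the constant and column names below are illustrative stand-ins for `attune_common::repositories::execution::SELECT_COLUMNS`, not the real schema:

```rust
// Stand-in for the shared column-list constant; the real one is imported
// from attune_common::repositories::execution::SELECT_COLUMNS.
const EXECUTION_COLUMNS: &str = "id, action, action_ref, status, created, updated";

fn stale_scheduled_query() -> String {
    // Building the SQL from one constant keeps the SELECT list in sync with
    // the struct's FromRow column mapping, instead of relying on SELECT *.
    format!(
        "SELECT {EXECUTION_COLUMNS} FROM execution \
         WHERE status = $1 AND updated < $2 \
         ORDER BY updated ASC LIMIT 100"
    )
}

fn main() {
    let sql = stale_scheduled_query();
    assert!(sql.starts_with("SELECT id, action"));
    assert!(sql.ends_with("LIMIT 100"));
}
```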
@@ -105,17 +106,16 @@ impl ExecutionTimeoutMonitor { ); // Find executions stuck in SCHEDULED status - let stale_executions = sqlx::query_as::<_, Execution>( - "SELECT * FROM execution - WHERE status = $1 - AND updated < $2 - ORDER BY updated ASC - LIMIT 100", // Process in batches to avoid overwhelming system - ) - .bind(ExecutionStatus::Scheduled) - .bind(cutoff) - .fetch_all(&self.pool) - .await?; + let sql = format!( + "SELECT {EXECUTION_COLUMNS} FROM execution \ + WHERE status = $1 AND updated < $2 \ + ORDER BY updated ASC LIMIT 100" + ); + let stale_executions = sqlx::query_as::<_, Execution>(&sql) + .bind(ExecutionStatus::Scheduled) + .bind(cutoff) + .fetch_all(&self.pool) + .await?; if stale_executions.is_empty() { debug!("No stale scheduled executions found"); diff --git a/crates/executor/src/workflow/context.rs b/crates/executor/src/workflow/context.rs index 7cb450a..32b1ed1 100644 --- a/crates/executor/src/workflow/context.rs +++ b/crates/executor/src/workflow/context.rs @@ -2,6 +2,22 @@ //! //! This module manages workflow execution context, including variables, //! template rendering, and data flow between tasks. +//! +//! ## Function-call expressions +//! +//! Templates support Orquesta-style function calls: +//! - `{{ result() }}` — the last completed task's result +//! - `{{ result().field }}` — nested access into the result +//! - `{{ succeeded() }}` — `true` if the last task succeeded +//! - `{{ failed() }}` — `true` if the last task failed +//! - `{{ timed_out() }}` — `true` if the last task timed out +//! +//! ## Type-preserving rendering +//! +//! When a JSON string value is a *pure* template expression (the entire value +//! is `{{ expr }}`), `render_json` returns the raw `JsonValue` from the +//! expression instead of stringifying it. This means `"{{ item }}"` resolving +//! to integer `5` stays as `5`, not the string `"5"`. 
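The pure-expression rule from the module doc above can be sketched in isolation. `pure_expression` is a hypothetical free function for illustration; the real logic lives in `WorkflowContext::try_evaluate_pure_expression`, which additionally evaluates the extracted expression:

```rust
// Detection half of the type-preserving rendering rule: a string is a
// "pure" expression only when the ENTIRE value is one `{{ expr }}`.
fn pure_expression(s: &str) -> Option<&str> {
    let t = s.trim();
    if !t.starts_with("{{") || !t.ends_with("}}") {
        return None; // literal text around the template → string path
    }
    if t.matches("{{").count() != 1 {
        return None; // more than one expression → string path
    }
    let expr = t[2..t.len() - 2].trim();
    if expr.is_empty() { None } else { Some(expr) }
}

fn main() {
    // Pure expression → evaluated with its JSON type preserved
    assert_eq!(pure_expression("{{ item }}"), Some("item"));
    // Mixed text + template → falls back to string interpolation
    assert_eq!(pure_expression("Sleeping for {{ item }} seconds"), None);
    // Two expressions → also the string path
    assert_eq!(pure_expression("{{ a }} {{ b }}"), None);
}
```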
 use dashmap::DashMap;
 use serde_json::{json, Value as JsonValue};
@@ -31,6 +47,15 @@ pub enum ContextError {
     JsonError(#[from] serde_json::Error),
 }
 
+/// The status of the last completed task, used by `succeeded()` / `failed()` /
+/// `timed_out()` function expressions.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum TaskOutcome {
+    Succeeded,
+    Failed,
+    TimedOut,
+}
+
 /// Workflow execution context
 ///
 /// Uses Arc for shared immutable data to enable efficient cloning.
@@ -55,6 +80,12 @@ pub struct WorkflowContext {
 
     /// Current item index (for with-items iteration) - per-item data
     current_index: Option<usize>,
+
+    /// The result of the last completed task (for `result()` expressions)
+    last_task_result: Option<JsonValue>,
+
+    /// The outcome of the last completed task (for `succeeded()` / `failed()`)
+    last_task_outcome: Option<TaskOutcome>,
 }
 
 impl WorkflowContext {
@@ -75,6 +106,46 @@
             system: Arc::new(system),
             current_item: None,
             current_index: None,
+            last_task_result: None,
+            last_task_outcome: None,
+        }
+    }
+
+    /// Rebuild a workflow context from persisted workflow execution state.
+    ///
+    /// This is used when advancing a workflow after a child task completes —
+    /// the scheduler reconstructs the context from the `workflow_execution`
+    /// record's stored `variables` plus the results of all completed child
+    /// executions.
+    pub fn rebuild(
+        parameters: JsonValue,
+        stored_variables: &JsonValue,
+        task_results: HashMap<String, JsonValue>,
+    ) -> Self {
+        let variables = DashMap::new();
+        if let Some(obj) = stored_variables.as_object() {
+            for (k, v) in obj {
+                variables.insert(k.clone(), v.clone());
+            }
+        }
+
+        let results = DashMap::new();
+        for (k, v) in task_results {
+            results.insert(k, v);
+        }
+
+        let system = DashMap::new();
+        system.insert("workflow_start".to_string(), json!(chrono::Utc::now()));
+
+        Self {
+            variables: Arc::new(variables),
+            parameters: Arc::new(parameters),
+            task_results: Arc::new(results),
+            system: Arc::new(system),
+            current_item: None,
+            current_index: None,
+            last_task_result: None,
+            last_task_outcome: None,
+        }
     }
 
@@ -112,7 +183,28 @@
         self.current_index = None;
     }
 
-    /// Render a template string
+    /// Record the outcome of the last completed task so that `result()`,
+    /// `succeeded()`, `failed()`, and `timed_out()` expressions resolve
+    /// correctly.
+    pub fn set_last_task_outcome(&mut self, result: JsonValue, outcome: TaskOutcome) {
+        self.last_task_result = Some(result);
+        self.last_task_outcome = Some(outcome);
+    }
+
+    /// Export workflow variables as a JSON object suitable for persisting
+    /// back to the `workflow_execution.variables` column.
+    pub fn export_variables(&self) -> JsonValue {
+        let map: HashMap<String, JsonValue> = self
+            .variables
+            .iter()
+            .map(|entry| (entry.key().clone(), entry.value().clone()))
+            .collect();
+        json!(map)
+    }
+
+    /// Render a template string, always returning a `String`.
+    ///
+    /// For type-preserving rendering of JSON values use [`render_json`].
     pub fn render_template(&self, template: &str) -> ContextResult<String> {
         // Simple template rendering (Jinja2-like syntax)
         // Supports: {{ variable }}, {{ task.result }}, {{ parameters.key }}
@@ -143,10 +235,49 @@
         Ok(result)
     }
 
-    /// Render a JSON value (recursively render templates in strings)
+    /// Try to evaluate a string as a single pure template expression.
+    ///
+    /// Returns `Some(JsonValue)` when the **entire** string is exactly
+    /// `{{ expr }}` (with optional whitespace), preserving the original
+    /// JSON type of the evaluated expression. Returns `None` if the
+    /// string contains literal text around the template or multiple
+    /// template expressions — in that case the caller should fall back
+    /// to `render_template` which always stringifies.
+    fn try_evaluate_pure_expression(&self, s: &str) -> Option<ContextResult<JsonValue>> {
+        let trimmed = s.trim();
+        if !trimmed.starts_with("{{") || !trimmed.ends_with("}}") {
+            return None;
+        }
+
+        // Make sure there is only ONE template expression in the string.
+        // Count `{{` occurrences — if more than one, it's not a pure expr.
+        if trimmed.matches("{{").count() != 1 {
+            return None;
+        }
+
+        let expr = trimmed[2..trimmed.len() - 2].trim();
+        if expr.is_empty() {
+            return None;
+        }
+
+        Some(self.evaluate_expression(expr))
+    }
+
+    /// Render a JSON value, recursively resolving `{{ }}` templates in
+    /// strings.
+    ///
+    /// **Type-preserving**: when a string value is a *pure* template
+    /// expression (the entire string is `{{ expr }}`), the raw `JsonValue`
+    /// from the expression is returned. For example, if `item` is `5`
+    /// (a JSON number), then `"{{ item }}"` resolves to `5` not `"5"`.
     pub fn render_json(&self, value: &JsonValue) -> ContextResult<JsonValue> {
         match value {
             JsonValue::String(s) => {
+                // Fast path: try as a pure expression to preserve type
+                if let Some(result) = self.try_evaluate_pure_expression(s) {
+                    return result;
+                }
+                // Fallback: render as string (interpolation with surrounding text)
                 let rendered = self.render_template(s)?;
                 Ok(JsonValue::String(rendered))
             }
@@ -170,6 +301,28 @@
 
     /// Evaluate a template expression
     fn evaluate_expression(&self, expr: &str) -> ContextResult<JsonValue> {
+        // ---------------------------------------------------------------
+        // Function-call expressions: result(), succeeded(), failed(), timed_out()
+        // ---------------------------------------------------------------
+        // We handle these *before* splitting on `.` because the function
+        // name contains parentheses which would confuse the dot-split.
+        //
+        // Supported patterns:
+        //   result()            → last task result
+        //   result().foo.bar    → nested access into result
+        //   result().data.items → nested access into result
+        //   succeeded()         → boolean
+        //   failed()            → boolean
+        //   timed_out()         → boolean
+        // ---------------------------------------------------------------
+
+        if let Some(result_val) = self.try_evaluate_function_call(expr)? {
+            return Ok(result_val);
+        }
+
+        // ---------------------------------------------------------------
+        // Dot-path expressions
+        // ---------------------------------------------------------------
         let parts: Vec<&str> = expr.split('.').collect();
 
         if parts.is_empty() {
@@ -244,7 +397,8 @@
                     Err(ContextError::VariableNotFound(format!("system.{}", key)))
                 }
             }
-            // Direct variable reference
+            // Direct variable reference (e.g., `number_list` published by a
+            // previous task's transition)
             var_name => {
                 if let Some(entry) = self.variables.get(var_name) {
                     let value = entry.value().clone();
@@ -261,6 +415,56 @@
         }
     }
 
+    /// Try to evaluate `expr` as a function-call expression.
+    ///
+    /// Returns `Ok(Some(value))` if the expression starts with a recognised
+    /// function call, `Ok(None)` if it does not match, or `Err` on failure.
+    fn try_evaluate_function_call(&self, expr: &str) -> ContextResult<Option<JsonValue>> {
+        // succeeded()
+        if expr == "succeeded()" {
+            let val = self
+                .last_task_outcome
+                .map(|o| o == TaskOutcome::Succeeded)
+                .unwrap_or(false);
+            return Ok(Some(json!(val)));
+        }
+
+        // failed()
+        if expr == "failed()" {
+            let val = self
+                .last_task_outcome
+                .map(|o| o == TaskOutcome::Failed)
+                .unwrap_or(false);
+            return Ok(Some(json!(val)));
+        }
+
+        // timed_out()
+        if expr == "timed_out()" {
+            let val = self
+                .last_task_outcome
+                .map(|o| o == TaskOutcome::TimedOut)
+                .unwrap_or(false);
+            return Ok(Some(json!(val)));
+        }
+
+        // result() or result().path.to.field
+        if expr == "result()" || expr.starts_with("result().") {
+            let base = self.last_task_result.clone().unwrap_or(JsonValue::Null);
+
+            if expr == "result()" {
+                return Ok(Some(base));
+            }
+
+            // Strip "result()." prefix and navigate the remaining path
+            let rest = &expr["result().".len()..];
+            let path_parts: Vec<&str> = rest.split('.').collect();
+            let val = self.get_nested_value(&base, &path_parts)?;
+            return Ok(Some(val));
+        }
+
+        Ok(None)
+    }
+
     /// Get nested value from JSON
     fn get_nested_value(&self, value: &JsonValue, path: &[&str]) -> ContextResult<JsonValue> {
         let mut current = value;
@@ -313,7 +517,12 @@
         }
     }
 
-    /// Publish variables from a task result
+    /// Publish variables from a task result.
+    ///
+    /// Each publish directive is a `(name, expression)` pair where the
+    /// expression is a template string like `"{{ result().data.items }}"`.
+    /// The expression is rendered with `render_json`-style type preservation
+    /// so that non-string values (arrays, numbers, booleans) keep their type.
pub fn publish_from_result( &mut self, result: &JsonValue, @@ -323,16 +532,11 @@ impl WorkflowContext { // If publish map is provided, use it if let Some(map) = publish_map { for (var_name, template) in map { - // Create temporary context with result - let mut temp_ctx = self.clone(); - temp_ctx.set_var("result", result.clone()); - - let value_str = temp_ctx.render_template(template)?; - - // Try to parse as JSON, otherwise store as string - let value = serde_json::from_str(&value_str) - .unwrap_or_else(|_| JsonValue::String(value_str)); - + // Use type-preserving rendering: if the entire template is a + // single expression like `{{ result().data.items }}`, preserve + // the underlying JsonValue type (e.g. an array stays an array). + let json_value = JsonValue::String(template.clone()); + let value = self.render_json(&json_value)?; self.set_var(var_name, value); } } else { @@ -405,6 +609,8 @@ impl WorkflowContext { system: Arc::new(system), current_item: None, current_index: None, + last_task_result: None, + last_task_outcome: None, }) } } @@ -513,6 +719,122 @@ mod tests { assert_eq!(result["nested"]["value"], "Name is test"); } + #[test] + fn test_render_json_type_preserving_number() { + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_current_item(json!(5), 0); + + // Pure expression — should preserve the integer type + let input = json!({"seconds": "{{ item }}"}); + let result = ctx.render_json(&input).unwrap(); + assert_eq!(result["seconds"], json!(5)); + assert!(result["seconds"].is_number()); + } + + #[test] + fn test_render_json_type_preserving_array() { + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_last_task_outcome( + json!({"data": {"items": [0, 1, 2, 3, 4]}}), + TaskOutcome::Succeeded, + ); + + // Pure expression into result() — should preserve the array type + let input = json!({"list": "{{ result().data.items }}"}); + let result = ctx.render_json(&input).unwrap(); + assert_eq!(result["list"], 
json!([0, 1, 2, 3, 4])); + assert!(result["list"].is_array()); + } + + #[test] + fn test_render_json_mixed_template_stays_string() { + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_current_item(json!(5), 0); + + // Mixed text + template — must remain a string + let input = json!({"msg": "Sleeping for {{ item }} seconds"}); + let result = ctx.render_json(&input).unwrap(); + assert_eq!(result["msg"], json!("Sleeping for 5 seconds")); + assert!(result["msg"].is_string()); + } + + #[test] + fn test_render_json_type_preserving_bool() { + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_last_task_outcome(json!({}), TaskOutcome::Succeeded); + + let input = json!({"ok": "{{ succeeded() }}"}); + let result = ctx.render_json(&input).unwrap(); + assert_eq!(result["ok"], json!(true)); + assert!(result["ok"].is_boolean()); + } + + #[test] + fn test_result_function() { + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_last_task_outcome( + json!({"data": {"items": [10, 20]}, "stdout": "hello"}), + TaskOutcome::Succeeded, + ); + + // result() returns the full last task result + let val = ctx.evaluate_expression("result()").unwrap(); + assert_eq!(val["data"]["items"], json!([10, 20])); + + // result().stdout returns nested field + let val = ctx.evaluate_expression("result().stdout").unwrap(); + assert_eq!(val, json!("hello")); + + // result().data.items returns deeper nested field + let val = ctx.evaluate_expression("result().data.items").unwrap(); + assert_eq!(val, json!([10, 20])); + } + + #[test] + fn test_succeeded_failed_functions() { + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_last_task_outcome(json!({}), TaskOutcome::Succeeded); + + assert_eq!(ctx.evaluate_expression("succeeded()").unwrap(), json!(true)); + assert_eq!(ctx.evaluate_expression("failed()").unwrap(), json!(false)); + assert_eq!( + ctx.evaluate_expression("timed_out()").unwrap(), + json!(false) + ); + + 
ctx.set_last_task_outcome(json!({}), TaskOutcome::Failed); + assert_eq!( + ctx.evaluate_expression("succeeded()").unwrap(), + json!(false) + ); + assert_eq!(ctx.evaluate_expression("failed()").unwrap(), json!(true)); + + ctx.set_last_task_outcome(json!({}), TaskOutcome::TimedOut); + assert_eq!(ctx.evaluate_expression("timed_out()").unwrap(), json!(true)); + } + + #[test] + fn test_publish_with_result_function() { + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_last_task_outcome( + json!({"data": {"items": [0, 1, 2]}}), + TaskOutcome::Succeeded, + ); + + let mut publish_map = HashMap::new(); + publish_map.insert( + "number_list".to_string(), + "{{ result().data.items }}".to_string(), + ); + + ctx.publish_from_result(&json!({}), &[], Some(&publish_map)) + .unwrap(); + + let val = ctx.get_var("number_list").unwrap(); + assert_eq!(val, json!([0, 1, 2])); + assert!(val.is_array()); + } + #[test] fn test_publish_variables() { let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); @@ -524,6 +846,23 @@ mod tests { assert_eq!(ctx.get_var("my_var").unwrap(), result); } + #[test] + fn test_rebuild_context() { + let stored_vars = json!({"number_list": [0, 1, 2]}); + let mut task_results = HashMap::new(); + task_results.insert("task1".to_string(), json!({"data": {"items": [0, 1, 2]}})); + + let ctx = WorkflowContext::rebuild(json!({"count": 5}), &stored_vars, task_results); + + assert_eq!(ctx.get_var("number_list").unwrap(), json!([0, 1, 2])); + assert_eq!( + ctx.get_task_result("task1").unwrap(), + json!({"data": {"items": [0, 1, 2]}}) + ); + let rendered = ctx.render_template("{{ parameters.count }}").unwrap(); + assert_eq!(rendered, "5"); + } + #[test] fn test_export_import() { let mut ctx = WorkflowContext::new(json!({"key": "value"}), HashMap::new()); @@ -539,4 +878,28 @@ mod tests { json!({"result": "ok"}) ); } + + #[test] + fn test_with_items_integer_type_preservation() { + // Simulates the sleep_2 task from the hello_workflow: + // 
input: { seconds: "{{ item }}" } + // with_items: [0, 1, 2, 3, 4] + let mut ctx = WorkflowContext::new(json!({}), HashMap::new()); + ctx.set_current_item(json!(3), 3); + + let input = json!({ + "message": "Sleeping for {{ item }} seconds ", + "seconds": "{{item}}" + }); + + let rendered = ctx.render_json(&input).unwrap(); + + // seconds should be integer 3, not string "3" + assert_eq!(rendered["seconds"], json!(3)); + assert!(rendered["seconds"].is_number()); + + // message should be a string with the value interpolated + assert_eq!(rendered["message"], json!("Sleeping for 3 seconds ")); + assert!(rendered["message"].is_string()); + } } diff --git a/crates/executor/src/workflow/registrar.rs b/crates/executor/src/workflow/registrar.rs index 2327475..5d31abf 100644 --- a/crates/executor/src/workflow/registrar.rs +++ b/crates/executor/src/workflow/registrar.rs @@ -196,7 +196,7 @@ impl WorkflowRegistrar { /// /// This ensures the workflow appears in action lists and the action palette /// in the workflow builder. The action is linked to the workflow definition - /// via `is_workflow = true` and `workflow_def` FK. + /// via the `workflow_def` FK. 
async fn create_companion_action( &self, workflow_def_id: i64, @@ -223,7 +223,7 @@ impl WorkflowRegistrar { let action = ActionRepository::create(&self.pool, action_input).await?; - // Link the action to the workflow definition (sets is_workflow = true and workflow_def) + // Link the action to the workflow definition (sets workflow_def FK) ActionRepository::link_workflow_def(&self.pool, action.id, workflow_def_id).await?; info!( diff --git a/docs/plans/timescaledb-entity-history.md b/docs/plans/timescaledb-entity-history.md index 63ffc6c..884191c 100644 --- a/docs/plans/timescaledb-entity-history.md +++ b/docs/plans/timescaledb-entity-history.md @@ -67,8 +67,19 @@ History rows are written by `AFTER INSERT OR UPDATE OR DELETE` triggers on the o |--------|--------------|---------------------|-----------------| | `execution` | `execution_history` | `action_ref` | *(none)* | | `worker` | `worker_history` | `name` | `last_heartbeat` (when sole change) | -| `enforcement` | `enforcement_history` | `rule_ref` | *(none)* | -| `event` | `event_history` | `trigger_ref` | *(none)* | + +> **Note:** The `event` and `enforcement` tables do **not** have separate `_history` +> tables. Both are TimescaleDB hypertables partitioned on `created`: +> +> - **Events** are immutable after insert (never updated). Compression and retention +> policies are applied directly. The `event_volume_hourly` continuous aggregate +> queries the `event` table directly. +> - **Enforcements** are updated exactly once (status transitions from `created` to +> `processed` or `disabled` within ~1 second of creation, well before the 7-day +> compression window). The `resolved_at` column records when this transition +> occurred. A separate history table added little value for a single deterministic +> status change. The `enforcement_volume_hourly` continuous aggregate queries the +> `enforcement` table directly. 
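Since the continuous aggregates read the hypertables directly, dashboards should query the aggregate rather than the raw `event` table. A sketch of such a query, assuming the `bucket`/`trigger_ref`/`event_count` columns defined by `event_volume_hourly` in this plan:

```sql
-- Illustrative dashboard query (not part of the migrations):
-- hourly event volume per trigger over the last day.
SELECT bucket, trigger_ref, event_count
FROM event_volume_hourly
WHERE bucket > NOW() - INTERVAL '24 hours'
ORDER BY bucket DESC, event_count DESC;
```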
## Table Schema

@@ -100,11 +111,11 @@ Column details:

 ## Hypertable Configuration
 
-| History Table | Chunk Interval | Rationale |
-|---------------|---------------|-----------|
+| Table | Chunk Interval | Rationale |
+|-------|---------------|-----------|
 | `execution_history` | 1 day | Highest expected volume |
-| `enforcement_history` | 1 day | Correlated with execution volume |
-| `event_history` | 1 day | Can be high volume from active sensors |
+| `event` (hypertable) | 1 day | Can be high volume from active sensors |
+| `enforcement` (hypertable) | 1 day | Correlated with execution volume |
 | `worker_history` | 7 days | Low volume (status changes are infrequent) |
 
 ## Indexes
@@ -138,22 +149,22 @@ Each tracked table gets a dedicated trigger function that:
 
 Applied after data leaves the "hot" query window:
 
-| History Table | Compress After | `segmentby` | `orderby` |
-|---------------|---------------|-------------|-----------|
+| Table | Compress After | `segmentby` | `orderby` |
+|-------|---------------|-------------|-----------|
 | `execution_history` | 7 days | `entity_id` | `time DESC` |
 | `worker_history` | 7 days | `entity_id` | `time DESC` |
-| `enforcement_history` | 7 days | `entity_id` | `time DESC` |
-| `event_history` | 7 days | `entity_id` | `time DESC` |
+| `event` (hypertable) | 7 days | `trigger_ref` | `created DESC` |
+| `enforcement` (hypertable) | 7 days | `rule_ref` | `created DESC` |
 
-`segmentby = entity_id` ensures that "show me history for entity X" queries are
-fast even on compressed chunks.
+The `segmentby` key matches each table's primary lookup column (`entity_id` for history tables,
+`trigger_ref`/`rule_ref` for the hypertables), keeping filtered queries fast even on compressed chunks.
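The compression policies above can be checked against live chunks with TimescaleDB's standard informational views; a sketch for the `event` hypertable:

```sql
-- Inspect chunk boundaries and compression state for the event hypertable.
-- timescaledb_information.chunks is a built-in TimescaleDB 2.x view.
SELECT chunk_name, range_start, range_end, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_name = 'event'
ORDER BY range_start DESC
LIMIT 10;
```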
## Retention Policies

-| History Table | Retain For | Rationale |
-|---------------|-----------|-----------|
+| Table | Retain For | Rationale |
+|-------|-----------|-----------|
 | `execution_history` | 90 days | Primary operational data |
-| `enforcement_history` | 90 days | Tied to execution lifecycle |
-| `event_history` | 30 days | High volume, less long-term value |
+| `event` (hypertable) | 90 days | High volume time-series data |
+| `enforcement` (hypertable) | 90 days | Tied to execution lifecycle |
 | `worker_history` | 180 days | Low volume, useful for capacity trends |
 
 ## Continuous Aggregates (Future)
@@ -181,9 +192,8 @@
 SELECT
-    time_bucket('1 hour', time) AS bucket,
-    entity_ref AS trigger_ref,
+    time_bucket('1 hour', created) AS bucket,
+    trigger_ref,
     COUNT(*) AS event_count
-FROM event_history
-WHERE operation = 'INSERT'
-GROUP BY bucket, entity_ref
+FROM event
+GROUP BY bucket, trigger_ref
 WITH NO DATA;
```

diff --git a/migrations/20250101000004_trigger_sensor_event_rule.sql b/migrations/20250101000004_trigger_sensor_event_rule.sql
index 9af9685..1c30a20 100644
--- a/migrations/20250101000004_trigger_sensor_event_rule.sql
+++ b/migrations/20250101000004_trigger_sensor_event_rule.sql
@@ -2,6 +2,12 @@
 -- Description: Creates trigger, sensor, event, enforcement, and action tables
 --              with runtime version constraint support. Includes webhook key
 --              generation function used by webhook management functions in 000007.
+--
+-- NOTE: The event and enforcement tables are converted to TimescaleDB
+-- hypertables in migration 000009. Hypertables cannot be the target of
+-- FK constraints, so enforcement.event is a plain BIGINT with no FK.
+-- FKs *from* hypertables to regular tables (e.g., event.trigger → trigger,
+-- enforcement.rule → rule) are supported by TimescaleDB 2.x and are kept.
-- Version: 20250101000004 -- ============================================================================ @@ -140,8 +146,7 @@ CREATE TABLE event ( source_ref TEXT, created TIMESTAMPTZ NOT NULL DEFAULT NOW(), rule BIGINT, - rule_ref TEXT, - updated TIMESTAMPTZ NOT NULL DEFAULT NOW() + rule_ref TEXT ); -- Indexes @@ -154,12 +159,6 @@ CREATE INDEX idx_event_trigger_ref_created ON event(trigger_ref, created DESC); CREATE INDEX idx_event_source_created ON event(source, created DESC); CREATE INDEX idx_event_payload_gin ON event USING GIN (payload); --- Trigger -CREATE TRIGGER update_event_updated - BEFORE UPDATE ON event - FOR EACH ROW - EXECUTE FUNCTION update_updated_column(); - -- Comments COMMENT ON TABLE event IS 'Events are instances of triggers firing'; COMMENT ON COLUMN event.trigger IS 'Trigger that fired (may be null if trigger deleted)'; @@ -178,13 +177,13 @@ CREATE TABLE enforcement ( rule_ref TEXT NOT NULL, trigger_ref TEXT NOT NULL, config JSONB, - event BIGINT REFERENCES event(id) ON DELETE SET NULL, + event BIGINT, -- references event(id); no FK because event becomes a hypertable status enforcement_status_enum NOT NULL DEFAULT 'created', payload JSONB NOT NULL, condition enforcement_condition_enum NOT NULL DEFAULT 'all', conditions JSONB NOT NULL DEFAULT '[]'::jsonb, created TIMESTAMPTZ NOT NULL DEFAULT NOW(), - updated TIMESTAMPTZ NOT NULL DEFAULT NOW(), + resolved_at TIMESTAMPTZ, -- Constraints CONSTRAINT enforcement_condition_check CHECK (condition IN ('any', 'all')) @@ -203,18 +202,13 @@ CREATE INDEX idx_enforcement_event_status ON enforcement(event, status); CREATE INDEX idx_enforcement_payload_gin ON enforcement USING GIN (payload); CREATE INDEX idx_enforcement_conditions_gin ON enforcement USING GIN (conditions); --- Trigger -CREATE TRIGGER update_enforcement_updated - BEFORE UPDATE ON enforcement - FOR EACH ROW - EXECUTE FUNCTION update_updated_column(); - -- Comments COMMENT ON TABLE enforcement IS 'Enforcements represent rule triggering by 
events'; COMMENT ON COLUMN enforcement.rule IS 'Rule being enforced (may be null if rule deleted)'; COMMENT ON COLUMN enforcement.rule_ref IS 'Rule reference (preserved even if rule deleted)'; -COMMENT ON COLUMN enforcement.event IS 'Event that triggered this enforcement'; -COMMENT ON COLUMN enforcement.status IS 'Processing status'; +COMMENT ON COLUMN enforcement.event IS 'Event that triggered this enforcement (no FK — event is a hypertable)'; +COMMENT ON COLUMN enforcement.status IS 'Processing status (created → processed or disabled)'; +COMMENT ON COLUMN enforcement.resolved_at IS 'Timestamp when the enforcement was resolved (status changed from created to processed/disabled). NULL while status is created.'; COMMENT ON COLUMN enforcement.payload IS 'Event payload for rule evaluation'; COMMENT ON COLUMN enforcement.condition IS 'Logical operator for conditions (any=OR, all=AND)'; COMMENT ON COLUMN enforcement.conditions IS 'Condition expressions to evaluate'; diff --git a/migrations/20250101000005_execution_and_operations.sql b/migrations/20250101000005_execution_and_operations.sql index 3560f31..e5f3108 100644 --- a/migrations/20250101000005_execution_and_operations.sql +++ b/migrations/20250101000005_execution_and_operations.sql @@ -3,6 +3,14 @@ -- Includes retry tracking, worker health views, and helper functions. -- Consolidates former migrations: 000006 (execution_system), 000008 -- (worker_notification), 000014 (worker_table), and 20260209 (phase3). +-- +-- NOTE: The execution table is converted to a TimescaleDB hypertable in +-- migration 000009. Hypertables cannot be the target of FK constraints, +-- so columns referencing execution (inquiry.execution, workflow_execution.execution) +-- are plain BIGINT with no FK. Similarly, columns ON the execution table that +-- would self-reference or reference other hypertables (parent, enforcement, +-- original_execution) are plain BIGINT. 
The action and executor FKs are also +-- omitted since they would need to be dropped during hypertable conversion. -- Version: 20250101000005 -- ============================================================================ @@ -11,25 +19,25 @@ CREATE TABLE execution ( id BIGSERIAL PRIMARY KEY, - action BIGINT REFERENCES action(id) ON DELETE SET NULL, + action BIGINT, -- references action(id); no FK because execution becomes a hypertable action_ref TEXT NOT NULL, config JSONB, env_vars JSONB, - parent BIGINT REFERENCES execution(id) ON DELETE SET NULL, - enforcement BIGINT REFERENCES enforcement(id) ON DELETE SET NULL, - executor BIGINT REFERENCES identity(id) ON DELETE SET NULL, + parent BIGINT, -- self-reference; no FK because execution becomes a hypertable + enforcement BIGINT, -- references enforcement(id); no FK (both are hypertables) + executor BIGINT, -- references identity(id); no FK because execution becomes a hypertable status execution_status_enum NOT NULL DEFAULT 'requested', result JSONB, created TIMESTAMPTZ NOT NULL DEFAULT NOW(), is_workflow BOOLEAN DEFAULT false NOT NULL, - workflow_def BIGINT, + workflow_def BIGINT, -- references workflow_definition(id); no FK because execution becomes a hypertable workflow_task JSONB, -- Retry tracking (baked in from phase 3) retry_count INTEGER NOT NULL DEFAULT 0, max_retries INTEGER, retry_reason TEXT, - original_execution BIGINT REFERENCES execution(id) ON DELETE SET NULL, + original_execution BIGINT, -- self-reference; no FK because execution becomes a hypertable updated TIMESTAMPTZ NOT NULL DEFAULT NOW() ); @@ -65,9 +73,9 @@ COMMENT ON COLUMN execution.action IS 'Action being executed (may be null if act COMMENT ON COLUMN execution.action_ref IS 'Action reference (preserved even if action deleted)'; COMMENT ON COLUMN execution.config IS 'Snapshot of action configuration at execution time'; COMMENT ON COLUMN execution.env_vars IS 'Environment variables for this execution as key-value pairs (string -> string). 
These are set in the execution environment and are separate from action parameters. Used for execution context, configuration, and non-sensitive metadata.'; -COMMENT ON COLUMN execution.parent IS 'Parent execution ID for workflow hierarchies'; -COMMENT ON COLUMN execution.enforcement IS 'Enforcement that triggered this execution (if rule-driven)'; -COMMENT ON COLUMN execution.executor IS 'Identity that initiated the execution'; +COMMENT ON COLUMN execution.parent IS 'Parent execution ID for workflow hierarchies (no FK — execution is a hypertable)'; +COMMENT ON COLUMN execution.enforcement IS 'Enforcement that triggered this execution (no FK — both are hypertables)'; +COMMENT ON COLUMN execution.executor IS 'Identity that initiated the execution (no FK — execution is a hypertable)'; COMMENT ON COLUMN execution.status IS 'Current execution lifecycle status'; COMMENT ON COLUMN execution.result IS 'Execution output/results'; COMMENT ON COLUMN execution.retry_count IS 'Current retry attempt number (0 = first attempt, 1 = first retry, etc.)'; @@ -83,7 +91,7 @@ COMMENT ON COLUMN execution.original_execution IS 'ID of the original execution CREATE TABLE inquiry ( id BIGSERIAL PRIMARY KEY, - execution BIGINT NOT NULL REFERENCES execution(id) ON DELETE CASCADE, + execution BIGINT NOT NULL, -- references execution(id); no FK because execution is a hypertable prompt TEXT NOT NULL, response_schema JSONB, assigned_to BIGINT REFERENCES identity(id) ON DELETE SET NULL, @@ -114,7 +122,7 @@ CREATE TRIGGER update_inquiry_updated -- Comments COMMENT ON TABLE inquiry IS 'Inquiries enable human-in-the-loop workflows with async user interactions'; -COMMENT ON COLUMN inquiry.execution IS 'Execution that is waiting on this inquiry'; +COMMENT ON COLUMN inquiry.execution IS 'Execution that is waiting on this inquiry (no FK — execution is a hypertable)'; COMMENT ON COLUMN inquiry.prompt IS 'Question or prompt text for the user'; COMMENT ON COLUMN inquiry.response_schema IS 'JSON schema 
defining expected response format'; COMMENT ON COLUMN inquiry.assigned_to IS 'Identity who should respond to this inquiry'; diff --git a/migrations/20250101000006_workflow_system.sql b/migrations/20250101000006_workflow_system.sql index dd99c95..1c689a2 100644 --- a/migrations/20250101000006_workflow_system.sql +++ b/migrations/20250101000006_workflow_system.sql @@ -1,6 +1,13 @@ -- Migration: Workflow System -- Description: Creates workflow_definition and workflow_execution tables -- (workflow_task_execution consolidated into execution.workflow_task JSONB) +-- +-- NOTE: The execution table is converted to a TimescaleDB hypertable in +-- migration 000009. Hypertables cannot be the target of FK constraints, +-- so workflow_execution.execution is a plain BIGINT with no FK. +-- execution.workflow_def also has no FK (added as plain BIGINT in 000005) +-- since execution is a hypertable and FKs from hypertables are only +-- supported for simple cases — we omit it for consistency. -- Version: 20250101000006 -- ============================================================================ @@ -49,7 +56,7 @@ COMMENT ON COLUMN workflow_definition.out_schema IS 'JSON schema for workflow ou CREATE TABLE workflow_execution ( id BIGSERIAL PRIMARY KEY, - execution BIGINT NOT NULL REFERENCES execution(id) ON DELETE CASCADE, + execution BIGINT NOT NULL, -- references execution(id); no FK because execution is a hypertable workflow_def BIGINT NOT NULL REFERENCES workflow_definition(id) ON DELETE CASCADE, current_tasks TEXT[] DEFAULT '{}', completed_tasks TEXT[] DEFAULT '{}', @@ -78,7 +85,7 @@ CREATE TRIGGER update_workflow_execution_updated EXECUTE FUNCTION update_updated_column(); -- Comments -COMMENT ON TABLE workflow_execution IS 'Runtime state tracking for workflow executions'; +COMMENT ON TABLE workflow_execution IS 'Runtime state tracking for workflow executions. 
execution column has no FK — execution is a hypertable.'; COMMENT ON COLUMN workflow_execution.variables IS 'Workflow-scoped variables, updated via publish directives'; COMMENT ON COLUMN workflow_execution.task_graph IS 'Execution graph with dependencies and transitions'; COMMENT ON COLUMN workflow_execution.current_tasks IS 'Array of task names currently executing'; @@ -89,22 +96,15 @@ COMMENT ON COLUMN workflow_execution.paused IS 'True if workflow execution is pa -- ============================================================================ ALTER TABLE action - ADD COLUMN is_workflow BOOLEAN DEFAULT false NOT NULL, ADD COLUMN workflow_def BIGINT REFERENCES workflow_definition(id) ON DELETE CASCADE; -CREATE INDEX idx_action_is_workflow ON action(is_workflow) WHERE is_workflow = true; CREATE INDEX idx_action_workflow_def ON action(workflow_def); -COMMENT ON COLUMN action.is_workflow IS 'True if this action is a workflow (composable action graph)'; -COMMENT ON COLUMN action.workflow_def IS 'Reference to workflow definition if is_workflow=true'; +COMMENT ON COLUMN action.workflow_def IS 'Reference to workflow definition (non-null means this action is a workflow)'; --- ============================================================================ --- ADD FOREIGN KEY CONSTRAINT FOR EXECUTION.WORKFLOW_DEF --- ============================================================================ - -ALTER TABLE execution - ADD CONSTRAINT execution_workflow_def_fkey - FOREIGN KEY (workflow_def) REFERENCES workflow_definition(id) ON DELETE CASCADE; +-- NOTE: execution.workflow_def has no FK constraint because execution is a +-- TimescaleDB hypertable (converted in migration 000009). The column was +-- created as a plain BIGINT in migration 000005. 
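-- Because workflow_execution.execution carries no FK, referential integrity is
-- the application's responsibility. An illustrative periodic check (not part of
-- this migration) for dangling references would look like:
--
--   SELECT we.id AS workflow_execution_id, we.execution AS missing_execution_id
--   FROM workflow_execution we
--   LEFT JOIN execution e ON e.id = we.execution
--   WHERE e.id IS NULL;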
-- ============================================================================ -- WORKFLOW VIEWS @@ -143,6 +143,6 @@ SELECT a.pack as pack_id, a.pack_ref FROM workflow_definition wd -LEFT JOIN action a ON a.workflow_def = wd.id AND a.is_workflow = true; +LEFT JOIN action a ON a.workflow_def = wd.id; COMMENT ON VIEW workflow_action_link IS 'Links workflow definitions to their corresponding action records'; diff --git a/migrations/20250101000008_notify_triggers.sql b/migrations/20250101000008_notify_triggers.sql index f8562b0..a068337 100644 --- a/migrations/20250101000008_notify_triggers.sql +++ b/migrations/20250101000008_notify_triggers.sql @@ -163,7 +163,7 @@ BEGIN 'config', NEW.config, 'payload', NEW.payload, 'created', NEW.created, - 'updated', NEW.updated + 'resolved_at', NEW.resolved_at ); PERFORM pg_notify('enforcement_created', payload::text); @@ -203,7 +203,7 @@ BEGIN 'config', NEW.config, 'payload', NEW.payload, 'created', NEW.created, - 'updated', NEW.updated + 'resolved_at', NEW.resolved_at ); PERFORM pg_notify('enforcement_status_changed', payload::text); diff --git a/migrations/20250101000009_timescaledb_history.sql b/migrations/20250101000009_timescaledb_history.sql index deec543..02d832f 100644 --- a/migrations/20250101000009_timescaledb_history.sql +++ b/migrations/20250101000009_timescaledb_history.sql @@ -1,10 +1,15 @@ -- Migration: TimescaleDB Entity History and Analytics --- Description: Creates append-only history hypertables for execution, worker, enforcement, --- and event tables. Uses JSONB diff format to track field-level changes via --- PostgreSQL triggers. Includes continuous aggregates for dashboard analytics. --- Consolidates former migrations: 20260226100000 (entity_history_timescaledb), --- 20260226200000 (continuous_aggregates), and 20260226300000 (fix + result digest). +-- Description: Creates append-only history hypertables for execution and worker tables. 
+-- Uses JSONB diff format to track field-level changes via PostgreSQL triggers. +-- Converts the event, enforcement, and execution tables into TimescaleDB +-- hypertables (events are immutable; enforcements are updated exactly once; +-- executions are updated ~4 times during their lifecycle). +-- Includes continuous aggregates for dashboard analytics. -- See docs/plans/timescaledb-entity-history.md for full design. +-- +-- NOTE: FK constraints that would reference hypertable targets were never +-- created in earlier migrations (000004, 000005, 000006), so no DROP +-- CONSTRAINT statements are needed here. -- Version: 20250101000009 -- ============================================================================ @@ -114,67 +119,76 @@ CREATE INDEX idx_worker_history_changed_fields COMMENT ON TABLE worker_history IS 'Append-only history of field-level changes to the worker table (TimescaleDB hypertable)'; COMMENT ON COLUMN worker_history.entity_ref IS 'Denormalized worker name for JOIN-free queries'; --- ---------------------------------------------------------------------------- --- enforcement_history +-- ============================================================================ +-- CONVERT EVENT TABLE TO HYPERTABLE +-- ============================================================================ +-- Events are immutable after insert — they are never updated. Instead of +-- maintaining a separate event_history table to track changes that never +-- happen, we convert the event table itself into a TimescaleDB hypertable +-- partitioned on `created`. This gives us automatic time-based partitioning, +-- compression, and retention for free. +-- +-- No FK constraints reference event(id) — enforcement.event was created as a +-- plain BIGINT in migration 000004 (hypertables cannot be FK targets). 
-- ---------------------------------------------------------------------------- -CREATE TABLE enforcement_history ( - time TIMESTAMPTZ NOT NULL DEFAULT NOW(), - operation TEXT NOT NULL, - entity_id BIGINT NOT NULL, - entity_ref TEXT, - changed_fields TEXT[] NOT NULL DEFAULT '{}', - old_values JSONB, - new_values JSONB -); +-- Replace the single-column PK with a composite PK that includes the +-- partitioning column (required by TimescaleDB). +ALTER TABLE event DROP CONSTRAINT event_pkey; +ALTER TABLE event ADD PRIMARY KEY (id, created); -SELECT create_hypertable('enforcement_history', 'time', - chunk_time_interval => INTERVAL '1 day'); +SELECT create_hypertable('event', 'created', + chunk_time_interval => INTERVAL '1 day', + migrate_data => true); -CREATE INDEX idx_enforcement_history_entity - ON enforcement_history (entity_id, time DESC); +COMMENT ON TABLE event IS 'Events are instances of triggers firing (TimescaleDB hypertable partitioned on created)'; -CREATE INDEX idx_enforcement_history_entity_ref - ON enforcement_history (entity_ref, time DESC); - -CREATE INDEX idx_enforcement_history_status_changes - ON enforcement_history (time DESC) - WHERE 'status' = ANY(changed_fields); - -CREATE INDEX idx_enforcement_history_changed_fields - ON enforcement_history USING GIN (changed_fields); - -COMMENT ON TABLE enforcement_history IS 'Append-only history of field-level changes to the enforcement table (TimescaleDB hypertable)'; -COMMENT ON COLUMN enforcement_history.entity_ref IS 'Denormalized rule_ref for JOIN-free queries'; - --- ---------------------------------------------------------------------------- --- event_history +-- ============================================================================ +-- CONVERT ENFORCEMENT TABLE TO HYPERTABLE +-- ============================================================================ +-- Enforcements are created and then updated exactly once (status changes from +-- `created` to `processed` or `disabled` within ~1 second). 
This single update +-- happens well before the 7-day compression window, so UPDATE on uncompressed +-- chunks works without issues. +-- +-- No FK constraints reference enforcement(id) — execution.enforcement was +-- created as a plain BIGINT in migration 000005. -- ---------------------------------------------------------------------------- -CREATE TABLE event_history ( - time TIMESTAMPTZ NOT NULL DEFAULT NOW(), - operation TEXT NOT NULL, - entity_id BIGINT NOT NULL, - entity_ref TEXT, - changed_fields TEXT[] NOT NULL DEFAULT '{}', - old_values JSONB, - new_values JSONB -); +ALTER TABLE enforcement DROP CONSTRAINT enforcement_pkey; +ALTER TABLE enforcement ADD PRIMARY KEY (id, created); -SELECT create_hypertable('event_history', 'time', - chunk_time_interval => INTERVAL '1 day'); +SELECT create_hypertable('enforcement', 'created', + chunk_time_interval => INTERVAL '1 day', + migrate_data => true); -CREATE INDEX idx_event_history_entity - ON event_history (entity_id, time DESC); +COMMENT ON TABLE enforcement IS 'Enforcements represent rule triggering by events (TimescaleDB hypertable partitioned on created)'; -CREATE INDEX idx_event_history_entity_ref - ON event_history (entity_ref, time DESC); +-- ============================================================================ +-- CONVERT EXECUTION TABLE TO HYPERTABLE +-- ============================================================================ +-- Executions are updated ~4 times during their lifecycle (requested → scheduled +-- → running → completed/failed), completing within at most ~1 day — well before +-- the 7-day compression window. The `updated` column and its BEFORE UPDATE +-- trigger are preserved (used by timeout monitor and UI). +-- +-- No FK constraints reference execution(id) — inquiry.execution, +-- workflow_execution.execution, execution.parent, and execution.original_execution +-- were all created as plain BIGINT columns in migrations 000005 and 000006. 
+-- +-- The existing execution_history hypertable and its trigger are preserved — +-- they track field-level diffs of each update, which remains valuable for +-- a mutable table. +-- ---------------------------------------------------------------------------- -CREATE INDEX idx_event_history_changed_fields - ON event_history USING GIN (changed_fields); +ALTER TABLE execution DROP CONSTRAINT execution_pkey; +ALTER TABLE execution ADD PRIMARY KEY (id, created); -COMMENT ON TABLE event_history IS 'Append-only history of field-level changes to the event table (TimescaleDB hypertable)'; -COMMENT ON COLUMN event_history.entity_ref IS 'Denormalized trigger_ref for JOIN-free queries'; +SELECT create_hypertable('execution', 'created', + chunk_time_interval => INTERVAL '1 day', + migrate_data => true); + +COMMENT ON TABLE execution IS 'Executions represent action runs with workflow support (TimescaleDB hypertable partitioned on created). Updated ~4 times during lifecycle, completing within ~1 day (well before 7-day compression window).'; -- ============================================================================ -- TRIGGER FUNCTIONS @@ -341,118 +355,6 @@ $$ LANGUAGE plpgsql; COMMENT ON FUNCTION record_worker_history() IS 'Records field-level changes to worker table in worker_history hypertable. 
Excludes heartbeat-only updates.'; --- ---------------------------------------------------------------------------- --- enforcement history trigger --- Tracked fields: status, payload --- ---------------------------------------------------------------------------- - -CREATE OR REPLACE FUNCTION record_enforcement_history() -RETURNS TRIGGER AS $$ -DECLARE - changed TEXT[] := '{}'; - old_vals JSONB := '{}'; - new_vals JSONB := '{}'; -BEGIN - IF TG_OP = 'INSERT' THEN - INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values) - VALUES (NOW(), 'INSERT', NEW.id, NEW.rule_ref, '{}', NULL, - jsonb_build_object( - 'rule_ref', NEW.rule_ref, - 'trigger_ref', NEW.trigger_ref, - 'status', NEW.status, - 'condition', NEW.condition, - 'event', NEW.event - )); - RETURN NEW; - END IF; - - IF TG_OP = 'DELETE' THEN - INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values) - VALUES (NOW(), 'DELETE', OLD.id, OLD.rule_ref, '{}', NULL, NULL); - RETURN OLD; - END IF; - - -- UPDATE: detect which fields changed - IF OLD.status IS DISTINCT FROM NEW.status THEN - changed := array_append(changed, 'status'); - old_vals := old_vals || jsonb_build_object('status', OLD.status); - new_vals := new_vals || jsonb_build_object('status', NEW.status); - END IF; - - IF OLD.payload IS DISTINCT FROM NEW.payload THEN - changed := array_append(changed, 'payload'); - old_vals := old_vals || jsonb_build_object('payload', OLD.payload); - new_vals := new_vals || jsonb_build_object('payload', NEW.payload); - END IF; - - -- Only record if something actually changed - IF array_length(changed, 1) > 0 THEN - INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values) - VALUES (NOW(), 'UPDATE', NEW.id, NEW.rule_ref, changed, old_vals, new_vals); - END IF; - - RETURN NEW; -END; -$$ LANGUAGE plpgsql; - -COMMENT ON FUNCTION record_enforcement_history() IS 
'Records field-level changes to enforcement table in enforcement_history hypertable'; - --- ---------------------------------------------------------------------------- --- event history trigger --- Tracked fields: config, payload --- ---------------------------------------------------------------------------- - -CREATE OR REPLACE FUNCTION record_event_history() -RETURNS TRIGGER AS $$ -DECLARE - changed TEXT[] := '{}'; - old_vals JSONB := '{}'; - new_vals JSONB := '{}'; -BEGIN - IF TG_OP = 'INSERT' THEN - INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values) - VALUES (NOW(), 'INSERT', NEW.id, NEW.trigger_ref, '{}', NULL, - jsonb_build_object( - 'trigger_ref', NEW.trigger_ref, - 'source', NEW.source, - 'source_ref', NEW.source_ref, - 'rule', NEW.rule, - 'rule_ref', NEW.rule_ref - )); - RETURN NEW; - END IF; - - IF TG_OP = 'DELETE' THEN - INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values) - VALUES (NOW(), 'DELETE', OLD.id, OLD.trigger_ref, '{}', NULL, NULL); - RETURN OLD; - END IF; - - -- UPDATE: detect which fields changed - IF OLD.config IS DISTINCT FROM NEW.config THEN - changed := array_append(changed, 'config'); - old_vals := old_vals || jsonb_build_object('config', OLD.config); - new_vals := new_vals || jsonb_build_object('config', NEW.config); - END IF; - - IF OLD.payload IS DISTINCT FROM NEW.payload THEN - changed := array_append(changed, 'payload'); - old_vals := old_vals || jsonb_build_object('payload', OLD.payload); - new_vals := new_vals || jsonb_build_object('payload', NEW.payload); - END IF; - - -- Only record if something actually changed - IF array_length(changed, 1) > 0 THEN - INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values) - VALUES (NOW(), 'UPDATE', NEW.id, NEW.trigger_ref, changed, old_vals, new_vals); - END IF; - - RETURN NEW; -END; -$$ LANGUAGE plpgsql; - -COMMENT ON FUNCTION 
record_event_history() IS 'Records field-level changes to event table in event_history hypertable'; - -- ============================================================================ -- ATTACH TRIGGERS TO OPERATIONAL TABLES -- ============================================================================ @@ -467,20 +369,11 @@ CREATE TRIGGER worker_history_trigger FOR EACH ROW EXECUTE FUNCTION record_worker_history(); -CREATE TRIGGER enforcement_history_trigger - AFTER INSERT OR UPDATE OR DELETE ON enforcement - FOR EACH ROW - EXECUTE FUNCTION record_enforcement_history(); - -CREATE TRIGGER event_history_trigger - AFTER INSERT OR UPDATE OR DELETE ON event - FOR EACH ROW - EXECUTE FUNCTION record_event_history(); - -- ============================================================================ -- COMPRESSION POLICIES -- ============================================================================ +-- History tables ALTER TABLE execution_history SET ( timescaledb.compress, timescaledb.compress_segmentby = 'entity_id', @@ -495,28 +388,39 @@ ALTER TABLE worker_history SET ( ); SELECT add_compression_policy('worker_history', INTERVAL '7 days'); -ALTER TABLE enforcement_history SET ( +-- Event table (hypertable) +ALTER TABLE event SET ( timescaledb.compress, - timescaledb.compress_segmentby = 'entity_id', - timescaledb.compress_orderby = 'time DESC' + timescaledb.compress_segmentby = 'trigger_ref', + timescaledb.compress_orderby = 'created DESC' ); -SELECT add_compression_policy('enforcement_history', INTERVAL '7 days'); +SELECT add_compression_policy('event', INTERVAL '7 days'); -ALTER TABLE event_history SET ( +-- Enforcement table (hypertable) +ALTER TABLE enforcement SET ( timescaledb.compress, - timescaledb.compress_segmentby = 'entity_id', - timescaledb.compress_orderby = 'time DESC' + timescaledb.compress_segmentby = 'rule_ref', + timescaledb.compress_orderby = 'created DESC' ); -SELECT add_compression_policy('event_history', INTERVAL '7 days'); +SELECT 
add_compression_policy('enforcement', INTERVAL '7 days'); + +-- Execution table (hypertable) +ALTER TABLE execution SET ( + timescaledb.compress, + timescaledb.compress_segmentby = 'action_ref', + timescaledb.compress_orderby = 'created DESC' +); +SELECT add_compression_policy('execution', INTERVAL '7 days'); -- ============================================================================ -- RETENTION POLICIES -- ============================================================================ SELECT add_retention_policy('execution_history', INTERVAL '90 days'); -SELECT add_retention_policy('enforcement_history', INTERVAL '90 days'); -SELECT add_retention_policy('event_history', INTERVAL '30 days'); SELECT add_retention_policy('worker_history', INTERVAL '180 days'); +SELECT add_retention_policy('event', INTERVAL '90 days'); +SELECT add_retention_policy('enforcement', INTERVAL '90 days'); +SELECT add_retention_policy('execution', INTERVAL '90 days'); -- ============================================================================ -- CONTINUOUS AGGREGATES @@ -530,6 +434,7 @@ DROP MATERIALIZED VIEW IF EXISTS execution_throughput_hourly CASCADE; DROP MATERIALIZED VIEW IF EXISTS event_volume_hourly CASCADE; DROP MATERIALIZED VIEW IF EXISTS worker_status_hourly CASCADE; DROP MATERIALIZED VIEW IF EXISTS enforcement_volume_hourly CASCADE; +DROP MATERIALIZED VIEW IF EXISTS execution_volume_hourly CASCADE; -- ---------------------------------------------------------------------------- -- execution_status_hourly @@ -582,17 +487,18 @@ SELECT add_continuous_aggregate_policy('execution_throughput_hourly', -- event_volume_hourly -- Tracks event creation volume per hour by trigger ref. -- Powers: event throughput monitoring widget. +-- NOTE: Queries the event table directly (it is now a hypertable) instead of +-- a separate event_history table. 
-- ---------------------------------------------------------------------------- CREATE MATERIALIZED VIEW event_volume_hourly WITH (timescaledb.continuous) AS SELECT - time_bucket('1 hour', time) AS bucket, - entity_ref AS trigger_ref, + time_bucket('1 hour', created) AS bucket, + trigger_ref, COUNT(*) AS event_count -FROM event_history -WHERE operation = 'INSERT' -GROUP BY bucket, entity_ref +FROM event +GROUP BY bucket, trigger_ref WITH NO DATA; SELECT add_continuous_aggregate_policy('event_volume_hourly', @@ -629,17 +535,18 @@ SELECT add_continuous_aggregate_policy('worker_status_hourly', -- enforcement_volume_hourly -- Tracks enforcement creation volume per hour by rule ref. -- Powers: rule activation rate monitoring. +-- NOTE: Queries the enforcement table directly (it is now a hypertable) +-- instead of a separate enforcement_history table. -- ---------------------------------------------------------------------------- CREATE MATERIALIZED VIEW enforcement_volume_hourly WITH (timescaledb.continuous) AS SELECT - time_bucket('1 hour', time) AS bucket, - entity_ref AS rule_ref, + time_bucket('1 hour', created) AS bucket, + rule_ref, COUNT(*) AS enforcement_count -FROM enforcement_history -WHERE operation = 'INSERT' -GROUP BY bucket, entity_ref +FROM enforcement +GROUP BY bucket, rule_ref WITH NO DATA; SELECT add_continuous_aggregate_policy('enforcement_volume_hourly', @@ -648,6 +555,34 @@ SELECT add_continuous_aggregate_policy('enforcement_volume_hourly', schedule_interval => INTERVAL '30 minutes' ); +-- ---------------------------------------------------------------------------- +-- execution_volume_hourly +-- Tracks execution creation volume per hour by action_ref and status. +-- This queries the execution hypertable directly (like event_volume_hourly +-- queries the event table). Complements the existing execution_status_hourly +-- and execution_throughput_hourly aggregates which query execution_history. 
+-- +-- Use case: direct execution volume monitoring without relying on the history +-- trigger (belt-and-suspenders). Note: execution rows are mutable, so status reflects the value at the last aggregate refresh (in practice the final status), not the status at creation. +-- ---------------------------------------------------------------------------- + +CREATE MATERIALIZED VIEW execution_volume_hourly +WITH (timescaledb.continuous) AS +SELECT + time_bucket('1 hour', created) AS bucket, + action_ref, + status, + COUNT(*) AS execution_count +FROM execution +GROUP BY bucket, action_ref, status +WITH NO DATA; + +SELECT add_continuous_aggregate_policy('execution_volume_hourly', + start_offset => INTERVAL '7 days', + end_offset => INTERVAL '1 hour', + schedule_interval => INTERVAL '30 minutes' +); + -- ============================================================================ -- INITIAL REFRESH NOTE -- ============================================================================ @@ -664,3 +599,4 @@ SELECT add_continuous_aggregate_policy('enforcement_volume_hourly', -- CALL refresh_continuous_aggregate('event_volume_hourly', NULL, NOW()); -- CALL refresh_continuous_aggregate('worker_status_hourly', NULL, NOW()); -- CALL refresh_continuous_aggregate('enforcement_volume_hourly', NULL, NOW()); +-- CALL refresh_continuous_aggregate('execution_volume_hourly', NULL, NOW()); diff --git a/packs.external/python_example b/packs.external/python_example index 57532ef..daf3d04 160000 --- a/packs.external/python_example +++ b/packs.external/python_example @@ -1 +1 @@ -Subproject commit 57532efabdbec5ab2400a44dbc70e7cc65ecf457 +Subproject commit daf3d0439572b2d22476c8da591206bf3afc2894 diff --git a/packs/examples/actions/list_example.sh b/packs/examples/actions/list_example.sh index 11f51b1..3db142d 100755 --- a/packs/examples/actions/list_example.sh +++ b/packs/examples/actions/list_example.sh @@ -1,17 +1,58 @@ -#!/bin/bash +#!/bin/sh # List Example Action # Demonstrates JSON Lines output format for streaming results +# +# This script uses pure POSIX shell without
external dependencies like jq. +# It reads parameters in DOTENV format from stdin until the ---ATTUNE_PARAMS_END--- delimiter. -set -euo pipefail +set -e -# Read parameters from stdin (JSON format) -read -r params_json +# Initialize count with default +count=5 -# Extract count parameter (default to 5 if not provided) -count=$(echo "$params_json" | jq -r '.count // 5') +# Read DOTENV-formatted parameters from stdin until delimiter +while IFS= read -r line; do + case "$line" in + *"---ATTUNE_PARAMS_END---"*) + break + ;; + count=*) + # Extract value after count= + count="${line#count=}" + # Remove quotes if present (both single and double) + case "$count" in + \"*\") + count="${count#\"}" + count="${count%\"}" + ;; + \'*\') + count="${count#\'}" + count="${count%\'}" + ;; + esac + ;; + esac +done + +# Validate count is a positive integer (fall back to the default on bad input) +case "$count" in + ''|*[!0-9]*) + count=5 + ;; +esac + +if [ "$count" -lt 1 ]; then + count=1 +elif [ "$count" -gt 100 ]; then + count=100 +fi # Generate JSON Lines output (one JSON object per line) -for i in $(seq 1 "$count"); do +i=1 +while [ "$i" -le "$count" ]; do timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ") - echo "{\"id\": $i, \"value\": \"item_$i\", \"timestamp\": \"$timestamp\"}" + printf '{"id": %d, "value": "item_%d", "timestamp": "%s"}\n' "$i" "$i" "$timestamp" + i=$((i + 1)) done + +exit 0 diff --git a/packs/examples/actions/list_example.yaml b/packs/examples/actions/list_example.yaml index 2553942..33353c6 100644 --- a/packs/examples/actions/list_example.yaml +++ b/packs/examples/actions/list_example.yaml @@ -12,9 +12,9 @@ runner_type: shell # Entry point is the shell script to execute entry_point: list_example.sh -# Parameter delivery: stdin for secure parameter passing +# Parameter delivery: stdin for secure parameter passing (no env vars) parameter_delivery: stdin -parameter_format: json +parameter_format: dotenv # Output format: jsonl (each line is a JSON object, collected into array) output_format: jsonl diff --git
a/web/src/api/models/ActionResponse.ts b/web/src/api/models/ActionResponse.ts index 2568d8c..e5f783b 100644 --- a/web/src/api/models/ActionResponse.ts +++ b/web/src/api/models/ActionResponse.ts @@ -6,57 +6,64 @@ * Response DTO for action information */ export type ActionResponse = { - /** - * Creation timestamp - */ - created: string; - /** - * Action description - */ - description: string; - /** - * Entry point - */ - entrypoint: string; - /** - * Action ID - */ - id: number; - /** - * Whether this is an ad-hoc action (not from pack installation) - */ - is_adhoc: boolean; - /** - * Human-readable label - */ - label: string; - /** - * Output schema - */ - out_schema: any | null; - /** - * Pack ID - */ - pack: number; - /** - * Pack reference - */ - pack_ref: string; - /** - * Parameter schema - */ - param_schema: any | null; - /** - * Unique reference identifier - */ - ref: string; - /** - * Runtime ID - */ - runtime?: number | null; - /** - * Last update timestamp - */ - updated: string; + /** + * Creation timestamp + */ + created: string; + /** + * Action description + */ + description: string; + /** + * Entry point + */ + entrypoint: string; + /** + * Action ID + */ + id: number; + /** + * Whether this is an ad-hoc action (not from pack installation) + */ + is_adhoc: boolean; + /** + * Human-readable label + */ + label: string; + /** + * Output schema + */ + out_schema: any | null; + /** + * Pack ID + */ + pack: number; + /** + * Pack reference + */ + pack_ref: string; + /** + * Parameter schema (StackStorm-style with inline required/secret) + */ + param_schema: any | null; + /** + * Unique reference identifier + */ + ref: string; + /** + * Runtime ID + */ + runtime?: number | null; + /** + * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0") + */ + runtime_version_constraint?: string | null; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow definition ID (non-null if this action is a workflow) + */ + 
workflow_def?: number | null; }; - diff --git a/web/src/api/models/ActionSummary.ts b/web/src/api/models/ActionSummary.ts index 6efa9c8..cb9421f 100644 --- a/web/src/api/models/ActionSummary.ts +++ b/web/src/api/models/ActionSummary.ts @@ -6,41 +6,48 @@ * Simplified action response (for list endpoints) */ export type ActionSummary = { - /** - * Creation timestamp - */ - created: string; - /** - * Action description - */ - description: string; - /** - * Entry point - */ - entrypoint: string; - /** - * Action ID - */ - id: number; - /** - * Human-readable label - */ - label: string; - /** - * Pack reference - */ - pack_ref: string; - /** - * Unique reference identifier - */ - ref: string; - /** - * Runtime ID - */ - runtime?: number | null; - /** - * Last update timestamp - */ - updated: string; + /** + * Creation timestamp + */ + created: string; + /** + * Action description + */ + description: string; + /** + * Entry point + */ + entrypoint: string; + /** + * Action ID + */ + id: number; + /** + * Human-readable label + */ + label: string; + /** + * Pack reference + */ + pack_ref: string; + /** + * Unique reference identifier + */ + ref: string; + /** + * Runtime ID + */ + runtime?: number | null; + /** + * Semver version constraint for the runtime + */ + runtime_version_constraint?: string | null; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow definition ID (non-null if this action is a workflow) + */ + workflow_def?: number | null; }; - diff --git a/web/src/api/models/ApiResponse_ActionResponse.ts b/web/src/api/models/ApiResponse_ActionResponse.ts index 817c43c..6f60675 100644 --- a/web/src/api/models/ApiResponse_ActionResponse.ts +++ b/web/src/api/models/ApiResponse_ActionResponse.ts @@ -6,66 +6,73 @@ * Standard API response wrapper */ export type ApiResponse_ActionResponse = { + /** + * Response DTO for action information + */ + data: { /** - * Response DTO for action information + * Creation timestamp */ - data: { - /** - * Creation 
timestamp - */ - created: string; - /** - * Action description - */ - description: string; - /** - * Entry point - */ - entrypoint: string; - /** - * Action ID - */ - id: number; - /** - * Whether this is an ad-hoc action (not from pack installation) - */ - is_adhoc: boolean; - /** - * Human-readable label - */ - label: string; - /** - * Output schema - */ - out_schema: any | null; - /** - * Pack ID - */ - pack: number; - /** - * Pack reference - */ - pack_ref: string; - /** - * Parameter schema - */ - param_schema: any | null; - /** - * Unique reference identifier - */ - ref: string; - /** - * Runtime ID - */ - runtime?: number | null; - /** - * Last update timestamp - */ - updated: string; - }; + created: string; /** - * Optional message + * Action description */ - message?: string | null; + description: string; + /** + * Entry point + */ + entrypoint: string; + /** + * Action ID + */ + id: number; + /** + * Whether this is an ad-hoc action (not from pack installation) + */ + is_adhoc: boolean; + /** + * Human-readable label + */ + label: string; + /** + * Output schema + */ + out_schema: any | null; + /** + * Pack ID + */ + pack: number; + /** + * Pack reference + */ + pack_ref: string; + /** + * Parameter schema (StackStorm-style with inline required/secret) + */ + param_schema: any | null; + /** + * Unique reference identifier + */ + ref: string; + /** + * Runtime ID + */ + runtime?: number | null; + /** + * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0") + */ + runtime_version_constraint?: string | null; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow definition ID (non-null if this action is a workflow) + */ + workflow_def?: number | null; + }; + /** + * Optional message + */ + message?: string | null; }; - diff --git a/web/src/api/models/ApiResponse_EnforcementResponse.ts b/web/src/api/models/ApiResponse_EnforcementResponse.ts index 576e781..2b93fb1 100644 --- 
a/web/src/api/models/ApiResponse_EnforcementResponse.ts +++ b/web/src/api/models/ApiResponse_EnforcementResponse.ts @@ -38,6 +38,10 @@ export type ApiResponse_EnforcementResponse = { * Enforcement payload */ payload: Record<string, any>; + /** + * Timestamp when the enforcement was resolved (status changed from created to processed/disabled) + */ + resolved_at?: string | null; rule?: (null | i64); /** * Rule reference @@ -51,10 +55,6 @@ export type ApiResponse_EnforcementResponse = { * Trigger reference */ trigger_ref: string; - /** - * Last update timestamp - */ - updated: string; }; /** * Optional message diff --git a/web/src/api/models/ApiResponse_EventResponse.ts b/web/src/api/models/ApiResponse_EventResponse.ts index 299ea36..bbea5f9 100644 --- a/web/src/api/models/ApiResponse_EventResponse.ts +++ b/web/src/api/models/ApiResponse_EventResponse.ts @@ -42,10 +42,6 @@ export type ApiResponse_EventResponse = { * Trigger reference */ trigger_ref: string; - /** - * Last update timestamp - */ - updated: string; }; /** * Optional message diff --git a/web/src/api/models/ApiResponse_ExecutionResponse.ts b/web/src/api/models/ApiResponse_ExecutionResponse.ts index c212a51..77e9f57 100644 --- a/web/src/api/models/ApiResponse_ExecutionResponse.ts +++ b/web/src/api/models/ApiResponse_ExecutionResponse.ts @@ -2,63 +2,79 @@ /* istanbul ignore file */ /* tslint:disable */ /* eslint-disable */ -import type { ExecutionStatus } from './ExecutionStatus'; +import type { ExecutionStatus } from "./ExecutionStatus"; /** * Standard API response wrapper */ export type ApiResponse_ExecutionResponse = { + /** + * Response DTO for execution information + */ + data: { /** - * Response DTO for execution information + * Action ID (optional, may be null for ad-hoc executions) */ - data: { - /** - * Action ID (optional, may be null for ad-hoc executions) - */ - action?: number | null; - /** - * Action reference - */ - action_ref: string; - /** - * Execution configuration/parameters - */ - config: Record<string, any>; - 
/** - * Creation timestamp - */ - created: string; - /** - * Enforcement ID (rule enforcement that triggered this) - */ - enforcement?: number | null; - /** - * Executor ID (worker/executor that ran this) - */ - executor?: number | null; - /** - * Execution ID - */ - id: number; - /** - * Parent execution ID (for nested/child executions) - */ - parent?: number | null; - /** - * Execution result/output - */ - result: Record<string, any>; - /** - * Execution status - */ - status: ExecutionStatus; - /** - * Last update timestamp - */ - updated: string; - }; + action?: number | null; /** - * Optional message + * Action reference */ - message?: string | null; + action_ref: string; + /** + * Execution configuration/parameters + */ + config: Record<string, any>; + /** + * Creation timestamp + */ + created: string; + /** + * Enforcement ID (rule enforcement that triggered this) + */ + enforcement?: number | null; + /** + * Executor ID (worker/executor that ran this) + */ + executor?: number | null; + /** + * Execution ID + */ + id: number; + /** + * Parent execution ID (for nested/child executions) + */ + parent?: number | null; + /** + * Execution result/output + */ + result: Record<string, any>; + /** + * Execution status + */ + status: ExecutionStatus; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow task metadata (only populated for workflow task executions) + */ + workflow_task?: { + workflow_execution: number; + task_name: string; + task_index?: number | null; + task_batch?: number | null; + retry_count: number; + max_retries: number; + next_retry_at?: string | null; + timeout_seconds?: number | null; + timed_out: boolean; + duration_ms?: number | null; + started_at?: string | null; + completed_at?: string | null; + } | null; + }; + /** + * Optional message + */ + message?: string | null; }; - diff --git a/web/src/api/models/ApiResponse_PackResponse.ts b/web/src/api/models/ApiResponse_PackResponse.ts index 11747e5..b0b3879 100644 ---
a/web/src/api/models/ApiResponse_PackResponse.ts +++ b/web/src/api/models/ApiResponse_PackResponse.ts @@ -22,6 +22,10 @@ export type ApiResponse_PackResponse = { * Creation timestamp */ created: string; + /** + * Pack dependencies (refs of required packs) + */ + dependencies: Array<string>; /** * Pack description */ @@ -47,7 +51,7 @@ export type ApiResponse_PackResponse = { */ ref: string; /** - * Runtime dependencies + * Runtime dependencies (e.g., shell, python, nodejs) */ runtime_deps: Array<string>; /** diff --git a/web/src/api/models/ApiResponse_RuleResponse.ts b/web/src/api/models/ApiResponse_RuleResponse.ts index 1aa26bd..2f54ddb 100644 --- a/web/src/api/models/ApiResponse_RuleResponse.ts +++ b/web/src/api/models/ApiResponse_RuleResponse.ts @@ -11,9 +11,9 @@ export type ApiResponse_RuleResponse = { */ data: { /** - * Action ID + * Action ID (null if the referenced action has been deleted) */ - action: number; + action?: number | null; /** * Parameters to pass to the action when rule is triggered */ @@ -63,9 +63,9 @@ export type ApiResponse_RuleResponse = { */ ref: string; /** - * Trigger ID + * Trigger ID (null if the referenced trigger has been deleted) */ - trigger: number; + trigger?: number | null; /** * Parameters for trigger configuration and event filtering */ diff --git a/web/src/api/models/ApiResponse_SensorResponse.ts b/web/src/api/models/ApiResponse_SensorResponse.ts index add00f4..e3020b0 100644 --- a/web/src/api/models/ApiResponse_SensorResponse.ts +++ b/web/src/api/models/ApiResponse_SensorResponse.ts @@ -43,7 +43,7 @@ export type ApiResponse_SensorResponse = { */ pack_ref?: string | null; /** - * Parameter schema + * Parameter schema (StackStorm-style with inline required/secret) */ param_schema: any | null; /** diff --git a/web/src/api/models/ApiResponse_TriggerResponse.ts b/web/src/api/models/ApiResponse_TriggerResponse.ts index b2f80d8..9bc676e 100644 --- a/web/src/api/models/ApiResponse_TriggerResponse.ts +++
b/web/src/api/models/ApiResponse_TriggerResponse.ts @@ -47,7 +47,7 @@ export type ApiResponse_TriggerResponse = { */ pack_ref?: string | null; /** - * Parameter schema + * Parameter schema (StackStorm-style with inline required/secret) */ param_schema: any | null; /** diff --git a/web/src/api/models/ApiResponse_WorkflowResponse.ts b/web/src/api/models/ApiResponse_WorkflowResponse.ts index 8319886..07c0119 100644 --- a/web/src/api/models/ApiResponse_WorkflowResponse.ts +++ b/web/src/api/models/ApiResponse_WorkflowResponse.ts @@ -47,7 +47,7 @@ export type ApiResponse_WorkflowResponse = { */ pack_ref: string; /** - * Parameter schema + * Parameter schema (StackStorm-style with inline required/secret) */ param_schema: any | null; /** diff --git a/web/src/api/models/CreateActionRequest.ts b/web/src/api/models/CreateActionRequest.ts index 5ce6603..2a6619d 100644 --- a/web/src/api/models/CreateActionRequest.ts +++ b/web/src/api/models/CreateActionRequest.ts @@ -19,7 +19,7 @@ export type CreateActionRequest = { */ label: string; /** - * Output schema (JSON Schema) defining expected outputs + * Output schema (flat format) defining expected outputs with inline required/secret */ out_schema?: any | null; /** @@ -27,7 +27,7 @@ export type CreateActionRequest = { */ pack_ref: string; /** - * Parameter schema (JSON Schema) defining expected inputs + * Parameter schema (StackStorm-style) defining expected inputs with inline required/secret */ param_schema?: any | null; /** @@ -38,5 +38,9 @@ export type CreateActionRequest = { * Optional runtime ID for this action */ runtime?: number | null; + /** + * Optional semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0") + */ + runtime_version_constraint?: string | null; }; diff --git a/web/src/api/models/CreateInquiryRequest.ts b/web/src/api/models/CreateInquiryRequest.ts index a02a8cf..ee3d38e 100644 --- a/web/src/api/models/CreateInquiryRequest.ts +++ b/web/src/api/models/CreateInquiryRequest.ts @@ -17,7 
+17,7 @@ export type CreateInquiryRequest = { */ prompt: string; /** - * Optional JSON schema for the expected response format + * Optional schema for the expected response format (flat format with inline required/secret) */ response_schema: Record<string, any>; /** diff --git a/web/src/api/models/CreatePackRequest.ts b/web/src/api/models/CreatePackRequest.ts index a001d9e..7c615bd 100644 --- a/web/src/api/models/CreatePackRequest.ts +++ b/web/src/api/models/CreatePackRequest.ts @@ -7,13 +7,17 @@ */ export type CreatePackRequest = { /** - * Configuration schema (JSON Schema) + * Configuration schema (flat format with inline required/secret per parameter) */ conf_schema?: Record<string, any>; /** * Pack configuration values */ config?: Record<string, any>; + /** + * Pack dependencies (refs of required packs) + */ + dependencies?: Array<string>; /** * Pack description */ @@ -35,7 +39,7 @@ export type CreatePackRequest = { */ ref: string; /** - * Runtime dependencies (refs of required packs) + * Runtime dependencies (e.g., shell, python, nodejs) */ runtime_deps?: Array<string>; /** diff --git a/web/src/api/models/CreateSensorRequest.ts b/web/src/api/models/CreateSensorRequest.ts index cc55bee..1dcfde8 100644 --- a/web/src/api/models/CreateSensorRequest.ts +++ b/web/src/api/models/CreateSensorRequest.ts @@ -31,7 +31,7 @@ export type CreateSensorRequest = { */ pack_ref: string; /** - * Parameter schema (JSON Schema) for sensor configuration + * Parameter schema (flat format) for sensor configuration */ param_schema?: any | null; /** diff --git a/web/src/api/models/CreateTriggerRequest.ts b/web/src/api/models/CreateTriggerRequest.ts index 8ba8d4b..442de5f 100644 --- a/web/src/api/models/CreateTriggerRequest.ts +++ b/web/src/api/models/CreateTriggerRequest.ts @@ -19,7 +19,7 @@ export type CreateTriggerRequest = { */ label: string; /** - * Output schema (JSON Schema) defining event data structure + * Output schema (flat format) defining event data structure with inline required/secret */ out_schema?: any | null; /** @@ -27,7
+27,7 @@ export type CreateTriggerRequest = { */ pack_ref?: string | null; /** - * Parameter schema (JSON Schema) defining event payload structure + * Parameter schema (StackStorm-style) defining trigger configuration with inline required/secret */ param_schema?: any | null; /** diff --git a/web/src/api/models/CreateWorkflowRequest.ts b/web/src/api/models/CreateWorkflowRequest.ts index 1e9f1c2..42ec7e8 100644 --- a/web/src/api/models/CreateWorkflowRequest.ts +++ b/web/src/api/models/CreateWorkflowRequest.ts @@ -23,7 +23,7 @@ export type CreateWorkflowRequest = { */ label: string; /** - * Output schema (JSON Schema) defining expected outputs + * Output schema (flat format) defining expected outputs with inline required/secret */ out_schema: Record<string, any>; /** @@ -31,7 +31,7 @@ export type CreateWorkflowRequest = { */ pack_ref: string; /** - * Parameter schema (JSON Schema) defining expected inputs + * Parameter schema (StackStorm-style) defining expected inputs with inline required/secret */ param_schema: Record<string, any>; /** diff --git a/web/src/api/models/EnforcementResponse.ts b/web/src/api/models/EnforcementResponse.ts index aebd726..b0cdd40 100644 --- a/web/src/api/models/EnforcementResponse.ts +++ b/web/src/api/models/EnforcementResponse.ts @@ -34,6 +34,10 @@ export type EnforcementResponse = { * Enforcement payload */ payload: Record<string, any>; + /** + * Timestamp when the enforcement was resolved (status changed from created to processed/disabled) + */ + resolved_at?: string | null; rule?: (null | i64); /** * Rule reference @@ -47,9 +51,5 @@ export type EnforcementResponse = { * Trigger reference */ trigger_ref: string; - /** - * Last update timestamp - */ - updated: string; }; diff --git a/web/src/api/models/EventResponse.ts b/web/src/api/models/EventResponse.ts index b2af186..d3d4120 100644 --- a/web/src/api/models/EventResponse.ts +++ b/web/src/api/models/EventResponse.ts @@ -38,9 +38,5 @@ export type EventResponse = { * Trigger reference */ trigger_ref: string; - /** - * Last
update timestamp - */ - updated: string; }; diff --git a/web/src/api/models/ExecutionResponse.ts b/web/src/api/models/ExecutionResponse.ts index ba2a50b..763dd36 100644 --- a/web/src/api/models/ExecutionResponse.ts +++ b/web/src/api/models/ExecutionResponse.ts @@ -2,54 +2,70 @@ /* istanbul ignore file */ /* tslint:disable */ /* eslint-disable */ -import type { ExecutionStatus } from './ExecutionStatus'; +import type { ExecutionStatus } from "./ExecutionStatus"; /** * Response DTO for execution information */ export type ExecutionResponse = { - /** - * Action ID (optional, may be null for ad-hoc executions) - */ - action?: number | null; - /** - * Action reference - */ - action_ref: string; - /** - * Execution configuration/parameters - */ - config: Record<string, any>; - /** - * Creation timestamp - */ - created: string; - /** - * Enforcement ID (rule enforcement that triggered this) - */ - enforcement?: number | null; - /** - * Executor ID (worker/executor that ran this) - */ - executor?: number | null; - /** - * Execution ID - */ - id: number; - /** - * Parent execution ID (for nested/child executions) - */ - parent?: number | null; - /** - * Execution result/output - */ - result: Record<string, any>; - /** - * Execution status - */ - status: ExecutionStatus; - /** - * Last update timestamp - */ - updated: string; + /** + * Action ID (optional, may be null for ad-hoc executions) + */ + action?: number | null; + /** + * Action reference + */ + action_ref: string; + /** + * Execution configuration/parameters + */ + config: Record<string, any>; + /** + * Creation timestamp + */ + created: string; + /** + * Enforcement ID (rule enforcement that triggered this) + */ + enforcement?: number | null; + /** + * Executor ID (worker/executor that ran this) + */ + executor?: number | null; + /** + * Execution ID + */ + id: number; + /** + * Parent execution ID (for nested/child executions) + */ + parent?: number | null; + /** + * Execution result/output + */ + result: Record<string, any>; + /** + * Execution status + */ + 
status: ExecutionStatus;
+  /**
+   * Last update timestamp
+   */
+  updated: string;
+  /**
+   * Workflow task metadata (only populated for workflow task executions)
+   */
+  workflow_task?: {
+    workflow_execution: number;
+    task_name: string;
+    task_index?: number | null;
+    task_batch?: number | null;
+    retry_count: number;
+    max_retries: number;
+    next_retry_at?: string | null;
+    timeout_seconds?: number | null;
+    timed_out: boolean;
+    duration_ms?: number | null;
+    started_at?: string | null;
+    completed_at?: string | null;
+  } | null;
 };
-
diff --git a/web/src/api/models/ExecutionSummary.ts b/web/src/api/models/ExecutionSummary.ts
index de27ca4..b46dbd6 100644
--- a/web/src/api/models/ExecutionSummary.ts
+++ b/web/src/api/models/ExecutionSummary.ts
@@ -2,46 +2,62 @@
 /* istanbul ignore file */
 /* tslint:disable */
 /* eslint-disable */
-import type { ExecutionStatus } from './ExecutionStatus';
+import type { ExecutionStatus } from "./ExecutionStatus";
 /**
  * Simplified execution response (for list endpoints)
  */
 export type ExecutionSummary = {
-    /**
-     * Action reference
-     */
-    action_ref: string;
-    /**
-     * Creation timestamp
-     */
-    created: string;
-    /**
-     * Enforcement ID
-     */
-    enforcement?: number | null;
-    /**
-     * Execution ID
-     */
-    id: number;
-    /**
-     * Parent execution ID
-     */
-    parent?: number | null;
-    /**
-     * Rule reference (if triggered by a rule)
-     */
-    rule_ref?: string | null;
-    /**
-     * Execution status
-     */
-    status: ExecutionStatus;
-    /**
-     * Trigger reference (if triggered by a trigger)
-     */
-    trigger_ref?: string | null;
-    /**
-     * Last update timestamp
-     */
-    updated: string;
+  /**
+   * Action reference
+   */
+  action_ref: string;
+  /**
+   * Creation timestamp
+   */
+  created: string;
+  /**
+   * Enforcement ID
+   */
+  enforcement?: number | null;
+  /**
+   * Execution ID
+   */
+  id: number;
+  /**
+   * Parent execution ID
+   */
+  parent?: number | null;
+  /**
+   * Rule reference (if triggered by a rule)
+   */
+  rule_ref?: string | null;
+  /**
+   * Execution status
+   */
+  status: ExecutionStatus;
+  /**
+   * Trigger reference (if triggered by a trigger)
+   */
+  trigger_ref?: string | null;
+  /**
+   * Last update timestamp
+   */
+  updated: string;
+  /**
+   * Workflow task metadata (only populated for workflow task executions)
+   */
+  workflow_task?: {
+    workflow_execution: number;
+    task_name: string;
+    task_index?: number | null;
+    task_batch?: number | null;
+    retry_count: number;
+    max_retries: number;
+    next_retry_at?: string | null;
+    timeout_seconds?: number | null;
+    timed_out: boolean;
+    duration_ms?: number | null;
+    started_at?: string | null;
+    completed_at?: string | null;
+  } | null;
 };
-
diff --git a/web/src/api/models/InstallPackRequest.ts b/web/src/api/models/InstallPackRequest.ts
index 12edd01..83729e7 100644
--- a/web/src/api/models/InstallPackRequest.ts
+++ b/web/src/api/models/InstallPackRequest.ts
@@ -6,20 +6,21 @@
  * Request DTO for installing a pack from remote source
  */
 export type InstallPackRequest = {
-    /**
-     * Git branch, tag, or commit reference
-     */
-    ref_spec?: string | null;
-    /**
-     * Skip dependency validation (not recommended)
-     */
-    skip_deps?: boolean;
-    /**
-     * Skip running pack tests during installation
-     */
-    skip_tests?: boolean;
-    /**
-     * Repository URL or source location
-     */
-    source: string;
+  /**
+   * Git branch, tag, or commit reference
+   */
+  ref_spec?: string | null;
+  /**
+   * Skip dependency validation (not recommended)
+   */
+  skip_deps?: boolean;
+  /**
+   * Skip running pack tests during installation
+   */
+  skip_tests?: boolean;
+  /**
+   * Repository URL or source location
+   */
+  source: string;
 };
+
diff --git a/web/src/api/models/PackResponse.ts b/web/src/api/models/PackResponse.ts
index 256cc28..ec8f76c 100644
--- a/web/src/api/models/PackResponse.ts
+++ b/web/src/api/models/PackResponse.ts
@@ -18,6 +18,10 @@ export type PackResponse = {
      * Creation timestamp
      */
     created: string;
+    /**
+     * Pack dependencies (refs of required packs)
+     */
+    dependencies: Array<string>;
     /**
      * Pack description
      */
@@ -43,7 +47,7 @@ export type PackResponse = {
      */
     ref: string;
     /**
-     * Runtime dependencies
+     * Runtime dependencies (e.g., shell, python, nodejs)
      */
     runtime_deps: Array<string>;
     /**
diff --git a/web/src/api/models/PaginatedResponse_ActionSummary.ts b/web/src/api/models/PaginatedResponse_ActionSummary.ts
index d3edb35..8fe79dc 100644
--- a/web/src/api/models/PaginatedResponse_ActionSummary.ts
+++ b/web/src/api/models/PaginatedResponse_ActionSummary.ts
@@ -2,55 +2,62 @@
 /* istanbul ignore file */
 /* tslint:disable */
 /* eslint-disable */
-import type { PaginationMeta } from './PaginationMeta';
+import type { PaginationMeta } from "./PaginationMeta";
 /**
  * Paginated response wrapper
  */
 export type PaginatedResponse_ActionSummary = {
+  /**
+   * The data items
+   */
+  data: Array<{
     /**
-     * The data items
      */
-    data: Array<{
-        /**
-         * Creation timestamp
-         */
-        created: string;
-        /**
-         * Action description
-         */
-        description: string;
-        /**
-         * Entry point
-         */
-        entrypoint: string;
-        /**
-         * Action ID
-         */
-        id: number;
-        /**
-         * Human-readable label
-         */
-        label: string;
-        /**
-         * Pack reference
-         */
-        pack_ref: string;
-        /**
-         * Unique reference identifier
-         */
-        ref: string;
-        /**
-         * Runtime ID
-         */
-        runtime?: number | null;
-        /**
-         * Last update timestamp
-         */
-        updated: string;
-    }>;
+    created: string;
     /**
-     * Pagination metadata
+     * Action description
      */
-    pagination: PaginationMeta;
+    description: string;
+    /**
+     * Entry point
+     */
+    entrypoint: string;
+    /**
+     * Action ID
+     */
+    id: number;
+    /**
+     * Human-readable label
+     */
+    label: string;
+    /**
+     * Pack reference
+     */
+    pack_ref: string;
+    /**
+     * Unique reference identifier
+     */
+    ref: string;
+    /**
+     * Runtime ID
+     */
+    runtime?: number | null;
+    /**
+     * Semver version constraint for the runtime
+     */
+    runtime_version_constraint?: string | null;
+    /**
+     * Last update timestamp
+     */
+    updated: string;
+    /**
+     * Workflow definition ID (non-null if this action is a workflow)
+     */
+    workflow_def?: number | null;
+  }>;
+  /**
+   * Pagination metadata
+   */
+  pagination: PaginationMeta;
 };
-
diff --git a/web/src/api/models/PaginatedResponse_ExecutionSummary.ts b/web/src/api/models/PaginatedResponse_ExecutionSummary.ts
index 4c59e53..f0a12de 100644
--- a/web/src/api/models/PaginatedResponse_ExecutionSummary.ts
+++ b/web/src/api/models/PaginatedResponse_ExecutionSummary.ts
@@ -2,56 +2,72 @@
 /* istanbul ignore file */
 /* tslint:disable */
 /* eslint-disable */
-import type { ExecutionStatus } from './ExecutionStatus';
-import type { PaginationMeta } from './PaginationMeta';
+import type { ExecutionStatus } from "./ExecutionStatus";
+import type { PaginationMeta } from "./PaginationMeta";
 /**
  * Paginated response wrapper
  */
 export type PaginatedResponse_ExecutionSummary = {
+  /**
+   * The data items
+   */
+  data: Array<{
     /**
-     * The data items
+     * Action reference
      */
-    data: Array<{
-        /**
-         * Action reference
-         */
-        action_ref: string;
-        /**
-         * Creation timestamp
-         */
-        created: string;
-        /**
-         * Enforcement ID
-         */
-        enforcement?: number | null;
-        /**
-         * Execution ID
-         */
-        id: number;
-        /**
-         * Parent execution ID
-         */
-        parent?: number | null;
-        /**
-         * Rule reference (if triggered by a rule)
-         */
-        rule_ref?: string | null;
-        /**
-         * Execution status
-         */
-        status: ExecutionStatus;
-        /**
-         * Trigger reference (if triggered by a trigger)
-         */
-        trigger_ref?: string | null;
-        /**
-         * Last update timestamp
-         */
-        updated: string;
-    }>;
+    action_ref: string;
     /**
-     * Pagination metadata
+     * Creation timestamp
      */
-    pagination: PaginationMeta;
+    created: string;
+    /**
+     * Enforcement ID
+     */
+    enforcement?: number | null;
+    /**
+     * Execution ID
+     */
+    id: number;
+    /**
+     * Parent execution ID
+     */
+    parent?: number | null;
+    /**
+     * Rule reference (if triggered by a rule)
+     */
+    rule_ref?: string | null;
+    /**
+     * Execution status
+     */
+    status: ExecutionStatus;
+    /**
+     * Trigger reference (if triggered by a trigger)
+     */
+    trigger_ref?: string | null;
+    /**
+     * Last update timestamp
+     */
+    updated: string;
+    /**
+     * Workflow task metadata (only populated for workflow task executions)
+     */
+    workflow_task?: {
+      workflow_execution: number;
+      task_name: string;
+      task_index?: number | null;
+      task_batch?: number | null;
+      retry_count: number;
+      max_retries: number;
+      next_retry_at?: string | null;
+      timeout_seconds?: number | null;
+      timed_out: boolean;
+      duration_ms?: number | null;
+      started_at?: string | null;
+      completed_at?: string | null;
+    } | null;
+  }>;
+  /**
+   * Pagination metadata
+   */
+  pagination: PaginationMeta;
 };
-
diff --git a/web/src/api/models/RuleResponse.ts b/web/src/api/models/RuleResponse.ts
index a84fffa..e080ed1 100644
--- a/web/src/api/models/RuleResponse.ts
+++ b/web/src/api/models/RuleResponse.ts
@@ -7,9 +7,9 @@
  */
 export type RuleResponse = {
     /**
-     * Action ID
+     * Action ID (null if the referenced action has been deleted)
      */
-    action: number;
+    action?: number | null;
     /**
      * Parameters to pass to the action when rule is triggered
      */
@@ -59,9 +59,9 @@ export type RuleResponse = {
      */
     ref: string;
     /**
-     * Trigger ID
+     * Trigger ID (null if the referenced trigger has been deleted)
      */
-    trigger: number;
+    trigger?: number | null;
     /**
      * Parameters for trigger configuration and event filtering
      */
diff --git a/web/src/api/models/SensorResponse.ts b/web/src/api/models/SensorResponse.ts
index 663590f..a45e8cd 100644
--- a/web/src/api/models/SensorResponse.ts
+++ b/web/src/api/models/SensorResponse.ts
@@ -39,7 +39,7 @@ export type SensorResponse = {
      */
     pack_ref?: string | null;
     /**
-     * Parameter schema
+     * Parameter schema (StackStorm-style with inline required/secret)
      */
     param_schema: any | null;
     /**
diff --git a/web/src/api/models/TriggerResponse.ts b/web/src/api/models/TriggerResponse.ts
index e57179f..13e9e6b 100644
--- a/web/src/api/models/TriggerResponse.ts
+++ b/web/src/api/models/TriggerResponse.ts
@@ -43,7 +43,7 @@ export type TriggerResponse = {
      */
     pack_ref?: string | null;
     /**
-     * Parameter schema
+     * Parameter schema (StackStorm-style with inline required/secret)
      */
     param_schema: any | null;
     /**
diff --git a/web/src/api/models/UpdateActionRequest.ts b/web/src/api/models/UpdateActionRequest.ts
index 1fa3e51..434e1b1 100644
--- a/web/src/api/models/UpdateActionRequest.ts
+++ b/web/src/api/models/UpdateActionRequest.ts
@@ -23,12 +23,16 @@ export type UpdateActionRequest = {
      */
     out_schema: any | null;
     /**
-     * Parameter schema
+     * Parameter schema (StackStorm-style with inline required/secret)
      */
     param_schema: any | null;
     /**
      * Runtime ID
      */
     runtime?: number | null;
+    /**
+     * Optional semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
+     */
+    runtime_version_constraint?: string | null;
 };
diff --git a/web/src/api/models/UpdatePackRequest.ts b/web/src/api/models/UpdatePackRequest.ts
index 5b3b46a..c94737e 100644
--- a/web/src/api/models/UpdatePackRequest.ts
+++ b/web/src/api/models/UpdatePackRequest.ts
@@ -14,6 +14,10 @@ export type UpdatePackRequest = {
      * Pack configuration values
      */
     config: any | null;
+    /**
+     * Pack dependencies (refs of required packs)
+     */
+    dependencies?: any[] | null;
     /**
      * Pack description
      */
@@ -31,7 +35,7 @@
      */
     meta: any | null;
     /**
-     * Runtime dependencies
+     * Runtime dependencies (e.g., shell, python, nodejs)
      */
     runtime_deps?: any[] | null;
     /**
diff --git a/web/src/api/models/UpdateSensorRequest.ts b/web/src/api/models/UpdateSensorRequest.ts
index bca06db..8dd634d 100644
--- a/web/src/api/models/UpdateSensorRequest.ts
+++ b/web/src/api/models/UpdateSensorRequest.ts
@@ -23,7 +23,7 @@ export type UpdateSensorRequest = {
      */
     label?: string | null;
     /**
-     * Parameter schema
+     * Parameter schema (StackStorm-style with inline required/secret)
      */
     param_schema: any | null;
 };
diff --git a/web/src/api/models/UpdateTriggerRequest.ts b/web/src/api/models/UpdateTriggerRequest.ts
index 5402ff7..5f18c45 100644
--- a/web/src/api/models/UpdateTriggerRequest.ts
+++ b/web/src/api/models/UpdateTriggerRequest.ts
@@ -23,7 +23,7 @@ export type UpdateTriggerRequest = {
      */
     out_schema: any | null;
     /**
-     * Parameter schema
+     * Parameter schema (StackStorm-style with inline required/secret)
      */
     param_schema: any | null;
 };
diff --git a/web/src/api/models/UpdateWorkflowRequest.ts b/web/src/api/models/UpdateWorkflowRequest.ts
index 0b4e06d..61dd732 100644
--- a/web/src/api/models/UpdateWorkflowRequest.ts
+++ b/web/src/api/models/UpdateWorkflowRequest.ts
@@ -27,7 +27,7 @@ export type UpdateWorkflowRequest = {
      */
     out_schema: any | null;
     /**
-     * Parameter schema
+     * Parameter schema (StackStorm-style with inline required/secret)
      */
     param_schema: any | null;
     /**
diff --git a/web/src/api/models/WorkflowResponse.ts b/web/src/api/models/WorkflowResponse.ts
index 910e13e..7c2d366 100644
--- a/web/src/api/models/WorkflowResponse.ts
+++ b/web/src/api/models/WorkflowResponse.ts
@@ -43,7 +43,7 @@ export type WorkflowResponse = {
      */
     pack_ref: string;
     /**
-     * Parameter schema
+     * Parameter schema (StackStorm-style with inline required/secret)
      */
     param_schema: any | null;
     /**
diff --git a/web/src/api/services/ActionsService.ts b/web/src/api/services/ActionsService.ts
index 803dc24..926868b 100644
--- a/web/src/api/services/ActionsService.ts
+++ b/web/src/api/services/ActionsService.ts
@@ -2,432 +2,456 @@
 /* istanbul ignore file */
 /* tslint:disable */
 /* eslint-disable */
-import type { CreateActionRequest } from '../models/CreateActionRequest';
-import type {
PaginatedResponse_ActionSummary } from "../models/PaginatedResponse_ActionSummary"; +import type { SuccessResponse } from "../models/SuccessResponse"; +import type { UpdateActionRequest } from "../models/UpdateActionRequest"; +import type { CancelablePromise } from "../core/CancelablePromise"; +import { OpenAPI } from "../core/OpenAPI"; +import { request as __request } from "../core/request"; export class ActionsService { + /** + * List all actions with pagination + * @returns PaginatedResponse_ActionSummary List of actions + * @throws ApiError + */ + public static listActions({ + page, + pageSize, + }: { /** - * List all actions with pagination - * @returns PaginatedResponse_ActionSummary List of actions - * @throws ApiError + * Page number (1-based) */ - public static listActions({ - page, - pageSize, - }: { - /** - * Page number (1-based) - */ - page?: number, - /** - * Number of items per page - */ - pageSize?: number, - }): CancelablePromise { - return __request(OpenAPI, { - method: 'GET', - url: '/api/v1/actions', - query: { - 'page': page, - 'page_size': pageSize, - }, - }); - } + page?: number; /** - * Create a new action - * @returns any Action created successfully - * @throws ApiError + * Number of items per page */ - public static createAction({ - requestBody, - }: { - requestBody: CreateActionRequest, - }): CancelablePromise<{ - /** - * Response DTO for action information - */ - data: { - /** - * Creation timestamp - */ - created: string; - /** - * Action description - */ - description: string; - /** - * Entry point - */ - entrypoint: string; - /** - * Action ID - */ - id: number; - /** - * Whether this is an ad-hoc action (not from pack installation) - */ - is_adhoc: boolean; - /** - * Human-readable label - */ - label: string; - /** - * Output schema - */ - out_schema: any | null; - /** - * Pack ID - */ - pack: number; - /** - * Pack reference - */ - pack_ref: string; - /** - * Parameter schema - */ - param_schema: any | null; - /** - * Unique 
reference identifier - */ - ref: string; - /** - * Runtime ID - */ - runtime?: number | null; - /** - * Last update timestamp - */ - updated: string; - }; - /** - * Optional message - */ - message?: string | null; - }> { - return __request(OpenAPI, { - method: 'POST', - url: '/api/v1/actions', - body: requestBody, - mediaType: 'application/json', - errors: { - 400: `Validation error`, - 404: `Pack not found`, - 409: `Action with same ref already exists`, - }, - }); - } + pageSize?: number; + }): CancelablePromise { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/actions", + query: { + page: page, + page_size: pageSize, + }, + }); + } + /** + * Create a new action + * @returns any Action created successfully + * @throws ApiError + */ + public static createAction({ + requestBody, + }: { + requestBody: CreateActionRequest; + }): CancelablePromise<{ /** - * Get a single action by reference - * @returns any Action details - * @throws ApiError + * Response DTO for action information */ - public static getAction({ - ref, - }: { - /** - * Action reference identifier - */ - ref: string, - }): CancelablePromise<{ - /** - * Response DTO for action information - */ - data: { - /** - * Creation timestamp - */ - created: string; - /** - * Action description - */ - description: string; - /** - * Entry point - */ - entrypoint: string; - /** - * Action ID - */ - id: number; - /** - * Whether this is an ad-hoc action (not from pack installation) - */ - is_adhoc: boolean; - /** - * Human-readable label - */ - label: string; - /** - * Output schema - */ - out_schema: any | null; - /** - * Pack ID - */ - pack: number; - /** - * Pack reference - */ - pack_ref: string; - /** - * Parameter schema - */ - param_schema: any | null; - /** - * Unique reference identifier - */ - ref: string; - /** - * Runtime ID - */ - runtime?: number | null; - /** - * Last update timestamp - */ - updated: string; - }; - /** - * Optional message - */ - message?: string | null; - }> { - return 
__request(OpenAPI, { - method: 'GET', - url: '/api/v1/actions/{ref}', - path: { - 'ref': ref, - }, - errors: { - 404: `Action not found`, - }, - }); - } + data: { + /** + * Creation timestamp + */ + created: string; + /** + * Action description + */ + description: string; + /** + * Entry point + */ + entrypoint: string; + /** + * Action ID + */ + id: number; + /** + * Whether this is an ad-hoc action (not from pack installation) + */ + is_adhoc: boolean; + /** + * Human-readable label + */ + label: string; + /** + * Output schema + */ + out_schema: any | null; + /** + * Pack ID + */ + pack: number; + /** + * Pack reference + */ + pack_ref: string; + /** + * Parameter schema (StackStorm-style with inline required/secret) + */ + param_schema: any | null; + /** + * Unique reference identifier + */ + ref: string; + /** + * Runtime ID + */ + runtime?: number | null; + /** + * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0") + */ + runtime_version_constraint?: string | null; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow definition ID (non-null if this action is a workflow) + */ + workflow_def?: number | null; + }; /** - * Update an existing action - * @returns any Action updated successfully - * @throws ApiError + * Optional message */ - public static updateAction({ - ref, - requestBody, - }: { - /** - * Action reference identifier - */ - ref: string, - requestBody: UpdateActionRequest, - }): CancelablePromise<{ - /** - * Response DTO for action information - */ - data: { - /** - * Creation timestamp - */ - created: string; - /** - * Action description - */ - description: string; - /** - * Entry point - */ - entrypoint: string; - /** - * Action ID - */ - id: number; - /** - * Whether this is an ad-hoc action (not from pack installation) - */ - is_adhoc: boolean; - /** - * Human-readable label - */ - label: string; - /** - * Output schema - */ - out_schema: any | null; - /** - * Pack ID - */ - pack: number; - 
/** - * Pack reference - */ - pack_ref: string; - /** - * Parameter schema - */ - param_schema: any | null; - /** - * Unique reference identifier - */ - ref: string; - /** - * Runtime ID - */ - runtime?: number | null; - /** - * Last update timestamp - */ - updated: string; - }; - /** - * Optional message - */ - message?: string | null; - }> { - return __request(OpenAPI, { - method: 'PUT', - url: '/api/v1/actions/{ref}', - path: { - 'ref': ref, - }, - body: requestBody, - mediaType: 'application/json', - errors: { - 400: `Validation error`, - 404: `Action not found`, - }, - }); - } + message?: string | null; + }> { + return __request(OpenAPI, { + method: "POST", + url: "/api/v1/actions", + body: requestBody, + mediaType: "application/json", + errors: { + 400: `Validation error`, + 404: `Pack not found`, + 409: `Action with same ref already exists`, + }, + }); + } + /** + * Get a single action by reference + * @returns any Action details + * @throws ApiError + */ + public static getAction({ + ref, + }: { /** - * Delete an action - * @returns SuccessResponse Action deleted successfully - * @throws ApiError + * Action reference identifier */ - public static deleteAction({ - ref, - }: { - /** - * Action reference identifier - */ - ref: string, - }): CancelablePromise { - return __request(OpenAPI, { - method: 'DELETE', - url: '/api/v1/actions/{ref}', - path: { - 'ref': ref, - }, - errors: { - 404: `Action not found`, - }, - }); - } + ref: string; + }): CancelablePromise<{ /** - * Get queue statistics for an action - * @returns any Queue statistics - * @throws ApiError + * Response DTO for action information */ - public static getQueueStats({ - ref, - }: { - /** - * Action reference identifier - */ - ref: string, - }): CancelablePromise<{ - /** - * Response DTO for queue statistics - */ - data: { - /** - * Action ID - */ - action_id: number; - /** - * Action reference - */ - action_ref: string; - /** - * Number of currently running executions - */ - active_count: number; 
- /** - * Timestamp of last statistics update - */ - last_updated: string; - /** - * Maximum concurrent executions allowed - */ - max_concurrent: number; - /** - * Timestamp of oldest queued execution (if any) - */ - oldest_enqueued_at?: string | null; - /** - * Number of executions waiting in queue - */ - queue_length: number; - /** - * Total executions completed since queue creation - */ - total_completed: number; - /** - * Total executions enqueued since queue creation - */ - total_enqueued: number; - }; - /** - * Optional message - */ - message?: string | null; - }> { - return __request(OpenAPI, { - method: 'GET', - url: '/api/v1/actions/{ref}/queue-stats', - path: { - 'ref': ref, - }, - errors: { - 404: `Action not found or no queue statistics available`, - }, - }); - } + data: { + /** + * Creation timestamp + */ + created: string; + /** + * Action description + */ + description: string; + /** + * Entry point + */ + entrypoint: string; + /** + * Action ID + */ + id: number; + /** + * Whether this is an ad-hoc action (not from pack installation) + */ + is_adhoc: boolean; + /** + * Human-readable label + */ + label: string; + /** + * Output schema + */ + out_schema: any | null; + /** + * Pack ID + */ + pack: number; + /** + * Pack reference + */ + pack_ref: string; + /** + * Parameter schema (StackStorm-style with inline required/secret) + */ + param_schema: any | null; + /** + * Unique reference identifier + */ + ref: string; + /** + * Runtime ID + */ + runtime?: number | null; + /** + * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0") + */ + runtime_version_constraint?: string | null; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow definition ID (non-null if this action is a workflow) + */ + workflow_def?: number | null; + }; /** - * List actions by pack reference - * @returns PaginatedResponse_ActionSummary List of actions for pack - * @throws ApiError + * Optional message */ - public static 
listActionsByPack({ - packRef, - page, - pageSize, - }: { - /** - * Pack reference identifier - */ - packRef: string, - /** - * Page number (1-based) - */ - page?: number, - /** - * Number of items per page - */ - pageSize?: number, - }): CancelablePromise { - return __request(OpenAPI, { - method: 'GET', - url: '/api/v1/packs/{pack_ref}/actions', - path: { - 'pack_ref': packRef, - }, - query: { - 'page': page, - 'page_size': pageSize, - }, - errors: { - 404: `Pack not found`, - }, - }); - } + message?: string | null; + }> { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/actions/{ref}", + path: { + ref: ref, + }, + errors: { + 404: `Action not found`, + }, + }); + } + /** + * Update an existing action + * @returns any Action updated successfully + * @throws ApiError + */ + public static updateAction({ + ref, + requestBody, + }: { + /** + * Action reference identifier + */ + ref: string; + requestBody: UpdateActionRequest; + }): CancelablePromise<{ + /** + * Response DTO for action information + */ + data: { + /** + * Creation timestamp + */ + created: string; + /** + * Action description + */ + description: string; + /** + * Entry point + */ + entrypoint: string; + /** + * Action ID + */ + id: number; + /** + * Whether this is an ad-hoc action (not from pack installation) + */ + is_adhoc: boolean; + /** + * Human-readable label + */ + label: string; + /** + * Output schema + */ + out_schema: any | null; + /** + * Pack ID + */ + pack: number; + /** + * Pack reference + */ + pack_ref: string; + /** + * Parameter schema (StackStorm-style with inline required/secret) + */ + param_schema: any | null; + /** + * Unique reference identifier + */ + ref: string; + /** + * Runtime ID + */ + runtime?: number | null; + /** + * Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0") + */ + runtime_version_constraint?: string | null; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow definition ID (non-null if this 
action is a workflow) + */ + workflow_def?: number | null; + }; + /** + * Optional message + */ + message?: string | null; + }> { + return __request(OpenAPI, { + method: "PUT", + url: "/api/v1/actions/{ref}", + path: { + ref: ref, + }, + body: requestBody, + mediaType: "application/json", + errors: { + 400: `Validation error`, + 404: `Action not found`, + }, + }); + } + /** + * Delete an action + * @returns SuccessResponse Action deleted successfully + * @throws ApiError + */ + public static deleteAction({ + ref, + }: { + /** + * Action reference identifier + */ + ref: string; + }): CancelablePromise { + return __request(OpenAPI, { + method: "DELETE", + url: "/api/v1/actions/{ref}", + path: { + ref: ref, + }, + errors: { + 404: `Action not found`, + }, + }); + } + /** + * Get queue statistics for an action + * @returns any Queue statistics + * @throws ApiError + */ + public static getQueueStats({ + ref, + }: { + /** + * Action reference identifier + */ + ref: string; + }): CancelablePromise<{ + /** + * Response DTO for queue statistics + */ + data: { + /** + * Action ID + */ + action_id: number; + /** + * Action reference + */ + action_ref: string; + /** + * Number of currently running executions + */ + active_count: number; + /** + * Timestamp of last statistics update + */ + last_updated: string; + /** + * Maximum concurrent executions allowed + */ + max_concurrent: number; + /** + * Timestamp of oldest queued execution (if any) + */ + oldest_enqueued_at?: string | null; + /** + * Number of executions waiting in queue + */ + queue_length: number; + /** + * Total executions completed since queue creation + */ + total_completed: number; + /** + * Total executions enqueued since queue creation + */ + total_enqueued: number; + }; + /** + * Optional message + */ + message?: string | null; + }> { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/actions/{ref}/queue-stats", + path: { + ref: ref, + }, + errors: { + 404: `Action not found or no queue 
statistics available`, + }, + }); + } + /** + * List actions by pack reference + * @returns PaginatedResponse_ActionSummary List of actions for pack + * @throws ApiError + */ + public static listActionsByPack({ + packRef, + page, + pageSize, + }: { + /** + * Pack reference identifier + */ + packRef: string; + /** + * Page number (1-based) + */ + page?: number; + /** + * Number of items per page + */ + pageSize?: number; + }): CancelablePromise { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/packs/{pack_ref}/actions", + path: { + pack_ref: packRef, + }, + query: { + page: page, + page_size: pageSize, + }, + errors: { + 404: `Pack not found`, + }, + }); + } } diff --git a/web/src/api/services/EventsService.ts b/web/src/api/services/EventsService.ts index 0c019a0..63e62c9 100644 --- a/web/src/api/services/EventsService.ts +++ b/web/src/api/services/EventsService.ts @@ -2,92 +2,92 @@ /* istanbul ignore file */ /* tslint:disable */ /* eslint-disable */ -import type { ApiResponse_EventResponse } from "../models/ApiResponse_EventResponse"; -import type { i64 } from "../models/i64"; -import type { PaginatedResponse_EventSummary } from "../models/PaginatedResponse_EventSummary"; -import type { CancelablePromise } from "../core/CancelablePromise"; -import { OpenAPI } from "../core/OpenAPI"; -import { request as __request } from "../core/request"; +import type { ApiResponse_EventResponse } from '../models/ApiResponse_EventResponse'; +import type { i64 } from '../models/i64'; +import type { PaginatedResponse_EventSummary } from '../models/PaginatedResponse_EventSummary'; +import type { CancelablePromise } from '../core/CancelablePromise'; +import { OpenAPI } from '../core/OpenAPI'; +import { request as __request } from '../core/request'; export class EventsService { - /** - * List all events with pagination and optional filters - * @returns PaginatedResponse_EventSummary List of events - * @throws ApiError - */ - public static listEvents({ - trigger, - 
triggerRef, - ruleRef, - source, - page, - perPage, - }: { /** - * Filter by trigger ID + * List all events with pagination and optional filters + * @returns PaginatedResponse_EventSummary List of events + * @throws ApiError */ - trigger?: null | i64; + public static listEvents({ + trigger, + triggerRef, + ruleRef, + source, + page, + perPage, + }: { + /** + * Filter by trigger ID + */ + trigger?: (null | i64), + /** + * Filter by trigger reference + */ + triggerRef?: string | null, + /** + * Filter by rule reference + */ + ruleRef?: string | null, + /** + * Filter by source ID + */ + source?: (null | i64), + /** + * Page number (1-indexed) + */ + page?: number, + /** + * Items per page + */ + perPage?: number, + }): CancelablePromise { + return __request(OpenAPI, { + method: 'GET', + url: '/api/v1/events', + query: { + 'trigger': trigger, + 'trigger_ref': triggerRef, + 'rule_ref': ruleRef, + 'source': source, + 'page': page, + 'per_page': perPage, + }, + errors: { + 401: `Unauthorized`, + 500: `Internal server error`, + }, + }); + } /** - * Filter by trigger reference + * Get a single event by ID + * @returns ApiResponse_EventResponse Event details + * @throws ApiError */ - triggerRef?: string | null; - /** - * Filter by rule reference - */ - ruleRef?: string | null; - /** - * Filter by source ID - */ - source?: null | i64; - /** - * Page number (1-indexed) - */ - page?: number; - /** - * Items per page - */ - perPage?: number; - }): CancelablePromise { - return __request(OpenAPI, { - method: "GET", - url: "/api/v1/events", - query: { - trigger: trigger, - trigger_ref: triggerRef, - rule_ref: ruleRef, - source: source, - page: page, - per_page: perPage, - }, - errors: { - 401: `Unauthorized`, - 500: `Internal server error`, - }, - }); - } - /** - * Get a single event by ID - * @returns ApiResponse_EventResponse Event details - * @throws ApiError - */ - public static getEvent({ - id, - }: { - /** - * Event ID - */ - id: number; - }): CancelablePromise { - return 
__request(OpenAPI, { - method: "GET", - url: "/api/v1/events/{id}", - path: { - id: id, - }, - errors: { - 401: `Unauthorized`, - 404: `Event not found`, - 500: `Internal server error`, - }, - }); - } + public static getEvent({ + id, + }: { + /** + * Event ID + */ + id: number, + }): CancelablePromise { + return __request(OpenAPI, { + method: 'GET', + url: '/api/v1/events/{id}', + path: { + 'id': id, + }, + errors: { + 401: `Unauthorized`, + 404: `Event not found`, + 500: `Internal server error`, + }, + }); + } } diff --git a/web/src/api/services/ExecutionsService.ts b/web/src/api/services/ExecutionsService.ts index e74f4a8..78bf7f2 100644 --- a/web/src/api/services/ExecutionsService.ts +++ b/web/src/api/services/ExecutionsService.ts @@ -2,260 +2,283 @@ /* istanbul ignore file */ /* tslint:disable */ /* eslint-disable */ -import type { ExecutionStatus } from '../models/ExecutionStatus'; -import type { PaginatedResponse_ExecutionSummary } from '../models/PaginatedResponse_ExecutionSummary'; -import type { CancelablePromise } from '../core/CancelablePromise'; -import { OpenAPI } from '../core/OpenAPI'; -import { request as __request } from '../core/request'; +import type { ExecutionStatus } from "../models/ExecutionStatus"; +import type { PaginatedResponse_ExecutionSummary } from "../models/PaginatedResponse_ExecutionSummary"; +import type { CancelablePromise } from "../core/CancelablePromise"; +import { OpenAPI } from "../core/OpenAPI"; +import { request as __request } from "../core/request"; export class ExecutionsService { + /** + * List all executions with pagination and optional filters + * @returns PaginatedResponse_ExecutionSummary List of executions + * @throws ApiError + */ + public static listExecutions({ + status, + actionRef, + packName, + ruleRef, + triggerRef, + executor, + resultContains, + enforcement, + parent, + topLevelOnly, + page, + perPage, + }: { /** - * List all executions with pagination and optional filters - * @returns 
PaginatedResponse_ExecutionSummary List of executions - * @throws ApiError + * Filter by execution status */ - public static listExecutions({ - status, - actionRef, - packName, - ruleRef, - triggerRef, - executor, - resultContains, - enforcement, - parent, - page, - perPage, - }: { - /** - * Filter by execution status - */ - status?: (null | ExecutionStatus), - /** - * Filter by action reference - */ - actionRef?: string | null, - /** - * Filter by pack name - */ - packName?: string | null, - /** - * Filter by rule reference - */ - ruleRef?: string | null, - /** - * Filter by trigger reference - */ - triggerRef?: string | null, - /** - * Filter by executor ID - */ - executor?: number | null, - /** - * Search in result JSON (case-insensitive substring match) - */ - resultContains?: string | null, - /** - * Filter by enforcement ID - */ - enforcement?: number | null, - /** - * Filter by parent execution ID - */ - parent?: number | null, - /** - * Page number (for pagination) - */ - page?: number, - /** - * Items per page (for pagination) - */ - perPage?: number, - }): CancelablePromise { - return __request(OpenAPI, { - method: 'GET', - url: '/api/v1/executions', - query: { - 'status': status, - 'action_ref': actionRef, - 'pack_name': packName, - 'rule_ref': ruleRef, - 'trigger_ref': triggerRef, - 'executor': executor, - 'result_contains': resultContains, - 'enforcement': enforcement, - 'parent': parent, - 'page': page, - 'per_page': perPage, - }, - }); - } + status?: null | ExecutionStatus; /** - * List executions by enforcement ID - * @returns PaginatedResponse_ExecutionSummary List of executions for enforcement - * @throws ApiError + * Filter by action reference */ - public static listExecutionsByEnforcement({ - enforcementId, - page, - pageSize, - }: { - /** - * Enforcement ID - */ - enforcementId: number, - /** - * Page number (1-based) - */ - page?: number, - /** - * Number of items per page - */ - pageSize?: number, - }): CancelablePromise { - return 
__request(OpenAPI, { - method: 'GET', - url: '/api/v1/executions/enforcement/{enforcement_id}', - path: { - 'enforcement_id': enforcementId, - }, - query: { - 'page': page, - 'page_size': pageSize, - }, - errors: { - 500: `Internal server error`, - }, - }); - } + actionRef?: string | null; /** - * Get execution statistics - * @returns any Execution statistics - * @throws ApiError + * Filter by pack name */ - public static getExecutionStats(): CancelablePromise> { - return __request(OpenAPI, { - method: 'GET', - url: '/api/v1/executions/stats', - errors: { - 500: `Internal server error`, - }, - }); - } + packName?: string | null; /** - * List executions by status - * @returns PaginatedResponse_ExecutionSummary List of executions with specified status - * @throws ApiError + * Filter by rule reference */ - public static listExecutionsByStatus({ - status, - page, - pageSize, - }: { - /** - * Execution status (requested, scheduling, scheduled, running, completed, failed, canceling, cancelled, timeout, abandoned) - */ - status: string, - /** - * Page number (1-based) - */ - page?: number, - /** - * Number of items per page - */ - pageSize?: number, - }): CancelablePromise { - return __request(OpenAPI, { - method: 'GET', - url: '/api/v1/executions/status/{status}', - path: { - 'status': status, - }, - query: { - 'page': page, - 'page_size': pageSize, - }, - errors: { - 400: `Invalid status`, - 500: `Internal server error`, - }, - }); - } + ruleRef?: string | null; /** - * Get a single execution by ID - * @returns any Execution details - * @throws ApiError + * Filter by trigger reference */ - public static getExecution({ - id, - }: { - /** - * Execution ID - */ - id: number, - }): CancelablePromise<{ - /** - * Response DTO for execution information - */ - data: { - /** - * Action ID (optional, may be null for ad-hoc executions) - */ - action?: number | null; - /** - * Action reference - */ - action_ref: string; - /** - * Execution configuration/parameters - */ - config: 
Record; - /** - * Creation timestamp - */ - created: string; - /** - * Enforcement ID (rule enforcement that triggered this) - */ - enforcement?: number | null; - /** - * Executor ID (worker/executor that ran this) - */ - executor?: number | null; - /** - * Execution ID - */ - id: number; - /** - * Parent execution ID (for nested/child executions) - */ - parent?: number | null; - /** - * Execution result/output - */ - result: Record; - /** - * Execution status - */ - status: ExecutionStatus; - /** - * Last update timestamp - */ - updated: string; - }; - /** - * Optional message - */ - message?: string | null; - }> { - return __request(OpenAPI, { - method: 'GET', - url: '/api/v1/executions/{id}', - path: { - 'id': id, - }, - errors: { - 404: `Execution not found`, - }, - }); - } + triggerRef?: string | null; + /** + * Filter by executor ID + */ + executor?: number | null; + /** + * Search in result JSON (case-insensitive substring match) + */ + resultContains?: string | null; + /** + * Filter by enforcement ID + */ + enforcement?: number | null; + /** + * Filter by parent execution ID + */ + parent?: number | null; + /** + * If true, only return top-level executions (those without a parent) + */ + topLevelOnly?: boolean | null; + /** + * Page number (for pagination) + */ + page?: number; + /** + * Items per page (for pagination) + */ + perPage?: number; + }): CancelablePromise { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/executions", + query: { + status: status, + action_ref: actionRef, + pack_name: packName, + rule_ref: ruleRef, + trigger_ref: triggerRef, + executor: executor, + result_contains: resultContains, + enforcement: enforcement, + parent: parent, + top_level_only: topLevelOnly, + page: page, + per_page: perPage, + }, + }); + } + /** + * List executions by enforcement ID + * @returns PaginatedResponse_ExecutionSummary List of executions for enforcement + * @throws ApiError + */ + public static listExecutionsByEnforcement({ + 
enforcementId, + page, + pageSize, + }: { + /** + * Enforcement ID + */ + enforcementId: number; + /** + * Page number (1-based) + */ + page?: number; + /** + * Number of items per page + */ + pageSize?: number; + }): CancelablePromise { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/executions/enforcement/{enforcement_id}", + path: { + enforcement_id: enforcementId, + }, + query: { + page: page, + page_size: pageSize, + }, + errors: { + 500: `Internal server error`, + }, + }); + } + /** + * Get execution statistics + * @returns any Execution statistics + * @throws ApiError + */ + public static getExecutionStats(): CancelablePromise> { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/executions/stats", + errors: { + 500: `Internal server error`, + }, + }); + } + /** + * List executions by status + * @returns PaginatedResponse_ExecutionSummary List of executions with specified status + * @throws ApiError + */ + public static listExecutionsByStatus({ + status, + page, + pageSize, + }: { + /** + * Execution status (requested, scheduling, scheduled, running, completed, failed, canceling, cancelled, timeout, abandoned) + */ + status: string; + /** + * Page number (1-based) + */ + page?: number; + /** + * Number of items per page + */ + pageSize?: number; + }): CancelablePromise { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/executions/status/{status}", + path: { + status: status, + }, + query: { + page: page, + page_size: pageSize, + }, + errors: { + 400: `Invalid status`, + 500: `Internal server error`, + }, + }); + } + /** + * Get a single execution by ID + * @returns any Execution details + * @throws ApiError + */ + public static getExecution({ + id, + }: { + /** + * Execution ID + */ + id: number; + }): CancelablePromise<{ + /** + * Response DTO for execution information + */ + data: { + /** + * Action ID (optional, may be null for ad-hoc executions) + */ + action?: number | null; + /** + * Action reference + */ + 
action_ref: string; + /** + * Execution configuration/parameters + */ + config: Record; + /** + * Creation timestamp + */ + created: string; + /** + * Enforcement ID (rule enforcement that triggered this) + */ + enforcement?: number | null; + /** + * Executor ID (worker/executor that ran this) + */ + executor?: number | null; + /** + * Execution ID + */ + id: number; + /** + * Parent execution ID (for nested/child executions) + */ + parent?: number | null; + /** + * Execution result/output + */ + result: Record; + /** + * Execution status + */ + status: ExecutionStatus; + /** + * Last update timestamp + */ + updated: string; + /** + * Workflow task metadata (only populated for workflow task executions) + */ + workflow_task?: { + workflow_execution: number; + task_name: string; + task_index?: number | null; + task_batch?: number | null; + retry_count: number; + max_retries: number; + next_retry_at?: string | null; + timeout_seconds?: number | null; + timed_out: boolean; + duration_ms?: number | null; + started_at?: string | null; + completed_at?: string | null; + } | null; + }; + /** + * Optional message + */ + message?: string | null; + }> { + return __request(OpenAPI, { + method: "GET", + url: "/api/v1/executions/{id}", + path: { + id: id, + }, + errors: { + 404: `Execution not found`, + }, + }); + } } diff --git a/web/src/api/services/PacksService.ts b/web/src/api/services/PacksService.ts index 130f213..45217ac 100644 --- a/web/src/api/services/PacksService.ts +++ b/web/src/api/services/PacksService.ts @@ -71,6 +71,10 @@ export class PacksService { * Creation timestamp */ created: string; + /** + * Pack dependencies (refs of required packs) + */ + dependencies: Array; /** * Pack description */ @@ -96,7 +100,7 @@ export class PacksService { */ ref: string; /** - * Runtime dependencies + * Runtime dependencies (e.g., shell, python, nodejs) */ runtime_deps: Array; /** @@ -145,7 +149,6 @@ export class PacksService { mediaType: 'application/json', errors: { 400: 
`Invalid request or tests failed`, - 409: `Pack already exists`, 501: `Not implemented yet`, }, }); @@ -200,6 +203,10 @@ export class PacksService { * Creation timestamp */ created: string; + /** + * Pack dependencies (refs of required packs) + */ + dependencies: Array; /** * Pack description */ @@ -225,7 +232,7 @@ export class PacksService { */ ref: string; /** - * Runtime dependencies + * Runtime dependencies (e.g., shell, python, nodejs) */ runtime_deps: Array; /** @@ -288,6 +295,10 @@ export class PacksService { * Creation timestamp */ created: string; + /** + * Pack dependencies (refs of required packs) + */ + dependencies: Array; /** * Pack description */ @@ -313,7 +324,7 @@ export class PacksService { */ ref: string; /** - * Runtime dependencies + * Runtime dependencies (e.g., shell, python, nodejs) */ runtime_deps: Array; /** diff --git a/web/src/api/services/WorkflowsService.ts b/web/src/api/services/WorkflowsService.ts index 930a056..e3bba47 100644 --- a/web/src/api/services/WorkflowsService.ts +++ b/web/src/api/services/WorkflowsService.ts @@ -150,7 +150,7 @@ export class WorkflowsService { */ pack_ref: string; /** - * Parameter schema + * Parameter schema (StackStorm-style with inline required/secret) */ param_schema: any | null; /** @@ -241,7 +241,7 @@ export class WorkflowsService { */ pack_ref: string; /** - * Parameter schema + * Parameter schema (StackStorm-style with inline required/secret) */ param_schema: any | null; /** @@ -333,7 +333,7 @@ export class WorkflowsService { */ pack_ref: string; /** - * Parameter schema + * Parameter schema (StackStorm-style with inline required/secret) */ param_schema: any | null; /** diff --git a/web/src/components/common/WorkflowTasksPanel.tsx b/web/src/components/common/WorkflowTasksPanel.tsx new file mode 100644 index 0000000..0d16941 --- /dev/null +++ b/web/src/components/common/WorkflowTasksPanel.tsx @@ -0,0 +1,312 @@ +import { useState, useMemo } from "react"; +import { Link } from "react-router-dom"; 
+import { formatDistanceToNow } from "date-fns"; +import { + ChevronDown, + ChevronRight, + Workflow, + CheckCircle2, + XCircle, + Clock, + Loader2, + AlertTriangle, + Ban, + CircleDot, + RotateCcw, +} from "lucide-react"; +import { useChildExecutions } from "@/hooks/useExecutions"; + +interface WorkflowTasksPanelProps { + /** The parent (workflow) execution ID */ + parentExecutionId: number; + /** Whether the panel starts collapsed (default: false — open by default for workflows) */ + defaultCollapsed?: boolean; +} + +/** Format a duration in ms to a human-readable string. */ +function formatDuration(ms: number): string { + if (ms < 1000) return `${ms}ms`; + const secs = ms / 1000; + if (secs < 60) return `${secs.toFixed(1)}s`; + const mins = Math.floor(secs / 60); + const remainSecs = Math.round(secs % 60); + if (mins < 60) return `${mins}m ${remainSecs}s`; + const hrs = Math.floor(mins / 60); + const remainMins = mins % 60; + return `${hrs}h ${remainMins}m`; +} + +function getStatusIcon(status: string) { + switch (status) { + case "completed": + return ; + case "failed": + return ; + case "running": + return ; + case "requested": + case "scheduling": + case "scheduled": + return ; + case "timeout": + return ; + case "canceling": + case "cancelled": + return ; + case "abandoned": + return ; + default: + return ; + } +} + +function getStatusBadgeClasses(status: string): string { + switch (status) { + case "completed": + return "bg-green-100 text-green-800"; + case "failed": + return "bg-red-100 text-red-800"; + case "running": + return "bg-blue-100 text-blue-800"; + case "requested": + case "scheduling": + case "scheduled": + return "bg-yellow-100 text-yellow-800"; + case "timeout": + return "bg-orange-100 text-orange-800"; + case "canceling": + case "cancelled": + return "bg-gray-100 text-gray-800"; + case "abandoned": + return "bg-red-100 text-red-600"; + default: + return "bg-gray-100 text-gray-800"; + } +} + +/** + * Panel that displays workflow task (child) 
executions for a parent + * workflow execution. Shows each task's name, action, status, and timing. + */ +export default function WorkflowTasksPanel({ + parentExecutionId, + defaultCollapsed = false, +}: WorkflowTasksPanelProps) { + const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed); + const { data, isLoading, error } = useChildExecutions(parentExecutionId); + + const tasks = useMemo(() => { + if (!data?.data) return []; + return data.data; + }, [data]); + + const summary = useMemo(() => { + const total = tasks.length; + const completed = tasks.filter((t) => t.status === "completed").length; + const failed = tasks.filter((t) => t.status === "failed").length; + const running = tasks.filter( + (t) => + t.status === "running" || + t.status === "requested" || + t.status === "scheduling" || + t.status === "scheduled", + ).length; + const other = total - completed - failed - running; + return { total, completed, failed, running, other }; + }, [tasks]); + + if (!isLoading && tasks.length === 0 && !error) { + // No child tasks — nothing to show + return null; + } + + return ( +
+ {/* Header */} + + + {/* Content */} + {!isCollapsed && ( +
+ {isLoading && ( +
+ + + Loading workflow tasks… + +
+ )} + + {error && ( +
+ Error loading workflow tasks:{" "} + {error instanceof Error ? error.message : "Unknown error"} +
+ )} + + {!isLoading && !error && tasks.length > 0 && ( +
+ {/* Column headers */} +
+
#
+
Task
+
Action
+
Status
+
Duration
+
Retry
+
+ + {/* Task rows */} + {tasks.map((task, idx) => { + const wt = task.workflow_task; + const taskName = wt?.task_name ?? `Task ${idx + 1}`; + const retryCount = wt?.retry_count ?? 0; + const maxRetries = wt?.max_retries ?? 0; + const timedOut = wt?.timed_out ?? false; + + // Compute duration from created → updated (best available) + const created = new Date(task.created); + const updated = new Date(task.updated); + const durationMs = + wt?.duration_ms ?? + (task.status === "completed" || + task.status === "failed" || + task.status === "timeout" + ? updated.getTime() - created.getTime() + : null); + + return ( + + {/* Index */} +
+ {idx + 1} +
+ + {/* Task name */} +
+ {getStatusIcon(task.status)} + + {taskName} + + {wt?.task_index != null && ( + + [{wt.task_index}] + + )} +
+ + {/* Action ref */} +
+ + {task.action_ref} + +
+ + {/* Status badge */} +
+ + {task.status} + + {timedOut && ( + + + + )} +
+ + {/* Duration */} +
+ {task.status === "running" ? ( + + {formatDistanceToNow(created, { addSuffix: false })}… + + ) : durationMs != null && durationMs > 0 ? ( + formatDuration(durationMs) + ) : ( + + )} +
+ + {/* Retry info */} +
+ {maxRetries > 0 ? ( + + + {retryCount}/{maxRetries} + + ) : ( + + )} +
+ + ); + })} +
+ )} +
+ )} +
+ ); +} diff --git a/web/src/components/executions/ExecutionPreviewPanel.tsx b/web/src/components/executions/ExecutionPreviewPanel.tsx new file mode 100644 index 0000000..25532ff --- /dev/null +++ b/web/src/components/executions/ExecutionPreviewPanel.tsx @@ -0,0 +1,297 @@ +import { memo, useEffect } from "react"; +import { Link } from "react-router-dom"; +import { X, ExternalLink, Loader2 } from "lucide-react"; +import { useExecution } from "@/hooks/useExecutions"; +import { useExecutionStream } from "@/hooks/useExecutionStream"; +import { formatDistanceToNow } from "date-fns"; +import type { ExecutionStatus } from "@/api"; + +function formatDuration(ms: number): string { + if (ms < 1000) return `${ms}ms`; + const secs = ms / 1000; + if (secs < 60) return `${secs.toFixed(1)}s`; + const mins = Math.floor(secs / 60); + const remainSecs = Math.round(secs % 60); + if (mins < 60) return `${mins}m ${remainSecs}s`; + const hrs = Math.floor(mins / 60); + const remainMins = mins % 60; + return `${hrs}h ${remainMins}m`; +} + +const getStatusColor = (status: string) => { + switch (status) { + case "succeeded": + case "completed": + return "bg-green-100 text-green-800"; + case "failed": + case "timeout": + return "bg-red-100 text-red-800"; + case "running": + return "bg-blue-100 text-blue-800"; + case "scheduled": + case "scheduling": + case "requested": + return "bg-yellow-100 text-yellow-800"; + case "canceling": + case "cancelled": + return "bg-gray-100 text-gray-600"; + default: + return "bg-gray-100 text-gray-800"; + } +}; + +interface ExecutionPreviewPanelProps { + executionId: number; + onClose: () => void; +} + +const ExecutionPreviewPanel = memo(function ExecutionPreviewPanel({ + executionId, + onClose, +}: ExecutionPreviewPanelProps) { + const { data, isLoading, error } = useExecution(executionId); + const execution = data?.data; + + // Subscribe to real-time updates for this execution + useExecutionStream({ executionId, enabled: true }); + + // Close on Escape key + 
useEffect(() => { + const handleKeyDown = (e: KeyboardEvent) => { + if (e.key === "Escape") onClose(); + }; + window.addEventListener("keydown", handleKeyDown); + return () => window.removeEventListener("keydown", handleKeyDown); + }, [onClose]); + + const isRunning = + execution?.status === "running" || + execution?.status === "scheduling" || + execution?.status === "scheduled" || + execution?.status === "requested"; + + const created = execution ? new Date(execution.created) : null; + const updated = execution ? new Date(execution.updated) : null; + const durationMs = + created && updated && !isRunning + ? updated.getTime() - created.getTime() + : null; + + return ( +
+ {/* Header */} +
+
+

+ Execution #{executionId} +

+ {execution && ( + + {execution.status} + + )} + {isRunning && ( + + )} +
+
+ + + + +
+
+ + {/* Body */} +
+ {isLoading && ( +
+ +
+ )} + + {error && !execution && ( +
+
+ Error: {(error as Error).message} +
+
+ )} + + {execution && ( +
+ {/* Action */} +
+
+ Action +
+
+ + {execution.action_ref} + +
+
+ + {/* Timing */} +
+
+
+ Created +
+
+ {created!.toLocaleString()} + + {formatDistanceToNow(created!, { addSuffix: true })} + +
+
+ {durationMs != null && durationMs > 0 && ( +
+
+ Duration +
+
+ {formatDuration(durationMs)} +
+
+ )} + {isRunning && ( +
+
+ Elapsed +
+
+ + {formatDistanceToNow(created!)} +
+
+ )} +
+ + {/* References */} +
+ {execution.parent && ( +
+
+ Parent Execution +
+
+ + #{execution.parent} + +
+
+ )} + {execution.enforcement && ( +
+
+ Enforcement +
+
+ #{execution.enforcement} +
+
+ )} + {execution.executor && ( +
+
+ Executor +
+
+ #{execution.executor} +
+
+ )} + {execution.workflow_task && ( +
+
+ Workflow Task +
+
+ + {execution.workflow_task.task_name} + + {execution.workflow_task.task_index != null && ( + + [{execution.workflow_task.task_index}] + + )} +
+
+ )} +
+ + {/* Config / Parameters */} + {execution.config && + Object.keys(execution.config).length > 0 && ( +
+
+ Parameters +
+
+
+                      {JSON.stringify(execution.config, null, 2)}
+                    
+
+
+ )} + + {/* Result */} + {execution.result && + Object.keys(execution.result).length > 0 && ( +
+
+ Result +
+
+
+                      {JSON.stringify(execution.result, null, 2)}
+                    
+
+
+ )} +
+ )} +
+ + {/* Footer */} + {execution && ( +
+ + Open Full Details + +
+ )} +
+ ); +}); + +export default ExecutionPreviewPanel; diff --git a/web/src/components/executions/Pagination.tsx b/web/src/components/executions/Pagination.tsx new file mode 100644 index 0000000..d31d10d --- /dev/null +++ b/web/src/components/executions/Pagination.tsx @@ -0,0 +1,78 @@ +import { memo } from "react"; + +interface PaginationProps { + page: number; + setPage: (page: number) => void; + pageSize: number; + total: number; +} + +function computeRange(page: number, pageSize: number, total: number) { + const start = (page - 1) * pageSize + 1; + const end = Math.min(page * pageSize, total); + return { start, end }; +} + +const Pagination = memo(function Pagination({ + page, + setPage, + pageSize, + total, +}: PaginationProps) { + const totalPages = Math.ceil(total / pageSize); + if (totalPages <= 1) return null; + + const { start, end } = computeRange(page, pageSize, total); + + return ( +
+
+ + +
+
+
+

+ Showing {start} to{" "} + {end} of{" "} + {total} executions +

+
+
+ +
+
+
+ ); +}); + +Pagination.displayName = "Pagination"; + +export default Pagination; diff --git a/web/src/components/executions/WorkflowExecutionTree.tsx b/web/src/components/executions/WorkflowExecutionTree.tsx new file mode 100644 index 0000000..1922e4d --- /dev/null +++ b/web/src/components/executions/WorkflowExecutionTree.tsx @@ -0,0 +1,618 @@ +import { useState, useMemo, memo } from "react"; +import { Link } from "react-router-dom"; +import { + ChevronRight, + ChevronDown, + Workflow, + Loader2, + CheckCircle2, + XCircle, + Clock, + AlertTriangle, + Ban, + CircleDot, + RotateCcw, +} from "lucide-react"; +import { useChildExecutions } from "@/hooks/useExecutions"; +import type { ExecutionSummary } from "@/api"; +import Pagination from "./Pagination"; + +// ─── Helpers ──────────────────────────────────────────────────────────────── + +function getStatusColor(status: string) { + switch (status) { + case "completed": + return "bg-green-100 text-green-800"; + case "failed": + case "timeout": + return "bg-red-100 text-red-800"; + case "running": + return "bg-blue-100 text-blue-800"; + case "requested": + case "scheduling": + case "scheduled": + return "bg-yellow-100 text-yellow-800"; + case "canceling": + case "cancelled": + return "bg-gray-100 text-gray-600"; + default: + return "bg-gray-100 text-gray-800"; + } +} + +function getStatusIcon(status: string) { + switch (status) { + case "completed": + return ; + case "failed": + return ; + case "running": + return ; + case "requested": + case "scheduling": + case "scheduled": + return ; + case "timeout": + return ; + case "canceling": + case "cancelled": + return ; + case "abandoned": + return ; + default: + return ; + } +} + +function formatDuration(ms: number): string { + if (ms < 1000) return `${ms}ms`; + const secs = ms / 1000; + if (secs < 60) return `${secs.toFixed(1)}s`; + const mins = Math.floor(secs / 60); + const remainSecs = Math.round(secs % 60); + if (mins < 60) return `${mins}m ${remainSecs}s`; + const hrs 
= Math.floor(mins / 60); + const remainMins = mins % 60; + return `${hrs}h ${remainMins}m`; +} + +// ─── Child execution row (recursive) ──────────────────────────────────────── + +interface ChildExecutionRowProps { + execution: ExecutionSummary; + depth: number; + selectedExecutionId: number | null; + onSelectExecution: (id: number) => void; + workflowActionRefs: Set; +} + +/** + * A single child-execution row inside the accordion. If it has its own + * children (nested workflow), it can be expanded recursively. + */ +const ChildExecutionRow = memo(function ChildExecutionRow({ + execution, + depth, + selectedExecutionId, + onSelectExecution, + workflowActionRefs, +}: ChildExecutionRowProps) { + const isWorkflow = workflowActionRefs.has(execution.action_ref); + const [expanded, setExpanded] = useState(false); + + // Only fetch children when expanded and this is a workflow action + const { data, isLoading } = useChildExecutions( + expanded && isWorkflow ? execution.id : undefined, + ); + + const children = useMemo(() => data?.data ?? [], [data]); + const hasChildren = expanded && children.length > 0; + + const wt = execution.workflow_task; + const taskName = wt?.task_name; + const retryCount = wt?.retry_count ?? 0; + const maxRetries = wt?.max_retries ?? 0; + + const created = new Date(execution.created); + const updated = new Date(execution.updated); + const durationMs = + wt?.duration_ms ?? + (execution.status === "completed" || + execution.status === "failed" || + execution.status === "timeout" + ? updated.getTime() - created.getTime() + : null); + + const indent = 16 + depth * 24; + + return ( + <> +
onSelectExecution(execution.id)} + > + {/* Task name / expand toggle */} + + + {/* Exec ID */} + + + {/* Action */} + + + {/* Status */} + + + {/* Duration */} + + + {/* Retry */} + + + + {/* Nested children */} + {expanded && + !isLoading && + hasChildren && + children.map((child: ExecutionSummary) => ( + + ))} + + ); +}); + +// ─── Top-level workflow row (accordion) ───────────────────────────────────── + +interface WorkflowExecutionRowProps { + execution: ExecutionSummary; + workflowActionRefs: Set; + selectedExecutionId: number | null; + onSelectExecution: (id: number) => void; +} + +/** + * A top-level execution row with an expandable accordion for child tasks. + */ +const WorkflowExecutionRow = memo(function WorkflowExecutionRow({ + execution, + workflowActionRefs, + selectedExecutionId, + onSelectExecution, +}: WorkflowExecutionRowProps) { + const isWorkflow = workflowActionRefs.has(execution.action_ref); + const [expanded, setExpanded] = useState(false); + + const { data, isLoading } = useChildExecutions( + expanded && isWorkflow ? execution.id : undefined, + ); + + const children = useMemo(() => data?.data ?? 
[], [data]); + + const summary = useMemo(() => { + const total = children.length; + const completed = children.filter( + (t: ExecutionSummary) => t.status === "completed", + ).length; + const failed = children.filter( + (t: ExecutionSummary) => t.status === "failed" || t.status === "timeout", + ).length; + const running = children.filter( + (t: ExecutionSummary) => + t.status === "running" || + t.status === "requested" || + t.status === "scheduling" || + t.status === "scheduled", + ).length; + return { total, completed, failed, running }; + }, [children]); + + const hasWorkflowChildren = expanded && children.length > 0; + + return ( + <> + {/* Main execution row */} + onSelectExecution(execution.id)} + > + + + + + + + + + {/* Expanded child-task section */} + {expanded && ( + + + + )} + + ); +}); + +// ─── Main tree table ──────────────────────────────────────────────────────── + +interface WorkflowExecutionTreeProps { + executions: ExecutionSummary[]; + isLoading: boolean; + isFetching: boolean; + error: Error | null; + hasActiveFilters: boolean; + clearFilters: () => void; + page: number; + setPage: (page: number) => void; + pageSize: number; + total: number; + workflowActionRefs: Set; + selectedExecutionId: number | null; + onSelectExecution: (id: number) => void; +} + +/** + * Renders the executions list in "By Workflow" mode. Top-level executions + * are shown with the same columns as the "All" view, but each row is + * expandable to reveal the workflow's child task executions in an accordion. + * Nested workflows can be drilled into recursively. + */ +const WorkflowExecutionTree = memo(function WorkflowExecutionTree({ + executions, + isLoading, + isFetching, + error, + hasActiveFilters, + clearFilters, + page, + setPage, + pageSize, + total, + workflowActionRefs, + selectedExecutionId, + onSelectExecution, +}: WorkflowExecutionTreeProps) { + // Initial load + if (isLoading && executions.length === 0) { + return ( +
+
+
+
+
+ ); + } + + // Error with no cached data + if (error && executions.length === 0) { + return ( +
+
+

Error: {error.message}

+
+
+ ); + } + + // Empty + if (executions.length === 0) { + return ( +
+

No executions found

+ {hasActiveFilters && ( + + )} +
+ ); + } + + return ( +
+ {/* Loading overlay */} + {isFetching && ( +
+
+
+ )} + + {/* Non-fatal error banner */} + {error && ( +
+

Error refreshing: {error.message}

+
+ )} + +
+
+
+ {isWorkflow && ( + + )} + + {getStatusIcon(execution.status)} + + {taskName && ( + + {taskName} + + )} + + {wt?.task_index != null && ( + + [{wt.task_index}] + + )} +
+
+ e.stopPropagation()} + > + #{execution.id} + + + e.stopPropagation()} + > + {execution.action_ref} + + + + {execution.status} + + + {execution.status === "running" ? ( + + + running + + ) : durationMs != null && durationMs > 0 ? ( + formatDuration(durationMs) + ) : ( + + )} + + {maxRetries > 0 ? ( + + + {retryCount}/{maxRetries} + + ) : ( + + )} +
+
+ {isWorkflow && ( + + )} + e.stopPropagation()} + > + #{execution.id} + +
+
+ {execution.action_ref} + + {execution.rule_ref ? ( + {execution.rule_ref} + ) : ( + - + )} + + {execution.trigger_ref ? ( + + {execution.trigger_ref} + + ) : ( + - + )} + + + {execution.status} + + + {new Date(execution.created).toLocaleString()} +
+
+ {/* Summary bar */} + {hasWorkflowChildren && ( +
+ + + {summary.total} task{summary.total !== 1 ? "s" : ""} + + {summary.completed > 0 && ( + + + {summary.completed} + + )} + {summary.running > 0 && ( + + + {summary.running} + + )} + {summary.failed > 0 && ( + + + {summary.failed} + + )} +
+ )} + + {/* Loading state */} + {isLoading && ( +
+ + + Loading workflow tasks... + +
+ )} + + {/* No children yet (workflow still starting) */} + {!isLoading && children.length === 0 && ( +
+ No child tasks yet. +
+ )} + + {/* Children table */} + {hasWorkflowChildren && ( + + + + + + + + + + + + + {children.map((child: ExecutionSummary) => ( + + ))} + +
+ Task + IDActionStatusDurationRetry
+ )} +
+
+ + + + + + + + + + + + {executions.map((exec: ExecutionSummary) => ( + + ))} + +
+ ID + + Action + + Rule + + Trigger + + Status + + Created +
+ + + + + ); +}); + +WorkflowExecutionTree.displayName = "WorkflowExecutionTree"; + +export default WorkflowExecutionTree; diff --git a/web/src/hooks/useEnforcementStream.ts b/web/src/hooks/useEnforcementStream.ts index 5754b96..8942419 100644 --- a/web/src/hooks/useEnforcementStream.ts +++ b/web/src/hooks/useEnforcementStream.ts @@ -90,12 +90,6 @@ export function useEnforcementStream( // Extract enforcement data from notification payload (flat structure) const enforcementData = notification.payload as any; - // Invalidate history queries so the EntityHistoryPanel picks up new records - // (e.g. status changes recorded by the enforcement_history trigger) - queryClient.invalidateQueries({ - queryKey: ["history", "enforcement", notification.entity_id], - }); - // Update specific enforcement query if it exists queryClient.setQueryData( ["enforcements", notification.entity_id], diff --git a/web/src/hooks/useExecutionStream.ts b/web/src/hooks/useExecutionStream.ts index 0b508bf..9e3de08 100644 --- a/web/src/hooks/useExecutionStream.ts +++ b/web/src/hooks/useExecutionStream.ts @@ -48,6 +48,22 @@ function stripNotificationMeta(payload: any): any { function executionMatchesParams(execution: any, params: any): boolean { if (!params) return true; + // Check topLevelOnly filter — child executions (with a parent) must not + // appear in top-level list queries. + if (params.topLevelOnly && execution.parent != null) { + return false; + } + + // Check parent filter — child execution queries (keyed by { parent: id }) + // should only receive notifications for executions belonging to that parent. + // Without this, every execution notification would match child queries since + // they have no other filter fields. 
+ if (params.parent !== undefined) { + if (execution.parent !== params.parent) { + return false; + } + } + // Check status filter (from API query parameters) if (params.status && execution.status !== params.status) { return false; diff --git a/web/src/hooks/useExecutions.ts b/web/src/hooks/useExecutions.ts index 1a49397..f417729 100644 --- a/web/src/hooks/useExecutions.ts +++ b/web/src/hooks/useExecutions.ts @@ -11,6 +11,7 @@ interface ExecutionsQueryParams { ruleRef?: string; triggerRef?: string; executor?: number; + topLevelOnly?: boolean; } export function useExecutions(params?: ExecutionsQueryParams) { @@ -21,7 +22,8 @@ export function useExecutions(params?: ExecutionsQueryParams) { params?.packName || params?.ruleRef || params?.triggerRef || - params?.executor; + params?.executor || + params?.topLevelOnly; return useQuery({ queryKey: ["executions", params], @@ -35,6 +37,7 @@ export function useExecutions(params?: ExecutionsQueryParams) { ruleRef: params?.ruleRef, triggerRef: params?.triggerRef, executor: params?.executor, + topLevelOnly: params?.topLevelOnly, }); return response; }, @@ -59,3 +62,37 @@ export function useExecution(id: number) { staleTime: 30000, // 30 seconds - SSE handles real-time updates }); } + +/** + * Fetch child executions (workflow tasks) for a given parent execution ID. + * + * Enabled only when `parentId` is provided. Polls every 5 seconds while any + * child execution is still in a running/pending state so the UI stays current. 
+ */ +export function useChildExecutions(parentId: number | undefined) { + return useQuery({ + queryKey: ["executions", { parent: parentId }], + queryFn: async () => { + const response = await ExecutionsService.listExecutions({ + parent: parentId, + perPage: 100, + }); + return response; + }, + enabled: !!parentId, + staleTime: 5000, + // Re-fetch periodically so in-progress tasks update + refetchInterval: (query) => { + const data = query.state.data; + if (!data) return false; + const hasActive = data.data.some( + (e) => + e.status === "requested" || + e.status === "scheduling" || + e.status === "scheduled" || + e.status === "running", + ); + return hasActive ? 5000 : false; + }, + }); +} diff --git a/web/src/hooks/useFilterSuggestions.ts b/web/src/hooks/useFilterSuggestions.ts index eb133c1..ebcad5d 100644 --- a/web/src/hooks/useFilterSuggestions.ts +++ b/web/src/hooks/useFilterSuggestions.ts @@ -61,12 +61,20 @@ export function useFilterSuggestions() { return [...new Set(refs)].sort(); }, [actionsData]); + const workflowActionRefs = useMemo(() => { + const refs = + actionsData?.data + ?.filter((a) => a.workflow_def != null) + .map((a) => a.ref) || []; + return new Set(refs); + }, [actionsData]); + const triggerRefs = useMemo(() => { const refs = triggersData?.data?.map((t) => t.ref) || []; return [...new Set(refs)].sort(); }, [triggersData]); - return { packNames, ruleRefs, actionRefs, triggerRefs }; + return { packNames, ruleRefs, actionRefs, triggerRefs, workflowActionRefs }; } /** diff --git a/web/src/hooks/useHistory.ts b/web/src/hooks/useHistory.ts index 9a092f6..aa871ce 100644 --- a/web/src/hooks/useHistory.ts +++ b/web/src/hooks/useHistory.ts @@ -5,11 +5,7 @@ import { apiClient } from "@/lib/api-client"; * Supported entity types for history queries. * Maps to the TimescaleDB history hypertables. 
*/ -export type HistoryEntityType = - | "execution" - | "worker" - | "enforcement" - | "event"; +export type HistoryEntityType = "execution" | "worker"; /** * A single history record from the API. @@ -68,8 +64,6 @@ export interface HistoryQueryParams { * Uses the entity-specific endpoints: * - GET /api/v1/executions/:id/history * - GET /api/v1/workers/:id/history - * - GET /api/v1/enforcements/:id/history - * - GET /api/v1/events/:id/history */ async function fetchEntityHistory( entityType: HistoryEntityType, @@ -79,8 +73,6 @@ async function fetchEntityHistory( const pluralMap: Record = { execution: "executions", worker: "workers", - enforcement: "enforcements", - event: "events", }; const queryParams: Record = {}; @@ -143,23 +135,3 @@ export function useWorkerHistory( ) { return useEntityHistory("worker", workerId, params); } - -/** - * Convenience hook for enforcement history. - */ -export function useEnforcementHistory( - enforcementId: number, - params: HistoryQueryParams = {}, -) { - return useEntityHistory("enforcement", enforcementId, params); -} - -/** - * Convenience hook for event history. 
- */ -export function useEventHistory( - eventId: number, - params: HistoryQueryParams = {}, -) { - return useEntityHistory("event", eventId, params); -} diff --git a/web/src/pages/actions/ActionsPage.tsx b/web/src/pages/actions/ActionsPage.tsx index 87d74bd..44a5ad0 100644 --- a/web/src/pages/actions/ActionsPage.tsx +++ b/web/src/pages/actions/ActionsPage.tsx @@ -2,7 +2,16 @@ import { Link, useParams, useNavigate } from "react-router-dom"; import { useActions, useAction, useDeleteAction } from "@/hooks/useActions"; import { useExecutions } from "@/hooks/useExecutions"; import { useState, useMemo } from "react"; -import { ChevronDown, ChevronRight, Search, X, Play, Plus } from "lucide-react"; +import { + ChevronDown, + ChevronRight, + Search, + X, + Play, + Plus, + GitBranch, + Pencil, +} from "lucide-react"; import ExecuteActionModal from "@/components/common/ExecuteActionModal"; import ErrorDisplay from "@/components/common/ErrorDisplay"; import { extractProperties } from "@/components/common/ParamSchemaForm"; @@ -177,7 +186,12 @@ export default function ActionsPage() { : "border-2 border-transparent hover:bg-gray-50" }`} > -
+
+ {action.workflow_def && ( + + + + )} {action.label}
@@ -236,6 +250,7 @@ export default function ActionsPage() { } function ActionDetail({ actionRef }: { actionRef: string }) { + const navigate = useNavigate(); const { data: action, isLoading, error } = useAction(actionRef); const { data: executionsData } = useExecutions({ actionRef: actionRef, @@ -290,6 +305,17 @@ function ActionDetail({ actionRef }: { actionRef: string }) {
+ {action.data?.workflow_def && ( + + )} @@ -558,6 +584,7 @@ export default function WorkflowBuilderPage() { }))} placeholder="Pack..." className="max-w-[140px]" + disabled={isEditing} /> / @@ -571,8 +598,9 @@ export default function WorkflowBuilderPage() { name: e.target.value.replace(/[^a-zA-Z0-9_-]/g, "_"), }) } - className="px-2 py-1.5 border border-gray-300 rounded text-sm font-mono focus:ring-2 focus:ring-blue-500 focus:border-blue-500 w-48" + className={`px-2 py-1.5 border border-gray-300 rounded text-sm font-mono w-48 ${isEditing ? "bg-gray-100 cursor-not-allowed text-gray-500" : "focus:ring-2 focus:ring-blue-500 focus:border-blue-500"}`} placeholder="workflow_name" + disabled={isEditing} /> diff --git a/web/src/pages/enforcements/EnforcementDetailPage.tsx b/web/src/pages/enforcements/EnforcementDetailPage.tsx index 7f4a6a8..ee8c99b 100644 --- a/web/src/pages/enforcements/EnforcementDetailPage.tsx +++ b/web/src/pages/enforcements/EnforcementDetailPage.tsx @@ -1,7 +1,6 @@ import { useParams, Link } from "react-router-dom"; import { useEnforcement } from "@/hooks/useEvents"; import { EnforcementStatus, EnforcementCondition } from "@/api"; -import EntityHistoryPanel from "@/components/common/EntityHistoryPanel"; export default function EnforcementDetailPage() { const { id } = useParams<{ id: string }>(); @@ -189,6 +188,18 @@ export default function EnforcementDetailPage() { {formatDate(enforcement.created)}
+
+
+ Resolved At +
+
+ {enforcement.resolved_at ? ( + formatDate(enforcement.resolved_at) + ) : ( + Pending + )} +
+
@@ -331,6 +342,14 @@ export default function EnforcementDetailPage() { {formatDate(enforcement.created)} + {enforcement.resolved_at && ( +
+
Resolved
+
+ {formatDate(enforcement.resolved_at)} +
+
+ )} @@ -377,15 +396,6 @@ export default function EnforcementDetailPage() { - - {/* Change History */} -
- -
); } diff --git a/web/src/pages/events/EventDetailPage.tsx b/web/src/pages/events/EventDetailPage.tsx index ac9cc48..b4028c1 100644 --- a/web/src/pages/events/EventDetailPage.tsx +++ b/web/src/pages/events/EventDetailPage.tsx @@ -1,6 +1,5 @@ import { useParams, Link } from "react-router-dom"; import { useEvent } from "@/hooks/useEvents"; -import EntityHistoryPanel from "@/components/common/EntityHistoryPanel"; export default function EventDetailPage() { const { id } = useParams<{ id: string }>(); @@ -259,15 +258,6 @@ export default function EventDetailPage() { - - {/* Change History */} -
- -
); } diff --git a/web/src/pages/executions/ExecutionDetailPage.tsx b/web/src/pages/executions/ExecutionDetailPage.tsx index b0ea1ec..a162162 100644 --- a/web/src/pages/executions/ExecutionDetailPage.tsx +++ b/web/src/pages/executions/ExecutionDetailPage.tsx @@ -22,6 +22,7 @@ import { useState, useMemo } from "react"; import { RotateCcw, Loader2 } from "lucide-react"; import ExecuteActionModal from "@/components/common/ExecuteActionModal"; import EntityHistoryPanel from "@/components/common/EntityHistoryPanel"; +import WorkflowTasksPanel from "@/components/common/WorkflowTasksPanel"; const getStatusColor = (status: string) => { switch (status) { @@ -116,6 +117,9 @@ export default function ExecutionDetailPage() { // Fetch the action so we can get param_schema for the re-run modal const { data: actionData } = useAction(execution?.action_ref || ""); + // Determine if this execution is a workflow (action has workflow_def) + const isWorkflow = !!actionData?.data?.workflow_def; + const [showRerunModal, setShowRerunModal] = useState(false); // Fetch status history for the timeline @@ -207,6 +211,11 @@ export default function ExecutionDetailPage() {

Execution #{execution.id}

+ {isWorkflow && ( + + Workflow + + )} @@ -247,6 +256,25 @@ export default function ExecutionDetailPage() { {execution.action_ref}

+ {execution.workflow_task && ( +

+ Task{" "} + + {execution.workflow_task.task_name} + + {execution.parent && ( + <> + in workflow + + Execution #{execution.parent} + + + )} +

+ )}
{/* Re-Run Modal */} @@ -504,6 +532,13 @@ export default function ExecutionDetailPage() {
+ {/* Workflow Tasks (shown only for workflow executions) */} + {isWorkflow && ( +
+ +
+ )} + {/* Change History */}
void; pageSize: number; total: number; + selectedExecutionId: number | null; + onSelectExecution: (id: number) => void; }) => { const totalPages = Math.ceil(total / pageSize); @@ -182,11 +192,20 @@ const ExecutionsResultsTable = memo( {executions.map((exec: any) => ( - + onSelectExecution(exec.id)} + > e.stopPropagation()} > #{exec.id} @@ -294,6 +313,15 @@ ExecutionsResultsTable.displayName = "ExecutionsResultsTable"; export default function ExecutionsPage() { const [searchParams] = useSearchParams(); + // --- View mode toggle --- + const [viewMode, setViewMode] = useState(() => { + const stored = localStorage.getItem(VIEW_MODE_STORAGE_KEY); + if (stored === "all" || stored === "workflow") return stored; + const param = searchParams.get("view"); + if (param === "all" || param === "workflow") return param; + return "all"; + }); + // --- Filter input state (updates immediately on keystroke) --- const [page, setPage] = useState(1); const pageSize = 50; @@ -342,8 +370,11 @@ export default function ExecutionsPage() { if (debouncedStatuses.length === 1) { params.status = debouncedStatuses[0] as ExecutionStatus; } + if (viewMode === "workflow") { + params.topLevelOnly = true; + } return params; - }, [page, pageSize, debouncedFilters, debouncedStatuses]); + }, [page, pageSize, debouncedFilters, debouncedStatuses, viewMode]); const { data, isLoading, isFetching, error } = useExecutions(queryParams); const { isConnected } = useExecutionStream({ enabled: true }); @@ -423,103 +454,181 @@ export default function ExecutionsPage() { Object.values(searchFilters).some((v) => v !== "") || selectedStatuses.length > 0; + const [selectedExecutionId, setSelectedExecutionId] = useState( + null, + ); + + const handleSelectExecution = useCallback((id: number) => { + setSelectedExecutionId((prev) => (prev === id ? 
null : id)); + }, []); + + const handleClosePreview = useCallback(() => { + setSelectedExecutionId(null); + }, []); + + const handleViewModeChange = useCallback((mode: ViewMode) => { + setViewMode(mode); + localStorage.setItem(VIEW_MODE_STORAGE_KEY, mode); + setPage(1); + }, []); + return ( -
- {/* Header - always visible */} -
-
-

Executions

- {isFetching && hasActiveFilters && ( -

- Searching executions... -

- )} -
- {isConnected && ( -
-
- Live Updates +
+ {/* Main content area */} +
+ {/* Header - always visible */} +
+
+

Executions

+ {isConnected && ( +
+
+ Live +
+ )} + {isFetching && hasActiveFilters && ( +

Searching executions...

+ )}
+
+ {/* View mode toggle */} +
+ + +
+
+
+ + {/* Filter section - always mounted, never unmounts during loading */} +
+
+
+ +

Filter Executions

+
+ {hasActiveFilters && ( + + )} +
+
+ handleFilterChange("pack", value)} + suggestions={packSuggestions} + placeholder="e.g., core" + /> + handleFilterChange("rule", value)} + suggestions={ruleSuggestions} + placeholder="e.g., core.on_timer" + /> + handleFilterChange("action", value)} + suggestions={actionSuggestions} + placeholder="e.g., core.echo" + /> + handleFilterChange("trigger", value)} + suggestions={triggerSuggestions} + placeholder="e.g., core.timer" + /> + handleFilterChange("executor", value)} + placeholder="e.g., 1" + /> +
+ +
+
+
+ + {/* Results section - isolated from filter state, only depends on query results */} + {viewMode === "all" ? ( + + ) : ( + )}
- {/* Filter section - always mounted, never unmounts during loading */} -
-
-
- -

Filter Executions

-
- {hasActiveFilters && ( - - )} + {/* Right-side preview panel */} + {selectedExecutionId && ( +
+
-
- handleFilterChange("pack", value)} - suggestions={packSuggestions} - placeholder="e.g., core" - /> - handleFilterChange("rule", value)} - suggestions={ruleSuggestions} - placeholder="e.g., core.on_timer" - /> - handleFilterChange("action", value)} - suggestions={actionSuggestions} - placeholder="e.g., core.echo" - /> - handleFilterChange("trigger", value)} - suggestions={triggerSuggestions} - placeholder="e.g., core.timer" - /> - handleFilterChange("executor", value)} - placeholder="e.g., 1" - /> -
- -
-
-
- - {/* Results section - isolated from filter state, only depends on query results */} - + )}
  );
}
diff --git a/work-summary/2026-02-27-execution-hypertable.md b/work-summary/2026-02-27-execution-hypertable.md
new file mode 100644
index 0000000..2fa76b7
--- /dev/null
+++ b/work-summary/2026-02-27-execution-hypertable.md
@@ -0,0 +1,59 @@
+# Execution Table → TimescaleDB Hypertable Conversion
+
+**Date**: 2026-02-27
+**Scope**: Database migration, Rust code fixes, AGENTS.md updates
+
+## Summary
+
+Converted the `execution` table from a regular PostgreSQL table to a TimescaleDB hypertable partitioned on `created` (1-day chunks), consistent with the existing `event` and `enforcement` hypertable conversions. This enables automatic time-based partitioning, compression, and retention for execution data.
+
+## Key Design Decisions
+
+- **`updated` column preserved**: Unlike `event` (immutable) and `enforcement` (single update), executions are updated ~4 times during their lifecycle. The `updated` column and its BEFORE UPDATE trigger are kept because the timeout monitor and UI depend on them.
+- **`execution_history` preserved**: The `execution_history` hypertable tracks field-level diffs, which remain valuable for a mutable table. Its continuous aggregates (`execution_status_hourly`, `execution_throughput_hourly`) are unchanged.
+- **7-day compression window is safe**: Executions complete within ~1 day at most, so all updates finish well before compression kicks in.
+- **New `execution_volume_hourly` continuous aggregate**: Queries the execution hypertable directly (like `event_volume_hourly` queries `event`), providing belt-and-suspenders volume monitoring alongside the history-based aggregates.
+
+## Changes
+
+### New Migration: `migrations/20250101000010_execution_hypertable.sql`
+- Drops all FK constraints referencing `execution` (inquiry, workflow_execution, self-references, action, executor, workflow_def)
+- Changes PK from `(id)` to `(id, created)` (TimescaleDB requirement)
+- Converts to hypertable with `create_hypertable('execution', 'created', chunk_time_interval => '1 day')`
+- Adds a compression policy (segmented by `action_ref`, after 7 days)
+- Adds a 90-day retention policy
+- Adds the `execution_volume_hourly` continuous aggregate with a 30-minute refresh policy
+
+### Rust Code Fixes
+- **`crates/executor/src/timeout_monitor.rs`**: Replaced `SELECT * FROM execution` with an explicit column list. `SELECT *` on hypertables is fragile — the execution table has columns (`is_workflow`, `workflow_def`) not present in the Rust `Execution` model.
+- **`crates/api/tests/sse_execution_stream_tests.rs`**: Fixed references to the non-existent `start_time` and `end_time` columns (replaced with `updated = NOW()`).
+- **`crates/common/src/repositories/analytics.rs`**: Added an `ExecutionVolumeBucket` struct and `execution_volume_hourly` / `execution_volume_hourly_by_action` repository methods for the new continuous aggregate.
+
+### AGENTS.md Updates
+- Added **Execution Table (TimescaleDB Hypertable)** documentation
+- Updated FK ON DELETE Policy to reflect execution as a hypertable
+- Updated Nullable FK Fields to list all dropped FK constraints
+- Updated table count (still 20) and migration count (9 → 10)
+- Updated continuous aggregate count (5 → 6)
+- Updated development status to include the execution hypertable
+- Added pitfall #19: never use `SELECT *` on hypertable-backed models
+- Added pitfall #20: execution/event/enforcement cannot be FK targets
+
+## FK Constraints Dropped
+
+| Source Column | Target | Disposition |
+|---|---|---|
+| `inquiry.execution` | `execution(id)` | Column kept as plain BIGINT |
+| `workflow_execution.execution` | `execution(id)` | Column kept as plain BIGINT |
+| `execution.parent` | `execution(id)` | Self-ref, column kept |
+| `execution.original_execution` | `execution(id)` | Self-ref, column kept |
+| `execution.workflow_def` | `workflow_definition(id)` | Column kept |
+| `execution.action` | `action(id)` | Column kept |
+| `execution.executor` | `identity(id)` | Column kept |
+| `execution.enforcement` | `enforcement(id)` | Already dropped in migration 000009 |
+
+## Verification
+
+- `cargo check --all-targets --workspace`: Zero warnings
+- `cargo test --workspace --lib`: All 90 unit tests pass
+- Integration test failures are pre-existing (missing `attune_test` database), unrelated to these changes
\ No newline at end of file
diff --git a/work-summary/2026-02-27-with-items-concurrency-limiting.md b/work-summary/2026-02-27-with-items-concurrency-limiting.md
new file mode 100644
index 0000000..976bf1e
--- /dev/null
+++ b/work-summary/2026-02-27-with-items-concurrency-limiting.md
@@ -0,0 +1,91 @@
+# `with_items` Concurrency Limiting Implementation
+
+**Date**: 2026-02-27
+**Scope**: `crates/executor/src/scheduler.rs`
+
+## Problem
+
+Workflow tasks with `with_items` and a `concurrency` limit dispatched all items simultaneously, ignoring the concurrency setting entirely. For example, a task with `concurrency: 3` and 20 items would dispatch all 20 at once instead of running at most 3 in parallel.
+
+## Root Cause
+
+The `dispatch_with_items_task` method iterated over all items in a single loop, creating a child execution and publishing it to the MQ for every item unconditionally. The `task_node.concurrency` value was logged but never used to gate dispatching.
+
+## Solution
+
+### Approach: DB-Based Sliding Window
+
+All child execution records are created in the database up front (with fully rendered inputs), but only the first `concurrency` items are published to the message queue. The remaining children stay at `Requested` status in the DB. As each item completes, `advance_workflow` queries for `Requested`-status siblings and publishes enough to refill the concurrency window.
+
+This avoids the need for any auxiliary state in workflow variables — the database itself is the single source of truth for which items are pending vs. in-flight.
+
+### Initial Attempt: Workflow Variables (Abandoned)
+
+The first implementation stored pending items as JSON metadata in `workflow_execution.variables` under `__pending_items__{task_name}`. This approach suffered from race conditions: when multiple items completed simultaneously, concurrent `advance_workflow` calls would read stale pending lists, pop the same item, and lose others. The result was that only the initial batch ever executed.
+
+### Key Changes
+
+#### 1. `dispatch_with_items_task` — Two-Phase Dispatch
+
+- **Phase 1**: Creates ALL child execution records in the database. Each row has its input already rendered through the `WorkflowContext`, so no re-rendering is needed later.
+- **Phase 2**: Publishes only the first `min(total, concurrency)` to the MQ via `publish_execution_requested`. The rest stay at `Requested` status.
+
+#### 2. `publish_execution_requested` — New Helper
+
+Publishes an `ExecutionRequested` MQ message for an existing execution row. Used both during initial dispatch (Phase 2) and when filling concurrency slots on completion.
+
+#### 3. `publish_pending_with_items_children` — Fill Concurrency Slots
+
+Replaces the old `dispatch_next_pending_with_items`. Queries the database for siblings at `Requested` status (ordered by `task_index`), limited to the number of free slots, and publishes them. No workflow variables are involved — the DB query `status = 'requested'` is the authoritative source of undispatched items.
+
+#### 4. `advance_workflow` — Concurrency-Aware Completion
+
+The `with_items` completion branch now:
+1. Counts **in-flight** siblings (`scheduling`, `scheduled`, `running` — NOT `requested`)
+2. Reads the `concurrency` limit from the task graph
+3. Calculates `free_slots = concurrency - in_flight`
+4. Calls `publish_pending_with_items_children(free_slots)` to fill the window
+5. Checks **all** non-terminal siblings (including `requested`) to decide whether to advance
+
+## Concurrency Flow Example
+
+For a task with 5 items and `concurrency: 3`:
+
+```
+Initial: Create items 0-4 in DB; publish items 0, 1, 2 to MQ
+         Items 3, 4 stay at Requested status in DB
+
+Item 0 ✓: in_flight=2 (items 1,2), free_slots=1 → publish item 3
+          siblings_remaining=3 (items 1,2,3,4 minus terminal) → return early
+
+Item 1 ✓: in_flight=2 (items 2,3), free_slots=1 → publish item 4
+          siblings_remaining=3 → return early
+
+Item 2 ✓: in_flight=2 (items 3,4), free_slots=1 → no Requested items left
+          siblings_remaining=2 → return early
+
+Item 3 ✓: in_flight=1 (item 4), free_slots=2 → no Requested items left
+          siblings_remaining=1 → return early
+
+Item 4 ✓: in_flight=0, free_slots=3 → no Requested items left
+          siblings_remaining=0 → advance workflow to successor tasks
+```
+
+## Race Condition Handling
+
+When multiple items complete simultaneously, concurrent `advance_workflow` calls may both query `status = 'requested'` and find the same pending items. The worst case is a brief over-dispatch (the same execution published to the MQ twice). The scheduler handles this gracefully — the second message finds the execution already at `Scheduled`/`Running` status. This is a benign, self-correcting race that never loses items.
+
+## Files Changed
+
+- **`crates/executor/src/scheduler.rs`**:
+  - Rewrote `dispatch_with_items_task` with the two-phase create-then-publish approach
+  - Added the `publish_execution_requested` helper for publishing existing execution rows
+  - Added `publish_pending_with_items_children` for DB-query-based slot filling
+  - Rewrote the `advance_workflow` `with_items` branch with in-flight counting and slot calculation
+  - Updated unit tests for the new approach
+
+## Testing
+
+- All 104 executor tests pass (102 + 2 ignored)
+- 2 new unit tests for dispatch count and free-slot calculations
+- Clean workspace build with no new warnings
\ No newline at end of file
diff --git a/work-summary/2026-02-27-workflow-execution-orchestration.md b/work-summary/2026-02-27-workflow-execution-orchestration.md
new file mode 100644
index 0000000..5d9953e
--- /dev/null
+++ b/work-summary/2026-02-27-workflow-execution-orchestration.md
@@ -0,0 +1,67 @@
+# Workflow Execution Orchestration & UI Ref-Lock Fix
+
+**Date**: 2026-02-27
+
+## Problem
+
+Two issues were addressed:
+
+### 1. Workflow ref editable during edit mode (UI)
+When editing an existing workflow action, the pack selector and workflow name fields were editable, allowing users to change the action's ref — which should be immutable after creation.
+
+### 2. Workflow execution runtime error
+Executing a workflow action produced:
+```
+Action execution failed: Internal error: Runtime not found: No runtime found for action: examples.single_echo (available: node.js, python, shell)
+```
+
+**Root cause**: Workflow companion actions are created with `runtime: None` (they aren't scripts — they're orchestration definitions). When the executor's scheduler received an execution request for a workflow action, it dispatched it to a worker like any regular action. The worker then tried to find a runtime to execute it, failed (no runtime matches a `.workflow.yaml` entrypoint), and returned the error.
+
+The `WorkflowCoordinator` in `crates/executor/src/workflow/coordinator.rs` existed as prototype code but was never integrated into the execution pipeline.
+
+## Solution
+
+### UI Fix (`web/src/pages/actions/WorkflowBuilderPage.tsx`)
+- Added `disabled={isEditing}` to the `SearchableSelect` pack selector (it already supported a `disabled` prop)
+- Added `disabled={isEditing}` and conditional disabled styling to the workflow name `<input>`
+- Both fields are now locked when editing an existing workflow, preventing ref changes
+
+### Workflow Orchestration (`crates/executor/src/scheduler.rs`)
+Added workflow detection and orchestration directly in the `ExecutionScheduler`:
+
+1. **Detection**: `process_execution_requested` checks `action.workflow_def.is_some()` before dispatching to a worker
+2. **`process_workflow_execution`**: Loads the workflow definition, parses it into a `WorkflowDefinition`, builds a `TaskGraph`, creates a `workflow_execution` record, and marks the parent execution as Running
+3. **`dispatch_workflow_task`**: For each entry-point task in the graph, creates a child execution with the task's actual action ref (e.g., `core.echo` instead of `examples.single_echo`) and publishes an `ExecutionRequested` message. The child execution includes `workflow_task` metadata linking it back to the `workflow_execution` record.
+4. **`advance_workflow`** (public): Called by the completion listener when a workflow child task completes. Evaluates transitions from the completed task, schedules successor tasks, checks join barriers, and completes the workflow when all tasks are done.
+5. **`complete_workflow`**: Updates both the `workflow_execution` and parent `execution` records to their terminal state.
+
+Key design decisions:
+- Child task executions re-enter the normal scheduling pipeline via the MQ, so nested workflows (a workflow task that is itself a workflow) are handled recursively
+- Transition evaluation supports `succeeded()`, `failed()`, `timed_out()`, `always`, and custom conditions (custom defaults to fire-on-success for now)
+- Join barriers are respected — tasks with `join` counts wait for enough predecessors
+
+### Completion Listener (`crates/executor/src/completion_listener.rs`)
+- Added workflow advancement: when a completed execution has `workflow_task` metadata, calls `ExecutionScheduler::advance_workflow` to schedule successor tasks or complete the workflow
+- Added an `AtomicUsize` round-robin counter for dispatching successor tasks to workers
+
+### Binary Entry Point (`crates/executor/src/main.rs`)
+- Added `mod workflow;` so the binary crate can resolve the `crate::workflow::graph::*` paths used in the scheduler
+
+## Files Changed
+
+| File | Change |
+|------|--------|
+| `web/src/pages/actions/WorkflowBuilderPage.tsx` | Disable pack selector and name input when editing |
+| `crates/executor/src/scheduler.rs` | Workflow detection, orchestration, task dispatch, advancement |
+| `crates/executor/src/completion_listener.rs` | Workflow advancement on child task completion |
+| `crates/executor/src/main.rs` | Added `mod workflow;` |
+
+## Architecture Note
+
+This implementation bypasses the prototype `WorkflowCoordinator` (`crates/executor/src/workflow/coordinator.rs`), which had several issues: hardcoded `attune.` schema prefixes, `SELECT *` on the execution table, duplicate parent execution creation, and no integration with the MQ-based scheduling pipeline. The new implementation works directly within the scheduler and completion listener, using the existing repository layer and message queue infrastructure.
+
+## Testing
+
+- Existing executor unit tests pass
+- Workspace compiles with zero errors
+- No new warnings introduced (pre-existing warnings from unused prototype workflow code remain)
\ No newline at end of file
diff --git a/work-summary/2026-02-27-workflow-param-resolution-fix.md b/work-summary/2026-02-27-workflow-param-resolution-fix.md
new file mode 100644
index 0000000..62a1f95
--- /dev/null
+++ b/work-summary/2026-02-27-workflow-param-resolution-fix.md
@@ -0,0 +1,50 @@
+# Workflow Parameter Resolution Fix
+
+**Date**: 2026-02-27
+**Scope**: `crates/executor/src/scheduler.rs`
+
+## Problem
+
+Workflow executions triggered via the API failed to resolve `{{ parameters.X }}` template expressions in task inputs. Instead of substituting the actual parameter value, the literal string `"{{ parameters.n }}"` was passed to the child action, causing runtime errors like:
+
+```
+ValueError: invalid literal for int() with base 10: '{{ parameters.n }}'
+```
+
+## Root Cause
+
+The execution scheduler's `process_workflow_execution` and `advance_workflow` methods extracted workflow parameters from the execution's `config` field using:
+
+```rust
+execution.config.as_ref()
+    .and_then(|c| c.get("parameters").cloned())
+    .unwrap_or(json!({}))
+```
+
+This only handled the **wrapped** format `{"parameters": {"n": 5}}`, which is how child task executions store their config. However, when a workflow is triggered manually via the API, the config is stored in **flat** format `{"n": 5}` — the API places `request.parameters` directly into the execution's `config` column without wrapping it.
+
+Because `config.get("parameters")` returned `None` for the flat format, `workflow_params` was set to `{}` (empty). The `WorkflowContext` was then built with no parameters, so `{{ parameters.n }}` failed to resolve. The error was silently swallowed by the fallback in `dispatch_workflow_task`, which used the raw (unresolved) input when template rendering failed.
+
+## Fix
+
+Added an `extract_workflow_params` helper function that handles both config formats, matching the existing logic in the worker's `ActionExecutor::prepare_execution_context`:
+
+1. If config contains a `"parameters"` key → use that value (wrapped format)
+2. Otherwise, if config is a JSON object → use the entire object as parameters (flat format)
+3. Otherwise → return an empty object
+
+Replaced both extraction sites in the scheduler (`process_workflow_execution` and `advance_workflow`) with calls to this helper.
+
+## Files Changed
+
+- **`crates/executor/src/scheduler.rs`**:
+  - Added the `extract_workflow_params()` helper function
+  - Updated `process_workflow_execution()` to use the helper
+  - Updated `advance_workflow()` to use the helper
+  - Added 6 unit tests covering wrapped, flat, None, non-object, empty, and precedence cases
+
+## Testing
+
+- All 104 existing executor tests pass
+- 6 new unit tests added and passing
+- No new warnings introduced
\ No newline at end of file
diff --git a/work-summary/2026-02-27-workflow-template-resolution.md b/work-summary/2026-02-27-workflow-template-resolution.md
new file mode 100644
index 0000000..6240eb3
--- /dev/null
+++ b/work-summary/2026-02-27-workflow-template-resolution.md
@@ -0,0 +1,73 @@
+# Workflow Template Resolution Implementation
+
+**Date**: 2026-02-27
+
+## Problem
+
+Workflow task parameters containing `{{ }}` template expressions were being passed to workers verbatim without resolution. For example, a workflow task with `seconds: "{{item}}"` would send the literal string `"{{item}}"` to `core.sleep`, which rejected it with `"ERROR: seconds must be a positive integer"`.
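To make the failure concrete: resolving `seconds: "{{item}}"` has to substitute the item's raw JSON value, not a stringified copy. The following TypeScript sketch is illustrative only — the actual implementation is the Rust `render_json` in `crates/executor/src/workflow/context.rs`, and the names here (`renderJson`, `lookup`, `Scope`) are invented for this example. It shows the key distinction: a *pure* expression (the whole string is one `{{ expr }}`) returns the raw JSON value, while mixed strings interpolate and therefore stringify.

```typescript
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };
type Scope = Record<string, Json>;

// Resolve a dotted path like "parameters.n" or "item" against the scope.
function lookup(scope: Scope, path: string): Json {
  return path.split(".").reduce<Json>((acc, key) => {
    if (acc !== null && typeof acc === "object" && !Array.isArray(acc)) {
      return (acc as { [k: string]: Json })[key] ?? null;
    }
    return null;
  }, scope as Json);
}

// Matches a string that is exactly one template expression.
const PURE = /^\{\{\s*([\w.]+)\s*\}\}$/;

function renderJson(value: Json, scope: Scope): Json {
  if (typeof value === "string") {
    // Pure expression: return the raw JSON value so integers stay integers.
    const pure = value.match(PURE);
    if (pure) return lookup(scope, pure[1]);
    // Mixed string: interpolate, which necessarily stringifies.
    return value.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, p) =>
      String(lookup(scope, p)),
    );
  }
  if (Array.isArray(value)) return value.map((v) => renderJson(v, scope));
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as { [k: string]: Json }).map(([k, v]) => [
        k,
        renderJson(v, scope),
      ]),
    );
  }
  return value;
}
```

With this distinction, `{ seconds: "{{ item }}" }` rendered against `{ item: 3 }` yields `{ seconds: 3 }` (a number), while a mixed string like `"sleep {{ item }}s"` interpolates to `"sleep 3s"`.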
+
+Three interconnected features were missing from the executor's workflow orchestration:
+
+1. **Template resolution** — `{{ item }}`, `{{ parameters.x }}`, `{{ result().data.items }}`, etc. in task inputs were never rendered through the `WorkflowContext` before dispatching child executions.
+2. **`with_items` expansion** — Tasks declaring `with_items: "{{ number_list }}"` were not expanded into multiple parallel child executions (one per item).
+3. **`publish` variable processing** — Transition `publish` directives like `number_list: "{{ result().data.items }}"` were ignored, so variables never propagated between tasks.
+
+A secondary issue was **type coercion**: `render_json` stringified all template results, so `"{{ item }}"` resolving to integer `5` became the string `"5"`, causing type validation failures in downstream actions.
+
+## Root Cause
+
+The `ExecutionScheduler::dispatch_workflow_task()` method passed `task_node.input` directly into the child execution's config without any template rendering. Neither `process_workflow_execution` (entry-point dispatch) nor `advance_workflow` (successor dispatch) constructed or used a `WorkflowContext`. The `publish` directives on transitions were completely ignored in `advance_workflow`.
+
+## Changes
+
+### `crates/executor/src/workflow/context.rs`
+
+- **Function-call expressions**: Added support for `result()`, `result().path.to.field`, `succeeded()`, `failed()`, and `timed_out()` in the expression evaluator via `try_evaluate_function_call()`.
+- **`TaskOutcome` enum**: New enum (`Succeeded`, `Failed`, `TimedOut`) to track the last completed task's status for function expressions.
+- **`set_last_task_outcome()`**: Records the result and outcome of the most recently completed task.
+- **Type-preserving `render_json`**: When a JSON string value is a pure template expression (the entire string is `{{ expr }}`), `render_json` now returns the raw `JsonValue` from the expression instead of stringifying it. Added a `try_evaluate_pure_expression()` helper. This means `"{{ item }}"` resolving to `5` stays as the integer `5`, not the string `"5"`.
+- **`rebuild()` constructor**: Reconstructs a `WorkflowContext` from persisted workflow state (stored variables, parameters, and completed task results). Used by the scheduler when advancing a workflow.
+- **`export_variables()`**: Exports workflow variables as a JSON object for persisting back to the `workflow_execution.variables` column.
+- **Updated `publish_from_result()`**: Uses type-preserving `render_json` for publish expressions so arrays/numbers/booleans retain their types.
+- **18 unit tests**: All passing, including new tests for type preservation, the `result()` function, `succeeded()`/`failed()`, publish with the result function, rebuild, and the exact `with_items` integer scenario from the failing workflow.
+
+### `crates/executor/src/scheduler.rs`
+
+- **Template resolution in `dispatch_workflow_task()`**: Now accepts a `WorkflowContext` parameter and renders `task_node.input` through `wf_ctx.render_json()` before wrapping it in the execution config.
+- **Initial context in `process_workflow_execution()`**: Builds a `WorkflowContext` from the parent execution's parameters and workflow-level vars, and passes it to entry-point task dispatch.
+- **Context reconstruction in `advance_workflow()`**: Rebuilds the `WorkflowContext` from the `workflow_execution.variables` column plus the results of all completed child executions. Sets `last_task_outcome` from the just-completed execution.
+- **`publish` processing**: Iterates transition `publish` directives when a transition fires, evaluates the expressions through the context, and persists updated variables back to the `workflow_execution` record.
+- **`with_items` expansion**: The new `dispatch_with_items_task()` method resolves the `with_items` expression to a JSON array, then creates one child execution per item with `item`/`index` set on the context. Each child gets `task_index` set in its `WorkflowTaskMetadata`.
+- **`with_items` completion tracking**: In `advance_workflow()`, tasks with `task_index` (indicating `with_items`) are only marked completed/failed when ALL sibling items for that task name are done.
+
+### `packs/examples/actions/list_example.sh` & `list_example.yaml`
+
+- Rewrote the shell script from `bash`+`jq` (unavailable in worker containers) to pure POSIX shell with DOTENV parameter parsing, matching the core pack pattern.
+- Changed `parameter_format` from `json` to `dotenv`.
+
+### `packs.external/python_example/actions/list_numbers.py` & `list_numbers.yaml`
+
+- New action `python_example.list_numbers` that returns `{"items": list(range(start, n+start))}`.
+- Parameters: `n` (default 10), `start` (default 0). JSON output format, Python ≥3.9.
+
+## Workflow Flow (After Fix)
+
+For the `examples.hello_workflow`:
+
+```
+1. generate_numbers task dispatched with rendered input {count: 5, n: 5}
+2. python_example.list_numbers returns {items: [0, 1, 2, 3, 4]}
+3. Transition publish: number_list = result().data.items → [0,1,2,3,4]
+   Variables persisted to workflow_execution record
+4. sleep_2 dispatched with with_items: "{{ number_list }}"
+   → 5 child executions created, each with item/index context
+   → seconds: "{{item}}" renders to 0, 1, 2, 3, 4 (integers, not strings)
+5. All sleep items complete → task marked done → echo_3 dispatched
+6.
Workflow completes +``` + +## Testing + +- All 96 executor unit tests pass (0 failures) +- All 18 workflow context tests pass (including 8 new tests) +- Full workspace compiles with no new warnings (30 pre-existing) \ No newline at end of file diff --git a/work-summary/2026-02-event-hypertable-migration.md b/work-summary/2026-02-event-hypertable-migration.md new file mode 100644 index 0000000..f914ce7 --- /dev/null +++ b/work-summary/2026-02-event-hypertable-migration.md @@ -0,0 +1,141 @@ +# Event & Enforcement Tables → TimescaleDB Hypertable Migration + +**Date:** 2026-02 +**Scope:** Database migrations, Rust models/repositories/API, Web UI + +## Summary + +Converted the `event` and `enforcement` tables from regular PostgreSQL tables to TimescaleDB hypertables, and removed the now-unnecessary `event_history` and `enforcement_history` tables. + +- **Events** are immutable after insert (never updated), so a separate change-tracking history table added no value. +- **Enforcements** are updated exactly once (~1 second after creation, to set status from `created` to `processed` or `disabled`), well before the 7-day compression window. A history table tracking one deterministic status change per row was unnecessary overhead. + +Both tables now benefit from automatic time-based partitioning, compression, and retention directly. + +## Motivation + +The `event_history` and `enforcement_history` hypertables were created alongside `execution_history` and `worker_history` to track field-level changes. However: + +- **Events** are never modified after creation — no code path in the API, executor, worker, or sensor ever updates an event row. The history trigger was recording INSERT operations only, duplicating data already in the `event` table. +- **Enforcements** undergo a single, predictable status transition (created → processed/disabled) within ~1 second. 
The history table recorded one INSERT and one UPDATE per enforcement — the INSERT was redundant, and the UPDATE only changed `status`. The new `resolved_at` column captures this lifecycle directly on the enforcement row itself. + +## Changes + +### Database Migrations + +**`000004_trigger_sensor_event_rule.sql`**: +- Removed `updated` column from the `event` table +- Removed `update_event_updated` trigger +- Replaced `updated` column with `resolved_at TIMESTAMPTZ` (nullable) on the `enforcement` table +- Removed `update_enforcement_updated` trigger +- Updated column comments for enforcement (status lifecycle, resolved_at semantics) + +**`000008_notify_triggers.sql`**: +- Updated enforcement NOTIFY trigger payloads: `updated` → `resolved_at` + +**`000009_timescaledb_history.sql`**: +- Removed `event_history` table, all its indexes, trigger function, trigger, compression and retention policies +- Removed `enforcement_history` table, all its indexes, trigger function, trigger, compression and retention policies +- Added hypertable conversion for `event` table: + - Dropped FK constraint from `enforcement.event` → `event(id)` + - Changed PK from `(id)` to `(id, created)` + - Converted to hypertable with 1-day chunk interval + - Compression segmented by `trigger_ref`, retention 90 days +- Added hypertable conversion for `enforcement` table: + - Dropped FK constraint from `execution.enforcement` → `enforcement(id)` + - Changed PK from `(id)` to `(id, created)` + - Converted to hypertable with 1-day chunk interval + - Compression segmented by `rule_ref`, retention 90 days +- Updated `event_volume_hourly` continuous aggregate to query `event` table directly +- Updated `enforcement_volume_hourly` continuous aggregate to query `enforcement` table directly + +### Rust Code — Events + +**`crates/common/src/models.rs`**: +- Removed `updated` field from `Event` struct +- Removed `Event` variant from `HistoryEntityType` enum + +**`crates/common/src/repositories/event.rs`**: +- 
Removed `UpdateEventInput` struct and `Update` trait implementation for `EventRepository` +- Updated all SELECT queries to remove `updated` column + +**`crates/api/src/dto/event.rs`**: +- Removed `updated` field from `EventResponse` + +**`crates/common/tests/event_repository_tests.rs`**: +- Removed all update tests +- Renamed timestamp test to `test_event_created_timestamp_auto_set` +- Updated `test_delete_event_enforcement_retains_event_id` (FK dropped, so enforcement.event is now a dangling reference after event deletion) + +### Rust Code — Enforcements + +**`crates/common/src/models.rs`**: +- Replaced `updated: DateTime<Utc>` with `resolved_at: Option<DateTime<Utc>>` on `Enforcement` struct +- Removed `Enforcement` variant from `HistoryEntityType` enum +- Updated `FromStr`, `Display`, and `table_name()` implementations (only `Execution` and `Worker` remain) + +**`crates/common/src/repositories/event.rs`**: +- Added `resolved_at: Option<DateTime<Utc>>` to `UpdateEnforcementInput` +- Updated all SELECT queries to use `resolved_at` instead of `updated` +- Update query no longer appends `, updated = NOW()` — `resolved_at` is set explicitly by the caller + +**`crates/api/src/dto/event.rs`**: +- Replaced `updated` with `resolved_at: Option<DateTime<Utc>>` on `EnforcementResponse` + +**`crates/executor/src/enforcement_processor.rs`**: +- Both status update paths (Processed and Disabled) now set `resolved_at: Some(chrono::Utc::now())` +- Updated test mock enforcement struct + +**`crates/common/tests/enforcement_repository_tests.rs`**: +- Updated all tests to use `resolved_at` instead of `updated` +- Renamed `test_create_enforcement_with_invalid_event_fails` → `test_create_enforcement_with_nonexistent_event_succeeds` (FK dropped) +- Renamed `test_enforcement_timestamps_auto_managed` → `test_enforcement_resolved_at_lifecycle` +- All `UpdateEnforcementInput` usages now include `resolved_at` field + +### Rust Code — History Infrastructure + +**`crates/api/src/routes/history.rs`**: +- Removed `get_event_history` and
`get_enforcement_history` endpoints +- Removed `/events/{id}/history` and `/enforcements/{id}/history` routes +- Updated doc comments to list only `execution` and `worker` + +**`crates/api/src/dto/history.rs`**: +- Updated entity type comment + +**`crates/common/src/repositories/entity_history.rs`**: +- Updated tests to remove `Event` and `Enforcement` variant assertions +- Both now correctly fail to parse as `HistoryEntityType` + +### Web UI + +**`web/src/pages/events/EventDetailPage.tsx`**: +- Removed `EntityHistoryPanel` component + +**`web/src/pages/enforcements/EnforcementDetailPage.tsx`**: +- Removed `EntityHistoryPanel` component +- Added `resolved_at` display in Overview card ("Resolved At" field, shows "Pending" when null) +- Added `resolved_at` display in Metadata sidebar + +**`web/src/hooks/useHistory.ts`**: +- Removed `"event"` and `"enforcement"` from `HistoryEntityType` union and `pluralMap` +- Removed `useEventHistory` and `useEnforcementHistory` convenience hooks + +**`web/src/hooks/useEnforcementStream.ts`**: +- Removed history query invalidation (no more enforcement_history table) + +### Documentation + +- Updated `AGENTS.md`: table counts (22→20), history entity list, FK policy, enforcement lifecycle (resolved_at), pitfall #17 +- Updated `docs/plans/timescaledb-entity-history.md`: removed event_history and enforcement_history from all tables, added notes about both hypertables + +## Key Design Decisions + +1. **Composite PK `(id, created)` on both tables**: Required by TimescaleDB — the partitioning column must be part of the PK. The `id` column retains its `BIGSERIAL` for unique identification; `created` is added for partitioning. + +2. **Dropped FKs targeting hypertables**: TimescaleDB hypertables cannot be the target of foreign key constraints. Affected: `enforcement.event → event(id)` and `execution.enforcement → enforcement(id)`. Both columns remain as plain BIGINT for application-level joins. 
Since the original FKs were `ON DELETE SET NULL` (soft references), this is a minor change — the columns may now become dangling references if the referenced row is deleted. + +3. **`resolved_at` instead of `updated`**: The `updated` column was a generic auto-managed timestamp. The new `resolved_at` column is semantically meaningful — it records specifically when the enforcement was resolved (status transitioned away from `created`). It is `NULL` while the enforcement is pending, making it easy to query for unresolved enforcements. The executor sets it explicitly alongside the status change. + +4. **Compression segmentation**: Event table segments by `trigger_ref`, enforcement table segments by `rule_ref` — matching the most common query patterns for each table. + +5. **90-day retention for both**: Aligned with execution history retention since events and enforcements are primary operational records in the event-driven pipeline. \ No newline at end of file diff --git a/work-summary/2026-02-remove-action-is-workflow.md b/work-summary/2026-02-remove-action-is-workflow.md new file mode 100644 index 0000000..a1b4ab0 --- /dev/null +++ b/work-summary/2026-02-remove-action-is-workflow.md @@ -0,0 +1,69 @@ +# Remove `is_workflow` from Action Table & Add Workflow Edit Button + +**Date**: 2026-02 + +## Summary + +Removed the redundant `is_workflow` boolean column from the `action` table throughout the entire stack. An action being a workflow is fully determined by having a non-null `workflow_def` FK — the boolean was unnecessary. Also added a workflow edit button and visual indicator to the Actions page UI. 
+ +## Changes + +### Backend — Drop `is_workflow` from Action + +**`crates/common/src/models.rs`** +- Removed `is_workflow: bool` field from the `Action` struct + +**`crates/common/src/repositories/action.rs`** +- Removed `is_workflow` from all SELECT column lists (9 queries) +- Updated `find_workflows()` to use `WHERE workflow_def IS NOT NULL` instead of `WHERE is_workflow = true` +- Updated `link_workflow_def()` to only `SET workflow_def = $2` (no longer sets `is_workflow = true`) + +**`crates/api/src/dto/action.rs`** +- Removed `is_workflow` field from `ActionResponse` and `ActionSummary` DTOs +- Added `workflow_def: Option<i64>` field to both DTOs (non-null means this action is a workflow) +- Updated `From` impls accordingly + +**`crates/api/src/validation/params.rs`** +- Removed `is_workflow` from test fixture `make_action()` + +**Comments updated in:** +- `crates/api/src/routes/workflows.rs` — companion action helper functions +- `crates/common/src/workflow/registrar.rs` — companion action creation +- `crates/executor/src/workflow/registrar.rs` — companion action creation + +### Database Migration + +**`migrations/20250101000006_workflow_system.sql`** (modified in-place, no production deployments) +- Removed `ADD COLUMN is_workflow BOOLEAN DEFAULT false NOT NULL` from ALTER TABLE +- Removed `idx_action_is_workflow` partial index +- Updated `workflow_action_link` view to use `LEFT JOIN action a ON a.workflow_def = wd.id` (dropped `AND a.is_workflow = true` filter) +- Updated column comment on `workflow_def` + +> Note: `execution.is_workflow` is a separate DB-level column used by PostgreSQL notification triggers and was NOT removed. It exists only in SQL (not in the Rust `Execution` model).
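Condensed to its essence, the backend change replaces a stored flag with a derived predicate. A minimal sketch (the struct and SQL constant here are illustrative, not the real model or repository code; IDs are `i64` per the data model):

```rust
// Sketch: with the boolean gone, "is this action a workflow?" is derived
// entirely from the workflow_def FK column.
struct Action {
    workflow_def: Option<i64>, // non-null => this action is a workflow
}

impl Action {
    fn is_workflow(&self) -> bool {
        self.workflow_def.is_some()
    }
}

// The repository-level equivalent of the same predicate, in the spirit of
// find_workflows() after the change:
const FIND_WORKFLOWS: &str = "SELECT * FROM action WHERE workflow_def IS NOT NULL";

fn main() {
    assert!(Action { workflow_def: Some(42) }.is_workflow());
    assert!(!Action { workflow_def: None }.is_workflow());
    println!("{}", FIND_WORKFLOWS);
}
```

Because the predicate is derivable everywhere the flag was read, no caller loses information when the column is dropped.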
+ +### Frontend — Workflow Edit Button & Indicator + +**TypeScript types updated** (4 files): +- `web/src/api/models/ActionResponse.ts` — added `workflow_def?: number | null` +- `web/src/api/models/ActionSummary.ts` — added `workflow_def?: number | null` +- `web/src/api/models/PaginatedResponse_ActionSummary.ts` — added `workflow_def?: number | null` +- `web/src/api/models/ApiResponse_ActionResponse.ts` — added `workflow_def?: number | null` + +**`web/src/pages/actions/ActionsPage.tsx`** +- **Action list sidebar**: Workflow actions now show a purple `GitBranch` icon next to their label +- **Action detail view**: Workflow actions show a purple "Edit Workflow" button (with `Pencil` icon) that navigates to `/actions/workflows/:ref/edit` + +### Prior Fix — Workflow Save Upsert (same session) + +**`web/src/pages/actions/WorkflowBuilderPage.tsx`** +- Fixed workflow save from "new" page when workflow already exists +- On 409 CONFLICT from POST, automatically falls back to PUT (update) with the same data +- Constructs the workflow ref as `{packRef}.{name}` for the fallback PUT call + +## Design Rationale + +The `is_workflow` boolean on the action table was fully redundant: +- A workflow action always has `workflow_def IS NOT NULL` +- A workflow action's entrypoint always ends in `.workflow.yaml` +- The executor detects workflows by looking up `workflow_definition` by ref, not by checking `is_workflow` +- No runtime code path depended on the boolean that couldn't use `workflow_def IS NOT NULL` instead \ No newline at end of file