Compare commits

...

3 Commits

SHA1        Message                          Date
67a1c02543  trying to run a gitea workflow   2026-03-04 22:36:16 -06:00
  Some checks failed:
    CI / Security Advisory Checks (push): Waiting to run
    CI / Rust Blocking Checks (push): Failing after 47s
    CI / Web Blocking Checks (push): Failing after 46s
    CI / Security Blocking Checks (push): Failing after 8s
    CI / Web Advisory Checks (push): Failing after 9s
7438f92502  working on workflows             2026-03-04 22:02:34 -06:00
b54aa3ec26  artifact management              2026-03-03 14:16:23 -06:00
92 changed files with 12339 additions and 1224 deletions

.gitea/workflows/ci.yml (new file, 141 lines)

@@ -0,0 +1,141 @@
name: CI

on:
  pull_request:
  push:
    branches:
      - main
      - master

env:
  CARGO_TERM_COLOR: always
  RUST_MIN_STACK: 16777216

jobs:
  rust-blocking:
    name: Rust Blocking Checks
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
        with:
          components: rustfmt, clippy
      - name: Rustfmt
        run: cargo fmt --all -- --check
      - name: Clippy
        run: cargo clippy --workspace --all-targets --all-features -- -D warnings
      - name: Tests
        run: cargo test --workspace --all-features
      - name: Install Rust security tooling
        run: cargo install --locked cargo-audit cargo-deny
      - name: Cargo Audit
        run: cargo audit
      - name: Cargo Deny
        run: cargo deny check

  web-blocking:
    name: Web Blocking Checks
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: web
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "22"
          cache: "npm"
          cache-dependency-path: web/package-lock.json
      - name: Install dependencies
        run: npm ci
      - name: ESLint
        run: npm run lint
      - name: TypeScript
        run: npm run typecheck
      - name: Build
        run: npm run build

  security-blocking:
    name: Security Blocking Checks
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install Gitleaks
        run: |
          mkdir -p "$HOME/bin"
          curl -sSfL https://raw.githubusercontent.com/gitleaks/gitleaks/master/install.sh \
            | sh -s -- -b "$HOME/bin" v8.24.2
      - name: Gitleaks
        run: |
          "$HOME/bin/gitleaks" git \
            --report-format sarif \
            --report-path gitleaks.sarif \
            --config .gitleaks.toml

  web-advisory:
    name: Web Advisory Checks
    runs-on: ubuntu-latest
    continue-on-error: true
    defaults:
      run:
        working-directory: web
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "22"
          cache: "npm"
          cache-dependency-path: web/package-lock.json
      - name: Install dependencies
        run: npm ci
      - name: Knip
        run: npm run knip
        continue-on-error: true
      - name: NPM Audit (prod deps)
        run: npm audit --omit=dev
        continue-on-error: true

  security-advisory:
    name: Security Advisory Checks
    runs-on: ubuntu-latest
    continue-on-error: true
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install Semgrep
        run: pip install semgrep
      - name: Semgrep
        run: semgrep scan --config p/default --error
        continue-on-error: true

.gitleaks.toml (new file, 16 lines)

@@ -0,0 +1,16 @@
title = "attune-gitleaks-config"

[allowlist]
description = "Known development credentials and examples"
regexes = [
  '''test@attune\.local''',
  '''TestPass123!''',
  '''JWT_SECRET''',
  '''ENCRYPTION_KEY''',
]
paths = [
  '''^docs/''',
  '''^reference/''',
  '''^web/openapi\.json$''',
  '''^work-summary/''',
]

.semgrepignore (new file, 7 lines)

@@ -0,0 +1,7 @@
target/
web/dist/
web/node_modules/
web/src/api/
packs/
packs.dev/
packs.external/

View File

@@ -102,6 +102,7 @@ docker compose logs -f <svc> # View logs
- **BuildKit cache mounts**: Persist cargo registry and compilation artifacts between builds
- **Cache strategy**: `sharing=shared` for registry/git (concurrent-safe), service-specific IDs for target caches
- **Parallel builds**: 4x faster than old `sharing=locked` strategy - no serialization overhead
- **Rustc stack size**: All Rust Dockerfiles set `ENV RUST_MIN_STACK=16777216` (16 MiB) in the build stage to prevent `rustc` SIGSEGV crashes during release compilation. The `Makefile` also exports this variable for local builds.
- **Documentation**: See `docs/docker-layer-optimization.md`, `docs/QUICKREF-docker-optimization.md`, `docs/QUICKREF-buildkit-cache-strategy.md`
### Docker Runtime Standardization
@@ -242,14 +243,14 @@ Completion listener advances workflow → Schedules successor tasks → Complete
**Migration Count**: 10 migrations (`000001` through `000010`) — see `migrations/` directory
- **Artifact System**: The `artifact` table stores metadata + structured data (progress entries via JSONB `data` column). The `artifact_version` table stores immutable content snapshots — either on disk (via `file_path` column) or in DB (via `content` BYTEA / `content_json` JSONB). Version numbering is auto-assigned via `next_artifact_version()` SQL function. A DB trigger (`enforce_artifact_retention`) auto-deletes oldest versions when count exceeds the artifact's `retention_limit`. `artifact.execution` is a plain BIGINT (no FK — execution is a hypertable). Progress-type artifacts use `artifact.data` (atomic JSON array append); file-type artifacts use `artifact_version` rows with `file_path` set. Binary content is excluded from default queries for performance (`SELECT_COLUMNS` vs `SELECT_COLUMNS_WITH_CONTENT`). **Visibility**: Each artifact has a `visibility` column (`artifact_visibility_enum`: `public` or `private`, DB default `private`). The `CreateArtifactRequest` DTO accepts `visibility` as `Option<ArtifactVisibility>` — when omitted the API route handler applies a **type-aware default**: `public` for Progress artifacts (informational status indicators), `private` for all other types. Callers can always override explicitly. Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. The visibility field is filterable via the search/list API (`?visibility=public`). Full RBAC enforcement is deferred — the column and basic query filtering are in place for future permission checks. **Notifications**: `artifact_created` and `artifact_updated` DB triggers (in migration `000008`) fire PostgreSQL NOTIFY with entity_type `artifact` and include `visibility` in the payload. The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry of the `data` JSONB array for progress-type artifacts. The Web UI `ExecutionProgressBar` component (`web/src/components/executions/ExecutionProgressBar.tsx`) renders an inline progress bar in the Execution Details card using the `useArtifactStream` hook (`web/src/hooks/useArtifactStream.ts`) for real-time WebSocket updates, with polling fallback via `useExecutionArtifacts`.
- **File-Based Artifact Storage**: File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use a shared filesystem volume instead of PostgreSQL BYTEA. The `artifact_version.file_path` column stores the relative path from the `artifacts_dir` root (e.g., `mypack/build_log/v1.txt`). Pattern: `{ref_with_dots_as_dirs}/v{version}.{ext}`. The artifact ref (globally unique) is used as the directory key — no execution ID in the path, so artifacts can outlive executions and be shared across them. **Endpoint**: `POST /api/v1/artifacts/{id}/versions/file` allocates a version number and file path without any file content; the execution process writes the file to `$ATTUNE_ARTIFACTS_DIR/{file_path}`. **Download**: `GET /api/v1/artifacts/{id}/download` and version-specific downloads check `file_path` first (read from disk), fall back to DB BYTEA/JSON. **Finalization**: After execution exits, the worker stats all file-backed versions for that execution and updates `size_bytes` on both `artifact_version` and parent `artifact` rows via direct DB access. **Cleanup**: Delete endpoints remove disk files before deleting DB rows; empty parent directories are cleaned up. **Backward compatible**: Existing DB-stored artifacts (`file_path = NULL`) continue to work unchanged.
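The `{ref_with_dots_as_dirs}/v{version}.{ext}` path pattern is simple enough to sketch (illustrative helper, not the actual Rust code):

```python
def artifact_file_path(ref: str, version: int, ext: str) -> str:
    """Derive the on-disk path for a file-backed artifact version,
    following the pattern described above: dots in the globally unique
    artifact ref become directory separators, so artifacts are keyed by
    ref rather than execution ID."""
    return f"{ref.replace('.', '/')}/v{version}.{ext}"
```

For example, ref `mypack.build_log` at version 1 with extension `txt` yields the doc's example path `mypack/build_log/v1.txt`.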
- **Pack Component Loading Order**: Runtimes → Triggers → Actions (+ workflow definitions) → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order. When an action YAML contains a `workflow_file` field, the loader creates/updates the referenced `workflow_definition` record and links it to the action during the Actions phase.
### Workflow Execution Orchestration
- **Detection**: The `ExecutionScheduler` checks `action.workflow_def.is_some()` before dispatching to a worker. Workflow actions are orchestrated by the executor, not sent to workers.
- **Orchestration Flow**: Scheduler loads the `WorkflowDefinition`, builds a `TaskGraph`, creates a `workflow_execution` record, marks the parent execution as Running, builds an initial `WorkflowContext` from execution parameters and workflow vars, then dispatches entry-point tasks as child executions via MQ with rendered inputs.
- **Template Resolution**: Task inputs are rendered through `WorkflowContext.render_json()` before dispatching. Uses the expression engine for full operator/function support inside `{{ }}`. Canonical namespaces: `parameters`, `workflow` (mutable vars), `task` (results), `config` (pack config), `keystore` (secrets), `item`, `index`, `system`. Backward-compat aliases: `vars`/`variables` → `workflow`, `tasks` → `task`, bare names → `workflow` fallback. **Type-preserving**: pure template expressions like `"{{ item }}"` preserve the JSON type (integer `5` stays as `5`, not string `"5"`). Mixed expressions like `"Sleeping for {{ item }} seconds"` remain strings.
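The type-preserving rendering rule can be illustrated with a minimal Python sketch (the actual engine is the Rust expression engine behind `render_json()`; this only models the pure-vs-mixed distinction):

```python
import re

_EXPR = r"\{\{\s*(\w+(?:\.\w+)*)\s*\}\}"

def render_value(template: str, ctx: dict):
    """Sketch of type-preserving template rendering: a value that is
    exactly one {{ expr }} returns the resolved JSON value with its type
    intact; mixed text interpolates and stays a string."""
    def resolve(path: str):
        val = ctx
        for part in path.split("."):
            val = val[part]
        return val

    pure = re.fullmatch(_EXPR, template)
    if pure:
        # Pure expression: preserve the underlying JSON type.
        return resolve(pure.group(1))
    # Mixed expression: every placeholder is stringified.
    return re.sub(_EXPR, lambda m: str(resolve(m.group(1))), template)
```

With `ctx = {"item": 5}`, `render_value("{{ item }}", ctx)` yields the integer `5`, while `render_value("Sleeping for {{ item }} seconds", ctx)` yields the string `"Sleeping for 5 seconds"`, matching the examples above.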
- **Function Expressions**: `{{ result() }}` returns the last completed task's result. `{{ result().field.subfield }}` navigates into it. `{{ succeeded() }}`, `{{ failed() }}`, `{{ timed_out() }}` return booleans. These are evaluated by `WorkflowContext.try_evaluate_function_call()`.
- **Publish Directives**: Transition `publish` directives are evaluated when a transition fires. Published variables are persisted to the `workflow_execution.variables` column and available to subsequent tasks via the `workflow` namespace (e.g., `{{ workflow.number_list }}`). Values can be **any JSON-compatible type**: string templates (e.g., `number_list: "{{ result().data.items }}"`), booleans (`validation_passed: true`), numbers (`count: 42`), arrays, objects, or null. The `PublishDirective::Simple` variant stores `HashMap<String, serde_json::Value>`. String values are template-rendered with type preservation (pure `{{ }}` expressions preserve the underlying JSON type); non-string values (booleans, numbers, null) pass through `render_json` unchanged — `true` stays as boolean `true`, not string `"true"`. The `PublishVar` struct in `graph.rs` uses a `value: JsonValue` field (with `#[serde(alias = "expression")]` for backward compat with stored task graphs).
- **Child Task Dispatch**: Each workflow task becomes a child execution with the task's actual action ref (e.g., `core.echo`), `workflow_task` metadata linking it to the `workflow_execution` record, and a parent reference to the workflow execution. Child executions re-enter the normal scheduling pipeline, so nested workflows work recursively.
- **with_items Expansion**: Tasks declaring `with_items: "{{ expr }}"` are expanded into child executions. The expression is resolved via the `WorkflowContext` to produce a JSON array, then each item gets its own child execution with `item`/`index` set on the context and `task_index` in `WorkflowTaskMetadata`. Completion tracking waits for ALL sibling items to finish before marking the task as completed/failed and advancing the workflow.
- **with_items Concurrency Limiting**: ALL child execution records are created in the database up front (with fully-rendered inputs), but only the first `N` are published to the message queue where `N` is the task's `concurrency` value (**default: 1**, i.e. serial execution). The remaining children stay at `Requested` status in the DB. As each item completes, `advance_workflow` counts in-flight siblings (`scheduling`/`scheduled`/`running`), calculates free slots (`concurrency - in_flight`), and calls `publish_pending_with_items_children()` which queries for `Requested`-status siblings ordered by `task_index` and publishes them. The DB `status = 'requested'` query is the authoritative source of undispatched items — no auxiliary state in workflow variables needed. The task is only marked complete when all siblings reach a terminal state. To run all items in parallel, explicitly set `concurrency` to the list length or a suitably large number.
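The slot math above is compact enough to sketch (illustrative Python; the real logic lives in `advance_workflow` and `publish_pending_with_items_children()` on the Rust side):

```python
IN_FLIGHT = {"scheduling", "scheduled", "running"}

def free_slots(concurrency: int, sibling_statuses: list[str]) -> int:
    """free = concurrency - in_flight, clamped at zero."""
    in_flight = sum(s in IN_FLIGHT for s in sibling_statuses)
    return max(0, concurrency - in_flight)

def next_to_publish(concurrency: int, siblings: list[tuple[int, str]]) -> list[int]:
    """siblings: (task_index, status) pairs. Returns the task_indexes of
    the Requested items to publish now, ordered by task_index — mirroring
    the authoritative status = 'requested' DB query described above."""
    statuses = [s for _, s in siblings]
    pending = sorted(i for i, s in siblings if s == "requested")
    return pending[: free_slots(concurrency, statuses)]
```

For example, with one sibling still running and two `Requested`, `concurrency: 2` frees exactly one slot, so only the lowest-indexed pending item is published.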
@@ -264,7 +265,13 @@ Completion listener advances workflow → Schedules successor tasks → Complete
- Development packs in `./packs.dev/` are bind-mounted directly for instant updates
- **Pack Binaries**: Native binaries (sensors) built separately with `./scripts/build-pack-binaries.sh`
- **Action Script Resolution**: Worker constructs file paths as `{packs_base_dir}/{pack_ref}/actions/{entrypoint}`
- **Workflow Action YAML (`workflow_file` field)**: An action YAML may include a `workflow_file` field (e.g., `workflow_file: workflows/timeline_demo.yaml`) pointing to a workflow definition file relative to the `actions/` directory. When present, the `PackComponentLoader` reads and parses the referenced workflow YAML, creates/updates a `workflow_definition` record, and links the action to it via `action.workflow_def`. This separates action-level metadata (ref, label, parameters, policies) from the workflow graph (tasks, transitions, variables), and allows **multiple actions to reference the same workflow file** with different parameter schemas or policy configurations. Workflow actions have no `runner_type` (runtime is `None`) — the executor orchestrates child task executions rather than sending to a worker.
- **Action-linked workflow files omit action-level metadata**: Workflow files referenced via `workflow_file` should contain **only the execution graph**: `version`, `vars`, `tasks`, `output_map`. The `ref`, `label`, `description`, `parameters`, `output`, and `tags` fields are omitted — the action YAML is the single authoritative source for those values. The `WorkflowDefinition` parser accepts empty `ref`/`label` (defaults to `""`), and the loader / registrar fall back to the action YAML (or filename-derived values) when they are missing. Standalone workflow files (in `workflows/`) still carry their own `ref`/`label` since they have no companion action YAML.
- **Workflow File Storage**: The visual workflow builder save endpoints (`POST /api/v1/packs/{pack_ref}/workflow-files` and `PUT /api/v1/workflows/{ref}/file`) write **two files** per workflow:
1. **Action YAML** at `{packs_base_dir}/{pack_ref}/actions/{name}.yaml` — action-level metadata (`ref`, `label`, `description`, `parameters`, `output`, `tags`, `workflow_file` reference, `enabled`). Built by `build_action_yaml()` in `crates/api/src/routes/workflows.rs`.
2. **Workflow YAML** at `{packs_base_dir}/{pack_ref}/actions/workflows/{name}.workflow.yaml` — graph-only (`version`, `vars`, `tasks`, `output_map`). The `strip_action_level_fields()` function removes `ref`, `label`, `description`, `parameters`, `output`, and `tags` from the definition before writing.
Pack-bundled workflows use the same directory layout and are discovered during pack registration when their companion action YAML contains `workflow_file`.
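The split between the two files can be sketched in Python (illustrative rendition of the Rust `strip_action_level_fields()` described above, operating on a parsed YAML dict):

```python
# Fields that belong to the action YAML, never the workflow YAML.
ACTION_LEVEL_FIELDS = {"ref", "label", "description", "parameters", "output", "tags"}

def strip_action_level_fields(definition: dict) -> dict:
    """Keep only graph-level keys (version, vars, tasks, output_map, ...)
    before writing the .workflow.yaml file."""
    return {k: v for k, v in definition.items() if k not in ACTION_LEVEL_FIELDS}
```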
- **Workflow File Discovery (dual-directory scanning)**: The `WorkflowLoader` scans **two** directories when loading workflows for a pack: (1) `{pack_dir}/workflows/` (legacy standalone workflow files), and (2) `{pack_dir}/actions/workflows/` (visual-builder and action-linked workflow files). Files with `.workflow.yaml` suffix have the `.workflow` portion stripped when deriving the workflow name/ref (e.g., `deploy.workflow.yaml` → name `deploy`, ref `pack.deploy`). If the same ref appears in both directories, `actions/workflows/` wins. The `reload_workflow` method searches `actions/workflows/` first, trying `.workflow.yaml`, `.yaml`, `.workflow.yml`, and `.yml` extensions.
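The filename-to-name rule above (strip the extension, then a trailing `.workflow` suffix) can be sketched as (illustrative helper, not the actual `WorkflowLoader` code):

```python
def workflow_name_from_file(filename: str) -> str:
    """Derive a workflow name from a file in workflows/ or
    actions/workflows/: drop .yaml/.yml, then a trailing .workflow."""
    name = filename
    for ext in (".yaml", ".yml"):
        if name.endswith(ext):
            name = name[: -len(ext)]
            break
    if name.endswith(".workflow"):
        name = name[: -len(".workflow")]
    return name
```

So `deploy.workflow.yaml` and `deploy.yaml` both yield the name `deploy` (and hence ref `pack.deploy`), matching the discovery rule above.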
- **Task Model (Orquesta-aligned)**: Tasks are purely action invocations — there is no task `type` field or task-level `when` condition in the UI model. Parallelism is implicit (multiple `do` targets in a transition fan out into parallel branches). Conditions belong exclusively on transitions (`next[].when`). Each task has: `name`, `action`, `input`, `next` (transitions), `delay`, `retry`, `timeout`, `with_items`, `batch_size`, `concurrency`, `join`.
- The backend `Task` struct (`crates/common/src/workflow/parser.rs`) still supports `type` and task-level `when` for backward compatibility, but the UI never sets them. - The backend `Task` struct (`crates/common/src/workflow/parser.rs`) still supports `type` and task-level `when` for backward compatibility, but the UI never sets them.
- **Task Transition Model (Orquesta-style)**: Tasks use an ordered `next` array of transitions instead of flat `on_success`/`on_failure`/`on_complete`/`on_timeout` fields. Each transition has:
@@ -315,6 +322,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete
- **Web UI**: `extractProperties()` in `ParamSchemaForm.tsx` is the single extraction function for all schema types. Only handles flat format.
- **SchemaBuilder**: Visual schema editor reads and writes flat format with `required` and `secret` checkboxes per parameter.
- **Backend Validation**: `flat_to_json_schema()` in `crates/api/src/validation/params.rs` converts flat format to JSON Schema internally for `jsonschema` crate validation. This conversion is an implementation detail — external interfaces always use flat format.
- **Execution Config Format (Flat)**: The `execution.config` JSONB column always stores parameters in **flat format** — the object itself IS the parameters map (e.g., `{"url": "https://...", "method": "GET"}`). This is consistent across all execution sources: manual API calls, rule-triggered enforcements, and workflow task children. There is **no `{"parameters": {...}}` wrapper** — never nest parameters under a `"parameters"` key. The worker reads `config` as a flat object and passes each key-value pair as an action parameter. The scheduler's `extract_workflow_params()` helper treats the config object directly as the parameters map.
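A tiny sketch of the flat-format rule (the helper name mirrors the `extract_workflow_params()` mentioned above, but this Python is purely illustrative):

```python
def params_from_config(config: dict) -> dict:
    """The execution.config object IS the parameter map, so extraction
    is the identity — there is no wrapper key to unwrap.

    Correct shape:  {"url": "https://...", "method": "GET"}
    Never this:     {"parameters": {"url": "...", "method": "..."}}
    """
    return dict(config)
```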
- **Parameter Delivery**: Actions receive parameters via stdin as JSON (never environment variables)
- **Output Format**: Actions declare output format (text/json/yaml) - json/yaml are parsed into execution.result JSONB
- **Standard Environment Variables**: Worker provides execution context via `ATTUNE_*` environment variables:
@@ -444,12 +452,24 @@ input:
- **Styling**: Tailwind utility classes
- **Dev Server**: `npm run dev` (typically :3000 or :5173)
- **Build**: `npm run build`
- **Workflow Timeline DAG**: Prefect-style workflow run timeline visualization on the execution detail page for workflow executions
- Components in `web/src/components/executions/workflow-timeline/` (WorkflowTimelineDAG, TimelineRenderer, types, data, layout)
- Pure SVG renderer — no D3, no React Flow, no additional npm dependencies
- Renders child task executions as horizontal duration bars on a time axis with curved Bezier dependency edges
- **Data flow**: `WorkflowTimelineDAG` (orchestrator) fetches child executions via `useChildExecutions` + workflow definition via `useWorkflow(actionRef)` → `data.ts` transforms into `TimelineTask[]`/`TimelineEdge[]`/`TimelineMilestone[]` → `layout.ts` computes lane assignments + positions → `TimelineRenderer` renders SVG
- **Edge coloring from workflow metadata**: Fetches the workflow definition's `next` transition array, classifies `when` expressions (`{{ succeeded() }}` → green, `{{ failed() }}` → red dashed, `{{ timed_out() }}` → orange dash-dot, unconditional → gray), and reads `__chart_meta__` custom labels/colors
- **Task bars**: Colored by state (green=completed, blue=running with pulse animation, red=failed, gray=pending, orange=timeout). Left accent bar, text label with ellipsis clipping, timeout indicator badge.
- **Milestones**: Synthetic start/end diamond nodes + merge/fork junctions when fan-in/fan-out exceeds 3 tasks
- **Lane packing**: Greedy algorithm assigns tasks to non-overlapping y-lanes sorted by start time, with optional reordering to cluster tasks sharing upstream dependencies
- **Interactions**: Hover tooltip (name, state, times, duration, retries, upstream/downstream counts), click-to-select with BFS path highlighting, double-click to navigate to child execution, horizontal zoom (mouse wheel anchored to cursor), alt+drag pan, expand/compact toggle
- **Fallback**: When no workflow definition is available, infers dependency edges from task timing heuristics
- **Integration**: Rendered in `ExecutionDetailPage.tsx` above `WorkflowTasksPanel`, conditioned on `isWorkflow`. Shares TanStack Query cache with WorkflowTasksPanel. Accepts `ParentExecutionInfo` interface (satisfied by both `ExecutionResponse` and `ExecutionSummary`).
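The greedy lane-packing idea above can be sketched in a few lines of Python (illustrative only — the real layout lives in `layout.ts` and also handles the dependency-clustering reorder):

```python
def assign_lanes(tasks: list[tuple[float, float]]) -> list[int]:
    """Greedy interval lane packing: tasks are (start, end) bars sorted
    by start time; each bar takes the first lane whose previous bar has
    already ended, otherwise a new lane is opened.

    Returns one lane index per bar, in start-time order."""
    lane_ends: list[float] = []  # end time of the last bar in each lane
    lanes: list[int] = []
    for start, end in sorted(tasks):
        for i, lane_end in enumerate(lane_ends):
            if lane_end <= start:  # lane i is free again
                lane_ends[i] = end
                lanes.append(i)
                break
        else:  # every lane overlaps this bar: open a new one
            lane_ends.append(end)
            lanes.append(len(lane_ends) - 1)
    return lanes
```

Two overlapping bars land in different lanes; a bar starting after lane 0 frees up reuses it, which keeps the timeline vertically compact.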
- **Workflow Builder**: Visual node-based workflow editor at `/actions/workflows/new` and `/actions/workflows/:ref/edit`
  - Components in `web/src/components/workflows/` (ActionPalette, WorkflowCanvas, TaskNode, WorkflowEdges, TaskInspector)
  - Types and conversion utilities in `web/src/types/workflow.ts`
  - Hooks in `web/src/hooks/useWorkflows.ts`
  - Saves workflow files to `{packs_base_dir}/{pack_ref}/actions/workflows/{name}.workflow.yaml` via dedicated API endpoints
- **Visual / Raw YAML toggle**: Toolbar has a segmented toggle to switch between the visual node-based builder and a full-width read-only YAML preview (generated via `js-yaml`). Raw YAML mode replaces the canvas, palette, and inspector with the effective workflow definition. - **Visual / Raw YAML toggle**: Toolbar has a segmented toggle to switch between the visual node-based builder and a two-panel read-only YAML preview (generated via `js-yaml`). Raw YAML mode replaces the canvas, palette, and inspector with side-by-side panels: **Action YAML** (left, blue — `actions/{name}.yaml`: ref, label, parameters, output, tags, `workflow_file` reference) and **Workflow YAML** (right, green — `actions/workflows/{name}.workflow.yaml`: version, vars, tasks, output_map — graph only). Each panel has its own copy button and a description bar explaining the file's role. The `builderStateToGraph()` function extracts the graph-only definition, and `builderStateToActionYaml()` extracts the action metadata.
- **Drag-handle connections**: TaskNode has output handles (green=succeeded, red=failed, gray=always) and an input handle (top). Drag from an output handle to another node's input handle to create a transition. - **Drag-handle connections**: TaskNode has output handles (green=succeeded, red=failed, gray=always) and an input handle (top). Drag from an output handle to another node's input handle to create a transition.
- **Transition customization**: Users can rename transitions (custom `label`) and assign custom colors (CSS color string or preset swatches) via the TaskInspector. Custom colors/labels are persisted in the workflow YAML and rendered on the canvas edges. - **Transition customization**: Users can rename transitions (custom `label`) and assign custom colors (CSS color string or preset swatches) via the TaskInspector. Custom colors/labels are persisted in the workflow YAML and rendered on the canvas edges.
- **Edge waypoints & label dragging**: Transition edges support intermediate waypoints for custom routing. Click an edge to select it, then: - **Edge waypoints & label dragging**: Transition edges support intermediate waypoints for custom routing. Click an edge to select it, then:
@@ -509,16 +529,33 @@ make db-reset # Drop & recreate DB
cargo install --path crates/cli # Install CLI
attune auth login # Login
attune pack list # List packs
attune pack create --ref my_pack # Create empty pack (non-interactive)
attune pack create -i # Create empty pack (interactive prompts)
attune pack upload ./path/to/pack # Upload local pack to API (works with Docker)
attune pack register /opt/attune/packs/mypak # Register from API-visible path
attune action execute <ref> --param key=value
attune execution list # Monitor executions
attune workflow upload actions/deploy.yaml # Upload workflow action to existing pack
attune workflow upload actions/deploy.yaml --force # Update existing workflow
attune workflow list # List all workflows
attune workflow list --pack core # List workflows in a pack
attune workflow show core.install_packs # Show workflow details + task summary
attune workflow delete core.my_workflow --yes # Delete a workflow
```
**Pack Upload vs Register**:
- `attune pack upload <local-path>` — Tarballs the local directory and POSTs it to `POST /api/v1/packs/upload`. Works regardless of whether the API is local or in Docker. This is the primary way to install packs from your local machine into a Dockerized system.
- `attune pack register <server-path>` — Sends a filesystem path string to the API (`POST /api/v1/packs/register`). Only works if the path is accessible from inside the API container (e.g. `/opt/attune/packs/...` or `/opt/attune/packs.dev/...`).
**Workflow Upload** (`attune workflow upload <action-yaml-path>`):
- Reads the local action YAML file and extracts the `workflow_file` field to find the companion workflow YAML
- Determines the pack from the action ref (e.g., `mypack.deploy` → pack `mypack`, name `deploy`)
- The `workflow_file` path is resolved relative to the action YAML's parent directory (same as how pack loaders resolve it relative to the `actions/` directory)
- Constructs a `SaveWorkflowFileRequest` JSON payload combining action metadata (label, parameters, output, tags) with the workflow definition (version, vars, tasks, output_map) and POSTs to `POST /api/v1/packs/{pack_ref}/workflow-files`
- On 409 Conflict (workflow already exists), fails unless `--force` is passed, in which case it PUTs to `PUT /api/v1/workflows/{ref}/file` to update
- Does NOT require a full pack upload — individual workflow actions can be added to existing packs independently
- **Important**: The action YAML MUST contain a `workflow_file` field; regular (non-workflow) actions should be uploaded as part of a pack
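The ref-splitting and `workflow_file` resolution steps described above can be sketched as follows (a simplified sketch — the helper names are illustrative, not the actual CLI functions):

```rust
use std::path::{Path, PathBuf};

/// Split an action ref like "mypack.deploy" into (pack, name).
/// Returns None if the ref has no '.' separator.
fn split_action_ref(action_ref: &str) -> Option<(&str, &str)> {
    action_ref.split_once('.')
}

/// Resolve the `workflow_file` field relative to the action YAML's parent
/// directory, mirroring how pack loaders resolve it relative to `actions/`.
fn resolve_workflow_file(action_yaml: &Path, workflow_file: &str) -> PathBuf {
    action_yaml
        .parent()
        .unwrap_or_else(|| Path::new("."))
        .join(workflow_file)
}

fn main() {
    assert_eq!(split_action_ref("mypack.deploy"), Some(("mypack", "deploy")));
    let p = resolve_workflow_file(
        Path::new("actions/deploy.yaml"),
        "workflows/deploy.workflow.yaml",
    );
    println!("{}", p.display()); // actions/workflows/deploy.workflow.yaml
}
```

On a 409 Conflict the CLI would then retry with `PUT /api/v1/workflows/{ref}/file` only when `--force` is set, as described above.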
**Pack Upload API endpoint**: `POST /api/v1/packs/upload` — accepts `multipart/form-data` with:
- `pack` (required): a `.tar.gz` archive of the pack directory
- `force` (optional, text): `"true"` to overwrite an existing pack with the same ref
@@ -606,20 +643,21 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
4. **NEVER** commit secrets in config files (use env vars in production)
5. **NEVER** hardcode schema prefixes in SQL queries - rely on PostgreSQL `search_path` mechanism
6. **NEVER** copy packs into Dockerfiles - they are mounted as volumes
7. **NEVER** put workflow definition content directly in action YAML — use a separate `.workflow.yaml` file in `actions/workflows/` and reference it via `workflow_file` in the action YAML
8. **ALWAYS** use PostgreSQL enum type mappings for custom enums
9. **ALWAYS** use transactions for multi-table operations
10. **ALWAYS** start with `attune/` or correct crate name when specifying file paths
11. **ALWAYS** convert runtime names to lowercase for comparison (database may store capitalized)
12. **ALWAYS** use optimized Dockerfiles for new services (selective crate copying)
13. **REMEMBER** IDs are `i64`, not `i32` or `uuid`
14. **REMEMBER** schema is determined by `search_path`, not hardcoded in queries (production uses `attune`, development uses `public`)
15. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
16. **REMEMBER** packs are volumes - update with restart, not rebuild
17. **REMEMBER** to build pack binaries separately: `./scripts/build-pack-binaries.sh`
18. **REMEMBER** when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable with ~4 updates per row).
19. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history`
20. **NEVER** use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow`, `execution.workflow_def` exist in SQL but not in the `Execution` model). Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`). Unmapped columns cause runtime deserialization failures.
21. **REMEMBER** `execution`, `event`, and `enforcement` are all TimescaleDB hypertables — they **cannot be the target of FK constraints**. Any column referencing them (e.g., `inquiry.execution`, `workflow_execution.execution`, `execution.parent`) is a plain BIGINT with no FK and may become a dangling reference.
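Rule 20's `SELECT_COLUMNS` pattern looks roughly like this (the column names below are illustrative, not the real `execution` schema):

```rust
/// Explicit column list shared by every query that maps into the entity's
/// `FromRow` struct, so DB-only columns (e.g. `is_workflow`) never reach
/// the row decoder.
pub const SELECT_COLUMNS: &str = "id, status, created_at, updated_at";

/// Build a lookup query that embeds the shared column list instead of `SELECT *`.
fn find_by_id_sql() -> String {
    format!("SELECT {SELECT_COLUMNS} FROM execution WHERE id = $1")
}

fn main() {
    println!("{}", find_by_id_sql());
}
```

Code outside the repository (such as a timeout monitor) would import the same constant, so adding a DB-only column only requires touching the one list.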
## Deployment
- **Target**: Distributed deployment with separate service instances
@@ -630,7 +668,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
- **Web UI**: Static files served separately or via API service
## Current Development Status
- ✅ **Complete**: Database migrations (21 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder + workflow timeline DAG), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, `system`), Artifact content system (versioned file/JSON storage, progress-append semantics, binary upload/download, retention enforcement, execution-linked artifacts, 18 API endpoints under `/api/v1/artifacts/`, file-backed disk storage via shared volume for file-type artifacts), CLI `--wait` flag (WebSocket-first with polling fallback — connects to notifier on port 8081, subscribes to execution, returns immediately on terminal status; falls back to exponential-backoff REST polling if WS unavailable; polling always gets at least 10s budget regardless of how long WS path ran), Workflow Timeline DAG visualization (Prefect-style time-aligned Gantt+DAG on execution detail page, pure SVG, transition-aware edge coloring from workflow definition metadata, hover tooltips, click-to-highlight path, zoom/pan)
- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management, Artifact UI (web UI for browsing/downloading artifacts), Notifier service WebSocket (functional but lacks auth — the WS connection is unauthenticated; the subscribe filter controls visibility)
- 📋 **Planned**: Execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
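The `--wait` polling floor noted in the status above (polling always gets at least 10s regardless of how long the WebSocket path ran) reduces to a tiny budget calculation, sketched here as an assumption about the documented behavior rather than the CLI's actual code:

```rust
use std::time::Duration;

/// Given the total wait budget and the time already spent on the WebSocket
/// path, compute the remaining polling budget, floored at 10 seconds.
fn polling_budget(total: Duration, ws_elapsed: Duration) -> Duration {
    let remaining = total.saturating_sub(ws_elapsed);
    remaining.max(Duration::from_secs(10))
}

fn main() {
    // WS consumed 55s of a 60s budget: polling still gets the 10s floor.
    println!("{:?}", polling_budget(Duration::from_secs(60), Duration::from_secs(55)));
}
```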
@@ -2,7 +2,8 @@
check fmt clippy install-tools db-create db-migrate db-reset docker-build \
docker-up docker-down docker-cache-warm docker-stop-system-services dev watch generate-agents-index \
docker-build-workers docker-build-worker-base docker-build-worker-python \
docker-build-worker-node docker-build-worker-full deny ci-rust ci-web-blocking ci-web-advisory \
ci-security-blocking ci-security-advisory ci-blocking ci-advisory
# Default target
help:
@@ -61,6 +62,9 @@ help:
@echo " make generate-agents-index - Generate AGENTS.md index for AI agents"
@echo ""
# Increase rustc stack size to prevent SIGSEGV during compilation
export RUST_MIN_STACK := 16777216
# Building
build:
cargo build
@@ -316,6 +320,42 @@ update:
audit:
cargo audit
deny:
cargo deny check
ci-rust:
cargo fmt --all -- --check
cargo clippy --workspace --all-targets --all-features -- -D warnings
cargo test --workspace --all-features
cargo audit
cargo deny check
ci-web-blocking:
cd web && npm ci
cd web && npm run lint
cd web && npm run typecheck
cd web && npm run build
ci-web-advisory:
cd web && npm ci
cd web && npm run knip
cd web && npm audit --omit=dev
ci-security-blocking:
mkdir -p $$HOME/bin
curl -sSfL https://raw.githubusercontent.com/gitleaks/gitleaks/master/install.sh | sh -s -- -b $$HOME/bin v8.24.2
$$HOME/bin/gitleaks git --report-format sarif --report-path gitleaks.sarif --config .gitleaks.toml
ci-security-advisory:
pip install semgrep
semgrep scan --config p/default --error
ci-blocking: ci-rust ci-web-blocking ci-security-blocking
@echo "✅ Blocking CI checks passed!"
ci-advisory: ci-web-advisory ci-security-advisory
@echo "Advisory CI checks complete."
# Check dependency tree
tree:
cargo tree
@@ -330,5 +370,5 @@ pre-commit: fmt clippy test
@echo "✅ All checks passed! Ready to commit."
# CI simulation
ci: ci-blocking ci-advisory
@echo "✅ CI checks passed!"
@@ -105,6 +105,9 @@ pub struct UpdateArtifactRequest {
/// Updated content type
pub content_type: Option<String>,
/// Updated execution ID (re-links artifact to a different execution)
pub execution: Option<i64>,
/// Updated structured data (replaces existing data entirely)
pub data: Option<JsonValue>,
}
@@ -314,6 +317,62 @@ pub struct CreateFileVersionRequest {
pub created_by: Option<String>,
}
/// Request DTO for the upsert-and-allocate endpoint.
///
/// Looks up an artifact by ref (creating it if it doesn't exist), then
/// allocates a new file-backed version and returns the `file_path` where
/// the caller should write the file on the shared artifact volume.
///
/// This replaces the multi-step create → 409-handling → allocate dance
/// with a single API call.
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct AllocateFileVersionByRefRequest {
// -- Artifact metadata (used only when creating a new artifact) ----------
/// Owner scope type (default: action)
#[schema(example = "action")]
pub scope: Option<OwnerType>,
/// Owner identifier (ref string of the owning entity)
#[schema(example = "python_example.artifact_demo")]
pub owner: Option<String>,
/// Artifact type (must be a file-backed type; default: file_text)
#[schema(example = "file_text")]
pub r#type: Option<ArtifactType>,
/// Visibility level. If omitted, uses type-aware default.
pub visibility: Option<ArtifactVisibility>,
/// Retention policy type (default: versions)
pub retention_policy: Option<RetentionPolicyType>,
/// Retention limit (default: 10)
pub retention_limit: Option<i32>,
/// Human-readable name
#[schema(example = "Demo Log")]
pub name: Option<String>,
/// Optional description
pub description: Option<String>,
/// Execution ID to link this artifact to
#[schema(example = 42)]
pub execution: Option<i64>,
// -- Version metadata ----------------------------------------------------
/// MIME content type for this version (e.g. "text/plain")
#[schema(example = "text/plain")]
pub content_type: Option<String>,
/// Free-form metadata about this version
#[schema(value_type = Option<Object>)]
pub meta: Option<JsonValue>,
/// Who created this version (e.g. action ref, identity, "system")
pub created_by: Option<String>,
}
/// Response DTO for an artifact version (without binary content)
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactVersionResponse {
@@ -237,6 +237,9 @@ pub async fn update_action(
runtime_version_constraint: request.runtime_version_constraint,
param_schema: request.param_schema,
out_schema: request.out_schema,
parameter_delivery: None,
parameter_format: None,
output_format: None,
};
let action = ActionRepository::update(&state.db, existing_action.id, update_input).await?;
@@ -8,19 +8,29 @@
//! - Progress append for progress-type artifacts (streaming updates)
//! - Listing artifacts by execution
//! - Version history and retrieval
//! - Upsert-and-upload: create-or-reuse an artifact by ref and upload a version in one call
//! - Upsert-and-allocate: create-or-reuse an artifact by ref and allocate a file-backed version path in one call
//! - SSE streaming for file-backed artifacts (live tail while execution is running)
use axum::{
body::Body,
extract::{Multipart, Path, Query, State},
http::{header, StatusCode},
response::{
sse::{Event, KeepAlive, Sse},
IntoResponse,
},
routing::{get, post},
Json, Router,
};
use futures::stream::Stream;
use std::sync::Arc;
use tokio::io::{AsyncReadExt, AsyncSeekExt};
use tracing::{debug, warn};
use attune_common::models::enums::{
ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType,
};
use attune_common::repositories::{
artifact::{
ArtifactRepository, ArtifactSearchFilters, ArtifactVersionRepository, CreateArtifactInput,
@@ -33,10 +43,10 @@ use crate::{
auth::middleware::RequireAuth,
dto::{
artifact::{
AllocateFileVersionByRefRequest, AppendProgressRequest, ArtifactQueryParams,
ArtifactResponse, ArtifactSummary, ArtifactVersionResponse, ArtifactVersionSummary,
CreateArtifactRequest, CreateFileVersionRequest, CreateVersionJsonRequest,
SetDataRequest, UpdateArtifactRequest,
},
common::{PaginatedResponse, PaginationParams},
ApiResponse, SuccessResponse,
@@ -251,6 +261,7 @@ pub async fn update_artifact(
description: request.description,
content_type: request.content_type,
size_bytes: None, // Managed by version creation trigger
execution: request.execution.map(Some),
data: request.data,
};
@@ -655,6 +666,7 @@ pub async fn create_version_file(
// Update the version row with the computed file_path
sqlx::query("UPDATE artifact_version SET file_path = $1 WHERE id = $2")
.bind(&file_path)
.bind(version.id)
.execute(&state.db)
.await
.map_err(|e| {
@@ -970,6 +982,441 @@ pub async fn delete_version(
))
}
// ============================================================================
// Upsert-and-upload by ref
// ============================================================================
/// Upload a file version to an artifact identified by ref, creating the artifact if it does not
/// already exist.
///
/// This is the recommended way for actions to produce versioned file artifacts. The caller
/// provides the artifact ref and file content in a single multipart request. The server:
///
/// 1. Looks up the artifact by `ref`.
/// 2. If not found, creates it using the metadata fields in the multipart body.
/// 3. If found, optionally updates the `execution` link to the current execution.
/// 4. Uploads the file bytes as a new version (version number is auto-assigned).
///
/// **Multipart fields:**
/// - `file` (required) — the binary file content
/// - `ref` (required for creation) — artifact reference (ignored if artifact already exists)
/// - `scope` — owner scope: `system`, `pack`, `action`, `sensor`, `rule` (default: `action`)
/// - `owner` — owner identifier (default: empty string)
/// - `type` — artifact type: `file_text`, `file_image`, etc. (default: `file_text`)
/// - `visibility` — `public` or `private` (default: type-aware server default)
/// - `name` — human-readable name
/// - `description` — optional description
/// - `content_type` — MIME type (default: auto-detected from multipart or `application/octet-stream`)
/// - `execution` — execution ID to link this artifact to (updates existing artifacts too)
/// - `retention_policy` — `versions`, `days`, `hours`, `minutes` (default: `versions`)
/// - `retention_limit` — limit value (default: `10`)
/// - `created_by` — who created this version
/// - `meta` — JSON metadata for this version
#[utoipa::path(
post,
path = "/api/v1/artifacts/ref/{ref}/versions/upload",
tag = "artifacts",
params(("ref" = String, Path, description = "Artifact reference (created if not found)")),
request_body(content = String, content_type = "multipart/form-data"),
responses(
(status = 201, description = "Version created (artifact may have been created too)", body = inline(ApiResponse<ArtifactVersionResponse>)),
(status = 400, description = "Missing file field or invalid metadata"),
(status = 413, description = "File too large"),
),
security(("bearer_auth" = []))
)]
pub async fn upload_version_by_ref(
RequireAuth(_user): RequireAuth,
State(state): State<Arc<AppState>>,
Path(artifact_ref): Path<String>,
mut multipart: Multipart,
) -> ApiResult<impl IntoResponse> {
// 50 MB limit
const MAX_FILE_SIZE: usize = 50 * 1024 * 1024;
// Collect all multipart fields
let mut file_data: Option<Vec<u8>> = None;
let mut file_content_type: Option<String> = None;
let mut content_type_field: Option<String> = None;
let mut meta: Option<serde_json::Value> = None;
let mut created_by: Option<String> = None;
// Artifact-creation metadata (used only when creating a new artifact)
let mut scope: Option<String> = None;
let mut owner: Option<String> = None;
let mut artifact_type: Option<String> = None;
let mut visibility: Option<String> = None;
let mut name: Option<String> = None;
let mut description: Option<String> = None;
let mut execution: Option<String> = None;
let mut retention_policy: Option<String> = None;
let mut retention_limit: Option<String> = None;
while let Some(field) = multipart
.next_field()
.await
.map_err(|e| ApiError::BadRequest(format!("Multipart error: {}", e)))?
{
let field_name = field.name().unwrap_or("").to_string();
match field_name.as_str() {
"file" => {
file_content_type = field.content_type().map(|s| s.to_string());
let bytes = field
.bytes()
.await
.map_err(|e| ApiError::BadRequest(format!("Failed to read file: {}", e)))?;
if bytes.len() > MAX_FILE_SIZE {
return Err(ApiError::BadRequest(format!(
"File exceeds maximum size of {} bytes",
MAX_FILE_SIZE
)));
}
file_data = Some(bytes.to_vec());
}
"content_type" => {
let t = field.text().await.unwrap_or_default();
if !t.is_empty() {
content_type_field = Some(t);
}
}
"meta" => {
let t = field.text().await.unwrap_or_default();
if !t.is_empty() {
meta =
Some(serde_json::from_str(&t).map_err(|e| {
ApiError::BadRequest(format!("Invalid meta JSON: {}", e))
})?);
}
}
"created_by" => {
let t = field.text().await.unwrap_or_default();
if !t.is_empty() {
created_by = Some(t);
}
}
"scope" => {
scope = Some(field.text().await.unwrap_or_default());
}
"owner" => {
owner = Some(field.text().await.unwrap_or_default());
}
"type" => {
artifact_type = Some(field.text().await.unwrap_or_default());
}
"visibility" => {
visibility = Some(field.text().await.unwrap_or_default());
}
"name" => {
name = Some(field.text().await.unwrap_or_default());
}
"description" => {
description = Some(field.text().await.unwrap_or_default());
}
"execution" => {
execution = Some(field.text().await.unwrap_or_default());
}
"retention_policy" => {
retention_policy = Some(field.text().await.unwrap_or_default());
}
"retention_limit" => {
retention_limit = Some(field.text().await.unwrap_or_default());
}
_ => { /* skip unknown fields */ }
}
}
let file_bytes = file_data.ok_or_else(|| {
ApiError::BadRequest("Missing required 'file' field in multipart upload".to_string())
})?;
// Parse execution ID
let execution_id: Option<i64> = match &execution {
Some(s) if !s.is_empty() => Some(
s.parse::<i64>()
.map_err(|_| ApiError::BadRequest(format!("Invalid execution ID: '{}'", s)))?,
),
_ => None,
};
// Upsert: find existing artifact or create a new one
let artifact = match ArtifactRepository::find_by_ref(&state.db, &artifact_ref).await? {
Some(existing) => {
// Update execution link if a new execution ID was provided
if execution_id.is_some() && execution_id != existing.execution {
let update_input = UpdateArtifactInput {
r#ref: None,
scope: None,
owner: None,
r#type: None,
visibility: None,
retention_policy: None,
retention_limit: None,
name: None,
description: None,
content_type: None,
size_bytes: None,
execution: execution_id.map(Some),
data: None,
};
ArtifactRepository::update(&state.db, existing.id, update_input).await?
} else {
existing
}
}
None => {
// Parse artifact type
let a_type: ArtifactType = match &artifact_type {
Some(t) => serde_json::from_value(serde_json::Value::String(t.clone()))
.map_err(|_| ApiError::BadRequest(format!("Invalid artifact type: '{}'", t)))?,
None => ArtifactType::FileText,
};
// Parse scope
let a_scope: OwnerType = match &scope {
Some(s) if !s.is_empty() => {
serde_json::from_value(serde_json::Value::String(s.clone()))
.map_err(|_| ApiError::BadRequest(format!("Invalid scope: '{}'", s)))?
}
_ => OwnerType::Action,
};
// Parse visibility with type-aware default
let a_visibility: ArtifactVisibility = match &visibility {
Some(v) if !v.is_empty() => {
serde_json::from_value(serde_json::Value::String(v.clone()))
.map_err(|_| ApiError::BadRequest(format!("Invalid visibility: '{}'", v)))?
}
_ => {
if a_type == ArtifactType::Progress {
ArtifactVisibility::Public
} else {
ArtifactVisibility::Private
}
}
};
// Parse retention
let a_retention_policy: RetentionPolicyType = match &retention_policy {
Some(rp) if !rp.is_empty() => {
serde_json::from_value(serde_json::Value::String(rp.clone())).map_err(|_| {
ApiError::BadRequest(format!("Invalid retention_policy: '{}'", rp))
})?
}
_ => RetentionPolicyType::Versions,
};
let a_retention_limit: i32 = match &retention_limit {
Some(rl) if !rl.is_empty() => rl.parse::<i32>().map_err(|_| {
ApiError::BadRequest(format!("Invalid retention_limit: '{}'", rl))
})?,
_ => 10,
};
let create_input = CreateArtifactInput {
r#ref: artifact_ref.clone(),
scope: a_scope,
owner: owner.unwrap_or_default(),
r#type: a_type,
visibility: a_visibility,
retention_policy: a_retention_policy,
retention_limit: a_retention_limit,
name: name.filter(|s| !s.is_empty()),
description: description.filter(|s| !s.is_empty()),
content_type: content_type_field
.clone()
.or_else(|| file_content_type.clone()),
execution: execution_id,
data: None,
};
ArtifactRepository::create(&state.db, create_input).await?
}
};
// Resolve content type: explicit field > multipart header > fallback
let resolved_ct = content_type_field
.or(file_content_type)
.unwrap_or_else(|| "application/octet-stream".to_string());
let version_input = CreateArtifactVersionInput {
artifact: artifact.id,
content_type: Some(resolved_ct),
content: Some(file_bytes),
content_json: None,
file_path: None,
meta,
created_by,
};
let version = ArtifactVersionRepository::create(&state.db, version_input).await?;
Ok((
StatusCode::CREATED,
Json(ApiResponse::with_message(
ArtifactVersionResponse::from(version),
"Version uploaded successfully",
)),
))
}
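// The content-type resolution above (explicit form field, then the multipart
// part's own Content-Type header, then a fallback) can be sketched as a
// standalone helper; the function name here is illustrative, not part of
// this module:

```rust
/// Content-type precedence used when creating a version: an explicit
/// `content_type` form field wins over the multipart part's own header,
/// with `application/octet-stream` as the final fallback.
fn resolve_content_type(
    explicit_field: Option<String>,
    multipart_header: Option<String>,
) -> String {
    explicit_field
        .or(multipart_header)
        .unwrap_or_else(|| "application/octet-stream".to_string())
}
```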
/// Upsert an artifact by ref and allocate a file-backed version in one call.
///
/// If the artifact doesn't exist, it is created using the supplied metadata.
/// If it already exists, the execution link is updated (if provided).
/// Then a new file-backed version is allocated and the `file_path` is returned.
///
/// The caller writes the file to `$ATTUNE_ARTIFACTS_DIR/{file_path}` on the
/// shared volume — no HTTP upload needed.
#[utoipa::path(
post,
path = "/api/v1/artifacts/ref/{ref}/versions/file",
tag = "artifacts",
params(
("ref" = String, Path, description = "Artifact reference (e.g. 'mypack.build_log')")
),
request_body = AllocateFileVersionByRefRequest,
responses(
(status = 201, description = "File version allocated", body = inline(ApiResponse<ArtifactVersionResponse>)),
(status = 400, description = "Invalid request (non-file-backed artifact type)"),
),
security(("bearer_auth" = []))
)]
pub async fn allocate_file_version_by_ref(
RequireAuth(_user): RequireAuth,
State(state): State<Arc<AppState>>,
Path(artifact_ref): Path<String>,
Json(request): Json<AllocateFileVersionByRefRequest>,
) -> ApiResult<impl IntoResponse> {
// Upsert: find existing artifact or create a new one
let artifact = match ArtifactRepository::find_by_ref(&state.db, &artifact_ref).await? {
Some(existing) => {
// Update execution link if a new execution ID was provided
if request.execution.is_some() && request.execution != existing.execution {
let update_input = UpdateArtifactInput {
r#ref: None,
scope: None,
owner: None,
r#type: None,
visibility: None,
retention_policy: None,
retention_limit: None,
name: None,
description: None,
content_type: None,
size_bytes: None,
execution: request.execution.map(Some),
data: None,
};
ArtifactRepository::update(&state.db, existing.id, update_input).await?
} else {
existing
}
}
None => {
// Parse artifact type (default to FileText)
let a_type = request.r#type.unwrap_or(ArtifactType::FileText);
// Validate it's a file-backed type
if !is_file_backed_type(a_type) {
return Err(ApiError::BadRequest(format!(
"Artifact type {:?} is not file-backed. \
Use POST /artifacts/ref/{{ref}}/versions/upload for DB-stored artifacts.",
a_type,
)));
}
let a_scope = request.scope.unwrap_or(OwnerType::Action);
let a_visibility = request.visibility.unwrap_or(ArtifactVisibility::Private);
let a_retention_policy = request
.retention_policy
.unwrap_or(RetentionPolicyType::Versions);
let a_retention_limit = request.retention_limit.unwrap_or(10);
let create_input = CreateArtifactInput {
r#ref: artifact_ref.clone(),
scope: a_scope,
owner: request.owner.unwrap_or_default(),
r#type: a_type,
visibility: a_visibility,
retention_policy: a_retention_policy,
retention_limit: a_retention_limit,
name: request.name,
description: request.description,
content_type: request.content_type.clone(),
execution: request.execution,
data: None,
};
ArtifactRepository::create(&state.db, create_input).await?
}
};
// Validate the existing artifact is file-backed
if !is_file_backed_type(artifact.r#type) {
return Err(ApiError::BadRequest(format!(
"Artifact '{}' is type {:?}, which does not support file-backed versions.",
artifact.r#ref, artifact.r#type,
)));
}
let content_type = request
.content_type
.unwrap_or_else(|| default_content_type_for_artifact(artifact.r#type));
// Create version row (file_path computed after we know the version number)
let input = CreateArtifactVersionInput {
artifact: artifact.id,
content_type: Some(content_type.clone()),
content: None,
content_json: None,
file_path: None,
meta: request.meta,
created_by: request.created_by,
};
let version = ArtifactVersionRepository::create(&state.db, input).await?;
// Compute the file path from the artifact ref and version number
let file_path = compute_file_path(&artifact.r#ref, version.version, &content_type);
// Create the parent directory on disk
let artifacts_dir = &state.config.artifacts_dir;
let full_path = std::path::Path::new(artifacts_dir).join(&file_path);
if let Some(parent) = full_path.parent() {
tokio::fs::create_dir_all(parent).await.map_err(|e| {
ApiError::InternalServerError(format!(
"Failed to create artifact directory '{}': {}",
parent.display(),
e,
))
})?;
}
// Update the version row with the computed file_path
sqlx::query("UPDATE artifact_version SET file_path = $1 WHERE id = $2")
.bind(&file_path)
.bind(version.id)
.execute(&state.db)
.await
.map_err(|e| {
ApiError::InternalServerError(format!(
"Failed to set file_path on version {}: {}",
version.id, e,
))
})?;
// Return the version with file_path populated
let mut response = ArtifactVersionResponse::from(version);
response.file_path = Some(file_path);
Ok((
StatusCode::CREATED,
Json(ApiResponse::with_message(
response,
"File version allocated — write content to $ATTUNE_ARTIFACTS_DIR/<file_path>",
)),
))
}
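// `compute_file_path` and `default_content_type_for_artifact` are defined
// elsewhere in this module. A plausible sketch of the path computation (the
// exact layout is an assumption; the real implementation may differ) keys
// the relative path on the artifact ref and version number, deriving the
// extension from the content type:

```rust
// Hypothetical sketch only — not the module's actual compute_file_path.
fn sketch_compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
    let ext = match content_type {
        "text/plain" => "txt",
        "application/json" => "json",
        _ => "bin",
    };
    // The worker then writes the content to $ATTUNE_ARTIFACTS_DIR/<this path>.
    format!("{}/v{}.{}", artifact_ref, version, ext)
}
```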
// ============================================================================
// Helpers
// ============================================================================
@@ -1179,8 +1626,434 @@ fn cleanup_empty_parents(dir: &std::path::Path, stop_at: &str) {
}
}
}
// ============================================================================
// SSE file streaming
// ============================================================================
/// Query parameters for the artifact stream endpoint.
#[derive(serde::Deserialize)]
pub struct StreamArtifactParams {
/// JWT access token (SSE/EventSource cannot set Authorization header).
pub token: Option<String>,
}
/// Internal state machine for the `stream_artifact` SSE generator.
///
/// We use `futures::stream::unfold` instead of `async_stream::stream!` to avoid
/// adding an external dependency.
enum TailState {
/// Waiting for the file to appear on disk.
WaitingForFile {
full_path: std::path::PathBuf,
file_path: String,
execution_id: Option<i64>,
db: sqlx::PgPool,
started: tokio::time::Instant,
},
/// File exists — send initial content.
SendInitial {
full_path: std::path::PathBuf,
file_path: String,
execution_id: Option<i64>,
db: sqlx::PgPool,
},
/// Tailing the file for new bytes.
Tailing {
full_path: std::path::PathBuf,
file_path: String,
execution_id: Option<i64>,
db: sqlx::PgPool,
offset: u64,
idle_count: u32,
},
/// Emit the final `done` SSE event and close.
SendDone,
/// Stream has ended — return `None` to close.
Finished,
}
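// `futures::stream::unfold` builds a stream from exactly this shape: a
// function from the current state to an optional (item, next_state) pair,
// where `None` closes the stream. A synchronous analogue using only std,
// with a trimmed two-phase machine (names are illustrative):

```rust
enum SketchState {
    /// Still polling; the counter stands in for "file not yet on disk".
    Waiting(u32),
    /// Emit the final event, then close.
    Done,
    /// Terminal: `step` returns None and the stream ends.
    Finished,
}

/// One transition: consume the state, yield (event, next_state).
fn step(state: SketchState) -> Option<(&'static str, SketchState)> {
    match state {
        SketchState::Waiting(0) => Some(("content", SketchState::Done)),
        SketchState::Waiting(n) => Some(("waiting", SketchState::Waiting(n - 1))),
        SketchState::Done => Some(("done", SketchState::Finished)),
        SketchState::Finished => None,
    }
}

fn collect_events(mut state: SketchState) -> Vec<&'static str> {
    std::iter::from_fn(move || {
        // Take the state out, advance one transition, store the successor.
        let (event, next) = step(std::mem::replace(&mut state, SketchState::Finished))?;
        state = next;
        Some(event)
    })
    .collect()
}
```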
/// How long to wait for the file to appear on disk.
const STREAM_MAX_WAIT: std::time::Duration = std::time::Duration::from_secs(30);
/// How often to poll for new bytes / file existence.
const STREAM_POLL_INTERVAL: std::time::Duration = std::time::Duration::from_millis(500);
/// After this many consecutive empty polls we check whether the execution
/// is done and, if so, terminate the stream.
const STREAM_IDLE_CHECKS_BEFORE_DONE: u32 = 6; // 3 seconds of no new data
/// Check whether the given execution has reached a terminal status.
async fn is_execution_terminal(db: &sqlx::PgPool, execution_id: Option<i64>) -> bool {
let Some(exec_id) = execution_id else {
return false;
};
match sqlx::query_scalar::<_, String>("SELECT status::text FROM execution WHERE id = $1")
.bind(exec_id)
.fetch_optional(db)
.await
{
Ok(Some(status)) => matches!(
status.as_str(),
"completed" | "failed" | "timeout" | "cancelled" | "abandoned"
),
Ok(None) => true, // execution deleted — treat as done
Err(_) => false, // DB error — keep tailing
}
}
/// Do one final read from `offset` to EOF and return the new bytes (if any).
async fn final_read_bytes(full_path: &std::path::Path, offset: u64) -> Option<String> {
let mut f = tokio::fs::File::open(full_path).await.ok()?;
let meta = f.metadata().await.ok()?;
if meta.len() <= offset {
return None;
}
f.seek(std::io::SeekFrom::Start(offset)).await.ok()?;
let mut tail = Vec::new();
f.read_to_end(&mut tail).await.ok()?;
if tail.is_empty() {
return None;
}
Some(String::from_utf8_lossy(&tail).into_owned())
}
/// Stream the latest file-backed artifact version as Server-Sent Events.
///
/// The endpoint:
/// 1. Waits (up to ~30 s) for the file to appear on disk if it has been
/// allocated but not yet written by the worker.
/// 2. Once the file exists it sends the current content as an initial `content`
/// event, then tails the file every 500 ms, sending `append` events with new
/// bytes.
/// 3. When no new bytes have appeared for several consecutive checks **and** the
/// linked execution (if any) has reached a terminal status, it sends a `done`
/// event and the stream ends.
/// 4. If the client disconnects the stream is cleaned up automatically.
///
/// **Event types** (SSE `event:` field):
/// - `content`: full file content up to the current offset (sent once)
/// - `append`: incremental bytes appended since the last event
/// - `waiting`: file does not exist yet; sent periodically while waiting
/// - `done`: no more data expected; the stream will close
/// - `error`: something went wrong; `data` contains a human-readable message
#[utoipa::path(
get,
path = "/api/v1/artifacts/{id}/stream",
tag = "artifacts",
params(
("id" = i64, Path, description = "Artifact ID"),
("token" = String, Query, description = "JWT access token for authentication"),
),
responses(
(status = 200, description = "SSE stream of file content", content_type = "text/event-stream"),
(status = 401, description = "Unauthorized"),
(status = 404, description = "Artifact not found or not file-backed"),
),
)]
pub async fn stream_artifact(
State(state): State<Arc<AppState>>,
Path(id): Path<i64>,
Query(params): Query<StreamArtifactParams>,
) -> Result<Sse<impl Stream<Item = Result<Event, std::convert::Infallible>>>, ApiError> {
// --- auth (EventSource can't send headers, so token comes via query) ----
use crate::auth::jwt::validate_token;
let token = params.token.as_ref().ok_or(ApiError::Unauthorized(
"Missing authentication token".to_string(),
))?;
validate_token(token, &state.jwt_config)
.map_err(|_| ApiError::Unauthorized("Invalid authentication token".to_string()))?;
// --- resolve artifact + latest version ---------------------------------
let artifact = ArtifactRepository::find_by_id(&state.db, id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
if !is_file_backed_type(artifact.r#type) {
return Err(ApiError::BadRequest(format!(
"Artifact '{}' is type {:?} which is not file-backed. \
Use the download endpoint instead.",
artifact.r#ref, artifact.r#type,
)));
}
let ver = ArtifactVersionRepository::find_latest(&state.db, id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("No versions found for artifact {}", id)))?;
let file_path = ver.file_path.ok_or_else(|| {
ApiError::NotFound(format!(
"Latest version of artifact '{}' has no file_path allocated",
artifact.r#ref,
))
})?;
let artifacts_dir = state.config.artifacts_dir.clone();
let full_path = std::path::PathBuf::from(&artifacts_dir).join(&file_path);
let execution_id = artifact.execution;
let db = state.db.clone();
// --- build the SSE stream via unfold -----------------------------------
let initial_state = TailState::WaitingForFile {
full_path,
file_path,
execution_id,
db,
started: tokio::time::Instant::now(),
};
let stream = futures::stream::unfold(initial_state, |state| async move {
match state {
TailState::Finished => None,
// ---- Drain state for clean shutdown ----
TailState::SendDone => Some((
Ok(Event::default()
.event("done")
.data("Execution complete — stream closed")),
TailState::Finished,
)),
// ---- Phase 1: wait for the file to appear ----
TailState::WaitingForFile {
full_path,
file_path,
execution_id,
db,
started,
} => {
if full_path.exists() {
let next = TailState::SendInitial {
full_path,
file_path,
execution_id,
db,
};
Some((
Ok(Event::default()
.event("waiting")
.data("File found — loading content")),
next,
))
} else if started.elapsed() > STREAM_MAX_WAIT {
Some((
Ok(Event::default().event("error").data(format!(
"Timed out waiting for file to appear at '{}'",
file_path,
))),
TailState::Finished,
))
} else {
tokio::time::sleep(STREAM_POLL_INTERVAL).await;
Some((
Ok(Event::default()
.event("waiting")
.data("File not yet available — waiting for worker to create it")),
TailState::WaitingForFile {
full_path,
file_path,
execution_id,
db,
started,
},
))
}
}
// ---- Phase 2: read and send current file content ----
TailState::SendInitial {
full_path,
file_path,
execution_id,
db,
} => match tokio::fs::File::open(&full_path).await {
Ok(mut file) => {
let mut buf = Vec::new();
match file.read_to_end(&mut buf).await {
Ok(_) => {
let offset = buf.len() as u64;
debug!(
"artifact stream: sent initial {} bytes for '{}'",
offset, file_path,
);
Some((
Ok(Event::default()
.event("content")
.data(String::from_utf8_lossy(&buf).into_owned())),
TailState::Tailing {
full_path,
file_path,
execution_id,
db,
offset,
idle_count: 0,
},
))
}
Err(e) => Some((
Ok(Event::default()
.event("error")
.data(format!("Failed to read file: {}", e))),
TailState::Finished,
)),
}
}
Err(e) => Some((
Ok(Event::default()
.event("error")
.data(format!("Failed to open file: {}", e))),
TailState::Finished,
)),
},
// ---- Phase 3: tail the file for new bytes ----
TailState::Tailing {
full_path,
file_path,
execution_id,
db,
mut offset,
mut idle_count,
} => {
tokio::time::sleep(STREAM_POLL_INTERVAL).await;
// Re-open the file each iteration so we pick up content that
// was written by a different process (the worker).
let mut file = match tokio::fs::File::open(&full_path).await {
Ok(f) => f,
Err(e) => {
return Some((
Ok(Event::default()
.event("error")
.data(format!("File disappeared: {}", e))),
TailState::Finished,
));
}
};
let meta = match file.metadata().await {
Ok(m) => m,
Err(_) => {
// Transient metadata error — keep going.
return Some((
Ok(Event::default().comment("metadata-retry")),
TailState::Tailing {
full_path,
file_path,
execution_id,
db,
offset,
idle_count,
},
));
}
};
let file_len = meta.len();
if file_len > offset {
// New data available — seek and read.
if let Err(e) = file.seek(std::io::SeekFrom::Start(offset)).await {
return Some((
Ok(Event::default()
.event("error")
.data(format!("Seek error: {}", e))),
TailState::Finished,
));
}
let mut new_buf = Vec::with_capacity((file_len - offset) as usize);
match file.read_to_end(&mut new_buf).await {
Ok(n) => {
offset += n as u64;
idle_count = 0;
Some((
Ok(Event::default()
.event("append")
.data(String::from_utf8_lossy(&new_buf).into_owned())),
TailState::Tailing {
full_path,
file_path,
execution_id,
db,
offset,
idle_count,
},
))
}
Err(e) => Some((
Ok(Event::default()
.event("error")
.data(format!("Read error: {}", e))),
TailState::Finished,
)),
}
} else if file_len < offset {
// File truncated — resend from scratch.
drop(file);
Some((
Ok(Event::default()
.event("waiting")
.data("File was truncated — resending content")),
TailState::SendInitial {
full_path,
file_path,
execution_id,
db,
},
))
} else {
// No change.
idle_count += 1;
if idle_count >= STREAM_IDLE_CHECKS_BEFORE_DONE {
let done = is_execution_terminal(&db, execution_id).await
|| (execution_id.is_none()
&& idle_count >= STREAM_IDLE_CHECKS_BEFORE_DONE * 4);
if done {
// One final read to catch trailing bytes.
return if let Some(trailing) =
final_read_bytes(&full_path, offset).await
{
Some((
Ok(Event::default().event("append").data(trailing)),
TailState::SendDone,
))
} else {
Some((
Ok(Event::default()
.event("done")
.data("Execution complete — stream closed")),
TailState::Finished,
))
};
}
// Reset so we don't hit the DB every poll.
idle_count = 0;
}
Some((
Ok(Event::default().comment("no-change")),
TailState::Tailing {
full_path,
file_path,
execution_id,
db,
offset,
idle_count,
},
))
}
}
}
});
Ok(Sse::new(stream).keep_alive(
KeepAlive::new()
.interval(std::time::Duration::from_secs(15))
.text("keepalive"),
))
}
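// The tailing loop above reduces to one step: stat the file, and if it has
// grown past the last-seen offset, seek there and read to EOF. A blocking,
// std-only sketch of that single step (the handler does the same with
// tokio::fs and the async read/seek traits):

```rust
fn read_new_bytes(path: &std::path::Path, offset: u64) -> std::io::Result<Option<Vec<u8>>> {
    use std::io::{Read, Seek, SeekFrom};
    let mut f = std::fs::File::open(path)?;
    if f.metadata()?.len() <= offset {
        // Nothing new (or the file was truncated below our offset).
        return Ok(None);
    }
    f.seek(SeekFrom::Start(offset))?;
    let mut buf = Vec::new();
    f.read_to_end(&mut buf)?;
    Ok(Some(buf))
}
```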
/// Derive a simple file extension from a MIME content type
fn extension_from_content_type(ct: &str) -> &str {
match ct {
"text/plain" => "txt",
@@ -1219,6 +2092,14 @@ pub fn routes() -> Router<Arc<AppState>> {
.delete(delete_artifact),
)
.route("/artifacts/ref/{ref}", get(get_artifact_by_ref))
.route(
"/artifacts/ref/{ref}/versions/upload",
post(upload_version_by_ref),
)
.route(
"/artifacts/ref/{ref}/versions/file",
post(allocate_file_version_by_ref),
)
// Progress / data
.route("/artifacts/{id}/progress", post(append_progress))
.route(
@@ -1227,6 +2108,8 @@ pub fn routes() -> Router<Arc<AppState>> {
)
// Download (latest)
.route("/artifacts/{id}/download", get(download_latest))
// SSE streaming for file-backed artifacts
.route("/artifacts/{id}/stream", get(stream_artifact))
// Version management
.route(
"/artifacts/{id}/versions",


@@ -15,11 +15,17 @@ use std::sync::Arc;
use tokio_stream::wrappers::BroadcastStream;
use attune_common::models::enums::ExecutionStatus;
use attune_common::mq::{
ExecutionCancelRequestedPayload, ExecutionRequestedPayload, MessageEnvelope, MessageType,
Publisher,
};
use attune_common::repositories::{
action::ActionRepository,
execution::{
CreateExecutionInput, ExecutionRepository, ExecutionSearchFilters, UpdateExecutionInput,
},
workflow::WorkflowExecutionRepository,
Create, FindById, FindByRef, Update,
};
use sqlx::Row;
@@ -357,6 +363,279 @@ pub async fn get_execution_stats(
Ok((StatusCode::OK, Json(response)))
}
/// Cancel a running execution
///
/// This endpoint requests cancellation of an execution. The execution must be in a
/// cancellable state (requested, scheduling, scheduled, running, or canceling).
/// For running executions, the worker will send SIGINT to the process, then SIGTERM
/// after a 10-second grace period if it hasn't stopped.
///
/// **Workflow cascading**: When a workflow (parent) execution is cancelled, all of
/// its incomplete child task executions are also cancelled. Children that haven't
/// reached a worker yet are set to Cancelled immediately; children that are running
/// receive a cancel MQ message so their worker can gracefully stop the process.
/// The workflow_execution record is also marked as Cancelled to prevent the
/// scheduler from dispatching any further tasks.
#[utoipa::path(
post,
path = "/api/v1/executions/{id}/cancel",
tag = "executions",
params(
("id" = i64, Path, description = "Execution ID")
),
responses(
(status = 200, description = "Cancellation requested", body = inline(ApiResponse<ExecutionResponse>)),
(status = 404, description = "Execution not found"),
(status = 409, description = "Execution is not in a cancellable state"),
),
security(("bearer_auth" = []))
)]
pub async fn cancel_execution(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
// Load the execution
let execution = ExecutionRepository::find_by_id(&state.db, id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Execution with ID {} not found", id)))?;
// Check if the execution is in a cancellable state
let cancellable = matches!(
execution.status,
ExecutionStatus::Requested
| ExecutionStatus::Scheduling
| ExecutionStatus::Scheduled
| ExecutionStatus::Running
| ExecutionStatus::Canceling
);
if !cancellable {
let status = format!("{:?}", execution.status).to_lowercase();
return Err(ApiError::Conflict(format!(
"Execution {} is in status '{}' and cannot be cancelled",
id, status
)));
}
// If already canceling, just return the current state
if execution.status == ExecutionStatus::Canceling {
let response = ApiResponse::new(ExecutionResponse::from(execution));
return Ok((StatusCode::OK, Json(response)));
}
let publisher = state.get_publisher().await;
// For executions that haven't reached a worker yet, cancel immediately
if matches!(
execution.status,
ExecutionStatus::Requested | ExecutionStatus::Scheduling | ExecutionStatus::Scheduled
) {
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Cancelled),
result: Some(
serde_json::json!({"error": "Cancelled by user before execution started"}),
),
..Default::default()
};
let updated = ExecutionRepository::update(&state.db, id, update).await?;
// Cascade to workflow children if this is a workflow execution
cancel_workflow_children(&state.db, publisher.as_deref(), id).await;
let response = ApiResponse::new(ExecutionResponse::from(updated));
return Ok((StatusCode::OK, Json(response)));
}
// For running executions, set status to Canceling and send cancel message to the worker
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Canceling),
..Default::default()
};
let updated = ExecutionRepository::update(&state.db, id, update).await?;
// Send cancel request to the worker via MQ
if let Some(worker_id) = execution.executor {
send_cancel_to_worker(publisher.as_deref(), id, worker_id).await;
} else {
tracing::warn!(
"Execution {} has no executor/worker assigned; marked as canceling but no MQ message sent",
id
);
}
// Cascade to workflow children if this is a workflow execution
cancel_workflow_children(&state.db, publisher.as_deref(), id).await;
let response = ApiResponse::new(ExecutionResponse::from(updated));
Ok((StatusCode::OK, Json(response)))
}
/// Send a cancel MQ message to a specific worker for a specific execution.
async fn send_cancel_to_worker(publisher: Option<&Publisher>, execution_id: i64, worker_id: i64) {
let payload = ExecutionCancelRequestedPayload {
execution_id,
worker_id,
};
let envelope = MessageEnvelope::new(MessageType::ExecutionCancelRequested, payload)
.with_source("api-service")
.with_correlation_id(uuid::Uuid::new_v4());
if let Some(publisher) = publisher {
let routing_key = format!("execution.cancel.worker.{}", worker_id);
let exchange = "attune.executions";
if let Err(e) = publisher
.publish_envelope_with_routing(&envelope, exchange, &routing_key)
.await
{
tracing::error!(
"Failed to publish cancel request for execution {}: {}",
execution_id,
e
);
}
} else {
tracing::warn!(
"No MQ publisher available to send cancel request for execution {}",
execution_id
);
}
}
/// Cancel all incomplete child executions of a workflow parent execution.
///
/// This handles the workflow cascade: when a workflow execution is cancelled,
/// its child task executions must also be cancelled to prevent further work.
/// Additionally, the `workflow_execution` record is marked Cancelled so the
/// scheduler's `advance_workflow` will short-circuit and not dispatch new tasks.
///
/// Children in pre-running states (Requested, Scheduling, Scheduled) are set
/// to Cancelled immediately. Children that are Running receive a cancel MQ
/// message so their worker can gracefully stop the process.
async fn cancel_workflow_children(
db: &sqlx::PgPool,
publisher: Option<&Publisher>,
parent_execution_id: i64,
) {
// Find all child executions that are still incomplete
let children: Vec<attune_common::models::Execution> = match sqlx::query_as::<
_,
attune_common::models::Execution,
>(&format!(
"SELECT {} FROM execution WHERE parent = $1 AND status NOT IN ('completed', 'failed', 'timeout', 'cancelled', 'abandoned')",
attune_common::repositories::execution::SELECT_COLUMNS
))
.bind(parent_execution_id)
.fetch_all(db)
.await
{
Ok(rows) => rows,
Err(e) => {
tracing::error!(
"Failed to fetch child executions for parent {}: {}",
parent_execution_id,
e
);
return;
}
};
if children.is_empty() {
return;
}
tracing::info!(
"Cascading cancellation from execution {} to {} child execution(s)",
parent_execution_id,
children.len()
);
for child in &children {
let child_id = child.id;
if matches!(
child.status,
ExecutionStatus::Requested | ExecutionStatus::Scheduling | ExecutionStatus::Scheduled
) {
// Pre-running: cancel immediately in DB
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Cancelled),
result: Some(serde_json::json!({
"error": "Cancelled: parent workflow execution was cancelled"
})),
..Default::default()
};
if let Err(e) = ExecutionRepository::update(db, child_id, update).await {
tracing::error!("Failed to cancel child execution {}: {}", child_id, e);
} else {
tracing::info!("Cancelled pre-running child execution {}", child_id);
}
} else if matches!(
child.status,
ExecutionStatus::Running | ExecutionStatus::Canceling
) {
// Running: set to Canceling and send MQ message to the worker
if child.status != ExecutionStatus::Canceling {
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Canceling),
..Default::default()
};
if let Err(e) = ExecutionRepository::update(db, child_id, update).await {
tracing::error!(
"Failed to set child execution {} to canceling: {}",
child_id,
e
);
}
}
if let Some(worker_id) = child.executor {
send_cancel_to_worker(publisher, child_id, worker_id).await;
}
}
// Recursively cancel grandchildren (nested workflows)
// Use Box::pin to allow the recursive async call
Box::pin(cancel_workflow_children(db, publisher, child_id)).await;
}
// Also mark any associated workflow_execution record as Cancelled so that
// advance_workflow short-circuits and does not dispatch new tasks.
// A workflow_execution is linked to the parent execution via its `execution` column.
if let Ok(Some(wf_exec)) =
WorkflowExecutionRepository::find_by_execution(db, parent_execution_id).await
{
if !matches!(
wf_exec.status,
ExecutionStatus::Completed | ExecutionStatus::Failed | ExecutionStatus::Cancelled
) {
let wf_update = attune_common::repositories::workflow::UpdateWorkflowExecutionInput {
status: Some(ExecutionStatus::Cancelled),
error_message: Some(
"Cancelled: parent workflow execution was cancelled".to_string(),
),
current_tasks: Some(vec![]),
completed_tasks: None,
failed_tasks: None,
skipped_tasks: None,
variables: None,
paused: None,
pause_reason: None,
};
if let Err(e) = WorkflowExecutionRepository::update(db, wf_exec.id, wf_update).await {
tracing::error!("Failed to cancel workflow_execution {}: {}", wf_exec.id, e);
} else {
tracing::info!(
"Cancelled workflow_execution {} for parent execution {}",
wf_exec.id,
parent_execution_id
);
}
}
}
}
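// The cascade walks the execution tree depth-first: cancel each incomplete
// child, then recurse into its children. A synchronous sketch over an
// in-memory parent -> children map (the real version is async, hits the DB
// and MQ, and needs Box::pin for the recursive call):

```rust
use std::collections::HashMap;

fn cascade_cancel(children_of: &HashMap<i64, Vec<i64>>, cancelled: &mut Vec<i64>, parent: i64) {
    for &child in children_of.get(&parent).map(Vec::as_slice).unwrap_or(&[]) {
        cancelled.push(child);
        // Recurse into grandchildren (nested workflows).
        cascade_cancel(children_of, cancelled, child);
    }
}
```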
/// Create execution routes
/// Stream execution updates via Server-Sent Events
///
@@ -443,6 +722,10 @@ pub fn routes() -> Router<Arc<AppState>> {
.route("/executions/stats", get(get_execution_stats))
.route("/executions/stream", get(stream_execution_updates))
.route("/executions/{id}", get(get_execution))
.route(
"/executions/{id}/cancel",
axum::routing::post(cancel_execution),
)
.route(
"/executions/status/{status}",
get(list_executions_by_status),


@@ -14,10 +14,7 @@ use validator::Validate;
use attune_common::models::pack_test::PackTestResult;
use attune_common::mq::{MessageEnvelope, MessageType, PackRegisteredPayload};
use attune_common::repositories::{
pack::{CreatePackInput, UpdatePackInput},
Create, Delete, FindById, FindByRef, PackRepository, PackTestRepository, Pagination, Update,
};
use attune_common::workflow::{PackWorkflowService, PackWorkflowServiceConfig};
@@ -732,85 +729,100 @@ async fn register_pack_internal(
         .and_then(|v| v.as_str())
         .map(|s| s.to_string());
-    // Ad-hoc rules to restore after pack reinstallation
-    let mut saved_adhoc_rules: Vec<attune_common::models::rule::Rule> = Vec::new();
+    // Extract common metadata fields used for both create and update
+    let conf_schema = pack_yaml
+        .get("config_schema")
+        .and_then(|v| serde_json::to_value(v).ok())
+        .unwrap_or_else(|| serde_json::json!({}));
+    let meta = pack_yaml
+        .get("metadata")
+        .and_then(|v| serde_json::to_value(v).ok())
+        .unwrap_or_else(|| serde_json::json!({}));
+    let tags: Vec<String> = pack_yaml
+        .get("keywords")
+        .and_then(|v| v.as_sequence())
+        .map(|seq| {
+            seq.iter()
+                .filter_map(|v| v.as_str().map(|s| s.to_string()))
+                .collect()
+        })
+        .unwrap_or_default();
+    let runtime_deps: Vec<String> = pack_yaml
+        .get("runtime_deps")
+        .and_then(|v| v.as_sequence())
+        .map(|seq| {
+            seq.iter()
+                .filter_map(|v| v.as_str().map(|s| s.to_string()))
+                .collect()
+        })
+        .unwrap_or_default();
+    let dependencies: Vec<String> = pack_yaml
+        .get("dependencies")
+        .and_then(|v| v.as_sequence())
+        .map(|seq| {
+            seq.iter()
+                .filter_map(|v| v.as_str().map(|s| s.to_string()))
+                .collect()
+        })
+        .unwrap_or_default();
-    // Check if pack already exists
-    if !force {
-        if PackRepository::exists_by_ref(&state.db, &pack_ref).await? {
+    // Check if pack already exists — update in place to preserve IDs
+    let existing_pack = PackRepository::find_by_ref(&state.db, &pack_ref).await?;
+    let is_new_pack;
+    let pack = if let Some(existing) = existing_pack {
+        if !force {
             return Err(ApiError::Conflict(format!(
                 "Pack '{}' already exists. Use force=true to reinstall.",
                 pack_ref
             )));
         }
+        // Update existing pack in place — preserves pack ID and all child entity IDs
+        let update_input = UpdatePackInput {
+            label: Some(label),
+            description: Some(description.unwrap_or_default()),
+            version: Some(version.clone()),
+            conf_schema: Some(conf_schema),
+            config: None, // preserve user-set config
+            meta: Some(meta),
+            tags: Some(tags),
+            runtime_deps: Some(runtime_deps),
+            dependencies: Some(dependencies),
+            is_standard: None,
+            installers: None,
+        };
+        let updated = PackRepository::update(&state.db, existing.id, update_input).await?;
+        tracing::info!(
+            "Updated existing pack '{}' (ID: {}) in place",
+            pack_ref,
+            updated.id
+        );
+        is_new_pack = false;
+        updated
     } else {
-        // Delete existing pack if force is true, preserving ad-hoc (user-created) rules
-        if let Some(existing_pack) = PackRepository::find_by_ref(&state.db, &pack_ref).await? {
-            // Save ad-hoc rules before deletion — CASCADE on pack FK would destroy them
-            saved_adhoc_rules = RuleRepository::find_adhoc_by_pack(&state.db, existing_pack.id)
-                .await
-                .unwrap_or_default();
-            if !saved_adhoc_rules.is_empty() {
-                tracing::info!(
-                    "Preserving {} ad-hoc rule(s) during reinstall of pack '{}'",
-                    saved_adhoc_rules.len(),
-                    pack_ref
-                );
-            }
-            PackRepository::delete(&state.db, existing_pack.id).await?;
-            tracing::info!("Deleted existing pack '{}' for forced reinstall", pack_ref);
-        }
-    }
-    // Create pack input
-    let pack_input = CreatePackInput {
-        r#ref: pack_ref.clone(),
-        label,
-        description,
-        version: version.clone(),
-        conf_schema: pack_yaml
-            .get("config_schema")
-            .and_then(|v| serde_json::to_value(v).ok())
-            .unwrap_or_else(|| serde_json::json!({})),
-        config: serde_json::json!({}),
-        meta: pack_yaml
-            .get("metadata")
-            .and_then(|v| serde_json::to_value(v).ok())
-            .unwrap_or_else(|| serde_json::json!({})),
-        tags: pack_yaml
-            .get("keywords")
-            .and_then(|v| v.as_sequence())
-            .map(|seq| {
-                seq.iter()
-                    .filter_map(|v| v.as_str().map(|s| s.to_string()))
-                    .collect()
-            })
-            .unwrap_or_default(),
-        runtime_deps: pack_yaml
-            .get("runtime_deps")
-            .and_then(|v| v.as_sequence())
-            .map(|seq| {
-                seq.iter()
-                    .filter_map(|v| v.as_str().map(|s| s.to_string()))
-                    .collect()
-            })
-            .unwrap_or_default(),
-        dependencies: pack_yaml
-            .get("dependencies")
-            .and_then(|v| v.as_sequence())
-            .map(|seq| {
-                seq.iter()
-                    .filter_map(|v| v.as_str().map(|s| s.to_string()))
-                    .collect()
-            })
-            .unwrap_or_default(),
-        is_standard: false,
-        installers: serde_json::json!({}),
-    };
-    let pack = PackRepository::create(&state.db, pack_input).await?;
+        // Create new pack
+        let pack_input = CreatePackInput {
+            r#ref: pack_ref.clone(),
+            label,
+            description,
+            version: version.clone(),
+            conf_schema,
+            config: serde_json::json!({}),
+            meta,
+            tags,
+            runtime_deps,
+            dependencies,
+            is_standard: false,
+            installers: serde_json::json!({}),
+        };
+        is_new_pack = true;
+        PackRepository::create(&state.db, pack_input).await?
+    };
     // Auto-sync workflows after pack creation
     let packs_base_dir = PathBuf::from(&state.config.packs_base_dir);
     let service_config = PackWorkflowServiceConfig {
@@ -850,14 +862,18 @@ async fn register_pack_internal(
     match component_loader.load_all(&pack_path).await {
         Ok(load_result) => {
             tracing::info!(
-                "Pack '{}' components loaded: {} runtimes, {} triggers, {} actions, {} sensors ({} skipped, {} warnings)",
+                "Pack '{}' components loaded: {} created, {} updated, {} skipped, {} removed, {} warnings \
+                 (runtimes: {}/{}, triggers: {}/{}, actions: {}/{}, sensors: {}/{})",
                 pack.r#ref,
-                load_result.runtimes_loaded,
-                load_result.triggers_loaded,
-                load_result.actions_loaded,
-                load_result.sensors_loaded,
-                load_result.total_skipped(),
-                load_result.warnings.len()
+                load_result.total_loaded(),
+                load_result.total_updated(),
+                load_result.total_skipped(),
+                load_result.removed,
+                load_result.warnings.len(),
+                load_result.runtimes_loaded, load_result.runtimes_updated,
+                load_result.triggers_loaded, load_result.triggers_updated,
+                load_result.actions_loaded, load_result.actions_updated,
+                load_result.sensors_loaded, load_result.sensors_updated,
             );
             for warning in &load_result.warnings {
                 tracing::warn!("Pack component warning: {}", warning);
@@ -873,122 +889,9 @@ async fn register_pack_internal(
         }
     }
-    // Restore ad-hoc rules that were saved before pack deletion, and
-    // re-link any rules from other packs whose action/trigger FKs were
-    // set to NULL when the old pack's entities were cascade-deleted.
-    {
-        // Phase 1: Restore saved ad-hoc rules
-        if !saved_adhoc_rules.is_empty() {
-            let mut restored = 0u32;
-            for saved_rule in &saved_adhoc_rules {
-                // Resolve action and trigger IDs by ref (they may have been recreated)
-                let action_id = ActionRepository::find_by_ref(&state.db, &saved_rule.action_ref)
-                    .await
-                    .ok()
-                    .flatten()
-                    .map(|a| a.id);
-                let trigger_id = TriggerRepository::find_by_ref(&state.db, &saved_rule.trigger_ref)
-                    .await
-                    .ok()
-                    .flatten()
-                    .map(|t| t.id);
-                let input = RestoreRuleInput {
-                    r#ref: saved_rule.r#ref.clone(),
-                    pack: pack.id,
-                    pack_ref: pack.r#ref.clone(),
-                    label: saved_rule.label.clone(),
-                    description: saved_rule.description.clone(),
-                    action: action_id,
-                    action_ref: saved_rule.action_ref.clone(),
-                    trigger: trigger_id,
-                    trigger_ref: saved_rule.trigger_ref.clone(),
-                    conditions: saved_rule.conditions.clone(),
-                    action_params: saved_rule.action_params.clone(),
-                    trigger_params: saved_rule.trigger_params.clone(),
-                    enabled: saved_rule.enabled,
-                };
-                match RuleRepository::restore_rule(&state.db, input).await {
-                    Ok(rule) => {
-                        restored += 1;
-                        if rule.action.is_none() || rule.trigger.is_none() {
-                            tracing::warn!(
-                                "Restored ad-hoc rule '{}' with unresolved references \
-                                 (action: {}, trigger: {})",
-                                rule.r#ref,
-                                if rule.action.is_some() {
-                                    "linked"
-                                } else {
-                                    "NULL"
-                                },
-                                if rule.trigger.is_some() {
-                                    "linked"
-                                } else {
-                                    "NULL"
-                                },
-                            );
-                        }
-                    }
-                    Err(e) => {
-                        tracing::warn!(
-                            "Failed to restore ad-hoc rule '{}': {}",
-                            saved_rule.r#ref,
-                            e
-                        );
-                    }
-                }
-            }
-            tracing::info!(
-                "Restored {}/{} ad-hoc rule(s) for pack '{}'",
-                restored,
-                saved_adhoc_rules.len(),
-                pack.r#ref
-            );
-        }
-        // Phase 2: Re-link rules from other packs whose action/trigger FKs
-        // were set to NULL when the old pack's entities were cascade-deleted
-        let new_actions = ActionRepository::find_by_pack(&state.db, pack.id)
-            .await
-            .unwrap_or_default();
-        let new_triggers = TriggerRepository::find_by_pack(&state.db, pack.id)
-            .await
-            .unwrap_or_default();
-        for action in &new_actions {
-            match RuleRepository::relink_action_by_ref(&state.db, &action.r#ref, action.id).await {
-                Ok(count) if count > 0 => {
-                    tracing::info!("Re-linked {} rule(s) to action '{}'", count, action.r#ref);
-                }
-                Err(e) => {
-                    tracing::warn!(
-                        "Failed to re-link rules to action '{}': {}",
-                        action.r#ref,
-                        e
-                    );
-                }
-                _ => {}
-            }
-        }
-        for trigger in &new_triggers {
-            match RuleRepository::relink_trigger_by_ref(&state.db, &trigger.r#ref, trigger.id).await
-            {
-                Ok(count) if count > 0 => {
-                    tracing::info!("Re-linked {} rule(s) to trigger '{}'", count, trigger.r#ref);
-                }
-                Err(e) => {
-                    tracing::warn!(
-                        "Failed to re-link rules to trigger '{}': {}",
-                        trigger.r#ref,
-                        e
-                    );
-                }
-                _ => {}
-            }
-        }
-    }
+    // Since entities are now updated in place (IDs preserved), ad-hoc rules
+    // and cross-pack FK references survive reinstallation automatically.
+    // No need to save/restore rules or re-link FKs.
     // Set up runtime environments for the pack's actions.
     // This creates virtualenvs, installs dependencies, etc. based on each
@@ -1199,8 +1102,11 @@ async fn register_pack_internal(
             let test_passed = result.status == "passed";
             if !test_passed && !force {
-                // Tests failed and force is not set - rollback pack creation
-                let _ = PackRepository::delete(&state.db, pack.id).await;
+                // Tests failed and force is not set — only delete if we just created this pack.
+                // If we updated an existing pack, deleting would destroy the original.
+                if is_new_pack {
+                    let _ = PackRepository::delete(&state.db, pack.id).await;
+                }
                 return Err(ApiError::BadRequest(format!(
                     "Pack registration failed: tests did not pass. Use force=true to register anyway."
                 )));
@@ -1217,7 +1123,9 @@ async fn register_pack_internal(
             tracing::warn!("Failed to execute tests for pack '{}': {}", pack.r#ref, e);
             // If tests can't be executed and force is not set, fail the registration
             if !force {
-                let _ = PackRepository::delete(&state.db, pack.id).await;
+                if is_new_pack {
+                    let _ = PackRepository::delete(&state.db, pack.id).await;
+                }
                 return Err(ApiError::BadRequest(format!(
                     "Pack registration failed: could not execute tests. Error: {}. Use force=true to register anyway.",
                     e

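The commits above replace delete-and-recreate with update-in-place so that pack IDs, and therefore every foreign key pointing at the pack's child entities, survive a reinstall. A minimal, dependency-free sketch of why this matters (the `Registry` type and method names here are illustrative, not the real repository API):

```rust
use std::collections::HashMap;

// Hypothetical in-memory registry illustrating ID preservation.
#[derive(Debug, Clone)]
struct Pack {
    id: u64,
    version: String,
}

struct Registry {
    next_id: u64,
    packs: HashMap<String, Pack>,
}

impl Registry {
    fn new() -> Self {
        Registry { next_id: 1, packs: HashMap::new() }
    }

    // Delete + recreate: the pack gets a fresh id, so any row holding the
    // old id (rules, actions, triggers) dangles or is cascade-deleted.
    fn reinstall_by_recreate(&mut self, pack_ref: &str, version: &str) -> u64 {
        self.packs.remove(pack_ref);
        let id = self.next_id;
        self.next_id += 1;
        self.packs
            .insert(pack_ref.to_string(), Pack { id, version: version.into() });
        id
    }

    // Update in place: same id, new metadata, so child FKs stay valid.
    fn reinstall_by_update(&mut self, pack_ref: &str, version: &str) -> u64 {
        match self.packs.get_mut(pack_ref) {
            Some(p) => {
                p.version = version.into();
                p.id
            }
            None => self.reinstall_by_recreate(pack_ref, version),
        }
    }
}

fn main() {
    let mut reg = Registry::new();
    let first = reg.reinstall_by_update("core", "1.0.0"); // creates
    let updated = reg.reinstall_by_update("core", "1.1.0"); // updates in place
    assert_eq!(first, updated); // id preserved across reinstall
    let recreated = reg.reinstall_by_recreate("core", "1.2.0");
    assert_ne!(updated, recreated); // fresh id: FK references would break
    println!("in-place id: {updated}, recreated id: {recreated}");
}
```

This is the same reasoning the diff's comment states: once entities are updated in place, there is nothing to save/restore or re-link after a forced reinstall.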

@@ -523,12 +523,11 @@ async fn write_workflow_yaml(
     pack_ref: &str,
     request: &SaveWorkflowFileRequest,
 ) -> Result<(), ApiError> {
-    let workflows_dir = packs_base_dir
-        .join(pack_ref)
-        .join("actions")
-        .join("workflows");
+    let pack_dir = packs_base_dir.join(pack_ref);
+    let actions_dir = pack_dir.join("actions");
+    let workflows_dir = actions_dir.join("workflows");
-    // Ensure the directory exists
+    // Ensure both directories exist
     tokio::fs::create_dir_all(&workflows_dir)
         .await
         .map_err(|e| {
@@ -539,34 +538,164 @@ async fn write_workflow_yaml(
         ))
     })?;
-    let filename = format!("{}.workflow.yaml", request.name);
-    let filepath = workflows_dir.join(&filename);
-    // Serialize definition to YAML
-    let yaml_content = serde_yaml_ng::to_string(&request.definition).map_err(|e| {
+    // ── 1. Write the workflow file (graph-only: version, vars, tasks, output_map) ──
+    let workflow_filename = format!("{}.workflow.yaml", request.name);
+    let workflow_filepath = workflows_dir.join(&workflow_filename);
+    // Strip action-level fields from the definition — the workflow file should
+    // contain only the execution graph. The action YAML is authoritative for
+    // ref, label, description, parameters, output, and tags.
+    let graph_only = strip_action_level_fields(&request.definition);
+    let workflow_yaml = serde_yaml_ng::to_string(&graph_only).map_err(|e| {
         ApiError::BadRequest(format!("Failed to serialize workflow to YAML: {}", e))
     })?;
-    // Write file
-    tokio::fs::write(&filepath, yaml_content)
+    let workflow_yaml_with_header = format!(
+        "# Workflow execution graph for {}.{}\n\
+         # Action-level metadata (ref, label, parameters, output, tags) is defined\n\
+         # in the companion action YAML: actions/{}.yaml\n\n{}",
+        pack_ref, request.name, request.name, workflow_yaml
+    );
+    tokio::fs::write(&workflow_filepath, &workflow_yaml_with_header)
         .await
         .map_err(|e| {
             ApiError::InternalServerError(format!(
                 "Failed to write workflow file '{}': {}",
-                filepath.display(),
+                workflow_filepath.display(),
                 e
             ))
         })?;
     tracing::info!(
         "Wrote workflow file: {} ({} bytes)",
-        filepath.display(),
-        filepath.metadata().map(|m| m.len()).unwrap_or(0)
+        workflow_filepath.display(),
+        workflow_yaml_with_header.len()
     );
+    // ── 2. Write the companion action YAML ──
+    let action_filename = format!("{}.yaml", request.name);
+    let action_filepath = actions_dir.join(&action_filename);
+    let action_yaml = build_action_yaml(pack_ref, request);
+    tokio::fs::write(&action_filepath, &action_yaml)
+        .await
+        .map_err(|e| {
+            ApiError::InternalServerError(format!(
+                "Failed to write action YAML '{}': {}",
+                action_filepath.display(),
+                e
+            ))
+        })?;
+    tracing::info!(
+        "Wrote action YAML: {} ({} bytes)",
+        action_filepath.display(),
+        action_yaml.len()
+    );
     Ok(())
 }
/// Strip action-level fields from a workflow definition JSON, keeping only
/// the execution graph: `version`, `vars`, `tasks`, `output_map`.
///
/// Fields removed: `ref`, `label`, `description`, `parameters`, `output`, `tags`.
fn strip_action_level_fields(definition: &serde_json::Value) -> serde_json::Value {
if let Some(obj) = definition.as_object() {
let mut graph = serde_json::Map::new();
// Keep only graph-level fields
for key in &["version", "vars", "tasks", "output_map"] {
if let Some(val) = obj.get(*key) {
graph.insert((*key).to_string(), val.clone());
}
}
serde_json::Value::Object(graph)
} else {
// Shouldn't happen, but pass through if not an object
definition.clone()
}
}
/// Build the companion action YAML content for a workflow action.
///
/// This file defines the action-level metadata (ref, label, parameters, etc.)
/// and references the workflow file via `workflow_file`.
fn build_action_yaml(pack_ref: &str, request: &SaveWorkflowFileRequest) -> String {
let mut lines = Vec::new();
lines.push(format!(
"# Action definition for workflow {}.{}",
pack_ref, request.name
));
lines.push(format!(
"# The workflow graph (tasks, transitions, variables) is in:"
));
lines.push(format!(
"# actions/workflows/{}.workflow.yaml",
request.name
));
lines.push(String::new());
lines.push(format!("ref: {}.{}", pack_ref, request.name));
lines.push(format!("label: \"{}\"", request.label.replace('"', "\\\"")));
if let Some(ref desc) = request.description {
if !desc.is_empty() {
lines.push(format!("description: \"{}\"", desc.replace('"', "\\\"")));
}
}
lines.push(format!("enabled: true"));
lines.push(format!(
"workflow_file: workflows/{}.workflow.yaml",
request.name
));
// Parameters
if let Some(ref params) = request.param_schema {
if let Some(obj) = params.as_object() {
if !obj.is_empty() {
lines.push(String::new());
let params_yaml = serde_yaml_ng::to_string(params).unwrap_or_default();
lines.push(format!("parameters:"));
// Indent the YAML output under `parameters:`
for line in params_yaml.lines() {
lines.push(format!(" {}", line));
}
}
}
}
// Output schema
if let Some(ref output) = request.out_schema {
if let Some(obj) = output.as_object() {
if !obj.is_empty() {
lines.push(String::new());
let output_yaml = serde_yaml_ng::to_string(output).unwrap_or_default();
lines.push(format!("output:"));
for line in output_yaml.lines() {
lines.push(format!(" {}", line));
}
}
}
}
// Tags
if let Some(ref tags) = request.tags {
if !tags.is_empty() {
lines.push(String::new());
lines.push(format!("tags:"));
for tag in tags {
lines.push(format!(" - {}", tag));
}
}
}
lines.push(String::new()); // trailing newline
lines.join("\n")
}
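For a hypothetical workflow named `deploy` in a pack `core` with label "Deploy app", no parameter or output schema, and a single tag, `build_action_yaml` above would emit approximately the following companion file (the pack and workflow names are illustrative):

```yaml
# Action definition for workflow core.deploy
# The workflow graph (tasks, transitions, variables) is in:
# actions/workflows/deploy.workflow.yaml

ref: core.deploy
label: "Deploy app"
enabled: true
workflow_file: workflows/deploy.workflow.yaml

tags:
  - ops
```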
/// Create a companion action record for a workflow definition.
///
/// This ensures the workflow appears in action lists and the action palette in the
@@ -669,6 +798,9 @@ async fn update_companion_action(
             runtime_version_constraint: None,
             param_schema: param_schema.cloned(),
             out_schema: out_schema.cloned(),
+            parameter_delivery: None,
+            parameter_format: None,
+            output_format: None,
         };
         ActionRepository::update(db, action.id, update_input)
@@ -731,6 +863,9 @@ async fn ensure_companion_action(
             runtime_version_constraint: None,
             param_schema: param_schema.cloned(),
             out_schema: out_schema.cloned(),
+            parameter_delivery: None,
+            parameter_format: None,
+            output_format: None,
         };
         ActionRepository::update(db, action.id, update_input)


@@ -1,5 +1,5 @@
 use anyhow::{Context, Result};
-use reqwest::{multipart, Client as HttpClient, Method, RequestBuilder, Response, StatusCode};
+use reqwest::{multipart, Client as HttpClient, Method, RequestBuilder, StatusCode};
 use serde::{de::DeserializeOwned, Serialize};
 use std::path::PathBuf;
 use std::time::Duration;
@@ -83,13 +83,14 @@ impl ApiClient {
         self.auth_token = None;
     }
-    /// Refresh the authentication token using the refresh token
+    /// Refresh the authentication token using the refresh token.
     ///
-    /// Returns Ok(true) if refresh succeeded, Ok(false) if no refresh token available
+    /// Returns `Ok(true)` if refresh succeeded, `Ok(false)` if no refresh token
+    /// is available or the server rejected it.
     async fn refresh_auth_token(&mut self) -> Result<bool> {
         let refresh_token = match &self.refresh_token {
             Some(token) => token.clone(),
-            None => return Ok(false), // No refresh token available
+            None => return Ok(false),
         };
         #[derive(Serialize)]
@@ -103,7 +104,6 @@ impl ApiClient {
             refresh_token: String,
         }
-        // Build refresh request without auth token
         let url = format!("{}/auth/refresh", self.base_url);
         let req = self
             .client
@@ -113,7 +113,7 @@ impl ApiClient {
         let response = req.send().await.context("Failed to refresh token")?;
         if !response.status().is_success() {
-            // Refresh failed - clear tokens
+            // Refresh failed - clear tokens so we don't keep retrying
             self.auth_token = None;
             self.refresh_token = None;
             return Ok(false);
@@ -128,7 +128,7 @@ impl ApiClient {
         self.auth_token = Some(api_response.data.access_token.clone());
         self.refresh_token = Some(api_response.data.refresh_token.clone());
-        // Persist to config file if we have the path
+        // Persist to config file
         if self.config_path.is_some() {
             if let Ok(mut config) = CliConfig::load() {
                 let _ = config.set_auth(
@@ -141,45 +141,96 @@ impl ApiClient {
         Ok(true)
     }
-    /// Build a request with common headers
-    fn build_request(&self, method: Method, path: &str) -> RequestBuilder {
-        // Auth endpoints are at /auth, not /auth
-        let url = if path.starts_with("/auth") {
+    // ── Request building helpers ────────────────────────────────────────
+
+    /// Build a full URL from a path.
+    fn url_for(&self, path: &str) -> String {
+        if path.starts_with("/auth") {
             format!("{}{}", self.base_url, path)
         } else {
             format!("{}/api/v1{}", self.base_url, path)
-        };
-        let mut req = self.client.request(method, &url);
+        }
+    }
+
+    /// Build a `RequestBuilder` with auth header applied.
+    fn build_request(&self, method: Method, path: &str) -> RequestBuilder {
+        let url = self.url_for(path);
+        let mut req = self.client.request(method, &url);
         if let Some(token) = &self.auth_token {
             req = req.bearer_auth(token);
         }
         req
     }
-    /// Execute a request and handle the response with automatic token refresh
-    async fn execute<T: DeserializeOwned>(&mut self, req: RequestBuilder) -> Result<T> {
+    // ── Core execute-with-retry machinery ──────────────────────────────
+
+    /// Send a request that carries a JSON body. On a 401 response the token
+    /// is refreshed and the request is rebuilt & retried exactly once.
+    async fn execute_json<T, B>(
+        &mut self,
+        method: Method,
+        path: &str,
+        body: Option<&B>,
+    ) -> Result<T>
+    where
+        T: DeserializeOwned,
+        B: Serialize,
+    {
+        // First attempt
+        let req = self.attach_body(self.build_request(method.clone(), path), body);
         let response = req.send().await.context("Failed to send request to API")?;
-        // If 401 and we have a refresh token, try to refresh once
         if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
-            // Try to refresh the token
             if self.refresh_auth_token().await? {
-                // Rebuild and retry the original request with new token
-                // Note: This is a simplified retry - the original request body is already consumed
-                // For a production implementation, we'd need to clone the request or store the body
-                return Err(anyhow::anyhow!(
-                    "Token expired and was refreshed. Please retry your command."
-                ));
+                // Retry with new token
+                let req = self.attach_body(self.build_request(method, path), body);
+                let response = req
+                    .send()
+                    .await
+                    .context("Failed to send request to API (retry)")?;
+                return self.handle_response(response).await;
             }
         }
         self.handle_response(response).await
     }
-    /// Handle API response and extract data
-    async fn handle_response<T: DeserializeOwned>(&self, response: Response) -> Result<T> {
+    /// Send a request that carries a JSON body and expects no response body.
+    async fn execute_json_no_response<B: Serialize>(
+        &mut self,
+        method: Method,
+        path: &str,
+        body: Option<&B>,
+    ) -> Result<()> {
+        let req = self.attach_body(self.build_request(method.clone(), path), body);
+        let response = req.send().await.context("Failed to send request to API")?;
+        if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
+            if self.refresh_auth_token().await? {
+                let req = self.attach_body(self.build_request(method, path), body);
+                let response = req
+                    .send()
+                    .await
+                    .context("Failed to send request to API (retry)")?;
+                return self.handle_empty_response(response).await;
+            }
+        }
+        self.handle_empty_response(response).await
+    }
+
+    /// Optionally attach a JSON body to a request builder.
+    fn attach_body<B: Serialize>(&self, req: RequestBuilder, body: Option<&B>) -> RequestBuilder {
+        match body {
+            Some(b) => req.json(b),
+            None => req,
+        }
+    }
+
+    // ── Response handling ──────────────────────────────────────────────
+
+    /// Parse a successful API response or return a descriptive error.
+    async fn handle_response<T: DeserializeOwned>(&self, response: reqwest::Response) -> Result<T> {
         let status = response.status();
         if status.is_success() {
@@ -194,7 +245,6 @@ impl ApiClient {
             .await
             .unwrap_or_else(|_| "Unknown error".to_string());
-        // Try to parse as API error
         if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
             anyhow::bail!("API error ({}): {}", status, api_error.error);
         } else {
@@ -203,10 +253,30 @@ impl ApiClient {
         }
     }
+    /// Handle a response where we only care about success/failure, not a body.
+    async fn handle_empty_response(&self, response: reqwest::Response) -> Result<()> {
+        let status = response.status();
+        if status.is_success() {
+            Ok(())
+        } else {
+            let error_text = response
+                .text()
+                .await
+                .unwrap_or_else(|_| "Unknown error".to_string());
+            if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
+                anyhow::bail!("API error ({}): {}", status, api_error.error);
+            } else {
+                anyhow::bail!("API error ({}): {}", status, error_text);
+            }
+        }
+    }
+
+    // ── Public convenience methods ─────────────────────────────────────
+
     /// GET request
     pub async fn get<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
-        let req = self.build_request(Method::GET, path);
-        self.execute(req).await
+        self.execute_json::<T, ()>(Method::GET, path, None).await
     }
     /// GET request with query parameters (query string must be in path)
@@ -215,8 +285,7 @@ impl ApiClient {
     /// Example: `client.get_with_query("/actions?enabled=true&pack=core").await`
     #[allow(dead_code)]
     pub async fn get_with_query<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
-        let req = self.build_request(Method::GET, path);
-        self.execute(req).await
+        self.execute_json::<T, ()>(Method::GET, path, None).await
     }
     /// POST request with JSON body
@@ -225,8 +294,7 @@ impl ApiClient {
         path: &str,
         body: &B,
     ) -> Result<T> {
-        let req = self.build_request(Method::POST, path).json(body);
-        self.execute(req).await
+        self.execute_json(Method::POST, path, Some(body)).await
     }
     /// PUT request with JSON body
@@ -237,8 +305,7 @@ impl ApiClient {
         path: &str,
         body: &B,
     ) -> Result<T> {
-        let req = self.build_request(Method::PUT, path).json(body);
-        self.execute(req).await
+        self.execute_json(Method::PUT, path, Some(body)).await
     }
     /// PATCH request with JSON body
@@ -247,8 +314,7 @@ impl ApiClient {
         path: &str,
         body: &B,
     ) -> Result<T> {
-        let req = self.build_request(Method::PATCH, path).json(body);
-        self.execute(req).await
+        self.execute_json(Method::PATCH, path, Some(body)).await
    }
     /// DELETE request with response parsing
@@ -259,8 +325,7 @@ impl ApiClient {
     /// delete operations return metadata (e.g., cascade deletion summaries).
     #[allow(dead_code)]
     pub async fn delete<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
-        let req = self.build_request(Method::DELETE, path);
-        self.execute(req).await
+        self.execute_json::<T, ()>(Method::DELETE, path, None).await
     }
     /// POST request without expecting response body
@@ -270,36 +335,14 @@ impl ApiClient {
     /// Kept for API completeness even though not currently used.
     #[allow(dead_code)]
     pub async fn post_no_response<B: Serialize>(&mut self, path: &str, body: &B) -> Result<()> {
-        let req = self.build_request(Method::POST, path).json(body);
-        let response = req.send().await.context("Failed to send request to API")?;
-        let status = response.status();
-        if status.is_success() {
-            Ok(())
-        } else {
-            let error_text = response
-                .text()
-                .await
-                .unwrap_or_else(|_| "Unknown error".to_string());
-            anyhow::bail!("API error ({}): {}", status, error_text);
-        }
+        self.execute_json_no_response(Method::POST, path, Some(body))
+            .await
     }

     /// DELETE request without expecting response body
     pub async fn delete_no_response(&mut self, path: &str) -> Result<()> {
-        let req = self.build_request(Method::DELETE, path);
-        let response = req.send().await.context("Failed to send request to API")?;
-        let status = response.status();
-        if status.is_success() {
-            Ok(())
-        } else {
-            let error_text = response
-                .text()
-                .await
-                .unwrap_or_else(|_| "Unknown error".to_string());
-            anyhow::bail!("API error ({}): {}", status, error_text);
-        }
+        self.execute_json_no_response::<()>(Method::DELETE, path, None)
+            .await
     }
/// POST a multipart/form-data request with a file field and optional text fields. /// POST a multipart/form-data request with a file field and optional text fields.
@@ -318,33 +361,47 @@ impl ApiClient {
mime_type: &str,
extra_fields: Vec<(&str, String)>,
) -> Result<T> {
// Closure-like helper to build the multipart request from scratch.
// We need this because reqwest::multipart::Form is not Clone, so we
// must rebuild it for the retry attempt.
let build_multipart_request =
|client: &ApiClient, bytes: &[u8]| -> Result<reqwest::RequestBuilder> {
let url = format!("{}/api/v1{}", client.base_url, path);
let file_part = multipart::Part::bytes(bytes.to_vec())
.file_name(file_name.to_string())
.mime_str(mime_type)
.context("Invalid MIME type")?;
let mut form = multipart::Form::new().part(file_field_name.to_string(), file_part);
for (key, value) in &extra_fields {
form = form.text(key.to_string(), value.clone());
}
let mut req = client.client.post(&url).multipart(form);
if let Some(token) = &client.auth_token {
req = req.bearer_auth(token);
}
Ok(req)
};
// First attempt
let req = build_multipart_request(self, &file_bytes)?;
let response = req
.send()
.await
.context("Failed to send multipart request to API")?;
// Handle 401 + refresh (same pattern as execute())
if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
if self.refresh_auth_token().await? {
// Retry with new token
let req = build_multipart_request(self, &file_bytes)?;
let response = req
.send()
.await
.context("Failed to send multipart request to API (retry)")?;
return self.handle_response(response).await;
}
}
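The retry path rebuilds the multipart form for each attempt because `reqwest::multipart::Form` is not `Clone`. A minimal stdlib-only sketch of that rebuild-for-retry pattern (all names here are illustrative stand-ins, not part of the client above):

```rust
// Stand-in for a non-Clone request body such as a multipart form.
struct Body(Vec<u8>);

// Rebuild the body from the raw inputs instead of cloning the built value.
fn build_body(bytes: &[u8]) -> Body {
    Body(bytes.to_vec())
}

/// Simulated send: the first attempt fails with "401", the retry succeeds
/// and returns the payload length.
fn send(body: Body, attempt: u32) -> Result<usize, &'static str> {
    if attempt == 0 { Err("401") } else { Ok(body.0.len()) }
}

fn upload_with_retry(payload: &[u8]) -> Result<usize, &'static str> {
    // The first attempt consumes the body...
    match send(build_body(payload), 0) {
        // ...so on an auth failure we reconstruct it for the retry.
        Err("401") => send(build_body(payload), 1),
        other => other,
    }
}
```

The key design point mirrors the closure above: capture the cheap, reusable inputs (the raw bytes and field values) and defer construction of the non-`Clone` value to each attempt.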
@@ -374,4 +431,22 @@ mod tests {
client.clear_auth_token();
assert!(client.auth_token.is_none());
}
#[test]
fn test_url_for_api_path() {
let client = ApiClient::new("http://localhost:8080".to_string(), None);
assert_eq!(
client.url_for("/actions"),
"http://localhost:8080/api/v1/actions"
);
}
#[test]
fn test_url_for_auth_path() {
let client = ApiClient::new("http://localhost:8080".to_string(), None);
assert_eq!(
client.url_for("/auth/login"),
"http://localhost:8080/auth/login"
);
}
}
@@ -52,7 +52,7 @@ pub enum ActionCommands {
action_ref: String,
/// Skip confirmation prompt
#[arg(long)]
yes: bool,
},
/// Execute an action
@@ -7,3 +7,4 @@ pub mod pack_index;
pub mod rule;
pub mod sensor;
pub mod trigger;
pub mod workflow;
@@ -11,6 +11,37 @@ use crate::output::{self, OutputFormat};
#[derive(Subcommand)]
pub enum PackCommands {
/// Create an empty pack
///
/// Creates a new pack with no actions, triggers, rules, or sensors.
/// Use --interactive (-i) to be prompted for each field, or provide
/// fields via flags. Only --ref is required in non-interactive mode
/// (--label defaults to a title-cased ref, version defaults to 0.1.0).
Create {
/// Unique reference identifier (e.g., "my_pack", "slack")
#[arg(long, short = 'r')]
r#ref: Option<String>,
/// Human-readable label (defaults to title-cased ref)
#[arg(long, short)]
label: Option<String>,
/// Pack description
#[arg(long, short)]
description: Option<String>,
/// Pack version (semver format recommended)
#[arg(long = "pack-version", default_value = "0.1.0")]
pack_version: String,
/// Tags for categorization (comma-separated)
#[arg(long, value_delimiter = ',')]
tags: Vec<String>,
/// Interactive mode — prompt for each field
#[arg(long, short)]
interactive: bool,
},
/// List all installed packs
List {
/// Filter by pack name
@@ -75,7 +106,7 @@ pub enum PackCommands {
pack_ref: String,
/// Skip confirmation prompt
#[arg(long)]
yes: bool,
},
/// Register a pack from a local directory (path must be accessible by the API server)
@@ -282,6 +313,17 @@ struct UploadPackResponse {
tests_skipped: bool,
}
#[derive(Debug, Serialize)]
struct CreatePackBody {
r#ref: String,
label: String,
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
version: String,
#[serde(default)]
tags: Vec<String>,
}
pub async fn handle_pack_command(
profile: &Option<String>,
command: PackCommands,
@@ -289,6 +331,27 @@ pub async fn handle_pack_command(
output_format: OutputFormat,
) -> Result<()> {
match command {
PackCommands::Create {
r#ref,
label,
description,
pack_version,
tags,
interactive,
} => {
handle_create(
profile,
r#ref,
label,
description,
pack_version,
tags,
interactive,
api_url,
output_format,
)
.await
}
PackCommands::List { name } => handle_list(profile, name, api_url, output_format).await,
PackCommands::Show { pack_ref } => {
handle_show(profile, pack_ref, api_url, output_format).await
@@ -401,6 +464,169 @@ pub async fn handle_pack_command(
}
}
/// Derive a human-readable label from a pack ref.
///
/// Splits on `_`, `-`, or `.` and title-cases each word.
fn label_from_ref(r: &str) -> String {
r.split(|c| c == '_' || c == '-' || c == '.')
.filter(|s| !s.is_empty())
.map(|word| {
let mut chars = word.chars();
match chars.next() {
Some(first) => {
let upper: String = first.to_uppercase().collect();
format!("{}{}", upper, chars.as_str())
}
None => String::new(),
}
})
.collect::<Vec<_>>()
.join(" ")
}
async fn handle_create(
profile: &Option<String>,
ref_flag: Option<String>,
label_flag: Option<String>,
description_flag: Option<String>,
version_flag: String,
tags_flag: Vec<String>,
interactive: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
// ── Collect field values ────────────────────────────────────────
let (pack_ref, label, description, version, tags) = if interactive {
// Interactive prompts
let pack_ref: String = match ref_flag {
Some(r) => r,
None => dialoguer::Input::new()
.with_prompt("Pack ref (unique identifier, e.g. \"my_pack\")")
.interact_text()?,
};
let default_label = label_flag
.clone()
.unwrap_or_else(|| label_from_ref(&pack_ref));
let label: String = dialoguer::Input::new()
.with_prompt("Label")
.default(default_label)
.interact_text()?;
let default_desc = description_flag.clone().unwrap_or_default();
let description: String = dialoguer::Input::new()
.with_prompt("Description (optional, Enter to skip)")
.default(default_desc)
.allow_empty(true)
.interact_text()?;
let description = if description.is_empty() {
None
} else {
Some(description)
};
let version: String = dialoguer::Input::new()
.with_prompt("Version")
.default(version_flag)
.interact_text()?;
let default_tags = if tags_flag.is_empty() {
String::new()
} else {
tags_flag.join(", ")
};
let tags_input: String = dialoguer::Input::new()
.with_prompt("Tags (comma-separated, optional)")
.default(default_tags)
.allow_empty(true)
.interact_text()?;
let tags: Vec<String> = tags_input
.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect();
// Show summary and confirm
println!();
output::print_section("New Pack Summary");
output::print_key_value_table(vec![
("Ref", pack_ref.clone()),
("Label", label.clone()),
(
"Description",
description
.clone()
.unwrap_or_else(|| "(none)".to_string()),
),
("Version", version.clone()),
(
"Tags",
if tags.is_empty() {
"(none)".to_string()
} else {
tags.join(", ")
},
),
]);
println!();
let confirm = dialoguer::Confirm::new()
.with_prompt("Create this pack?")
.default(true)
.interact()?;
if !confirm {
output::print_info("Pack creation cancelled");
return Ok(());
}
(pack_ref, label, description, version, tags)
} else {
// Non-interactive: ref is required
let pack_ref = ref_flag.ok_or_else(|| {
anyhow::anyhow!(
"Pack ref is required. Provide --ref <value> or use --interactive mode."
)
})?;
let label = label_flag.unwrap_or_else(|| label_from_ref(&pack_ref));
let description = description_flag;
let version = version_flag;
let tags = tags_flag;
(pack_ref, label, description, version, tags)
};
// ── Send request ────────────────────────────────────────────────
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let body = CreatePackBody {
r#ref: pack_ref,
label,
description,
version,
tags,
};
let pack: Pack = client.post("/packs", &body).await?;
// ── Output ──────────────────────────────────────────────────────
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&pack, output_format)?;
}
OutputFormat::Table => {
output::print_success(&format!(
"Pack '{}' created successfully (id: {})",
pack.pack_ref, pack.id
));
}
}
Ok(())
}
async fn handle_list(
profile: &Option<String>,
name: Option<String>,
@@ -1630,3 +1856,48 @@ async fn handle_update(
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_label_from_ref_underscores() {
assert_eq!(label_from_ref("my_cool_pack"), "My Cool Pack");
}
#[test]
fn test_label_from_ref_hyphens() {
assert_eq!(label_from_ref("my-cool-pack"), "My Cool Pack");
}
#[test]
fn test_label_from_ref_dots() {
assert_eq!(label_from_ref("my.cool.pack"), "My Cool Pack");
}
#[test]
fn test_label_from_ref_mixed_separators() {
assert_eq!(label_from_ref("my_cool-pack.v2"), "My Cool Pack V2");
}
#[test]
fn test_label_from_ref_single_word() {
assert_eq!(label_from_ref("slack"), "Slack");
}
#[test]
fn test_label_from_ref_already_capitalized() {
assert_eq!(label_from_ref("AWS"), "AWS");
}
#[test]
fn test_label_from_ref_empty() {
assert_eq!(label_from_ref(""), "");
}
#[test]
fn test_label_from_ref_consecutive_separators() {
assert_eq!(label_from_ref("my__pack"), "My Pack");
}
}
@@ -42,7 +42,7 @@ pub enum TriggerCommands {
trigger_ref: String,
/// Skip confirmation prompt
#[arg(long)]
yes: bool,
},
}
@@ -0,0 +1,699 @@
use anyhow::{Context, Result};
use clap::Subcommand;
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use crate::client::ApiClient;
use crate::config::CliConfig;
use crate::output::{self, OutputFormat};
#[derive(Subcommand)]
pub enum WorkflowCommands {
/// Upload a workflow action from local YAML files to an existing pack.
///
/// Reads the action YAML file, finds the referenced workflow YAML file
/// via its `workflow_file` field, and uploads both to the API. The pack
/// is determined from the action ref (e.g. `mypack.deploy` → pack `mypack`).
Upload {
/// Path to the action YAML file (e.g. actions/deploy.yaml).
/// Must contain a `workflow_file` field pointing to the workflow YAML.
action_file: String,
/// Force update if the workflow already exists
#[arg(short, long)]
force: bool,
},
/// List workflows
List {
/// Filter by pack reference
#[arg(long)]
pack: Option<String>,
/// Filter by tag (comma-separated)
#[arg(long)]
tags: Option<String>,
/// Search term (matches label/description)
#[arg(long)]
search: Option<String>,
},
/// Show details of a specific workflow
Show {
/// Workflow reference (e.g. core.install_packs)
workflow_ref: String,
},
/// Delete a workflow
Delete {
/// Workflow reference (e.g. core.install_packs)
workflow_ref: String,
/// Skip confirmation prompt
#[arg(long)]
yes: bool,
},
}
// ── Local YAML models (for parsing action YAML files) ──────────────────
/// Minimal representation of an action YAML file, capturing only the fields
/// we need to build a `SaveWorkflowFileRequest`.
#[derive(Debug, Deserialize)]
struct ActionYaml {
/// Full action ref, e.g. `python_example.timeline_demo`
#[serde(rename = "ref")]
action_ref: String,
/// Human-readable label
#[serde(default)]
label: String,
/// Description
#[serde(default)]
description: Option<String>,
/// Relative path to the workflow YAML from the `actions/` directory
workflow_file: Option<String>,
/// Parameter schema (flat format)
#[serde(default)]
parameters: Option<serde_json::Value>,
/// Output schema (flat format)
#[serde(default)]
output: Option<serde_json::Value>,
/// Tags
#[serde(default)]
tags: Option<Vec<String>>,
/// Whether the action is enabled
#[serde(default)]
enabled: Option<bool>,
}
// ── API DTOs ────────────────────────────────────────────────────────────
/// Mirrors the API's `SaveWorkflowFileRequest`.
#[derive(Debug, Serialize)]
struct SaveWorkflowFileRequest {
name: String,
label: String,
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
version: String,
pack_ref: String,
definition: serde_json::Value,
#[serde(skip_serializing_if = "Option::is_none")]
param_schema: Option<serde_json::Value>,
#[serde(skip_serializing_if = "Option::is_none")]
out_schema: Option<serde_json::Value>,
#[serde(skip_serializing_if = "Option::is_none")]
tags: Option<Vec<String>>,
#[serde(skip_serializing_if = "Option::is_none")]
enabled: Option<bool>,
}
#[derive(Debug, Serialize, Deserialize)]
struct WorkflowResponse {
id: i64,
#[serde(rename = "ref")]
workflow_ref: String,
pack: i64,
pack_ref: String,
label: String,
description: Option<String>,
version: String,
param_schema: Option<serde_json::Value>,
out_schema: Option<serde_json::Value>,
definition: serde_json::Value,
tags: Vec<String>,
enabled: bool,
created: String,
updated: String,
}
#[derive(Debug, Serialize, Deserialize)]
struct WorkflowSummary {
id: i64,
#[serde(rename = "ref")]
workflow_ref: String,
pack_ref: String,
label: String,
description: Option<String>,
version: String,
tags: Vec<String>,
enabled: bool,
created: String,
updated: String,
}
// ── Command dispatch ────────────────────────────────────────────────────
pub async fn handle_workflow_command(
profile: &Option<String>,
command: WorkflowCommands,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
match command {
WorkflowCommands::Upload { action_file, force } => {
handle_upload(profile, action_file, force, api_url, output_format).await
}
WorkflowCommands::List { pack, tags, search } => {
handle_list(profile, pack, tags, search, api_url, output_format).await
}
WorkflowCommands::Show { workflow_ref } => {
handle_show(profile, workflow_ref, api_url, output_format).await
}
WorkflowCommands::Delete { workflow_ref, yes } => {
handle_delete(profile, workflow_ref, yes, api_url, output_format).await
}
}
}
// ── Upload ──────────────────────────────────────────────────────────────
async fn handle_upload(
profile: &Option<String>,
action_file: String,
force: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let action_path = Path::new(&action_file);
// ── 1. Validate & read the action YAML ──────────────────────────────
if !action_path.exists() {
anyhow::bail!("Action YAML file not found: {}", action_file);
}
if !action_path.is_file() {
anyhow::bail!("Path is not a file: {}", action_file);
}
let action_yaml_content =
std::fs::read_to_string(action_path).context("Failed to read action YAML file")?;
let action: ActionYaml = serde_yaml_ng::from_str(&action_yaml_content)
.context("Failed to parse action YAML file")?;
// ── 2. Extract pack_ref and workflow name from the action ref ────────
let (pack_ref, workflow_name) = split_action_ref(&action.action_ref)?;
// ── 3. Resolve the workflow_file path ───────────────────────────────
let workflow_file_rel = action.workflow_file.as_deref().ok_or_else(|| {
anyhow::anyhow!(
"Action YAML does not contain a 'workflow_file' field. \
This command requires a workflow action — regular actions should be \
uploaded as part of a pack."
)
})?;
// workflow_file is relative to the actions/ directory. The action YAML is
// typically at `<pack>/actions/<name>.yaml`, so the workflow file is
// resolved relative to the action YAML's parent directory.
let workflow_path = resolve_workflow_path(action_path, workflow_file_rel)?;
if !workflow_path.exists() {
anyhow::bail!(
"Workflow file not found: {}\n \
(resolved from workflow_file: '{}' relative to '{}')",
workflow_path.display(),
workflow_file_rel,
action_path
.parent()
.unwrap_or(Path::new("."))
.display()
);
}
// ── 4. Read and parse the workflow YAML ─────────────────────────────
let workflow_yaml_content =
std::fs::read_to_string(&workflow_path).context("Failed to read workflow YAML file")?;
let workflow_definition: serde_json::Value =
serde_yaml_ng::from_str(&workflow_yaml_content).context(format!(
"Failed to parse workflow YAML file: {}",
workflow_path.display()
))?;
// Extract version from the workflow definition, defaulting to "1.0.0"
let version = workflow_definition
.get("version")
.and_then(|v| v.as_str())
.unwrap_or("1.0.0")
.to_string();
// ── 5. Build the API request ────────────────────────────────────────
//
// Merge the action-level fields from the workflow definition back into the
// definition payload (the API's SaveWorkflowFileRequest.definition carries
// the full blob; write_workflow_yaml on the server side strips the action-
// level fields before writing the graph-only file).
let mut definition_map: serde_json::Map<String, serde_json::Value> =
if let Some(obj) = workflow_definition.as_object() {
obj.clone()
} else {
serde_json::Map::new()
};
// Ensure action-level fields are present in the definition (the API and
// web UI store the combined form in the database; the server splits them
// into two files on disk).
if let Some(params) = &action.parameters {
definition_map
.entry("parameters".to_string())
.or_insert_with(|| params.clone());
}
if let Some(out) = &action.output {
definition_map
.entry("output".to_string())
.or_insert_with(|| out.clone());
}
let request = SaveWorkflowFileRequest {
name: workflow_name.clone(),
label: if action.label.is_empty() {
workflow_name.clone()
} else {
action.label.clone()
},
description: action.description.clone(),
version,
pack_ref: pack_ref.clone(),
definition: serde_json::Value::Object(definition_map),
param_schema: action.parameters.clone(),
out_schema: action.output.clone(),
tags: action.tags.clone(),
enabled: action.enabled,
};
// ── 6. Print progress ───────────────────────────────────────────────
if output_format == OutputFormat::Table {
output::print_info(&format!(
"Uploading workflow action '{}.{}' to pack '{}'",
pack_ref, workflow_name, pack_ref,
));
output::print_info(&format!(" Action YAML: {}", action_path.display()));
output::print_info(&format!(" Workflow YAML: {}", workflow_path.display()));
}
// ── 7. Send to API ──────────────────────────────────────────────────
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let workflow_ref = format!("{}.{}", pack_ref, workflow_name);
// Try create first; if 409 Conflict and --force, fall back to update.
let create_path = format!("/packs/{}/workflow-files", pack_ref);
let result: Result<WorkflowResponse> = client.post(&create_path, &request).await;
let response: WorkflowResponse = match result {
Ok(resp) => resp,
Err(err) => {
let err_str = err.to_string();
if err_str.contains("409") || err_str.to_lowercase().contains("conflict") {
if !force {
anyhow::bail!(
"Workflow '{}' already exists. Use --force to update it.",
workflow_ref
);
}
if output_format == OutputFormat::Table {
output::print_info("Workflow already exists, updating...");
}
let update_path = format!("/workflows/{}/file", workflow_ref);
client.put(&update_path, &request).await.context(
"Failed to update existing workflow. \
Check that the pack exists and the workflow ref is correct.",
)?
} else {
return Err(err).context("Failed to upload workflow");
}
}
};
// ── 8. Print result ─────────────────────────────────────────────────
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&response, output_format)?;
}
OutputFormat::Table => {
println!();
output::print_success(&format!(
"Workflow '{}' uploaded successfully",
response.workflow_ref
));
output::print_key_value_table(vec![
("ID", response.id.to_string()),
("Reference", response.workflow_ref.clone()),
("Pack", response.pack_ref.clone()),
("Label", response.label.clone()),
("Version", response.version.clone()),
(
"Tags",
if response.tags.is_empty() {
"none".to_string()
} else {
response.tags.join(", ")
},
),
("Enabled", output::format_bool(response.enabled)),
]);
}
}
Ok(())
}
// ── List ────────────────────────────────────────────────────────────────
async fn handle_list(
profile: &Option<String>,
pack: Option<String>,
tags: Option<String>,
search: Option<String>,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let path = if let Some(ref pack_ref) = pack {
format!("/packs/{}/workflows", pack_ref)
} else {
let mut query_parts: Vec<String> = Vec::new();
if let Some(ref t) = tags {
query_parts.push(format!("tags={}", urlencoding::encode(t)));
}
if let Some(ref s) = search {
query_parts.push(format!("search={}", urlencoding::encode(s)));
}
if query_parts.is_empty() {
"/workflows".to_string()
} else {
format!("/workflows?{}", query_parts.join("&"))
}
};
let workflows: Vec<WorkflowSummary> = client.get(&path).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&workflows, output_format)?;
}
OutputFormat::Table => {
if workflows.is_empty() {
output::print_info("No workflows found");
} else {
let mut table = output::create_table();
output::add_header(
&mut table,
vec!["ID", "Reference", "Pack", "Label", "Version", "Enabled", "Tags"],
);
for wf in &workflows {
table.add_row(vec![
wf.id.to_string(),
wf.workflow_ref.clone(),
wf.pack_ref.clone(),
output::truncate(&wf.label, 30),
wf.version.clone(),
output::format_bool(wf.enabled),
if wf.tags.is_empty() {
"-".to_string()
} else {
output::truncate(&wf.tags.join(", "), 25)
},
]);
}
println!("{}", table);
output::print_info(&format!("{} workflow(s) found", workflows.len()));
}
}
}
Ok(())
}
// ── Show ────────────────────────────────────────────────────────────────
async fn handle_show(
profile: &Option<String>,
workflow_ref: String,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let path = format!("/workflows/{}", workflow_ref);
let workflow: WorkflowResponse = client.get(&path).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&workflow, output_format)?;
}
OutputFormat::Table => {
output::print_section(&format!("Workflow: {}", workflow.workflow_ref));
output::print_key_value_table(vec![
("ID", workflow.id.to_string()),
("Reference", workflow.workflow_ref.clone()),
("Pack", workflow.pack_ref.clone()),
("Pack ID", workflow.pack.to_string()),
("Label", workflow.label.clone()),
(
"Description",
workflow
.description
.clone()
.unwrap_or_else(|| "-".to_string()),
),
("Version", workflow.version.clone()),
("Enabled", output::format_bool(workflow.enabled)),
(
"Tags",
if workflow.tags.is_empty() {
"none".to_string()
} else {
workflow.tags.join(", ")
},
),
("Created", output::format_timestamp(&workflow.created)),
("Updated", output::format_timestamp(&workflow.updated)),
]);
// Show parameter schema if present
if let Some(ref params) = workflow.param_schema {
if !params.is_null() && params.as_object().is_some_and(|o| !o.is_empty()) {
output::print_section("Parameters");
let yaml = serde_yaml_ng::to_string(params)?;
println!("{}", yaml);
}
}
// Show output schema if present
if let Some(ref out) = workflow.out_schema {
if !out.is_null() && out.as_object().is_some_and(|o| !o.is_empty()) {
output::print_section("Output Schema");
let yaml = serde_yaml_ng::to_string(out)?;
println!("{}", yaml);
}
}
// Show task summary from definition
if let Some(tasks) = workflow.definition.get("tasks") {
if let Some(arr) = tasks.as_array() {
output::print_section("Tasks");
let mut table = output::create_table();
output::add_header(&mut table, vec!["#", "Name", "Action", "Transitions"]);
for (i, task) in arr.iter().enumerate() {
let name = task
.get("name")
.and_then(|v| v.as_str())
.unwrap_or("?");
let action = task
.get("action")
.and_then(|v| v.as_str())
.unwrap_or("-");
let transition_count = task
.get("next")
.and_then(|v| v.as_array())
.map(|a| {
// Count total target tasks across all transitions
a.iter()
.filter_map(|t| {
t.get("do").and_then(|d| d.as_array()).map(|d| d.len())
})
.sum::<usize>()
})
.unwrap_or(0);
let transitions_str = if transition_count == 0 {
"terminal".to_string()
} else {
format!("{} target(s)", transition_count)
};
table.add_row(vec![
(i + 1).to_string(),
name.to_string(),
output::truncate(action, 35),
transitions_str,
]);
}
println!("{}", table);
}
}
}
}
Ok(())
}
// ── Delete ──────────────────────────────────────────────────────────────
async fn handle_delete(
profile: &Option<String>,
workflow_ref: String,
yes: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
if !yes && output_format == OutputFormat::Table {
let confirm = dialoguer::Confirm::new()
.with_prompt(format!(
"Are you sure you want to delete workflow '{}'?",
workflow_ref
))
.default(false)
.interact()?;
if !confirm {
output::print_info("Delete cancelled");
return Ok(());
}
}
let path = format!("/workflows/{}", workflow_ref);
client.delete_no_response(&path).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
let msg = serde_json::json!({"message": format!("Workflow '{}' deleted", workflow_ref)});
output::print_output(&msg, output_format)?;
}
OutputFormat::Table => {
output::print_success(&format!("Workflow '{}' deleted successfully", workflow_ref));
}
}
Ok(())
}
// ── Helpers ─────────────────────────────────────────────────────────────
/// Split an action ref like `pack_name.action_name` into `(pack_ref, name)`.
///
/// Supports multi-segment pack refs: `org.pack.action` → `("org.pack", "action")`.
/// The last dot-separated segment is the workflow/action name; everything before
/// it is the pack ref.
fn split_action_ref(action_ref: &str) -> Result<(String, String)> {
let dot_pos = action_ref.rfind('.').ok_or_else(|| {
anyhow::anyhow!(
"Invalid action ref '{}': expected format 'pack_ref.name' (at least one dot)",
action_ref
)
})?;
let pack_ref = &action_ref[..dot_pos];
let name = &action_ref[dot_pos + 1..];
if pack_ref.is_empty() || name.is_empty() {
anyhow::bail!(
"Invalid action ref '{}': both pack_ref and name must be non-empty",
action_ref
);
}
Ok((pack_ref.to_string(), name.to_string()))
}
/// Resolve the workflow YAML path from the action YAML's location and the
/// `workflow_file` value.
///
/// `workflow_file` is relative to the `actions/` directory. Since the action
/// YAML is typically at `<pack>/actions/<name>.yaml`, the workflow path is
/// resolved relative to the action YAML's parent directory.
fn resolve_workflow_path(action_yaml_path: &Path, workflow_file: &str) -> Result<PathBuf> {
let action_dir = action_yaml_path
.parent()
.unwrap_or(Path::new("."));
let resolved = action_dir.join(workflow_file);
// Canonicalize if possible (for better error messages), but don't fail
// if the file doesn't exist yet — we'll check existence later.
Ok(resolved)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_split_action_ref_simple() {
let (pack, name) = split_action_ref("core.echo").unwrap();
assert_eq!(pack, "core");
assert_eq!(name, "echo");
}
#[test]
fn test_split_action_ref_multi_segment_pack() {
let (pack, name) = split_action_ref("org.infra.deploy").unwrap();
assert_eq!(pack, "org.infra");
assert_eq!(name, "deploy");
}
#[test]
fn test_split_action_ref_no_dot() {
assert!(split_action_ref("nodot").is_err());
}
#[test]
fn test_split_action_ref_empty_parts() {
assert!(split_action_ref(".name").is_err());
assert!(split_action_ref("pack.").is_err());
}
#[test]
fn test_resolve_workflow_path() {
let action_path = Path::new("/packs/mypack/actions/deploy.yaml");
let resolved =
resolve_workflow_path(action_path, "workflows/deploy.workflow.yaml").unwrap();
assert_eq!(
resolved,
PathBuf::from("/packs/mypack/actions/workflows/deploy.workflow.yaml")
);
}
#[test]
fn test_resolve_workflow_path_relative() {
let action_path = Path::new("actions/deploy.yaml");
let resolved =
resolve_workflow_path(action_path, "workflows/deploy.workflow.yaml").unwrap();
assert_eq!(
resolved,
PathBuf::from("actions/workflows/deploy.workflow.yaml")
);
}
}
@@ -16,6 +16,7 @@ use commands::{
rule::RuleCommands,
sensor::SensorCommands,
trigger::TriggerCommands,
workflow::WorkflowCommands,
};
#[derive(Parser)]
@@ -78,6 +79,11 @@ enum Commands {
#[command(subcommand)]
command: ExecutionCommands,
},
/// Workflow management
Workflow {
#[command(subcommand)]
command: WorkflowCommands,
},
/// Trigger management
Trigger {
#[command(subcommand)]
@@ -172,6 +178,15 @@ async fn main() {
)
.await
}
Commands::Workflow { command } => {
commands::workflow::handle_workflow_command(
&cli.profile,
command,
&cli.api_url,
output_format,
)
.await
}
Commands::Trigger { command } => {
commands::trigger::handle_trigger_command(
&cli.profile,
@@ -438,3 +438,38 @@ pub async fn mock_not_found(server: &MockServer, path_pattern: &str) {
.mount(server)
.await;
}
/// Mock a successful pack create response (POST /api/v1/packs)
#[allow(dead_code)]
pub async fn mock_pack_create(server: &MockServer) {
Mock::given(method("POST"))
.and(path("/api/v1/packs"))
.respond_with(ResponseTemplate::new(201).set_body_json(json!({
"data": {
"id": 42,
"ref": "my_pack",
"label": "My Pack",
"description": "A test pack",
"version": "0.1.0",
"author": null,
"enabled": true,
"tags": ["test"],
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z"
}
})))
.mount(server)
.await;
}
/// Mock a 409 conflict response for pack create
#[allow(dead_code)]
pub async fn mock_pack_create_conflict(server: &MockServer) {
Mock::given(method("POST"))
.and(path("/api/v1/packs"))
.respond_with(ResponseTemplate::new(409).set_body_json(json!({
"error": "Pack with ref 'my_pack' already exists"
})))
.mount(server)
.await;
}
@@ -4,6 +4,11 @@
use assert_cmd::Command;
use predicates::prelude::*;
use serde_json::json;
use wiremock::{
matchers::{body_json, method, path},
Mock, ResponseTemplate,
};
mod common;
use common::*;
@@ -222,6 +227,231 @@ async fn test_pack_get_json_output() {
.stdout(predicate::str::contains(r#""ref": "core""#));
}
// ── pack create tests ──────────────────────────────────────────────────
#[tokio::test]
async fn test_pack_create_non_interactive() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_pack_create(&fixture.mock_server).await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("pack")
.arg("create")
.arg("--ref")
.arg("my_pack")
.arg("--label")
.arg("My Pack")
.arg("--description")
.arg("A test pack")
.arg("--pack-version")
.arg("0.1.0")
.arg("--tags")
.arg("test");
cmd.assert()
.success()
.stdout(predicate::str::contains("my_pack"))
.stdout(predicate::str::contains("created successfully"));
}
#[tokio::test]
async fn test_pack_create_json_output() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_pack_create(&fixture.mock_server).await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("--json")
.arg("pack")
.arg("create")
.arg("--ref")
.arg("my_pack");
cmd.assert()
.success()
.stdout(predicate::str::contains(r#""ref": "my_pack""#))
.stdout(predicate::str::contains(r#""id": 42"#));
}
#[tokio::test]
async fn test_pack_create_conflict() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_pack_create_conflict(&fixture.mock_server).await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("pack")
.arg("create")
.arg("--ref")
.arg("my_pack");
cmd.assert()
.failure()
.stderr(predicate::str::contains("already exists"));
}
#[tokio::test]
async fn test_pack_create_missing_ref() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("pack")
.arg("create");
cmd.assert()
.failure()
.stderr(predicate::str::contains("Pack ref is required"));
}
#[tokio::test]
async fn test_pack_create_default_label_from_ref() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
// Use a custom mock that validates the request body contains the derived label
Mock::given(method("POST"))
.and(path("/api/v1/packs"))
.and(body_json(json!({
"ref": "my_cool_pack",
"label": "My Cool Pack",
"version": "0.1.0",
"tags": []
})))
.respond_with(ResponseTemplate::new(201).set_body_json(json!({
"data": {
"id": 99,
"ref": "my_cool_pack",
"label": "My Cool Pack",
"version": "0.1.0",
"enabled": true,
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z"
}
})))
.mount(&fixture.mock_server)
.await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("pack")
.arg("create")
.arg("--ref")
.arg("my_cool_pack");
cmd.assert()
.success()
.stdout(predicate::str::contains("my_cool_pack"))
.stdout(predicate::str::contains("created successfully"));
}
#[tokio::test]
async fn test_pack_create_default_version() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
// Verify the default version "0.1.0" is sent when --pack-version is not specified
Mock::given(method("POST"))
.and(path("/api/v1/packs"))
.and(body_json(json!({
"ref": "versioned_pack",
"label": "Versioned Pack",
"version": "0.1.0",
"tags": []
})))
.respond_with(ResponseTemplate::new(201).set_body_json(json!({
"data": {
"id": 7,
"ref": "versioned_pack",
"label": "Versioned Pack",
"version": "0.1.0",
"enabled": true,
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z"
}
})))
.mount(&fixture.mock_server)
.await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("pack")
.arg("create")
.arg("--ref")
.arg("versioned_pack");
cmd.assert().success();
}
#[tokio::test]
async fn test_pack_create_with_tags() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
Mock::given(method("POST"))
.and(path("/api/v1/packs"))
.and(body_json(json!({
"ref": "tagged",
"label": "Tagged",
"version": "0.1.0",
"tags": ["networking", "monitoring"]
})))
.respond_with(ResponseTemplate::new(201).set_body_json(json!({
"data": {
"id": 10,
"ref": "tagged",
"label": "Tagged",
"version": "0.1.0",
"tags": ["networking", "monitoring"],
"enabled": true,
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z"
}
})))
.mount(&fixture.mock_server)
.await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("pack")
.arg("create")
.arg("--ref")
.arg("tagged")
.arg("--tags")
.arg("networking,monitoring");
cmd.assert().success();
}
#[tokio::test]
async fn test_pack_list_empty_result() {
let fixture = TestFixture::new().await;

View File

@@ -0,0 +1,777 @@
//! Integration tests for CLI workflow commands
#![allow(deprecated)]
use assert_cmd::Command;
use predicates::prelude::*;
use serde_json::json;
use std::fs;
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};
mod common;
use common::*;
// ── Mock helpers ────────────────────────────────────────────────────────
async fn mock_workflow_list(server: &MockServer) {
Mock::given(method("GET"))
.and(path("/api/v1/workflows"))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"data": [
{
"id": 1,
"ref": "core.install_packs",
"pack_ref": "core",
"label": "Install Packs",
"description": "Install one or more packs",
"version": "1.0.0",
"tags": ["core", "packs"],
"enabled": true,
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z"
},
{
"id": 2,
"ref": "mypack.deploy",
"pack_ref": "mypack",
"label": "Deploy App",
"description": "Deploy an application",
"version": "2.0.0",
"tags": ["deploy"],
"enabled": true,
"created": "2024-01-02T00:00:00Z",
"updated": "2024-01-02T00:00:00Z"
}
]
})))
.mount(server)
.await;
}
async fn mock_workflow_list_by_pack(server: &MockServer, pack_ref: &str) {
let p = format!("/api/v1/packs/{}/workflows", pack_ref);
Mock::given(method("GET"))
.and(path(p.as_str()))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"data": [
{
"id": 1,
"ref": format!("{}.example_workflow", pack_ref),
"pack_ref": pack_ref,
"label": "Example Workflow",
"description": "An example workflow",
"version": "1.0.0",
"tags": [],
"enabled": true,
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z"
}
]
})))
.mount(server)
.await;
}
async fn mock_workflow_get(server: &MockServer, workflow_ref: &str) {
let p = format!("/api/v1/workflows/{}", workflow_ref);
Mock::given(method("GET"))
.and(path(p.as_str()))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"data": {
"id": 1,
"ref": workflow_ref,
"pack": 1,
"pack_ref": "mypack",
"label": "My Workflow",
"description": "A test workflow",
"version": "1.0.0",
"param_schema": {
"url": {"type": "string", "required": true},
"timeout": {"type": "integer", "default": 30}
},
"out_schema": {
"status": {"type": "string"}
},
"definition": {
"version": "1.0.0",
"vars": {"result": null},
"tasks": [
{
"name": "step1",
"action": "core.echo",
"input": {"message": "hello"},
"next": [
{"when": "{{ succeeded() }}", "do": ["step2"]}
]
},
{
"name": "step2",
"action": "core.echo",
"input": {"message": "done"}
}
]
},
"tags": ["test", "demo"],
"enabled": true,
"created": "2024-01-01T00:00:00Z",
"updated": "2024-01-01T00:00:00Z"
}
})))
.mount(server)
.await;
}
async fn mock_workflow_delete(server: &MockServer, workflow_ref: &str) {
let p = format!("/api/v1/workflows/{}", workflow_ref);
Mock::given(method("DELETE"))
.and(path(p.as_str()))
.respond_with(ResponseTemplate::new(204))
.mount(server)
.await;
}
async fn mock_workflow_save(server: &MockServer, pack_ref: &str) {
let p = format!("/api/v1/packs/{}/workflow-files", pack_ref);
Mock::given(method("POST"))
.and(path(p.as_str()))
.respond_with(ResponseTemplate::new(201).set_body_json(json!({
"data": {
"id": 10,
"ref": format!("{}.deploy", pack_ref),
"pack": 1,
"pack_ref": pack_ref,
"label": "Deploy App",
"description": "Deploy the application",
"version": "1.0.0",
"param_schema": null,
"out_schema": null,
"definition": {"version": "1.0.0", "tasks": []},
"tags": ["deploy"],
"enabled": true,
"created": "2024-01-10T00:00:00Z",
"updated": "2024-01-10T00:00:00Z"
}
})))
.mount(server)
.await;
}
async fn mock_workflow_save_conflict(server: &MockServer, pack_ref: &str) {
let p = format!("/api/v1/packs/{}/workflow-files", pack_ref);
Mock::given(method("POST"))
.and(path(p.as_str()))
.respond_with(ResponseTemplate::new(409).set_body_json(json!({
"error": "Workflow with ref 'mypack.deploy' already exists"
})))
.mount(server)
.await;
}
async fn mock_workflow_update(server: &MockServer, workflow_ref: &str) {
let p = format!("/api/v1/workflows/{}/file", workflow_ref);
Mock::given(method("PUT"))
.and(path(p.as_str()))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"data": {
"id": 10,
"ref": workflow_ref,
"pack": 1,
"pack_ref": "mypack",
"label": "Deploy App",
"description": "Deploy the application",
"version": "1.0.0",
"param_schema": null,
"out_schema": null,
"definition": {"version": "1.0.0", "tasks": []},
"tags": ["deploy"],
"enabled": true,
"created": "2024-01-10T00:00:00Z",
"updated": "2024-01-10T12:00:00Z"
}
})))
.mount(server)
.await;
}
// ── Helper to write action + workflow YAML to temp dirs ─────────────────
struct WorkflowFixture {
_dir: tempfile::TempDir,
action_yaml_path: String,
}
impl WorkflowFixture {
fn new(action_ref: &str, workflow_file: &str) -> Self {
let dir = tempfile::TempDir::new().expect("Failed to create temp dir");
let actions_dir = dir.path().join("actions");
let workflows_dir = actions_dir.join("workflows");
fs::create_dir_all(&workflows_dir).unwrap();
// Write the action YAML
let action_yaml = format!(
r#"ref: {}
label: "Deploy App"
description: "Deploy the application"
enabled: true
workflow_file: {}
parameters:
environment:
type: string
required: true
description: "Target environment"
version:
type: string
default: "latest"
output:
status:
type: string
tags:
- deploy
"#,
action_ref, workflow_file,
);
let action_name = action_ref.rsplit('.').next().unwrap();
let action_path = actions_dir.join(format!("{}.yaml", action_name));
fs::write(&action_path, &action_yaml).unwrap();
// Write the workflow YAML
let workflow_yaml = r#"version: "1.0.0"
vars:
deploy_result: null
tasks:
- name: prepare
action: core.echo
input:
message: "Preparing deployment"
next:
- when: "{{ succeeded() }}"
do:
- deploy
- name: deploy
action: core.echo
input:
message: "Deploying"
next:
- when: "{{ succeeded() }}"
do:
- verify
- name: verify
action: core.echo
input:
message: "Verifying"
output_map:
status: "{{ 'success' if workflow.deploy_result else 'unknown' }}"
"#;
let workflow_path = workflows_dir.join(format!("{}.workflow.yaml", action_name));
fs::write(&workflow_path, workflow_yaml).unwrap();
Self {
action_yaml_path: action_path.to_string_lossy().to_string(),
_dir: dir,
}
}
}
// ── List tests ──────────────────────────────────────────────────────────
#[tokio::test]
async fn test_workflow_list_authenticated() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_list(&fixture.mock_server).await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("list");
cmd.assert()
.success()
.stdout(predicate::str::contains("core.install_packs"))
.stdout(predicate::str::contains("mypack.deploy"))
.stdout(predicate::str::contains("2 workflow(s) found"));
}
#[tokio::test]
async fn test_workflow_list_by_pack() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_list_by_pack(&fixture.mock_server, "core").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("list")
.arg("--pack")
.arg("core");
cmd.assert()
.success()
.stdout(predicate::str::contains("core.example_workflow"))
.stdout(predicate::str::contains("1 workflow(s) found"));
}
#[tokio::test]
async fn test_workflow_list_json_output() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_list(&fixture.mock_server).await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("--json")
.arg("workflow")
.arg("list");
cmd.assert()
.success()
.stdout(predicate::str::contains("\"core.install_packs\""))
.stdout(predicate::str::contains("\"mypack.deploy\""));
}
#[tokio::test]
async fn test_workflow_list_yaml_output() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_list(&fixture.mock_server).await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("--yaml")
.arg("workflow")
.arg("list");
cmd.assert()
.success()
.stdout(predicate::str::contains("core.install_packs"))
.stdout(predicate::str::contains("mypack.deploy"));
}
#[tokio::test]
async fn test_workflow_list_empty() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
Mock::given(method("GET"))
.and(path("/api/v1/workflows"))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
"data": []
})))
.mount(&fixture.mock_server)
.await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("list");
cmd.assert()
.success()
.stdout(predicate::str::contains("No workflows found"));
}
#[tokio::test]
async fn test_workflow_list_unauthenticated() {
let fixture = TestFixture::new().await;
fixture.write_default_config();
mock_unauthorized(&fixture.mock_server, "/api/v1/workflows").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("list");
cmd.assert().failure();
}
// ── Show tests ──────────────────────────────────────────────────────────
#[tokio::test]
async fn test_workflow_show() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_get(&fixture.mock_server, "mypack.my_workflow").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("show")
.arg("mypack.my_workflow");
cmd.assert()
.success()
.stdout(predicate::str::contains("mypack.my_workflow"))
.stdout(predicate::str::contains("My Workflow"))
.stdout(predicate::str::contains("1.0.0"))
.stdout(predicate::str::contains("test, demo"))
// Tasks table should show task names
.stdout(predicate::str::contains("step1"))
.stdout(predicate::str::contains("step2"))
.stdout(predicate::str::contains("core.echo"));
}
#[tokio::test]
async fn test_workflow_show_json_output() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_get(&fixture.mock_server, "mypack.my_workflow").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("--json")
.arg("workflow")
.arg("show")
.arg("mypack.my_workflow");
cmd.assert()
.success()
.stdout(predicate::str::contains("\"mypack.my_workflow\""))
.stdout(predicate::str::contains("\"My Workflow\""))
.stdout(predicate::str::contains("\"definition\""));
}
#[tokio::test]
async fn test_workflow_show_not_found() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_not_found(&fixture.mock_server, "/api/v1/workflows/nonexistent.wf").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("show")
.arg("nonexistent.wf");
cmd.assert().failure();
}
// ── Delete tests ────────────────────────────────────────────────────────
#[tokio::test]
async fn test_workflow_delete_with_yes_flag() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_delete(&fixture.mock_server, "mypack.my_workflow").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("delete")
.arg("mypack.my_workflow")
.arg("--yes");
cmd.assert()
.success()
.stdout(predicate::str::contains("deleted successfully"));
}
#[tokio::test]
async fn test_workflow_delete_json_output() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
mock_workflow_delete(&fixture.mock_server, "mypack.my_workflow").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("--json")
.arg("workflow")
.arg("delete")
.arg("mypack.my_workflow")
.arg("--yes");
cmd.assert()
.success()
.stdout(predicate::str::contains("\"message\""))
.stdout(predicate::str::contains("deleted"));
}
// ── Upload tests ────────────────────────────────────────────────────────
#[tokio::test]
async fn test_workflow_upload_success() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
let wf_fixture =
WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");
mock_workflow_save(&fixture.mock_server, "mypack").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("upload")
.arg(&wf_fixture.action_yaml_path);
cmd.assert()
.success()
.stdout(predicate::str::contains("uploaded successfully"))
.stdout(predicate::str::contains("mypack.deploy"));
}
#[tokio::test]
async fn test_workflow_upload_json_output() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
let wf_fixture =
WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");
mock_workflow_save(&fixture.mock_server, "mypack").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("--json")
.arg("workflow")
.arg("upload")
.arg(&wf_fixture.action_yaml_path);
cmd.assert()
.success()
.stdout(predicate::str::contains("\"mypack.deploy\""))
.stdout(predicate::str::contains("\"Deploy App\""));
}
#[tokio::test]
async fn test_workflow_upload_conflict_without_force() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
let wf_fixture =
WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");
mock_workflow_save_conflict(&fixture.mock_server, "mypack").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("upload")
.arg(&wf_fixture.action_yaml_path);
cmd.assert()
.failure()
.stderr(predicate::str::contains("already exists"))
.stderr(predicate::str::contains("--force"));
}
#[tokio::test]
async fn test_workflow_upload_conflict_with_force() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
let wf_fixture =
WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");
mock_workflow_save_conflict(&fixture.mock_server, "mypack").await;
mock_workflow_update(&fixture.mock_server, "mypack.deploy").await;
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("upload")
.arg(&wf_fixture.action_yaml_path)
.arg("--force");
cmd.assert()
.success()
.stdout(predicate::str::contains("uploaded successfully"));
}
#[tokio::test]
async fn test_workflow_upload_missing_action_file() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("upload")
.arg("/nonexistent/path/action.yaml");
cmd.assert()
.failure()
.stderr(predicate::str::contains("not found"));
}
#[tokio::test]
async fn test_workflow_upload_missing_workflow_file() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
// Create a temp dir with only the action YAML, no workflow file
let dir = tempfile::TempDir::new().unwrap();
let actions_dir = dir.path().join("actions");
fs::create_dir_all(&actions_dir).unwrap();
let action_yaml = r#"ref: mypack.deploy
label: "Deploy App"
workflow_file: workflows/deploy.workflow.yaml
"#;
let action_path = actions_dir.join("deploy.yaml");
fs::write(&action_path, action_yaml).unwrap();
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("upload")
.arg(action_path.to_string_lossy().as_ref());
cmd.assert()
.failure()
.stderr(predicate::str::contains("Workflow file not found"));
}
#[tokio::test]
async fn test_workflow_upload_action_without_workflow_file_field() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
// Create a temp dir with a regular (non-workflow) action YAML
let dir = tempfile::TempDir::new().unwrap();
let actions_dir = dir.path().join("actions");
fs::create_dir_all(&actions_dir).unwrap();
let action_yaml = r#"ref: mypack.echo
label: "Echo"
description: "A regular action, not a workflow"
runner_type: shell
entry_point: echo.sh
"#;
let action_path = actions_dir.join("echo.yaml");
fs::write(&action_path, action_yaml).unwrap();
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("upload")
.arg(action_path.to_string_lossy().as_ref());
cmd.assert()
.failure()
.stderr(predicate::str::contains("workflow_file"));
}
#[tokio::test]
async fn test_workflow_upload_invalid_action_yaml() {
let fixture = TestFixture::new().await;
fixture.write_authenticated_config("valid_token", "refresh_token");
let dir = tempfile::TempDir::new().unwrap();
let bad_yaml_path = dir.path().join("bad.yaml");
fs::write(&bad_yaml_path, "this is not valid yaml: [[[").unwrap();
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
.env("HOME", fixture.config_dir_path())
.arg("--api-url")
.arg(fixture.server_url())
.arg("workflow")
.arg("upload")
.arg(bad_yaml_path.to_string_lossy().as_ref());
cmd.assert()
.failure()
.stderr(predicate::str::contains("Failed to parse action YAML"));
}
// ── Help text tests ─────────────────────────────────────────────────────
#[tokio::test]
async fn test_workflow_help() {
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.arg("workflow").arg("--help");
cmd.assert()
.success()
.stdout(predicate::str::contains("upload"))
.stdout(predicate::str::contains("list"))
.stdout(predicate::str::contains("show"))
.stdout(predicate::str::contains("delete"));
}
#[tokio::test]
async fn test_workflow_upload_help() {
let mut cmd = Command::cargo_bin("attune").unwrap();
cmd.arg("workflow").arg("upload").arg("--help");
cmd.assert()
.success()
.stdout(predicate::str::contains("action"))
.stdout(predicate::str::contains("workflow_file"))
.stdout(predicate::str::contains("--force"));
}

View File

@@ -1052,6 +1052,14 @@ pub mod execution {
/// Task name within the workflow
pub task_name: String,
/// Name of the predecessor task whose completion triggered this task's
/// dispatch. `None` for entry-point tasks (dispatched at workflow
/// start). Used by the timeline UI to draw only the transitions that
/// actually fired rather than every possible transition from the
/// workflow definition.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub triggered_by: Option<String>,
/// Index for with-items iteration (0-based)
pub task_index: Option<i32>,

View File

@@ -525,6 +525,28 @@ impl Connection {
)
.await?;
// --- Cancel queue ---
// Each worker gets its own queue for execution cancel requests so that
// the API can target a specific worker to gracefully stop a running process.
let cancel_queue_name = format!("worker.{}.cancel", worker_id);
let cancel_queue_config = QueueConfig {
name: cancel_queue_name.clone(),
durable: true,
exclusive: false,
auto_delete: false,
};
self.declare_queue_with_optional_dlx(&cancel_queue_config, dlx)
.await?;
// Bind to worker-specific cancel routing key on the executions exchange
self.bind_queue(
&cancel_queue_name,
&config.rabbitmq.exchanges.executions.name,
&format!("execution.cancel.worker.{}", worker_id),
)
.await?;
info!(
"Worker infrastructure setup complete for worker ID {}",
worker_id

View File

@@ -67,6 +67,8 @@ pub enum MessageType {
RuleDisabled,
/// Pack registered or installed (triggers runtime environment setup in workers)
PackRegistered,
/// Execution cancel requested (sent to worker to gracefully stop a running execution)
ExecutionCancelRequested,
}
impl MessageType {
@@ -85,6 +87,7 @@ impl MessageType {
Self::RuleEnabled => "rule.enabled".to_string(),
Self::RuleDisabled => "rule.disabled".to_string(),
Self::PackRegistered => "pack.registered".to_string(),
Self::ExecutionCancelRequested => "execution.cancel".to_string(),
}
}
@@ -102,6 +105,7 @@ impl MessageType {
"attune.events".to_string()
}
Self::PackRegistered => "attune.events".to_string(),
Self::ExecutionCancelRequested => "attune.executions".to_string(),
}
}
@@ -120,6 +124,7 @@ impl MessageType {
Self::RuleEnabled => "RuleEnabled",
Self::RuleDisabled => "RuleDisabled",
Self::PackRegistered => "PackRegistered",
Self::ExecutionCancelRequested => "ExecutionCancelRequested",
}
}
}
@@ -474,6 +479,19 @@ pub struct PackRegisteredPayload {
pub runtime_names: Vec<String>,
}
/// Payload for ExecutionCancelRequested message
///
/// Sent by the API to the worker that is running a specific execution,
/// instructing it to gracefully terminate the process (SIGINT, then SIGTERM
/// after a grace period).
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExecutionCancelRequestedPayload {
/// Execution ID to cancel
pub execution_id: Id,
/// Worker ID that should handle this cancel (used for routing)
pub worker_id: Id,
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -57,10 +57,11 @@ pub use consumer::{Consumer, ConsumerConfig};
pub use error::{MqError, MqResult};
pub use message_queue::MessageQueue;
pub use messages::{
-    EnforcementCreatedPayload, EventCreatedPayload, ExecutionCompletedPayload,
-    ExecutionRequestedPayload, ExecutionStatusChangedPayload, InquiryCreatedPayload,
-    InquiryRespondedPayload, Message, MessageEnvelope, MessageType, NotificationCreatedPayload,
-    PackRegisteredPayload, RuleCreatedPayload, RuleDisabledPayload, RuleEnabledPayload,
+    EnforcementCreatedPayload, EventCreatedPayload, ExecutionCancelRequestedPayload,
+    ExecutionCompletedPayload, ExecutionRequestedPayload, ExecutionStatusChangedPayload,
+    InquiryCreatedPayload, InquiryRespondedPayload, Message, MessageEnvelope, MessageType,
+    NotificationCreatedPayload, PackRegisteredPayload, RuleCreatedPayload, RuleDisabledPayload,
+    RuleEnabledPayload,
};
pub use publisher::{Publisher, PublisherConfig};
@@ -224,6 +225,8 @@ pub mod routing_keys {
pub const NOTIFICATION_CREATED: &str = "notification.created";
/// Pack registered routing key
pub const PACK_REGISTERED: &str = "pack.registered";
/// Execution cancel requested routing key
pub const EXECUTION_CANCEL: &str = "execution.cancel";
}
#[cfg(test)]

File diff suppressed because it is too large

View File

@@ -8,6 +8,11 @@ use sqlx::{Executor, Postgres, QueryBuilder};
use super::{Create, Delete, FindById, FindByRef, List, Repository, Update};
/// Columns selected in all Action queries. Must match the `Action` model's `FromRow` fields.
pub const ACTION_COLUMNS: &str = "id, ref, pack, pack_ref, label, description, entrypoint, \
runtime, runtime_version_constraint, param_schema, out_schema, workflow_def, is_adhoc, \
parameter_delivery, parameter_format, output_format, created, updated";
/// Filters for [`ActionRepository::list_search`].
///
/// All fields are optional and combinable (AND). Pagination is always applied.
@@ -65,6 +70,9 @@ pub struct UpdateActionInput {
pub runtime_version_constraint: Option<Option<String>>,
pub param_schema: Option<JsonSchema>,
pub out_schema: Option<JsonSchema>,
pub parameter_delivery: Option<String>,
pub parameter_format: Option<String>,
pub output_format: Option<String>,
}
#[async_trait::async_trait]
@@ -73,15 +81,10 @@ impl FindById for ActionRepository {
where
E: Executor<'e, Database = Postgres> + 'e,
{
- let action = sqlx::query_as::<_, Action>(
-     r#"
-     SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-            runtime, runtime_version_constraint,
-            param_schema, out_schema, workflow_def, is_adhoc, created, updated
-     FROM action
-     WHERE id = $1
-     "#,
- )
+ let action = sqlx::query_as::<_, Action>(&format!(
+     "SELECT {} FROM action WHERE id = $1",
+     ACTION_COLUMNS
+ ))
.bind(id)
.fetch_optional(executor)
.await?;
@@ -96,15 +99,10 @@ impl FindByRef for ActionRepository {
where
E: Executor<'e, Database = Postgres> + 'e,
{
- let action = sqlx::query_as::<_, Action>(
-     r#"
-     SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-            runtime, runtime_version_constraint,
-            param_schema, out_schema, workflow_def, is_adhoc, created, updated
-     FROM action
-     WHERE ref = $1
-     "#,
- )
+ let action = sqlx::query_as::<_, Action>(&format!(
+     "SELECT {} FROM action WHERE ref = $1",
+     ACTION_COLUMNS
+ ))
.bind(ref_str)
.fetch_optional(executor)
.await?;
@@ -119,15 +117,10 @@ impl List for ActionRepository {
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
-        let actions = sqlx::query_as::<_, Action>(
-            r#"
-            SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-                   runtime, runtime_version_constraint,
-                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
-            FROM action
-            ORDER BY ref ASC
-            "#,
-        )
+        let actions = sqlx::query_as::<_, Action>(&format!(
+            "SELECT {} FROM action ORDER BY ref ASC",
+            ACTION_COLUMNS
+        ))
        .fetch_all(executor)
        .await?;
@@ -155,16 +148,15 @@ impl Create for ActionRepository {
        }

        // Try to insert - database will enforce uniqueness constraint
-        let action = sqlx::query_as::<_, Action>(
+        let action = sqlx::query_as::<_, Action>(&format!(
            r#"
            INSERT INTO action (ref, pack, pack_ref, label, description, entrypoint,
                runtime, runtime_version_constraint, param_schema, out_schema, is_adhoc)
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
-            RETURNING id, ref, pack, pack_ref, label, description, entrypoint,
-                      runtime, runtime_version_constraint,
-                      param_schema, out_schema, workflow_def, is_adhoc, created, updated
+            RETURNING {}
            "#,
-        )
+            ACTION_COLUMNS
+        ))
        .bind(&input.r#ref)
        .bind(input.pack)
        .bind(&input.pack_ref)
@@ -267,6 +259,33 @@ impl Update for ActionRepository {
            has_updates = true;
        }
if let Some(parameter_delivery) = &input.parameter_delivery {
if has_updates {
query.push(", ");
}
query.push("parameter_delivery = ");
query.push_bind(parameter_delivery);
has_updates = true;
}
if let Some(parameter_format) = &input.parameter_format {
if has_updates {
query.push(", ");
}
query.push("parameter_format = ");
query.push_bind(parameter_format);
has_updates = true;
}
if let Some(output_format) = &input.output_format {
if has_updates {
query.push(", ");
}
query.push("output_format = ");
query.push_bind(output_format);
has_updates = true;
}
        if !has_updates {
            // No updates requested, fetch and return existing action
            return Self::find_by_id(executor, id)
@@ -276,7 +295,7 @@ impl Update for ActionRepository {
        query.push(", updated = NOW() WHERE id = ");
        query.push_bind(id);
-        query.push(" RETURNING id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, param_schema, out_schema, workflow_def, is_adhoc, created, updated");
+        query.push(&format!(" RETURNING {}", ACTION_COLUMNS));

        let action = query
            .build_query_as::<Action>()
@@ -317,10 +336,8 @@ impl ActionRepository {
    where
        E: Executor<'e, Database = Postgres> + Copy + 'e,
    {
-        let select_cols = "id, ref, pack, pack_ref, label, description, entrypoint, runtime, runtime_version_constraint, param_schema, out_schema, workflow_def, is_adhoc, created, updated";
-        let mut qb: QueryBuilder<'_, Postgres> =
-            QueryBuilder::new(format!("SELECT {select_cols} FROM action"));
+        let mut qb: QueryBuilder<'_, Postgres> =
+            QueryBuilder::new(format!("SELECT {} FROM action", ACTION_COLUMNS));
        let mut count_qb: QueryBuilder<'_, Postgres> =
            QueryBuilder::new("SELECT COUNT(*) FROM action");
@@ -398,16 +415,10 @@ impl ActionRepository {
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
-        let actions = sqlx::query_as::<_, Action>(
-            r#"
-            SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-                   runtime, runtime_version_constraint,
-                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
-            FROM action
-            WHERE pack = $1
-            ORDER BY ref ASC
-            "#,
-        )
+        let actions = sqlx::query_as::<_, Action>(&format!(
+            "SELECT {} FROM action WHERE pack = $1 ORDER BY ref ASC",
+            ACTION_COLUMNS
+        ))
        .bind(pack_id)
        .fetch_all(executor)
        .await?;
@@ -420,16 +431,10 @@ impl ActionRepository {
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
-        let actions = sqlx::query_as::<_, Action>(
-            r#"
-            SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-                   runtime, runtime_version_constraint,
-                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
-            FROM action
-            WHERE runtime = $1
-            ORDER BY ref ASC
-            "#,
-        )
+        let actions = sqlx::query_as::<_, Action>(&format!(
+            "SELECT {} FROM action WHERE runtime = $1 ORDER BY ref ASC",
+            ACTION_COLUMNS
+        ))
        .bind(runtime_id)
        .fetch_all(executor)
        .await?;
@@ -443,16 +448,10 @@ impl ActionRepository {
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let search_pattern = format!("%{}%", query.to_lowercase());
-        let actions = sqlx::query_as::<_, Action>(
-            r#"
-            SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-                   runtime, runtime_version_constraint,
-                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
-            FROM action
-            WHERE LOWER(ref) LIKE $1 OR LOWER(label) LIKE $1 OR LOWER(description) LIKE $1
-            ORDER BY ref ASC
-            "#,
-        )
+        let actions = sqlx::query_as::<_, Action>(&format!(
+            "SELECT {} FROM action WHERE LOWER(ref) LIKE $1 OR LOWER(label) LIKE $1 OR LOWER(description) LIKE $1 ORDER BY ref ASC",
+            ACTION_COLUMNS
+        ))
        .bind(&search_pattern)
        .fetch_all(executor)
        .await?;
@@ -465,16 +464,10 @@ impl ActionRepository {
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
-        let actions = sqlx::query_as::<_, Action>(
-            r#"
-            SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-                   runtime, runtime_version_constraint,
-                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
-            FROM action
-            WHERE workflow_def IS NOT NULL
-            ORDER BY ref ASC
-            "#,
-        )
+        let actions = sqlx::query_as::<_, Action>(&format!(
+            "SELECT {} FROM action WHERE workflow_def IS NOT NULL ORDER BY ref ASC",
+            ACTION_COLUMNS
+        ))
        .fetch_all(executor)
        .await?;
@@ -489,15 +482,10 @@ impl ActionRepository {
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
-        let action = sqlx::query_as::<_, Action>(
-            r#"
-            SELECT id, ref, pack, pack_ref, label, description, entrypoint,
-                   runtime, runtime_version_constraint,
-                   param_schema, out_schema, workflow_def, is_adhoc, created, updated
-            FROM action
-            WHERE workflow_def = $1
-            "#,
-        )
+        let action = sqlx::query_as::<_, Action>(&format!(
+            "SELECT {} FROM action WHERE workflow_def = $1",
+            ACTION_COLUMNS
+        ))
        .bind(workflow_def_id)
        .fetch_optional(executor)
        .await?;
@@ -505,6 +493,36 @@ impl ActionRepository {
        Ok(action)
    }
/// Delete non-adhoc actions belonging to a pack whose refs are NOT in the given set.
///
/// Used during pack reinstallation to clean up actions that were removed
/// from the pack's YAML files. Ad-hoc (user-created) actions are preserved.
pub async fn delete_non_adhoc_by_pack_excluding<'e, E>(
executor: E,
pack_id: Id,
keep_refs: &[String],
) -> Result<u64>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = if keep_refs.is_empty() {
sqlx::query("DELETE FROM action WHERE pack = $1 AND is_adhoc = false")
.bind(pack_id)
.execute(executor)
.await?
} else {
sqlx::query(
"DELETE FROM action WHERE pack = $1 AND is_adhoc = false AND ref != ALL($2)",
)
.bind(pack_id)
.bind(keep_refs)
.execute(executor)
.await?
};
Ok(result.rows_affected())
}
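The exclusion rule that `ref != ALL($2)` enforces in SQL can be sketched with an in-memory stand-in (a hypothetical `Row` type, not the repository code): delete non-adhoc rows for the pack whose refs are absent from the keep set, and let an empty keep set delete every non-adhoc row, mirroring the branch that omits the clause.

```rust
use std::collections::HashSet;

// In-memory stand-in for `DELETE FROM action WHERE pack = $1
// AND is_adhoc = false AND ref != ALL($2)`.
#[derive(Debug, PartialEq)]
struct Row {
    r#ref: String,
    is_adhoc: bool,
}

fn delete_non_adhoc_excluding(rows: &mut Vec<Row>, keep_refs: &[String]) -> usize {
    let keep: HashSet<&str> = keep_refs.iter().map(String::as_str).collect();
    let before = rows.len();
    // Keep ad-hoc rows unconditionally; keep others only when their ref is kept.
    rows.retain(|r| r.is_adhoc || keep.contains(r.r#ref.as_str()));
    before - rows.len()
}

fn main() {
    let mut rows = vec![
        Row { r#ref: "pack.a".into(), is_adhoc: false },
        Row { r#ref: "pack.b".into(), is_adhoc: false },
        Row { r#ref: "pack.custom".into(), is_adhoc: true },
    ];
    let deleted = delete_non_adhoc_excluding(&mut rows, &["pack.a".to_string()]);
    println!("deleted {}", deleted);   // deleted 1 ("pack.b")
    println!("remaining {}", rows.len()); // remaining 2
}
```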
    /// Link an action to a workflow definition
    pub async fn link_workflow_def<'e, E>(
        executor: E,
@@ -514,16 +532,15 @@ impl ActionRepository {
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
-        let action = sqlx::query_as::<_, Action>(
+        let action = sqlx::query_as::<_, Action>(&format!(
            r#"
            UPDATE action
            SET workflow_def = $2, updated = NOW()
            WHERE id = $1
-            RETURNING id, ref, pack, pack_ref, label, description, entrypoint,
-                      runtime, runtime_version_constraint,
-                      param_schema, out_schema, workflow_def, is_adhoc, created, updated
+            RETURNING {}
            "#,
-        )
+            ACTION_COLUMNS
+        ))
        .bind(action_id)
        .bind(workflow_def_id)
        .fetch_one(executor)

View File

@@ -52,6 +52,7 @@ pub struct UpdateArtifactInput {
    pub description: Option<String>,
    pub content_type: Option<String>,
    pub size_bytes: Option<i64>,
pub execution: Option<Option<i64>>,
    pub data: Option<serde_json::Value>,
}
@@ -189,6 +190,15 @@ impl Update for ArtifactRepository {
        push_field!(&input.description, "description");
        push_field!(&input.content_type, "content_type");
        push_field!(input.size_bytes, "size_bytes");
// execution is Option<Option<i64>> — outer Option = "was field provided?",
// inner Option = nullable column value
if let Some(exec_val) = input.execution {
if has_updates {
query.push(", ");
}
query.push("execution = ").push_bind(exec_val);
has_updates = true;
}
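The two-level `Option` that `execution: Option<Option<i64>>` relies on can be sketched in isolation (a minimal stand-in, not the repository macro): the outer `Option` records whether the caller touched the field, the inner one is the nullable column value.

```rust
// Double-Option update semantics:
//   patch == None             -> field not provided, keep current value
//   patch == Some(Some(v))    -> set the column to v
//   patch == Some(None)       -> clear the column (SQL NULL)
fn apply_update(current: Option<i64>, patch: Option<Option<i64>>) -> Option<i64> {
    match patch {
        None => current,
        Some(value) => value,
    }
}

fn main() {
    assert_eq!(apply_update(Some(1), None), Some(1));          // untouched
    assert_eq!(apply_update(Some(1), Some(Some(7))), Some(7)); // updated
    assert_eq!(apply_update(Some(1), Some(None)), None);       // cleared to NULL
    println!("ok");
}
```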
        push_field!(&input.data, "data");

        if !has_updates {

View File

@@ -284,6 +284,34 @@ impl RuntimeRepository {
        Ok(runtime)
    }
/// Delete runtimes belonging to a pack whose refs are NOT in the given set.
///
/// Used during pack reinstallation to clean up runtimes that were removed
/// from the pack's YAML files. Associated runtime_version rows are
/// cascade-deleted by the FK constraint.
pub async fn delete_by_pack_excluding<'e, E>(
executor: E,
pack_id: Id,
keep_refs: &[String],
) -> Result<u64>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = if keep_refs.is_empty() {
sqlx::query("DELETE FROM runtime WHERE pack = $1")
.bind(pack_id)
.execute(executor)
.await?
} else {
sqlx::query("DELETE FROM runtime WHERE pack = $1 AND ref != ALL($2)")
.bind(pack_id)
.bind(keep_refs)
.execute(executor)
.await?
};
Ok(result.rows_affected())
}
}

// ============================================================================

View File

@@ -301,6 +301,36 @@ impl Delete for TriggerRepository {
}

impl TriggerRepository {
/// Delete non-adhoc triggers belonging to a pack whose refs are NOT in the given set.
///
/// Used during pack reinstallation to clean up triggers that were removed
/// from the pack's YAML files. Ad-hoc (user-created) triggers are preserved.
pub async fn delete_non_adhoc_by_pack_excluding<'e, E>(
executor: E,
pack_id: Id,
keep_refs: &[String],
) -> Result<u64>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = if keep_refs.is_empty() {
sqlx::query("DELETE FROM trigger WHERE pack = $1 AND is_adhoc = false")
.bind(pack_id)
.execute(executor)
.await?
} else {
sqlx::query(
"DELETE FROM trigger WHERE pack = $1 AND is_adhoc = false AND ref != ALL($2)",
)
.bind(pack_id)
.bind(keep_refs)
.execute(executor)
.await?
};
Ok(result.rows_affected())
}
    /// Search triggers with all filters pushed into SQL.
    ///
    /// All filter fields are combinable (AND). Pagination is server-side.
@@ -907,6 +937,34 @@ impl Delete for SensorRepository {
}

impl SensorRepository {
    /// Delete sensors belonging to a pack whose refs are NOT in the given set.
///
/// Used during pack reinstallation to clean up sensors that were removed
/// from the pack's YAML files.
pub async fn delete_by_pack_excluding<'e, E>(
executor: E,
pack_id: Id,
keep_refs: &[String],
) -> Result<u64>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = if keep_refs.is_empty() {
sqlx::query("DELETE FROM sensor WHERE pack = $1")
.bind(pack_id)
.execute(executor)
.await?
} else {
sqlx::query("DELETE FROM sensor WHERE pack = $1 AND ref != ALL($2)")
.bind(pack_id)
.bind(keep_refs)
.execute(executor)
.await?
};
Ok(result.rows_affected())
}
    /// Search sensors with all filters pushed into SQL.
    ///
    /// All filter fields are combinable (AND). Pagination is server-side.

View File

@@ -115,12 +115,17 @@ pub fn validate_workflow_expressions(
                match directive {
                    PublishDirective::Simple(map) => {
                        for (pk, pv) in map {
-                            validate_template(
-                                pv,
-                                &format!("{task_loc} next[{ti}].publish.{pk}"),
-                                &known_names,
-                                &mut warnings,
-                            );
+                            // Only validate string values as templates;
+                            // non-string literals (booleans, numbers, etc.)
+                            // pass through unchanged and have no expressions.
+                            if let Some(s) = pv.as_str() {
+                                validate_template(
+                                    s,
+                                    &format!("{task_loc} next[{ti}].publish.{pk}"),
+                                    &known_names,
+                                    &mut warnings,
+                                );
+                            }
                        }
                    }
                    PublishDirective::Key(_) => { /* nothing to validate */ }
@@ -132,12 +137,16 @@ pub fn validate_workflow_expressions(
        for directive in &task.publish {
            if let PublishDirective::Simple(map) = directive {
                for (pk, pv) in map {
-                    validate_template(
-                        pv,
-                        &format!("{task_loc} publish.{pk}"),
-                        &known_names,
-                        &mut warnings,
-                    );
+                    // Only validate string values as templates;
+                    // non-string literals pass through unchanged.
+                    if let Some(s) = pv.as_str() {
+                        validate_template(
+                            s,
+                            &format!("{task_loc} publish.{pk}"),
+                            &known_names,
+                            &mut warnings,
+                        );
+                    }
                }
            }
        }
@@ -567,7 +576,7 @@ mod tests {
    fn test_transition_publish_validated() {
        let mut task = action_task("step1");
        let mut publish_map = HashMap::new();
-        publish_map.insert("out".to_string(), "{{ unknown_thing }}".to_string());
+        publish_map.insert("out".to_string(), serde_json::Value::String("{{ unknown_thing }}".to_string()));
        task.next = vec![super::super::parser::TaskTransition {
            when: Some("{{ succeeded() }}".to_string()),
            publish: vec![PublishDirective::Simple(publish_map)],

View File

@@ -109,32 +109,49 @@ impl WorkflowLoader {
    }

    /// Load all workflows from a specific pack
///
/// Scans two directories in order:
/// 1. `{pack_dir}/workflows/` — legacy/standalone workflow files
/// 2. `{pack_dir}/actions/workflows/` — visual-builder and action-linked workflow files
///
/// If the same workflow ref appears in both directories, the version from
/// `actions/workflows/` wins (it is scanned second and overwrites the map entry).
    pub async fn load_pack_workflows(
        &self,
        pack_name: &str,
        pack_dir: &Path,
    ) -> Result<HashMap<String, LoadedWorkflow>> {
-        let workflows_dir = pack_dir.join("workflows");
-        if !workflows_dir.exists() {
-            debug!("No workflows directory in pack '{}'", pack_name);
-            return Ok(HashMap::new());
-        }
-
-        let workflow_files = self.scan_workflow_files(&workflows_dir, pack_name).await?;
        let mut workflows = HashMap::new();
-        for file in workflow_files {
-            match self.load_workflow_file(&file).await {
-                Ok(loaded) => {
-                    workflows.insert(loaded.file.ref_name.clone(), loaded);
-                }
-                Err(e) => {
-                    warn!("Failed to load workflow '{}': {}", file.path.display(), e);
-                }
-            }
-        }
+        // Scan both workflow directories
+        let scan_dirs: Vec<std::path::PathBuf> = vec![
+            pack_dir.join("workflows"),
+            pack_dir.join("actions").join("workflows"),
+        ];
+
+        for workflows_dir in &scan_dirs {
+            if !workflows_dir.exists() {
+                continue;
+            }
+            let workflow_files = self.scan_workflow_files(workflows_dir, pack_name).await?;
+            for file in workflow_files {
+                match self.load_workflow_file(&file).await {
+                    Ok(loaded) => {
+                        workflows.insert(loaded.file.ref_name.clone(), loaded);
+                    }
+                    Err(e) => {
+                        warn!("Failed to load workflow '{}': {}", file.path.display(), e);
+                    }
+                }
+            }
+        }
+
+        if workflows.is_empty() {
+            debug!("No workflows found in pack '{}'", pack_name);
+        }

        Ok(workflows)
} }
@@ -185,6 +202,10 @@ impl WorkflowLoader {
    }

    /// Reload a specific workflow by reference
///
/// Searches for the workflow file in both `workflows/` and
/// `actions/workflows/` directories, trying `.yaml`, `.yml`, and
/// `.workflow.yaml` extensions.
    pub async fn reload_workflow(&self, ref_name: &str) -> Result<LoadedWorkflow> {
        let parts: Vec<&str> = ref_name.split('.').collect();
        if parts.len() != 2 {
@@ -198,36 +219,35 @@ impl WorkflowLoader {
        let workflow_name = parts[1];
        let pack_dir = self.config.packs_base_dir.join(pack_name);

-        let workflow_path = pack_dir
-            .join("workflows")
-            .join(format!("{}.yaml", workflow_name));
-
-        if !workflow_path.exists() {
-            // Try .yml extension
-            let workflow_path_yml = pack_dir
-                .join("workflows")
-                .join(format!("{}.yml", workflow_name));
-            if workflow_path_yml.exists() {
-                let file = WorkflowFile {
-                    path: workflow_path_yml,
-                    pack: pack_name.to_string(),
-                    name: workflow_name.to_string(),
-                    ref_name: ref_name.to_string(),
-                };
-                return self.load_workflow_file(&file).await;
-            }
-            return Err(Error::not_found("workflow", "ref", ref_name));
-        }
-
-        let file = WorkflowFile {
-            path: workflow_path,
-            pack: pack_name.to_string(),
-            name: workflow_name.to_string(),
-            ref_name: ref_name.to_string(),
-        };
-        self.load_workflow_file(&file).await
+        // Candidate directories and filename patterns to search
+        let dirs = [
+            pack_dir.join("actions").join("workflows"),
+            pack_dir.join("workflows"),
+        ];
+        let extensions = [
+            format!("{}.workflow.yaml", workflow_name),
+            format!("{}.yaml", workflow_name),
+            format!("{}.workflow.yml", workflow_name),
+            format!("{}.yml", workflow_name),
+        ];
+
+        for dir in &dirs {
+            for filename in &extensions {
+                let candidate = dir.join(filename);
+                if candidate.exists() {
+                    let file = WorkflowFile {
+                        path: candidate,
+                        pack: pack_name.to_string(),
+                        name: workflow_name.to_string(),
+                        ref_name: ref_name.to_string(),
+                    };
+                    return self.load_workflow_file(&file).await;
+                }
+            }
+        }
+
+        Err(Error::not_found("workflow", "ref", ref_name))
    }
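The search order implied by the reload hunk above — directories outer, filename patterns inner — can be sketched with std alone (hypothetical helper name; the real code checks `candidate.exists()` and loads the first hit):

```rust
use std::path::{Path, PathBuf};

// Candidate search order assumed by reload: actions/workflows/ before
// workflows/, and `.workflow.yaml` before plain `.yaml`, then the .yml pair.
fn candidates(pack_dir: &Path, name: &str) -> Vec<PathBuf> {
    let dirs = [
        pack_dir.join("actions").join("workflows"),
        pack_dir.join("workflows"),
    ];
    let files = [
        format!("{name}.workflow.yaml"),
        format!("{name}.yaml"),
        format!("{name}.workflow.yml"),
        format!("{name}.yml"),
    ];
    dirs.iter()
        .flat_map(|d| files.iter().map(move |f| d.join(f)))
        .collect()
}

fn main() {
    let c = candidates(Path::new("/packs/rp"), "deploy");
    assert_eq!(c.len(), 8);
    // The very first path probed is the visual-builder location.
    assert!(c[0].ends_with("actions/workflows/deploy.workflow.yaml"));
    println!("{}", c[0].display());
}
```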
    /// Scan pack directories
@@ -259,6 +279,11 @@ impl WorkflowLoader {
    }

    /// Scan workflow files in a directory
///
/// Handles both `{name}.yaml` and `{name}.workflow.yaml` naming
/// conventions. For files with a `.workflow.yaml` suffix (produced by
/// the visual workflow builder), the `.workflow` portion is stripped
/// when deriving the workflow name and ref.
    async fn scan_workflow_files(
        &self,
        workflows_dir: &Path,
@@ -278,7 +303,14 @@ impl WorkflowLoader {
            if path.is_file() {
                if let Some(ext) = path.extension() {
                    if ext == "yaml" || ext == "yml" {
-                        if let Some(name) = path.file_stem().and_then(|n| n.to_str()) {
+                        if let Some(raw_stem) = path.file_stem().and_then(|n| n.to_str()) {
+                            // Strip `.workflow` suffix if present:
+                            //   "deploy.workflow.yaml" -> stem "deploy.workflow" -> name "deploy"
+                            //   "deploy.yaml"          -> stem "deploy"          -> name "deploy"
+                            let name = raw_stem
+                                .strip_suffix(".workflow")
+                                .unwrap_or(raw_stem);
                            let ref_name = format!("{}.{}", pack_name, name);
                            workflow_files.push(WorkflowFile {
                                path: path.clone(),
@@ -475,4 +507,161 @@ tasks:
            .to_string()
            .contains("exceeds maximum size"));
    }
/// Verify that `scan_workflow_files` strips the `.workflow` suffix from
/// filenames like `deploy.workflow.yaml`, yielding name `deploy` and
/// ref `pack.deploy` instead of `pack.deploy.workflow`.
#[tokio::test]
async fn test_scan_workflow_files_strips_workflow_suffix() {
let temp_dir = TempDir::new().unwrap();
let packs_dir = temp_dir.path().to_path_buf();
let pack_dir = packs_dir.join("my_pack");
let workflows_dir = pack_dir.join("actions").join("workflows");
fs::create_dir_all(&workflows_dir).await.unwrap();
let workflow_yaml = r#"
ref: my_pack.deploy
label: Deploy
version: "1.0.0"
tasks:
- name: step1
action: core.noop
"#;
fs::write(workflows_dir.join("deploy.workflow.yaml"), workflow_yaml)
.await
.unwrap();
let config = LoaderConfig {
packs_base_dir: packs_dir,
skip_validation: true,
max_file_size: 1024 * 1024,
};
let loader = WorkflowLoader::new(config);
let files = loader
.scan_workflow_files(&workflows_dir, "my_pack")
.await
.unwrap();
assert_eq!(files.len(), 1);
assert_eq!(files[0].name, "deploy");
assert_eq!(files[0].ref_name, "my_pack.deploy");
}
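The stem-stripping rule this test exercises can be reproduced with std alone (a small sketch; `Path::file_stem` removes only the final extension, so `deploy.workflow.yaml` yields the stem `deploy.workflow`):

```rust
use std::path::Path;

// Derive a workflow name from a file path, stripping a trailing
// `.workflow` from the stem — mirrors the loader's naming rule.
fn workflow_name(path: &Path) -> Option<String> {
    let stem = path.file_stem()?.to_str()?;
    Some(stem.strip_suffix(".workflow").unwrap_or(stem).to_string())
}

fn main() {
    assert_eq!(workflow_name(Path::new("deploy.workflow.yaml")).unwrap(), "deploy");
    assert_eq!(workflow_name(Path::new("deploy.yaml")).unwrap(), "deploy");
    println!("ok");
}
```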
/// Verify that `load_pack_workflows` discovers workflow files in both
/// `workflows/` (legacy) and `actions/workflows/` (visual builder)
/// directories, and that `actions/workflows/` wins on ref collision.
#[tokio::test]
async fn test_load_pack_workflows_scans_both_directories() {
let temp_dir = TempDir::new().unwrap();
let packs_dir = temp_dir.path().to_path_buf();
let pack_dir = packs_dir.join("dual_pack");
// Legacy directory: workflows/
let legacy_dir = pack_dir.join("workflows");
fs::create_dir_all(&legacy_dir).await.unwrap();
let legacy_yaml = r#"
ref: dual_pack.alpha
label: Alpha (legacy)
version: "1.0.0"
tasks:
- name: t1
action: core.noop
"#;
fs::write(legacy_dir.join("alpha.yaml"), legacy_yaml)
.await
.unwrap();
// Also put a workflow that only exists in the legacy dir
let beta_yaml = r#"
ref: dual_pack.beta
label: Beta
version: "1.0.0"
tasks:
- name: t1
action: core.noop
"#;
fs::write(legacy_dir.join("beta.yaml"), beta_yaml)
.await
.unwrap();
// Visual builder directory: actions/workflows/
let builder_dir = pack_dir.join("actions").join("workflows");
fs::create_dir_all(&builder_dir).await.unwrap();
let builder_yaml = r#"
ref: dual_pack.alpha
label: Alpha (builder)
version: "2.0.0"
tasks:
- name: t1
action: core.noop
"#;
fs::write(builder_dir.join("alpha.workflow.yaml"), builder_yaml)
.await
.unwrap();
let config = LoaderConfig {
packs_base_dir: packs_dir,
skip_validation: true,
max_file_size: 1024 * 1024,
};
let loader = WorkflowLoader::new(config);
let workflows = loader
.load_pack_workflows("dual_pack", &pack_dir)
.await
.unwrap();
// Both alpha and beta should be present
assert_eq!(workflows.len(), 2);
assert!(workflows.contains_key("dual_pack.alpha"));
assert!(workflows.contains_key("dual_pack.beta"));
// Alpha should come from actions/workflows/ (scanned second, overwrites)
let alpha = &workflows["dual_pack.alpha"];
assert_eq!(alpha.workflow.label, "Alpha (builder)");
assert_eq!(alpha.workflow.version, "2.0.0");
// Beta only exists in legacy dir
let beta = &workflows["dual_pack.beta"];
assert_eq!(beta.workflow.label, "Beta");
}
/// Verify that `reload_workflow` finds files in `actions/workflows/`
/// with the `.workflow.yaml` extension.
#[tokio::test]
async fn test_reload_workflow_finds_actions_workflows_dir() {
let temp_dir = TempDir::new().unwrap();
let packs_dir = temp_dir.path().to_path_buf();
let pack_dir = packs_dir.join("rp");
let builder_dir = pack_dir.join("actions").join("workflows");
fs::create_dir_all(&builder_dir).await.unwrap();
let yaml = r#"
ref: rp.deploy
label: Deploy
version: "1.0.0"
tasks:
- name: step1
action: core.noop
"#;
fs::write(builder_dir.join("deploy.workflow.yaml"), yaml)
.await
.unwrap();
let config = LoaderConfig {
packs_base_dir: packs_dir,
skip_validation: true,
max_file_size: 1024 * 1024,
};
let loader = WorkflowLoader::new(config);
let loaded = loader.reload_workflow("rp.deploy").await.unwrap();
assert_eq!(loaded.workflow.r#ref, "rp.deploy");
assert_eq!(loaded.file.name, "deploy");
assert_eq!(loaded.file.ref_name, "rp.deploy");
}
}

View File

@@ -78,14 +78,26 @@ impl From<ParseError> for crate::error::Error {
}

/// Complete workflow definition parsed from YAML
///
/// When loaded via an action's `workflow_file` field, the `ref` and `label`
/// fields are optional — the action YAML is authoritative for those values.
/// For standalone workflow files (in `workflows/`), they should be present.
#[derive(Debug, Clone, Serialize, Deserialize, Validate)]
pub struct WorkflowDefinition {
-    /// Unique reference (e.g., "my_pack.deploy_app")
-    #[validate(length(min = 1, max = 255))]
+    /// Unique reference (e.g., "my_pack.deploy_app").
+    ///
+    /// Optional for action-linked workflow files (supplied by the action YAML).
+    /// Required for standalone workflow files.
+    #[serde(default)]
+    #[validate(length(max = 255))]
    pub r#ref: String,

-    /// Human-readable label
-    #[validate(length(min = 1, max = 255))]
+    /// Human-readable label.
+    ///
+    /// Optional for action-linked workflow files (supplied by the action YAML).
+    /// Required for standalone workflow files.
+    #[serde(default)]
+    #[validate(length(max = 255))]
    pub label: String,
    /// Optional description
@@ -412,11 +424,19 @@ pub enum TaskType {
}

/// Variable publishing directive
///
/// Publish directives map variable names to values. Values may be template
/// expressions (strings containing `{{ }}`), literal strings, or any other
/// JSON-compatible type (booleans, numbers, arrays, objects). Non-string
/// literals are preserved through the rendering pipeline so that, for example,
/// `validation_passed: true` publishes the boolean `true`, not the string
/// `"true"`.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PublishDirective {
-    /// Simple key-value pair
-    Simple(HashMap<String, String>),
+    /// Key-value pair where the value can be any JSON-compatible type
+    /// (string template, boolean, number, array, object, null).
+    Simple(HashMap<String, serde_json::Value>),
    /// Just a key (publishes entire result under that key)
    Key(String),
}
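The "non-string literals pass through" rule can be sketched without serde (the real code uses `serde_json::Value`; the `Value` enum below is a hypothetical std-only stand-in): only string values can carry a `{{ }}` expression, so everything else is published as a typed literal.

```rust
// Minimal stand-in for a JSON value (real code: serde_json::Value).
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Bool(bool),
    Int(i64),
    Str(String),
}

// Only strings can carry template expressions; booleans, numbers, etc.
// are literals that must keep their type when published.
fn needs_template_render(v: &Value) -> bool {
    matches!(v, Value::Str(s) if s.contains("{{"))
}

fn main() {
    assert!(!needs_template_render(&Value::Bool(true)));
    assert!(!needs_template_render(&Value::Int(42)));
    assert!(!needs_template_render(&Value::Str("hello".into())));
    assert!(needs_template_render(&Value::Str("{{ result().data }}".into())));
    println!("ok");
}
```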
@@ -1315,4 +1335,175 @@ tasks:
        assert!(workflow.tasks[0].next[0].chart_meta.is_none());
        assert!(workflow.tasks[0].next[1].chart_meta.is_none());
    }
// -----------------------------------------------------------------------
// Action-linked workflow file (no ref/label)
// -----------------------------------------------------------------------
#[test]
fn test_parse_action_linked_workflow_without_ref_and_label() {
// Action-linked workflow files (in actions/workflows/) omit ref and
// label — those are supplied by the companion action YAML. The
// parser must accept such files and default the fields to empty
// strings.
let yaml = r#"
version: 1.0.0
vars:
counter: 0
tasks:
- name: step1
action: core.echo
input:
message: "hello"
next:
- when: "{{ succeeded() }}"
do:
- step2
- name: step2
action: core.echo
input:
message: "world"
output_map:
result: "{{ task.step2.result }}"
"#;
let result = parse_workflow_yaml(yaml);
assert!(result.is_ok(), "Parse failed: {:?}", result.err());
let workflow = result.unwrap();
// ref and label default to empty strings
assert_eq!(workflow.r#ref, "");
assert_eq!(workflow.label, "");
// Graph fields are parsed normally
assert_eq!(workflow.version, "1.0.0");
assert_eq!(workflow.tasks.len(), 2);
assert_eq!(workflow.tasks[0].name, "step1");
assert!(workflow.vars.contains_key("counter"));
assert!(workflow.output_map.is_some());
// No parameters or output schema (those come from the action YAML)
assert!(workflow.parameters.is_none());
assert!(workflow.output.is_none());
assert!(workflow.tags.is_empty());
}
#[test]
fn test_parse_standalone_workflow_still_works_with_ref_and_label() {
// Standalone workflow files (in workflows/) still carry ref and label.
// Verify they continue to parse correctly.
let yaml = r#"
ref: mypack.deploy
label: Deploy Workflow
description: Deploys the application
version: 2.0.0
parameters:
target:
type: string
required: true
tags:
- deploy
- production
tasks:
- name: deploy
action: core.run
input:
target: "{{ parameters.target }}"
"#;
let result = parse_workflow_yaml(yaml);
assert!(result.is_ok(), "Parse failed: {:?}", result.err());
let workflow = result.unwrap();
assert_eq!(workflow.r#ref, "mypack.deploy");
assert_eq!(workflow.label, "Deploy Workflow");
assert_eq!(
workflow.description.as_deref(),
Some("Deploys the application")
);
assert_eq!(workflow.version, "2.0.0");
assert!(workflow.parameters.is_some());
assert_eq!(workflow.tags, vec!["deploy", "production"]);
}
#[test]
fn test_typed_publish_values_in_transitions() {
// Regression test: publish directive values that are booleans, numbers,
// or null must parse successfully (not just strings). Previously
// `PublishDirective::Simple(HashMap<String, String>)` rejected them.
let yaml = r#"
ref: test.typed_publish
label: Typed Publish
version: 1.0.0
tasks:
- name: validate
action: core.echo
next:
- when: "{{ succeeded() }}"
publish:
- validation_passed: true
- count: 42
- ratio: 3.14
- label: "hello"
- template_val: "{{ result().data }}"
- nothing: null
do:
- finalize
- when: "{{ failed() }}"
publish:
- validation_passed: false
do:
- handle_error
- name: finalize
action: core.echo
- name: handle_error
action: core.echo
"#;
let result = parse_workflow_yaml(yaml);
assert!(result.is_ok(), "Parse failed: {:?}", result.err());
let workflow = result.unwrap();
let task = &workflow.tasks[0];
assert_eq!(task.name, "validate");
assert_eq!(task.next.len(), 2);
// Success transition: 6 publish directives with mixed types
let success_transition = &task.next[0];
assert_eq!(success_transition.publish.len(), 6);
// Verify each typed value survived parsing
for directive in &success_transition.publish {
if let PublishDirective::Simple(map) = directive {
if let Some(val) = map.get("validation_passed") {
assert_eq!(val, &serde_json::Value::Bool(true), "boolean true");
} else if let Some(val) = map.get("count") {
assert_eq!(val, &serde_json::json!(42), "integer");
} else if let Some(val) = map.get("ratio") {
assert_eq!(val, &serde_json::json!(3.14), "float");
} else if let Some(val) = map.get("label") {
assert_eq!(val, &serde_json::json!("hello"), "string");
} else if let Some(val) = map.get("template_val") {
assert_eq!(val, &serde_json::json!("{{ result().data }}"), "template");
} else if let Some(val) = map.get("nothing") {
assert!(val.is_null(), "null");
}
}
}
// Failure transition: boolean false
let failure_transition = &task.next[1];
assert_eq!(failure_transition.publish.len(), 1);
if let PublishDirective::Simple(map) = &failure_transition.publish[0] {
assert_eq!(map.get("validation_passed"), Some(&serde_json::Value::Bool(false)));
} else {
panic!("Expected Simple publish directive");
}
}
    }
}
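The fix the test above exercises — widening publish values from plain strings to JSON values — can be sketched std-only. The `Value` enum below is a hypothetical stand-in for `serde_json::Value`, just enough to show why a `HashMap<String, String>` rejected typed publish values while a JSON-valued map accepts booleans, numbers, and null alongside template strings:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for serde_json::Value (not the crate's actual type).
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Null,
    Bool(bool),
    Number(f64),
    Text(String),
}

// Build the same mixed-type publish map the YAML in the test declares.
// None of the non-Text entries could live in a HashMap<String, String>.
fn typed_publish() -> HashMap<String, Value> {
    let mut publish = HashMap::new();
    publish.insert("validation_passed".to_string(), Value::Bool(true));
    publish.insert("count".to_string(), Value::Number(42.0));
    publish.insert("nothing".to_string(), Value::Null);
    // Template expressions remain ordinary strings.
    publish.insert(
        "template_val".to_string(),
        Value::Text("{{ result().data }}".to_string()),
    );
    publish
}

fn main() {
    let publish = typed_publish();
    assert_eq!(publish.get("validation_passed"), Some(&Value::Bool(true)));
    assert_eq!(publish.get("nothing"), Some(&Value::Null));
    println!("typed publish values stored: {}", publish.len());
}
```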


@@ -4,6 +4,11 @@
 //! Workflows are stored in the `workflow_definition` table with their full YAML definition
 //! as JSON. A companion action record is also created so that workflows appear in
 //! action lists and the workflow builder's action palette.
+//!
+//! Standalone workflow files (in `workflows/`) carry their own `ref` and `label`.
+//! Action-linked workflow files (in `actions/workflows/`, referenced via
+//! `workflow_file`) may omit those fields — the registrar falls back to
+//! `WorkflowFile.ref_name` / `WorkflowFile.name` derived from the filename.

 use crate::error::{Error, Result};
 use crate::repositories::action::{ActionRepository, CreateActionInput, UpdateActionInput};
@@ -61,6 +66,32 @@ impl WorkflowRegistrar {
         Self { pool, options }
     }

+    /// Resolve the effective ref for a workflow.
+    ///
+    /// Prefers the value declared in the YAML; falls back to the
+    /// `WorkflowFile.ref_name` derived from the filename when the YAML
+    /// omits it (action-linked workflow files).
+    fn effective_ref(loaded: &LoadedWorkflow) -> String {
+        if loaded.workflow.r#ref.is_empty() {
+            loaded.file.ref_name.clone()
+        } else {
+            loaded.workflow.r#ref.clone()
+        }
+    }
+
+    /// Resolve the effective label for a workflow.
+    ///
+    /// Prefers the value declared in the YAML; falls back to the
+    /// `WorkflowFile.name` (human-readable filename stem) when the YAML
+    /// omits it.
+    fn effective_label(loaded: &LoadedWorkflow) -> String {
+        if loaded.workflow.label.is_empty() {
+            loaded.file.name.clone()
+        } else {
+            loaded.workflow.label.clone()
+        }
+    }
+
     /// Register a single workflow
     pub async fn register_workflow(&self, loaded: &LoadedWorkflow) -> Result<RegistrationResult> {
         debug!("Registering workflow: {}", loaded.file.ref_name);
@@ -91,6 +122,12 @@ impl WorkflowRegistrar {
             warnings.push(err.clone());
         }

+        // Resolve effective ref/label — prefer workflow YAML values, fall
+        // back to filename-derived values for action-linked workflow files
+        // that omit action-level metadata.
+        let effective_ref = Self::effective_ref(loaded);
+        let effective_label = Self::effective_label(loaded);
+
         let (workflow_def_id, created) = if let Some(existing) = existing_workflow {
             if !self.options.update_existing {
                 return Err(Error::already_exists(
@@ -102,7 +139,13 @@ impl WorkflowRegistrar {
             info!("Updating existing workflow: {}", loaded.file.ref_name);
             let workflow_def_id = self
-                .update_workflow(&existing.id, &loaded.workflow, &pack.r#ref)
+                .update_workflow(
+                    &existing.id,
+                    &loaded.workflow,
+                    &pack.r#ref,
+                    &effective_ref,
+                    &effective_label,
+                )
                 .await?;

             // Update or create the companion action record
@@ -112,6 +155,8 @@ impl WorkflowRegistrar {
                 pack.id,
                 &pack.r#ref,
                 &loaded.file.name,
+                &effective_ref,
+                &effective_label,
             )
             .await?;
@@ -119,7 +164,14 @@ impl WorkflowRegistrar {
         } else {
             info!("Creating new workflow: {}", loaded.file.ref_name);
             let workflow_def_id = self
-                .create_workflow(&loaded.workflow, &loaded.file.pack, pack.id, &pack.r#ref)
+                .create_workflow(
+                    &loaded.workflow,
+                    &loaded.file.pack,
+                    pack.id,
+                    &pack.r#ref,
+                    &effective_ref,
+                    &effective_label,
+                )
                 .await?;

             // Create a companion action record so the workflow appears in action lists
@@ -129,6 +181,8 @@ impl WorkflowRegistrar {
                 pack.id,
                 &pack.r#ref,
                 &loaded.file.name,
+                &effective_ref,
+                &effective_label,
             )
             .await?;
@@ -195,6 +249,9 @@ impl WorkflowRegistrar {
     /// This ensures the workflow appears in action lists and the action palette
     /// in the workflow builder. The action is linked to the workflow definition
     /// via the `workflow_def` FK.
+    ///
+    /// `effective_ref` and `effective_label` are the resolved values (which may
+    /// have been derived from the filename when the workflow YAML omits them).
     async fn create_companion_action(
         &self,
         workflow_def_id: i64,
@@ -202,14 +259,16 @@ impl WorkflowRegistrar {
         pack_id: i64,
         pack_ref: &str,
         workflow_name: &str,
+        effective_ref: &str,
+        effective_label: &str,
     ) -> Result<()> {
         let entrypoint = format!("workflows/{}.workflow.yaml", workflow_name);

         let action_input = CreateActionInput {
-            r#ref: workflow.r#ref.clone(),
+            r#ref: effective_ref.to_string(),
             pack: pack_id,
             pack_ref: pack_ref.to_string(),
-            label: workflow.label.clone(),
+            label: effective_label.to_string(),
             description: workflow.description.clone().unwrap_or_default(),
             entrypoint,
             runtime: None,
@@ -226,7 +285,7 @@ impl WorkflowRegistrar {
         info!(
             "Created companion action '{}' (ID: {}) for workflow definition (ID: {})",
-            workflow.r#ref, action.id, workflow_def_id
+            effective_ref, action.id, workflow_def_id
         );

         Ok(())
@@ -236,6 +295,9 @@ impl WorkflowRegistrar {
     ///
     /// If the action already exists, update it. If it doesn't exist (e.g., for
     /// workflows registered before the companion-action fix), create it.
+    ///
+    /// `effective_ref` and `effective_label` are the resolved values (which may
+    /// have been derived from the filename when the workflow YAML omits them).
     async fn ensure_companion_action(
         &self,
         workflow_def_id: i64,
@@ -243,6 +305,8 @@ impl WorkflowRegistrar {
         pack_id: i64,
         pack_ref: &str,
         workflow_name: &str,
+        effective_ref: &str,
+        effective_label: &str,
     ) -> Result<()> {
         let existing_action =
             ActionRepository::find_by_workflow_def(&self.pool, workflow_def_id).await?;
@@ -250,13 +314,16 @@ impl WorkflowRegistrar {
         if let Some(action) = existing_action {
             // Update the existing companion action to stay in sync
             let update_input = UpdateActionInput {
-                label: Some(workflow.label.clone()),
+                label: Some(effective_label.to_string()),
                 description: workflow.description.clone(),
                 entrypoint: Some(format!("workflows/{}.workflow.yaml", workflow_name)),
                 runtime: None,
                 runtime_version_constraint: None,
                 param_schema: workflow.parameters.clone(),
                 out_schema: workflow.output.clone(),
+                parameter_delivery: None,
+                parameter_format: None,
+                output_format: None,
             };

             ActionRepository::update(&self.pool, action.id, update_input).await?;
@@ -273,6 +340,8 @@ impl WorkflowRegistrar {
                 pack_id,
                 pack_ref,
                 workflow_name,
+                effective_ref,
+                effective_label,
             )
             .await?;
         }
@@ -281,27 +350,32 @@ impl WorkflowRegistrar {
     }

     /// Create a new workflow definition
+    ///
+    /// `effective_ref` and `effective_label` are the resolved values (which may
+    /// have been derived from the filename when the workflow YAML omits them).
     async fn create_workflow(
         &self,
         workflow: &WorkflowYaml,
         _pack_name: &str,
         pack_id: i64,
         pack_ref: &str,
+        effective_ref: &str,
+        effective_label: &str,
     ) -> Result<i64> {
         // Convert the parsed workflow back to JSON for storage
         let definition = serde_json::to_value(workflow)
             .map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;

         let input = CreateWorkflowDefinitionInput {
-            r#ref: workflow.r#ref.clone(),
+            r#ref: effective_ref.to_string(),
             pack: pack_id,
             pack_ref: pack_ref.to_string(),
-            label: workflow.label.clone(),
+            label: effective_label.to_string(),
             description: workflow.description.clone(),
             version: workflow.version.clone(),
             param_schema: workflow.parameters.clone(),
             out_schema: workflow.output.clone(),
-            definition: definition,
+            definition,
             tags: workflow.tags.clone(),
             enabled: true,
         };
@@ -312,18 +386,23 @@ impl WorkflowRegistrar {
     }

     /// Update an existing workflow definition
+    ///
+    /// `effective_ref` and `effective_label` are the resolved values (which may
+    /// have been derived from the filename when the workflow YAML omits them).
    async fn update_workflow(
         &self,
         workflow_id: &i64,
         workflow: &WorkflowYaml,
         _pack_ref: &str,
+        _effective_ref: &str,
+        effective_label: &str,
     ) -> Result<i64> {
         // Convert the parsed workflow back to JSON for storage
         let definition = serde_json::to_value(workflow)
             .map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;

         let input = UpdateWorkflowDefinitionInput {
-            label: Some(workflow.label.clone()),
+            label: Some(effective_label.to_string()),
             description: workflow.description.clone(),
             version: Some(workflow.version.clone()),
             param_schema: workflow.parameters.clone(),
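The registrar diff above adds a ref/label fallback. Its precedence rule (YAML value wins, filename-derived value fills the gap) can be sketched std-only; the `effective` helper here is hypothetical, not the crate's actual API:

```rust
// Prefer the value declared in the workflow YAML; fall back to the
// filename-derived value when the YAML omits it (empty string).
fn effective<'a>(yaml_value: &'a str, filename_derived: &'a str) -> &'a str {
    if yaml_value.is_empty() {
        filename_derived
    } else {
        yaml_value
    }
}

fn main() {
    // Standalone workflow file: the YAML declares its own ref.
    assert_eq!(effective("mypack.deploy", "deploy"), "mypack.deploy");
    // Action-linked workflow file: the YAML omits the ref, filename wins.
    assert_eq!(effective("", "deploy"), "deploy");
    println!("fallback ok");
}
```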


@@ -196,11 +196,7 @@ async fn test_update_action() {
     let update = UpdateActionInput {
         label: Some("Updated Label".to_string()),
         description: Some("Updated description".to_string()),
-        entrypoint: None,
-        runtime: None,
-        runtime_version_constraint: None,
-        param_schema: None,
-        out_schema: None,
+        ..Default::default()
     };

     let updated = ActionRepository::update(&pool, action.id, update)


@@ -263,6 +263,7 @@ async fn test_update_artifact_all_fields() {
         content_type: Some("image/png".to_string()),
         size_bytes: Some(12345),
         data: Some(serde_json::json!({"key": "value"})),
+        execution: None,
     };

     let updated = ArtifactRepository::update(&pool, created.id, update_input.clone())


@@ -42,27 +42,12 @@ use crate::workflow::graph::TaskGraph;
 /// Extract workflow parameters from an execution's `config` field.
 ///
-/// The config may be stored in two formats:
-/// 1. Wrapped: `{"parameters": {"n": 5, ...}}` — used by child task executions
-/// 2. Flat: `{"n": 5, ...}` — used by the API for manual executions
-///
-/// This helper checks for a `"parameters"` key first, and if absent treats
-/// the entire config object as the parameters (matching the worker's logic
-/// in `ActionExecutor::prepare_execution_context`).
+/// All executions store config in flat format: `{"n": 5, ...}`.
+/// The config object itself IS the parameters map.
 fn extract_workflow_params(config: &Option<JsonValue>) -> JsonValue {
     match config {
-        Some(c) => {
-            // Prefer the wrapped format if present
-            if let Some(params) = c.get("parameters") {
-                params.clone()
-            } else if c.is_object() {
-                // Flat format — the config itself is the parameters
-                c.clone()
-            } else {
-                serde_json::json!({})
-            }
-        }
-        None => serde_json::json!({}),
+        Some(c) if c.is_object() => c.clone(),
+        _ => serde_json::json!({}),
     }
 }
@@ -100,10 +85,7 @@ fn apply_param_defaults(params: JsonValue, param_schema: &Option<JsonValue>) ->
         };
         if needs_default {
             if let Some(default_val) = prop.get("default") {
-                debug!(
-                    "Applying default for parameter '{}': {}",
-                    key, default_val
-                );
+                debug!("Applying default for parameter '{}': {}", key, default_val);
                 obj.insert(key.clone(), default_val.clone());
             }
         }
@@ -234,8 +216,25 @@ impl ExecutionScheduler {
             worker.id, execution_id
         );

+        // Apply parameter defaults from the action's param_schema.
+        // This mirrors what `process_workflow_execution` does for workflows
+        // so that non-workflow executions also get missing parameters filled
+        // in from the action's declared defaults.
+        let execution_config = {
+            let raw_config = execution.config.clone();
+            let params = extract_workflow_params(&raw_config);
+            let params_with_defaults = apply_param_defaults(params, &action.param_schema);
+            // Config is already flat — just use the defaults-applied version
+            if params_with_defaults.is_object()
+                && !params_with_defaults.as_object().unwrap().is_empty()
+            {
+                Some(params_with_defaults)
+            } else {
+                raw_config
+            }
+        };
+
         // Update execution status to scheduled
-        let execution_config = execution.config.clone();
         let mut execution_for_update = execution;
         execution_for_update.status = ExecutionStatus::Scheduled;
         ExecutionRepository::update(pool, execution_for_update.id, execution_for_update.into())
@@ -391,6 +390,7 @@ impl ExecutionScheduler {
                     &workflow_execution.id,
                     task_node,
                     &wf_ctx,
+                    None, // entry-point task — no predecessor
                 )
                 .await?;
             } else {
@@ -407,6 +407,10 @@ impl ExecutionScheduler {
     /// Create a child execution for a single workflow task and dispatch it to
     /// a worker. The child execution references the parent workflow execution
     /// via `workflow_task` metadata.
+    ///
+    /// `triggered_by` is the name of the predecessor task whose completion
+    /// caused this task to be scheduled. Pass `None` for entry-point tasks
+    /// dispatched at workflow start.
     async fn dispatch_workflow_task(
         pool: &PgPool,
         publisher: &Publisher,
@@ -415,6 +419,7 @@ impl ExecutionScheduler {
         workflow_execution_id: &i64,
         task_node: &crate::workflow::graph::TaskNode,
         wf_ctx: &WorkflowContext,
+        triggered_by: Option<&str>,
     ) -> Result<()> {
         let action_ref: String = match &task_node.action {
             Some(a) => a.clone(),
@@ -461,6 +466,7 @@ impl ExecutionScheduler {
                 &action_ref,
                 with_items_expr,
                 wf_ctx,
+                triggered_by,
             )
             .await;
         }
@@ -484,12 +490,12 @@ impl ExecutionScheduler {
             task_node.input.clone()
         };

-        // Build task config from the (rendered) input
+        // Build task config from the (rendered) input.
+        // Store as flat parameters (consistent with manual and rule-triggered
+        // executions) — no {"parameters": ...} wrapper.
         let task_config: Option<JsonValue> =
             if rendered_input.is_object() && !rendered_input.as_object().unwrap().is_empty() {
-                Some(serde_json::json!({
-                    "parameters": rendered_input
-                }))
+                Some(rendered_input.clone())
             } else if let Some(parent_config) = &parent_execution.config {
                 Some(parent_config.clone())
             } else {
@@ -500,6 +506,7 @@ impl ExecutionScheduler {
         let workflow_task = WorkflowTaskMetadata {
             workflow_execution: *workflow_execution_id,
             task_name: task_node.name.clone(),
+            triggered_by: triggered_by.map(String::from),
             task_index: None,
             task_batch: None,
             retry_count: 0,
@@ -587,6 +594,7 @@ impl ExecutionScheduler {
         action_ref: &str,
         with_items_expr: &str,
         wf_ctx: &WorkflowContext,
+        triggered_by: Option<&str>,
     ) -> Result<()> {
         // Resolve the with_items expression to a JSON array
         let items_value = wf_ctx
@@ -647,9 +655,11 @@ impl ExecutionScheduler {
                 task_node.input.clone()
             };

+            // Store as flat parameters (consistent with manual and rule-triggered
+            // executions) — no {"parameters": ...} wrapper.
             let task_config: Option<JsonValue> =
                 if rendered_input.is_object() && !rendered_input.as_object().unwrap().is_empty() {
-                    Some(serde_json::json!({ "parameters": rendered_input }))
+                    Some(rendered_input.clone())
                 } else if let Some(parent_config) = &parent_execution.config {
                     Some(parent_config.clone())
                 } else {
@@ -659,6 +669,7 @@ impl ExecutionScheduler {
             let workflow_task = WorkflowTaskMetadata {
                 workflow_execution: *workflow_execution_id,
                 task_name: task_node.name.clone(),
+                triggered_by: triggered_by.map(String::from),
                 task_index: Some(index as i32),
                 task_batch: None,
                 retry_count: 0,
@@ -961,8 +972,7 @@ impl ExecutionScheduler {
                     .and_then(|n| n.concurrency)
                     .unwrap_or(1);

-                let free_slots =
-                    concurrency_limit.saturating_sub(in_flight_count.0 as usize);
+                let free_slots = concurrency_limit.saturating_sub(in_flight_count.0 as usize);

                 if free_slots > 0 {
                     if let Err(e) = Self::publish_pending_with_items_children(
@@ -1009,6 +1019,39 @@ impl ExecutionScheduler {
                 return Ok(());
            }

+            // ---------------------------------------------------------
+            // Race-condition guard: when multiple with_items children
+            // complete nearly simultaneously, the worker updates their
+            // DB status to Completed *before* the completion MQ message
+            // is processed. This means several advance_workflow calls
+            // (processed sequentially by the completion listener) can
+            // each see "0 siblings remaining" and fall through to
+            // transition evaluation, dispatching successor tasks
+            // multiple times.
+            //
+            // To prevent this we re-check the *persisted*
+            // completed/failed task lists that were loaded from the
+            // workflow_execution record at the top of this function.
+            // If `task_name` is already present, a previous
+            // advance_workflow invocation already handled the final
+            // completion of this with_items task and dispatched its
+            // successors — we can safely return.
+            // ---------------------------------------------------------
+            if workflow_execution
+                .completed_tasks
+                .contains(&task_name.to_string())
+                || workflow_execution
+                    .failed_tasks
+                    .contains(&task_name.to_string())
+            {
+                debug!(
+                    "with_items task '{}' already in persisted completed/failed list — \
+                     another advance_workflow call already handled final completion, skipping",
+                    task_name,
+                );
+                return Ok(());
+            }
+
             // All items done — check if any failed
             let any_failed: Vec<(i64,)> = sqlx::query_as(
                 "SELECT id \
@@ -1129,10 +1172,10 @@ impl ExecutionScheduler {
                     if should_fire {
                         // Process publish directives from this transition
                         if !transition.publish.is_empty() {
-                            let publish_map: HashMap<String, String> = transition
+                            let publish_map: HashMap<String, JsonValue> = transition
                                 .publish
                                 .iter()
-                                .map(|p| (p.name.clone(), p.expression.clone()))
+                                .map(|p| (p.name.clone(), p.value.clone()))
                                 .collect();

                             if let Err(e) = wf_ctx.publish_from_result(
                                 &serde_json::json!({}),
@@ -1161,6 +1204,41 @@ impl ExecutionScheduler {
                             continue;
                         }

+                        // Guard against dispatching a task that has already
+                        // been dispatched (exists as a child execution in
+                        // this workflow). This catches edge cases where
+                        // the persisted completed/failed lists haven't been
+                        // updated yet but a child execution was already
+                        // created by a prior advance_workflow call.
+                        //
+                        // This is critical for with_items predecessors:
+                        // workers update DB status to Completed before the
+                        // completion MQ message is processed, so multiple
+                        // with_items items can each see "0 siblings
+                        // remaining" and attempt to dispatch the same
+                        // successor. The query covers both regular tasks
+                        // (task_index IS NULL) and with_items tasks
+                        // (task_index IS NOT NULL) so that neither case
+                        // can be double-dispatched.
+                        let already_dispatched: (i64,) = sqlx::query_as(
+                            "SELECT COUNT(*) \
+                             FROM execution \
+                             WHERE workflow_task->>'workflow_execution' = $1::text \
+                               AND workflow_task->>'task_name' = $2",
+                        )
+                        .bind(workflow_execution_id.to_string())
+                        .bind(next_task_name.as_str())
+                        .fetch_one(pool)
+                        .await?;
+
+                        if already_dispatched.0 > 0 {
+                            debug!(
+                                "Skipping task '{}' — already dispatched ({} existing execution(s))",
+                                next_task_name, already_dispatched.0
+                            );
+                            continue;
+                        }
+
                         // Check join barrier: if the task has a `join` count,
                         // only schedule it when enough predecessors are done.
                         if let Some(next_node) = graph.get_task(next_task_name) {
@@ -1210,6 +1288,7 @@ impl ExecutionScheduler {
                             &workflow_execution_id,
                             task_node,
                             &wf_ctx,
+                            Some(task_name), // predecessor that triggered this task
                         )
                         .await
                         {
@@ -1716,19 +1795,8 @@ mod tests {
         assert_eq!(free, 0);
     }

-    #[test]
-    fn test_extract_workflow_params_wrapped_format() {
-        // Child task executions store config as {"parameters": {...}}
-        let config = Some(serde_json::json!({
-            "parameters": {"n": 5, "name": "test"}
-        }));
-        let params = extract_workflow_params(&config);
-        assert_eq!(params, serde_json::json!({"n": 5, "name": "test"}));
-    }
-
     #[test]
     fn test_extract_workflow_params_flat_format() {
-        // API manual executions store config as flat {"n": 5, ...}
         let config = Some(serde_json::json!({"n": 5, "name": "test"}));
         let params = extract_workflow_params(&config);
         assert_eq!(params, serde_json::json!({"n": 5, "name": "test"}));
@@ -1742,7 +1810,6 @@ mod tests {
     #[test]
     fn test_extract_workflow_params_non_object() {
-        // Edge case: config is a non-object JSON value
         let config = Some(serde_json::json!("not an object"));
         let params = extract_workflow_params(&config);
         assert_eq!(params, serde_json::json!({}));
@@ -1756,14 +1823,17 @@ mod tests {
     }

     #[test]
-    fn test_extract_workflow_params_wrapped_takes_precedence() {
-        // If config has a "parameters" key, that value is used even if
-        // the config object also has other top-level keys
+    fn test_extract_workflow_params_with_parameters_key() {
+        // A "parameters" key is just a regular parameter — not unwrapped
         let config = Some(serde_json::json!({
             "parameters": {"n": 5},
             "context": {"rule": "test"}
         }));
         let params = extract_workflow_params(&config);
-        assert_eq!(params, serde_json::json!({"n": 5}));
+        // Returns the whole object as-is — "parameters" is treated as a normal key
+        assert_eq!(
+            params,
+            serde_json::json!({"parameters": {"n": 5}, "context": {"rule": "test"}})
+        );
     }
 }


@@ -412,24 +412,26 @@ impl WorkflowContext {
     /// Publish variables from a task result.
     ///
-    /// Each publish directive is a `(name, expression)` pair where the
-    /// expression is a template string like `"{{ result().data.items }}"`.
-    /// The expression is rendered with `render_json`-style type preservation
-    /// so that non-string values (arrays, numbers, booleans) keep their type.
+    /// Each publish directive is a `(name, value)` pair where the value is
+    /// any JSON-compatible type. String values are treated as template
+    /// expressions (e.g. `"{{ result().data.items }}"`) and rendered with
+    /// type preservation. Non-string values (booleans, numbers, arrays,
+    /// objects, null) pass through `render_json` unchanged, preserving
+    /// their original type.
     pub fn publish_from_result(
         &mut self,
         result: &JsonValue,
         publish_vars: &[String],
-        publish_map: Option<&HashMap<String, String>>,
+        publish_map: Option<&HashMap<String, JsonValue>>,
     ) -> ContextResult<()> {
         // If publish map is provided, use it
         if let Some(map) = publish_map {
-            for (var_name, template) in map {
-                // Use type-preserving rendering: if the entire template is a
-                // single expression like `{{ result().data.items }}`, preserve
-                // the underlying JsonValue type (e.g. an array stays an array).
-                let json_value = JsonValue::String(template.clone());
-                let value = self.render_json(&json_value)?;
+            for (var_name, json_value) in map {
+                // render_json handles all types: strings are template-rendered
+                // (with type preservation for pure `{{ }}` expressions), while
+                // booleans, numbers, arrays, objects, and null pass through
+                // unchanged.
+                let value = self.render_json(json_value)?;
                 self.set_var(var_name, value);
             }
         } else {
@@ -1095,7 +1097,7 @@ mod tests {
         let mut publish_map = HashMap::new();
         publish_map.insert(
             "number_list".to_string(),
-            "{{ result().data.items }}".to_string(),
+            JsonValue::String("{{ result().data.items }}".to_string()),
         );

         ctx.publish_from_result(&json!({}), &[], Some(&publish_map))
@@ -1117,6 +1119,52 @@ mod tests {
         assert_eq!(ctx.get_var("my_var").unwrap(), result);
     }

+    #[test]
+    fn test_publish_typed_values() {
+        // Non-string publish values (booleans, numbers, null) should pass
+        // through render_json unchanged and be stored with their original type.
+        let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
+        ctx.set_last_task_outcome(json!({"status": "ok"}), TaskOutcome::Succeeded);
+
+        let mut publish_map = HashMap::new();
+        publish_map.insert("flag".to_string(), JsonValue::Bool(true));
+        publish_map.insert("count".to_string(), json!(42));
+        publish_map.insert("ratio".to_string(), json!(3.14));
+        publish_map.insert("nothing".to_string(), JsonValue::Null);
+        publish_map.insert(
+            "template".to_string(),
+            JsonValue::String("{{ result().status }}".to_string()),
+        );
+        publish_map.insert(
+            "plain_str".to_string(),
+            JsonValue::String("hello".to_string()),
+        );
+
+        ctx.publish_from_result(&json!({}), &[], Some(&publish_map))
+            .unwrap();
+
+        // Boolean preserved as boolean (not string "true")
+        assert_eq!(ctx.get_var("flag").unwrap(), json!(true));
+        assert!(ctx.get_var("flag").unwrap().is_boolean());
+        // Integer preserved
+        assert_eq!(ctx.get_var("count").unwrap(), json!(42));
+        assert!(ctx.get_var("count").unwrap().is_number());
+        // Float preserved
+        assert_eq!(ctx.get_var("ratio").unwrap(), json!(3.14));
+        // Null preserved
+        assert_eq!(ctx.get_var("nothing").unwrap(), json!(null));
+        assert!(ctx.get_var("nothing").unwrap().is_null());
+        // Template expression rendered with type preservation
+        assert_eq!(ctx.get_var("template").unwrap(), json!("ok"));
+        // Plain string stays as string
+        assert_eq!(ctx.get_var("plain_str").unwrap(), json!("hello"));
+    }
+
     #[test]
     fn test_published_var_accessible_via_workflow_namespace() {
         let mut ctx = WorkflowContext::new(json!({}), HashMap::new());


@@ -11,6 +11,7 @@
 //! - `do` — next tasks to invoke when the condition is met

 use attune_common::workflow::{Task, TaskType, WorkflowDefinition};
+use serde_json::Value as JsonValue;
 use std::collections::{HashMap, HashSet};

 /// Result type for graph operations
@@ -101,11 +102,23 @@ pub struct GraphTransition {
pub do_tasks: Vec<String>,
}
/// A single publish variable (key = value).
///
/// The `value` field holds either a template expression (as a `JsonValue::String`
/// containing `{{ }}`), a literal string, or any other JSON-compatible type
/// (boolean, number, array, object, null). The workflow context's `render_json`
/// method handles all of these: strings are template-rendered (with type
/// preservation for pure expressions), while non-string values pass through
/// unchanged.
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct PublishVar {
pub name: String,
pub expression: String, /// The publish value — may be a template string, literal boolean, number,
/// array, object, or null. Renamed from `expression` (which only supported
/// strings); the serde alias ensures existing serialized task graphs that
/// use the old field name still deserialize correctly.
#[serde(alias = "expression")]
pub value: JsonValue,
}
/// Retry configuration
@@ -463,14 +476,14 @@ fn extract_publish_vars(publish: &[attune_common::workflow::PublishDirective]) -
for (key, value) in map {
vars.push(PublishVar {
name: key.clone(),
value: value.clone(),
});
}
}
PublishDirective::Key(key) => {
vars.push(PublishVar {
name: key.clone(),
value: JsonValue::String("{{ result() }}".to_string()),
});
}
}
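The two directive forms above can be sketched with standalone stand-in types (hypothetical simplified versions of `PublishDirective`/`PublishVar`, with a toy `Val` in place of `serde_json::Value`): a bare key publishes the whole task result via the default `{{ result() }}` template, while a map entry carries its value through with its JSON type intact.

```rust
// Simplified stand-ins for PublishDirective / PublishVar; illustration only.
#[derive(Clone, Debug, PartialEq)]
enum Val {
    Str(String),
    Bool(bool),
}

enum Directive {
    // Key-value pairs; values keep their original type.
    Simple(Vec<(String, Val)>),
    // Bare key: publish the entire result under this name.
    Key(String),
}

struct Var {
    name: String,
    value: Val,
}

fn extract_publish_vars(publish: &[Directive]) -> Vec<Var> {
    let mut vars = Vec::new();
    for directive in publish {
        match directive {
            Directive::Simple(entries) => {
                for (name, value) in entries {
                    vars.push(Var {
                        name: name.clone(),
                        value: value.clone(),
                    });
                }
            }
            Directive::Key(name) => vars.push(Var {
                name: name.clone(),
                value: Val::Str("{{ result() }}".to_string()),
            }),
        }
    }
    vars
}
```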
@@ -678,7 +691,7 @@ tasks:
assert_eq!(transitions.len(), 1);
assert_eq!(transitions[0].publish.len(), 1);
assert_eq!(transitions[0].publish[0].name, "msg");
assert_eq!(transitions[0].publish[0].value, JsonValue::String("task1 done".to_string()));
}
#[test]
@@ -932,4 +945,82 @@ tasks:
assert!(next.contains(&"failure_task".to_string()));
assert!(next.contains(&"always_task".to_string()));
}
#[test]
fn test_typed_publish_values() {
// Verify that non-string publish values (booleans, numbers, null)
// are preserved through parsing and graph construction.
let yaml = r#"
ref: test.typed_publish
label: Typed Publish Test
version: 1.0.0
tasks:
- name: task1
action: core.echo
next:
- when: "{{ succeeded() }}"
publish:
- validation_passed: true
- count: 42
- ratio: 3.14
- label: "hello"
- template_val: "{{ result().data }}"
- nothing: null
do:
- task2
- when: "{{ failed() }}"
publish:
- validation_passed: false
do:
- task2
- name: task2
action: core.echo
"#;
let workflow = workflow::parse_workflow_yaml(yaml).unwrap();
let graph = TaskGraph::from_workflow(&workflow).unwrap();
let task1 = graph.get_task("task1").unwrap();
assert_eq!(task1.transitions.len(), 2);
// Success transition should have 6 publish vars
let success_publish = &task1.transitions[0].publish;
assert_eq!(success_publish.len(), 6);
// Build a lookup map for easier assertions
let publish_map: HashMap<&str, &JsonValue> = success_publish
.iter()
.map(|p| (p.name.as_str(), &p.value))
.collect();
// Boolean true is preserved as a JSON boolean
assert_eq!(publish_map["validation_passed"], &JsonValue::Bool(true));
// Integer is preserved as a JSON number
assert_eq!(publish_map["count"], &serde_json::json!(42));
// Float is preserved as a JSON number
assert_eq!(publish_map["ratio"], &serde_json::json!(3.14));
// Plain string stays as a string
assert_eq!(
publish_map["label"],
&JsonValue::String("hello".to_string())
);
// Template expression stays as a string (rendered later by context)
assert_eq!(
publish_map["template_val"],
&JsonValue::String("{{ result().data }}".to_string())
);
// Null is preserved
assert_eq!(publish_map["nothing"], &JsonValue::Null);
// Failure transition should have boolean false
let failure_publish = &task1.transitions[1].publish;
assert_eq!(failure_publish.len(), 1);
assert_eq!(failure_publish[0].name, "validation_passed");
assert_eq!(failure_publish[0].value, JsonValue::Bool(false));
}
}


@@ -162,11 +162,16 @@ pub enum TaskType {
}
/// Variable publishing directive
///
/// Values may be template expressions (strings containing `{{ }}`), literal
/// strings, or any other JSON-compatible type (booleans, numbers, arrays,
/// objects). Non-string literals are preserved through the rendering pipeline.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PublishDirective {
/// Key-value pair where the value can be any JSON-compatible type
/// (string template, boolean, number, array, object, null).
Simple(HashMap<String, serde_json::Value>),
/// Just a key (publishes entire result under that key)
Key(String),
}


@@ -4,6 +4,11 @@
//! Workflows are stored in the `workflow_definition` table with their full YAML definition
//! as JSON. A companion action record is also created so that workflows appear in
//! action lists and the workflow builder's action palette.
//!
//! Standalone workflow files (in `workflows/`) carry their own `ref` and `label`.
//! Action-linked workflow files (in `actions/workflows/`, referenced via
//! `workflow_file`) may omit those fields — the registrar falls back to
//! `WorkflowFile.ref_name` / `WorkflowFile.name` derived from the filename.
use attune_common::error::{Error, Result};
use attune_common::repositories::action::{ActionRepository, CreateActionInput, UpdateActionInput};
@@ -63,6 +68,32 @@ impl WorkflowRegistrar {
Self { pool, options }
}
/// Resolve the effective ref for a workflow.
///
/// Prefers the value declared in the YAML; falls back to the
/// `WorkflowFile.ref_name` derived from the filename when the YAML
/// omits it (action-linked workflow files).
fn effective_ref(loaded: &LoadedWorkflow) -> String {
if loaded.workflow.r#ref.is_empty() {
loaded.file.ref_name.clone()
} else {
loaded.workflow.r#ref.clone()
}
}
/// Resolve the effective label for a workflow.
///
/// Prefers the value declared in the YAML; falls back to the
/// `WorkflowFile.name` (human-readable filename stem) when the YAML
/// omits it.
fn effective_label(loaded: &LoadedWorkflow) -> String {
if loaded.workflow.label.is_empty() {
loaded.file.name.clone()
} else {
loaded.workflow.label.clone()
}
}
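The two helpers above share one pattern; a minimal stdlib-only sketch of it (the function name and sample values here are hypothetical, not from the codebase):

```rust
// Prefer the value declared in the workflow YAML; fall back to the
// filename-derived value when the YAML leaves the field empty.
fn prefer_declared(declared: &str, filename_derived: &str) -> String {
    if declared.is_empty() {
        filename_derived.to_string()
    } else {
        declared.to_string()
    }
}
```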
/// Register a single workflow
pub async fn register_workflow(&self, loaded: &LoadedWorkflow) -> Result<RegistrationResult> {
debug!("Registering workflow: {}", loaded.file.ref_name);
@@ -93,6 +124,12 @@ impl WorkflowRegistrar {
warnings.push(err.clone());
}
// Resolve effective ref/label — prefer workflow YAML values, fall
// back to filename-derived values for action-linked workflow files
// that omit action-level metadata.
let effective_ref = Self::effective_ref(loaded);
let effective_label = Self::effective_label(loaded);
let (workflow_def_id, created) = if let Some(existing) = existing_workflow {
if !self.options.update_existing {
return Err(Error::already_exists(
@@ -104,7 +141,13 @@ impl WorkflowRegistrar {
info!("Updating existing workflow: {}", loaded.file.ref_name);
let workflow_def_id = self
.update_workflow(
&existing.id,
&loaded.workflow,
&pack.r#ref,
&effective_ref,
&effective_label,
)
.await?;
// Update or create the companion action record
@@ -114,6 +157,8 @@ impl WorkflowRegistrar {
pack.id,
&pack.r#ref,
&loaded.file.name,
&effective_ref,
&effective_label,
)
.await?;
@@ -121,7 +166,14 @@ impl WorkflowRegistrar {
} else {
info!("Creating new workflow: {}", loaded.file.ref_name);
let workflow_def_id = self
.create_workflow(
&loaded.workflow,
&loaded.file.pack,
pack.id,
&pack.r#ref,
&effective_ref,
&effective_label,
)
.await?;
// Create a companion action record so the workflow appears in action lists
@@ -131,6 +183,8 @@ impl WorkflowRegistrar {
pack.id,
&pack.r#ref,
&loaded.file.name,
&effective_ref,
&effective_label,
)
.await?;
@@ -197,6 +251,9 @@ impl WorkflowRegistrar {
/// This ensures the workflow appears in action lists and the action palette
/// in the workflow builder. The action is linked to the workflow definition
/// via the `workflow_def` FK.
///
/// `effective_ref` and `effective_label` are the resolved values (which may
/// have been derived from the filename when the workflow YAML omits them).
async fn create_companion_action(
&self,
workflow_def_id: i64,
@@ -204,14 +261,16 @@ impl WorkflowRegistrar {
pack_id: i64,
pack_ref: &str,
workflow_name: &str,
effective_ref: &str,
effective_label: &str,
) -> Result<()> {
let entrypoint = format!("workflows/{}.workflow.yaml", workflow_name);
let action_input = CreateActionInput {
r#ref: effective_ref.to_string(),
pack: pack_id,
pack_ref: pack_ref.to_string(),
label: effective_label.to_string(),
description: workflow.description.clone().unwrap_or_default(),
entrypoint,
runtime: None,
@@ -228,7 +287,7 @@ impl WorkflowRegistrar {
info!(
"Created companion action '{}' (ID: {}) for workflow definition (ID: {})",
effective_ref, action.id, workflow_def_id
);
Ok(())
@@ -238,6 +297,9 @@ impl WorkflowRegistrar {
///
/// If the action already exists, update it. If it doesn't exist (e.g., for
/// workflows registered before the companion-action fix), create it.
///
/// `effective_ref` and `effective_label` are the resolved values (which may
/// have been derived from the filename when the workflow YAML omits them).
async fn ensure_companion_action(
&self,
workflow_def_id: i64,
@@ -245,6 +307,8 @@ impl WorkflowRegistrar {
pack_id: i64,
pack_ref: &str,
workflow_name: &str,
effective_ref: &str,
effective_label: &str,
) -> Result<()> {
let existing_action =
ActionRepository::find_by_workflow_def(&self.pool, workflow_def_id).await?;
@@ -252,13 +316,16 @@ impl WorkflowRegistrar {
if let Some(action) = existing_action {
// Update the existing companion action to stay in sync
let update_input = UpdateActionInput {
label: Some(effective_label.to_string()),
description: workflow.description.clone(),
entrypoint: Some(format!("workflows/{}.workflow.yaml", workflow_name)),
runtime: None,
runtime_version_constraint: None,
param_schema: workflow.parameters.clone(),
out_schema: workflow.output.clone(),
parameter_delivery: None,
parameter_format: None,
output_format: None,
};
ActionRepository::update(&self.pool, action.id, update_input).await?;
@@ -275,6 +342,8 @@ impl WorkflowRegistrar {
pack_id,
pack_ref,
workflow_name,
effective_ref,
effective_label,
)
.await?;
}
@@ -283,27 +352,32 @@ impl WorkflowRegistrar {
}
/// Create a new workflow definition
///
/// `effective_ref` and `effective_label` are the resolved values (which may
/// have been derived from the filename when the workflow YAML omits them).
async fn create_workflow(
&self,
workflow: &WorkflowYaml,
_pack_name: &str,
pack_id: i64,
pack_ref: &str,
effective_ref: &str,
effective_label: &str,
) -> Result<i64> {
// Convert the parsed workflow back to JSON for storage
let definition = serde_json::to_value(workflow)
.map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;
let input = CreateWorkflowDefinitionInput {
r#ref: effective_ref.to_string(),
pack: pack_id,
pack_ref: pack_ref.to_string(),
label: effective_label.to_string(),
description: workflow.description.clone(),
version: workflow.version.clone(),
param_schema: workflow.parameters.clone(),
out_schema: workflow.output.clone(),
definition,
tags: workflow.tags.clone(),
enabled: true,
};
@@ -314,18 +388,23 @@ impl WorkflowRegistrar {
}
/// Update an existing workflow definition
///
/// `effective_ref` and `effective_label` are the resolved values (which may
/// have been derived from the filename when the workflow YAML omits them).
async fn update_workflow(
&self,
workflow_id: &i64,
workflow: &WorkflowYaml,
_pack_ref: &str,
_effective_ref: &str,
effective_label: &str,
) -> Result<i64> {
// Convert the parsed workflow back to JSON for storage
let definition = serde_json::to_value(workflow)
.map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;
let input = UpdateWorkflowDefinitionInput {
label: Some(effective_label.to_string()),
description: workflow.description.clone(),
version: Some(workflow.version.clone()),
param_schema: workflow.parameters.clone(),


@@ -13,6 +13,7 @@ path = "src/main.rs"
[dependencies]
attune-common = { path = "../common" }
tokio = { workspace = true }
tokio-util = { workspace = true }
sqlx = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
@@ -34,5 +35,6 @@ sha2 = { workspace = true }
base64 = { workspace = true }
tempfile = { workspace = true }
jsonwebtoken = { workspace = true }
libc = "0.2"
[dev-dependencies]


@@ -48,6 +48,8 @@ pub struct ActionExecutor {
jwt_config: JwtConfig,
}
use tokio_util::sync::CancellationToken;
/// Normalize a server bind address into a connectable URL.
///
/// When the server binds to `0.0.0.0` (all interfaces) or `::` (IPv6 any),
@@ -90,6 +92,19 @@ impl ActionExecutor {
/// Execute an action for the given execution
pub async fn execute(&self, execution_id: i64) -> Result<ExecutionResult> {
self.execute_with_cancel(execution_id, CancellationToken::new())
.await
}
/// Execute an action for the given execution, with cancellation support.
///
/// When the `cancel_token` is triggered, the running process receives
/// SIGINT → SIGTERM → SIGKILL with escalating grace periods.
pub async fn execute_with_cancel(
&self,
execution_id: i64,
cancel_token: CancellationToken,
) -> Result<ExecutionResult> {
info!("Starting execution: {}", execution_id);
// Update execution status to running
@@ -108,7 +123,7 @@ impl ActionExecutor {
let action = self.load_action(&execution).await?;
// Prepare execution context
let mut context = match self.prepare_execution_context(&execution, &action).await {
Ok(ctx) => ctx,
Err(e) => {
error!("Failed to prepare execution context: {}", e);
@@ -122,6 +137,9 @@ impl ActionExecutor {
}
};
// Attach the cancellation token so the process executor can monitor it
context.cancel_token = Some(cancel_token);
// Execute the action
// Note: execute_action should rarely return Err - most failures should be
// captured in ExecutionResult with non-zero exit codes
@@ -257,41 +275,27 @@ impl ActionExecutor {
execution.id
);
// Extract parameters from execution config.
// Config is always stored in flat format: the config object itself
// is the parameters map (e.g. {"url": "...", "method": "GET"}).
let mut parameters = HashMap::new();
if let Some(config) = &execution.config {
debug!("Execution config present: {:?}", config);
if let JsonValue::Object(map) = config {
for (key, value) in map {
debug!("Adding parameter: {} = {:?}", key, value);
parameters.insert(key.clone(), value.clone());
}
} else {
info!("Config is not an Object, cannot extract parameters");
}
} else {
debug!("No execution config present");
}
debug!(
"Extracted {} parameters: {:?}",
parameters.len(),
parameters
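The flat-config rule reduces to: if the config is a JSON object, every top-level key becomes a parameter; anything else yields none. A stdlib-only sketch (the `Json` enum is a toy stand-in for `serde_json::Value`, not the real type):

```rust
use std::collections::HashMap;

// Toy stand-in for serde_json::Value, just enough to show the flat layout.
#[derive(Clone, Debug, PartialEq)]
enum Json {
    Object(HashMap<String, String>),
    Str(String),
}

// Flat format: the config object itself is the parameters map; a missing
// or non-object config yields no parameters.
fn extract_parameters(config: Option<&Json>) -> HashMap<String, String> {
    match config {
        Some(Json::Object(map)) => map.clone(),
        _ => HashMap::new(),
    }
}
```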
@@ -534,6 +538,7 @@ impl ActionExecutor {
parameter_delivery: action.parameter_delivery,
parameter_format: action.parameter_format,
output_format: action.output_format,
cancel_token: None,
};
Ok(context)


@@ -177,6 +177,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
assert!(runtime.can_execute(&context));
@@ -209,6 +210,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
assert!(!runtime.can_execute(&context));


@@ -43,6 +43,7 @@ use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::PathBuf;
use thiserror::Error;
use tokio_util::sync::CancellationToken;
// Re-export dependency management types
pub use dependency::{
@@ -90,7 +91,7 @@ pub enum RuntimeError {
}
/// Action execution context
#[derive(Debug, Clone)]
pub struct ExecutionContext {
/// Execution ID
pub execution_id: i64,
@@ -130,7 +131,6 @@ pub struct ExecutionContext {
/// "Python" runtime). When present, `ProcessRuntime` uses this config
/// instead of its built-in one for interpreter resolution, environment
/// setup, and dependency management.
pub runtime_config_override: Option<RuntimeExecutionConfig>,
/// Optional override of the environment directory suffix. When a specific
@@ -144,28 +144,24 @@ pub struct ExecutionContext {
pub selected_runtime_version: Option<String>,
/// Maximum stdout size in bytes (for log truncation)
pub max_stdout_bytes: usize,
/// Maximum stderr size in bytes (for log truncation)
pub max_stderr_bytes: usize,
/// How parameters should be delivered to the action
pub parameter_delivery: ParameterDelivery,
/// Format for parameter serialization
pub parameter_format: ParameterFormat,
/// Format for output parsing
pub output_format: OutputFormat,
/// Optional cancellation token for graceful process termination.
/// When triggered, the executor sends SIGINT → SIGTERM → SIGKILL
/// with escalating grace periods.
pub cancel_token: Option<CancellationToken>,
}
impl ExecutionContext {
@@ -193,6 +189,7 @@ impl ExecutionContext {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
}
}
}


@@ -725,8 +725,8 @@ impl Runtime for ProcessRuntime {
.unwrap_or_else(|| "<none>".to_string()),
);
// Execute with streaming output capture (with optional cancellation support)
process_executor::execute_streaming_cancellable(
cmd,
&context.secrets,
parameters_stdin,
@@ -734,6 +734,7 @@ impl Runtime for ProcessRuntime {
context.max_stdout_bytes,
context.max_stderr_bytes,
context.output_format,
context.cancel_token.clone(),
)
.await
}
@@ -905,6 +906,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
assert!(runtime.can_execute(&context));
@@ -939,6 +941,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
assert!(runtime.can_execute(&context));
@@ -973,6 +976,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
assert!(!runtime.can_execute(&context));
@@ -1063,6 +1067,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -1120,6 +1125,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -1158,6 +1164,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -1208,6 +1215,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -1316,6 +1324,7 @@ mod tests {
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();


@@ -4,6 +4,14 @@
//! implementations. Handles streaming stdout/stderr capture, bounded log
//! collection, timeout management, stdin parameter/secret delivery, and
//! output format parsing.
//!
//! ## Cancellation Support
//!
//! When a `CancellationToken` is provided, the executor monitors it alongside
//! the running process. On cancellation:
//! 1. SIGINT is sent to the process (allows graceful shutdown)
//! 2. After a 10-second grace period, SIGTERM is sent if the process hasn't exited
//! 3. After another 5-second grace period, SIGKILL is sent as a last resort
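The escalation schedule above can be sketched as pure timing logic (a hedged illustration only; the real implementation delivers signals to the child process rather than returning names, and the function here is hypothetical):

```rust
// cancel+0s → SIGINT, cancel+10s → SIGTERM, cancel+15s → SIGKILL.
fn signal_for_elapsed(secs_since_cancel: u64) -> &'static str {
    match secs_since_cancel {
        0..=9 => "SIGINT",    // graceful-shutdown window
        10..=14 => "SIGTERM", // first escalation after the 10s grace period
        _ => "SIGKILL",       // last resort after a further 5s
    }
}
```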
use super::{BoundedLogWriter, ExecutionResult, OutputFormat, RuntimeResult};
use std::collections::HashMap;
@@ -12,7 +20,8 @@ use std::time::Instant;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::Command;
use tokio::time::timeout;
use tokio_util::sync::CancellationToken;
use tracing::{debug, info, warn};
/// Execute a subprocess command with streaming output capture. /// Execute a subprocess command with streaming output capture.
/// ///
@@ -33,6 +42,48 @@ use tracing::{debug, warn};
 /// * `max_stderr_bytes` - Maximum stderr size before truncation
 /// * `output_format` - How to parse stdout (Text, Json, Yaml, Jsonl)
 pub async fn execute_streaming(
+    cmd: Command,
+    secrets: &HashMap<String, String>,
+    parameters_stdin: Option<&str>,
+    timeout_secs: Option<u64>,
+    max_stdout_bytes: usize,
+    max_stderr_bytes: usize,
+    output_format: OutputFormat,
+) -> RuntimeResult<ExecutionResult> {
+    execute_streaming_cancellable(
+        cmd,
+        secrets,
+        parameters_stdin,
+        timeout_secs,
+        max_stdout_bytes,
+        max_stderr_bytes,
+        output_format,
+        None,
+    )
+    .await
+}
+
+/// Execute a subprocess command with streaming output capture and optional cancellation.
+///
+/// This is the core execution function used by all runtime implementations.
+/// It handles:
+/// - Spawning the process with piped I/O
+/// - Writing parameters and secrets to stdin
+/// - Streaming stdout/stderr with bounded log collection
+/// - Timeout management
+/// - Graceful cancellation via SIGINT → SIGTERM → SIGKILL escalation
+/// - Output format parsing (JSON, YAML, JSONL, text)
+///
+/// # Arguments
+/// * `cmd` - Pre-configured `Command` (interpreter, args, env vars, working dir already set)
+/// * `secrets` - Secrets to pass via stdin (as JSON)
+/// * `parameters_stdin` - Optional parameter data to write to stdin before secrets
+/// * `timeout_secs` - Optional execution timeout in seconds
+/// * `max_stdout_bytes` - Maximum stdout size before truncation
+/// * `max_stderr_bytes` - Maximum stderr size before truncation
+/// * `output_format` - How to parse stdout (Text, Json, Yaml, Jsonl)
+/// * `cancel_token` - Optional cancellation token for graceful process termination
+pub async fn execute_streaming_cancellable(
     mut cmd: Command,
     secrets: &HashMap<String, String>,
     parameters_stdin: Option<&str>,
@@ -40,6 +91,7 @@ pub async fn execute_streaming(
     max_stdout_bytes: usize,
     max_stderr_bytes: usize,
     output_format: OutputFormat,
+    cancel_token: Option<CancellationToken>,
 ) -> RuntimeResult<ExecutionResult> {
     let start = Instant::now();
@@ -56,19 +108,14 @@ pub async fn execute_streaming(
     let mut error = None;
 
     // Write parameters first if using stdin delivery.
-    // Skip empty/trivial content ("{}","","[]") to avoid polluting stdin
-    // before secrets — scripts that read secrets via readline() expect
-    // the secrets JSON as the first line.
-    let has_real_params = parameters_stdin
-        .map(|s| !matches!(s.trim(), "" | "{}" | "[]"))
-        .unwrap_or(false);
+    // When the caller provides parameters_stdin (i.e. the action uses
+    // stdin delivery), always write the content — even if it's "{}" —
+    // because the script expects to read valid JSON from stdin.
     if let Some(params_data) = parameters_stdin {
-        if has_real_params {
-            if let Err(e) = stdin.write_all(params_data.as_bytes()).await {
-                error = Some(format!("Failed to write parameters to stdin: {}", e));
-            } else if let Err(e) = stdin.write_all(b"\n---ATTUNE_PARAMS_END---\n").await {
-                error = Some(format!("Failed to write parameter delimiter: {}", e));
-            }
+        if let Err(e) = stdin.write_all(params_data.as_bytes()).await {
+            error = Some(format!("Failed to write parameters to stdin: {}", e));
+        } else if let Err(e) = stdin.write_all(b"\n---ATTUNE_PARAMS_END---\n").await {
+            error = Some(format!("Failed to write parameter delimiter: {}", e));
         }
     }
@@ -139,15 +186,74 @@ pub async fn execute_streaming(
         stderr_writer
     };
 
-    // Wait for both streams and the process
-    let (stdout_writer, stderr_writer, wait_result) =
-        tokio::join!(stdout_task, stderr_task, async {
+    // Determine the process ID for signal-based cancellation.
+    // Must be read before we move `child` into the wait future.
+    let child_pid = child.id();
+
+    // Build the wait future that handles timeout, cancellation, and normal completion.
+    //
+    // The result is a tuple: (wait_result, was_cancelled)
+    // - wait_result mirrors the original type: Result<Result<ExitStatus, io::Error>, Elapsed>
+    // - was_cancelled indicates the process was stopped by a cancel request
+    let wait_future = async {
+        // Inner future: wait for the child process to exit
+        let wait_child = child.wait();
+
+        // Apply optional timeout wrapping
+        let timed_wait = async {
             if let Some(timeout_secs) = timeout_secs {
-                timeout(std::time::Duration::from_secs(timeout_secs), child.wait()).await
+                timeout(std::time::Duration::from_secs(timeout_secs), wait_child).await
             } else {
-                Ok(child.wait().await)
+                Ok(wait_child.await)
             }
-        });
+        };
+
+        // If we have a cancel token, race it against the (possibly-timed) wait
+        if let Some(ref token) = cancel_token {
+            tokio::select! {
+                result = timed_wait => (result, false),
+                _ = token.cancelled() => {
+                    // Cancellation requested — escalate signals to the child process.
+                    info!("Cancel signal received, sending SIGINT to process");
+                    if let Some(pid) = child_pid {
+                        send_signal(pid, libc::SIGINT);
+                    }
+
+                    // Grace period: wait up to 10s for the process to exit after SIGINT.
+                    match timeout(std::time::Duration::from_secs(10), child.wait()).await {
+                        Ok(status) => (Ok(status), true),
+                        Err(_) => {
+                            // Still alive — escalate to SIGTERM
+                            warn!("Process did not exit after SIGINT + 10s grace period, sending SIGTERM");
+                            if let Some(pid) = child_pid {
+                                send_signal(pid, libc::SIGTERM);
+                            }
+
+                            // Final grace period: wait up to 5s for SIGTERM
+                            match timeout(std::time::Duration::from_secs(5), child.wait()).await {
+                                Ok(status) => (Ok(status), true),
+                                Err(_) => {
+                                    // Last resort — SIGKILL
+                                    warn!("Process did not exit after SIGTERM + 5s, sending SIGKILL");
+                                    if let Some(pid) = child_pid {
+                                        send_signal(pid, libc::SIGKILL);
+                                    }
+                                    // Wait indefinitely for the SIGKILL to take effect
+                                    (Ok(child.wait().await), true)
+                                }
+                            }
+                        }
+                    }
+                }
+            }
+        } else {
+            (timed_wait.await, false)
+        }
+    };
+
+    // Wait for both streams and the process
+    let (stdout_writer, stderr_writer, (wait_result, was_cancelled)) =
+        tokio::join!(stdout_task, stderr_task, wait_future);
 
     let duration_ms = start.elapsed().as_millis() as u64;
@@ -182,6 +288,22 @@ pub async fn execute_streaming(
         }
     };
 
+    // If the process was cancelled, return a specific result
+    if was_cancelled {
+        return Ok(ExecutionResult {
+            exit_code,
+            stdout: stdout_result.content.clone(),
+            stderr: stderr_result.content.clone(),
+            result: None,
+            duration_ms,
+            error: Some("Execution cancelled by user".to_string()),
+            stdout_truncated: stdout_result.truncated,
+            stderr_truncated: stderr_result.truncated,
+            stdout_bytes_truncated: stdout_result.bytes_truncated,
+            stderr_bytes_truncated: stderr_result.bytes_truncated,
+        });
+    }
+
     debug!(
         "Process execution completed: exit_code={}, duration={}ms, stdout_truncated={}, stderr_truncated={}",
         exit_code, duration_ms, stdout_result.truncated, stderr_result.truncated
@@ -253,6 +375,19 @@ pub async fn execute_streaming(
 }
 
+/// Send a Unix signal to a process by PID.
+///
+/// Uses raw `libc::kill()` to deliver signals for graceful process termination.
+/// This is safe because we only send signals to child processes we spawned.
+fn send_signal(pid: u32, signal: i32) {
+    // Safety: we're sending a signal to a known child process PID.
+    // The PID is valid because we obtained it from `child.id()` before the
+    // child exited.
+    unsafe {
+        libc::kill(pid as i32, signal);
+    }
+}
+
 /// Parse stdout content according to the specified output format.
 fn parse_output(stdout: &str, format: OutputFormat) -> Option<serde_json::Value> {
     let trimmed = stdout.trim();
     if trimmed.is_empty() {
View File
@@ -623,6 +623,7 @@ mod tests {
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::default(),
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -659,6 +660,7 @@ mod tests {
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::default(),
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -690,6 +692,7 @@ mod tests {
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::default(),
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -723,6 +726,7 @@ mod tests {
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::default(),
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -771,6 +775,7 @@ echo "missing=$missing"
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::default(),
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -814,6 +819,7 @@ echo '{"id": 3, "name": "Charlie"}'
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::Jsonl,
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -880,6 +886,7 @@ printf '{"status_code":200,"body":"hello","json":{\n "args": {\n "hello": "w
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::Json,
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -935,6 +942,7 @@ echo '{"result": "success", "count": 42}'
     parameter_delivery: attune_common::models::ParameterDelivery::default(),
     parameter_format: attune_common::models::ParameterFormat::default(),
     output_format: attune_common::models::OutputFormat::Json,
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
View File
@@ -11,7 +11,7 @@
 //! 4. **Verify runtime versions** — run verification commands for each registered
 //!    `RuntimeVersion` to determine which are available on this host/container
 //! 5. **Set up runtime environments** — create per-version environments for packs
-//! 6. Start heartbeat, execution consumer, and pack registration consumer
+//! 6. Start heartbeat, execution consumer, pack registration consumer, and cancel consumer
 
 use attune_common::config::Config;
 use attune_common::db::Database;
@@ -19,19 +19,21 @@ use attune_common::error::{Error, Result};
 use attune_common::models::ExecutionStatus;
 use attune_common::mq::{
     config::MessageQueueConfig as MqConfig, Connection, Consumer, ConsumerConfig,
-    ExecutionCompletedPayload, ExecutionStatusChangedPayload, MessageEnvelope, MessageType,
-    PackRegisteredPayload, Publisher, PublisherConfig,
+    ExecutionCancelRequestedPayload, ExecutionCompletedPayload, ExecutionStatusChangedPayload,
+    MessageEnvelope, MessageType, PackRegisteredPayload, Publisher, PublisherConfig,
 };
 use attune_common::repositories::{execution::ExecutionRepository, FindById};
 use attune_common::runtime_detection::runtime_in_filter;
 use chrono::Utc;
 use serde::{Deserialize, Serialize};
 use sqlx::PgPool;
+use std::collections::HashMap;
 use std::path::PathBuf;
 use std::sync::Arc;
 use std::time::Duration;
 use tokio::sync::{Mutex, RwLock, Semaphore};
 use tokio::task::{JoinHandle, JoinSet};
+use tokio_util::sync::CancellationToken;
 use tracing::{debug, error, info, warn};
 
 use crate::artifacts::ArtifactManager;
@@ -72,6 +74,8 @@ pub struct WorkerService {
     consumer_handle: Option<JoinHandle<()>>,
     pack_consumer: Option<Arc<Consumer>>,
     pack_consumer_handle: Option<JoinHandle<()>>,
+    cancel_consumer: Option<Arc<Consumer>>,
+    cancel_consumer_handle: Option<JoinHandle<()>>,
     worker_id: Option<i64>,
     /// Runtime filter derived from ATTUNE_WORKER_RUNTIMES
     runtime_filter: Option<Vec<String>>,
@@ -83,6 +87,10 @@ pub struct WorkerService {
     execution_semaphore: Arc<Semaphore>,
     /// Tracks in-flight execution tasks for graceful shutdown
     in_flight_tasks: Arc<Mutex<JoinSet<()>>>,
+    /// Maps execution ID → CancellationToken for running processes.
+    /// When a cancel request arrives, the token is triggered, causing
+    /// the process executor to send SIGINT → SIGTERM → SIGKILL.
+    cancel_tokens: Arc<Mutex<HashMap<i64, CancellationToken>>>,
 }
 
 impl WorkerService {
@@ -362,12 +370,15 @@ impl WorkerService {
             consumer_handle: None,
             pack_consumer: None,
             pack_consumer_handle: None,
+            cancel_consumer: None,
+            cancel_consumer_handle: None,
             worker_id: None,
             runtime_filter: runtime_filter_for_service,
             packs_base_dir,
             runtime_envs_dir,
             execution_semaphore: Arc::new(Semaphore::new(max_concurrent_tasks)),
             in_flight_tasks: Arc::new(Mutex::new(JoinSet::new())),
+            cancel_tokens: Arc::new(Mutex::new(HashMap::new())),
         })
     }
@@ -416,6 +427,9 @@ impl WorkerService {
         // Start consuming pack registration events
         self.start_pack_consumer().await?;
 
+        // Start consuming cancel requests
+        self.start_cancel_consumer().await?;
+
         info!("Worker Service started successfully");
         Ok(())
@@ -640,6 +654,12 @@ impl WorkerService {
             let _ = handle.await;
         }
 
+        if let Some(handle) = self.cancel_consumer_handle.take() {
+            info!("Stopping cancel consumer task...");
+            handle.abort();
+            let _ = handle.await;
+        }
+
         info!("Closing message queue connection...");
         if let Err(e) = self.mq_connection.close().await {
             warn!("Error closing message queue: {}", e);
@@ -733,6 +753,7 @@ impl WorkerService {
         let queue_name_for_log = queue_name.clone();
         let semaphore = self.execution_semaphore.clone();
         let in_flight = self.in_flight_tasks.clone();
+        let cancel_tokens = self.cancel_tokens.clone();
 
         // Spawn the consumer loop as a background task so start() can return
         let handle = tokio::spawn(async move {
@@ -745,6 +766,7 @@ impl WorkerService {
                 let db_pool = db_pool.clone();
                 let semaphore = semaphore.clone();
                 let in_flight = in_flight.clone();
+                let cancel_tokens = cancel_tokens.clone();
 
                 async move {
                     let execution_id = envelope.payload.execution_id;
@@ -765,6 +787,13 @@ impl WorkerService {
                         semaphore.available_permits()
                     );
 
+                    // Create a cancellation token for this execution
+                    let cancel_token = CancellationToken::new();
+                    {
+                        let mut tokens = cancel_tokens.lock().await;
+                        tokens.insert(execution_id, cancel_token.clone());
+                    }
+
                     // Spawn the actual execution as a background task so this
                     // handler returns immediately, acking the message and freeing
                     // the consumer loop to process the next delivery.
@@ -775,12 +804,20 @@ impl WorkerService {
                         let _permit = permit;
 
                         if let Err(e) = Self::handle_execution_scheduled(
-                            executor, publisher, db_pool, envelope,
+                            executor,
+                            publisher,
+                            db_pool,
+                            envelope,
+                            cancel_token,
                         )
                         .await
                         {
                             error!("Execution {} handler error: {}", execution_id, e);
                         }
+
+                        // Remove the cancel token now that execution is done
+                        let mut tokens = cancel_tokens.lock().await;
+                        tokens.remove(&execution_id);
                     });
 
                     Ok(())
@@ -813,6 +850,7 @@ impl WorkerService {
         publisher: Arc<Publisher>,
         db_pool: PgPool,
         envelope: MessageEnvelope<ExecutionScheduledPayload>,
+        cancel_token: CancellationToken,
     ) -> Result<()> {
         let execution_id = envelope.payload.execution_id;
@@ -821,6 +859,42 @@ impl WorkerService {
             execution_id
         );
 
+        // Check if the execution was already cancelled before we started
+        // (e.g. pre-running cancellation via the API).
+        {
+            if let Ok(Some(exec)) = ExecutionRepository::find_by_id(&db_pool, execution_id).await {
+                if matches!(
+                    exec.status,
+                    ExecutionStatus::Cancelled | ExecutionStatus::Canceling
+                ) {
+                    info!(
+                        "Execution {} already in {:?} state, skipping",
+                        execution_id, exec.status
+                    );
+                    // If it was Canceling, finalize to Cancelled
+                    if exec.status == ExecutionStatus::Canceling {
+                        let _ = Self::publish_status_update(
+                            &db_pool,
+                            &publisher,
+                            execution_id,
+                            ExecutionStatus::Cancelled,
+                            None,
+                            Some("Cancelled before execution started".to_string()),
+                        )
+                        .await;
+                        let _ = Self::publish_completion_notification(
+                            &db_pool,
+                            &publisher,
+                            execution_id,
+                            ExecutionStatus::Cancelled,
+                        )
+                        .await;
+                    }
+                    return Ok(());
+                }
+            }
+        }
+
         // Publish status: running
         if let Err(e) = Self::publish_status_update(
             &db_pool,
@@ -836,42 +910,88 @@ impl WorkerService {
             // Continue anyway - we'll update the database directly
         }
 
-        // Execute the action
-        match executor.execute(execution_id).await {
+        // Execute the action (with cancellation support)
+        match executor
+            .execute_with_cancel(execution_id, cancel_token.clone())
+            .await
+        {
             Ok(result) => {
-                info!(
-                    "Execution {} completed successfully in {}ms",
-                    execution_id, result.duration_ms
-                );
+                // Check if this was a cancellation
+                let was_cancelled = cancel_token.is_cancelled()
+                    || result
+                        .error
+                        .as_deref()
+                        .is_some_and(|e| e.contains("cancelled"));
 
-                // Publish status: completed
-                if let Err(e) = Self::publish_status_update(
-                    &db_pool,
-                    &publisher,
-                    execution_id,
-                    ExecutionStatus::Completed,
-                    result.result.clone(),
-                    None,
-                )
-                .await
-                {
-                    error!("Failed to publish success status: {}", e);
-                }
-
-                // Publish completion notification for queue management
-                if let Err(e) = Self::publish_completion_notification(
-                    &db_pool,
-                    &publisher,
-                    execution_id,
-                    ExecutionStatus::Completed,
-                )
-                .await
-                {
-                    error!(
-                        "Failed to publish completion notification for execution {}: {}",
-                        execution_id, e
-                    );
-                    // Continue - this is important for queue management but not fatal
-                }
+                if was_cancelled {
+                    info!(
+                        "Execution {} was cancelled in {}ms",
+                        execution_id, result.duration_ms
+                    );
+
+                    // Publish status: cancelled
+                    if let Err(e) = Self::publish_status_update(
+                        &db_pool,
+                        &publisher,
+                        execution_id,
+                        ExecutionStatus::Cancelled,
+                        None,
+                        Some("Cancelled by user".to_string()),
+                    )
+                    .await
+                    {
+                        error!("Failed to publish cancelled status: {}", e);
+                    }
+
+                    // Publish completion notification for queue management
+                    if let Err(e) = Self::publish_completion_notification(
+                        &db_pool,
+                        &publisher,
+                        execution_id,
+                        ExecutionStatus::Cancelled,
+                    )
+                    .await
+                    {
+                        error!(
+                            "Failed to publish completion notification for cancelled execution {}: {}",
+                            execution_id, e
+                        );
+                    }
+                } else {
+                    info!(
+                        "Execution {} completed successfully in {}ms",
+                        execution_id, result.duration_ms
+                    );
+
+                    // Publish status: completed
+                    if let Err(e) = Self::publish_status_update(
+                        &db_pool,
+                        &publisher,
+                        execution_id,
+                        ExecutionStatus::Completed,
+                        result.result.clone(),
+                        None,
+                    )
+                    .await
+                    {
+                        error!("Failed to publish success status: {}", e);
+                    }
+
+                    // Publish completion notification for queue management
+                    if let Err(e) = Self::publish_completion_notification(
+                        &db_pool,
+                        &publisher,
+                        execution_id,
+                        ExecutionStatus::Completed,
+                    )
+                    .await
+                    {
+                        error!(
+                            "Failed to publish completion notification for execution {}: {}",
+                            execution_id, e
+                        );
+                        // Continue - this is important for queue management but not fatal
+                    }
+                }
             }
             Err(e) => {
@@ -912,6 +1032,87 @@ impl WorkerService {
         Ok(())
     }
 
+    /// Start consuming execution cancel requests from the per-worker cancel queue.
+    async fn start_cancel_consumer(&mut self) -> Result<()> {
+        let worker_id = self
+            .worker_id
+            .ok_or_else(|| Error::Internal("Worker not registered".to_string()))?;
+        let queue_name = format!("worker.{}.cancel", worker_id);
+        info!("Starting cancel consumer for queue: {}", queue_name);
+
+        let consumer = Arc::new(
+            Consumer::new(
+                &self.mq_connection,
+                ConsumerConfig {
+                    queue: queue_name.clone(),
+                    tag: format!("worker-{}-cancel", worker_id),
+                    prefetch_count: 10,
+                    auto_ack: false,
+                    exclusive: false,
+                },
+            )
+            .await
+            .map_err(|e| Error::Internal(format!("Failed to create cancel consumer: {}", e)))?,
+        );
+
+        let consumer_for_task = consumer.clone();
+        let cancel_tokens = self.cancel_tokens.clone();
+        let queue_name_for_log = queue_name.clone();
+
+        let handle = tokio::spawn(async move {
+            info!(
+                "Cancel consumer loop started for queue '{}'",
+                queue_name_for_log
+            );
+
+            let result = consumer_for_task
+                .consume_with_handler(
+                    move |envelope: MessageEnvelope<ExecutionCancelRequestedPayload>| {
+                        let cancel_tokens = cancel_tokens.clone();
+                        async move {
+                            let execution_id = envelope.payload.execution_id;
+                            info!("Received cancel request for execution {}", execution_id);
+
+                            let tokens = cancel_tokens.lock().await;
+                            if let Some(token) = tokens.get(&execution_id) {
+                                info!("Triggering cancellation for execution {}", execution_id);
+                                token.cancel();
+                            } else {
+                                warn!(
+                                    "No cancel token found for execution {} \
+                                     (may have already completed or not yet started)",
+                                    execution_id
+                                );
+                            }
+
+                            Ok(())
+                        }
+                    },
+                )
+                .await;
+
+            match result {
+                Ok(()) => info!(
+                    "Cancel consumer loop for queue '{}' ended",
+                    queue_name_for_log
+                ),
+                Err(e) => error!(
+                    "Cancel consumer loop for queue '{}' failed: {}",
+                    queue_name_for_log, e
+                ),
+            }
+        });
+
+        self.cancel_consumer = Some(consumer);
+        self.cancel_consumer_handle = Some(handle);
+
+        info!("Cancel consumer initialized for queue: {}", queue_name);
+        Ok(())
+    }
+
     /// Publish execution status update
     async fn publish_status_update(
         db_pool: &PgPool,
@@ -935,6 +1136,7 @@ impl WorkerService {
             ExecutionStatus::Running => "running",
             ExecutionStatus::Completed => "completed",
             ExecutionStatus::Failed => "failed",
+            ExecutionStatus::Canceling => "canceling",
             ExecutionStatus::Cancelled => "cancelled",
             ExecutionStatus::Timeout => "timeout",
             _ => "unknown",
View File
@@ -86,6 +86,7 @@ fn make_context(action_ref: &str, entry_point: &str, runtime_name: &str) -> Exec
     parameter_delivery: ParameterDelivery::default(),
     parameter_format: ParameterFormat::default(),
     output_format: OutputFormat::default(),
+    cancel_token: None,
 }
 }
View File
@@ -51,6 +51,7 @@ fn make_python_context(
     parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
     parameter_format: attune_worker::runtime::ParameterFormat::default(),
     output_format: attune_worker::runtime::OutputFormat::default(),
+    cancel_token: None,
 }
 }
@@ -133,6 +134,7 @@ done
     parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
     parameter_format: attune_worker::runtime::ParameterFormat::default(),
     output_format: attune_worker::runtime::OutputFormat::default(),
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
@@ -291,6 +293,7 @@ async fn test_shell_process_runtime_truncation() {
     parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
     parameter_format: attune_worker::runtime::ParameterFormat::default(),
     output_format: attune_worker::runtime::OutputFormat::default(),
+    cancel_token: None,
 };
 let result = runtime.execute(context).await.unwrap();
View File
@@ -77,6 +77,7 @@ print(json.dumps(result))
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(), parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(), parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json, output_format: attune_worker::runtime::OutputFormat::Json,
cancel_token: None,
}; };
let result = runtime.execute(context).await.unwrap(); let result = runtime.execute(context).await.unwrap();
@@ -170,6 +171,7 @@ echo "SECURITY_PASS: Secrets not in environment but accessible via get_secret"
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -234,6 +236,7 @@ print(json.dumps({'secret_a': secrets.get('secret_a')}))
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json,
cancel_token: None,
};
let result1 = runtime.execute(context1).await.unwrap();
@@ -279,6 +282,7 @@ print(json.dumps({
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json,
cancel_token: None,
};
let result2 = runtime.execute(context2).await.unwrap();
@@ -333,6 +337,7 @@ print("ok")
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -384,6 +389,7 @@ fi
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -456,6 +462,7 @@ echo "PASS: No secrets in environment"
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
@@ -527,6 +534,7 @@ print(json.dumps({"leaked": leaked}))
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json,
cancel_token: None,
};
let result = runtime.execute(context).await.unwrap();
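The new `cancel_token: None` field added to every test's execution context makes cancellation opt-in per execution. A minimal Python sketch of the same pattern, using `threading.Event` as a stand-in token (the names here are illustrative, not the crate's actual API):

```python
import threading

def execute(work_items, cancel_token=None):
    """Run work items, stopping early if the optional token is set."""
    done = []
    for item in work_items:
        if cancel_token is not None and cancel_token.is_set():
            break  # cooperative cancellation point between items
        done.append(item * 2)
    return done

# No token: all items run.
print(execute([1, 2, 3]))  # [2, 4, 6]

# Pre-set token: nothing runs.
token = threading.Event()
token.set()
print(execute([1, 2, 3], token))  # []
```

Passing `None` (the default) keeps the uncancellable behavior the tests previously had, which is why every context in the diff gains the field without changing test outcomes.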

38
deny.toml Normal file
View File

@@ -0,0 +1,38 @@
[graph]
all-features = true

[advisories]
version = 2
yanked = "deny"
ignore = []

[licenses]
version = 2
confidence-threshold = 0.9
allow = [
    "MIT",
    "Apache-2.0",
    "BSD-2-Clause",
    "BSD-3-Clause",
    "ISC",
    "MPL-2.0",
    "Unicode-3.0",
    "Zlib",
    "CC0-1.0",
    "OpenSSL",
    "BSL-1.0",
]

[bans]
multiple-versions = "warn"
wildcards = "allow"
highlight = "all"
deny = []
skip = []
skip-tree = []

[sources]
unknown-registry = "deny"
unknown-git = "deny"
allow-registry = ["https://github.com/rust-lang/crates.io-index"]
allow-git = []

View File

@@ -98,6 +98,7 @@ services:
      - ./docker/init-packs.sh:/init-packs.sh:ro
      - packs_data:/opt/attune/packs
      - runtime_envs:/opt/attune/runtime_envs
      - artifacts_data:/opt/attune/artifacts
    environment:
      DB_HOST: postgres
      DB_PORT: 5432

View File

@@ -25,6 +25,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy workspace manifests and source code
COPY Cargo.toml Cargo.lock ./
COPY crates/ ./crates/
@@ -65,6 +68,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy workspace files
COPY Cargo.toml Cargo.lock ./
COPY crates/ ./crates/

View File

@@ -27,6 +27,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy dependency metadata first so `cargo fetch` layer is cached
# when only source code changes (Cargo.toml/Cargo.lock stay the same)
COPY Cargo.toml Cargo.lock ./

View File

@@ -29,6 +29,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy workspace configuration
COPY Cargo.toml Cargo.lock ./

View File

@@ -30,6 +30,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy workspace files
COPY Cargo.toml Cargo.lock ./
COPY crates/ ./crates/

View File

@@ -30,6 +30,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy dependency metadata first so `cargo fetch` layer is cached
# when only source code changes (Cargo.toml/Cargo.lock stay the same)
COPY Cargo.toml Cargo.lock ./

View File

@@ -27,6 +27,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy workspace manifests and source code
COPY Cargo.toml Cargo.lock ./
COPY crates/ ./crates/

View File

@@ -35,6 +35,9 @@ RUN apt-get update && apt-get install -y \
WORKDIR /build

# Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=16777216

# Copy dependency metadata first so `cargo fetch` layer is cached
# when only source code changes (Cargo.toml/Cargo.lock stay the same)
COPY Cargo.toml Cargo.lock ./

View File

@@ -78,6 +78,17 @@ else
    echo -e "${YELLOW}⚠${NC} Runtime environments directory not mounted, skipping"
fi

# Initialise artifacts volume with correct ownership.
# The API service (creates directories for file-backed artifact versions) and
# workers (write artifact files during execution) both run as attune uid 1000.
ARTIFACTS_DIR="${ARTIFACTS_DIR:-/opt/attune/artifacts}"
if [ -d "$ARTIFACTS_DIR" ] || mkdir -p "$ARTIFACTS_DIR" 2>/dev/null; then
    chown -R 1000:1000 "$ARTIFACTS_DIR"
    echo -e "${GREEN}✓${NC} Artifacts directory ready at: $ARTIFACTS_DIR"
else
    echo -e "${YELLOW}⚠${NC} Artifacts directory not mounted, skipping"
fi

# Check if source packs directory exists
if [ ! -d "$SOURCE_PACKS_DIR" ]; then
    echo -e "${RED}✗${NC} Source packs directory not found: $SOURCE_PACKS_DIR"
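The ownership dance above (create the directory if missing, then chown it to uid 1000 so both the API service and workers can write) can be sketched in Python. `ARTIFACTS_DIR` and the 1000:1000 owner come from the script; the function name is illustrative:

```python
import os

def ensure_owned_dir(path, uid=1000, gid=1000):
    """Create the directory if needed; chown only when running as root."""
    try:
        os.makedirs(path, exist_ok=True)
    except OSError:
        return False  # e.g. read-only filesystem: treat as "not mounted"
    if os.geteuid() == 0:
        # Mirror `chown -R 1000:1000` (top level only in this sketch).
        os.chown(path, uid, gid)
    return True

if ensure_owned_dir("/tmp/attune-artifacts-demo"):
    print("Artifacts directory ready")
```

As in the shell version, failure to create the directory is non-fatal: the caller just skips the artifacts setup and continues with pack loading.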

View File

@@ -35,6 +35,7 @@ BEGIN
'parent', NEW.parent,
'result', NEW.result,
'started_at', NEW.started_at,
'workflow_task', NEW.workflow_task,
'created', NEW.created,
'updated', NEW.updated
);
@@ -77,6 +78,7 @@ BEGIN
'parent', NEW.parent,
'result', NEW.result,
'started_at', NEW.started_at,
'workflow_task', NEW.workflow_task,
'created', NEW.created,
'updated', NEW.updated
);

View File

@@ -314,8 +314,117 @@ class PackLoader:
        print(f" ⚠ Could not resolve runtime for runner_type '{runner_type}'")
        return None
def upsert_workflow_definition(
    self,
    cursor,
    workflow_file_path: str,
    action_ref: str,
    action_data: Dict[str, Any],
) -> Optional[int]:
    """Load a workflow definition file and upsert it in the database.

    When an action YAML contains a `workflow_file` field, this method reads
    the referenced workflow YAML, creates or updates the corresponding
    `workflow_definition` row, and returns its ID so the action can be linked
    via the `workflow_def` FK.

    The action YAML's `parameters` and `output` fields take precedence over
    the workflow file's own schemas (allowing the action to customise the
    exposed interface without touching the workflow graph).

    Args:
        cursor: Database cursor.
        workflow_file_path: Path to the workflow file relative to the
            ``actions/`` directory (e.g. ``workflows/deploy.workflow.yaml``).
        action_ref: The ref of the action that references this workflow.
        action_data: The parsed action YAML dict (used for schema overrides).

    Returns:
        The database ID of the workflow_definition row, or None on failure.
    """
    actions_dir = self.pack_dir / "actions"
    full_path = actions_dir / workflow_file_path

    if not full_path.exists():
        print(f" ⚠ Workflow file '{workflow_file_path}' not found at {full_path}")
        return None

    try:
        workflow_data = self.load_yaml(full_path)
    except Exception as e:
        print(f" ⚠ Failed to parse workflow file '{workflow_file_path}': {e}")
        return None

    # The action YAML is authoritative for action-level metadata.
    # Fall back to the workflow file's own values only when present
    # (standalone workflow files in workflows/ still carry them).
    workflow_ref = workflow_data.get("ref") or action_ref
    label = workflow_data.get("label") or action_data.get("label", "")
    description = workflow_data.get("description") or action_data.get(
        "description", ""
    )
    version = workflow_data.get("version", "1.0.0")
    tags = workflow_data.get("tags") or action_data.get("tags", [])

    # The action YAML is authoritative for param_schema / out_schema.
    # Fall back to the workflow file's own schemas only if the action
    # YAML doesn't define them.
    param_schema = action_data.get("parameters") or workflow_data.get("parameters")
    out_schema = action_data.get("output") or workflow_data.get("output")
    param_schema_json = json.dumps(param_schema) if param_schema else None
    out_schema_json = json.dumps(out_schema) if out_schema else None

    # Store the full workflow definition as JSON
    definition_json = json.dumps(workflow_data)
    tags_list = tags if isinstance(tags, list) else []

    cursor.execute(
        """
        INSERT INTO workflow_definition (
            ref, pack, pack_ref, label, description, version,
            param_schema, out_schema, definition, tags, enabled
        )
        VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
        ON CONFLICT (ref) DO UPDATE SET
            label = EXCLUDED.label,
            description = EXCLUDED.description,
            version = EXCLUDED.version,
            param_schema = EXCLUDED.param_schema,
            out_schema = EXCLUDED.out_schema,
            definition = EXCLUDED.definition,
            tags = EXCLUDED.tags,
            enabled = EXCLUDED.enabled,
            updated = NOW()
        RETURNING id
        """,
        (
            workflow_ref,
            self.pack_id,
            self.pack_ref,
            label,
            description,
            version,
            param_schema_json,
            out_schema_json,
            definition_json,
            tags_list,
            True,
        ),
    )
    workflow_def_id = cursor.fetchone()[0]
    print(f" ✓ Workflow definition '{workflow_ref}' (ID: {workflow_def_id})")
    return workflow_def_id
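The override precedence described in the docstring (action YAML wins, workflow file is the fallback) reduces to a pair of `or`-chains. A small standalone sketch with made-up schema dicts, to make the fallback behavior concrete:

```python
def resolve_schemas(action_data, workflow_data):
    """Action-level schemas take precedence; the workflow file is the fallback."""
    param_schema = action_data.get("parameters") or workflow_data.get("parameters")
    out_schema = action_data.get("output") or workflow_data.get("output")
    return param_schema, out_schema

# Hypothetical example data: the action overrides parameters only.
action = {"parameters": {"type": "object", "properties": {"env": {"type": "string"}}}}
workflow = {"parameters": {"type": "object"}, "output": {"type": "object"}}

params, out = resolve_schemas(action, workflow)
print(params["properties"]["env"])  # the action's parameter schema won
print(out)                          # fell back to the workflow's output schema
```

Note that `or` also treats an explicitly empty `{}` in the action YAML as "not defined", which matches how the loader falls back above.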
def upsert_actions(self, runtime_ids: Dict[str, int]) -> Dict[str, int]:
    """Load action definitions.

    When an action YAML contains a ``workflow_file`` field, the loader reads
    the referenced workflow definition, upserts a ``workflow_definition``
    record, and links the action to it via ``action.workflow_def``. This
    allows the action YAML to control action-level metadata independently
    of the workflow graph, and lets multiple actions share a workflow file.
    """
    print("\n→ Loading actions...")
    actions_dir = self.pack_dir / "actions"
@@ -324,6 +433,7 @@ class PackLoader:
        return {}

    action_ids = {}
    workflow_count = 0
    cursor = self.conn.cursor()

    for yaml_file in sorted(actions_dir.glob("*.yaml")):
@@ -340,18 +450,36 @@ class PackLoader:
        label = action_data.get("label") or generate_label(name)
        description = action_data.get("description", "")

        # ── Workflow file handling ───────────────────────────────────
        workflow_file = action_data.get("workflow_file")
        workflow_def_id: Optional[int] = None

        if workflow_file:
            workflow_def_id = self.upsert_workflow_definition(
                cursor, workflow_file, ref, action_data
            )
            if workflow_def_id is not None:
                workflow_count += 1

        # For workflow actions the entrypoint is the workflow file path;
        # for regular actions it comes from entry_point in the YAML.
        if workflow_file:
            entrypoint = workflow_file
        else:
            entrypoint = action_data.get("entry_point", "")
            if not entrypoint:
                # Try to find corresponding script file
                for ext in [".sh", ".py"]:
                    script_path = actions_dir / f"{name}{ext}"
                    if script_path.exists():
                        entrypoint = str(script_path.relative_to(self.packs_dir))
                        break

        # Resolve runtime ID (workflow actions have no runtime)
        if workflow_file:
            runtime_id = None
        else:
            runtime_id = self.resolve_action_runtime(action_data, runtime_ids)

        param_schema = json.dumps(action_data.get("parameters", {}))
        out_schema = json.dumps(action_data.get("output", {}))
@@ -423,9 +551,25 @@ class PackLoader:
        action_id = cursor.fetchone()[0]
        action_ids[ref] = action_id

        # Link action to workflow definition if present
        if workflow_def_id is not None:
            cursor.execute(
                """
                UPDATE action SET workflow_def = %s, updated = NOW()
                WHERE id = %s
                """,
                (workflow_def_id, action_id),
            )
            print(
                f" ✓ Action '{ref}' (ID: {action_id}) → workflow def {workflow_def_id}"
            )
        else:
            print(f" ✓ Action '{ref}' (ID: {action_id})")

    cursor.close()
    if workflow_count > 0:
        print(f" ({workflow_count} workflow definition(s) registered)")
    return action_ids

def upsert_sensors(
@@ -561,7 +705,15 @@ class PackLoader:
    return sensor_ids

def load_pack(self):
    """Main loading process.

    Components are loaded in dependency order:

    1. Runtimes (no dependencies)
    2. Triggers (no dependencies)
    3. Actions (depend on runtime; workflow actions also create
       workflow_definition records)
    4. Sensors (depend on triggers and runtime)
    """
    print("=" * 60)
    print(f"Pack Loader - {self.pack_name}")
    print("=" * 60)
@@ -581,7 +733,7 @@ class PackLoader:
    # Load triggers
    trigger_ids = self.upsert_triggers()

    # Load actions (with runtime resolution + workflow definitions)
    action_ids = self.upsert_actions(runtime_ids)

    # Load sensors
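The dependency order listed in the `load_pack` docstring can be sketched as a tiny driver that threads IDs forward. The step names come from the docstring; the dict of callables is a stand-in for the real `PackLoader` methods:

```python
def load_pack_sketch(loader):
    """Run the loader steps in dependency order, threading IDs forward."""
    runtime_ids = loader["upsert_runtimes"]()
    trigger_ids = loader["upsert_triggers"]()
    action_ids = loader["upsert_actions"](runtime_ids)   # needs runtimes
    sensor_ids = loader["upsert_sensors"](trigger_ids)   # needs triggers
    return {"runtimes": runtime_ids, "triggers": trigger_ids,
            "actions": action_ids, "sensors": sensor_ids}

# Fake loader that records call order instead of touching a database.
calls = []
fake = {
    "upsert_runtimes": lambda: calls.append("runtimes") or {"py": 1},
    "upsert_triggers": lambda: calls.append("triggers") or {"cron": 2},
    "upsert_actions": lambda r: calls.append("actions") or {"deploy": 3},
    "upsert_sensors": lambda t: calls.append("sensors") or {"poll": 4},
}
load_pack_sketch(fake)
print(calls)  # ['runtimes', 'triggers', 'actions', 'sensors']
```

Because actions consume `runtime_ids` and sensors consume `trigger_ids`, reordering the steps would fail at the point a needed ID map is missing, which is why the loader fixes this order.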

6
web/knip.json Normal file
View File

@@ -0,0 +1,6 @@
{
"$schema": "https://unpkg.com/knip@latest/schema.json",
"entry": ["src/main.tsx", "vite.config.ts"],
"project": ["src/**/*.{ts,tsx}", "scripts/**/*.js"],
"ignore": ["src/api/**", "dist/**", "node_modules/**"]
}

View File

@@ -6,7 +6,9 @@
  "scripts": {
    "dev": "vite",
    "build": "tsc -b && vite build",
    "typecheck": "tsc -b --pretty false",
    "lint": "eslint .",
    "knip": "npx --yes knip --config knip.json --production",
    "preview": "vite preview",
    "generate:api": "curl -s http://localhost:8080/api-spec/openapi.json > openapi.json && npx openapi-typescript-codegen --input ./openapi.json --output ./src/api --client axios --useOptions"
  },

View File

@@ -62,6 +62,7 @@ export type ExecutionResponse = {
workflow_task?: {
  workflow_execution: number;
  task_name: string;
  triggered_by?: string | null;
  task_index?: number | null;
  task_batch?: number | null;
  retry_count: number;

View File

@@ -54,6 +54,7 @@ export type ExecutionSummary = {
workflow_task?: {
  workflow_execution: number;
  task_name: string;
  triggered_by?: string | null;
  task_index?: number | null;
  task_batch?: number | null;
  retry_count: number;

View File

@@ -1,326 +0,0 @@
import { useState, useMemo } from "react";
import { Link } from "react-router-dom";
import { formatDistanceToNow } from "date-fns";
import {
  ChevronDown,
  ChevronRight,
  Workflow,
  CheckCircle2,
  XCircle,
  Clock,
  Loader2,
  AlertTriangle,
  Ban,
  CircleDot,
  RotateCcw,
} from "lucide-react";
import { useChildExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";

interface WorkflowTasksPanelProps {
  /** The parent (workflow) execution ID */
  parentExecutionId: number;
  /** Whether the panel starts collapsed (default: false — open by default for workflows) */
  defaultCollapsed?: boolean;
}
/** Format a duration in ms to a human-readable string. */
function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  const mins = Math.floor(secs / 60);
  const remainSecs = Math.round(secs % 60);
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}
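The tiering in `formatDuration` (ms, then fractional seconds, then minutes and seconds, then hours and minutes) ports directly to other languages. A Python equivalent of the function above, for reference:

```python
def format_duration(ms: int) -> str:
    """Human-readable duration, mirroring the TSX formatDuration above."""
    if ms < 1000:
        return f"{ms}ms"
    secs = ms / 1000
    if secs < 60:
        return f"{secs:.1f}s"
    mins = int(secs // 60)
    remain_secs = round(secs % 60)
    if mins < 60:
        return f"{mins}m {remain_secs}s"
    hrs = mins // 60
    remain_mins = mins % 60
    return f"{hrs}h {remain_mins}m"

print(format_duration(750))        # 750ms
print(format_duration(1500))       # 1.5s
print(format_duration(90_000))     # 1m 30s
print(format_duration(3_660_000))  # 1h 1m
```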
function getStatusIcon(status: string) {
  switch (status) {
    case "completed":
      return <CheckCircle2 className="h-4 w-4 text-green-500" />;
    case "failed":
      return <XCircle className="h-4 w-4 text-red-500" />;
    case "running":
      return <Loader2 className="h-4 w-4 text-blue-500 animate-spin" />;
    case "requested":
    case "scheduling":
    case "scheduled":
      return <Clock className="h-4 w-4 text-yellow-500" />;
    case "timeout":
      return <AlertTriangle className="h-4 w-4 text-orange-500" />;
    case "canceling":
    case "cancelled":
      return <Ban className="h-4 w-4 text-gray-400" />;
    case "abandoned":
      return <AlertTriangle className="h-4 w-4 text-red-400" />;
    default:
      return <CircleDot className="h-4 w-4 text-gray-400" />;
  }
}
function getStatusBadgeClasses(status: string): string {
  switch (status) {
    case "completed":
      return "bg-green-100 text-green-800";
    case "failed":
      return "bg-red-100 text-red-800";
    case "running":
      return "bg-blue-100 text-blue-800";
    case "requested":
    case "scheduling":
    case "scheduled":
      return "bg-yellow-100 text-yellow-800";
    case "timeout":
      return "bg-orange-100 text-orange-800";
    case "canceling":
    case "cancelled":
      return "bg-gray-100 text-gray-800";
    case "abandoned":
      return "bg-red-100 text-red-600";
    default:
      return "bg-gray-100 text-gray-800";
  }
}
/**
 * Panel that displays workflow task (child) executions for a parent
 * workflow execution. Shows each task's name, action, status, and timing.
 */
export default function WorkflowTasksPanel({
  parentExecutionId,
  defaultCollapsed = false,
}: WorkflowTasksPanelProps) {
  const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed);
  const { data, isLoading, error } = useChildExecutions(parentExecutionId);

  // Subscribe to the unfiltered execution stream so that child execution
  // WebSocket notifications update the ["executions", { parent }] query cache
  // in real-time (the detail page only subscribes filtered by its own ID).
  useExecutionStream({ enabled: true });

  const tasks = useMemo(() => {
    if (!data?.data) return [];
    return data.data;
  }, [data]);

  const summary = useMemo(() => {
    const total = tasks.length;
    const completed = tasks.filter((t) => t.status === "completed").length;
    const failed = tasks.filter((t) => t.status === "failed").length;
    const running = tasks.filter(
      (t) =>
        t.status === "running" ||
        t.status === "requested" ||
        t.status === "scheduling" ||
        t.status === "scheduled",
    ).length;
    const other = total - completed - failed - running;
    return { total, completed, failed, running, other };
  }, [tasks]);

  if (!isLoading && tasks.length === 0 && !error) {
    // No child tasks — nothing to show
    return null;
  }
  return (
    <div className="bg-white shadow rounded-lg">
      {/* Header */}
      <button
        onClick={() => setIsCollapsed(!isCollapsed)}
        className="w-full flex items-center justify-between p-6 text-left hover:bg-gray-50 rounded-lg transition-colors"
      >
        <div className="flex items-center gap-3">
          {isCollapsed ? (
            <ChevronRight className="h-5 w-5 text-gray-400" />
          ) : (
            <ChevronDown className="h-5 w-5 text-gray-400" />
          )}
          <Workflow className="h-5 w-5 text-indigo-500" />
          <h2 className="text-xl font-semibold">Workflow Tasks</h2>
          {!isLoading && (
            <span className="text-sm text-gray-500">
              ({summary.total} task{summary.total !== 1 ? "s" : ""})
            </span>
          )}
        </div>

        {/* Summary badges */}
        {!isCollapsed || !isLoading ? (
          <div className="flex items-center gap-2">
            {summary.completed > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800">
                <CheckCircle2 className="h-3 w-3" />
                {summary.completed}
              </span>
            )}
            {summary.running > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800">
                <Loader2 className="h-3 w-3 animate-spin" />
                {summary.running}
              </span>
            )}
            {summary.failed > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-red-100 text-red-800">
                <XCircle className="h-3 w-3" />
                {summary.failed}
              </span>
            )}
            {summary.other > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-gray-100 text-gray-700">
                {summary.other}
              </span>
            )}
          </div>
        ) : null}
      </button>
      {/* Content */}
      {!isCollapsed && (
        <div className="px-6 pb-6">
          {isLoading && (
            <div className="flex items-center justify-center py-8">
              <Loader2 className="h-5 w-5 animate-spin text-gray-400" />
              <span className="ml-2 text-sm text-gray-500">
                Loading workflow tasks
              </span>
            </div>
          )}

          {error && (
            <div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded text-sm">
              Error loading workflow tasks:{" "}
              {error instanceof Error ? error.message : "Unknown error"}
            </div>
          )}

          {!isLoading && !error && tasks.length > 0 && (
            <div className="space-y-2">
              {/* Column headers */}
              <div className="grid grid-cols-12 gap-3 px-3 py-2 text-xs font-medium text-gray-500 uppercase tracking-wider border-b border-gray-100">
                <div className="col-span-1">#</div>
                <div className="col-span-3">Task</div>
                <div className="col-span-3">Action</div>
                <div className="col-span-2">Status</div>
                <div className="col-span-2">Duration</div>
                <div className="col-span-1">Retry</div>
              </div>

              {/* Task rows */}
              {tasks.map((task, idx) => {
                const wt = task.workflow_task;
                const taskName = wt?.task_name ?? `Task ${idx + 1}`;
                const retryCount = wt?.retry_count ?? 0;
                const maxRetries = wt?.max_retries ?? 0;
                const timedOut = wt?.timed_out ?? false;

                // Compute duration from started_at → updated (actual run time)
                const startedAt = task.started_at
                  ? new Date(task.started_at)
                  : null;
                const created = new Date(task.created);
                const updated = new Date(task.updated);
                const isTerminal =
                  task.status === "completed" ||
                  task.status === "failed" ||
                  task.status === "timeout";
                const durationMs =
                  wt?.duration_ms ??
                  (isTerminal && startedAt
                    ? updated.getTime() - startedAt.getTime()
                    : null);

                return (
                  <Link
                    key={task.id}
                    to={`/executions/${task.id}`}
                    className="grid grid-cols-12 gap-3 px-3 py-3 rounded-lg hover:bg-gray-50 transition-colors items-center group"
                  >
                    {/* Index */}
                    <div className="col-span-1 text-sm text-gray-400 font-mono">
                      {idx + 1}
                    </div>

                    {/* Task name */}
                    <div className="col-span-3 flex items-center gap-2 min-w-0">
                      {getStatusIcon(task.status)}
                      <span
                        className="text-sm font-medium text-gray-900 truncate group-hover:text-blue-600"
                        title={taskName}
                      >
                        {taskName}
                      </span>
                      {wt?.task_index != null && (
                        <span className="text-xs text-gray-400 flex-shrink-0">
                          [{wt.task_index}]
                        </span>
                      )}
                    </div>

                    {/* Action ref */}
                    <div className="col-span-3 min-w-0">
                      <span
                        className="text-sm text-gray-600 truncate block"
                        title={task.action_ref}
                      >
                        {task.action_ref}
                      </span>
                    </div>

                    {/* Status badge */}
                    <div className="col-span-2 flex items-center gap-1.5">
                      <span
                        className={`inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium ${getStatusBadgeClasses(task.status)}`}
                      >
                        {task.status}
                      </span>
                      {timedOut && (
                        <span title="Timed out">
                          <AlertTriangle className="h-3.5 w-3.5 text-orange-500" />
                        </span>
                      )}
                    </div>

                    {/* Duration */}
                    <div className="col-span-2 text-sm text-gray-500">
                      {task.status === "running" ? (
                        <span className="text-blue-600">
                          {formatDistanceToNow(startedAt ?? created, {
                            addSuffix: false,
                          })}
                        </span>
                      ) : durationMs != null && durationMs > 0 ? (
                        formatDuration(durationMs)
                      ) : (
                        <span className="text-gray-300"></span>
                      )}
                    </div>

                    {/* Retry info */}
                    <div className="col-span-1 text-sm text-gray-500">
                      {maxRetries > 0 ? (
                        <span
                          className="inline-flex items-center gap-0.5"
                          title={`Attempt ${retryCount + 1} of ${maxRetries + 1}`}
                        >
                          <RotateCcw className="h-3 w-3" />
                          {retryCount}/{maxRetries}
                        </span>
                      ) : (
                        <span className="text-gray-300"></span>
                      )}
                    </div>
                  </Link>
                );
              })}
            </div>
          )}
        </div>
      )}
    </div>
  );
}

View File

@@ -1,4 +1,4 @@
import { useState, useMemo, useEffect, useCallback, useRef } from "react";
import { formatDistanceToNow } from "date-fns";
import {
  ChevronDown,
@@ -14,6 +14,7 @@ import {
  Download,
  Eye,
  X,
  Radio,
} from "lucide-react";
import {
  useExecutionArtifacts,
@@ -136,7 +137,110 @@ function TextFileDetail({
  const [content, setContent] = useState<string | null>(null);
  const [loadError, setLoadError] = useState<string | null>(null);
  const [isLoadingContent, setIsLoadingContent] = useState(true);
  const [isStreaming, setIsStreaming] = useState(false);
  const [isWaiting, setIsWaiting] = useState(false);
  const [streamDone, setStreamDone] = useState(false);
  const preRef = useRef<HTMLPreElement>(null);
  const eventSourceRef = useRef<EventSource | null>(null);
  // Track whether the user has scrolled away from the bottom so we can
  // auto-scroll only when they're already at the end.
  const userScrolledAwayRef = useRef(false);

  // Auto-scroll the <pre> to the bottom when new content arrives,
  // unless the user has deliberately scrolled up.
  const scrollToBottom = useCallback(() => {
    const el = preRef.current;
    if (el && !userScrolledAwayRef.current) {
      el.scrollTop = el.scrollHeight;
    }
  }, []);

  // Detect whether the user has scrolled away from the bottom.
  const handleScroll = useCallback(() => {
    const el = preRef.current;
    if (!el) return;
    const atBottom = el.scrollHeight - el.scrollTop - el.clientHeight < 24;
    userScrolledAwayRef.current = !atBottom;
  }, []);

  // ---- SSE streaming path (used when execution is running) ----
  useEffect(() => {
    if (!isRunning) return;

    const token = localStorage.getItem("access_token");
    if (!token) {
      setLoadError("No authentication token available");
      setIsLoadingContent(false);
      return;
    }

    const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/stream?token=${encodeURIComponent(token)}`;
    const es = new EventSource(url);
    eventSourceRef.current = es;
    setIsStreaming(true);
    setStreamDone(false);

    es.addEventListener("waiting", (e: MessageEvent) => {
      setIsWaiting(true);
      setIsLoadingContent(false);
      // If the message says "File found", the next event will be content
      if (e.data?.includes("File found")) {
        setIsWaiting(false);
      }
    });

    es.addEventListener("content", (e: MessageEvent) => {
      setContent(e.data);
      setLoadError(null);
      setIsLoadingContent(false);
      setIsWaiting(false);
      // Scroll after React renders the new content
      requestAnimationFrame(scrollToBottom);
    });

    es.addEventListener("append", (e: MessageEvent) => {
      setContent((prev) => (prev ?? "") + e.data);
      setLoadError(null);
      requestAnimationFrame(scrollToBottom);
    });

    es.addEventListener("done", () => {
      setStreamDone(true);
      setIsStreaming(false);
      es.close();
    });

    es.addEventListener("error", (e: MessageEvent) => {
      // SSE spec fires generic error events on connection close.
      // Only show user-facing errors if the server sent an explicit event.
      if (e.data) {
        setLoadError(e.data);
      }
    });

    es.onerror = () => {
      // Connection dropped — EventSource will auto-reconnect, but if it
      // reaches CLOSED state we fall back to the download endpoint.
      if (es.readyState === EventSource.CLOSED) {
        setIsStreaming(false);
        // If we never got any content via SSE, fall back to download
        setContent((prev) => {
          if (prev === null) {
            // Will be handled by the fetch fallback below
          }
          return prev;
        });
      }
    };

    return () => {
      es.close();
      eventSourceRef.current = null;
      setIsStreaming(false);
    };
  }, [artifactId, isRunning, scrollToBottom]);
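The stream above carries named SSE events (`waiting`, `content`, `append`, `done`). As a reference for what the browser's EventSource is parsing on the wire, here is a minimal Python sketch of the SSE framing: the event names are taken from the code above, while the parser itself is illustrative and ignores `id:`/`retry:` fields:

```python
def parse_sse(raw: str):
    """Parse a server-sent-events text stream into (event, data) pairs."""
    events = []
    event_name, data_lines = "message", []
    for line in raw.split("\n"):
        if line == "":  # blank line dispatches the accumulated event
            if data_lines:
                events.append((event_name, "\n".join(data_lines)))
            event_name, data_lines = "message", []
        elif line.startswith("event:"):
            event_name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].lstrip())
    return events

stream = (
    "event: waiting\ndata: File not found yet\n\n"
    "event: content\ndata: first chunk\n\n"
    "event: append\ndata: more output\n\n"
    "event: done\ndata: ok\n\n"
)
print(parse_sse(stream))
```

Each blank line on the wire triggers one `addEventListener` callback in the component; `content` replaces the buffer while `append` concatenates, which is exactly the `setContent` split above.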
  // ---- Fetch fallback (used when not running, or SSE never connected) ----
  const fetchContent = useCallback(async () => {
    const token = localStorage.getItem("access_token");
    const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;
@@ -159,16 +263,10 @@ function TextFileDetail({
    }
  }, [artifactId]);

  // When NOT running (execution completed), use download endpoint once.
  useEffect(() => {
    if (isRunning) return;
    fetchContent();
  }, [isRunning, fetchContent]);

  return (
@@ -179,10 +277,19 @@ function TextFileDetail({
          {artifactName ?? "Text File"}
        </h4>
        <div className="flex items-center gap-2">
          {isStreaming && !streamDone && (
            <div className="flex items-center gap-1 text-xs text-green-600">
              <Radio className="h-3 w-3 animate-pulse" />
              <span>Streaming</span>
            </div>
          )}
          {streamDone && (
            <span className="text-xs text-gray-500">Stream complete</span>
          )}
          {isWaiting && (
            <div className="flex items-center gap-1 text-xs text-amber-600">
              <Loader2 className="h-3 w-3 animate-spin" />
              <span>Waiting for file</span>
            </div>
          )}
          <button
@@ -194,7 +301,7 @@ function TextFileDetail({
         </div>
       </div>

-      {isLoadingContent && (
+      {isLoadingContent && !isWaiting && (
         <div className="flex items-center gap-2 py-2 text-sm text-gray-500">
           <Loader2 className="h-4 w-4 animate-spin" />
           Loading content…
@@ -206,10 +313,20 @@ function TextFileDetail({
       )}

       {!isLoadingContent && !loadError && content !== null && (
-        <pre className="max-h-64 overflow-y-auto bg-gray-900 text-gray-100 rounded p-3 text-xs font-mono whitespace-pre-wrap break-all">
+        <pre
+          ref={preRef}
+          onScroll={handleScroll}
+          className="max-h-64 overflow-y-auto bg-gray-900 text-gray-100 rounded p-3 text-xs font-mono whitespace-pre-wrap break-all"
+        >
           {content || <span className="text-gray-500 italic">(empty)</span>}
         </pre>
       )}
+      {isWaiting && content === null && !loadError && (
+        <div className="bg-gray-900 rounded p-3 text-xs text-gray-500 italic">
+          Waiting for the worker to write the file…
+        </div>
+      )}
     </div>
   );
 }
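For reference, the live-status indicator introduced in the hunks above (Streaming / Stream complete / Waiting for file) can be sketched as a standalone pure function. This is a hypothetical mirror of the JSX conditions for illustration only, not code from the component:

```typescript
// Hypothetical mirror of the status-indicator conditions in TextFileDetail:
// "Streaming" while the SSE feed is live, "Stream complete" once it finishes,
// and "Waiting for file" before the worker has written anything.
type StreamState = {
  isStreaming: boolean;
  streamDone: boolean;
  isWaiting: boolean;
};

function indicatorLabel(s: StreamState): string | null {
  if (s.isStreaming && !s.streamDone) return "Streaming";
  if (s.streamDone) return "Stream complete";
  if (s.isWaiting) return "Waiting for file";
  return null; // no indicator rendered
}
```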


@@ -1,7 +1,7 @@
 import { memo, useEffect } from "react";
 import { Link } from "react-router-dom";
-import { X, ExternalLink, Loader2 } from "lucide-react";
-import { useExecution } from "@/hooks/useExecutions";
+import { X, ExternalLink, Loader2, XCircle } from "lucide-react";
+import { useExecution, useCancelExecution } from "@/hooks/useExecutions";
 import { useExecutionStream } from "@/hooks/useExecutionStream";
 import { formatDistanceToNow } from "date-fns";
 import type { ExecutionStatus } from "@/api";
@@ -51,6 +51,7 @@ const ExecutionPreviewPanel = memo(function ExecutionPreviewPanel({
 }: ExecutionPreviewPanelProps) {
   const { data, isLoading, error } = useExecution(executionId);
   const execution = data?.data;
+  const cancelExecution = useCancelExecution();

   // Subscribe to real-time updates for this execution
   useExecutionStream({ executionId, enabled: true });
@@ -70,6 +71,8 @@ const ExecutionPreviewPanel = memo(function ExecutionPreviewPanel({
     execution?.status === "scheduled" ||
     execution?.status === "requested";

+  const isCancellable = isRunning || execution?.status === "canceling";
+
   const startedAt = execution?.started_at
     ? new Date(execution.started_at)
     : null;
@@ -100,6 +103,28 @@ const ExecutionPreviewPanel = memo(function ExecutionPreviewPanel({
           )}
         </div>
         <div className="flex items-center gap-1 flex-shrink-0">
+          {isCancellable && (
+            <button
+              onClick={() => {
+                if (
+                  window.confirm(
+                    `Are you sure you want to cancel execution #${executionId}?`,
+                  )
+                ) {
+                  cancelExecution.mutate(executionId);
+                }
+              }}
+              disabled={cancelExecution.isPending}
+              className="p-1.5 text-gray-400 hover:text-red-600 rounded hover:bg-red-50 transition-colors"
+              title="Cancel execution"
+            >
+              {cancelExecution.isPending ? (
+                <Loader2 className="h-4 w-4 animate-spin" />
+              ) : (
+                <XCircle className="h-4 w-4" />
+              )}
+            </button>
+          )}
           <Link
             to={`/executions/${executionId}`}
             className="p-1.5 text-gray-400 hover:text-blue-600 rounded hover:bg-gray-100 transition-colors"
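The `isCancellable` rule added in this file can be expressed as a small standalone predicate. This is a sketch using the status strings that appear elsewhere in this diff (the in-flight set mirrors what the panel's `isRunning` covers); it is not the component's actual code:

```typescript
// An execution is cancellable while it is in-flight or already mid-cancel.
// The in-flight statuses here mirror the ones the panel treats as "running".
const IN_FLIGHT: readonly string[] = [
  "running",
  "scheduling",
  "scheduled",
  "requested",
];

function isCancellable(status: string): boolean {
  return IN_FLIGHT.includes(status) || status === "canceling";
}
```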


@@ -0,0 +1,446 @@
import { useState, useMemo } from "react";
import { Link } from "react-router-dom";
import { formatDistanceToNow } from "date-fns";
import {
ChevronDown,
ChevronRight,
Workflow,
ChartGantt,
List,
CheckCircle2,
XCircle,
Clock,
Loader2,
AlertTriangle,
Ban,
CircleDot,
RotateCcw,
} from "lucide-react";
import type { ExecutionSummary } from "@/api";
import { useChildExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import WorkflowTimelineDAG, {
type ParentExecutionInfo,
} from "@/components/executions/workflow-timeline";
// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------
type TabId = "timeline" | "tasks";
interface WorkflowDetailsPanelProps {
/** The parent (workflow) execution */
parentExecution: ParentExecutionInfo;
/** The action_ref of the parent execution (used to fetch workflow def) */
actionRef: string;
/** Whether the panel starts collapsed (default: false) */
defaultCollapsed?: boolean;
/** Which tab to show initially (default: "timeline") */
defaultTab?: TabId;
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function formatDuration(ms: number): string {
if (ms < 1000) return `${ms}ms`;
const secs = ms / 1000;
if (secs < 60) return `${secs.toFixed(1)}s`;
const mins = Math.floor(secs / 60);
const remainSecs = Math.round(secs % 60);
if (mins < 60) return `${mins}m ${remainSecs}s`;
const hrs = Math.floor(mins / 60);
const remainMins = mins % 60;
return `${hrs}h ${remainMins}m`;
}
function getStatusIcon(status: string) {
switch (status) {
case "completed":
return <CheckCircle2 className="h-4 w-4 text-green-500" />;
case "failed":
return <XCircle className="h-4 w-4 text-red-500" />;
case "running":
return <Loader2 className="h-4 w-4 text-blue-500 animate-spin" />;
case "requested":
case "scheduling":
case "scheduled":
return <Clock className="h-4 w-4 text-yellow-500" />;
case "timeout":
return <AlertTriangle className="h-4 w-4 text-orange-500" />;
case "canceling":
case "cancelled":
return <Ban className="h-4 w-4 text-gray-400" />;
case "abandoned":
return <AlertTriangle className="h-4 w-4 text-red-400" />;
default:
return <CircleDot className="h-4 w-4 text-gray-400" />;
}
}
function getStatusBadgeClasses(status: string): string {
switch (status) {
case "completed":
return "bg-green-100 text-green-800";
case "failed":
return "bg-red-100 text-red-800";
case "running":
return "bg-blue-100 text-blue-800";
case "requested":
case "scheduling":
case "scheduled":
return "bg-yellow-100 text-yellow-800";
case "timeout":
return "bg-orange-100 text-orange-800";
case "canceling":
case "cancelled":
return "bg-gray-100 text-gray-800";
case "abandoned":
return "bg-red-100 text-red-600";
default:
return "bg-gray-100 text-gray-800";
}
}
// ---------------------------------------------------------------------------
// Component
// ---------------------------------------------------------------------------
/**
* Combined "Workflow Details" panel that sits at the top of the execution
* detail page for workflow executions. Contains two tabs:
* - **Timeline** — the Gantt-style WorkflowTimelineDAG
* - **Tasks** — the tabular list of child task executions
*/
export default function WorkflowDetailsPanel({
parentExecution,
actionRef,
defaultCollapsed = false,
defaultTab = "timeline",
}: WorkflowDetailsPanelProps) {
const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed);
const [activeTab, setActiveTab] = useState<TabId>(defaultTab);
// Fetch child executions (shared between both tabs' summary badges)
const { data, isLoading, error } = useChildExecutions(parentExecution.id);
// Subscribe to unfiltered execution stream so child execution WebSocket
// notifications update the query cache in real-time.
useExecutionStream({ enabled: true });
const tasks = useMemo(() => data?.data ?? [], [data]);
const summary = useMemo(() => {
const total = tasks.length;
const completed = tasks.filter((t) => t.status === "completed").length;
const failed = tasks.filter((t) => t.status === "failed").length;
const running = tasks.filter(
(t) =>
t.status === "running" ||
t.status === "requested" ||
t.status === "scheduling" ||
t.status === "scheduled",
).length;
const other = total - completed - failed - running;
return { total, completed, failed, running, other };
}, [tasks]);
// Don't render at all if there are no children and we're done loading
if (!isLoading && tasks.length === 0 && !error) {
return null;
}
return (
<div className="bg-white shadow rounded-lg">
{/* ----------------------------------------------------------------- */}
{/* Header row: collapse toggle + title + summary badges */}
{/* ----------------------------------------------------------------- */}
<button
onClick={() => setIsCollapsed(!isCollapsed)}
className="w-full flex items-center justify-between px-6 py-4 text-left hover:bg-gray-50 rounded-t-lg transition-colors"
>
<div className="flex items-center gap-3">
{isCollapsed ? (
<ChevronRight className="h-5 w-5 text-gray-400" />
) : (
<ChevronDown className="h-5 w-5 text-gray-400" />
)}
<Workflow className="h-5 w-5 text-indigo-500" />
<h2 className="text-xl font-semibold">Workflow Details</h2>
{!isLoading && (
<span className="text-sm text-gray-500">
({summary.total} task{summary.total !== 1 ? "s" : ""})
</span>
)}
</div>
{/* Summary badges (always visible) */}
<div className="flex items-center gap-2">
{summary.completed > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800">
<CheckCircle2 className="h-3 w-3" />
{summary.completed}
</span>
)}
{summary.running > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800">
<Loader2 className="h-3 w-3 animate-spin" />
{summary.running}
</span>
)}
{summary.failed > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-red-100 text-red-800">
<XCircle className="h-3 w-3" />
{summary.failed}
</span>
)}
{summary.other > 0 && (
<span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-gray-100 text-gray-700">
{summary.other}
</span>
)}
</div>
</button>
{/* ----------------------------------------------------------------- */}
{/* Body (collapsible) */}
{/* ----------------------------------------------------------------- */}
{!isCollapsed && (
<div className="border-t border-gray-100">
{/* Tab bar */}
<div className="flex items-center gap-1 px-6 pt-3 pb-0">
<TabButton
active={activeTab === "timeline"}
onClick={() => setActiveTab("timeline")}
icon={<ChartGantt className="h-4 w-4" />}
label="Timeline"
/>
<TabButton
active={activeTab === "tasks"}
onClick={() => setActiveTab("tasks")}
icon={<List className="h-4 w-4" />}
label="Tasks"
/>
</div>
{/* Tab content — both tabs stay mounted so the timeline's
ResizeObserver remains active and containerWidth never resets. */}
<div className={activeTab === "timeline" ? "" : "hidden"}>
<WorkflowTimelineDAG
parentExecution={parentExecution}
actionRef={actionRef}
embedded
/>
</div>
<div className={activeTab === "tasks" ? "" : "hidden"}>
<TasksTab tasks={tasks} isLoading={isLoading} error={error} />
</div>
</div>
)}
</div>
);
}
// ---------------------------------------------------------------------------
// Tab Button
// ---------------------------------------------------------------------------
function TabButton({
active,
onClick,
icon,
label,
}: {
active: boolean;
onClick: () => void;
icon: React.ReactNode;
label: string;
}) {
return (
<button
onClick={(e) => {
e.stopPropagation();
onClick();
}}
className={`
flex items-center gap-1.5 px-3 py-2 text-sm font-medium rounded-t-md
transition-colors border-b-2
${
active
? "text-indigo-700 border-indigo-500 bg-indigo-50/50"
: "text-gray-500 border-transparent hover:text-gray-700 hover:bg-gray-50"
}
`}
>
{icon}
{label}
</button>
);
}
// ---------------------------------------------------------------------------
// Tasks Tab — table of child task executions
// ---------------------------------------------------------------------------
function TasksTab({
tasks,
isLoading,
error,
}: {
tasks: ExecutionSummary[];
isLoading: boolean;
error: unknown;
}) {
if (isLoading) {
return (
<div className="flex items-center justify-center py-8">
<Loader2 className="h-5 w-5 animate-spin text-gray-400" />
<span className="ml-2 text-sm text-gray-500">
Loading workflow tasks
</span>
</div>
);
}
if (error) {
return (
<div className="mx-6 my-4 bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded text-sm">
Error loading workflow tasks:{" "}
{error instanceof Error ? error.message : "Unknown error"}
</div>
);
}
if (tasks.length === 0) {
return (
<div className="flex items-center justify-center py-8 text-sm text-gray-500">
No workflow tasks yet.
</div>
);
}
return (
<div className="px-6 pb-6 pt-2">
<div className="space-y-2">
{/* Column headers */}
<div className="grid grid-cols-12 gap-3 px-3 py-2 text-xs font-medium text-gray-500 uppercase tracking-wider border-b border-gray-100">
<div className="col-span-1">#</div>
<div className="col-span-3">Task</div>
<div className="col-span-3">Action</div>
<div className="col-span-2">Status</div>
<div className="col-span-2">Duration</div>
<div className="col-span-1">Retry</div>
</div>
{/* Task rows */}
{tasks.map((task, idx) => {
const wt = task.workflow_task;
const taskName = wt?.task_name ?? `Task ${idx + 1}`;
const retryCount = wt?.retry_count ?? 0;
const maxRetries = wt?.max_retries ?? 0;
const timedOut = wt?.timed_out ?? false;
// Compute duration from started_at → updated (actual run time)
const startedAt = task.started_at ? new Date(task.started_at) : null;
const created = new Date(task.created);
const updated = new Date(task.updated);
const isTerminal =
task.status === "completed" ||
task.status === "failed" ||
task.status === "timeout";
const durationMs =
wt?.duration_ms ??
(isTerminal && startedAt
? updated.getTime() - startedAt.getTime()
: null);
return (
<Link
key={task.id}
to={`/executions/${task.id}`}
className="grid grid-cols-12 gap-3 px-3 py-3 rounded-lg hover:bg-gray-50 transition-colors items-center group"
>
{/* Index */}
<div className="col-span-1 text-sm text-gray-400 font-mono">
{idx + 1}
</div>
{/* Task name */}
<div className="col-span-3 flex items-center gap-2 min-w-0">
{getStatusIcon(task.status)}
<span
className="text-sm font-medium text-gray-900 truncate group-hover:text-blue-600"
title={taskName}
>
{taskName}
</span>
{wt?.task_index != null && (
<span className="text-xs text-gray-400 flex-shrink-0">
[{wt.task_index}]
</span>
)}
</div>
{/* Action ref */}
<div className="col-span-3 min-w-0">
<span
className="text-sm text-gray-600 truncate block"
title={task.action_ref}
>
{task.action_ref}
</span>
</div>
{/* Status badge */}
<div className="col-span-2 flex items-center gap-1.5">
<span
className={`inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium ${getStatusBadgeClasses(task.status)}`}
>
{task.status}
</span>
{timedOut && (
<span title="Timed out">
<AlertTriangle className="h-3.5 w-3.5 text-orange-500" />
</span>
)}
</div>
{/* Duration */}
<div className="col-span-2 text-sm text-gray-500">
{task.status === "running" ? (
<span className="text-blue-600">
{formatDistanceToNow(startedAt ?? created, {
addSuffix: false,
})}
</span>
) : durationMs != null && durationMs > 0 ? (
formatDuration(durationMs)
) : (
<span className="text-gray-300"></span>
)}
</div>
{/* Retry info */}
<div className="col-span-1 text-sm text-gray-500">
{maxRetries > 0 ? (
<span
className="inline-flex items-center gap-0.5"
title={`Attempt ${retryCount + 1} of ${maxRetries + 1}`}
>
<RotateCcw className="h-3 w-3" />
{retryCount}/{maxRetries}
</span>
) : (
<span className="text-gray-300"></span>
)}
</div>
</Link>
);
})}
</div>
</div>
);
}
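A quick sanity check of the `formatDuration` helper defined near the top of this file, copied verbatim so the snippet is self-contained:

```typescript
// Copied from WorkflowDetailsPanel above: buckets a millisecond duration
// into ms, seconds, minutes+seconds, or hours+minutes.
function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  const mins = Math.floor(secs / 60);
  const remainSecs = Math.round(secs % 60);
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}

console.log(formatDuration(850)); // "850ms"
console.log(formatDuration(12340)); // "12.3s"
console.log(formatDuration(65000)); // "1m 5s"
console.log(formatDuration(3720000)); // "1h 2m"
```

Note one rounding quirk: `formatDuration(59999)` returns `"60.0s"` rather than rolling over to `"1m 0s"`, since the branch check uses the unrounded seconds value.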


@@ -0,0 +1,461 @@
/**
* TimelineModal — Full-screen modal for the Workflow Timeline DAG.
*
* Opens as a portal overlay with:
* - A much larger vertical layout (more lane height, bigger bars)
* - A timescale zoom slider that re-computes the layout at wider widths
* - Horizontal scroll for zoomed-in views
* - All the same interactions as the inline renderer (hover, click, double-click)
* - Escape key / close button to dismiss
*/
import { useState, useRef, useCallback, useMemo, useEffect } from "react";
import { createPortal } from "react-dom";
import { X, ZoomIn, ZoomOut, RotateCcw, GitBranch } from "lucide-react";
import type {
TimelineTask,
TimelineEdge,
TimelineMilestone,
LayoutConfig,
ComputedLayout,
} from "./types";
import { DEFAULT_LAYOUT } from "./types";
import { computeLayout } from "./layout";
import TimelineRenderer from "./TimelineRenderer";
// ---------------------------------------------------------------------------
// Props
// ---------------------------------------------------------------------------
interface TimelineModalProps {
/** Whether the modal is open */
isOpen: boolean;
/** Callback to close the modal */
onClose: () => void;
/** Timeline tasks */
tasks: TimelineTask[];
/** Structural dependency edges between tasks */
taskEdges: TimelineEdge[];
/** Synthetic milestone nodes */
milestones: TimelineMilestone[];
/** Edges connecting milestones */
milestoneEdges: TimelineEdge[];
/** Direct task→task edge keys replaced by milestone-routed paths */
suppressedEdgeKeys?: Set<string>;
/** Callback when a task is double-clicked (navigate to execution) */
onTaskClick?: (task: TimelineTask) => void;
/** Summary stats for the header */
summary: {
total: number;
completed: number;
failed: number;
running: number;
other: number;
durationMs: number | null;
};
}
// ---------------------------------------------------------------------------
// Constants
// ---------------------------------------------------------------------------
/** The modal layout uses more generous spacing */
const MODAL_LAYOUT: LayoutConfig = {
...DEFAULT_LAYOUT,
laneHeight: 44,
barHeight: 28,
lanePadding: 8,
milestoneSize: 12,
paddingTop: 44,
paddingBottom: 24,
paddingLeft: 24,
paddingRight: 24,
minBarWidth: 12,
};
const MIN_ZOOM = 1;
const MAX_ZOOM = 8;
const ZOOM_STEP = 0.25;
// ---------------------------------------------------------------------------
// Component
// ---------------------------------------------------------------------------
export default function TimelineModal({
isOpen,
onClose,
tasks,
taskEdges,
milestones,
milestoneEdges,
suppressedEdgeKeys,
onTaskClick,
summary,
}: TimelineModalProps) {
const [zoom, setZoom] = useState(1);
const scrollRef = useRef<HTMLDivElement>(null);
const containerRef = useRef<HTMLDivElement>(null);
const [containerWidth, setContainerWidth] = useState(1200);
// ---- Observe container width ----
useEffect(() => {
if (!isOpen) return;
const el = containerRef.current;
if (!el) return;
// Initial measurement
setContainerWidth(el.clientWidth);
const observer = new ResizeObserver((entries) => {
for (const entry of entries) {
if (entry.contentRect.width > 0) {
setContainerWidth(entry.contentRect.width);
}
}
});
observer.observe(el);
return () => observer.disconnect();
}, [isOpen]);
// ---- Keyboard handling (Escape to close) ----
useEffect(() => {
if (!isOpen) return;
const handleKey = (e: KeyboardEvent) => {
if (e.key === "Escape") {
onClose();
}
};
window.addEventListener("keydown", handleKey);
return () => window.removeEventListener("keydown", handleKey);
}, [isOpen, onClose]);
// ---- Prevent body scroll when modal is open ----
useEffect(() => {
if (!isOpen) return;
const prev = document.body.style.overflow;
document.body.style.overflow = "hidden";
return () => {
document.body.style.overflow = prev;
};
}, [isOpen]);
// ---- Adjust layout config based on task count ----
const layoutConfig: LayoutConfig = useMemo(() => {
const taskCount = tasks.length;
if (taskCount > 80) {
return {
...MODAL_LAYOUT,
laneHeight: 32,
barHeight: 20,
lanePadding: 6,
};
}
if (taskCount > 40) {
return {
...MODAL_LAYOUT,
laneHeight: 38,
barHeight: 24,
lanePadding: 7,
};
}
return MODAL_LAYOUT;
}, [tasks.length]);
// ---- Compute layout at the zoomed width ----
const layout: ComputedLayout | null = useMemo(() => {
if (tasks.length === 0) return null;
// Zoom stretches the timeline horizontally
const effectiveWidth = Math.max(containerWidth * zoom, 600);
return computeLayout(
tasks,
taskEdges,
milestones,
milestoneEdges,
effectiveWidth,
layoutConfig,
suppressedEdgeKeys,
);
}, [
tasks,
taskEdges,
milestones,
milestoneEdges,
containerWidth,
zoom,
layoutConfig,
suppressedEdgeKeys,
]);
// ---- Zoom handlers ----
const handleZoomIn = useCallback(() => {
setZoom((z) => Math.min(MAX_ZOOM, z + ZOOM_STEP));
}, []);
const handleZoomOut = useCallback(() => {
setZoom((z) => Math.max(MIN_ZOOM, z - ZOOM_STEP));
}, []);
const handleZoomReset = useCallback(() => {
setZoom(1);
if (scrollRef.current) {
scrollRef.current.scrollLeft = 0;
}
}, []);
const handleZoomSlider = useCallback(
(e: React.ChangeEvent<HTMLInputElement>) => {
setZoom(parseFloat(e.target.value));
},
[],
);
// ---- Wheel zoom on the timeline area ----
const handleWheel = useCallback((e: React.WheelEvent) => {
// Only zoom on Ctrl+wheel or meta+wheel to avoid interfering with normal scroll
if (!e.ctrlKey && !e.metaKey) return;
e.preventDefault();
const delta = e.deltaY > 0 ? -ZOOM_STEP : ZOOM_STEP;
setZoom((z) => {
const newZoom = Math.max(MIN_ZOOM, Math.min(MAX_ZOOM, z + delta));
return newZoom;
});
}, []);
if (!isOpen) return null;
const content = (
<div
className="fixed inset-0 z-50 flex flex-col"
style={{ backgroundColor: "rgba(0, 0, 0, 0.6)" }}
onClick={(e) => {
// Close on backdrop click
if (e.target === e.currentTarget) onClose();
}}
>
{/* Modal container */}
<div className="flex flex-col m-4 md:m-6 lg:m-8 bg-white rounded-xl shadow-2xl overflow-hidden flex-1 min-h-0">
{/* ---- Header ---- */}
<div className="flex items-center justify-between px-5 py-3 border-b border-gray-200 bg-gray-50/80 flex-shrink-0">
<div className="flex items-center gap-3">
<GitBranch className="h-4 w-4 text-indigo-500" />
<h2 className="text-sm font-semibold text-gray-800">
Workflow Timeline
</h2>
<span className="text-xs text-gray-400">
{summary.total} task{summary.total !== 1 ? "s" : ""}
{summary.durationMs != null && (
<> · {formatDurationShort(summary.durationMs)}</>
)}
</span>
{/* Summary badges */}
<div className="flex items-center gap-1.5 ml-2">
{summary.completed > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-green-100 text-green-700">
{summary.completed}
</span>
)}
{summary.running > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-blue-100 text-blue-700">
{summary.running}
</span>
)}
{summary.failed > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-red-100 text-red-700">
{summary.failed}
</span>
)}
{summary.other > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-gray-100 text-gray-500">
{summary.other}
</span>
)}
</div>
</div>
{/* Right: zoom controls + close */}
<div className="flex items-center gap-3">
{/* Zoom controls */}
<div className="flex items-center gap-2 bg-white border border-gray-200 rounded-lg px-2.5 py-1.5 shadow-sm">
<button
onClick={handleZoomOut}
disabled={zoom <= MIN_ZOOM}
className="p-0.5 text-gray-500 hover:text-gray-800 disabled:text-gray-300 disabled:cursor-not-allowed"
title="Zoom out"
>
<ZoomOut className="h-3.5 w-3.5" />
</button>
<input
type="range"
min={MIN_ZOOM}
max={MAX_ZOOM}
step={ZOOM_STEP}
value={zoom}
onChange={handleZoomSlider}
className="w-24 h-1 accent-indigo-500 cursor-pointer"
title={`Timescale: ${Math.round(zoom * 100)}%`}
/>
<button
onClick={handleZoomIn}
disabled={zoom >= MAX_ZOOM}
className="p-0.5 text-gray-500 hover:text-gray-800 disabled:text-gray-300 disabled:cursor-not-allowed"
title="Zoom in"
>
<ZoomIn className="h-3.5 w-3.5" />
</button>
<span className="text-xs text-gray-500 font-mono tabular-nums w-10 text-center">
{Math.round(zoom * 100)}%
</span>
{zoom !== 1 && (
<button
onClick={handleZoomReset}
className="p-0.5 text-gray-400 hover:text-gray-700"
title="Reset zoom"
>
<RotateCcw className="h-3 w-3" />
</button>
)}
</div>
{/* Close button */}
<button
onClick={onClose}
className="p-1.5 text-gray-400 hover:text-gray-700 hover:bg-gray-100 rounded-lg transition-colors"
title="Close (Esc)"
>
<X className="h-5 w-5" />
</button>
</div>
</div>
{/* ---- Legend ---- */}
<div className="flex items-center gap-3 px-5 py-2 text-[10px] text-gray-400 border-b border-gray-100 flex-shrink-0">
<LegendItem color="#22c55e" label="Completed" />
<LegendItem color="#3b82f6" label="Running" />
<LegendItem color="#ef4444" label="Failed" dashed />
<LegendItem color="#f97316" label="Timeout" dotted />
<LegendItem color="#9ca3af" label="Pending" />
<span className="ml-2 text-gray-300">|</span>
<EdgeLegendItem color="#22c55e" label="Succeeded" />
<EdgeLegendItem color="#ef4444" label="Failed" dashed />
<EdgeLegendItem color="#9ca3af" label="Always" />
<span className="ml-auto text-gray-300">
Ctrl+scroll to zoom · Click task to highlight path · Double-click to
view
</span>
</div>
{/* ---- Timeline body ---- */}
<div
ref={containerRef}
className="flex-1 min-h-0 overflow-auto"
onWheel={handleWheel}
>
{layout ? (
<div ref={scrollRef} className="min-h-full">
<TimelineRenderer
layout={layout}
tasks={tasks}
config={layoutConfig}
onTaskClick={onTaskClick}
idPrefix="modal-"
/>
</div>
) : (
<div className="flex items-center justify-center h-full">
<span className="text-sm text-gray-400">No tasks to display</span>
</div>
)}
</div>
</div>
</div>
);
return createPortal(content, document.body);
}
// ---------------------------------------------------------------------------
// Legend sub-components (duplicated from WorkflowTimelineDAG to keep modal
// self-contained — these are tiny presentational helpers)
// ---------------------------------------------------------------------------
function LegendItem({
color,
label,
dashed,
dotted,
}: {
color: string;
label: string;
dashed?: boolean;
dotted?: boolean;
}) {
return (
<span className="flex items-center gap-1">
<span
className="inline-block w-5 h-2.5 rounded-sm"
style={{
backgroundColor: color,
opacity: 0.7,
border: dashed
? `1px dashed ${color}`
: dotted
? `1px dotted ${color}`
: undefined,
}}
/>
<span>{label}</span>
</span>
);
}
function EdgeLegendItem({
color,
label,
dashed,
}: {
color: string;
label: string;
dashed?: boolean;
}) {
return (
<span className="flex items-center gap-1">
<svg width="16" height="8" viewBox="0 0 16 8">
<line
x1="0"
y1="4"
x2="16"
y2="4"
stroke={color}
strokeWidth="1.5"
strokeDasharray={dashed ? "3 2" : undefined}
opacity="0.7"
/>
<polygon points="12,1 16,4 12,7" fill={color} opacity="0.6" />
</svg>
<span>{label}</span>
</span>
);
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function formatDurationShort(ms: number): string {
if (ms < 1000) return `${Math.round(ms)}ms`;
const secs = ms / 1000;
if (secs < 60) return `${secs.toFixed(1)}s`;
const mins = Math.floor(secs / 60);
const remainSecs = Math.round(secs % 60);
if (mins < 60) return `${mins}m ${remainSecs}s`;
const hrs = Math.floor(mins / 60);
const remainMins = mins % 60;
return `${hrs}h ${remainMins}m`;
}
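The modal's wheel and slider handlers above reduce to a clamp between `MIN_ZOOM` and `MAX_ZOOM`. As a standalone sketch of the Ctrl+wheel behavior (constants copied from the file; the function name is illustrative):

```typescript
// Mirrors TimelineModal's zoom constants and the clamp applied by the
// Ctrl+wheel handler: scrolling down (positive deltaY) zooms out,
// scrolling up zooms in, always staying within [MIN_ZOOM, MAX_ZOOM].
const MIN_ZOOM = 1;
const MAX_ZOOM = 8;
const ZOOM_STEP = 0.25;

function applyWheelZoom(zoom: number, deltaY: number): number {
  const delta = deltaY > 0 ? -ZOOM_STEP : ZOOM_STEP;
  return Math.max(MIN_ZOOM, Math.min(MAX_ZOOM, zoom + delta));
}
```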

(File diff suppressed because it is too large.)


@@ -0,0 +1,572 @@
/**
* WorkflowTimelineDAG — Orchestrator component for the Prefect-style
* workflow run timeline visualization.
*
* This component:
* 1. Fetches the workflow definition (for transition metadata)
* 2. Transforms child execution summaries into timeline structures
* 3. Computes the DAG layout (lanes, positions, edges)
* 4. Delegates rendering to TimelineRenderer
*
* It is designed to be embedded in the ExecutionDetailPage for workflow
* executions, receiving child execution data from the parent.
*/
import { useMemo, useRef, useCallback, useState, useEffect } from "react";
import { useNavigate } from "react-router-dom";
import type { ExecutionSummary } from "@/api";
import { useWorkflow } from "@/hooks/useWorkflows";
import { useChildExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import {
ChartGantt,
ChevronDown,
ChevronRight,
Loader2,
Maximize2,
} from "lucide-react";
import type {
TimelineTask,
TimelineEdge,
TimelineMilestone,
WorkflowDefinition,
LayoutConfig,
} from "./types";
import { DEFAULT_LAYOUT } from "./types";
import {
buildTimelineTasks,
collapseWithItemsGroups,
buildEdges,
buildMilestones,
} from "./data";
import { computeLayout } from "./layout";
import TimelineRenderer from "./TimelineRenderer";
import TimelineModal from "./TimelineModal";
// ---------------------------------------------------------------------------
// Minimal parent execution shape accepted by this component.
// Both ExecutionResponse and ExecutionSummary satisfy this interface,
// so callers don't need an ugly cast.
// ---------------------------------------------------------------------------
export interface ParentExecutionInfo {
id: number;
action_ref: string;
status: string;
created: string;
updated: string;
started_at?: string | null;
}
// ---------------------------------------------------------------------------
// Props
// ---------------------------------------------------------------------------
interface WorkflowTimelineDAGProps {
/** The parent (workflow) execution — accepts ExecutionResponse or ExecutionSummary */
parentExecution: ParentExecutionInfo;
/** The action_ref of the parent execution (used to fetch workflow def) */
actionRef: string;
/** Whether the panel starts collapsed */
defaultCollapsed?: boolean;
/**
* When true, renders only the timeline content (legend, renderer, modal)
* without the outer card wrapper, header button, or collapse toggle.
* Used when the component is embedded inside another panel (e.g. WorkflowDetailsPanel).
*/
embedded?: boolean;
}
// ---------------------------------------------------------------------------
// Component
// ---------------------------------------------------------------------------
export default function WorkflowTimelineGraph({
parentExecution,
actionRef,
defaultCollapsed = false,
embedded = false,
}: WorkflowTimelineDAGProps) {
const navigate = useNavigate();
const containerRef = useRef<HTMLDivElement>(null);
const [isCollapsed, setIsCollapsed] = useState(
embedded ? false : defaultCollapsed,
);
const [isModalOpen, setIsModalOpen] = useState(false);
const [containerWidth, setContainerWidth] = useState(900);
const [nowMs, setNowMs] = useState(Date.now);
// ---- Determine if the workflow is still in-flight ----
const isTerminal = [
"completed",
"failed",
"timeout",
"cancelled",
"abandoned",
].includes(parentExecution.status);
// ---- Smooth animation via requestAnimationFrame ----
// While the workflow is running and the panel is visible, tick at display
// refresh rate (~60fps) so running task bars and the time axis grow smoothly.
useEffect(() => {
if (isTerminal || (!embedded && isCollapsed)) return;
let rafId: number;
const tick = () => {
setNowMs(Date.now());
rafId = requestAnimationFrame(tick);
};
rafId = requestAnimationFrame(tick);
return () => cancelAnimationFrame(rafId);
}, [isTerminal, isCollapsed, embedded]);
// ---- Data fetching ----
// Fetch child executions
const { data: childData, isLoading: childrenLoading } = useChildExecutions(
parentExecution.id,
);
// Subscribe to real-time execution updates so child tasks update live
useExecutionStream({ enabled: true });
// Fetch workflow definition for transition metadata
// The workflow ref matches the action ref for workflow actions
const { data: workflowData } = useWorkflow(actionRef);
const childExecutions: ExecutionSummary[] = useMemo(() => {
return childData?.data ?? [];
}, [childData]);
const workflowDef: WorkflowDefinition | null = useMemo(() => {
if (!workflowData?.data?.definition) return null;
return workflowData.data.definition as WorkflowDefinition;
}, [workflowData]);
// ---- Observe container width for responsive layout ----
useEffect(() => {
const el = containerRef.current;
if (!el) return;
const observer = new ResizeObserver((entries) => {
for (const entry of entries) {
const w = entry.contentRect.width;
if (w > 0) setContainerWidth(w);
}
});
observer.observe(el);
return () => observer.disconnect();
}, [isCollapsed]);
// ---- Build timeline data structures ----
// Split into two phases:
// 1. Structural memo — edges and upstream/downstream links. These depend
// only on the set of child executions and the workflow definition, NOT
// on the current time. Recomputes only when real data changes.
// 2. Per-frame memo — task time positions, milestones, and layout. These
// depend on `nowMs` so they update every animation frame (~60fps) while
// the workflow is running, giving smooth bar growth.
// Phase 1: Build tasks (without time-dependent endMs) and compute edges.
// `buildEdges` mutates tasks' upstreamIds/downstreamIds, so we must call
// it in the same memo that creates the task objects.
const { structuralTasks, taskEdges } = useMemo(() => {
if (childExecutions.length === 0) {
return {
structuralTasks: [] as TimelineTask[],
taskEdges: [] as TimelineEdge[],
};
}
// Build individual tasks, then collapse large with_items groups into
// single synthetic nodes before computing edges.
const rawTasks = buildTimelineTasks(childExecutions, workflowDef);
const { tasks: structuralTasks, memberToGroup } = collapseWithItemsGroups(
rawTasks,
childExecutions,
workflowDef,
);
// Derive dependency edges (purely structural — no time dependency).
// Pass the collapse mapping so edges redirect to group nodes.
const taskEdges = buildEdges(
structuralTasks,
childExecutions,
workflowDef,
memberToGroup,
);
return { structuralTasks, taskEdges };
}, [childExecutions, workflowDef]);
// Phase 2: Patch running-task time positions and build milestones.
// This runs every animation frame while the workflow is active.
const { tasks, milestones, milestoneEdges, suppressedEdgeKeys } =
useMemo(() => {
if (structuralTasks.length === 0) {
return {
tasks: [] as TimelineTask[],
milestones: [] as TimelineMilestone[],
milestoneEdges: [] as TimelineEdge[],
suppressedEdgeKeys: new Set<string>(),
};
}
// Patch endMs / durationMs for running tasks so bars grow in real time.
// We shallow-clone each task that needs updating to keep React diffing
// efficient (unchanged tasks keep the same object identity).
const tasks = structuralTasks.map((t) => {
if (t.state === "running" && t.startMs != null) {
const endMs = nowMs;
return { ...t, endMs, durationMs: endMs - t.startMs };
}
return t;
});
// Build milestones (start/end diamonds, merge/fork junctions)
const parentAsSummary: ExecutionSummary = {
id: parentExecution.id,
action_ref: parentExecution.action_ref,
status: parentExecution.status as ExecutionSummary["status"],
created: parentExecution.created,
updated: parentExecution.updated,
started_at: parentExecution.started_at,
};
const { milestones, milestoneEdges, suppressedEdgeKeys } =
buildMilestones(tasks, parentAsSummary);
return { tasks, milestones, milestoneEdges, suppressedEdgeKeys };
}, [structuralTasks, parentExecution, nowMs]);
// ---- Compute layout ----
const layoutConfig: LayoutConfig = useMemo(() => {
// Adjust layout based on task count for readability
const taskCount = tasks.length;
if (taskCount > 50) {
return {
...DEFAULT_LAYOUT,
laneHeight: 26,
barHeight: 16,
lanePadding: 5,
};
}
if (taskCount > 20) {
return {
...DEFAULT_LAYOUT,
laneHeight: 30,
barHeight: 18,
lanePadding: 6,
};
}
return DEFAULT_LAYOUT;
}, [tasks.length]);
const layout = useMemo(() => {
if (tasks.length === 0) return null;
return computeLayout(
tasks,
taskEdges,
milestones,
milestoneEdges,
containerWidth,
layoutConfig,
suppressedEdgeKeys,
);
}, [
tasks,
taskEdges,
milestones,
milestoneEdges,
containerWidth,
layoutConfig,
suppressedEdgeKeys,
]);
// ---- Handlers ----
const handleTaskClick = useCallback(
(task: TimelineTask) => {
navigate(`/executions/${task.id}`);
},
[navigate],
);
// ---- Summary stats ----
const summary = useMemo(() => {
const total = childExecutions.length;
const completed = childExecutions.filter(
(e) => e.status === "completed",
).length;
const failed = childExecutions.filter((e) => e.status === "failed").length;
const running = childExecutions.filter(
(e) =>
e.status === "running" ||
e.status === "requested" ||
e.status === "scheduling" ||
e.status === "scheduled",
).length;
const other = total - completed - failed - running;
// Compute overall duration from the already-patched tasks array so we
// get the live running-task endMs values for free.
let durationMs: number | null = null;
const taskStartTimes = tasks
.filter((t) => t.startMs != null)
.map((t) => t.startMs!);
const taskEndTimes = tasks
.filter((t) => t.endMs != null)
.map((t) => t.endMs!);
if (taskStartTimes.length > 0 && taskEndTimes.length > 0) {
durationMs = Math.max(...taskEndTimes) - Math.min(...taskStartTimes);
}
return { total, completed, failed, running, other, durationMs };
}, [childExecutions, tasks]);
// ---- Early returns ----
if (childrenLoading && childExecutions.length === 0) {
return (
<div className={embedded ? "" : "bg-white shadow rounded-lg"}>
<div className="flex items-center gap-3 p-4">
<Loader2 className="h-4 w-4 animate-spin text-gray-400" />
<span className="text-sm text-gray-500">
Loading workflow timeline
</span>
</div>
</div>
);
}
if (childExecutions.length === 0) {
if (embedded) {
return (
<div className="flex items-center justify-center py-8 text-sm text-gray-500">
No workflow tasks yet.
</div>
);
}
return null; // No child tasks to display
}
// ---- Shared content (legend + renderer + modal) ----
const timelineContent = (
<>
{/* Expand to modal */}
<div className="flex justify-end px-3 py-1">
<button
onClick={(e) => {
e.stopPropagation();
setIsModalOpen(true);
}}
className="flex items-center gap-1 text-[10px] text-gray-400 hover:text-gray-600 transition-colors"
title="Open expanded timeline with zoom"
>
<Maximize2 className="h-3 w-3" />
Expand
</button>
</div>
{/* Legend */}
<div className="flex items-center gap-3 px-5 pb-2 text-[10px] text-gray-400">
<LegendItem color="#22c55e" label="Completed" />
<LegendItem color="#3b82f6" label="Running" />
<LegendItem color="#ef4444" label="Failed" dashed />
<LegendItem color="#f97316" label="Timeout" dotted />
<LegendItem color="#9ca3af" label="Pending" />
<span className="ml-2 text-gray-300">|</span>
<EdgeLegendItem color="#22c55e" label="Succeeded" />
<EdgeLegendItem color="#ef4444" label="Failed" dashed />
<EdgeLegendItem color="#9ca3af" label="Always" />
</div>
{/* Timeline renderer */}
{layout ? (
<div
className={embedded ? "pb-3" : "px-2 pb-3"}
style={{
minHeight: layout.totalHeight + 8,
}}
>
<TimelineRenderer
layout={layout}
tasks={tasks}
config={layoutConfig}
onTaskClick={handleTaskClick}
/>
</div>
) : (
<div className="flex items-center justify-center py-8">
<Loader2 className="h-4 w-4 animate-spin text-gray-300" />
<span className="ml-2 text-xs text-gray-400">Computing layout</span>
</div>
)}
{/* ---- Expanded modal ---- */}
{isModalOpen && (
<TimelineModal
isOpen
onClose={() => setIsModalOpen(false)}
tasks={tasks}
taskEdges={taskEdges}
milestones={milestones}
milestoneEdges={milestoneEdges}
suppressedEdgeKeys={suppressedEdgeKeys}
onTaskClick={handleTaskClick}
summary={summary}
/>
)}
</>
);
// ---- Embedded mode: no card, no header, just the content ----
if (embedded) {
return (
<div ref={containerRef} className="pt-1">
{timelineContent}
</div>
);
}
// ---- Standalone mode: full card with header + collapse ----
return (
<div className="bg-white shadow rounded-lg" ref={containerRef}>
{/* ---- Header ---- */}
<button
onClick={() => setIsCollapsed(!isCollapsed)}
className="w-full flex items-center justify-between px-5 py-3 text-left hover:bg-gray-50 rounded-t-lg transition-colors"
>
<div className="flex items-center gap-2.5">
{isCollapsed ? (
<ChevronRight className="h-4 w-4 text-gray-400" />
) : (
<ChevronDown className="h-4 w-4 text-gray-400" />
)}
<ChartGantt className="h-4 w-4 text-indigo-500" />
<h3 className="text-sm font-semibold text-gray-800">
Workflow Timeline
</h3>
<span className="text-xs text-gray-400">
{summary.total} task{summary.total !== 1 ? "s" : ""}
{summary.durationMs != null && (
<> · {formatDurationShort(summary.durationMs)}</>
)}
</span>
</div>
{/* Summary badges */}
<div className="flex items-center gap-1.5">
{summary.completed > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-green-100 text-green-700">
{summary.completed}
</span>
)}
{summary.running > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-blue-100 text-blue-700">
{summary.running}
</span>
)}
{summary.failed > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-red-100 text-red-700">
{summary.failed}
</span>
)}
{summary.other > 0 && (
<span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-gray-100 text-gray-500">
{summary.other}
</span>
)}
</div>
</button>
{/* ---- Body ---- */}
{!isCollapsed && (
<div className="border-t border-gray-100">{timelineContent}</div>
)}
</div>
);
}
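The phase-2 memo's shallow-clone trick (only running tasks get fresh objects, so unchanged tasks keep the same reference and memoized consumers can skip them) can be sketched in isolation; `Item` and its fields here are simplified stand-ins for `TimelineTask`:

```typescript
interface Item {
  state: string;
  startMs: number | null;
  endMs: number | null;
  durationMs: number | null;
}

// Patch only running items; untouched items keep the same object identity,
// which lets React's referential-equality checks skip re-rendering them.
function patchRunning(items: Item[], nowMs: number): Item[] {
  return items.map((t) =>
    t.state === "running" && t.startMs != null
      ? { ...t, endMs: nowMs, durationMs: nowMs - t.startMs }
      : t,
  );
}

const done: Item = { state: "completed", startMs: 0, endMs: 5, durationMs: 5 };
const live: Item = { state: "running", startMs: 40, endMs: null, durationMs: null };
const out = patchRunning([done, live], 100);
// out[0] === done (same reference); out[1].durationMs === 60
```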
// ---------------------------------------------------------------------------
// Legend sub-components
// ---------------------------------------------------------------------------
function LegendItem({
color,
label,
dashed,
dotted,
}: {
color: string;
label: string;
dashed?: boolean;
dotted?: boolean;
}) {
return (
<span className="flex items-center gap-1">
<span
className="inline-block w-5 h-2.5 rounded-sm"
style={{
backgroundColor: color,
opacity: 0.7,
border: dashed
? `1px dashed ${color}`
: dotted
? `1px dotted ${color}`
: undefined,
}}
/>
<span>{label}</span>
</span>
);
}
function EdgeLegendItem({
color,
label,
dashed,
}: {
color: string;
label: string;
dashed?: boolean;
}) {
return (
<span className="flex items-center gap-1">
<svg width="16" height="8" viewBox="0 0 16 8">
<line
x1="0"
y1="4"
x2="16"
y2="4"
stroke={color}
strokeWidth="1.5"
strokeDasharray={dashed ? "3 2" : undefined}
opacity="0.7"
/>
<polygon points="12,1 16,4 12,7" fill={color} opacity="0.6" />
</svg>
<span>{label}</span>
</span>
);
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function formatDurationShort(ms: number): string {
if (ms < 1000) return `${Math.round(ms)}ms`;
const secs = ms / 1000;
if (secs < 60) return `${secs.toFixed(1)}s`;
  let mins = Math.floor(secs / 60);
  let remainSecs = Math.round(secs % 60);
  if (remainSecs === 60) {
    // Carry the rounded-up seconds so we never render e.g. "2m 60s".
    mins += 1;
    remainSecs = 0;
  }
  if (mins < 60) return `${mins}m ${remainSecs}s`;
const hrs = Math.floor(mins / 60);
const remainMins = mins % 60;
return `${hrs}h ${remainMins}m`;
}

File diff suppressed because it is too large


@@ -0,0 +1,41 @@
/**
* Workflow Timeline DAG — barrel exports.
*
* Usage:
* import WorkflowTimelineDAG from "@/components/executions/workflow-timeline";
*/
export { default } from "./WorkflowTimelineGraph";
export type { ParentExecutionInfo } from "./WorkflowTimelineGraph";
export { default as TimelineRenderer } from "./TimelineRenderer";
export { default as TimelineModal } from "./TimelineModal";
// Re-export types consumers might need
export type {
TimelineTask,
TimelineEdge,
TimelineMilestone,
TimelineNode,
ComputedLayout,
TaskState,
EdgeKind,
MilestoneKind,
TooltipData,
LayoutConfig,
WorkflowDefinition,
WithItemsGroupInfo,
} from "./types";
export { WITH_ITEMS_COLLAPSE_THRESHOLD } from "./types";
// Re-export data utilities for testing / advanced usage
export {
buildTimelineTasks,
buildEdges,
buildMilestones,
findConnectedPath,
edgeKey,
} from "./data";
// Re-export layout utilities
export { computeLayout, computeGridLines, computeEdgePath } from "./layout";


@@ -0,0 +1,673 @@
/**
* Layout Engine for the Workflow Timeline DAG.
*
* Responsible for:
* 1. Computing the time→pixel x-scale from task time bounds.
* 2. Assigning tasks to non-overlapping y-lanes (greedy packing).
* 3. Positioning milestone nodes.
* 4. Producing the final ComputedLayout consumed by the SVG renderer.
*/
import type {
TimelineTask,
TimelineEdge,
TimelineMilestone,
TimelineNode,
ComputedLayout,
LayoutConfig,
} from "./types";
import { DEFAULT_LAYOUT } from "./types";
// ---------------------------------------------------------------------------
// Time scale helpers
// ---------------------------------------------------------------------------
interface TimeScale {
/** Minimum time (epoch ms) */
minMs: number;
/** Maximum time (epoch ms) */
maxMs: number;
/** Available pixel width for the time axis */
axisWidth: number;
/** Pixels per millisecond */
pxPerMs: number;
}
function buildTimeScale(
tasks: TimelineTask[],
milestones: TimelineMilestone[],
chartWidth: number,
config: LayoutConfig,
): TimeScale {
// Collect all time values
const times: number[] = [];
for (const t of tasks) {
if (t.startMs != null) times.push(t.startMs);
if (t.endMs != null) times.push(t.endMs);
}
for (const m of milestones) {
times.push(m.timeMs);
}
if (times.length === 0) {
// Fallback: a 10-second window around now
const now = Date.now();
times.push(now - 5000, now + 5000);
}
let minMs = Math.min(...times);
let maxMs = Math.max(...times);
// Add a small buffer so nodes at the edges aren't right on the border
const rangeMs = maxMs - minMs;
const bufferMs = Math.max(rangeMs * 0.04, 200); // at least 200ms buffer
minMs -= bufferMs;
maxMs += bufferMs;
const axisWidth = chartWidth - config.paddingLeft - config.paddingRight;
const pxPerMs = axisWidth / Math.max(maxMs - minMs, 1);
return { minMs, maxMs, axisWidth, pxPerMs };
}
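The scale arithmetic above (pad each side by max(4% of range, 200 ms), then divide pixels by the padded range) can be exercised standalone; the 800 px axis width is a hypothetical value:

```typescript
// Sketch of buildTimeScale's core arithmetic: buffer the observed time
// range, then derive pixels-per-millisecond for the available axis width.
function sketchTimeScale(times: number[], axisWidth: number) {
  let minMs = Math.min(...times);
  let maxMs = Math.max(...times);
  const bufferMs = Math.max((maxMs - minMs) * 0.04, 200); // ≥200ms buffer
  minMs -= bufferMs;
  maxMs += bufferMs;
  const pxPerMs = axisWidth / Math.max(maxMs - minMs, 1);
  return { minMs, maxMs, pxPerMs };
}

// A 10s span on an 800px axis: 4% buffer = 400ms per side, range 10.8s.
const s = sketchTimeScale([0, 10_000], 800);
// → { minMs: -400, maxMs: 10400, pxPerMs: 800 / 10800 }
```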
/** Convert a timestamp (epoch ms) to an x pixel position */
function timeToPx(ms: number, scale: TimeScale, config: LayoutConfig): number {
return config.paddingLeft + (ms - scale.minMs) * scale.pxPerMs;
}
// ---------------------------------------------------------------------------
// Lane assignment (greedy packing)
// ---------------------------------------------------------------------------
interface LaneInterval {
/** Left x pixel (inclusive) */
left: number;
/** Right x pixel (inclusive) */
right: number;
}
/**
* Assign each task to the first lane where it doesn't overlap with
* any existing task bar in that lane.
*
* Tasks are sorted by startTime (earliest first), then by duration
* descending (longer bars first) to maximise packing efficiency.
*
* After initial packing we optionally reorder lanes so tasks with
* shared upstream dependencies are adjacent.
*/
function assignLanes(
tasks: TimelineTask[],
scale: TimeScale,
config: LayoutConfig,
): Map<string, number> {
// Build a sortable list with pixel extents
type Entry = {
task: TimelineTask;
left: number;
right: number;
};
const entries: Entry[] = tasks.map((t) => {
const left = t.startMs != null ? timeToPx(t.startMs, scale, config) : 0;
let right =
t.endMs != null
? timeToPx(t.endMs, scale, config)
: left + config.minBarWidth;
// Ensure minimum width
if (right - left < config.minBarWidth) {
right = left + config.minBarWidth;
}
return { task: t, left, right };
});
// Sort: by start position, then by width descending (longer bars first)
entries.sort((a, b) => {
if (a.left !== b.left) return a.left - b.left;
return b.right - b.left - (a.right - a.left);
});
// Greedy lane packing
const lanes: LaneInterval[][] = []; // lanes[laneIndex] = list of intervals
const assignment = new Map<string, number>();
for (const entry of entries) {
let placed = false;
const gap = 4; // minimum px gap between bars in the same lane
for (let lane = 0; lane < lanes.length; lane++) {
const intervals = lanes[lane];
const overlaps = intervals.some(
(iv) => entry.left < iv.right + gap && entry.right + gap > iv.left,
);
if (!overlaps) {
intervals.push({ left: entry.left, right: entry.right });
assignment.set(entry.task.id, lane);
placed = true;
break;
}
}
if (!placed) {
// Open a new lane
lanes.push([{ left: entry.left, right: entry.right }]);
assignment.set(entry.task.id, lanes.length - 1);
}
}
// --- Optional lane reordering to cluster related tasks ---
// Build a lane affinity score based on shared upstream dependencies.
// We do a simple bubble-pass: for each pair of adjacent lanes,
// if swapping them increases the total number of adjacent upstream-sharing
// task pairs, do the swap.
const laneCount = lanes.length;
if (laneCount > 2) {
const laneIds: number[] = Array.from({ length: laneCount }, (_, i) => i);
// Build lane→taskIds mapping
const tasksByLane = new Map<number, string[]>();
for (const [taskId, lane] of assignment) {
const list = tasksByLane.get(lane) ?? [];
list.push(taskId);
tasksByLane.set(lane, list);
}
// Build a task→upstreams lookup
const taskUpstreams = new Map<string, Set<string>>();
for (const t of tasks) {
taskUpstreams.set(t.id, new Set(t.upstreamIds));
}
// Affinity between two lanes: count of task pairs that share upstream deps
function laneAffinity(laneA: number, laneB: number): number {
const aTasks = tasksByLane.get(laneA) ?? [];
const bTasks = tasksByLane.get(laneB) ?? [];
let score = 0;
for (const a of aTasks) {
const aUp = taskUpstreams.get(a);
if (!aUp || aUp.size === 0) continue;
for (const b of bTasks) {
const bUp = taskUpstreams.get(b);
if (!bUp || bUp.size === 0) continue;
// Count shared upstreams
for (const u of aUp) {
if (bUp.has(u)) {
score++;
break; // one shared upstream is enough for this pair
}
}
}
}
return score;
}
// Simple bubble sort passes (max 3 passes for stability)
for (let pass = 0; pass < 3; pass++) {
let swapped = false;
for (let i = 0; i < laneIds.length - 1; i++) {
const curr = laneIds[i];
const next = laneIds[i + 1];
// Check if swapping improves adjacency with neighbours
const prev = i > 0 ? laneIds[i - 1] : -1;
const after = i + 2 < laneIds.length ? laneIds[i + 2] : -1;
let scoreBefore = 0;
let scoreAfter = 0;
if (prev >= 0) {
scoreBefore += laneAffinity(prev, curr);
scoreAfter += laneAffinity(prev, next);
}
if (after >= 0) {
scoreBefore += laneAffinity(next, after);
scoreAfter += laneAffinity(curr, after);
}
scoreBefore += laneAffinity(curr, next);
scoreAfter += laneAffinity(next, curr); // same, symmetric
if (scoreAfter > scoreBefore) {
laneIds[i] = next;
laneIds[i + 1] = curr;
swapped = true;
}
}
if (!swapped) break;
}
// Remap lane assignments to the reordered indices
const reorderMap = new Map<number, number>();
for (let newIdx = 0; newIdx < laneIds.length; newIdx++) {
reorderMap.set(laneIds[newIdx], newIdx);
}
for (const [taskId, oldLane] of assignment) {
assignment.set(taskId, reorderMap.get(oldLane) ?? oldLane);
}
}
return assignment;
}
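The first-fit packing loop above, stripped of the task bookkeeping and lane-reordering pass, reduces to this sketch (intervals in px, 4 px clearance, function name illustrative):

```typescript
// First-fit greedy packing: each interval goes into the lowest-indexed lane
// where it keeps at least `gap` px of clearance from every existing bar;
// otherwise a new lane is opened.
function packLanes(
  intervals: { left: number; right: number }[],
  gap = 4,
): number[] {
  const lanes: { left: number; right: number }[][] = [];
  return intervals.map((iv) => {
    for (let lane = 0; lane < lanes.length; lane++) {
      const overlaps = lanes[lane].some(
        (o) => iv.left < o.right + gap && iv.right + gap > o.left,
      );
      if (!overlaps) {
        lanes[lane].push(iv);
        return lane;
      }
    }
    lanes.push([iv]);
    return lanes.length - 1;
  });
}

// Two overlapping bars need two lanes; a bar far to the right reuses lane 0.
const lanesOut = packLanes([
  { left: 0, right: 100 },
  { left: 50, right: 150 },
  { left: 200, right: 300 },
]);
// → [0, 1, 0]
```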
// ---------------------------------------------------------------------------
// Milestone lane assignment
// ---------------------------------------------------------------------------
/**
* Position milestones in a lane that centres them vertically relative to
* the tasks they connect to. Start and end milestones go to a middle lane.
* Internal merge/fork milestones are placed at the median lane of their
* connected tasks.
*/
function assignMilestoneLanes(
milestones: TimelineMilestone[],
milestoneEdges: TimelineEdge[],
taskLanes: Map<string, number>,
laneCount: number,
): Map<string, number> {
const assignment = new Map<string, number>();
const midLane = Math.max(0, Math.floor((laneCount - 1) / 2));
for (const ms of milestones) {
if (ms.kind === "start" || ms.kind === "end") {
assignment.set(ms.id, midLane);
continue;
}
// Gather lanes of connected tasks
const connectedLanes: number[] = [];
for (const e of milestoneEdges) {
if (e.from === ms.id) {
const lane = taskLanes.get(e.to);
if (lane != null) connectedLanes.push(lane);
}
if (e.to === ms.id) {
const lane = taskLanes.get(e.from);
if (lane != null) connectedLanes.push(lane);
}
}
if (connectedLanes.length > 0) {
connectedLanes.sort((a, b) => a - b);
const median = connectedLanes[Math.floor(connectedLanes.length / 2)];
assignment.set(ms.id, median);
} else {
assignment.set(ms.id, midLane);
}
}
return assignment;
}
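The median-lane placement reduces to the following; note that `Math.floor(length / 2)` on a sorted array picks the *upper* median for even counts, which is the behavior above:

```typescript
// Median lane of the connected tasks (upper median when the count is even),
// mirroring the placement rule in assignMilestoneLanes.
function medianLane(lanes: number[]): number {
  const sorted = [...lanes].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// medianLane([3, 0, 2]) → 2 ; medianLane([0, 3]) → 3 (upper median)
```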
// ---------------------------------------------------------------------------
// Build TimelineNode array
// ---------------------------------------------------------------------------
function buildNodes(
tasks: TimelineTask[],
milestones: TimelineMilestone[],
taskLanes: Map<string, number>,
milestoneLanes: Map<string, number>,
scale: TimeScale,
config: LayoutConfig,
): TimelineNode[] {
const nodes: TimelineNode[] = [];
// Task nodes
for (const task of tasks) {
const lane = taskLanes.get(task.id) ?? 0;
const left =
task.startMs != null
? timeToPx(task.startMs, scale, config)
: timeToPx(
scale.maxMs - (scale.maxMs - scale.minMs) * 0.05,
scale,
config,
);
let right =
task.endMs != null
? timeToPx(task.endMs, scale, config)
: left + config.minBarWidth;
if (right - left < config.minBarWidth) {
right = left + config.minBarWidth;
}
const y =
config.paddingTop +
lane * config.laneHeight +
(config.laneHeight - config.barHeight) / 2;
nodes.push({
type: "task",
id: task.id,
lane,
x: left,
y,
width: right - left,
task,
});
}
// Milestone nodes
for (const ms of milestones) {
const lane = milestoneLanes.get(ms.id) ?? 0;
const x = timeToPx(ms.timeMs, scale, config);
const y =
config.paddingTop + lane * config.laneHeight + config.laneHeight / 2;
nodes.push({
type: "milestone",
id: ms.id,
lane,
x,
y,
width: config.milestoneSize,
milestone: ms,
});
}
return nodes;
}
// ---------------------------------------------------------------------------
// Grid line computation
// ---------------------------------------------------------------------------
export interface GridLine {
/** X pixel position */
x: number;
/** Human-readable label */
label: string;
/** Whether this is a major gridline (gets a label) */
major: boolean;
}
/**
* Compute vertical gridlines at "nice" time intervals.
*
 * Picks an interval that gives roughly 6–12 major gridlines across
* the visible chart width.
*/
export function computeGridLines(
scale: TimeScale,
config: LayoutConfig,
): GridLine[] {
const rangeMs = scale.maxMs - scale.minMs;
if (rangeMs <= 0) return [];
// Target ~8 major gridlines
const targetCount = 8;
const rawInterval = rangeMs / targetCount;
// Snap to a "nice" interval
const niceIntervals = [
100,
200,
500, // sub-second
1000,
2000,
5000, // seconds
10_000,
15_000,
30_000, // tens of seconds
60_000,
120_000,
300_000, // minutes
600_000,
900_000,
1_800_000, // tens of minutes
3_600_000,
7_200_000, // hours
14_400_000,
28_800_000,
43_200_000, // multi-hour
86_400_000, // day
];
let interval = niceIntervals[0];
for (const ni of niceIntervals) {
interval = ni;
if (ni >= rawInterval) break;
}
const lines: GridLine[] = [];
// Start at the first "nice" multiple >= minMs
const firstTick = Math.ceil(scale.minMs / interval) * interval;
for (let ms = firstTick; ms <= scale.maxMs; ms += interval) {
const x = timeToPx(ms, scale, config);
lines.push({
x,
label: formatTimeLabel(ms, interval),
major: true,
});
// Add a minor gridline halfway if the interval is large enough
if (interval >= 2000) {
const midMs = ms + interval / 2;
if (midMs < scale.maxMs) {
lines.push({
x: timeToPx(midMs, scale, config),
label: "",
major: false,
});
}
}
}
return lines;
}
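The interval-snapping step can be isolated as below (a shortened candidate list for illustration; the real table extends to days). The loop returns the first "nice" step at or above the raw interval, falling back to the largest candidate for very long ranges:

```typescript
// Snap (range / target tick count) up to the first "nice" step,
// as computeGridLines does when choosing its major gridline interval.
function snapInterval(rangeMs: number, targetCount = 8): number {
  const nice = [100, 200, 500, 1000, 2000, 5000, 10_000, 30_000, 60_000];
  const raw = rangeMs / targetCount;
  for (const n of nice) {
    if (n >= raw) return n;
  }
  return nice[nice.length - 1];
}

// A 9-second range / 8 ticks → raw 1125ms → snaps up to 2000ms.
// snapInterval(9000) → 2000 ; snapInterval(400) → 100
```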
/** Format a timestamp as a short label relative to the chart start */
function formatTimeLabel(ms: number, intervalMs: number): string {
const date = new Date(ms);
if (intervalMs >= 86_400_000) {
// Days — show date
return date.toLocaleDateString(undefined, {
month: "short",
day: "numeric",
});
}
if (intervalMs >= 3_600_000) {
// Hours — show HH:MM
return date.toLocaleTimeString(undefined, {
hour: "2-digit",
minute: "2-digit",
});
}
if (intervalMs >= 60_000) {
// Minutes — show HH:MM:SS
return date.toLocaleTimeString(undefined, {
hour: "2-digit",
minute: "2-digit",
second: "2-digit",
});
}
if (intervalMs >= 1000) {
// Seconds — show HH:MM:SS
return date.toLocaleTimeString(undefined, {
hour: "2-digit",
minute: "2-digit",
second: "2-digit",
});
}
// Sub-second — show with milliseconds
return (
date.toLocaleTimeString(undefined, {
hour: "2-digit",
minute: "2-digit",
second: "2-digit",
}) +
"." +
String(date.getMilliseconds()).padStart(3, "0")
);
}
// ---------------------------------------------------------------------------
// Public API: computeLayout
// ---------------------------------------------------------------------------
export function computeLayout(
tasks: TimelineTask[],
taskEdges: TimelineEdge[],
milestones: TimelineMilestone[],
milestoneEdges: TimelineEdge[],
/** Desired chart width (pixels). The layout will use this for the x-scale. */
chartWidth: number,
configOverrides?: Partial<LayoutConfig>,
/** Direct task→task edge keys that are replaced by milestone-routed paths.
* These are filtered out of `taskEdges` to avoid duplicate rendering. */
suppressedEdgeKeys?: Set<string>,
): ComputedLayout {
const config: LayoutConfig = { ...DEFAULT_LAYOUT, ...configOverrides };
// Use a reasonable minimum width
const effectiveWidth = Math.max(chartWidth, 400);
// 1. Build time scale
const scale = buildTimeScale(tasks, milestones, effectiveWidth, config);
// 2. Assign task lanes
const taskLanes = assignLanes(tasks, scale, config);
// Count lanes
let laneCount = 0;
for (const lane of taskLanes.values()) {
laneCount = Math.max(laneCount, lane + 1);
}
// Ensure at least 1 lane even if there are no tasks
laneCount = Math.max(laneCount, 1);
// 3. Assign milestone lanes
const milestoneLanes = assignMilestoneLanes(
milestones,
milestoneEdges,
taskLanes,
laneCount,
);
// 4. Build node positions
const nodes = buildNodes(
tasks,
milestones,
taskLanes,
milestoneLanes,
scale,
config,
);
// 5. Merge all edges, filtering out any task edges that have been
// replaced by milestone-routed paths (e.g. A→C replaced by A→merge→C).
  const filteredTaskEdges = suppressedEdgeKeys?.size
    ? taskEdges.filter((e) => !suppressedEdgeKeys.has(`${e.from}→${e.to}`))
    : taskEdges;
  const allEdges = [...filteredTaskEdges, ...milestoneEdges];
  // Deduplicate edges (same from→to). The separator prevents key collisions
  // between ID pairs such as ("a", "bc") and ("ab", "c").
  const edgeSet = new Set<string>();
  const dedupedEdges: TimelineEdge[] = [];
  for (const e of allEdges) {
    const key = `${e.from}→${e.to}`;
    if (!edgeSet.has(key)) {
      edgeSet.add(key);
      dedupedEdges.push(e);
    }
  }
// 6. Compute total dimensions
const totalWidth = effectiveWidth;
const totalHeight =
config.paddingTop + laneCount * config.laneHeight + config.paddingBottom;
return {
nodes,
edges: dedupedEdges,
totalWidth,
totalHeight,
laneCount,
minTimeMs: scale.minMs,
maxTimeMs: scale.maxMs,
pxPerMs: scale.pxPerMs,
};
}
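The key-based dedup in step 5 can be sketched generically. A separator in the key matters: plain concatenation would let distinct pairs like `("a", "bc")` and `("ab", "c")` collide:

```typescript
// Deduplicate edges by a "from→to" key, keeping first occurrence.
function dedupeEdges<T extends { from: string; to: string }>(edges: T[]): T[] {
  const seen = new Set<string>();
  return edges.filter((e) => {
    const key = `${e.from}\u2192${e.to}`; // "→" separator avoids ID collisions
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

const deduped = dedupeEdges([
  { from: "a", to: "b" },
  { from: "a", to: "b" },
  { from: "a", to: "bc" },
]);
// → 2 edges (the duplicate a→b is dropped)
```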
// ---------------------------------------------------------------------------
// Bezier edge path generation
// ---------------------------------------------------------------------------
/**
* Generate an SVG cubic Bezier path string for an edge between two nodes.
*
* Edges flow left→right. The control points bend horizontally so curves
* are smooth and mostly follow the x-axis direction.
*
* Anchoring:
* - Task nodes: outgoing from right-center, incoming at left-center
* - Milestones: connect at center
*/
export function computeEdgePath(
fromNode: TimelineNode,
toNode: TimelineNode,
config: LayoutConfig = DEFAULT_LAYOUT,
): string {
let x1: number, y1: number, x2: number, y2: number;
// Source anchor
if (fromNode.type === "task") {
x1 = fromNode.x + fromNode.width; // right edge
y1 = fromNode.y + config.barHeight / 2; // vertical center
} else {
x1 = fromNode.x;
y1 = fromNode.y;
}
// Target anchor
if (toNode.type === "task") {
x2 = toNode.x; // left edge
y2 = toNode.y + config.barHeight / 2; // vertical center
} else {
x2 = toNode.x;
y2 = toNode.y;
}
// Handle edge case where target is to the left of source (e.g., timing quirks)
// In that case, draw a slight arc that loops
const dx = x2 - x1;
const dy = y2 - y1;
if (dx < 5) {
// Target is to the left or very close — use an S-curve that goes
// slightly below/above and loops back
const loopOffset = Math.max(30, Math.abs(dx) + 20);
const yMid = (y1 + y2) / 2 + (dy >= 0 ? 20 : -20);
return [
`M ${x1} ${y1}`,
`C ${x1 + loopOffset} ${y1}, ${x2 - loopOffset} ${yMid}, ${(x1 + x2) / 2} ${yMid}`,
`C ${(x1 + x2) / 2 + loopOffset} ${yMid}, ${x2 - loopOffset} ${y2}, ${x2} ${y2}`,
].join(" ");
}
// Normal left→right Bezier
// Control point offset: 40% of horizontal distance, clamped
const cpOffset = Math.min(Math.max(dx * 0.4, 20), 120);
const cx1 = x1 + cpOffset;
const cy1 = y1;
const cx2 = x2 - cpOffset;
const cy2 = y2;
return `M ${x1} ${y1} C ${cx1} ${cy1}, ${cx2} ${cy2}, ${x2} ${y2}`;
}
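The normal left→right branch above produces a path string like the following sketch (control points offset by 40% of dx, clamped to [20, 120] px):

```typescript
// Normal-case cubic Bezier between a source right edge and target left edge,
// matching computeEdgePath's left→right branch.
function bezierPath(x1: number, y1: number, x2: number, y2: number): string {
  const cp = Math.min(Math.max((x2 - x1) * 0.4, 20), 120);
  return `M ${x1} ${y1} C ${x1 + cp} ${y1}, ${x2 - cp} ${y2}, ${x2} ${y2}`;
}

// bezierPath(0, 10, 100, 50) → "M 0 10 C 40 10, 60 50, 100 50"
```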
// ---------------------------------------------------------------------------
// Export timeToPx for use by the renderer (gridlines etc.)
// ---------------------------------------------------------------------------
export { timeToPx, type TimeScale };


@@ -0,0 +1,285 @@
/**
* Workflow Timeline DAG Types
*
* Types for the Prefect-style workflow run timeline visualization.
* This component renders workflow task executions as horizontal duration bars
* on a time axis with curved dependency edges showing the DAG structure.
*/
import type { ExecutionSummary } from "@/api";
// ---------------------------------------------------------------------------
// Core data types
// ---------------------------------------------------------------------------
export type TaskState =
| "completed"
| "running"
| "failed"
| "pending"
| "timeout"
| "cancelled"
| "abandoned";
/**
* Metadata for a collapsed with_items group node.
* When a with_items task has ≥ WITH_ITEMS_COLLAPSE_THRESHOLD items, all
* individual item executions are merged into a single TimelineTask carrying
* this info so the renderer can display a compact "task ×N" bar.
*/
export interface WithItemsGroupInfo {
/** Total number of items in the group */
totalItems: number;
/** Per-state item counts */
completed: number;
failed: number;
running: number;
pending: number;
timedOut: number;
cancelled: number;
/** Concurrency limit declared on the task (0 = unlimited / unknown) */
concurrency: number;
/** IDs of all member executions (for upstream/downstream tracking) */
memberIds: string[];
}
/** Threshold at which with_items children are collapsed into a single node */
export const WITH_ITEMS_COLLAPSE_THRESHOLD = 10;
/** A single task run positioned on the timeline */
export interface TimelineTask {
/** Unique identifier (execution ID as string) */
id: string;
/** Display name (task_name from workflow_task metadata) */
name: string;
/** Action reference */
actionRef: string;
/** Visual state for coloring */
state: TaskState;
/** Start time as epoch ms (null if not yet started) */
startMs: number | null;
/** End time as epoch ms (null if still running or not started) */
endMs: number | null;
/** IDs of upstream tasks this depends on */
upstreamIds: string[];
/** IDs of downstream tasks that depend on this */
downstreamIds: string[];
/** with_items task index (null if not a with_items expansion) */
taskIndex: number | null;
/** Whether this task timed out */
timedOut: boolean;
/** Retry info */
retryCount: number;
maxRetries: number;
/** Duration in ms (from metadata or computed) */
durationMs: number | null;
/** Original execution summary for tooltip details */
execution: ExecutionSummary;
/**
* Present only on collapsed with_items group nodes.
* When set, this task represents multiple item executions merged into one.
*/
groupInfo?: WithItemsGroupInfo;
}
// ---------------------------------------------------------------------------
// Synthetic milestone / junction nodes
// ---------------------------------------------------------------------------
export type MilestoneKind = "start" | "end" | "merge" | "fork";
export interface TimelineMilestone {
id: string;
kind: MilestoneKind;
/** Position on the time axis (epoch ms) */
timeMs: number;
/** Human-readable label */
label: string;
}
// ---------------------------------------------------------------------------
// Unified node type (task bar OR milestone)
// ---------------------------------------------------------------------------
export type TimelineNodeType = "task" | "milestone";
export interface TimelineNode {
type: TimelineNodeType;
/** Unique ID */
id: string;
/** Assigned lane (y index) */
lane: number;
/** Pixel positions (computed by layout) */
x: number;
y: number;
width: number;
/** Original data */
task?: TimelineTask;
milestone?: TimelineMilestone;
}
// ---------------------------------------------------------------------------
// Edges
// ---------------------------------------------------------------------------
export type EdgeKind = "success" | "failure" | "always" | "timeout" | "custom";
export interface TimelineEdge {
/** Source node ID */
from: string;
/** Target node ID */
to: string;
/** Visual classification for coloring */
kind: EdgeKind;
/** Optional transition label (e.g. "succeeded", "failed") */
label?: string;
/** Optional custom color from workflow definition */
color?: string;
}
// ---------------------------------------------------------------------------
// Layout constants
// ---------------------------------------------------------------------------
export interface LayoutConfig {
/** Height of each lane in pixels */
laneHeight: number;
/** Height of a task bar in pixels */
barHeight: number;
/** Vertical padding within each lane */
lanePadding: number;
/** Size of milestone diamond/square in pixels */
milestoneSize: number;
/** Left padding for the chart area (px) */
paddingLeft: number;
/** Right padding for the chart area (px) */
paddingRight: number;
/** Top padding for the time axis area (px) */
paddingTop: number;
/** Bottom padding (px) */
paddingBottom: number;
/** Minimum bar width for very short tasks (px) */
minBarWidth: number;
/** Horizontal gap between milestone and adjacent bars (px) */
milestoneGap: number;
}
export const DEFAULT_LAYOUT: LayoutConfig = {
laneHeight: 32,
barHeight: 20,
lanePadding: 6,
milestoneSize: 10,
paddingLeft: 20,
paddingRight: 20,
paddingTop: 36,
paddingBottom: 16,
minBarWidth: 8,
milestoneGap: 12,
};
// ---------------------------------------------------------------------------
// Computed layout result
// ---------------------------------------------------------------------------
export interface ComputedLayout {
nodes: TimelineNode[];
edges: TimelineEdge[];
/** Total width needed (px) */
totalWidth: number;
/** Total height needed (px) */
totalHeight: number;
/** Number of lanes used */
laneCount: number;
/** Time bounds */
minTimeMs: number;
maxTimeMs: number;
/** The linear scale factor: px per ms */
pxPerMs: number;
}
// ---------------------------------------------------------------------------
// Interaction state
// ---------------------------------------------------------------------------
export interface TooltipData {
task: TimelineTask;
x: number;
y: number;
}
export interface ViewState {
/** Horizontal scroll offset (px) */
scrollX: number;
/** Zoom level (1.0 = default) */
zoom: number;
}
// ---------------------------------------------------------------------------
// Workflow definition transition types (for edge extraction)
// ---------------------------------------------------------------------------
export interface WorkflowDefinitionTransition {
when?: string;
publish?: Record<string, string>[];
do?: string[];
__chart_meta__?: {
label?: string;
color?: string;
line_style?: string;
};
}
export interface WorkflowDefinitionTask {
name: string;
action?: string;
next?: WorkflowDefinitionTransition[];
/** Number of inbound tasks that must complete before this task runs */
join?: number;
/** with_items expression (present when the task fans out over a list) */
with_items?: string;
/** Max concurrent items for with_items (default 1 = serial) */
concurrency?: number;
// Legacy fields (auto-converted to next)
on_success?: string | string[];
on_failure?: string | string[];
on_complete?: string | string[];
on_timeout?: string | string[];
}
export interface WorkflowDefinition {
ref?: string;
label?: string;
tasks?: WorkflowDefinitionTask[];
}
// ---------------------------------------------------------------------------
// Color constants
// ---------------------------------------------------------------------------
export const STATE_COLORS: Record<
TaskState,
{ bg: string; border: string; text: string }
> = {
completed: { bg: "#dcfce7", border: "#22c55e", text: "#15803d" },
running: { bg: "#dbeafe", border: "#3b82f6", text: "#1d4ed8" },
failed: { bg: "#fee2e2", border: "#ef4444", text: "#b91c1c" },
pending: { bg: "#f3f4f6", border: "#9ca3af", text: "#6b7280" },
timeout: { bg: "#ffedd5", border: "#f97316", text: "#c2410c" },
cancelled: { bg: "#f3f4f6", border: "#9ca3af", text: "#6b7280" },
abandoned: { bg: "#fee2e2", border: "#f87171", text: "#b91c1c" },
};
export const EDGE_KIND_COLORS: Record<EdgeKind, string> = {
success: "#22c55e",
failure: "#ef4444",
always: "#9ca3af",
timeout: "#f97316",
custom: "#8b5cf6",
};
export const MILESTONE_COLORS: Record<MilestoneKind, string> = {
start: "#6b7280",
end: "#6b7280",
merge: "#8b5cf6",
fork: "#8b5cf6",
};


@@ -16,8 +16,6 @@ import {
   SquareAsterisk,
   KeyRound,
   Home,
-  Paperclip,
-  FolderOpenDot,
   FolderArchive,
 } from "lucide-react";


@@ -160,6 +160,10 @@ export function useExecutionStream(options: UseExecutionStreamOptions = {}) {
         // Extract query params from the query key (format: ["executions", params])
         const queryParams = queryKey[1];
 
+        // Child execution queries (keyed by { parent: id }) fetch all pages
+        // and must not be capped — the timeline DAG needs every child.
+        const isChildQuery = !!(queryParams as any)?.parent;
+
         const old = oldData as any;
 
         // Check if execution already exists in the list
@@ -224,7 +228,9 @@ export function useExecutionStream(options: UseExecutionStreamOptions = {}) {
           if (hasUnsupportedFilters(queryParams)) {
             return;
           }
-          updatedData = [executionData, ...old.data].slice(0, 50);
+          updatedData = isChildQuery
+            ? [...old.data, executionData]
+            : [executionData, ...old.data].slice(0, 50);
           totalItemsDelta = 1;
         } else {
@@ -240,8 +246,11 @@ export function useExecutionStream(options: UseExecutionStreamOptions = {}) {
           }
 
           if (matchesQuery) {
-            // Add to beginning and cap at 50 items to prevent unbounded growth
-            updatedData = [executionData, ...old.data].slice(0, 50);
+            // Add to the list. Child queries keep all items (no cap);
+            // other lists cap at 50 to prevent unbounded growth.
+            updatedData = isChildQuery
+              ? [...old.data, executionData]
+              : [executionData, ...old.data].slice(0, 50);
             totalItemsDelta = 1;
           } else {
             return;


@@ -8,6 +8,7 @@ import { ExecutionsService } from "@/api";
 import type { ExecutionStatus } from "@/api";
 import { OpenAPI } from "@/api/core/OpenAPI";
 import { request as __request } from "@/api/core/request";
+import type { ExecutionResponse } from "@/api";
 
 interface ExecutionsQueryParams {
   page?: number;
@@ -112,15 +113,65 @@ export function useRequestExecution() {
   });
 }
 
+/**
+ * Cancel a running or pending execution.
+ *
+ * Calls POST /api/v1/executions/{id}/cancel. For workflow executions this
+ * cascades to all incomplete child task executions on the server side.
+ */
+export function useCancelExecution() {
+  const queryClient = useQueryClient();
+
+  return useMutation({
+    mutationFn: async (executionId: number) => {
+      const response = await __request(OpenAPI, {
+        method: "POST",
+        url: "/api/v1/executions/{id}/cancel",
+        path: { id: executionId },
+        mediaType: "application/json",
+      });
+      return response as { data: ExecutionResponse };
+    },
+    onSuccess: (_data, executionId) => {
+      // Invalidate the specific execution and the list
+      queryClient.invalidateQueries({ queryKey: ["executions", executionId] });
+      queryClient.invalidateQueries({ queryKey: ["executions"] });
+    },
+  });
+}
+
 export function useChildExecutions(parentId: number | undefined) {
   return useQuery({
     queryKey: ["executions", { parent: parentId }],
     queryFn: async () => {
-      const response = await ExecutionsService.listExecutions({
+      // Fetch page 1 with max page size (API caps at 100)
+      const first = await ExecutionsService.listExecutions({
         parent: parentId,
         perPage: 100,
+        page: 1,
       });
-      return response;
+
+      const { total_pages } = first.pagination;
+      if (total_pages <= 1) return first;
+
+      // Fetch remaining pages in parallel
+      const remaining = await Promise.all(
+        Array.from({ length: total_pages - 1 }, (_, i) =>
+          ExecutionsService.listExecutions({
+            parent: parentId,
+            perPage: 100,
+            page: i + 2,
+          }),
+        ),
+      );
+
+      // Merge all pages into the first response
+      for (const page of remaining) {
+        first.data.push(...page.data);
+      }
+      first.pagination.total_pages = 1;
+      first.pagination.page_size = first.data.length;
+      return first;
     },
     enabled: !!parentId,
     staleTime: 5000,


@@ -45,6 +45,8 @@ import {
   generateUniqueTaskName,
   generateTaskId,
   builderStateToDefinition,
+  builderStateToGraph,
+  builderStateToActionYaml,
   definitionToBuilderState,
   validateWorkflow,
   addTransitionTarget,
@@ -585,12 +587,14 @@ export default function WorkflowBuilderPage() {
     doSave();
   }, [startNodeWarning, doSave]);
 
-  // YAML preview — generate proper YAML from builder state
-  const yamlPreview = useMemo(() => {
+  // YAML previews — two separate panels for the two-file model:
+  // 1. Action YAML (ref, label, parameters, output, tags, workflow_file)
+  // 2. Workflow YAML (version, vars, tasks, output_map — graph only)
+  const actionYamlPreview = useMemo(() => {
     if (!showYamlPreview) return "";
     try {
-      const definition = builderStateToDefinition(state, actionSchemaMap);
-      return yaml.dump(definition, {
+      const actionDef = builderStateToActionYaml(state);
+      return yaml.dump(actionDef, {
         indent: 2,
         lineWidth: 120,
         noRefs: true,
@@ -599,7 +603,24 @@ export default function WorkflowBuilderPage() {
         forceQuotes: false,
       });
     } catch {
-      return "# Error generating YAML preview";
+      return "# Error generating action YAML preview";
+    }
+  }, [state, showYamlPreview]);
+
+  const workflowYamlPreview = useMemo(() => {
+    if (!showYamlPreview) return "";
+    try {
+      const graphDef = builderStateToGraph(state, actionSchemaMap);
+      return yaml.dump(graphDef, {
+        indent: 2,
+        lineWidth: 120,
+        noRefs: true,
+        sortKeys: false,
+        quotingType: '"',
+        forceQuotes: false,
+      });
+    } catch {
+      return "# Error generating workflow YAML preview";
     }
   }, [state, showYamlPreview, actionSchemaMap]);
@@ -854,26 +875,64 @@ export default function WorkflowBuilderPage() {
         {/* Main content area */}
         <div className="flex-1 flex overflow-hidden">
           {showYamlPreview ? (
-            /* Raw YAML mode — full-width YAML view */
-            <div className="flex-1 flex flex-col overflow-hidden bg-gray-900">
-              <div className="flex items-center gap-2 px-4 py-2 bg-gray-800 border-b border-gray-700 flex-shrink-0">
-                <FileCode className="w-4 h-4 text-gray-400" />
-                <span className="text-sm font-medium text-gray-300">
-                  Workflow Definition
-                </span>
-                <span className="text-[10px] text-gray-500 ml-1">
-                  (read-only preview of the generated YAML)
-                </span>
-                <div className="ml-auto">
+            /* Raw YAML mode — two-panel view: Action YAML + Workflow YAML */
+            <div className="flex-1 flex overflow-hidden">
+              {/* Left panel: Action YAML */}
+              <div className="w-2/5 flex flex-col overflow-hidden bg-gray-900 border-r border-gray-700">
+                <div className="flex items-center gap-2 px-4 py-2 bg-gray-800 border-b border-gray-700 flex-shrink-0">
+                  <FileCode className="w-4 h-4 text-blue-400" />
+                  <span className="text-sm font-medium text-gray-300">
+                    Action YAML
+                  </span>
+                  <span className="text-[10px] text-gray-500 ml-1">
+                    actions/{state.name}.yaml
+                  </span>
+                  <div className="flex-1" />
                   <button
                     onClick={() => {
-                      navigator.clipboard.writeText(yamlPreview).then(() => {
-                        setYamlCopied(true);
-                        setTimeout(() => setYamlCopied(false), 2000);
-                      });
+                      navigator.clipboard.writeText(actionYamlPreview);
                     }}
-                    className="flex items-center gap-1.5 px-2.5 py-1 text-xs font-medium rounded transition-colors text-gray-400 hover:text-gray-200 hover:bg-gray-700"
-                    title="Copy YAML to clipboard"
+                    className="flex items-center gap-1 px-2 py-1 text-xs text-gray-400 hover:text-gray-200 bg-gray-700 hover:bg-gray-600 rounded transition-colors"
+                    title="Copy action YAML to clipboard"
+                  >
+                    <Copy className="w-3.5 h-3.5" />
+                    <span>Copy</span>
+                  </button>
+                </div>
+                <div className="px-4 py-2 bg-gray-800/50 border-b border-gray-700/50 flex-shrink-0">
+                  <p className="text-[10px] text-gray-500 leading-relaxed">
+                    Defines the action identity, parameters, and output schema.
+                    References the workflow file via{" "}
+                    <code className="text-gray-400">workflow_file</code>.
+                  </p>
+                </div>
+                <pre className="flex-1 overflow-auto p-4 text-sm font-mono text-blue-300 whitespace-pre leading-relaxed">
+                  {actionYamlPreview}
+                </pre>
+              </div>
+
+              {/* Right panel: Workflow YAML (graph only) */}
+              <div className="flex-1 flex flex-col overflow-hidden bg-gray-900">
+                <div className="flex items-center gap-2 px-4 py-2 bg-gray-800 border-b border-gray-700 flex-shrink-0">
+                  <FileCode className="w-4 h-4 text-green-400" />
+                  <span className="text-sm font-medium text-gray-300">
+                    Workflow YAML
+                  </span>
+                  <span className="text-[10px] text-gray-500 ml-1">
+                    actions/workflows/{state.name}.workflow.yaml
+                  </span>
+                  <div className="flex-1" />
+                  <button
+                    onClick={() => {
+                      navigator.clipboard
+                        .writeText(workflowYamlPreview)
+                        .then(() => {
+                          setYamlCopied(true);
+                          setTimeout(() => setYamlCopied(false), 2000);
+                        });
+                    }}
+                    className="flex items-center gap-1 px-2 py-1 text-xs text-gray-400 hover:text-gray-200 bg-gray-700 hover:bg-gray-600 rounded transition-colors"
+                    title="Copy workflow YAML to clipboard"
                   >
                     {yamlCopied ? (
                       <>
@@ -883,15 +942,21 @@ export default function WorkflowBuilderPage() {
                     ) : (
                       <>
                         <Copy className="w-3.5 h-3.5" />
-                        Copy
+                        <span>Copy</span>
                       </>
                     )}
                   </button>
                 </div>
+                <div className="px-4 py-2 bg-gray-800/50 border-b border-gray-700/50 flex-shrink-0">
+                  <p className="text-[10px] text-gray-500 leading-relaxed">
+                    Execution graph only — tasks, transitions, variables. No
+                    action-level metadata (those are in the action YAML).
+                  </p>
+                </div>
+                <pre className="flex-1 overflow-auto p-4 text-sm font-mono text-green-400 whitespace-pre leading-relaxed">
+                  {workflowYamlPreview}
+                </pre>
               </div>
-              <pre className="flex-1 overflow-auto p-6 text-sm font-mono text-green-400 whitespace-pre leading-relaxed">
-                {yamlPreview}
-              </pre>
             </div>
           ) : (
             <>


@@ -12,19 +12,19 @@ function formatDuration(ms: number): string {
   const remainMins = mins % 60;
   return `${hrs}h ${remainMins}m`;
 }
 
-import { useExecution } from "@/hooks/useExecutions";
+import { useExecution, useCancelExecution } from "@/hooks/useExecutions";
 import { useAction } from "@/hooks/useActions";
 import { useExecutionStream } from "@/hooks/useExecutionStream";
 import { useExecutionHistory } from "@/hooks/useHistory";
 import { formatDistanceToNow } from "date-fns";
 import { ExecutionStatus } from "@/api";
 import { useState, useMemo } from "react";
-import { RotateCcw, Loader2 } from "lucide-react";
+import { RotateCcw, Loader2, XCircle } from "lucide-react";
 import ExecuteActionModal from "@/components/common/ExecuteActionModal";
 import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";
-import WorkflowTasksPanel from "@/components/common/WorkflowTasksPanel";
 import ExecutionArtifactsPanel from "@/components/executions/ExecutionArtifactsPanel";
 import ExecutionProgressBar from "@/components/executions/ExecutionProgressBar";
+import WorkflowDetailsPanel from "@/components/executions/WorkflowDetailsPanel";
 
 const getStatusColor = (status: string) => {
   switch (status) {
@@ -123,6 +123,7 @@ export default function ExecutionDetailPage() {
   const isWorkflow = !!actionData?.data?.workflow_def;
 
   const [showRerunModal, setShowRerunModal] = useState(false);
+  const cancelExecution = useCancelExecution();
 
   // Fetch status history for the timeline
   const { data: historyData, isLoading: historyLoading } = useExecutionHistory(
@@ -200,6 +201,9 @@ export default function ExecutionDetailPage() {
     execution.status === ExecutionStatus.SCHEDULED ||
     execution.status === ExecutionStatus.REQUESTED;
 
+  const isCancellable =
+    isRunning || execution.status === ExecutionStatus.CANCELING;
+
   return (
     <div className="p-6 max-w-7xl mx-auto">
       {/* Header */}
@@ -236,19 +240,44 @@ export default function ExecutionDetailPage() {
             </div>
           )}
         </div>
-        <button
-          onClick={() => setShowRerunModal(true)}
-          disabled={!actionData?.data}
-          className="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 disabled:opacity-50 disabled:cursor-not-allowed flex items-center gap-2"
-          title={
-            !actionData?.data
-              ? "Loading action details..."
-              : "Re-run this action with the same parameters"
-          }
-        >
-          <RotateCcw className="h-4 w-4" />
-          Re-Run
-        </button>
+        <div className="flex items-center gap-2">
+          {isCancellable && (
+            <button
+              onClick={() => {
+                if (
+                  window.confirm(
+                    `Are you sure you want to cancel execution #${execution.id}?`,
+                  )
+                ) {
+                  cancelExecution.mutate(execution.id);
+                }
+              }}
+              disabled={cancelExecution.isPending}
+              className="px-4 py-2 bg-red-600 text-white rounded hover:bg-red-700 disabled:opacity-50 disabled:cursor-not-allowed flex items-center gap-2"
+              title="Cancel this execution"
+            >
+              {cancelExecution.isPending ? (
+                <Loader2 className="h-4 w-4 animate-spin" />
+              ) : (
+                <XCircle className="h-4 w-4" />
+              )}
+              Cancel
+            </button>
+          )}
+          <button
+            onClick={() => setShowRerunModal(true)}
+            disabled={!actionData?.data}
+            className="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 disabled:opacity-50 disabled:cursor-not-allowed flex items-center gap-2"
+            title={
+              !actionData?.data
+                ? "Loading action details..."
+                : "Re-run this action with the same parameters"
+            }
+          >
+            <RotateCcw className="h-4 w-4" />
+            Re-Run
+          </button>
+        </div>
       </div>
 
       <p className="text-gray-600 mt-2">
         <Link
@@ -279,6 +308,16 @@ export default function ExecutionDetailPage() {
         )}
       </div>
 
+      {/* Workflow Details — combined timeline + tasks panel (top of page for workflows) */}
+      {isWorkflow && (
+        <div className="mb-6">
+          <WorkflowDetailsPanel
+            parentExecution={execution}
+            actionRef={execution.action_ref}
+          />
+        </div>
+      )}
+
       {/* Re-Run Modal */}
       {showRerunModal && actionData?.data && (
         <ExecuteActionModal
@@ -542,13 +581,6 @@ export default function ExecutionDetailPage() {
           </div>
         </div>
 
-        {/* Workflow Tasks (shown only for workflow executions) */}
-        {isWorkflow && (
-          <div className="mt-6">
-            <WorkflowTasksPanel parentExecutionId={execution.id} />
-          </div>
-        )}
-
         {/* Artifacts */}
         <div className="mt-6">
           <ExecutionArtifactsPanel


@@ -222,6 +222,10 @@ export interface ParamDefinition {
 }
 
 /** Workflow definition as stored in the YAML file / API */
+/**
+ * Full workflow definition — used for DB storage and the save API payload.
+ * Contains both action-level metadata AND the execution graph.
+ */
 export interface WorkflowYamlDefinition {
   ref: string;
   label: string;
@@ -235,6 +239,37 @@ export interface WorkflowYamlDefinition {
   tags?: string[];
 }
 
+/**
+ * Graph-only workflow definition — written to the `.workflow.yaml` file on disk.
+ *
+ * Action-linked workflow files contain only the execution graph. The companion
+ * action YAML (`actions/{name}.yaml`) is authoritative for `ref`, `label`,
+ * `description`, `parameters`, `output`, and `tags`.
+ */
+export interface WorkflowGraphDefinition {
+  version: string;
+  vars?: Record<string, unknown>;
+  tasks: WorkflowYamlTask[];
+  output_map?: Record<string, string>;
+}
+
+/**
+ * Action YAML definition — written to the companion `actions/{name}.yaml` file.
+ *
+ * Controls the action's identity and exposed interface. References the workflow
+ * file via `workflow_file`.
+ */
+export interface ActionYamlDefinition {
+  ref: string;
+  label: string;
+  description?: string;
+  enabled: boolean;
+  workflow_file: string;
+  parameters?: Record<string, unknown>;
+  output?: Record<string, unknown>;
+  tags?: string[];
+}
+
 /** Chart-only metadata for a transition edge (not consumed by the backend) */
 export interface TransitionChartMeta {
   /** Custom display label for the transition */
@@ -382,6 +417,52 @@ export function builderStateToDefinition(
   state: WorkflowBuilderState,
   actionSchemas?: Map<string, Record<string, unknown> | null>,
 ): WorkflowYamlDefinition {
+  const graph = builderStateToGraph(state, actionSchemas);
+
+  const definition: WorkflowYamlDefinition = {
+    ref: `${state.packRef}.${state.name}`,
+    label: state.label,
+    version: state.version,
+    tasks: graph.tasks,
+  };
+
+  if (state.description) {
+    definition.description = state.description;
+  }
+  if (Object.keys(state.parameters).length > 0) {
+    definition.parameters = state.parameters;
+  }
+  if (Object.keys(state.output).length > 0) {
+    definition.output = state.output;
+  }
+  if (graph.vars && Object.keys(graph.vars).length > 0) {
+    definition.vars = graph.vars;
+  }
+  if (graph.output_map) {
+    definition.output_map = graph.output_map;
+  }
+  if (state.tags.length > 0) {
+    definition.tags = state.tags;
+  }
+
+  return definition;
+}
+
+/**
+ * Extract the graph-only workflow definition from builder state.
+ *
+ * This produces the content that should be written to the `.workflow.yaml`
+ * file on disk — no `ref`, `label`, `description`, `parameters`, `output`,
+ * or `tags`. Those belong in the companion action YAML.
+ */
+export function builderStateToGraph(
+  state: WorkflowBuilderState,
+  actionSchemas?: Map<string, Record<string, unknown> | null>,
+): WorkflowGraphDefinition {
   const tasks: WorkflowYamlTask[] = state.tasks.map((task) => {
     const yamlTask: WorkflowYamlTask = {
       name: task.name,
@@ -446,34 +527,51 @@ export function builderStateToDefinition(
     return yamlTask;
   });
 
-  const definition: WorkflowYamlDefinition = {
-    ref: `${state.packRef}.${state.name}`,
-    label: state.label,
+  const graph: WorkflowGraphDefinition = {
     version: state.version,
     tasks,
   };
 
+  if (Object.keys(state.vars).length > 0) {
+    graph.vars = state.vars;
+  }
+
+  return graph;
+}
+
+/**
+ * Extract the action YAML definition from builder state.
+ *
+ * This produces the content for the companion `actions/{name}.yaml` file
+ * that owns action-level metadata and references the workflow file.
+ */
+export function builderStateToActionYaml(
+  state: WorkflowBuilderState,
+): ActionYamlDefinition {
+  const action: ActionYamlDefinition = {
+    ref: `${state.packRef}.${state.name}`,
+    label: state.label,
+    enabled: state.enabled,
+    workflow_file: `workflows/${state.name}.workflow.yaml`,
+  };
+
   if (state.description) {
-    definition.description = state.description;
+    action.description = state.description;
   }
   if (Object.keys(state.parameters).length > 0) {
-    definition.parameters = state.parameters;
+    action.parameters = state.parameters;
   }
   if (Object.keys(state.output).length > 0) {
-    definition.output = state.output;
+    action.output = state.output;
   }
-  if (Object.keys(state.vars).length > 0) {
-    definition.vars = state.vars;
-  }
   if (state.tags.length > 0) {
-    definition.tags = state.tags;
+    action.tags = state.tags;
   }
 
-  return definition;
+  return action;
 }
 
 // ---------------------------------------------------------------------------


@@ -0,0 +1,120 @@
# RUST_MIN_STACK Fix & Workflow File Metadata Separation
**Date**: 2026-02-05
## Summary
Three related changes: (1) fixed `rustc` SIGSEGV crashes during Docker release builds by increasing the compiler stack size, (2) enforced the separation of concerns between action YAML and workflow YAML files across the parser, loaders, and registrars, and (3) updated the workflow builder UI and API save endpoints to produce the correct two-file layout.
## Problem 1: rustc SIGSEGV in Docker Builds
Docker Compose builds were failing with `rustc interrupted by SIGSEGV` during release compilation. The error message suggested increasing `RUST_MIN_STACK` to 16 MiB.
### Fix
Added `ENV RUST_MIN_STACK=16777216` to the build stage of all 7 Rust Dockerfiles:
- `docker/Dockerfile` (both build stages)
- `docker/Dockerfile.optimized`
- `docker/Dockerfile.worker`
- `docker/Dockerfile.worker.optimized`
- `docker/Dockerfile.sensor.optimized`
- `docker/Dockerfile.pack-binaries`
- `docker/Dockerfile.pack-builder`
Also added `export RUST_MIN_STACK := 16777216` to the `Makefile` for local builds.
## Problem 2: Workflow File Metadata Duplication
The `timeline_demo.yaml` workflow file (in `actions/workflows/`) redundantly defined `ref`, `label`, `description`, `parameters`, `output`, and `tags` — all of which are action-level concerns that belong exclusively in the companion action YAML (`actions/timeline_demo.yaml`). This violated the design principle that action YAML owns the interface and workflow YAML owns the execution graph.
The root cause was that `WorkflowDefinition` required `ref` and `label` as mandatory fields, forcing even action-linked workflow files to include them.
### Backend Parser & Loader Changes
**`crates/common/src/workflow/parser.rs`**:
- Made `ref` and `label` optional with `#[serde(default)]` and removed `min = 1` validators
- Added two new tests: `test_parse_action_linked_workflow_without_ref_and_label` and `test_parse_standalone_workflow_still_works_with_ref_and_label`
**`crates/common/src/pack_registry/loader.rs`**:
- `load_workflow_for_action()` now fills in `ref`/`label`/`description`/`tags` from the action YAML when the workflow file omits them (action YAML is authoritative)
**`crates/common/src/workflow/registrar.rs`** and **`crates/executor/src/workflow/registrar.rs`**:
- Added `effective_ref()` and `effective_label()` helper methods that fall back to `WorkflowFile.ref_name` / `WorkflowFile.name` (derived from filename) when the workflow YAML omits them
- Threaded effective values through `create_workflow`, `update_workflow`, `create_companion_action`, and `ensure_companion_action`
**`scripts/load_core_pack.py`**:
- `upsert_workflow_definition()` now derives `ref`/`label`/`description`/`tags` from the action YAML when the workflow file omits them
**`packs.external/python_example/actions/workflows/timeline_demo.yaml`**:
- Stripped `ref`, `label`, `description`, `parameters`, `output`, and `tags` — file now contains only `version`, `vars`, `tasks`, and `output_map`
## Problem 3: Workflow Builder Wrote Full Definition to Disk
The visual workflow builder's save endpoints (`POST /api/v1/packs/{pack_ref}/workflow-files` and `PUT /api/v1/workflows/{ref}/file`) were writing the full `WorkflowYamlDefinition` — including action-level metadata — to the `.workflow.yaml` file on disk. The YAML viewer also showed a single monolithic preview.
### API Save Endpoint Changes
**`crates/api/src/routes/workflows.rs`** — `write_workflow_yaml()`:
- Now writes **two files** per save:
1. **Workflow YAML** (`actions/workflows/{name}.workflow.yaml`) — graph-only via `strip_action_level_fields()` which removes `ref`, `label`, `description`, `parameters`, `output`, `tags`
2. **Action YAML** (`actions/{name}.yaml`) — action-level metadata via `build_action_yaml()` which produces `ref`, `label`, `description`, `enabled`, `workflow_file`, `parameters`, `output`, `tags`
- Added `strip_action_level_fields()` helper — extracts only `version`, `vars`, `tasks`, `output_map` from the definition JSON
- Added `build_action_yaml()` helper — constructs the companion action YAML with proper formatting and comments
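The field partition that `strip_action_level_fields()` enforces can be shown with a small sketch. The real helper is Rust code operating on the definition JSON; this TypeScript mirror (with a hypothetical `demo.action` task) only demonstrates which keys survive into the `.workflow.yaml` file.

```typescript
// Graph-level keys kept in the .workflow.yaml file; everything else
// (ref, label, description, parameters, output, tags) is dropped.
const GRAPH_KEYS = ["version", "vars", "tasks", "output_map"];

function stripActionLevelFields(
  definition: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(definition).filter(([key]) => GRAPH_KEYS.includes(key)),
  );
}

// Hypothetical saved definition (task action ref is illustrative).
const saved = {
  ref: "python_example.timeline_demo",
  label: "Timeline Demo",
  version: "1.0",
  tasks: [{ name: "first", action: "demo.action" }],
  tags: ["demo"],
};

console.log(Object.keys(stripActionLevelFields(saved))); // → [ 'version', 'tasks' ]
```

The dropped keys are exactly the ones `build_action_yaml()` re-emits into the companion action file, so the two outputs together round-trip the full definition.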
### Frontend Changes
**`web/src/types/workflow.ts`**:
- Added `WorkflowGraphDefinition` interface (graph-only: `version`, `vars`, `tasks`, `output_map`)
- Added `ActionYamlDefinition` interface (action metadata: `ref`, `label`, `description`, `enabled`, `workflow_file`, `parameters`, `output`, `tags`)
- Added `builderStateToGraph()` — extracts graph-only definition from builder state
- Added `builderStateToActionYaml()` — extracts action metadata from builder state
- Refactored `builderStateToDefinition()` to delegate to `builderStateToGraph()` internally
**`web/src/pages/actions/WorkflowBuilderPage.tsx`**:
- YAML viewer now shows **two side-by-side panels** instead of a single preview:
- **Left panel (blue, 2/5 width)**: Action YAML — shows `actions/{name}.yaml` content with ref, label, parameters, workflow_file reference
- **Right panel (green, 3/5 width)**: Workflow YAML — shows `actions/workflows/{name}.workflow.yaml` with graph-only content (version, vars, tasks)
- Each panel has its own copy button, filename label, and description bar explaining the file's role
- Separate `actionYamlPreview` and `workflowYamlPreview` memos replace the old `yamlPreview`
## Design: Two Valid Workflow File Conventions
1. **Standalone workflows** (`workflows/*.yaml`) — no companion action YAML, so they carry their own `ref`, `label`, `parameters`, etc. Loaded by `WorkflowLoader.sync_pack_workflows()`.
2. **Action-linked workflows** (`actions/workflows/*.yaml`) — referenced via `workflow_file` from an action YAML. The action YAML is the single authoritative source for `ref`, `label`, `description`, `parameters`, `output`, and `tags`. The workflow file contains only the execution graph: `version`, `vars`, `tasks`, `output_map`.
The visual workflow builder and API save endpoints now produce the action-linked layout (convention 2) with properly separated files.
## Files Changed
| File | Change |
|------|--------|
| `docker/Dockerfile` | Added `RUST_MIN_STACK=16777216` (both stages) |
| `docker/Dockerfile.optimized` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.worker` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.worker.optimized` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.sensor.optimized` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.pack-binaries` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.pack-builder` | Added `RUST_MIN_STACK=16777216` |
| `Makefile` | Added `export RUST_MIN_STACK` |
| `crates/common/src/workflow/parser.rs` | Optional `ref`/`label`, 2 new tests |
| `crates/common/src/pack_registry/loader.rs` | Action YAML fallback for metadata |
| `crates/common/src/workflow/registrar.rs` | `effective_ref()`/`effective_label()` |
| `crates/executor/src/workflow/registrar.rs` | `effective_ref()`/`effective_label()` |
| `scripts/load_core_pack.py` | Action YAML fallback for metadata |
| `crates/api/src/routes/workflows.rs` | Two-file write, `strip_action_level_fields()`, `build_action_yaml()` |
| `web/src/types/workflow.ts` | `WorkflowGraphDefinition`, `ActionYamlDefinition`, `builderStateToGraph()`, `builderStateToActionYaml()` |
| `web/src/pages/actions/WorkflowBuilderPage.tsx` | Two-panel YAML viewer |
| `packs.external/python_example/actions/workflows/timeline_demo.yaml` | Stripped action-level metadata |
| `AGENTS.md` | Updated Workflow File Storage, YAML viewer, Docker Build Optimization sections |
## Test Results
- All 23 parser tests pass (including 2 new)
- All 9 loader tests pass
- All 2 registrar tests pass
- All 598 workspace lib tests pass
- Zero TypeScript errors
- Zero compiler warnings
- Zero build errors

# CLI Workflow Upload Command
**Date**: 2026-03-04
## Summary
Added a `workflow` subcommand group to the Attune CLI, enabling users to upload individual workflow actions to existing packs without requiring a full pack upload. Also fixed a pre-existing `-y` short flag conflict across multiple CLI subcommands.
## Changes Made
### New File: `crates/cli/src/commands/workflow.rs`
New CLI subcommand module with four commands:
- **`attune workflow upload <action-yaml-path>`** — Reads a local action YAML file, extracts the `workflow_file` field to locate the companion workflow YAML, determines the pack from the action ref (e.g., `mypack.deploy` → pack `mypack`), and uploads both files to the API via `POST /api/v1/packs/{pack_ref}/workflow-files`. On 409 Conflict, fails unless `--force` is passed, which triggers a `PUT /api/v1/workflows/{ref}/file` update instead.
- **`attune workflow list`** — Lists workflows with optional `--pack`, `--tags`, and `--search` filters.
- **`attune workflow show <ref>`** — Shows workflow details including a task summary table (name, action, transition count).
- **`attune workflow delete <ref>`** — Deletes a workflow with `--yes` confirmation bypass.
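The upload command's create-or-update flow can be sketched in Python (the `post_create`/`put_update` callables are hypothetical stand-ins for the real HTTP client; the endpoints and status codes follow the description above):

```python
def upload_workflow(post_create, put_update, pack_ref, payload, force=False):
    """Try create first; on 409 Conflict, fall back to update when --force is set."""
    status = post_create(f"/api/v1/packs/{pack_ref}/workflow-files", payload)
    if status == 409:
        if not force:
            raise SystemExit(
                f"workflow already exists in pack {pack_ref}; use --force")
        ref = payload["ref"]
        status = put_update(f"/api/v1/workflows/{ref}/file", payload)
    return status
```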
### Modified Files
| File | Change |
|------|--------|
| `crates/cli/src/commands/mod.rs` | Added `pub mod workflow` |
| `crates/cli/src/main.rs` | Added `Workflow` variant to `Commands` enum, import, and dispatch |
| `crates/cli/src/commands/action.rs` | Fixed `-y` short flag conflict on `Delete.yes` |
| `crates/cli/src/commands/trigger.rs` | Fixed `-y` short flag conflict on `Delete.yes` |
| `crates/cli/src/commands/pack.rs` | Fixed `-y` short flag conflict on `Uninstall.yes` |
| `AGENTS.md` | Added workflow CLI documentation to CLI Tool section |
### New Test File: `crates/cli/tests/test_workflows.rs`
21 integration tests covering:
- List (authenticated, by pack, JSON/YAML output, empty, unauthenticated)
- Show (table, JSON, not found)
- Delete (with `--yes`, JSON output)
- Upload (success, JSON output, conflict without force, conflict with force, missing action file, missing workflow file, non-workflow action, invalid YAML)
- Help text (workflow help, upload help)
### Bug Fix: `-y` Short Flag Conflict
The global `--yaml` flag uses `-y` as its short form. Three existing subcommands (`action delete`, `trigger delete`, `pack uninstall`) also defined `-y` as a short flag for `--yes`. This caused a clap runtime panic when both flags were in scope (e.g., `attune --yaml action delete ref --yes`). Fixed by removing the short flag from all `yes` arguments — they now only accept `--yes` (long form).
## Design Decisions
- **Reuses existing API endpoints** — No new server-side code needed. The CLI constructs a `SaveWorkflowFileRequest` JSON payload from the two local YAML files and posts to the existing workflow-file endpoints.
- **Pack determined from action ref** — The pack ref is extracted from the action's `ref` field using the last-dot convention (e.g., `org.infra.deploy` → pack `org.infra`, name `deploy`).
- **Workflow path resolution** — The `workflow_file` value is resolved relative to the action YAML's parent directory, matching how the pack loader resolves it relative to the `actions/` directory.
- **Create-or-update pattern** — Upload attempts create first; on 409 with `--force`, falls back to update. This matches the `pack upload --force` UX pattern.
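The ref-splitting and path-resolution conventions above can be sketched in Python (the real `split_action_ref`/`resolve_workflow_path` helpers are Rust in the CLI crate):

```python
from pathlib import PurePosixPath

def split_action_ref(ref: str) -> tuple[str, str]:
    """Last-dot convention: everything before the final dot is the pack ref."""
    pack, _, name = ref.rpartition(".")
    if not pack:
        raise ValueError(f"action ref {ref!r} has no pack component")
    return pack, name

def resolve_workflow_path(action_yaml_path: str, workflow_file: str) -> str:
    """Resolve workflow_file relative to the action YAML's parent directory."""
    return str(PurePosixPath(action_yaml_path).parent / workflow_file)
```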
## Test Results
- **Unit tests**: 6 new (split_action_ref, resolve_workflow_path variants)
- **Integration tests**: 21 new
- **Total CLI tests**: 160 passed, 0 failed, 1 ignored (pre-existing)
- **Compiler warnings**: 0


# Typed Publish Directives in Workflow Definitions
**Date**: 2026-03-04
## Problem
The `python_example.timeline_demo` workflow action failed to execute with:
```
Runtime not found: No runtime found for action: python_example.timeline_demo
(available: node.js, python, shell)
```
This error was misleading — the real issue was that the workflow definition YAML
failed to parse during pack registration, so the `workflow_definition` record was
never created and the action's `workflow_def` FK remained NULL. Without a linked
workflow definition, the executor treated it as a regular action and dispatched it
to a worker, which couldn't find a runtime for a workflow action.
### Root Cause
The YAML parsing error was:
```
tasks[7].next[0].publish: data did not match any variant of untagged enum
PublishDirective at line 234 column 11
```
The `PublishDirective::Simple` variant was defined as `HashMap<String, String>`,
but the workflow YAML contained non-string publish values:
```yaml
publish:
- validation_passed: true # boolean, not a string
- validation_passed: false # boolean, not a string
```
YAML parses `true`/`false` as booleans, which couldn't deserialize into `String`.
## Solution
Changed `PublishDirective::Simple` from `HashMap<String, String>` to
`HashMap<String, serde_json::Value>` so publish directives can carry any
JSON-compatible type: strings (including template expressions), booleans,
numbers, arrays, objects, and null.
### Files Modified
| File | Change |
|------|--------|
| `crates/common/src/workflow/parser.rs` | `PublishDirective::Simple` value type → `serde_json::Value` |
| `crates/executor/src/workflow/parser.rs` | Same change (executor's local copy) |
| `crates/executor/src/workflow/graph.rs` | Renamed `PublishVar.expression: String` → `PublishVar.value: JsonValue` with `#[serde(alias = "expression")]` for backward compat with stored task graphs; imported `serde_json::Value` |
| `crates/executor/src/scheduler.rs` | Updated publish map from `HashMap<String, String>` to `HashMap<String, JsonValue>` |
| `crates/executor/src/workflow/context.rs` | `publish_from_result` accepts `HashMap<String, JsonValue>`, passes values directly to `render_json` (strings get template-rendered, non-strings pass through unchanged) |
| `crates/common/src/workflow/expression_validator.rs` | Only validates string values as templates; non-string literals are skipped |
| `packs.external/python_example/actions/workflows/timeline_demo.yaml` | Fixed `result().items` → `result().data.items` (secondary bug in workflow definition) |
### Type Preservation
The rendering pipeline now correctly preserves types end-to-end:
- **String values** (e.g., `"{{ result().data }}"`) → rendered through expression engine with type preservation
- **Boolean values** (e.g., `true`) → stored as `JsonValue::Bool(true)`, pass through `render_json` unchanged
- **Numeric values** (e.g., `42`, `3.14`) → stored as `JsonValue::Number`, pass through unchanged
- **Null** → stored as `JsonValue::Null`, passes through unchanged
- **Arrays/Objects** → stored as-is, with any nested string templates rendered recursively
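The pass-through rule can be sketched in Python (the real `publish_from_result` is Rust in `crates/executor/src/workflow/context.rs`; only one level of nesting is shown here, and the `render` callable stands in for the expression engine):

```python
from typing import Any, Callable

def publish_from_result(directives: dict[str, Any],
                        render: Callable[[str], Any]) -> dict[str, Any]:
    """Strings go through the template engine; everything else passes through."""
    out: dict[str, Any] = {}
    for key, value in directives.items():
        if isinstance(value, str):
            out[key] = render(value)          # may itself return a non-string
        elif isinstance(value, list):
            out[key] = [render(v) if isinstance(v, str) else v for v in value]
        elif isinstance(value, dict):
            out[key] = {k: render(v) if isinstance(v, str) else v
                        for k, v in value.items()}
        else:
            out[key] = value                  # bool / number / None unchanged
    return out
```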
### Tests Added
- `parser::tests::test_typed_publish_values_in_transitions` — verifies YAML parsing of booleans, numbers, strings, templates, and null in publish directives
- `graph::tests::test_typed_publish_values` — verifies typed values survive graph construction
- `context::tests::test_publish_typed_values` — verifies typed values pass through `publish_from_result` with correct types (boolean stays boolean, not string "true")
## Verification
After deploying the fix:
1. Re-registered `python_example` pack — workflow definition created successfully (ID: 2)
2. Action `python_example.timeline_demo` linked to `workflow_def = 2`
3. Executed the workflow — executor correctly identified it as a workflow action and orchestrated 15 child task executions through all stages: initialize → parallel fan-out (build/lint/scan) → merge join → generate items → with_items(×3) → validate → finalize
4. Workflow variables confirmed type preservation: `validation_passed: true` (boolean), `items_processed: 3` (integer), `number_list: [1, 2, 3]` (array)

# Fix: with_items Race Condition Causing Duplicate Task Dispatches
**Date**: 2026-03-04
**Component**: Executor service (`crates/executor/src/scheduler.rs`)
**Issue**: Workflow tasks downstream of `with_items` tasks were being dispatched multiple times
## Problem
When a `with_items` task (e.g., `process_items` with `concurrency: 3`) had multiple items completing nearly simultaneously, the downstream successor task (e.g., `validate`) would be dispatched once per concurrently-completing item instead of once total.
**Root cause**: Workers update execution status in the database to `Completed` *before* publishing the `ExecutionCompleted` MQ message. The completion listener processes MQ messages sequentially, but by the time it processes item N's completion message, items N+1, N+2, etc. may already be marked `Completed` in the database. This means the `siblings_remaining` query (which checks DB status) returns 0 for multiple items, and each one falls through to transition evaluation and dispatches the successor task.
### Concrete Scenario
With `process_items` (5 items, `concurrency: 3`) → `validate`:
1. Items 3 and 4 finish on separate workers nearly simultaneously
2. Worker for item 3 updates DB: status = Completed, then publishes MQ message
3. Worker for item 4 updates DB: status = Completed, then publishes MQ message
4. Completion listener processes item 3's message:
- `siblings_remaining` query: item 4 is already Completed in DB → **0 remaining**
- Falls through → dispatches `validate`
5. Completion listener processes item 4's message:
- `siblings_remaining` query: all items completed → **0 remaining**
- Falls through → dispatches `validate` **again**
With `concurrency: 3` and tasks of equal duration, up to 3 items could complete simultaneously, causing the successor to be dispatched 3 times.
## Fix
Two-layer defense added to `advance_workflow()`:
### Layer 1: Persisted state check (with_items early return)
After the `siblings_remaining` check passes (all items done), but before evaluating transitions, the fix checks whether `task_name` is already present in the *persisted* `completed_tasks` or `failed_tasks` from the `workflow_execution` record. If so, a previous `advance_workflow` invocation already handled this task's final completion — return early.
This is efficient because it uses data already loaded at the top of the function.
### Layer 2: Already-dispatched DB check (all successor tasks)
Before dispatching ANY successor task, the fix queries the `execution` table for existing child executions with the same `workflow_execution` ID and `task_name`. If any exist, the successor has already been dispatched by a prior call — skip it.
This belt-and-suspenders guard catches edge cases regardless of how the race manifests, including scenarios where the persisted `completed_tasks` list hasn't been updated yet.
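Together the two layers reduce to a pure predicate, sketched here in Python (the real guards are inline Rust in `advance_workflow()`; set arguments stand in for the persisted lists and the DB query):

```python
def should_dispatch_successor(task_name: str, successor: str,
                              persisted_completed: set,
                              persisted_failed: set,
                              existing_child_task_names: set) -> bool:
    """Two-layer race guard mirroring advance_workflow()."""
    # Layer 1: a prior invocation already recorded this task's final completion.
    if task_name in persisted_completed or task_name in persisted_failed:
        return False
    # Layer 2: the successor already has child executions in the DB.
    if successor in existing_child_task_names:
        return False
    return True
```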
## Files Changed
- `crates/executor/src/scheduler.rs` — Added two guards in `advance_workflow()`:
1. Lines ~1035-1066: Early return for with_items tasks already in persisted completed/failed lists
2. Lines ~1220-1250: DB existence check before dispatching any successor task
## Testing
- All 601 unit tests pass across the workspace (0 failures, 8 intentionally ignored)
- Zero compiler warnings
- The fix is defensive and backward-compatible — no changes to data models, APIs, or MQ protocols

# Workflow Action `workflow_file` Field & Timeline Demo Workflow
**Date**: 2026-03-04
**Scope**: Pack loading architecture, workflow file discovery, demo workflow
## Summary
Introduced a `workflow_file` field for action YAML definitions that separates action-level metadata from workflow graph definitions. This enables a clean conceptual divide: the action YAML controls ref, label, parameters, policies, and tags, while the workflow file contains the execution graph (tasks, transitions, variables). Multiple actions can reference the same workflow file with different configurations, so one graph can be exposed under several refs, each with its own parameters and policies.
Also created a comprehensive demo workflow in the `python_example` pack that exercises the Workflow Timeline DAG visualizer.
## Architecture Change
### Before
Workflows could be registered two ways, each with limitations:
1. **`workflows/` directory** (pack root) — scanned by `WorkflowLoader`, registered by `WorkflowRegistrar` which auto-creates a companion action. No separation of action metadata from workflow definition.
2. **API endpoints** (`POST /api/v1/packs/{ref}/workflow-files`) — writes to `actions/workflows/`, creates both `workflow_definition` and companion `action` records. Only available via the visual builder, not during pack file loading.
The `PackComponentLoader` had no awareness of workflow files at all — it only loaded actions, triggers, runtimes, and sensors from their respective directories.
### After
A third path is now supported, bridging both worlds:
3. **Action YAML with `workflow_file` field** — an action YAML in `actions/*.yaml` can include `workflow_file: workflows/timeline_demo.yaml` (path relative to `actions/`). During pack loading, the `PackComponentLoader`:
- Reads and parses the referenced workflow YAML
- Creates/updates a `workflow_definition` record
- Creates the action record with `workflow_def` FK linked
- Skips runtime resolution (workflow actions have no runner_type)
- Uses the workflow file path as the entrypoint
This preserves the clean separation the visual builder already uses (action metadata in one place, workflow graph in another) while making it work with the pack file loading pipeline.
### Dual-Directory Workflow Scanning
The `WorkflowLoader` now scans **two** directories:
1. `{pack_dir}/workflows/` — legacy standalone workflow files
2. `{pack_dir}/actions/workflows/` — visual-builder and action-linked workflow files
Files with `.workflow.yaml` suffix have the `.workflow` portion stripped when deriving the name/ref (e.g., `deploy.workflow.yaml` → name `deploy`, ref `pack.deploy`). If the same ref appears in both directories, `actions/workflows/` wins. The `reload_workflow` method searches `actions/workflows/` first with all extension variants.
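Both behaviors, suffix stripping and directory precedence, can be sketched in a few lines of Python (hypothetical helper names; the real logic is Rust in `crates/common/src/workflow/loader.rs`):

```python
from pathlib import PurePosixPath

def workflow_name_from_file(filename: str) -> str:
    """'deploy.workflow.yaml' -> 'deploy'; plain 'deploy.yaml' -> 'deploy'."""
    stem = PurePosixPath(filename).stem        # strips the .yaml/.yml suffix
    if stem.endswith(".workflow"):
        stem = stem[: -len(".workflow")]
    return stem

def merge_workflow_refs(legacy: dict, action_linked: dict) -> dict:
    """On ref collision, actions/workflows/ entries win over workflows/."""
    merged = dict(legacy)
    merged.update(action_linked)
    return merged
```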
## Files Changed
### Rust (`crates/common/src/pack_registry/loader.rs`)
- Added imports for `WorkflowDefinitionRepository`, `CreateWorkflowDefinitionInput`, `UpdateWorkflowDefinitionInput`, and `parse_workflow_yaml`
- **`load_actions()`**: When action YAML contains `workflow_file`, calls `load_workflow_for_action()` to create/update the workflow definition, sets entrypoint to the workflow file path, skips runtime resolution, and links the action to the workflow definition after creation/update
- **`load_workflow_for_action()`** (new): Reads and parses the workflow YAML, creates or updates the `workflow_definition` record, respects action YAML schema overrides (action's `parameters`/`output` take precedence over the workflow file's own schemas)
### Rust (`crates/common/src/workflow/loader.rs`)
- **`load_pack_workflows()`**: Now scans both `workflows/` and `actions/workflows/`, with the latter taking precedence on ref collision
- **`reload_workflow()`**: Searches `actions/workflows/` first, trying `.workflow.yaml`, `.yaml`, `.workflow.yml`, and `.yml` extensions before falling back to `workflows/`
- **`scan_workflow_files()`**: Strips `.workflow` suffix from filenames (e.g., `deploy.workflow.yaml` → name `deploy`)
- **3 new tests**: `test_scan_workflow_files_strips_workflow_suffix`, `test_load_pack_workflows_scans_both_directories`, `test_reload_workflow_finds_actions_workflows_dir`
### Python (`scripts/load_core_pack.py`)
- **`upsert_workflow_definition()`** (new): Reads a workflow YAML file, upserts into `workflow_definition` table, returns the ID
- **`upsert_actions()`**: Detects `workflow_file` field, calls `upsert_workflow_definition()`, sets entrypoint to workflow file path, skips runtime resolution for workflow actions, links action to workflow definition via `UPDATE action SET workflow_def = ...`
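A condensed sketch of that branch (a hypothetical pure `plan_action_load()` helper; the real `upsert_actions()` performs DB upserts inline, and the default runner value is illustrative):

```python
def plan_action_load(action_yaml: dict, workflow_def_id_for) -> dict:
    """Decide how to load one action, mirroring the workflow_file branch."""
    plan = {"ref": action_yaml["ref"]}
    wf = action_yaml.get("workflow_file")
    if wf:
        plan["workflow_def"] = workflow_def_id_for(wf)  # upsert, return the ID
        plan["entrypoint"] = wf        # workflow file path is the entrypoint
        plan["runtime"] = None         # workflow actions skip runtime resolution
    else:
        plan["workflow_def"] = None
        plan["entrypoint"] = action_yaml["entrypoint"]
        plan["runtime"] = action_yaml.get("runner_type", "python")
    return plan
```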
### Demo Pack Files (`packs.external/python_example/`)
- **`actions/simulate_work.py`** + **`actions/simulate_work.yaml`**: New action that simulates a unit of work with configurable duration, optional failure simulation, and structured JSON output
- **`actions/timeline_demo.yaml`**: Action YAML with `workflow_file: workflows/timeline_demo.yaml` — controls action-level metadata
- **`actions/workflows/timeline_demo.yaml`**: Workflow definition with 11 tasks, 18 transition edges, exercising parallel fan-out/fan-in, `with_items` + concurrency, failure paths, retries, timeouts, publish directives, and custom edge styling via `__chart_meta__`
### Documentation
- **`AGENTS.md`**: Updated Pack Component Loading Order, added Workflow Action YAML (`workflow_file` field) section, added Workflow File Discovery (dual-directory scanning) section, added pitfall #7 (never put workflow content directly in action YAML), renumbered subsequent items
- **`packs.external/python_example/README.md`**: Added docs for `simulate_work`, `timeline_demo` workflow, and usage examples
## Test Results
- **596 unit tests passing**, 0 failures
- **0 compiler warnings** across the workspace
- 3 new tests for the workflow loader changes, all passing
- Integration tests require `attune_test` database (pre-existing infrastructure issue, unrelated)
## Timeline Demo Workflow Features
The `python_example.timeline_demo` workflow creates this execution shape:
```
initialize ─┬─► build_artifacts(6s) ─┐
            ├─► run_linter(3s) ──────┼─► merge_results ─► generate_items ─► process_items(×5, 3∥) ─► validate ─┬─► finalize_success
            └─► security_scan(4s) ───┘                                                                         └─► handle_failure ─► finalize_failure
```
| Feature | How Exercised |
|---------|--------------|
| Parallel fan-out | `initialize` → 3 branches with different durations |
| Fan-in / join | `merge_results` with `join: 3` |
| `with_items` + concurrency | `process_items` expands to N items, `concurrency: 3` |
| Failure paths | Every task has `{{ failed() }}` transitions |
| Timeout handling | `security_scan` has `timeout: 30` + `{{ timed_out() }}` |
| Retries | `build_artifacts` and `validate` with retry configs |
| Publish directives | Results passed between stages |
| Custom edge colors/labels | Via `__chart_meta__` on transitions |
| Configurable failure | `fail_validation=true` exercises the error path |

# Workflow Timeline DAG Visualization
**Date**: 2026-02-05
**Component**: `web/src/components/executions/workflow-timeline/`
**Integration**: `web/src/pages/executions/ExecutionDetailPage.tsx`
## Overview
Added a Prefect-style workflow run timeline DAG visualization to the execution detail page for workflow executions. The component renders child task executions as horizontal duration bars on a time axis, connected by curved dependency edges that reflect the actual workflow definition transitions.
## Architecture
The implementation is a pure SVG renderer with no additional dependencies — it uses React, TypeScript, and inline SVG only (no D3, no React Flow, no new npm packages).
### Module Structure
```
web/src/components/executions/workflow-timeline/
├── index.ts # Barrel exports
├── types.ts # Type definitions, color constants, layout config
├── data.ts # Data transformation (executions → timeline structures)
├── layout.ts # Layout engine (lane assignment, time scaling, edge paths)
├── TimelineRenderer.tsx # SVG renderer with interactions
└── WorkflowTimelineDAG.tsx # Orchestrator component (data fetching + layout + render)
```
### Data Flow
1. **WorkflowTimelineDAG** (orchestrator) fetches child executions via `useChildExecutions` and the workflow definition via `useWorkflow(actionRef)`.
2. **data.ts** transforms `ExecutionSummary[]` + `WorkflowDefinition` into `TimelineTask[]`, `TimelineEdge[]`, and `TimelineMilestone[]`.
3. **layout.ts** computes lane assignments (greedy packing), time→pixel scale, node positions, grid lines, and cubic Bezier edge paths.
4. **TimelineRenderer** renders everything as SVG with interactive features.
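The gridline and scaling work in steps 3–4 can be sketched in Python (a standard 1-2-5 "nice number" scheme, assumed to approximate what `layout.ts` computes):

```python
import math

def nice_interval(span: float, target_ticks: int = 6) -> float:
    """Pick a 1/2/5 x 10^k step so roughly target_ticks gridlines fit."""
    raw = span / max(target_ticks, 1)
    magnitude = 10 ** math.floor(math.log10(raw))
    for mult in (1, 2, 5, 10):
        if mult * magnitude >= raw:
            return mult * magnitude
    return 10 * magnitude

def time_to_x(t: float, t0: float, t1: float, width: float) -> float:
    """Linear time -> pixel scale across the chart width."""
    return (t - t0) / (t1 - t0) * width
```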
## Key Features
### Visualization
- **Task bars**: Horizontal rounded rectangles colored by state (green=completed, blue=running, red=failed, gray=pending, orange=timeout). Left accent bar indicates state. Running tasks pulse.
- **Milestones**: Synthetic start/end diamond nodes plus merge/fork junctions inserted when fan-in/fan-out exceeds 3 tasks.
- **Edges**: Curved cubic Bezier dependency lines with transition-aware coloring and labels derived from the workflow definition (`succeeded`, `failed`, `timed out`, custom expressions). Failure edges are dashed, timeout edges use dash-dot pattern.
- **Time axis**: Vertical gridlines at "nice" intervals with timestamp labels along the top.
- **Lane packing**: Greedy algorithm assigns tasks to non-overlapping y-lanes, with optional lane reordering to cluster tasks with shared upstream dependencies.
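The greedy lane packing reduces to first-fit interval scheduling, sketched here in Python (lane reordering for dependency clustering is omitted):

```python
def assign_lanes(tasks):
    """Each task takes the first lane that is free at its start time.
    tasks: iterable of (name, start, end)."""
    lane_free_at = []           # lane index -> time at which the lane frees up
    lanes = {}
    for name, start, end in sorted(tasks, key=lambda t: t[1]):
        for i, free in enumerate(lane_free_at):
            if free <= start:   # lane i is idle: reuse it
                lane_free_at[i] = end
                lanes[name] = i
                break
        else:                   # all lanes busy: open a new one
            lanes[name] = len(lane_free_at)
            lane_free_at.append(end)
    return lanes
```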
### Workflow Metadata Integration
- Fetches the workflow definition to extract the `next` transition array from each task definition.
- Maps definition task names to execution IDs (handles `with_items` expansions with multiple executions per task name).
- Classifies `when` expressions (`{{ succeeded() }}`, `{{ failed() }}`, `{{ timed_out() }}`) into edge kinds with appropriate colors.
- Reads `__chart_meta__` labels and custom colors from workflow definition transitions.
- Falls back to timing-based heuristic edge inference when no workflow definition is available.
### Interactions
- **Hover tooltip**: Shows task name, state, action ref, start/end times, duration, retry info, upstream/downstream counts.
- **Click selection**: Clicking a task highlights its full upstream/downstream path (BFS traversal) and dims unrelated nodes/edges.
- **Double-click navigation**: Navigates to the child execution's detail page.
- **Horizontal zoom**: Mouse wheel zooms the x-axis while keeping y-lanes stable. Zoom anchors to cursor position.
- **Pan**: Alt+drag or middle-mouse-drag pans horizontally via native scroll.
- **Expand/compact toggle**: Expand button widens the chart for complex workflows.
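The upstream/downstream BFS behind click selection can be sketched as (edges given as `(src, dst)` pairs derived from the workflow transitions):

```python
from collections import deque

def highlight_path(selected, edges):
    """BFS in both directions from the selected node; returns nodes to keep lit."""
    downstream, upstream = {}, {}
    for src, dst in edges:
        downstream.setdefault(src, []).append(dst)
        upstream.setdefault(dst, []).append(src)

    def bfs(start, adjacency):
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in adjacency.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Everything reachable upstream or downstream of the selection stays lit;
    # the renderer dims the rest.
    return bfs(selected, upstream) | bfs(selected, downstream)
```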
### Performance
- Edge paths are memoized per layout computation.
- Node lookups use a `Map<string, TimelineNode>` for O(1) access.
- Grid lines and highlighted paths are memoized with stable dependency arrays.
- ResizeObserver tracks container width for responsive layout without polling.
- No additional npm dependencies; SVG rendering handles 300+ tasks efficiently.
## Integration Point
The `WorkflowTimelineDAG` component is rendered on the execution detail page (`ExecutionDetailPage.tsx`) above the existing `WorkflowTasksPanel`, conditioned on `isWorkflow` (action has `workflow_def`).
Both components share a single TanStack Query cache entry for child executions (`["executions", { parent: id }]`) and both subscribe to WebSocket execution streams for real-time updates.
The `WorkflowTimelineDAG` accepts a `ParentExecutionInfo` interface (satisfied by both `ExecutionResponse` and `ExecutionSummary`) to avoid type casting at the integration point.
## Files Changed
| File | Change |
|------|--------|
| `web/src/components/executions/workflow-timeline/types.ts` | New — type definitions |
| `web/src/components/executions/workflow-timeline/data.ts` | New — data transformation |
| `web/src/components/executions/workflow-timeline/layout.ts` | New — layout engine |
| `web/src/components/executions/workflow-timeline/TimelineRenderer.tsx` | New — SVG renderer |
| `web/src/components/executions/workflow-timeline/WorkflowTimelineDAG.tsx` | New — orchestrator |
| `web/src/components/executions/workflow-timeline/index.ts` | New — barrel exports |
| `web/src/pages/executions/ExecutionDetailPage.tsx` | Modified — import + render WorkflowTimelineDAG |