working on workflows

AGENTS.md
@@ -102,6 +102,7 @@ docker compose logs -f <svc> # View logs
- **BuildKit cache mounts**: Persist cargo registry and compilation artifacts between builds
- **Cache strategy**: `sharing=shared` for registry/git (concurrent-safe), service-specific IDs for target caches
- **Parallel builds**: 4x faster than the old `sharing=locked` strategy - no serialization overhead
- **Rustc stack size**: All Rust Dockerfiles set `ENV RUST_MIN_STACK=16777216` (16 MiB) in the build stage to prevent `rustc` SIGSEGV crashes during release compilation. The `Makefile` also exports this variable for local builds.
- **Documentation**: See `docs/docker-layer-optimization.md`, `docs/QUICKREF-docker-optimization.md`, `docs/QUICKREF-buildkit-cache-strategy.md`

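The cache-mount and stack-size settings above combine in the build stage roughly like this (a minimal sketch; the actual Dockerfiles, base image, and cache IDs in this repo may differ):

```dockerfile
# Build stage of a Rust service Dockerfile (illustrative sketch only)
FROM rust:1-slim AS builder
# 16 MiB thread stack prevents rustc SIGSEGV during release compilation
ENV RUST_MIN_STACK=16777216
WORKDIR /app
COPY . .
# Registry/git caches use sharing=shared (concurrent-safe); the target cache
# gets a service-specific id so parallel service builds do not serialize.
RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
    --mount=type=cache,target=/app/target,id=myservice-target \
    cargo build --release
```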
### Docker Runtime Standardization
@@ -242,14 +243,14 @@ Completion listener advances workflow → Schedules successor tasks → Complete
**Migration Count**: 10 migrations (`000001` through `000010`) — see `migrations/` directory
- **Artifact System**: The `artifact` table stores metadata + structured data (progress entries via JSONB `data` column). The `artifact_version` table stores immutable content snapshots — either on disk (via `file_path` column) or in DB (via `content` BYTEA / `content_json` JSONB). Version numbering is auto-assigned via `next_artifact_version()` SQL function. A DB trigger (`enforce_artifact_retention`) auto-deletes oldest versions when count exceeds the artifact's `retention_limit`. `artifact.execution` is a plain BIGINT (no FK — execution is a hypertable). Progress-type artifacts use `artifact.data` (atomic JSON array append); file-type artifacts use `artifact_version` rows with `file_path` set. Binary content is excluded from default queries for performance (`SELECT_COLUMNS` vs `SELECT_COLUMNS_WITH_CONTENT`). **Visibility**: Each artifact has a `visibility` column (`artifact_visibility_enum`: `public` or `private`, DB default `private`). The `CreateArtifactRequest` DTO accepts `visibility` as `Option<ArtifactVisibility>` — when omitted the API route handler applies a **type-aware default**: `public` for Progress artifacts (informational status indicators), `private` for all other types. Callers can always override explicitly. Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. The visibility field is filterable via the search/list API (`?visibility=public`). Full RBAC enforcement is deferred — the column and basic query filtering are in place for future permission checks. **Notifications**: `artifact_created` and `artifact_updated` DB triggers (in migration `000008`) fire PostgreSQL NOTIFY with entity_type `artifact` and include `visibility` in the payload. 
The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry of the `data` JSONB array for progress-type artifacts. The Web UI `ExecutionProgressBar` component (`web/src/components/executions/ExecutionProgressBar.tsx`) renders an inline progress bar in the Execution Details card using the `useArtifactStream` hook (`web/src/hooks/useArtifactStream.ts`) for real-time WebSocket updates, with polling fallback via `useExecutionArtifacts`.
- **File-Based Artifact Storage**: File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use a shared filesystem volume instead of PostgreSQL BYTEA. The `artifact_version.file_path` column stores the relative path from the `artifacts_dir` root (e.g., `mypack/build_log/v1.txt`). Pattern: `{ref_with_dots_as_dirs}/v{version}.{ext}`. The artifact ref (globally unique) is used as the directory key — no execution ID in the path, so artifacts can outlive executions and be shared across them. **Endpoint**: `POST /api/v1/artifacts/{id}/versions/file` allocates a version number and file path without any file content; the execution process writes the file to `$ATTUNE_ARTIFACTS_DIR/{file_path}`. **Download**: `GET /api/v1/artifacts/{id}/download` and version-specific downloads check `file_path` first (read from disk), fall back to DB BYTEA/JSON. **Finalization**: After execution exits, the worker stats all file-backed versions for that execution and updates `size_bytes` on both `artifact_version` and parent `artifact` rows via direct DB access. **Cleanup**: Delete endpoints remove disk files before deleting DB rows; empty parent directories are cleaned up. **Backward compatible**: Existing DB-stored artifacts (`file_path = NULL`) continue to work unchanged.
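The path pattern above can be sketched as a one-liner (illustrative Python, not the actual Rust implementation; the helper name is made up):

```python
def artifact_file_path(ref: str, version: int, ext: str) -> str:
    """Map a globally unique artifact ref to its on-disk path, relative to
    the artifacts_dir root: dots in the ref become directory separators."""
    return f"{ref.replace('.', '/')}/v{version}.{ext}"

# ref "mypack.build_log", version 1, text content:
print(artifact_file_path("mypack.build_log", 1, "txt"))  # mypack/build_log/v1.txt
```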
- **Pack Component Loading Order**: Runtimes → Triggers → Actions (+ workflow definitions) → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order. When an action YAML contains a `workflow_file` field, the loader creates/updates the referenced `workflow_definition` record and links it to the action during the Actions phase.
### Workflow Execution Orchestration
- **Detection**: The `ExecutionScheduler` checks `action.workflow_def.is_some()` before dispatching to a worker. Workflow actions are orchestrated by the executor, not sent to workers.
- **Orchestration Flow**: Scheduler loads the `WorkflowDefinition`, builds a `TaskGraph`, creates a `workflow_execution` record, marks the parent execution as Running, builds an initial `WorkflowContext` from execution parameters and workflow vars, then dispatches entry-point tasks as child executions via MQ with rendered inputs.
- **Template Resolution**: Task inputs are rendered through `WorkflowContext.render_json()` before dispatching. Uses the expression engine for full operator/function support inside `{{ }}`. Canonical namespaces: `parameters`, `workflow` (mutable vars), `task` (results), `config` (pack config), `keystore` (secrets), `item`, `index`, `system`. Backward-compat aliases: `vars`/`variables` → `workflow`, `tasks` → `task`, bare names → `workflow` fallback. **Type-preserving**: pure template expressions like `"{{ item }}"` preserve the JSON type (integer `5` stays as `5`, not string `"5"`). Mixed expressions like `"Sleeping for {{ item }} seconds"` remain strings.
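The type-preserving behavior can be sketched as follows (a drastically simplified stand-in for the real expression engine: bare variable lookup only, no operators or functions):

```python
import re

def render_value(template, context):
    """Type-preserving template rendering sketch."""
    if not isinstance(template, str):
        return template  # non-strings pass through unchanged
    pure = re.fullmatch(r"\{\{\s*(\w+)\s*\}\}", template)
    if pure:
        # A pure expression preserves the underlying JSON type
        return context[pure.group(1)]
    # Mixed text interpolates values into a string
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(context[m.group(1)]), template)

ctx = {"item": 5}
print(render_value("{{ item }}", ctx))                       # 5 (int, not "5")
print(render_value("Sleeping for {{ item }} seconds", ctx))  # Sleeping for 5 seconds
```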
- **Function Expressions**: `{{ result() }}` returns the last completed task's result. `{{ result().field.subfield }}` navigates into it. `{{ succeeded() }}`, `{{ failed() }}`, `{{ timed_out() }}` return booleans. These are evaluated by `WorkflowContext.try_evaluate_function_call()`.
- **Publish Directives**: Transition `publish` directives are evaluated when a transition fires. Published variables are persisted to the `workflow_execution.variables` column and available to subsequent tasks via the `workflow` namespace (e.g., `{{ workflow.number_list }}`). Values can be **any JSON-compatible type**: string templates (e.g., `number_list: "{{ result().data.items }}"`), booleans (`validation_passed: true`), numbers (`count: 42`), arrays, objects, or null. The `PublishDirective::Simple` variant stores `HashMap<String, serde_json::Value>`. String values are template-rendered with type preservation (pure `{{ }}` expressions preserve the underlying JSON type); non-string values (booleans, numbers, null) pass through `render_json` unchanged — `true` stays as boolean `true`, not string `"true"`. The `PublishVar` struct in `graph.rs` uses a `value: JsonValue` field (with `#[serde(alias = "expression")]` for backward compat with stored task graphs).
- **Child Task Dispatch**: Each workflow task becomes a child execution with the task's actual action ref (e.g., `core.echo`), `workflow_task` metadata linking it to the `workflow_execution` record, and a parent reference to the workflow execution. Child executions re-enter the normal scheduling pipeline, so nested workflows work recursively.
- **with_items Expansion**: Tasks declaring `with_items: "{{ expr }}"` are expanded into child executions. The expression is resolved via the `WorkflowContext` to produce a JSON array, then each item gets its own child execution with `item`/`index` set on the context and `task_index` in `WorkflowTaskMetadata`. Completion tracking waits for ALL sibling items to finish before marking the task as completed/failed and advancing the workflow.
- **with_items Concurrency Limiting**: ALL child execution records are created in the database up front (with fully-rendered inputs), but only the first `N` are published to the message queue where `N` is the task's `concurrency` value (**default: 1**, i.e. serial execution). The remaining children stay at `Requested` status in the DB. As each item completes, `advance_workflow` counts in-flight siblings (`scheduling`/`scheduled`/`running`), calculates free slots (`concurrency - in_flight`), and calls `publish_pending_with_items_children()` which queries for `Requested`-status siblings ordered by `task_index` and publishes them. The DB `status = 'requested'` query is the authoritative source of undispatched items — no auxiliary state in workflow variables needed. The task is only marked complete when all siblings reach a terminal state. To run all items in parallel, explicitly set `concurrency` to the list length or a suitably large number.
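The sliding-window dispatch above can be sketched as follows (simplified Python; the function names are illustrative, the real logic lives in `advance_workflow` / `publish_pending_with_items_children`):

```python
def free_slots(concurrency: int, sibling_statuses) -> int:
    """Free dispatch slots = concurrency minus in-flight siblings."""
    in_flight = sum(s in ("scheduling", "scheduled", "running")
                    for s in sibling_statuses)
    return max(concurrency - in_flight, 0)

def next_to_publish(concurrency, siblings):
    """siblings: list of (task_index, status). Rows at 'requested' status
    in the DB are the authoritative set of undispatched items."""
    pending = sorted(i for i, s in siblings if s == "requested")
    slots = free_slots(concurrency, [s for _, s in siblings])
    return pending[:slots]

# With concurrency=2: one item in flight, one free slot, so one item goes out
sibs = [(0, "completed"), (1, "running"), (2, "requested"), (3, "requested")]
print(next_to_publish(2, sibs))  # [2]
```

With the default `concurrency=1` this degenerates to serial execution: one new item is published as each prior item completes.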
@@ -264,7 +265,13 @@ Completion listener advances workflow → Schedules successor tasks → Complete
- Development packs in `./packs.dev/` are bind-mounted directly for instant updates
- **Pack Binaries**: Native binaries (sensors) built separately with `./scripts/build-pack-binaries.sh`
- **Action Script Resolution**: Worker constructs file paths as `{packs_base_dir}/{pack_ref}/actions/{entrypoint}`
- **Workflow Action YAML (`workflow_file` field)**: An action YAML may include a `workflow_file` field (e.g., `workflow_file: workflows/timeline_demo.yaml`) pointing to a workflow definition file relative to the `actions/` directory. When present, the `PackComponentLoader` reads and parses the referenced workflow YAML, creates/updates a `workflow_definition` record, and links the action to it via `action.workflow_def`. This separates action-level metadata (ref, label, parameters, policies) from the workflow graph (tasks, transitions, variables), and allows **multiple actions to reference the same workflow file** with different parameter schemas or policy configurations. Workflow actions have no `runner_type` (runtime is `None`) — the executor orchestrates child task executions rather than sending to a worker.
- **Action-linked workflow files omit action-level metadata**: Workflow files referenced via `workflow_file` should contain **only the execution graph**: `version`, `vars`, `tasks`, `output_map`. The `ref`, `label`, `description`, `parameters`, `output`, and `tags` fields are omitted — the action YAML is the single authoritative source for those values. The `WorkflowDefinition` parser accepts empty `ref`/`label` (defaults to `""`), and the loader / registrar fall back to the action YAML (or filename-derived values) when they are missing. Standalone workflow files (in `workflows/`) still carry their own `ref`/`label` since they have no companion action YAML.
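The action/graph split described above might look like this for a hypothetical `mypack.deploy` action (field layout is illustrative only; consult real pack YAMLs for the authoritative shape):

```yaml
# actions/deploy.yaml: action-level metadata only
ref: mypack.deploy
label: Deploy
description: Example workflow action
parameters:
  target:
    type: string
    required: true
enabled: true
workflow_file: workflows/deploy.workflow.yaml
---
# actions/workflows/deploy.workflow.yaml: graph only (no ref/label/parameters)
version: 1
vars:
  attempt: 0
tasks:
  - name: build
    action: core.echo
    input:
      message: "deploying {{ parameters.target }}"
output_map: {}
```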
- **Workflow File Storage**: The visual workflow builder save endpoints (`POST /api/v1/packs/{pack_ref}/workflow-files` and `PUT /api/v1/workflows/{ref}/file`) write **two files** per workflow:
1. **Action YAML** at `{packs_base_dir}/{pack_ref}/actions/{name}.yaml` — action-level metadata (`ref`, `label`, `description`, `parameters`, `output`, `tags`, `workflow_file` reference, `enabled`). Built by `build_action_yaml()` in `crates/api/src/routes/workflows.rs`.
2. **Workflow YAML** at `{packs_base_dir}/{pack_ref}/actions/workflows/{name}.workflow.yaml` — graph-only (`version`, `vars`, `tasks`, `output_map`). The `strip_action_level_fields()` function removes `ref`, `label`, `description`, `parameters`, `output`, and `tags` from the definition before writing.
Pack-bundled workflows use the same directory layout and are discovered during pack registration when their companion action YAML contains `workflow_file`.
- **Workflow File Discovery (dual-directory scanning)**: The `WorkflowLoader` scans **two** directories when loading workflows for a pack: (1) `{pack_dir}/workflows/` (legacy standalone workflow files), and (2) `{pack_dir}/actions/workflows/` (visual-builder and action-linked workflow files). Files with `.workflow.yaml` suffix have the `.workflow` portion stripped when deriving the workflow name/ref (e.g., `deploy.workflow.yaml` → name `deploy`, ref `pack.deploy`). If the same ref appears in both directories, `actions/workflows/` wins. The `reload_workflow` method searches `actions/workflows/` first, trying `.workflow.yaml`, `.yaml`, `.workflow.yml`, and `.yml` extensions.
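The name derivation can be sketched as follows (illustrative Python; the real logic lives in the Rust `WorkflowLoader`):

```python
def derive_workflow_name(filename: str) -> str:
    """Strip the extension, including the '.workflow' portion, when
    deriving a workflow name from its filename."""
    for ext in (".workflow.yaml", ".workflow.yml", ".yaml", ".yml"):
        if filename.endswith(ext):
            return filename[: -len(ext)]
    return filename

print(derive_workflow_name("deploy.workflow.yaml"))  # deploy
# the ref then becomes "{pack}.{name}", e.g. "pack.deploy"
```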
- **Task Model (Orquesta-aligned)**: Tasks are purely action invocations — there is no task `type` field or task-level `when` condition in the UI model. Parallelism is implicit (multiple `do` targets in a transition fan out into parallel branches). Conditions belong exclusively on transitions (`next[].when`). Each task has: `name`, `action`, `input`, `next` (transitions), `delay`, `retry`, `timeout`, `with_items`, `batch_size`, `concurrency`, `join`.
- The backend `Task` struct (`crates/common/src/workflow/parser.rs`) still supports `type` and task-level `when` for backward compatibility, but the UI never sets them.
- **Task Transition Model (Orquesta-style)**: Tasks use an ordered `next` array of transitions instead of flat `on_success`/`on_failure`/`on_complete`/`on_timeout` fields. Each transition has:
@@ -315,6 +322,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete
- **Web UI**: `extractProperties()` in `ParamSchemaForm.tsx` is the single extraction function for all schema types. Only handles flat format.
- **SchemaBuilder**: Visual schema editor reads and writes flat format with `required` and `secret` checkboxes per parameter.
- **Backend Validation**: `flat_to_json_schema()` in `crates/api/src/validation/params.rs` converts flat format to JSON Schema internally for `jsonschema` crate validation. This conversion is an implementation detail — external interfaces always use flat format.
- **Execution Config Format (Flat)**: The `execution.config` JSONB column always stores parameters in **flat format** — the object itself IS the parameters map (e.g., `{"url": "https://...", "method": "GET"}`). This is consistent across all execution sources: manual API calls, rule-triggered enforcements, and workflow task children. There is **no `{"parameters": {...}}` wrapper** — never nest parameters under a `"parameters"` key. The worker reads `config` as a flat object and passes each key-value pair as an action parameter. The scheduler's `extract_workflow_params()` helper treats the config object directly as the parameters map.
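A minimal sketch of the flat-format contract (illustrative Python; `extract_params` is a made-up stand-in for the scheduler's `extract_workflow_params()`):

```python
import json

def extract_params(config: dict) -> dict:
    """The execution.config object IS the parameters map; never look for
    (or write) a {"parameters": {...}} wrapper around it."""
    return dict(config)

config = json.loads('{"url": "https://example.com", "method": "GET"}')
print(extract_params(config)["method"])  # GET
```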
- **Parameter Delivery**: Actions receive parameters via stdin as JSON (never environment variables)
- **Output Format**: Actions declare output format (text/json/yaml) - json/yaml are parsed into execution.result JSONB
- **Standard Environment Variables**: Worker provides execution context via `ATTUNE_*` environment variables:
@@ -444,12 +452,24 @@ input:
- **Styling**: Tailwind utility classes
- **Dev Server**: `npm run dev` (typically :3000 or :5173)
- **Build**: `npm run build`
- **Workflow Timeline DAG**: Prefect-style workflow run timeline visualization on the execution detail page for workflow executions
- Components in `web/src/components/executions/workflow-timeline/` (WorkflowTimelineDAG, TimelineRenderer, types, data, layout)
- Pure SVG renderer — no D3, no React Flow, no additional npm dependencies
- Renders child task executions as horizontal duration bars on a time axis with curved Bezier dependency edges
- **Data flow**: `WorkflowTimelineDAG` (orchestrator) fetches child executions via `useChildExecutions` + workflow definition via `useWorkflow(actionRef)` → `data.ts` transforms into `TimelineTask[]`/`TimelineEdge[]`/`TimelineMilestone[]` → `layout.ts` computes lane assignments + positions → `TimelineRenderer` renders SVG
- **Edge coloring from workflow metadata**: Fetches the workflow definition's `next` transition array, classifies `when` expressions (`{{ succeeded() }}` → green, `{{ failed() }}` → red dashed, `{{ timed_out() }}` → orange dash-dot, unconditional → gray), and reads `__chart_meta__` custom labels/colors
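The `when`-expression classification can be sketched as follows (an illustrative Python port of the TypeScript logic; the color names here are just labels):

```python
def classify_transition(when) -> str:
    """Classify a transition's `when` expression into an edge style."""
    if not when:
        return "gray"              # unconditional
    if "succeeded()" in when:
        return "green"
    if "failed()" in when:
        return "red-dashed"
    if "timed_out()" in when:
        return "orange-dash-dot"
    return "gray"                  # unrecognized condition: neutral

print(classify_transition("{{ succeeded() }}"))  # green
print(classify_transition(None))                 # gray
```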
- **Task bars**: Colored by state (green=completed, blue=running with pulse animation, red=failed, gray=pending, orange=timeout). Left accent bar, text label with ellipsis clipping, timeout indicator badge.
- **Milestones**: Synthetic start/end diamond nodes + merge/fork junctions when fan-in/fan-out exceeds 3 tasks
- **Lane packing**: Greedy algorithm assigns tasks to non-overlapping y-lanes sorted by start time, with optional reordering to cluster tasks sharing upstream dependencies
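The greedy lane assignment can be sketched as follows (an illustrative Python port of the `layout.ts` idea; the dependency-clustering reordering is omitted):

```python
def assign_lanes(tasks):
    """Greedy lane packing: tasks sorted by start time go into the first
    lane whose last bar ends at or before the task's start.
    tasks: list of (start, end) tuples; returns a lane index per task."""
    lanes_end = []  # end time of the last bar placed in each lane
    lanes = []
    for start, end in sorted(tasks):
        for lane, lane_end in enumerate(lanes_end):
            if lane_end <= start:   # bar fits after this lane's last bar
                lanes_end[lane] = end
                lanes.append(lane)
                break
        else:                       # no existing lane fits: open a new one
            lanes_end.append(end)
            lanes.append(len(lanes_end) - 1)
    return lanes

# Three overlapping bars need three lanes; the fourth reuses lane 0
print(assign_lanes([(0, 5), (1, 4), (2, 6), (5, 8)]))  # [0, 1, 2, 0]
```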
- **Interactions**: Hover tooltip (name, state, times, duration, retries, upstream/downstream counts), click-to-select with BFS path highlighting, double-click to navigate to child execution, horizontal zoom (mouse wheel anchored to cursor), alt+drag pan, expand/compact toggle
- **Fallback**: When no workflow definition is available, infers dependency edges from task timing heuristics
- **Integration**: Rendered in `ExecutionDetailPage.tsx` above `WorkflowTasksPanel`, conditioned on `isWorkflow`. Shares TanStack Query cache with WorkflowTasksPanel. Accepts `ParentExecutionInfo` interface (satisfied by both `ExecutionResponse` and `ExecutionSummary`).
- **Workflow Builder**: Visual node-based workflow editor at `/actions/workflows/new` and `/actions/workflows/:ref/edit`
- Components in `web/src/components/workflows/` (ActionPalette, WorkflowCanvas, TaskNode, WorkflowEdges, TaskInspector)
- Types and conversion utilities in `web/src/types/workflow.ts`
- Hooks in `web/src/hooks/useWorkflows.ts`
- Saves workflow files to `{packs_base_dir}/{pack_ref}/actions/workflows/{name}.workflow.yaml` via dedicated API endpoints
- **Visual / Raw YAML toggle**: Toolbar has a segmented toggle to switch between the visual node-based builder and a two-panel read-only YAML preview (generated via `js-yaml`). Raw YAML mode replaces the canvas, palette, and inspector with side-by-side panels: **Action YAML** (left, blue — `actions/{name}.yaml`: ref, label, parameters, output, tags, `workflow_file` reference) and **Workflow YAML** (right, green — `actions/workflows/{name}.workflow.yaml`: version, vars, tasks, output_map — graph only). Each panel has its own copy button and a description bar explaining the file's role. The `builderStateToGraph()` function extracts the graph-only definition, and `builderStateToActionYaml()` extracts the action metadata.
- **Drag-handle connections**: TaskNode has output handles (green=succeeded, red=failed, gray=always) and an input handle (top). Drag from an output handle to another node's input handle to create a transition.
- **Transition customization**: Users can rename transitions (custom `label`) and assign custom colors (CSS color string or preset swatches) via the TaskInspector. Custom colors/labels are persisted in the workflow YAML and rendered on the canvas edges.
- **Edge waypoints & label dragging**: Transition edges support intermediate waypoints for custom routing. Click an edge to select it, then:
@@ -509,16 +529,33 @@ make db-reset # Drop & recreate DB
cargo install --path crates/cli # Install CLI
attune auth login # Login
attune pack list # List packs
attune pack create --ref my_pack # Create empty pack (non-interactive)
attune pack create -i # Create empty pack (interactive prompts)
attune pack upload ./path/to/pack # Upload local pack to API (works with Docker)
attune pack register /opt/attune/packs/mypack # Register from API-visible path
attune action execute <ref> --param key=value
attune execution list # Monitor executions
attune workflow upload actions/deploy.yaml # Upload workflow action to existing pack
attune workflow upload actions/deploy.yaml --force # Update existing workflow
attune workflow list # List all workflows
attune workflow list --pack core # List workflows in a pack
attune workflow show core.install_packs # Show workflow details + task summary
attune workflow delete core.my_workflow --yes # Delete a workflow
```

**Pack Upload vs Register**:
- `attune pack upload <local-path>` — Tarballs the local directory and POSTs it to `POST /api/v1/packs/upload`. Works regardless of whether the API is local or in Docker. This is the primary way to install packs from your local machine into a Dockerized system.
- `attune pack register <server-path>` — Sends a filesystem path string to the API (`POST /api/v1/packs/register`). Only works if the path is accessible from inside the API container (e.g. `/opt/attune/packs/...` or `/opt/attune/packs.dev/...`).
**Workflow Upload** (`attune workflow upload <action-yaml-path>`):
- Reads the local action YAML file and extracts the `workflow_file` field to find the companion workflow YAML
- Determines the pack from the action ref (e.g., `mypack.deploy` → pack `mypack`, name `deploy`)
- The `workflow_file` path is resolved relative to the action YAML's parent directory (same as how pack loaders resolve it relative to the `actions/` directory)
- Constructs a `SaveWorkflowFileRequest` JSON payload combining action metadata (label, parameters, output, tags) with the workflow definition (version, vars, tasks, output_map) and POSTs to `POST /api/v1/packs/{pack_ref}/workflow-files`
- On 409 Conflict (workflow already exists), fails unless `--force` is passed, in which case it PUTs to `PUT /api/v1/workflows/{ref}/file` to update
- Does NOT require a full pack upload — individual workflow actions can be added to existing packs independently
- **Important**: The action YAML MUST contain a `workflow_file` field; regular (non-workflow) actions should be uploaded as part of a pack
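The pack/name derivation used by workflow upload is effectively this (illustrative Python; the CLI implements it in Rust, and single-level pack refs are assumed):

```python
def split_action_ref(ref: str) -> tuple:
    """Split an action ref into (pack, name) at the first dot."""
    pack, _, name = ref.partition(".")
    return pack, name

print(split_action_ref("mypack.deploy"))  # ('mypack', 'deploy')
```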
**Pack Upload API endpoint**: `POST /api/v1/packs/upload` — accepts `multipart/form-data` with:
- `pack` (required): a `.tar.gz` archive of the pack directory
- `force` (optional, text): `"true"` to overwrite an existing pack with the same ref
@@ -606,20 +643,21 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
4. **NEVER** commit secrets in config files (use env vars in production)
5. **NEVER** hardcode schema prefixes in SQL queries - rely on PostgreSQL `search_path` mechanism
6. **NEVER** copy packs into Dockerfiles - they are mounted as volumes
7. **NEVER** put workflow definition content directly in action YAML — use a separate `.workflow.yaml` file in `actions/workflows/` and reference it via `workflow_file` in the action YAML
8. **ALWAYS** use PostgreSQL enum type mappings for custom enums
9. **ALWAYS** use transactions for multi-table operations
10. **ALWAYS** start with `attune/` or correct crate name when specifying file paths
11. **ALWAYS** convert runtime names to lowercase for comparison (database may store capitalized)
12. **ALWAYS** use optimized Dockerfiles for new services (selective crate copying)
13. **REMEMBER** IDs are `i64`, not `i32` or `uuid`
14. **REMEMBER** schema is determined by `search_path`, not hardcoded in queries (production uses `attune`, development uses `public`)
15. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
16. **REMEMBER** packs are volumes - update with restart, not rebuild
17. **REMEMBER** to build pack binaries separately: `./scripts/build-pack-binaries.sh`
18. **REMEMBER** when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable with ~4 updates per row).
19. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history`
20. **NEVER** use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow`, `execution.workflow_def` exist in SQL but not in the `Execution` model). Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`). Unmapped columns cause runtime deserialization failures.
21. **REMEMBER** `execution`, `event`, and `enforcement` are all TimescaleDB hypertables — they **cannot be the target of FK constraints**. Any column referencing them (e.g., `inquiry.execution`, `workflow_execution.execution`, `execution.parent`) is a plain BIGINT with no FK and may become a dangling reference.
## Deployment
- **Target**: Distributed deployment with separate service instances
@@ -630,7 +668,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
- **Web UI**: Static files served separately or via API service
## Current Development Status
- ✅ **Complete**: Database migrations (21 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder + workflow timeline DAG), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, 
`system`), Artifact content system (versioned file/JSON storage, progress-append semantics, binary upload/download, retention enforcement, execution-linked artifacts, 18 API endpoints under `/api/v1/artifacts/`, file-backed disk storage via shared volume for file-type artifacts), CLI `--wait` flag (WebSocket-first with polling fallback — connects to notifier on port 8081, subscribes to execution, returns immediately on terminal status; falls back to exponential-backoff REST polling if WS unavailable; polling always gets at least 10s budget regardless of how long WS path ran), Workflow Timeline DAG visualization (Prefect-style time-aligned Gantt+DAG on execution detail page, pure SVG, transition-aware edge coloring from workflow definition metadata, hover tooltips, click-to-highlight path, zoom/pan)
- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management, Artifact UI (web UI for browsing/downloading artifacts), Notifier service WebSocket (functional but lacks auth — the WS connection is unauthenticated; the subscribe filter controls visibility)
- 📋 **Planned**: Execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
Makefile
@@ -61,6 +61,9 @@ help:
	@echo " make generate-agents-index - Generate AGENTS.md index for AI agents"
	@echo ""

# Increase rustc stack size to prevent SIGSEGV during compilation
export RUST_MIN_STACK := 16777216

# Building
build:
	cargo build
@@ -317,6 +317,62 @@ pub struct CreateFileVersionRequest {
    pub created_by: Option<String>,
}

/// Request DTO for the upsert-and-allocate endpoint.
///
/// Looks up an artifact by ref (creating it if it doesn't exist), then
/// allocates a new file-backed version and returns the `file_path` where
/// the caller should write the file on the shared artifact volume.
///
/// This replaces the multi-step create → 409-handling → allocate dance
/// with a single API call.
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct AllocateFileVersionByRefRequest {
    // -- Artifact metadata (used only when creating a new artifact) ----------
    /// Owner scope type (default: action)
    #[schema(example = "action")]
    pub scope: Option<OwnerType>,

    /// Owner identifier (ref string of the owning entity)
    #[schema(example = "python_example.artifact_demo")]
    pub owner: Option<String>,

    /// Artifact type (must be a file-backed type; default: file_text)
    #[schema(example = "file_text")]
    pub r#type: Option<ArtifactType>,

    /// Visibility level. If omitted, uses type-aware default.
    pub visibility: Option<ArtifactVisibility>,

    /// Retention policy type (default: versions)
    pub retention_policy: Option<RetentionPolicyType>,

    /// Retention limit (default: 10)
    pub retention_limit: Option<i32>,

    /// Human-readable name
    #[schema(example = "Demo Log")]
    pub name: Option<String>,

    /// Optional description
    pub description: Option<String>,

    /// Execution ID to link this artifact to
    #[schema(example = 42)]
    pub execution: Option<i64>,

    // -- Version metadata ----------------------------------------------------
    /// MIME content type for this version (e.g. "text/plain")
    #[schema(example = "text/plain")]
    pub content_type: Option<String>,

    /// Free-form metadata about this version
    #[schema(value_type = Option<Object>)]
    pub meta: Option<JsonValue>,

    /// Who created this version (e.g. action ref, identity, "system")
    pub created_by: Option<String>,
}

/// Response DTO for an artifact version (without binary content)
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactVersionResponse {
@@ -9,17 +9,24 @@
//! - Listing artifacts by execution
//! - Version history and retrieval
//! - Upsert-and-upload: create-or-reuse an artifact by ref and upload a version in one call
//! - Upsert-and-allocate: create-or-reuse an artifact by ref and allocate a file-backed version path in one call
//! - SSE streaming for file-backed artifacts (live tail while execution is running)

use axum::{
    body::Body,
    extract::{Multipart, Path, Query, State},
    http::{header, StatusCode},
    response::IntoResponse,
    response::{
        sse::{Event, KeepAlive, Sse},
        IntoResponse,
    },
    routing::{get, post},
    Json, Router,
};
use futures::stream::Stream;
use std::sync::Arc;
use tracing::warn;
use tokio::io::{AsyncReadExt, AsyncSeekExt};
use tracing::{debug, warn};

use attune_common::models::enums::{
    ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType,

@@ -36,10 +43,10 @@ use crate::{
    auth::middleware::RequireAuth,
    dto::{
        artifact::{
            AppendProgressRequest, ArtifactQueryParams, ArtifactResponse, ArtifactSummary,
            ArtifactVersionResponse, ArtifactVersionSummary, CreateArtifactRequest,
            CreateFileVersionRequest, CreateVersionJsonRequest, SetDataRequest,
            UpdateArtifactRequest,
            AllocateFileVersionByRefRequest, AppendProgressRequest, ArtifactQueryParams,
            ArtifactResponse, ArtifactSummary, ArtifactVersionResponse, ArtifactVersionSummary,
            CreateArtifactRequest, CreateFileVersionRequest, CreateVersionJsonRequest,
            SetDataRequest, UpdateArtifactRequest,
        },
        common::{PaginatedResponse, PaginationParams},
        ApiResponse, SuccessResponse,
@@ -659,6 +666,7 @@ pub async fn create_version_file(
    // Update the version row with the computed file_path
    sqlx::query("UPDATE artifact_version SET file_path = $1 WHERE id = $2")
        .bind(&file_path)
        .bind(version.id)
        .execute(&state.db)
        .await
        .map_err(|e| {
@@ -1250,6 +1258,165 @@ pub async fn upload_version_by_ref(
    ))
}

/// Upsert an artifact by ref and allocate a file-backed version in one call.
///
/// If the artifact doesn't exist, it is created using the supplied metadata.
/// If it already exists, the execution link is updated (if provided).
/// Then a new file-backed version is allocated and the `file_path` is returned.
///
/// The caller writes the file to `$ATTUNE_ARTIFACTS_DIR/{file_path}` on the
/// shared volume — no HTTP upload needed.
#[utoipa::path(
    post,
    path = "/api/v1/artifacts/ref/{ref}/versions/file",
    tag = "artifacts",
    params(
        ("ref" = String, Path, description = "Artifact reference (e.g. 'mypack.build_log')")
    ),
    request_body = AllocateFileVersionByRefRequest,
    responses(
        (status = 201, description = "File version allocated", body = inline(ApiResponse<ArtifactVersionResponse>)),
        (status = 400, description = "Invalid request (non-file-backed artifact type)"),
    ),
    security(("bearer_auth" = []))
)]
pub async fn allocate_file_version_by_ref(
    RequireAuth(_user): RequireAuth,
    State(state): State<Arc<AppState>>,
    Path(artifact_ref): Path<String>,
    Json(request): Json<AllocateFileVersionByRefRequest>,
) -> ApiResult<impl IntoResponse> {
    // Upsert: find existing artifact or create a new one
    let artifact = match ArtifactRepository::find_by_ref(&state.db, &artifact_ref).await? {
        Some(existing) => {
            // Update execution link if a new execution ID was provided
            if request.execution.is_some() && request.execution != existing.execution {
                let update_input = UpdateArtifactInput {
                    r#ref: None,
                    scope: None,
                    owner: None,
                    r#type: None,
                    visibility: None,
                    retention_policy: None,
                    retention_limit: None,
                    name: None,
                    description: None,
                    content_type: None,
                    size_bytes: None,
                    execution: request.execution.map(Some),
                    data: None,
                };
                ArtifactRepository::update(&state.db, existing.id, update_input).await?
            } else {
                existing
            }
        }
        None => {
            // Parse artifact type (default to FileText)
            let a_type = request.r#type.unwrap_or(ArtifactType::FileText);

            // Validate it's a file-backed type
            if !is_file_backed_type(a_type) {
                return Err(ApiError::BadRequest(format!(
                    "Artifact type {:?} is not file-backed. \
                     Use POST /artifacts/ref/{{ref}}/versions/upload for DB-stored artifacts.",
                    a_type,
                )));
            }

            let a_scope = request.scope.unwrap_or(OwnerType::Action);
            let a_visibility = request.visibility.unwrap_or(ArtifactVisibility::Private);
            let a_retention_policy = request
                .retention_policy
                .unwrap_or(RetentionPolicyType::Versions);
            let a_retention_limit = request.retention_limit.unwrap_or(10);

            let create_input = CreateArtifactInput {
                r#ref: artifact_ref.clone(),
                scope: a_scope,
                owner: request.owner.unwrap_or_default(),
                r#type: a_type,
                visibility: a_visibility,
                retention_policy: a_retention_policy,
                retention_limit: a_retention_limit,
                name: request.name,
                description: request.description,
                content_type: request.content_type.clone(),
                execution: request.execution,
                data: None,
            };

            ArtifactRepository::create(&state.db, create_input).await?
        }
    };

    // Validate the existing artifact is file-backed
    if !is_file_backed_type(artifact.r#type) {
        return Err(ApiError::BadRequest(format!(
            "Artifact '{}' is type {:?}, which does not support file-backed versions.",
            artifact.r#ref, artifact.r#type,
        )));
    }

    let content_type = request
        .content_type
        .unwrap_or_else(|| default_content_type_for_artifact(artifact.r#type));

    // Create version row (file_path computed after we know the version number)
    let input = CreateArtifactVersionInput {
        artifact: artifact.id,
        content_type: Some(content_type.clone()),
        content: None,
        content_json: None,
        file_path: None,
        meta: request.meta,
        created_by: request.created_by,
    };

    let version = ArtifactVersionRepository::create(&state.db, input).await?;

    // Compute the file path from the artifact ref and version number
    let file_path = compute_file_path(&artifact.r#ref, version.version, &content_type);

    // Create the parent directory on disk
    let artifacts_dir = &state.config.artifacts_dir;
    let full_path = std::path::Path::new(artifacts_dir).join(&file_path);
    if let Some(parent) = full_path.parent() {
        tokio::fs::create_dir_all(parent).await.map_err(|e| {
            ApiError::InternalServerError(format!(
                "Failed to create artifact directory '{}': {}",
                parent.display(),
                e,
            ))
        })?;
    }

    // Update the version row with the computed file_path
    sqlx::query("UPDATE artifact_version SET file_path = $1 WHERE id = $2")
        .bind(&file_path)
        .bind(version.id)
        .execute(&state.db)
        .await
        .map_err(|e| {
            ApiError::InternalServerError(format!(
                "Failed to set file_path on version {}: {}",
                version.id, e,
            ))
        })?;

    // Return the version with file_path populated
    let mut response = ArtifactVersionResponse::from(version);
    response.file_path = Some(file_path);

    Ok((
        StatusCode::CREATED,
        Json(ApiResponse::with_message(
            response,
            "File version allocated — write content to $ATTUNE_ARTIFACTS_DIR/<file_path>",
        )),
    ))
}

// ============================================================================
// Helpers
// ============================================================================

@@ -1459,8 +1626,434 @@ fn cleanup_empty_parents(dir: &std::path::Path, stop_at: &str) {
            }
        }
    }

// ============================================================================
// SSE file streaming
// ============================================================================

/// Query parameters for the artifact stream endpoint.
#[derive(serde::Deserialize)]
pub struct StreamArtifactParams {
    /// JWT access token (SSE/EventSource cannot set Authorization header).
    pub token: Option<String>,
}

/// Internal state machine for the `stream_artifact` SSE generator.
///
/// We use `futures::stream::unfold` instead of `async_stream::stream!` to avoid
/// adding an external dependency.
enum TailState {
    /// Waiting for the file to appear on disk.
    WaitingForFile {
        full_path: std::path::PathBuf,
        file_path: String,
        execution_id: Option<i64>,
        db: sqlx::PgPool,
        started: tokio::time::Instant,
    },
    /// File exists — send initial content.
    SendInitial {
        full_path: std::path::PathBuf,
        file_path: String,
        execution_id: Option<i64>,
        db: sqlx::PgPool,
    },
    /// Tailing the file for new bytes.
    Tailing {
        full_path: std::path::PathBuf,
        file_path: String,
        execution_id: Option<i64>,
        db: sqlx::PgPool,
        offset: u64,
        idle_count: u32,
    },
    /// Emit the final `done` SSE event and close.
    SendDone,
    /// Stream has ended — return `None` to close.
    Finished,
}

/// How long to wait for the file to appear on disk.
const STREAM_MAX_WAIT: std::time::Duration = std::time::Duration::from_secs(30);
/// How often to poll for new bytes / file existence.
const STREAM_POLL_INTERVAL: std::time::Duration = std::time::Duration::from_millis(500);
/// After this many consecutive empty polls we check whether the execution
/// is done and, if so, terminate the stream.
const STREAM_IDLE_CHECKS_BEFORE_DONE: u32 = 6; // 3 seconds of no new data

/// Check whether the given execution has reached a terminal status.
async fn is_execution_terminal(db: &sqlx::PgPool, execution_id: Option<i64>) -> bool {
    let Some(exec_id) = execution_id else {
        return false;
    };
    match sqlx::query_scalar::<_, String>("SELECT status::text FROM execution WHERE id = $1")
        .bind(exec_id)
        .fetch_optional(db)
        .await
    {
        Ok(Some(status)) => matches!(
            status.as_str(),
            "succeeded" | "failed" | "timeout" | "canceled" | "abandoned"
        ),
        Ok(None) => true, // execution deleted — treat as done
        Err(_) => false,  // DB error — keep tailing
    }
}

/// Do one final read from `offset` to EOF and return the new bytes (if any).
async fn final_read_bytes(full_path: &std::path::Path, offset: u64) -> Option<String> {
    let mut f = tokio::fs::File::open(full_path).await.ok()?;
    let meta = f.metadata().await.ok()?;
    if meta.len() <= offset {
        return None;
    }
    f.seek(std::io::SeekFrom::Start(offset)).await.ok()?;
    let mut tail = Vec::new();
    f.read_to_end(&mut tail).await.ok()?;
    if tail.is_empty() {
        return None;
    }
    Some(String::from_utf8_lossy(&tail).into_owned())
}

/// Stream the latest file-backed artifact version as Server-Sent Events.
///
/// The endpoint:
/// 1. Waits (up to ~30 s) for the file to appear on disk if it has been
///    allocated but not yet written by the worker.
/// 2. Once the file exists it sends the current content as an initial `content`
///    event, then tails the file every 500 ms, sending `append` events with new
///    bytes.
/// 3. When no new bytes have appeared for several consecutive checks **and** the
///    linked execution (if any) has reached a terminal status, it sends a `done`
///    event and the stream ends.
/// 4. If the client disconnects the stream is cleaned up automatically.
///
/// **Event types** (SSE `event:` field):
/// - `content` – full file content up to the current offset (sent once)
/// - `append` – incremental bytes appended since the last event
/// - `waiting` – file does not exist yet; sent periodically while waiting
/// - `done` – no more data expected; stream will close
/// - `error` – something went wrong; `data` contains a human-readable message
#[utoipa::path(
    get,
    path = "/api/v1/artifacts/{id}/stream",
    tag = "artifacts",
    params(
        ("id" = i64, Path, description = "Artifact ID"),
        ("token" = String, Query, description = "JWT access token for authentication"),
    ),
    responses(
        (status = 200, description = "SSE stream of file content", content_type = "text/event-stream"),
        (status = 401, description = "Unauthorized"),
        (status = 404, description = "Artifact not found or not file-backed"),
    ),
)]
pub async fn stream_artifact(
    State(state): State<Arc<AppState>>,
    Path(id): Path<i64>,
    Query(params): Query<StreamArtifactParams>,
) -> Result<Sse<impl Stream<Item = Result<Event, std::convert::Infallible>>>, ApiError> {
    // --- auth (EventSource can't send headers, so token comes via query) ----
    use crate::auth::jwt::validate_token;

    let token = params.token.as_ref().ok_or(ApiError::Unauthorized(
        "Missing authentication token".to_string(),
    ))?;
    validate_token(token, &state.jwt_config)
        .map_err(|_| ApiError::Unauthorized("Invalid authentication token".to_string()))?;

    // --- resolve artifact + latest version ---------------------------------
    let artifact = ArtifactRepository::find_by_id(&state.db, id)
        .await?
        .ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;

    if !is_file_backed_type(artifact.r#type) {
        return Err(ApiError::BadRequest(format!(
            "Artifact '{}' is type {:?} which is not file-backed. \
             Use the download endpoint instead.",
            artifact.r#ref, artifact.r#type,
        )));
    }

    let ver = ArtifactVersionRepository::find_latest(&state.db, id)
        .await?
        .ok_or_else(|| ApiError::NotFound(format!("No versions found for artifact {}", id)))?;

    let file_path = ver.file_path.ok_or_else(|| {
        ApiError::NotFound(format!(
            "Latest version of artifact '{}' has no file_path allocated",
            artifact.r#ref,
        ))
    })?;

    let artifacts_dir = state.config.artifacts_dir.clone();
    let full_path = std::path::PathBuf::from(&artifacts_dir).join(&file_path);
    let execution_id = artifact.execution;
    let db = state.db.clone();

    // --- build the SSE stream via unfold -----------------------------------
    let initial_state = TailState::WaitingForFile {
        full_path,
        file_path,
        execution_id,
        db,
        started: tokio::time::Instant::now(),
    };

    let stream = futures::stream::unfold(initial_state, |state| async move {
        match state {
            TailState::Finished => None,

            // ---- Drain state for clean shutdown ----
            TailState::SendDone => Some((
                Ok(Event::default()
                    .event("done")
                    .data("Execution complete — stream closed")),
                TailState::Finished,
            )),

            // ---- Phase 1: wait for the file to appear ----
            TailState::WaitingForFile {
                full_path,
                file_path,
                execution_id,
                db,
                started,
            } => {
                if full_path.exists() {
                    let next = TailState::SendInitial {
                        full_path,
                        file_path,
                        execution_id,
                        db,
                    };
                    Some((
                        Ok(Event::default()
                            .event("waiting")
                            .data("File found — loading content")),
                        next,
                    ))
                } else if started.elapsed() > STREAM_MAX_WAIT {
                    Some((
                        Ok(Event::default().event("error").data(format!(
                            "Timed out waiting for file to appear at '{}'",
                            file_path,
                        ))),
                        TailState::Finished,
                    ))
                } else {
                    tokio::time::sleep(STREAM_POLL_INTERVAL).await;
                    Some((
                        Ok(Event::default()
                            .event("waiting")
                            .data("File not yet available — waiting for worker to create it")),
                        TailState::WaitingForFile {
                            full_path,
                            file_path,
                            execution_id,
                            db,
                            started,
                        },
                    ))
                }
            }

            // ---- Phase 2: read and send current file content ----
            TailState::SendInitial {
                full_path,
                file_path,
                execution_id,
                db,
            } => match tokio::fs::File::open(&full_path).await {
                Ok(mut file) => {
                    let mut buf = Vec::new();
                    match file.read_to_end(&mut buf).await {
                        Ok(_) => {
                            let offset = buf.len() as u64;
                            debug!(
                                "artifact stream: sent initial {} bytes for '{}'",
                                offset, file_path,
                            );
                            Some((
                                Ok(Event::default()
                                    .event("content")
                                    .data(String::from_utf8_lossy(&buf).into_owned())),
                                TailState::Tailing {
                                    full_path,
                                    file_path,
                                    execution_id,
                                    db,
                                    offset,
                                    idle_count: 0,
                                },
                            ))
                        }
                        Err(e) => Some((
                            Ok(Event::default()
                                .event("error")
                                .data(format!("Failed to read file: {}", e))),
                            TailState::Finished,
                        )),
                    }
                }
                Err(e) => Some((
                    Ok(Event::default()
                        .event("error")
                        .data(format!("Failed to open file: {}", e))),
                    TailState::Finished,
                )),
            },

            // ---- Phase 3: tail the file for new bytes ----
            TailState::Tailing {
                full_path,
                file_path,
                execution_id,
                db,
                mut offset,
                mut idle_count,
            } => {
                tokio::time::sleep(STREAM_POLL_INTERVAL).await;

                // Re-open the file each iteration so we pick up content that
                // was written by a different process (the worker).
                let mut file = match tokio::fs::File::open(&full_path).await {
                    Ok(f) => f,
                    Err(e) => {
                        return Some((
                            Ok(Event::default()
                                .event("error")
                                .data(format!("File disappeared: {}", e))),
                            TailState::Finished,
                        ));
                    }
                };

                let meta = match file.metadata().await {
                    Ok(m) => m,
                    Err(_) => {
                        // Transient metadata error — keep going.
                        return Some((
                            Ok(Event::default().comment("metadata-retry")),
                            TailState::Tailing {
                                full_path,
                                file_path,
                                execution_id,
                                db,
                                offset,
                                idle_count,
                            },
                        ));
                    }
                };

                let file_len = meta.len();

                if file_len > offset {
                    // New data available — seek and read.
                    if let Err(e) = file.seek(std::io::SeekFrom::Start(offset)).await {
                        return Some((
                            Ok(Event::default()
                                .event("error")
                                .data(format!("Seek error: {}", e))),
                            TailState::Finished,
                        ));
                    }
                    let mut new_buf = Vec::with_capacity((file_len - offset) as usize);
                    match file.read_to_end(&mut new_buf).await {
                        Ok(n) => {
                            offset += n as u64;
                            idle_count = 0;
                            Some((
                                Ok(Event::default()
                                    .event("append")
                                    .data(String::from_utf8_lossy(&new_buf).into_owned())),
                                TailState::Tailing {
                                    full_path,
                                    file_path,
                                    execution_id,
                                    db,
                                    offset,
                                    idle_count,
                                },
                            ))
                        }
                        Err(e) => Some((
                            Ok(Event::default()
                                .event("error")
                                .data(format!("Read error: {}", e))),
                            TailState::Finished,
                        )),
                    }
                } else if file_len < offset {
                    // File truncated — resend from scratch.
                    drop(file);
                    Some((
                        Ok(Event::default()
                            .event("waiting")
                            .data("File was truncated — resending content")),
                        TailState::SendInitial {
                            full_path,
                            file_path,
                            execution_id,
                            db,
                        },
                    ))
                } else {
                    // No change.
                    idle_count += 1;

                    if idle_count >= STREAM_IDLE_CHECKS_BEFORE_DONE {
                        let done = is_execution_terminal(&db, execution_id).await
                            || (execution_id.is_none()
                                && idle_count >= STREAM_IDLE_CHECKS_BEFORE_DONE * 4);

                        if done {
                            // One final read to catch trailing bytes.
                            return if let Some(trailing) =
                                final_read_bytes(&full_path, offset).await
                            {
                                Some((
                                    Ok(Event::default().event("append").data(trailing)),
                                    TailState::SendDone,
                                ))
                            } else {
                                Some((
                                    Ok(Event::default()
                                        .event("done")
                                        .data("Execution complete — stream closed")),
                                    TailState::Finished,
                                ))
                            };
                        }

                        // Reset so we don't hit the DB every poll.
                        idle_count = 0;
                    }

                    Some((
                        Ok(Event::default().comment("no-change")),
                        TailState::Tailing {
                            full_path,
                            file_path,
                            execution_id,
                            db,
                            offset,
                            idle_count,
                        },
                    ))
                }
            }
        }
    });

    Ok(Sse::new(stream).keep_alive(
        KeepAlive::new()
            .interval(std::time::Duration::from_secs(15))
            .text("keepalive"),
    ))
}

/// Derive a simple file extension from a MIME content type
fn extension_from_content_type(ct: &str) -> &str {
    match ct {
        "text/plain" => "txt",

@@ -1503,6 +2096,10 @@ pub fn routes() -> Router<Arc<AppState>> {
            "/artifacts/ref/{ref}/versions/upload",
            post(upload_version_by_ref),
        )
        .route(
            "/artifacts/ref/{ref}/versions/file",
            post(allocate_file_version_by_ref),
        )
        // Progress / data
        .route("/artifacts/{id}/progress", post(append_progress))
        .route(

@@ -1511,6 +2108,8 @@ pub fn routes() -> Router<Arc<AppState>> {
        )
        // Download (latest)
        .route("/artifacts/{id}/download", get(download_latest))
        // SSE streaming for file-backed artifacts
        .route("/artifacts/{id}/stream", get(stream_artifact))
        // Version management
        .route(
            "/artifacts/{id}/versions",
@@ -523,12 +523,11 @@ async fn write_workflow_yaml(
|
||||
pack_ref: &str,
|
||||
request: &SaveWorkflowFileRequest,
|
||||
) -> Result<(), ApiError> {
|
||||
let workflows_dir = packs_base_dir
|
||||
.join(pack_ref)
|
||||
.join("actions")
|
||||
.join("workflows");
|
||||
let pack_dir = packs_base_dir.join(pack_ref);
|
||||
let actions_dir = pack_dir.join("actions");
|
||||
let workflows_dir = actions_dir.join("workflows");
|
||||
|
||||
// Ensure the directory exists
|
||||
// Ensure both directories exist
|
||||
tokio::fs::create_dir_all(&workflows_dir)
|
||||
.await
|
||||
.map_err(|e| {
|
||||
@@ -539,34 +538,164 @@ async fn write_workflow_yaml(
|
||||
))
|
||||
})?;
|
||||
|
||||
let filename = format!("{}.workflow.yaml", request.name);
|
||||
let filepath = workflows_dir.join(&filename);
|
||||
// ── 1. Write the workflow file (graph-only: version, vars, tasks, output_map) ──
|
||||
let workflow_filename = format!("{}.workflow.yaml", request.name);
|
||||
let workflow_filepath = workflows_dir.join(&workflow_filename);
|
||||
|
||||
// Serialize definition to YAML
|
||||
let yaml_content = serde_yaml_ng::to_string(&request.definition).map_err(|e| {
|
||||
// Strip action-level fields from the definition — the workflow file should
|
||||
// contain only the execution graph. The action YAML is authoritative for
|
||||
// ref, label, description, parameters, output, and tags.
|
||||
let graph_only = strip_action_level_fields(&request.definition);
|
||||
|
||||
let workflow_yaml = serde_yaml_ng::to_string(&graph_only).map_err(|e| {
|
||||
ApiError::BadRequest(format!("Failed to serialize workflow to YAML: {}", e))
|
||||
})?;
|
||||
|
||||
// Write file
|
||||
tokio::fs::write(&filepath, yaml_content)
|
||||
let workflow_yaml_with_header = format!(
|
||||
"# Workflow execution graph for {}.{}\n\
|
||||
# Action-level metadata (ref, label, parameters, output, tags) is defined\n\
|
||||
# in the companion action YAML: actions/{}.yaml\n\n{}",
|
||||
pack_ref, request.name, request.name, workflow_yaml
|
||||
);
|
||||
|
||||
tokio::fs::write(&workflow_filepath, &workflow_yaml_with_header)
|
||||
.await
|
||||
.map_err(|e| {
|
||||
ApiError::InternalServerError(format!(
|
||||
"Failed to write workflow file '{}': {}",
|
||||
filepath.display(),
|
||||
workflow_filepath.display(),
|
||||
e
|
||||
))
|
||||
})?;
|
||||
|
||||
tracing::info!(
|
||||
"Wrote workflow file: {} ({} bytes)",
|
||||
filepath.display(),
|
||||
filepath.metadata().map(|m| m.len()).unwrap_or(0)
|
||||
workflow_filepath.display(),
|
||||
workflow_yaml_with_header.len()
|
||||
);
|
||||

    // ── 2. Write the companion action YAML ──
    let action_filename = format!("{}.yaml", request.name);
    let action_filepath = actions_dir.join(&action_filename);

    let action_yaml = build_action_yaml(pack_ref, request);

    tokio::fs::write(&action_filepath, &action_yaml)
        .await
        .map_err(|e| {
            ApiError::InternalServerError(format!(
                "Failed to write action YAML '{}': {}",
                action_filepath.display(),
                e
            ))
        })?;

    tracing::info!(
        "Wrote action YAML: {} ({} bytes)",
        action_filepath.display(),
        action_yaml.len()
    );

    Ok(())
}
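For a hypothetical workflow named `deploy` in pack `mypack`, the two files written above would look roughly like this (a sketch: the header comments follow the `format!` template above, while the graph fields and the task entry are illustrative placeholders, not real pack content):

```yaml
# actions/workflows/deploy.workflow.yaml — graph only
# Workflow execution graph for mypack.deploy
# Action-level metadata (ref, label, parameters, output, tags) is defined
# in the companion action YAML: actions/deploy.yaml

version: "1.0"
tasks:
  - name: step_one
    action: core.noop

# actions/deploy.yaml — action-level metadata
ref: mypack.deploy
label: "Deploy"
enabled: true
workflow_file: workflows/deploy.workflow.yaml
```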

/// Strip action-level fields from a workflow definition JSON, keeping only
/// the execution graph: `version`, `vars`, `tasks`, `output_map`.
///
/// Fields removed: `ref`, `label`, `description`, `parameters`, `output`, `tags`.
fn strip_action_level_fields(definition: &serde_json::Value) -> serde_json::Value {
    if let Some(obj) = definition.as_object() {
        let mut graph = serde_json::Map::new();
        // Keep only graph-level fields
        for key in &["version", "vars", "tasks", "output_map"] {
            if let Some(val) = obj.get(*key) {
                graph.insert((*key).to_string(), val.clone());
            }
        }
        serde_json::Value::Object(graph)
    } else {
        // Shouldn't happen, but pass through if not an object
        definition.clone()
    }
}
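The filtering above is just a key whitelist. A minimal std-only sketch of the same rule (using a `BTreeMap<String, String>` as a stand-in for `serde_json::Map`, so it compiles without the serde crates; names are illustrative):

```rust
use std::collections::BTreeMap;

/// Whitelist filter mirroring `strip_action_level_fields`, with a
/// `BTreeMap<String, String>` stand-in for the JSON object.
fn keep_graph_keys(definition: &BTreeMap<String, String>) -> BTreeMap<String, String> {
    let keep = ["version", "vars", "tasks", "output_map"];
    definition
        .iter()
        .filter(|(k, _)| keep.contains(&k.as_str()))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}

fn main() {
    let mut def = BTreeMap::new();
    def.insert("version".to_string(), "1.0".to_string());
    def.insert("label".to_string(), "Deploy".to_string()); // action-level: dropped
    def.insert("tasks".to_string(), "[...]".to_string());

    let graph = keep_graph_keys(&def);
    assert!(graph.contains_key("version") && graph.contains_key("tasks"));
    assert!(!graph.contains_key("label"));
}
```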

/// Build the companion action YAML content for a workflow action.
///
/// This file defines the action-level metadata (ref, label, parameters, etc.)
/// and references the workflow file via `workflow_file`.
fn build_action_yaml(pack_ref: &str, request: &SaveWorkflowFileRequest) -> String {
    let mut lines = Vec::new();

    lines.push(format!(
        "# Action definition for workflow {}.{}",
        pack_ref, request.name
    ));
    lines.push("# The workflow graph (tasks, transitions, variables) is in:".to_string());
    lines.push(format!(
        "# actions/workflows/{}.workflow.yaml",
        request.name
    ));
    lines.push(String::new());

    lines.push(format!("ref: {}.{}", pack_ref, request.name));
    lines.push(format!("label: \"{}\"", request.label.replace('"', "\\\"")));
    if let Some(ref desc) = request.description {
        if !desc.is_empty() {
            lines.push(format!("description: \"{}\"", desc.replace('"', "\\\"")));
        }
    }
    lines.push("enabled: true".to_string());
    lines.push(format!(
        "workflow_file: workflows/{}.workflow.yaml",
        request.name
    ));

    // Parameters
    if let Some(ref params) = request.param_schema {
        if let Some(obj) = params.as_object() {
            if !obj.is_empty() {
                lines.push(String::new());
                let params_yaml = serde_yaml_ng::to_string(params).unwrap_or_default();
                lines.push("parameters:".to_string());
                // Indent the YAML output under `parameters:`
                for line in params_yaml.lines() {
                    lines.push(format!(" {}", line));
                }
            }
        }
    }

    // Output schema
    if let Some(ref output) = request.out_schema {
        if let Some(obj) = output.as_object() {
            if !obj.is_empty() {
                lines.push(String::new());
                let output_yaml = serde_yaml_ng::to_string(output).unwrap_or_default();
                lines.push("output:".to_string());
                for line in output_yaml.lines() {
                    lines.push(format!(" {}", line));
                }
            }
        }
    }

    // Tags
    if let Some(ref tags) = request.tags {
        if !tags.is_empty() {
            lines.push(String::new());
            lines.push("tags:".to_string());
            for tag in tags {
                lines.push(format!(" - {}", tag));
            }
        }
    }

    lines.push(String::new()); // trailing newline
    lines.join("\n")
}

/// Create a companion action record for a workflow definition.
///
/// This ensures the workflow appears in action lists and the action palette in the
@@ -1,5 +1,5 @@
use anyhow::{Context, Result};
use reqwest::{multipart, Client as HttpClient, Method, RequestBuilder, StatusCode};
use serde::{de::DeserializeOwned, Serialize};
use std::path::PathBuf;
use std::time::Duration;
@@ -83,13 +83,14 @@ impl ApiClient {
        self.auth_token = None;
    }

    /// Refresh the authentication token using the refresh token.
    ///
    /// Returns `Ok(true)` if refresh succeeded, `Ok(false)` if no refresh token
    /// is available or the server rejected it.
    async fn refresh_auth_token(&mut self) -> Result<bool> {
        let refresh_token = match &self.refresh_token {
            Some(token) => token.clone(),
            None => return Ok(false),
        };

        #[derive(Serialize)]
@@ -103,7 +104,6 @@ impl ApiClient {
            refresh_token: String,
        }

        // Build refresh request without auth token
        let url = format!("{}/auth/refresh", self.base_url);
        let req = self
            .client
@@ -113,7 +113,7 @@ impl ApiClient {
        let response = req.send().await.context("Failed to refresh token")?;

        if !response.status().is_success() {
            // Refresh failed — clear tokens so we don't keep retrying
            self.auth_token = None;
            self.refresh_token = None;
            return Ok(false);
@@ -128,7 +128,7 @@ impl ApiClient {
        self.auth_token = Some(api_response.data.access_token.clone());
        self.refresh_token = Some(api_response.data.refresh_token.clone());

        // Persist to config file
        if self.config_path.is_some() {
            if let Ok(mut config) = CliConfig::load() {
                let _ = config.set_auth(
@@ -141,45 +141,96 @@ impl ApiClient {
        Ok(true)
    }

    // ── Request building helpers ────────────────────────────────────────

    /// Build a full URL from a path.
    fn url_for(&self, path: &str) -> String {
        if path.starts_with("/auth") {
            format!("{}{}", self.base_url, path)
        } else {
            format!("{}/api/v1{}", self.base_url, path)
        }
    }

    /// Build a `RequestBuilder` with auth header applied.
    fn build_request(&self, method: Method, path: &str) -> RequestBuilder {
        let url = self.url_for(path);
        let mut req = self.client.request(method, &url);
        if let Some(token) = &self.auth_token {
            req = req.bearer_auth(token);
        }

        req
    }

    // ── Core execute-with-retry machinery ──────────────────────────────

    /// Send a request that carries a JSON body. On a 401 response the token
    /// is refreshed and the request is rebuilt & retried exactly once.
    async fn execute_json<T, B>(
        &mut self,
        method: Method,
        path: &str,
        body: Option<&B>,
    ) -> Result<T>
    where
        T: DeserializeOwned,
        B: Serialize,
    {
        // First attempt
        let req = self.attach_body(self.build_request(method.clone(), path), body);
        let response = req.send().await.context("Failed to send request to API")?;

        // If 401 and we have a refresh token, try to refresh once
        if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
            if self.refresh_auth_token().await? {
                // Retry with new token
                let req = self.attach_body(self.build_request(method, path), body);
                let response = req
                    .send()
                    .await
                    .context("Failed to send request to API (retry)")?;
                return self.handle_response(response).await;
            }
        }

        self.handle_response(response).await
    }

    /// Send a request that carries a JSON body and expects no response body.
    async fn execute_json_no_response<B: Serialize>(
        &mut self,
        method: Method,
        path: &str,
        body: Option<&B>,
    ) -> Result<()> {
        let req = self.attach_body(self.build_request(method.clone(), path), body);
        let response = req.send().await.context("Failed to send request to API")?;

        if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
            if self.refresh_auth_token().await? {
                let req = self.attach_body(self.build_request(method, path), body);
                let response = req
                    .send()
                    .await
                    .context("Failed to send request to API (retry)")?;
                return self.handle_empty_response(response).await;
            }
        }

        self.handle_empty_response(response).await
    }

    /// Optionally attach a JSON body to a request builder.
    fn attach_body<B: Serialize>(&self, req: RequestBuilder, body: Option<&B>) -> RequestBuilder {
        match body {
            Some(b) => req.json(b),
            None => req,
        }
    }

    // ── Response handling ──────────────────────────────────────────────

    /// Parse a successful API response or return a descriptive error.
    async fn handle_response<T: DeserializeOwned>(&self, response: reqwest::Response) -> Result<T> {
        let status = response.status();

        if status.is_success() {
@@ -194,7 +245,6 @@ impl ApiClient {
            .await
            .unwrap_or_else(|_| "Unknown error".to_string());

        if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
            anyhow::bail!("API error ({}): {}", status, api_error.error);
        } else {
@@ -203,10 +253,30 @@ impl ApiClient {
        }
    }

    /// Handle a response where we only care about success/failure, not a body.
    async fn handle_empty_response(&self, response: reqwest::Response) -> Result<()> {
        let status = response.status();
        if status.is_success() {
            Ok(())
        } else {
            let error_text = response
                .text()
                .await
                .unwrap_or_else(|_| "Unknown error".to_string());

            if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
                anyhow::bail!("API error ({}): {}", status, api_error.error);
            } else {
                anyhow::bail!("API error ({}): {}", status, error_text);
            }
        }
    }

    // ── Public convenience methods ─────────────────────────────────────

    /// GET request
    pub async fn get<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
        self.execute_json::<T, ()>(Method::GET, path, None).await
    }

    /// GET request with query parameters (query string must be in path)
@@ -215,8 +285,7 @@ impl ApiClient {
    /// Example: `client.get_with_query("/actions?enabled=true&pack=core").await`
    #[allow(dead_code)]
    pub async fn get_with_query<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
        self.execute_json::<T, ()>(Method::GET, path, None).await
    }

    /// POST request with JSON body
@@ -225,8 +294,7 @@ impl ApiClient {
        path: &str,
        body: &B,
    ) -> Result<T> {
        self.execute_json(Method::POST, path, Some(body)).await
    }

    /// PUT request with JSON body
@@ -237,8 +305,7 @@ impl ApiClient {
        path: &str,
        body: &B,
    ) -> Result<T> {
        self.execute_json(Method::PUT, path, Some(body)).await
    }

    /// PATCH request with JSON body
@@ -247,8 +314,7 @@ impl ApiClient {
        path: &str,
        body: &B,
    ) -> Result<T> {
        self.execute_json(Method::PATCH, path, Some(body)).await
    }

    /// DELETE request with response parsing
@@ -259,8 +325,7 @@ impl ApiClient {
    /// delete operations return metadata (e.g., cascade deletion summaries).
    #[allow(dead_code)]
    pub async fn delete<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
        self.execute_json::<T, ()>(Method::DELETE, path, None).await
    }

    /// POST request without expecting response body
@@ -270,36 +335,14 @@ impl ApiClient {
    /// Kept for API completeness even though not currently used.
    #[allow(dead_code)]
    pub async fn post_no_response<B: Serialize>(&mut self, path: &str, body: &B) -> Result<()> {
        self.execute_json_no_response(Method::POST, path, Some(body))
            .await
    }

    /// DELETE request without expecting response body
    pub async fn delete_no_response(&mut self, path: &str) -> Result<()> {
        self.execute_json_no_response::<()>(Method::DELETE, path, None)
            .await
    }

    /// POST a multipart/form-data request with a file field and optional text fields.
@@ -318,33 +361,47 @@
        mime_type: &str,
        extra_fields: Vec<(&str, String)>,
    ) -> Result<T> {
        // Closure-like helper to build the multipart request from scratch.
        // We need this because reqwest::multipart::Form is not Clone, so we
        // must rebuild it for the retry attempt.
        let build_multipart_request =
            |client: &ApiClient, bytes: &[u8]| -> Result<reqwest::RequestBuilder> {
                let url = format!("{}/api/v1{}", client.base_url, path);

                let file_part = multipart::Part::bytes(bytes.to_vec())
                    .file_name(file_name.to_string())
                    .mime_str(mime_type)
                    .context("Invalid MIME type")?;

                let mut form = multipart::Form::new().part(file_field_name.to_string(), file_part);

                for (key, value) in &extra_fields {
                    form = form.text(key.to_string(), value.clone());
                }

                let mut req = client.client.post(&url).multipart(form);
                if let Some(token) = &client.auth_token {
                    req = req.bearer_auth(token);
                }
                Ok(req)
            };

        // First attempt
        let req = build_multipart_request(self, &file_bytes)?;
        let response = req
            .send()
            .await
            .context("Failed to send multipart request to API")?;

        // Handle 401 + refresh (same pattern as execute())
        if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
            if self.refresh_auth_token().await? {
                // Retry with new token
                let req = build_multipart_request(self, &file_bytes)?;
                let response = req
                    .send()
                    .await
                    .context("Failed to send multipart request to API (retry)")?;
                return self.handle_response(response).await;
            }
        }
@@ -374,4 +431,22 @@ mod tests {
        client.clear_auth_token();
        assert!(client.auth_token.is_none());
    }

    #[test]
    fn test_url_for_api_path() {
        let client = ApiClient::new("http://localhost:8080".to_string(), None);
        assert_eq!(
            client.url_for("/actions"),
            "http://localhost:8080/api/v1/actions"
        );
    }

    #[test]
    fn test_url_for_auth_path() {
        let client = ApiClient::new("http://localhost:8080".to_string(), None);
        assert_eq!(
            client.url_for("/auth/login"),
            "http://localhost:8080/auth/login"
        );
    }
}
@@ -52,7 +52,7 @@ pub enum ActionCommands {
        action_ref: String,

        /// Skip confirmation prompt
        #[arg(long)]
        yes: bool,
    },
    /// Execute an action
@@ -7,3 +7,4 @@ pub mod pack_index;
pub mod rule;
pub mod sensor;
pub mod trigger;
pub mod workflow;
@@ -11,6 +11,37 @@ use crate::output::{self, OutputFormat};

#[derive(Subcommand)]
pub enum PackCommands {
    /// Create an empty pack
    ///
    /// Creates a new pack with no actions, triggers, rules, or sensors.
    /// Use --interactive (-i) to be prompted for each field, or provide
    /// fields via flags. Only --ref is required in non-interactive mode
    /// (--label defaults to a title-cased ref, version defaults to 0.1.0).
    Create {
        /// Unique reference identifier (e.g., "my_pack", "slack")
        #[arg(long, short = 'r')]
        r#ref: Option<String>,

        /// Human-readable label (defaults to title-cased ref)
        #[arg(long, short)]
        label: Option<String>,

        /// Pack description
        #[arg(long, short)]
        description: Option<String>,

        /// Pack version (semver format recommended)
        #[arg(long = "pack-version", default_value = "0.1.0")]
        pack_version: String,

        /// Tags for categorization (comma-separated)
        #[arg(long, value_delimiter = ',')]
        tags: Vec<String>,

        /// Interactive mode — prompt for each field
        #[arg(long, short)]
        interactive: bool,
    },
    /// List all installed packs
    List {
        /// Filter by pack name
@@ -75,7 +106,7 @@ pub enum PackCommands {
        pack_ref: String,

        /// Skip confirmation prompt
        #[arg(long)]
        yes: bool,
    },
    /// Register a pack from a local directory (path must be accessible by the API server)
@@ -282,6 +313,17 @@ struct UploadPackResponse {
    tests_skipped: bool,
}

#[derive(Debug, Serialize)]
struct CreatePackBody {
    r#ref: String,
    label: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    description: Option<String>,
    version: String,
    #[serde(default)]
    tags: Vec<String>,
}
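When serialized for `client.post("/packs", &body)` (which the client routes to `/api/v1/packs`), a `CreatePackBody` with no description would produce a JSON payload along these lines (hypothetical values; `description` is omitted by `skip_serializing_if`):

```json
{
  "ref": "my_pack",
  "label": "My Pack",
  "version": "0.1.0",
  "tags": ["demo"]
}
```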

pub async fn handle_pack_command(
    profile: &Option<String>,
    command: PackCommands,
@@ -289,6 +331,27 @@ pub async fn handle_pack_command(
    output_format: OutputFormat,
) -> Result<()> {
    match command {
        PackCommands::Create {
            r#ref,
            label,
            description,
            pack_version,
            tags,
            interactive,
        } => {
            handle_create(
                profile,
                r#ref,
                label,
                description,
                pack_version,
                tags,
                interactive,
                api_url,
                output_format,
            )
            .await
        }
        PackCommands::List { name } => handle_list(profile, name, api_url, output_format).await,
        PackCommands::Show { pack_ref } => {
            handle_show(profile, pack_ref, api_url, output_format).await
@@ -401,6 +464,169 @@ pub async fn handle_pack_command(
    }
}

/// Derive a human-readable label from a pack ref.
///
/// Splits on `_`, `-`, or `.` and title-cases each word.
fn label_from_ref(r: &str) -> String {
    r.split(|c| c == '_' || c == '-' || c == '.')
        .filter(|s| !s.is_empty())
        .map(|word| {
            let mut chars = word.chars();
            match chars.next() {
                Some(first) => {
                    let upper: String = first.to_uppercase().collect();
                    format!("{}{}", upper, chars.as_str())
                }
                None => String::new(),
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

async fn handle_create(
    profile: &Option<String>,
    ref_flag: Option<String>,
    label_flag: Option<String>,
    description_flag: Option<String>,
    version_flag: String,
    tags_flag: Vec<String>,
    interactive: bool,
    api_url: &Option<String>,
    output_format: OutputFormat,
) -> Result<()> {
    // ── Collect field values ────────────────────────────────────────
    let (pack_ref, label, description, version, tags) = if interactive {
        // Interactive prompts
        let pack_ref: String = match ref_flag {
            Some(r) => r,
            None => dialoguer::Input::new()
                .with_prompt("Pack ref (unique identifier, e.g. \"my_pack\")")
                .interact_text()?,
        };

        let default_label = label_flag
            .clone()
            .unwrap_or_else(|| label_from_ref(&pack_ref));
        let label: String = dialoguer::Input::new()
            .with_prompt("Label")
            .default(default_label)
            .interact_text()?;

        let default_desc = description_flag.clone().unwrap_or_default();
        let description: String = dialoguer::Input::new()
            .with_prompt("Description (optional, Enter to skip)")
            .default(default_desc)
            .allow_empty(true)
            .interact_text()?;
        let description = if description.is_empty() {
            None
        } else {
            Some(description)
        };

        let version: String = dialoguer::Input::new()
            .with_prompt("Version")
            .default(version_flag)
            .interact_text()?;

        let default_tags = if tags_flag.is_empty() {
            String::new()
        } else {
            tags_flag.join(", ")
        };
        let tags_input: String = dialoguer::Input::new()
            .with_prompt("Tags (comma-separated, optional)")
            .default(default_tags)
            .allow_empty(true)
            .interact_text()?;
        let tags: Vec<String> = tags_input
            .split(',')
            .map(|s| s.trim().to_string())
            .filter(|s| !s.is_empty())
            .collect();

        // Show summary and confirm
        println!();
        output::print_section("New Pack Summary");
        output::print_key_value_table(vec![
            ("Ref", pack_ref.clone()),
            ("Label", label.clone()),
            (
                "Description",
                description
                    .clone()
                    .unwrap_or_else(|| "(none)".to_string()),
            ),
            ("Version", version.clone()),
            (
                "Tags",
                if tags.is_empty() {
                    "(none)".to_string()
                } else {
                    tags.join(", ")
                },
            ),
        ]);
        println!();

        let confirm = dialoguer::Confirm::new()
            .with_prompt("Create this pack?")
            .default(true)
            .interact()?;

        if !confirm {
            output::print_info("Pack creation cancelled");
            return Ok(());
        }

        (pack_ref, label, description, version, tags)
    } else {
        // Non-interactive: ref is required
        let pack_ref = ref_flag.ok_or_else(|| {
            anyhow::anyhow!(
                "Pack ref is required. Provide --ref <value> or use --interactive mode."
            )
        })?;

        let label = label_flag.unwrap_or_else(|| label_from_ref(&pack_ref));
        let description = description_flag;
        let version = version_flag;
        let tags = tags_flag;

        (pack_ref, label, description, version, tags)
    };

    // ── Send request ────────────────────────────────────────────────
    let config = CliConfig::load_with_profile(profile.as_deref())?;
    let mut client = ApiClient::from_config(&config, api_url);

    let body = CreatePackBody {
        r#ref: pack_ref,
        label,
        description,
        version,
        tags,
    };

    let pack: Pack = client.post("/packs", &body).await?;

    // ── Output ──────────────────────────────────────────────────────
    match output_format {
        OutputFormat::Json | OutputFormat::Yaml => {
            output::print_output(&pack, output_format)?;
        }
        OutputFormat::Table => {
            output::print_success(&format!(
                "Pack '{}' created successfully (id: {})",
                pack.pack_ref, pack.id
            ));
        }
    }

    Ok(())
}

async fn handle_list(
    profile: &Option<String>,
    name: Option<String>,
@@ -1630,3 +1856,48 @@ async fn handle_update(

    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_label_from_ref_underscores() {
        assert_eq!(label_from_ref("my_cool_pack"), "My Cool Pack");
    }

    #[test]
    fn test_label_from_ref_hyphens() {
        assert_eq!(label_from_ref("my-cool-pack"), "My Cool Pack");
    }

    #[test]
    fn test_label_from_ref_dots() {
        assert_eq!(label_from_ref("my.cool.pack"), "My Cool Pack");
    }

    #[test]
    fn test_label_from_ref_mixed_separators() {
        assert_eq!(label_from_ref("my_cool-pack.v2"), "My Cool Pack V2");
    }

    #[test]
    fn test_label_from_ref_single_word() {
        assert_eq!(label_from_ref("slack"), "Slack");
    }

    #[test]
    fn test_label_from_ref_already_capitalized() {
        assert_eq!(label_from_ref("AWS"), "AWS");
    }

    #[test]
    fn test_label_from_ref_empty() {
        assert_eq!(label_from_ref(""), "");
    }

    #[test]
    fn test_label_from_ref_consecutive_separators() {
        assert_eq!(label_from_ref("my__pack"), "My Pack");
    }
}

@@ -42,7 +42,7 @@ pub enum TriggerCommands {
        trigger_ref: String,

        /// Skip confirmation prompt
        #[arg(long)]
        yes: bool,
    },
}
crates/cli/src/commands/workflow.rs (new file, 699 lines)
@@ -0,0 +1,699 @@

use anyhow::{Context, Result};
use clap::Subcommand;
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};

use crate::client::ApiClient;
use crate::config::CliConfig;
use crate::output::{self, OutputFormat};

#[derive(Subcommand)]
pub enum WorkflowCommands {
    /// Upload a workflow action from local YAML files to an existing pack.
    ///
    /// Reads the action YAML file, finds the referenced workflow YAML file
    /// via its `workflow_file` field, and uploads both to the API. The pack
    /// is determined from the action ref (e.g. `mypack.deploy` → pack `mypack`).
    Upload {
        /// Path to the action YAML file (e.g. actions/deploy.yaml).
        /// Must contain a `workflow_file` field pointing to the workflow YAML.
        action_file: String,

        /// Force update if the workflow already exists
        #[arg(short, long)]
        force: bool,
    },
    /// List workflows
    List {
        /// Filter by pack reference
        #[arg(long)]
        pack: Option<String>,

        /// Filter by tag (comma-separated)
        #[arg(long)]
        tags: Option<String>,

        /// Search term (matches label/description)
        #[arg(long)]
        search: Option<String>,
    },
    /// Show details of a specific workflow
    Show {
        /// Workflow reference (e.g. core.install_packs)
        workflow_ref: String,
    },
    /// Delete a workflow
    Delete {
        /// Workflow reference (e.g. core.install_packs)
        workflow_ref: String,

        /// Skip confirmation prompt
        #[arg(long)]
        yes: bool,
    },
}

// ── Local YAML models (for parsing action YAML files) ──────────────────

/// Minimal representation of an action YAML file, capturing only the fields
/// we need to build a `SaveWorkflowFileRequest`.
#[derive(Debug, Deserialize)]
struct ActionYaml {
    /// Full action ref, e.g. `python_example.timeline_demo`
    #[serde(rename = "ref")]
    action_ref: String,

    /// Human-readable label
    #[serde(default)]
    label: String,

    /// Description
    #[serde(default)]
    description: Option<String>,

    /// Relative path to the workflow YAML from the `actions/` directory
    workflow_file: Option<String>,

    /// Parameter schema (flat format)
    #[serde(default)]
    parameters: Option<serde_json::Value>,

    /// Output schema (flat format)
    #[serde(default)]
    output: Option<serde_json::Value>,

    /// Tags
    #[serde(default)]
    tags: Option<Vec<String>>,

    /// Whether the action is enabled
    #[serde(default)]
    enabled: Option<bool>,
}
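A minimal action YAML that deserializes into `ActionYaml` might look like this (a hypothetical sketch; only `ref` is strictly required, since the remaining fields are `#[serde(default)]` or `Option` and simply come back empty/`None` when absent):

```yaml
ref: mypack.deploy
label: "Deploy"
description: "Roll out the service"
workflow_file: workflows/deploy.workflow.yaml
enabled: true
tags:
  - ops
```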

// ── API DTOs ────────────────────────────────────────────────────────────

/// Mirrors the API's `SaveWorkflowFileRequest`.
#[derive(Debug, Serialize)]
struct SaveWorkflowFileRequest {
    name: String,
    label: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    description: Option<String>,
    version: String,
    pack_ref: String,
    definition: serde_json::Value,
    #[serde(skip_serializing_if = "Option::is_none")]
    param_schema: Option<serde_json::Value>,
    #[serde(skip_serializing_if = "Option::is_none")]
    out_schema: Option<serde_json::Value>,
    #[serde(skip_serializing_if = "Option::is_none")]
    tags: Option<Vec<String>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    enabled: Option<bool>,
}

#[derive(Debug, Serialize, Deserialize)]
struct WorkflowResponse {
    id: i64,
    #[serde(rename = "ref")]
    workflow_ref: String,
    pack: i64,
    pack_ref: String,
    label: String,
    description: Option<String>,
    version: String,
    param_schema: Option<serde_json::Value>,
    out_schema: Option<serde_json::Value>,
    definition: serde_json::Value,
    tags: Vec<String>,
    enabled: bool,
    created: String,
    updated: String,
}

#[derive(Debug, Serialize, Deserialize)]
struct WorkflowSummary {
    id: i64,
    #[serde(rename = "ref")]
    workflow_ref: String,
    pack_ref: String,
    label: String,
    description: Option<String>,
    version: String,
    tags: Vec<String>,
    enabled: bool,
    created: String,
    updated: String,
}

// ── Command dispatch ────────────────────────────────────────────────────

pub async fn handle_workflow_command(
    profile: &Option<String>,
    command: WorkflowCommands,
    api_url: &Option<String>,
    output_format: OutputFormat,
) -> Result<()> {
    match command {
        WorkflowCommands::Upload { action_file, force } => {
            handle_upload(profile, action_file, force, api_url, output_format).await
        }
        WorkflowCommands::List { pack, tags, search } => {
            handle_list(profile, pack, tags, search, api_url, output_format).await
        }
        WorkflowCommands::Show { workflow_ref } => {
            handle_show(profile, workflow_ref, api_url, output_format).await
        }
        WorkflowCommands::Delete { workflow_ref, yes } => {
            handle_delete(profile, workflow_ref, yes, api_url, output_format).await
        }
    }
}

// ── Upload ──────────────────────────────────────────────────────────────

async fn handle_upload(
    profile: &Option<String>,
    action_file: String,
    force: bool,
    api_url: &Option<String>,
    output_format: OutputFormat,
) -> Result<()> {
    let action_path = Path::new(&action_file);

    // ── 1. Validate & read the action YAML ──────────────────────────────
    if !action_path.exists() {
        anyhow::bail!("Action YAML file not found: {}", action_file);
    }
    if !action_path.is_file() {
        anyhow::bail!("Path is not a file: {}", action_file);
    }

    let action_yaml_content =
        std::fs::read_to_string(action_path).context("Failed to read action YAML file")?;

    let action: ActionYaml = serde_yaml_ng::from_str(&action_yaml_content)
        .context("Failed to parse action YAML file")?;

    // ── 2. Extract pack_ref and workflow name from the action ref ───────
    let (pack_ref, workflow_name) = split_action_ref(&action.action_ref)?;

    // ── 3. Resolve the workflow_file path ───────────────────────────────
    let workflow_file_rel = action.workflow_file.as_deref().ok_or_else(|| {
        anyhow::anyhow!(
            "Action YAML does not contain a 'workflow_file' field. \
             This command requires a workflow action — regular actions should be \
             uploaded as part of a pack."
        )
    })?;

    // workflow_file is relative to the actions/ directory. The action YAML is
    // typically at `<pack>/actions/<name>.yaml`, so the workflow file is
    // resolved relative to the action YAML's parent directory.
    let workflow_path = resolve_workflow_path(action_path, workflow_file_rel)?;

    if !workflow_path.exists() {
        anyhow::bail!(
            "Workflow file not found: {}\n  \
             (resolved from workflow_file: '{}' relative to '{}')",
            workflow_path.display(),
            workflow_file_rel,
            action_path.parent().unwrap_or(Path::new(".")).display()
        );
    }

    // ── 4. Read and parse the workflow YAML ─────────────────────────────
    let workflow_yaml_content =
        std::fs::read_to_string(&workflow_path).context("Failed to read workflow YAML file")?;

    let workflow_definition: serde_json::Value =
        serde_yaml_ng::from_str(&workflow_yaml_content).context(format!(
            "Failed to parse workflow YAML file: {}",
            workflow_path.display()
        ))?;

    // Extract version from the workflow definition, defaulting to "1.0.0"
    let version = workflow_definition
        .get("version")
        .and_then(|v| v.as_str())
        .unwrap_or("1.0.0")
        .to_string();

    // ── 5. Build the API request ────────────────────────────────────────
    //
    // Merge the action-level fields from the workflow definition back into the
    // definition payload (the API's SaveWorkflowFileRequest.definition carries
    // the full blob; write_workflow_yaml on the server side strips the action-
    // level fields before writing the graph-only file).
    let mut definition_map: serde_json::Map<String, serde_json::Value> =
        if let Some(obj) = workflow_definition.as_object() {
            obj.clone()
        } else {
            serde_json::Map::new()
        };

    // Ensure action-level fields are present in the definition (the API and
    // web UI store the combined form in the database; the server splits them
    // into two files on disk).
    if let Some(params) = &action.parameters {
        definition_map
            .entry("parameters".to_string())
            .or_insert_with(|| params.clone());
    }
    if let Some(out) = &action.output {
        definition_map
            .entry("output".to_string())
            .or_insert_with(|| out.clone());
    }

    let request = SaveWorkflowFileRequest {
        name: workflow_name.clone(),
        label: if action.label.is_empty() {
            workflow_name.clone()
        } else {
            action.label.clone()
        },
        description: action.description.clone(),
        version,
        pack_ref: pack_ref.clone(),
        definition: serde_json::Value::Object(definition_map),
        param_schema: action.parameters.clone(),
        out_schema: action.output.clone(),
        tags: action.tags.clone(),
        enabled: action.enabled,
    };

    // ── 6. Print progress ───────────────────────────────────────────────
    if output_format == OutputFormat::Table {
        output::print_info(&format!(
            "Uploading workflow action '{}.{}' to pack '{}'",
            pack_ref, workflow_name, pack_ref,
        ));
        output::print_info(&format!("  Action YAML:   {}", action_path.display()));
        output::print_info(&format!("  Workflow YAML: {}", workflow_path.display()));
    }

    // ── 7. Send to API ──────────────────────────────────────────────────
    let config = CliConfig::load_with_profile(profile.as_deref())?;
    let mut client = ApiClient::from_config(&config, api_url);

    let workflow_ref = format!("{}.{}", pack_ref, workflow_name);

    // Try create first; if 409 Conflict and --force, fall back to update.
    let create_path = format!("/packs/{}/workflow-files", pack_ref);

    let result: Result<WorkflowResponse> = client.post(&create_path, &request).await;

    let response: WorkflowResponse = match result {
        Ok(resp) => resp,
        Err(err) => {
            let err_str = err.to_string();
            if err_str.contains("409") || err_str.to_lowercase().contains("conflict") {
                if !force {
                    anyhow::bail!(
                        "Workflow '{}' already exists. Use --force to update it.",
                        workflow_ref
                    );
                }

                if output_format == OutputFormat::Table {
                    output::print_info("Workflow already exists, updating...");
                }

                let update_path = format!("/workflows/{}/file", workflow_ref);
                client.put(&update_path, &request).await.context(
                    "Failed to update existing workflow. \
                     Check that the pack exists and the workflow ref is correct.",
                )?
            } else {
                return Err(err).context("Failed to upload workflow");
            }
        }
    };

    // ── 8. Print result ─────────────────────────────────────────────────
    match output_format {
        OutputFormat::Json | OutputFormat::Yaml => {
            output::print_output(&response, output_format)?;
        }
        OutputFormat::Table => {
            println!();
            output::print_success(&format!(
                "Workflow '{}' uploaded successfully",
                response.workflow_ref
            ));
            output::print_key_value_table(vec![
                ("ID", response.id.to_string()),
                ("Reference", response.workflow_ref.clone()),
                ("Pack", response.pack_ref.clone()),
                ("Label", response.label.clone()),
                ("Version", response.version.clone()),
                (
                    "Tags",
                    if response.tags.is_empty() {
                        "none".to_string()
                    } else {
                        response.tags.join(", ")
                    },
                ),
                ("Enabled", output::format_bool(response.enabled)),
            ]);
        }
    }

    Ok(())
}

// ── List ────────────────────────────────────────────────────────────────

async fn handle_list(
    profile: &Option<String>,
    pack: Option<String>,
    tags: Option<String>,
    search: Option<String>,
    api_url: &Option<String>,
    output_format: OutputFormat,
) -> Result<()> {
    let config = CliConfig::load_with_profile(profile.as_deref())?;
    let mut client = ApiClient::from_config(&config, api_url);

    let path = if let Some(ref pack_ref) = pack {
        format!("/packs/{}/workflows", pack_ref)
    } else {
        let mut query_parts: Vec<String> = Vec::new();
        if let Some(ref t) = tags {
            query_parts.push(format!("tags={}", urlencoding::encode(t)));
        }
        if let Some(ref s) = search {
            query_parts.push(format!("search={}", urlencoding::encode(s)));
        }
        if query_parts.is_empty() {
            "/workflows".to_string()
        } else {
            format!("/workflows?{}", query_parts.join("&"))
        }
    };

    let workflows: Vec<WorkflowSummary> = client.get(&path).await?;

    match output_format {
        OutputFormat::Json | OutputFormat::Yaml => {
            output::print_output(&workflows, output_format)?;
        }
        OutputFormat::Table => {
            if workflows.is_empty() {
                output::print_info("No workflows found");
            } else {
                let mut table = output::create_table();
                output::add_header(
                    &mut table,
                    vec!["ID", "Reference", "Pack", "Label", "Version", "Enabled", "Tags"],
                );

                for wf in &workflows {
                    table.add_row(vec![
                        wf.id.to_string(),
                        wf.workflow_ref.clone(),
                        wf.pack_ref.clone(),
                        output::truncate(&wf.label, 30),
                        wf.version.clone(),
                        output::format_bool(wf.enabled),
                        if wf.tags.is_empty() {
                            "-".to_string()
                        } else {
                            output::truncate(&wf.tags.join(", "), 25)
                        },
                    ]);
                }

                println!("{}", table);
                output::print_info(&format!("{} workflow(s) found", workflows.len()));
            }
        }
    }

    Ok(())
}

// ── Show ────────────────────────────────────────────────────────────────

async fn handle_show(
    profile: &Option<String>,
    workflow_ref: String,
    api_url: &Option<String>,
    output_format: OutputFormat,
) -> Result<()> {
    let config = CliConfig::load_with_profile(profile.as_deref())?;
    let mut client = ApiClient::from_config(&config, api_url);

    let path = format!("/workflows/{}", workflow_ref);
    let workflow: WorkflowResponse = client.get(&path).await?;

    match output_format {
        OutputFormat::Json | OutputFormat::Yaml => {
            output::print_output(&workflow, output_format)?;
        }
        OutputFormat::Table => {
            output::print_section(&format!("Workflow: {}", workflow.workflow_ref));
            output::print_key_value_table(vec![
                ("ID", workflow.id.to_string()),
                ("Reference", workflow.workflow_ref.clone()),
                ("Pack", workflow.pack_ref.clone()),
                ("Pack ID", workflow.pack.to_string()),
                ("Label", workflow.label.clone()),
                (
                    "Description",
                    workflow
                        .description
                        .clone()
                        .unwrap_or_else(|| "-".to_string()),
                ),
                ("Version", workflow.version.clone()),
                ("Enabled", output::format_bool(workflow.enabled)),
                (
                    "Tags",
                    if workflow.tags.is_empty() {
                        "none".to_string()
                    } else {
                        workflow.tags.join(", ")
                    },
                ),
                ("Created", output::format_timestamp(&workflow.created)),
                ("Updated", output::format_timestamp(&workflow.updated)),
            ]);

            // Show parameter schema if present
            if let Some(ref params) = workflow.param_schema {
                if !params.is_null() && params.as_object().is_some_and(|o| !o.is_empty()) {
                    output::print_section("Parameters");
                    let yaml = serde_yaml_ng::to_string(params)?;
                    println!("{}", yaml);
                }
            }

            // Show output schema if present
            if let Some(ref out) = workflow.out_schema {
                if !out.is_null() && out.as_object().is_some_and(|o| !o.is_empty()) {
                    output::print_section("Output Schema");
                    let yaml = serde_yaml_ng::to_string(out)?;
                    println!("{}", yaml);
                }
            }

            // Show task summary from definition
            if let Some(tasks) = workflow.definition.get("tasks") {
                if let Some(arr) = tasks.as_array() {
                    output::print_section("Tasks");
                    let mut table = output::create_table();
                    output::add_header(&mut table, vec!["#", "Name", "Action", "Transitions"]);

                    for (i, task) in arr.iter().enumerate() {
                        let name = task.get("name").and_then(|v| v.as_str()).unwrap_or("?");
                        let action = task.get("action").and_then(|v| v.as_str()).unwrap_or("-");

                        let transition_count = task
                            .get("next")
                            .and_then(|v| v.as_array())
                            .map(|a| {
                                // Count total target tasks across all transitions
                                a.iter()
                                    .filter_map(|t| {
                                        t.get("do").and_then(|d| d.as_array()).map(|d| d.len())
                                    })
                                    .sum::<usize>()
                            })
                            .unwrap_or(0);

                        let transitions_str = if transition_count == 0 {
                            "terminal".to_string()
                        } else {
                            format!("{} target(s)", transition_count)
                        };

                        table.add_row(vec![
                            (i + 1).to_string(),
                            name.to_string(),
                            output::truncate(action, 35),
                            transitions_str,
                        ]);
                    }

                    println!("{}", table);
                }
            }
        }
    }

    Ok(())
}

// ── Delete ──────────────────────────────────────────────────────────────

async fn handle_delete(
    profile: &Option<String>,
    workflow_ref: String,
    yes: bool,
    api_url: &Option<String>,
    output_format: OutputFormat,
) -> Result<()> {
    let config = CliConfig::load_with_profile(profile.as_deref())?;
    let mut client = ApiClient::from_config(&config, api_url);

    if !yes && output_format == OutputFormat::Table {
        let confirm = dialoguer::Confirm::new()
            .with_prompt(format!(
                "Are you sure you want to delete workflow '{}'?",
                workflow_ref
            ))
            .default(false)
            .interact()?;

        if !confirm {
            output::print_info("Delete cancelled");
            return Ok(());
        }
    }

    let path = format!("/workflows/{}", workflow_ref);
    client.delete_no_response(&path).await?;

    match output_format {
        OutputFormat::Json | OutputFormat::Yaml => {
            let msg = serde_json::json!({"message": format!("Workflow '{}' deleted", workflow_ref)});
            output::print_output(&msg, output_format)?;
        }
        OutputFormat::Table => {
            output::print_success(&format!("Workflow '{}' deleted successfully", workflow_ref));
        }
    }

    Ok(())
}

// ── Helpers ─────────────────────────────────────────────────────────────

/// Split an action ref like `pack_name.action_name` into `(pack_ref, name)`.
///
/// Supports multi-segment pack refs: `org.pack.action` → `("org.pack", "action")`.
/// The last dot-separated segment is the workflow/action name; everything before
/// it is the pack ref.
fn split_action_ref(action_ref: &str) -> Result<(String, String)> {
    let dot_pos = action_ref.rfind('.').ok_or_else(|| {
        anyhow::anyhow!(
            "Invalid action ref '{}': expected format 'pack_ref.name' (at least one dot)",
            action_ref
        )
    })?;

    let pack_ref = &action_ref[..dot_pos];
    let name = &action_ref[dot_pos + 1..];

    if pack_ref.is_empty() || name.is_empty() {
        anyhow::bail!(
            "Invalid action ref '{}': both pack_ref and name must be non-empty",
            action_ref
        );
    }

    Ok((pack_ref.to_string(), name.to_string()))
}

/// Resolve the workflow YAML path from the action YAML's location and the
/// `workflow_file` value.
///
/// `workflow_file` is relative to the `actions/` directory. Since the action
/// YAML is typically at `<pack>/actions/<name>.yaml`, the workflow path is
/// resolved relative to the action YAML's parent directory.
fn resolve_workflow_path(action_yaml_path: &Path, workflow_file: &str) -> Result<PathBuf> {
    let action_dir = action_yaml_path.parent().unwrap_or(Path::new("."));

    let resolved = action_dir.join(workflow_file);

    // Deliberately not canonicalized: the file may not exist yet, and the
    // caller checks existence (with a descriptive error) after resolution.
    Ok(resolved)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_split_action_ref_simple() {
        let (pack, name) = split_action_ref("core.echo").unwrap();
        assert_eq!(pack, "core");
        assert_eq!(name, "echo");
    }

    #[test]
    fn test_split_action_ref_multi_segment_pack() {
        let (pack, name) = split_action_ref("org.infra.deploy").unwrap();
        assert_eq!(pack, "org.infra");
        assert_eq!(name, "deploy");
    }

    #[test]
    fn test_split_action_ref_no_dot() {
        assert!(split_action_ref("nodot").is_err());
    }

    #[test]
    fn test_split_action_ref_empty_parts() {
        assert!(split_action_ref(".name").is_err());
        assert!(split_action_ref("pack.").is_err());
    }

    #[test]
    fn test_resolve_workflow_path() {
        let action_path = Path::new("/packs/mypack/actions/deploy.yaml");
        let resolved =
            resolve_workflow_path(action_path, "workflows/deploy.workflow.yaml").unwrap();
        assert_eq!(
            resolved,
            PathBuf::from("/packs/mypack/actions/workflows/deploy.workflow.yaml")
        );
    }

    #[test]
    fn test_resolve_workflow_path_relative() {
        let action_path = Path::new("actions/deploy.yaml");
        let resolved =
            resolve_workflow_path(action_path, "workflows/deploy.workflow.yaml").unwrap();
        assert_eq!(
            resolved,
            PathBuf::from("actions/workflows/deploy.workflow.yaml")
        );
    }
}
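The graph-only workflow YAML referenced by `workflow_file` is what `handle_upload` parses in step 4 and what the `Show` task table walks. A minimal sketch, assuming the shape used by the mock `definition` in the integration tests (a `tasks` list with `next`/`when`/`do` transitions):

```yaml
# workflows/deploy.workflow.yaml — hypothetical graph-only workflow file
version: "1.0.0"
vars:
  result: null
tasks:
  - name: step1
    action: core.echo
    input: { message: hello }
    next:
      - when: "{{ succeeded() }}"
        do: [step2]
  - name: step2
    action: core.echo
    input: { message: done }
```

With this definition, `step1` contributes one `do` target, so the `Show` task table would render it as `1 target(s)`, while `step2` has no `next` and is rendered as `terminal`.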
@@ -16,6 +16,7 @@ use commands::{
    rule::RuleCommands,
    sensor::SensorCommands,
    trigger::TriggerCommands,
    workflow::WorkflowCommands,
};

#[derive(Parser)]
@@ -78,6 +79,11 @@ enum Commands {
        #[command(subcommand)]
        command: ExecutionCommands,
    },
    /// Workflow management
    Workflow {
        #[command(subcommand)]
        command: WorkflowCommands,
    },
    /// Trigger management
    Trigger {
        #[command(subcommand)]
@@ -172,6 +178,15 @@ async fn main() {
            )
            .await
        }
        Commands::Workflow { command } => {
            commands::workflow::handle_workflow_command(
                &cli.profile,
                command,
                &cli.api_url,
                output_format,
            )
            .await
        }
        Commands::Trigger { command } => {
            commands::trigger::handle_trigger_command(
                &cli.profile,
@@ -438,3 +438,38 @@ pub async fn mock_not_found(server: &MockServer, path_pattern: &str) {
        .mount(server)
        .await;
}

/// Mock a successful pack create response (POST /api/v1/packs)
#[allow(dead_code)]
pub async fn mock_pack_create(server: &MockServer) {
    Mock::given(method("POST"))
        .and(path("/api/v1/packs"))
        .respond_with(ResponseTemplate::new(201).set_body_json(json!({
            "data": {
                "id": 42,
                "ref": "my_pack",
                "label": "My Pack",
                "description": "A test pack",
                "version": "0.1.0",
                "author": null,
                "enabled": true,
                "tags": ["test"],
                "created": "2024-01-01T00:00:00Z",
                "updated": "2024-01-01T00:00:00Z"
            }
        })))
        .mount(server)
        .await;
}

/// Mock a 409 conflict response for pack create
#[allow(dead_code)]
pub async fn mock_pack_create_conflict(server: &MockServer) {
    Mock::given(method("POST"))
        .and(path("/api/v1/packs"))
        .respond_with(ResponseTemplate::new(409).set_body_json(json!({
            "error": "Pack with ref 'my_pack' already exists"
        })))
        .mount(server)
        .await;
}

@@ -4,6 +4,11 @@

use assert_cmd::Command;
use predicates::prelude::*;
use serde_json::json;
use wiremock::{
    matchers::{body_json, method, path},
    Mock, ResponseTemplate,
};

mod common;
use common::*;
@@ -222,6 +227,231 @@ async fn test_pack_get_json_output() {
        .stdout(predicate::str::contains(r#""ref": "core""#));
}

// ── pack create tests ──────────────────────────────────────────────────

#[tokio::test]
async fn test_pack_create_non_interactive() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_pack_create(&fixture.mock_server).await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("pack")
        .arg("create")
        .arg("--ref")
        .arg("my_pack")
        .arg("--label")
        .arg("My Pack")
        .arg("--description")
        .arg("A test pack")
        .arg("--pack-version")
        .arg("0.1.0")
        .arg("--tags")
        .arg("test");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("my_pack"))
        .stdout(predicate::str::contains("created successfully"));
}

#[tokio::test]
async fn test_pack_create_json_output() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_pack_create(&fixture.mock_server).await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("--json")
        .arg("pack")
        .arg("create")
        .arg("--ref")
        .arg("my_pack");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains(r#""ref": "my_pack""#))
        .stdout(predicate::str::contains(r#""id": 42"#));
}

#[tokio::test]
async fn test_pack_create_conflict() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_pack_create_conflict(&fixture.mock_server).await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("pack")
        .arg("create")
        .arg("--ref")
        .arg("my_pack");

    cmd.assert()
        .failure()
        .stderr(predicate::str::contains("already exists"));
}

#[tokio::test]
async fn test_pack_create_missing_ref() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("pack")
        .arg("create");

    cmd.assert()
        .failure()
        .stderr(predicate::str::contains("Pack ref is required"));
}

#[tokio::test]
async fn test_pack_create_default_label_from_ref() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    // Use a custom mock that validates the request body contains the derived label
    Mock::given(method("POST"))
        .and(path("/api/v1/packs"))
        .and(body_json(json!({
            "ref": "my_cool_pack",
            "label": "My Cool Pack",
            "version": "0.1.0",
            "tags": []
        })))
        .respond_with(ResponseTemplate::new(201).set_body_json(json!({
            "data": {
                "id": 99,
                "ref": "my_cool_pack",
                "label": "My Cool Pack",
                "version": "0.1.0",
                "enabled": true,
                "created": "2024-01-01T00:00:00Z",
                "updated": "2024-01-01T00:00:00Z"
            }
        })))
        .mount(&fixture.mock_server)
        .await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("pack")
        .arg("create")
        .arg("--ref")
        .arg("my_cool_pack");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("my_cool_pack"))
        .stdout(predicate::str::contains("created successfully"));
}

#[tokio::test]
async fn test_pack_create_default_version() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    // Verify the default version "0.1.0" is sent when --pack-version is not specified
    Mock::given(method("POST"))
        .and(path("/api/v1/packs"))
        .and(body_json(json!({
            "ref": "versioned_pack",
            "label": "Versioned Pack",
            "version": "0.1.0",
            "tags": []
        })))
        .respond_with(ResponseTemplate::new(201).set_body_json(json!({
            "data": {
                "id": 7,
                "ref": "versioned_pack",
                "label": "Versioned Pack",
                "version": "0.1.0",
                "enabled": true,
                "created": "2024-01-01T00:00:00Z",
                "updated": "2024-01-01T00:00:00Z"
            }
        })))
        .mount(&fixture.mock_server)
        .await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("pack")
        .arg("create")
        .arg("--ref")
        .arg("versioned_pack");

    cmd.assert().success();
}

#[tokio::test]
async fn test_pack_create_with_tags() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    Mock::given(method("POST"))
        .and(path("/api/v1/packs"))
        .and(body_json(json!({
            "ref": "tagged",
            "label": "Tagged",
            "version": "0.1.0",
            "tags": ["networking", "monitoring"]
        })))
        .respond_with(ResponseTemplate::new(201).set_body_json(json!({
            "data": {
                "id": 10,
                "ref": "tagged",
                "label": "Tagged",
                "version": "0.1.0",
                "tags": ["networking", "monitoring"],
                "enabled": true,
                "created": "2024-01-01T00:00:00Z",
                "updated": "2024-01-01T00:00:00Z"
            }
        })))
        .mount(&fixture.mock_server)
        .await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("pack")
        .arg("create")
        .arg("--ref")
        .arg("tagged")
        .arg("--tags")
        .arg("networking,monitoring");

    cmd.assert().success();
}

#[tokio::test]
async fn test_pack_list_empty_result() {
    let fixture = TestFixture::new().await;
crates/cli/tests/test_workflows.rs (new file, 777 lines)
@@ -0,0 +1,777 @@
//! Integration tests for CLI workflow commands

#![allow(deprecated)]

use assert_cmd::Command;
use predicates::prelude::*;
use serde_json::json;
use std::fs;
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

mod common;
use common::*;

// ── Mock helpers ────────────────────────────────────────────────────────

async fn mock_workflow_list(server: &MockServer) {
    Mock::given(method("GET"))
        .and(path("/api/v1/workflows"))
        .respond_with(ResponseTemplate::new(200).set_body_json(json!({
            "data": [
                {
                    "id": 1,
                    "ref": "core.install_packs",
                    "pack_ref": "core",
                    "label": "Install Packs",
                    "description": "Install one or more packs",
                    "version": "1.0.0",
                    "tags": ["core", "packs"],
                    "enabled": true,
                    "created": "2024-01-01T00:00:00Z",
                    "updated": "2024-01-01T00:00:00Z"
                },
                {
                    "id": 2,
                    "ref": "mypack.deploy",
                    "pack_ref": "mypack",
                    "label": "Deploy App",
                    "description": "Deploy an application",
                    "version": "2.0.0",
                    "tags": ["deploy"],
                    "enabled": true,
                    "created": "2024-01-02T00:00:00Z",
                    "updated": "2024-01-02T00:00:00Z"
                }
            ]
        })))
        .mount(server)
        .await;
}

async fn mock_workflow_list_by_pack(server: &MockServer, pack_ref: &str) {
    let p = format!("/api/v1/packs/{}/workflows", pack_ref);
    Mock::given(method("GET"))
        .and(path(p.as_str()))
        .respond_with(ResponseTemplate::new(200).set_body_json(json!({
            "data": [
                {
                    "id": 1,
                    "ref": format!("{}.example_workflow", pack_ref),
                    "pack_ref": pack_ref,
                    "label": "Example Workflow",
                    "description": "An example workflow",
                    "version": "1.0.0",
                    "tags": [],
                    "enabled": true,
                    "created": "2024-01-01T00:00:00Z",
                    "updated": "2024-01-01T00:00:00Z"
                }
            ]
        })))
        .mount(server)
        .await;
}

async fn mock_workflow_get(server: &MockServer, workflow_ref: &str) {
    let p = format!("/api/v1/workflows/{}", workflow_ref);
    Mock::given(method("GET"))
        .and(path(p.as_str()))
        .respond_with(ResponseTemplate::new(200).set_body_json(json!({
            "data": {
                "id": 1,
                "ref": workflow_ref,
                "pack": 1,
                "pack_ref": "mypack",
                "label": "My Workflow",
                "description": "A test workflow",
                "version": "1.0.0",
                "param_schema": {
                    "url": {"type": "string", "required": true},
                    "timeout": {"type": "integer", "default": 30}
                },
                "out_schema": {
                    "status": {"type": "string"}
                },
                "definition": {
                    "version": "1.0.0",
                    "vars": {"result": null},
                    "tasks": [
                        {
                            "name": "step1",
                            "action": "core.echo",
                            "input": {"message": "hello"},
                            "next": [
                                {"when": "{{ succeeded() }}", "do": ["step2"]}
                            ]
                        },
                        {
                            "name": "step2",
                            "action": "core.echo",
                            "input": {"message": "done"}
                        }
                    ]
                },
                "tags": ["test", "demo"],
                "enabled": true,
                "created": "2024-01-01T00:00:00Z",
                "updated": "2024-01-01T00:00:00Z"
            }
        })))
        .mount(server)
        .await;
}

async fn mock_workflow_delete(server: &MockServer, workflow_ref: &str) {
    let p = format!("/api/v1/workflows/{}", workflow_ref);
    Mock::given(method("DELETE"))
        .and(path(p.as_str()))
        .respond_with(ResponseTemplate::new(204))
        .mount(server)
        .await;
}

async fn mock_workflow_save(server: &MockServer, pack_ref: &str) {
    let p = format!("/api/v1/packs/{}/workflow-files", pack_ref);
    Mock::given(method("POST"))
        .and(path(p.as_str()))
        .respond_with(ResponseTemplate::new(201).set_body_json(json!({
"data": {
|
||||
"id": 10,
|
||||
"ref": format!("{}.deploy", pack_ref),
|
||||
"pack": 1,
|
||||
"pack_ref": pack_ref,
|
||||
"label": "Deploy App",
|
||||
"description": "Deploy the application",
|
||||
"version": "1.0.0",
|
||||
"param_schema": null,
|
||||
"out_schema": null,
|
||||
"definition": {"version": "1.0.0", "tasks": []},
|
||||
"tags": ["deploy"],
|
||||
"enabled": true,
|
||||
"created": "2024-01-10T00:00:00Z",
|
||||
"updated": "2024-01-10T00:00:00Z"
|
||||
}
|
||||
})))
|
||||
.mount(server)
|
||||
.await;
|
||||
}
|
||||
|
||||
async fn mock_workflow_save_conflict(server: &MockServer, pack_ref: &str) {
|
||||
let p = format!("/api/v1/packs/{}/workflow-files", pack_ref);
|
||||
Mock::given(method("POST"))
|
||||
.and(path(p.as_str()))
|
||||
.respond_with(ResponseTemplate::new(409).set_body_json(json!({
|
||||
"error": "Workflow with ref 'mypack.deploy' already exists"
|
||||
})))
|
||||
.mount(server)
|
||||
.await;
|
||||
}
|
||||
|
||||
async fn mock_workflow_update(server: &MockServer, workflow_ref: &str) {
|
||||
let p = format!("/api/v1/workflows/{}/file", workflow_ref);
|
||||
Mock::given(method("PUT"))
|
||||
.and(path(p.as_str()))
|
||||
.respond_with(ResponseTemplate::new(200).set_body_json(json!({
|
||||
"data": {
|
||||
"id": 10,
|
||||
"ref": workflow_ref,
|
||||
"pack": 1,
|
||||
"pack_ref": "mypack",
|
||||
"label": "Deploy App",
|
||||
"description": "Deploy the application",
|
||||
"version": "1.0.0",
|
||||
"param_schema": null,
|
||||
"out_schema": null,
|
||||
"definition": {"version": "1.0.0", "tasks": []},
|
||||
"tags": ["deploy"],
|
||||
"enabled": true,
|
||||
"created": "2024-01-10T00:00:00Z",
|
||||
"updated": "2024-01-10T12:00:00Z"
|
||||
}
|
||||
})))
|
||||
.mount(server)
|
||||
.await;
|
||||
}
|
||||
|
||||
// ── Helper to write action + workflow YAML to temp dirs ─────────────────
|
||||
|
||||
struct WorkflowFixture {
|
||||
_dir: tempfile::TempDir,
|
||||
action_yaml_path: String,
|
||||
}
|
||||
|
||||
impl WorkflowFixture {
|
||||
fn new(action_ref: &str, workflow_file: &str) -> Self {
|
||||
let dir = tempfile::TempDir::new().expect("Failed to create temp dir");
|
||||
let actions_dir = dir.path().join("actions");
|
||||
let workflows_dir = actions_dir.join("workflows");
|
||||
fs::create_dir_all(&workflows_dir).unwrap();
|
||||
|
||||
// Write the action YAML
|
||||
let action_yaml = format!(
|
||||
r#"ref: {}
|
||||
label: "Deploy App"
|
||||
description: "Deploy the application"
|
||||
enabled: true
|
||||
workflow_file: {}
|
||||
|
||||
parameters:
|
||||
environment:
|
||||
type: string
|
||||
required: true
|
||||
description: "Target environment"
|
||||
version:
|
||||
type: string
|
||||
default: "latest"
|
||||
|
||||
output:
|
||||
status:
|
||||
type: string
|
||||
|
||||
tags:
|
||||
- deploy
|
||||
"#,
|
||||
action_ref, workflow_file,
|
||||
);
|
||||
|
||||
let action_name = action_ref.rsplit('.').next().unwrap();
|
||||
let action_path = actions_dir.join(format!("{}.yaml", action_name));
|
||||
fs::write(&action_path, &action_yaml).unwrap();
|
||||
|
||||
// Write the workflow YAML
|
||||
let workflow_yaml = r#"version: "1.0.0"
|
||||
|
||||
vars:
|
||||
deploy_result: null
|
||||
|
||||
tasks:
|
||||
- name: prepare
|
||||
action: core.echo
|
||||
input:
|
||||
message: "Preparing deployment"
|
||||
next:
|
||||
- when: "{{ succeeded() }}"
|
||||
do:
|
||||
- deploy
|
||||
|
||||
- name: deploy
|
||||
action: core.echo
|
||||
input:
|
||||
message: "Deploying"
|
||||
next:
|
||||
- when: "{{ succeeded() }}"
|
||||
do:
|
||||
- verify
|
||||
|
||||
- name: verify
|
||||
action: core.echo
|
||||
input:
|
||||
message: "Verifying"
|
||||
|
||||
output_map:
|
||||
status: "{{ 'success' if workflow.deploy_result else 'unknown' }}"
|
||||
"#;
|
||||
|
||||
let workflow_path = workflows_dir.join(format!("{}.workflow.yaml", action_name));
|
||||
fs::write(&workflow_path, workflow_yaml).unwrap();
|
||||
|
||||
Self {
|
||||
action_yaml_path: action_path.to_string_lossy().to_string(),
|
||||
_dir: dir,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ── List tests ──────────────────────────────────────────────────────────
|
||||
|
||||
#[tokio::test]
async fn test_workflow_list_authenticated() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_list(&fixture.mock_server).await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("list");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("core.install_packs"))
        .stdout(predicate::str::contains("mypack.deploy"))
        .stdout(predicate::str::contains("2 workflow(s) found"));
}

#[tokio::test]
async fn test_workflow_list_by_pack() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_list_by_pack(&fixture.mock_server, "core").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("list")
        .arg("--pack")
        .arg("core");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("core.example_workflow"))
        .stdout(predicate::str::contains("1 workflow(s) found"));
}

#[tokio::test]
async fn test_workflow_list_json_output() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_list(&fixture.mock_server).await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("--json")
        .arg("workflow")
        .arg("list");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("\"core.install_packs\""))
        .stdout(predicate::str::contains("\"mypack.deploy\""));
}

#[tokio::test]
async fn test_workflow_list_yaml_output() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_list(&fixture.mock_server).await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("--yaml")
        .arg("workflow")
        .arg("list");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("core.install_packs"))
        .stdout(predicate::str::contains("mypack.deploy"));
}

#[tokio::test]
async fn test_workflow_list_empty() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    Mock::given(method("GET"))
        .and(path("/api/v1/workflows"))
        .respond_with(ResponseTemplate::new(200).set_body_json(json!({
            "data": []
        })))
        .mount(&fixture.mock_server)
        .await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("list");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("No workflows found"));
}

#[tokio::test]
async fn test_workflow_list_unauthenticated() {
    let fixture = TestFixture::new().await;
    fixture.write_default_config();

    mock_unauthorized(&fixture.mock_server, "/api/v1/workflows").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("list");

    cmd.assert().failure();
}

// ── Show tests ──────────────────────────────────────────────────────────

#[tokio::test]
async fn test_workflow_show() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_get(&fixture.mock_server, "mypack.my_workflow").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("show")
        .arg("mypack.my_workflow");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("mypack.my_workflow"))
        .stdout(predicate::str::contains("My Workflow"))
        .stdout(predicate::str::contains("1.0.0"))
        .stdout(predicate::str::contains("test, demo"))
        // Tasks table should show task names
        .stdout(predicate::str::contains("step1"))
        .stdout(predicate::str::contains("step2"))
        .stdout(predicate::str::contains("core.echo"));
}

#[tokio::test]
async fn test_workflow_show_json_output() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_get(&fixture.mock_server, "mypack.my_workflow").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("--json")
        .arg("workflow")
        .arg("show")
        .arg("mypack.my_workflow");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("\"mypack.my_workflow\""))
        .stdout(predicate::str::contains("\"My Workflow\""))
        .stdout(predicate::str::contains("\"definition\""));
}

#[tokio::test]
async fn test_workflow_show_not_found() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_not_found(&fixture.mock_server, "/api/v1/workflows/nonexistent.wf").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("show")
        .arg("nonexistent.wf");

    cmd.assert().failure();
}

// ── Delete tests ────────────────────────────────────────────────────────

#[tokio::test]
async fn test_workflow_delete_with_yes_flag() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_delete(&fixture.mock_server, "mypack.my_workflow").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("delete")
        .arg("mypack.my_workflow")
        .arg("--yes");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("deleted successfully"));
}

#[tokio::test]
async fn test_workflow_delete_json_output() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    mock_workflow_delete(&fixture.mock_server, "mypack.my_workflow").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("--json")
        .arg("workflow")
        .arg("delete")
        .arg("mypack.my_workflow")
        .arg("--yes");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("\"message\""))
        .stdout(predicate::str::contains("deleted"));
}

// ── Upload tests ────────────────────────────────────────────────────────
#[tokio::test]
async fn test_workflow_upload_success() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    let wf_fixture =
        WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");

    mock_workflow_save(&fixture.mock_server, "mypack").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("upload")
        .arg(&wf_fixture.action_yaml_path);

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("uploaded successfully"))
        .stdout(predicate::str::contains("mypack.deploy"));
}

#[tokio::test]
async fn test_workflow_upload_json_output() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    let wf_fixture =
        WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");

    mock_workflow_save(&fixture.mock_server, "mypack").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("--json")
        .arg("workflow")
        .arg("upload")
        .arg(&wf_fixture.action_yaml_path);

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("\"mypack.deploy\""))
        .stdout(predicate::str::contains("\"Deploy App\""));
}

#[tokio::test]
async fn test_workflow_upload_conflict_without_force() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    let wf_fixture =
        WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");

    mock_workflow_save_conflict(&fixture.mock_server, "mypack").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("upload")
        .arg(&wf_fixture.action_yaml_path);

    cmd.assert()
        .failure()
        .stderr(predicate::str::contains("already exists"))
        .stderr(predicate::str::contains("--force"));
}

#[tokio::test]
async fn test_workflow_upload_conflict_with_force() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    let wf_fixture =
        WorkflowFixture::new("mypack.deploy", "workflows/deploy.workflow.yaml");

    mock_workflow_save_conflict(&fixture.mock_server, "mypack").await;
    mock_workflow_update(&fixture.mock_server, "mypack.deploy").await;

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("upload")
        .arg(&wf_fixture.action_yaml_path)
        .arg("--force");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("uploaded successfully"));
}

#[tokio::test]
async fn test_workflow_upload_missing_action_file() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("upload")
        .arg("/nonexistent/path/action.yaml");

    cmd.assert()
        .failure()
        .stderr(predicate::str::contains("not found"));
}

#[tokio::test]
async fn test_workflow_upload_missing_workflow_file() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    // Create a temp dir with only the action YAML, no workflow file
    let dir = tempfile::TempDir::new().unwrap();
    let actions_dir = dir.path().join("actions");
    fs::create_dir_all(&actions_dir).unwrap();

    let action_yaml = r#"ref: mypack.deploy
label: "Deploy App"
workflow_file: workflows/deploy.workflow.yaml
"#;
    let action_path = actions_dir.join("deploy.yaml");
    fs::write(&action_path, action_yaml).unwrap();

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("upload")
        .arg(action_path.to_string_lossy().as_ref());

    cmd.assert()
        .failure()
        .stderr(predicate::str::contains("Workflow file not found"));
}

#[tokio::test]
async fn test_workflow_upload_action_without_workflow_file_field() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    // Create a temp dir with a regular (non-workflow) action YAML
    let dir = tempfile::TempDir::new().unwrap();
    let actions_dir = dir.path().join("actions");
    fs::create_dir_all(&actions_dir).unwrap();

    let action_yaml = r#"ref: mypack.echo
label: "Echo"
description: "A regular action, not a workflow"
runner_type: shell
entry_point: echo.sh
"#;
    let action_path = actions_dir.join("echo.yaml");
    fs::write(&action_path, action_yaml).unwrap();

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("upload")
        .arg(action_path.to_string_lossy().as_ref());

    cmd.assert()
        .failure()
        .stderr(predicate::str::contains("workflow_file"));
}

#[tokio::test]
async fn test_workflow_upload_invalid_action_yaml() {
    let fixture = TestFixture::new().await;
    fixture.write_authenticated_config("valid_token", "refresh_token");

    let dir = tempfile::TempDir::new().unwrap();
    let bad_yaml_path = dir.path().join("bad.yaml");
    fs::write(&bad_yaml_path, "this is not valid yaml: [[[").unwrap();

    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.env("XDG_CONFIG_HOME", fixture.config_dir_path())
        .env("HOME", fixture.config_dir_path())
        .arg("--api-url")
        .arg(fixture.server_url())
        .arg("workflow")
        .arg("upload")
        .arg(bad_yaml_path.to_string_lossy().as_ref());

    cmd.assert()
        .failure()
        .stderr(predicate::str::contains("Failed to parse action YAML"));
}

// ── Help text tests ─────────────────────────────────────────────────────

#[tokio::test]
async fn test_workflow_help() {
    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.arg("workflow").arg("--help");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("upload"))
        .stdout(predicate::str::contains("list"))
        .stdout(predicate::str::contains("show"))
        .stdout(predicate::str::contains("delete"));
}

#[tokio::test]
async fn test_workflow_upload_help() {
    let mut cmd = Command::cargo_bin("attune").unwrap();
    cmd.arg("workflow").arg("upload").arg("--help");

    cmd.assert()
        .success()
        .stdout(predicate::str::contains("action"))
        .stdout(predicate::str::contains("workflow_file"))
        .stdout(predicate::str::contains("--force"));
}
@@ -1052,6 +1052,14 @@ pub mod execution {
         /// Task name within the workflow
         pub task_name: String,
 
+        /// Name of the predecessor task whose completion triggered this task's
+        /// dispatch. `None` for entry-point tasks (dispatched at workflow
+        /// start). Used by the timeline UI to draw only the transitions that
+        /// actually fired rather than every possible transition from the
+        /// workflow definition.
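+        ///
+        /// An illustrative serialized shape (hypothetical values; the other
+        /// fields of this record are elided here):
+        ///
+        /// ```json
+        /// {"task_name": "deploy", "triggered_by": "prepare"}
+        /// ```
+        ///
+        /// For an entry-point task the value is `None` and, per the serde
+        /// attribute on this field, the key is omitted from the JSON entirely.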
+        #[serde(default, skip_serializing_if = "Option::is_none")]
+        pub triggered_by: Option<String>,
+
         /// Index for with-items iteration (0-based)
         pub task_index: Option<i32>,
 
@@ -7,19 +7,33 @@
 //! Components are loaded in dependency order:
 //! 1. Runtimes (no dependencies)
 //! 2. Triggers (no dependencies)
-//! 3. Actions (depend on runtime)
+//! 3. Actions (depend on runtime; workflow actions also create workflow_definition records)
 //! 4. Sensors (depend on triggers and runtime)
 //!
 //! All loaders use **upsert** semantics: if an entity with the same ref already
 //! exists it is updated in place (preserving its database ID); otherwise a new
 //! row is created. After loading, entities that belong to the pack but whose
 //! refs are no longer present in the YAML files are deleted.
+//!
+//! ## Workflow Actions
+//!
+//! An action YAML may include a `workflow_file` field pointing to a workflow
+//! definition file relative to the `actions/` directory (e.g.,
+//! `workflow_file: workflows/deploy.workflow.yaml`). When present the loader:
+//!
+//! 1. Reads and parses the referenced workflow YAML file.
+//! 2. Creates or updates a `workflow_definition` record in the database.
+//! 3. Creates the action record with `workflow_def` linked to the definition.
+//!
+//! This allows the action YAML to control action-level metadata (ref, label,
+//! parameters, policies) independently of the workflow graph. Multiple actions
+//! can reference the same workflow file with different configurations.
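+//!
+//! A minimal sketch of such an action YAML (illustrative names, mirroring
+//! the fixture used by the CLI upload tests):
+//!
+//! ```yaml
+//! # actions/deploy.yaml
+//! ref: mypack.deploy
+//! label: "Deploy App"
+//! enabled: true
+//! workflow_file: workflows/deploy.workflow.yaml
+//! tags:
+//!   - deploy
+//! ```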
 
 use std::collections::HashMap;
 use std::path::Path;
 
 use sqlx::PgPool;
-use tracing::{info, warn};
+use tracing::{debug, info, warn};
 
 use crate::error::{Error, Result};
 use crate::models::Id;
@@ -32,8 +46,12 @@ use crate::repositories::trigger::{
     CreateSensorInput, CreateTriggerInput, SensorRepository, TriggerRepository, UpdateSensorInput,
     UpdateTriggerInput,
 };
+use crate::repositories::workflow::{
+    CreateWorkflowDefinitionInput, UpdateWorkflowDefinitionInput, WorkflowDefinitionRepository,
+};
 use crate::repositories::{Create, Delete, FindById, FindByRef, Update};
 use crate::version_matching::extract_version_components;
+use crate::workflow::parser::parse_workflow_yaml;
 
 /// Result of loading pack components into the database.
 #[derive(Debug, Default)]
@@ -588,6 +606,13 @@ impl<'a> PackComponentLoader<'a> {
     /// Load action definitions from `pack_dir/actions/*.yaml`.
     ///
    /// Returns the list of loaded action refs for cleanup.
+    ///
+    /// When an action YAML contains a `workflow_file` field, the loader reads
+    /// the referenced workflow definition, creates/updates a
+    /// `workflow_definition` record, and links the action to it via the
+    /// `action.workflow_def` FK. This enables the action YAML to control
+    /// action-level metadata independently of the workflow graph, and allows
+    /// multiple actions to share the same workflow file.
     async fn load_actions(
         &self,
         pack_dir: &Path,
@@ -636,19 +661,64 @@ impl<'a> PackComponentLoader<'a> {
                 .unwrap_or("")
                 .to_string();
 
-            let entrypoint = data
-                .get("entry_point")
+            // ── Workflow file handling ──────────────────────────────────
+            // If the action declares `workflow_file`, load the referenced
+            // workflow definition and link the action to it.
+            let workflow_file_field = data
+                .get("workflow_file")
                 .and_then(|v| v.as_str())
-                .unwrap_or("")
-                .to_string();
+                .map(|s| s.to_string());
 
-            // Resolve runtime ID from runner_type
-            let runner_type = data
-                .get("runner_type")
-                .and_then(|v| v.as_str())
-                .unwrap_or("shell");
+            let workflow_def_id: Option<Id> = if let Some(ref wf_path) = workflow_file_field {
+                match self
+                    .load_workflow_for_action(
+                        &actions_dir,
+                        wf_path,
+                        &action_ref,
+                        &label,
+                        &description,
+                        &data,
+                    )
+                    .await
+                {
+                    Ok(id) => Some(id),
+                    Err(e) => {
+                        let msg = format!(
+                            "Failed to load workflow file '{}' for action '{}': {}",
+                            wf_path, action_ref, e
+                        );
+                        warn!("{}", msg);
+                        result.warnings.push(msg);
+                        // Continue creating the action without workflow link
+                        None
+                    }
+                }
+            } else {
+                None
+            };
 
-            let runtime_id = self.resolve_runtime_id(runner_type).await?;
+            // For workflow actions the entrypoint is the workflow file path;
+            // for regular actions it comes from entry_point in the YAML.
+            let entrypoint = if let Some(ref wf_path) = workflow_file_field {
+                wf_path.clone()
+            } else {
+                data.get("entry_point")
+                    .and_then(|v| v.as_str())
+                    .unwrap_or("")
+                    .to_string()
+            };
+
+            // Resolve runtime ID from runner_type (workflow actions have no
+            // runner_type and get runtime = None).
+            let runtime_id = if workflow_file_field.is_some() {
+                None
+            } else {
+                let runner_type = data
+                    .get("runner_type")
+                    .and_then(|v| v.as_str())
+                    .unwrap_or("shell");
+                self.resolve_runtime_id(runner_type).await?
+            };
 
             let param_schema = data
                 .get("parameters")
@@ -701,6 +771,19 @@ impl<'a> PackComponentLoader<'a> {
                 Ok(_) => {
                     info!("Updated action '{}' (ID: {})", action_ref, existing.id);
                     result.actions_updated += 1;
+
+                    // Re-link workflow definition if present
+                    if let Some(wf_id) = workflow_def_id {
+                        if let Err(e) =
+                            ActionRepository::link_workflow_def(self.pool, existing.id, wf_id)
+                                .await
+                        {
+                            warn!(
+                                "Failed to link workflow def {} to action '{}': {}",
+                                wf_id, action_ref, e
+                            );
+                        }
+                    }
                 }
                 Err(e) => {
                     let msg = format!("Failed to update action '{}': {}", action_ref, e);
@@ -745,8 +828,25 @@ impl<'a> PackComponentLoader<'a> {
             match create_result {
                 Ok(id) => {
                     info!("Created action '{}' (ID: {})", action_ref, id);
-                    loaded_refs.push(action_ref);
+                    loaded_refs.push(action_ref.clone());
                     result.actions_loaded += 1;
+
+                    // Link workflow definition if present
+                    if let Some(wf_id) = workflow_def_id {
+                        if let Err(e) =
+                            ActionRepository::link_workflow_def(self.pool, id, wf_id).await
+                        {
+                            warn!(
+                                "Failed to link workflow def {} to new action '{}': {}",
+                                wf_id, action_ref, e
+                            );
+                        } else {
+                            info!(
+                                "Linked action '{}' (ID: {}) to workflow definition (ID: {})",
+                                action_ref, id, wf_id
+                            );
+                        }
+                    }
                 }
                 Err(e) => {
                     // Check for unique constraint violation (already exists race condition)
@@ -771,6 +871,146 @@ impl<'a> PackComponentLoader<'a> {
        Ok(loaded_refs)
    }

    /// Load a workflow definition file referenced by an action's `workflow_file`
    /// field and create/update the corresponding `workflow_definition` record.
    ///
    /// Returns the database ID of the workflow definition.
    async fn load_workflow_for_action(
        &self,
        actions_dir: &Path,
        workflow_file_path: &str,
        action_ref: &str,
        action_label: &str,
        action_description: &str,
        action_data: &serde_yaml_ng::Value,
    ) -> Result<Id> {
        let full_path = actions_dir.join(workflow_file_path);
        if !full_path.exists() {
            return Err(Error::validation(format!(
                "Workflow file '{}' not found at '{}'",
                workflow_file_path,
                full_path.display()
            )));
        }

        let content = std::fs::read_to_string(&full_path).map_err(|e| {
            Error::io(format!(
                "Failed to read workflow file '{}': {}",
                full_path.display(),
                e
            ))
        })?;

        let mut workflow_yaml = parse_workflow_yaml(&content)?;

        // The action YAML is authoritative for action-level metadata.
        // Fill in ref/label/description/tags from the action when the
        // workflow file omits them (action-linked workflow files should
        // contain only the execution graph).
        if workflow_yaml.r#ref.is_empty() {
            workflow_yaml.r#ref = action_ref.to_string();
        }
        if workflow_yaml.label.is_empty() {
            workflow_yaml.label = action_label.to_string();
        }
        if workflow_yaml.description.is_none() {
            workflow_yaml.description = Some(action_description.to_string());
        }
        if workflow_yaml.tags.is_empty() {
            if let Some(tags_val) = action_data.get("tags") {
                if let Some(tags_seq) = tags_val.as_sequence() {
                    workflow_yaml.tags = tags_seq
                        .iter()
                        .filter_map(|v| v.as_str().map(|s| s.to_string()))
                        .collect();
                }
            }
        }

        let workflow_ref = workflow_yaml.r#ref.clone();

        // The action YAML is authoritative for param_schema / out_schema.
        // Fall back to the workflow file's own schemas only if the action
        // YAML doesn't define them.
        let param_schema = action_data
            .get("parameters")
            .and_then(|v| serde_json::to_value(v).ok())
            .or_else(|| workflow_yaml.parameters.clone());

        let out_schema = action_data
            .get("output")
            .and_then(|v| serde_json::to_value(v).ok())
            .or_else(|| workflow_yaml.output.clone());

        let definition_json = serde_json::to_value(&workflow_yaml)
            .map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;

        // Derive label/description for the DB record from the action YAML,
        // since it is authoritative. The workflow file values were already
        // used as fallback above when populating workflow_yaml.
        let label = workflow_yaml.label.clone();
        let description = workflow_yaml.description.clone();
        let tags = workflow_yaml.tags.clone();

        // Check if this workflow definition already exists
        if let Some(existing) =
            WorkflowDefinitionRepository::find_by_ref(self.pool, &workflow_ref).await?
        {
            debug!(
                "Updating existing workflow definition '{}' (ID: {})",
                workflow_ref, existing.id
            );

            let update_input = UpdateWorkflowDefinitionInput {
                label: Some(label),
                description,
                version: Some(workflow_yaml.version.clone()),
                param_schema,
                out_schema,
                definition: Some(definition_json),
                tags: Some(tags),
                enabled: Some(true),
            };

            WorkflowDefinitionRepository::update(self.pool, existing.id, update_input).await?;

            info!(
                "Updated workflow definition '{}' (ID: {}) for action '{}'",
                workflow_ref, existing.id, action_ref
            );

            Ok(existing.id)
        } else {
            debug!(
                "Creating new workflow definition '{}' for action '{}'",
                workflow_ref, action_ref
            );

            let create_input = CreateWorkflowDefinitionInput {
                r#ref: workflow_ref.clone(),
                pack: self.pack_id,
                pack_ref: self.pack_ref.clone(),
                label,
                description,
                version: workflow_yaml.version.clone(),
                param_schema,
                out_schema,
                definition: definition_json,
                tags,
                enabled: true,
            };

            let created = WorkflowDefinitionRepository::create(self.pool, create_input).await?;

            info!(
                "Created workflow definition '{}' (ID: {}) for action '{}'",
                workflow_ref, created.id, action_ref
            );

            Ok(created.id)
        }
    }

    /// Load sensor definitions from `pack_dir/sensors/*.yaml`.
    ///
    /// Returns the list of loaded sensor refs for cleanup.
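The metadata-fallback rule in `load_workflow_for_action` above can be shown in isolation. This is a minimal std-only sketch with simplified types; `Meta` and `merge` are hypothetical names, not part of the codebase. Fields left empty by the workflow file are filled from the action YAML, while non-empty workflow-file values are kept:

```rust
// Hypothetical, simplified stand-in for the workflow metadata fields.
struct Meta {
    r#ref: String,
    label: String,
    description: Option<String>,
}

// Mirrors the fill-when-omitted logic: each field is replaced by the
// action-YAML value only when the workflow file left it empty/None.
fn merge(mut wf: Meta, action_ref: &str, action_label: &str, action_desc: &str) -> Meta {
    if wf.r#ref.is_empty() {
        wf.r#ref = action_ref.to_string();
    }
    if wf.label.is_empty() {
        wf.label = action_label.to_string();
    }
    if wf.description.is_none() {
        wf.description = Some(action_desc.to_string());
    }
    wf
}

fn main() {
    let wf = Meta {
        r#ref: String::new(),          // omitted in the workflow file
        label: "Custom".to_string(),   // explicitly set in the workflow file
        description: None,
    };
    let merged = merge(wf, "pack.deploy", "Deploy", "Deploys the app");
    assert_eq!(merged.r#ref, "pack.deploy");   // filled from the action
    assert_eq!(merged.label, "Custom");        // workflow-file value kept
    assert_eq!(merged.description.as_deref(), Some("Deploys the app"));
}
```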
@@ -115,12 +115,17 @@ pub fn validate_workflow_expressions(
                match directive {
                    PublishDirective::Simple(map) => {
                        for (pk, pv) in map {
-                            validate_template(
-                                pv,
-                                &format!("{task_loc} next[{ti}].publish.{pk}"),
-                                &known_names,
-                                &mut warnings,
-                            );
+                            // Only validate string values as templates;
+                            // non-string literals (booleans, numbers, etc.)
+                            // pass through unchanged and have no expressions.
+                            if let Some(s) = pv.as_str() {
+                                validate_template(
+                                    s,
+                                    &format!("{task_loc} next[{ti}].publish.{pk}"),
+                                    &known_names,
+                                    &mut warnings,
+                                );
+                            }
                        }
                    }
                    PublishDirective::Key(_) => { /* nothing to validate */ }
@@ -132,12 +137,16 @@ pub fn validate_workflow_expressions(
        for directive in &task.publish {
            if let PublishDirective::Simple(map) = directive {
                for (pk, pv) in map {
-                    validate_template(
-                        pv,
-                        &format!("{task_loc} publish.{pk}"),
-                        &known_names,
-                        &mut warnings,
-                    );
+                    // Only validate string values as templates;
+                    // non-string literals pass through unchanged.
+                    if let Some(s) = pv.as_str() {
+                        validate_template(
+                            s,
+                            &format!("{task_loc} publish.{pk}"),
+                            &known_names,
+                            &mut warnings,
+                        );
+                    }
                }
            }
        }
@@ -567,7 +576,7 @@ mod tests {
    fn test_transition_publish_validated() {
        let mut task = action_task("step1");
        let mut publish_map = HashMap::new();
-        publish_map.insert("out".to_string(), "{{ unknown_thing }}".to_string());
+        publish_map.insert("out".to_string(), serde_json::Value::String("{{ unknown_thing }}".to_string()));
        task.next = vec![super::super::parser::TaskTransition {
            when: Some("{{ succeeded() }}".to_string()),
            publish: vec![PublishDirective::Simple(publish_map)],
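The string-only template check in the hunks above can be sketched standalone. The local `Val` enum below stands in for `serde_json::Value`, and `needs_template_validation` is a hypothetical helper, but the rule is the same: only string publish values are candidates for template validation; booleans, numbers, and nulls carry no expressions and pass through untouched.

```rust
// Minimal stand-in for serde_json::Value, covering the cases that matter here.
enum Val {
    Str(String),
    Bool(bool),
    Num(f64),
    Null,
}

impl Val {
    // Mirrors serde_json::Value::as_str(): Some(..) only for strings.
    fn as_str(&self) -> Option<&str> {
        match self {
            Val::Str(s) => Some(s),
            _ => None,
        }
    }
}

// A value needs template validation only when it is a string that
// actually contains a template expression.
fn needs_template_validation(v: &Val) -> bool {
    v.as_str().map_or(false, |s| s.contains("{{"))
}

fn main() {
    assert!(needs_template_validation(&Val::Str("{{ result().data }}".into())));
    assert!(!needs_template_validation(&Val::Str("hello".into()))); // plain literal
    assert!(!needs_template_validation(&Val::Bool(true)));          // non-string: skipped
    assert!(!needs_template_validation(&Val::Num(3.14)));
    assert!(!needs_template_validation(&Val::Null));
}
```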
@@ -109,32 +109,49 @@ impl WorkflowLoader {
    }

    /// Load all workflows from a specific pack
+    ///
+    /// Scans two directories in order:
+    /// 1. `{pack_dir}/workflows/` — legacy/standalone workflow files
+    /// 2. `{pack_dir}/actions/workflows/` — visual-builder and action-linked workflow files
+    ///
+    /// If the same workflow ref appears in both directories, the version from
+    /// `actions/workflows/` wins (it is scanned second and overwrites the map entry).
    pub async fn load_pack_workflows(
        &self,
        pack_name: &str,
        pack_dir: &Path,
    ) -> Result<HashMap<String, LoadedWorkflow>> {
-        let workflows_dir = pack_dir.join("workflows");
-
-        if !workflows_dir.exists() {
-            debug!("No workflows directory in pack '{}'", pack_name);
-            return Ok(HashMap::new());
-        }
-
-        let workflow_files = self.scan_workflow_files(&workflows_dir, pack_name).await?;
        let mut workflows = HashMap::new();

-        for file in workflow_files {
-            match self.load_workflow_file(&file).await {
-                Ok(loaded) => {
-                    workflows.insert(loaded.file.ref_name.clone(), loaded);
-                }
-                Err(e) => {
-                    warn!("Failed to load workflow '{}': {}", file.path.display(), e);
+        // Scan both workflow directories
+        let scan_dirs: Vec<std::path::PathBuf> = vec![
+            pack_dir.join("workflows"),
+            pack_dir.join("actions").join("workflows"),
+        ];
+
+        for workflows_dir in &scan_dirs {
+            if !workflows_dir.exists() {
+                continue;
+            }
+
+            let workflow_files = self.scan_workflow_files(workflows_dir, pack_name).await?;
+
+            for file in workflow_files {
+                match self.load_workflow_file(&file).await {
+                    Ok(loaded) => {
+                        workflows.insert(loaded.file.ref_name.clone(), loaded);
+                    }
+                    Err(e) => {
+                        warn!("Failed to load workflow '{}': {}", file.path.display(), e);
+                    }
                }
            }
        }

        if workflows.is_empty() {
            debug!("No workflows found in pack '{}'", pack_name);
        }

        Ok(workflows)
    }
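The collision rule documented on `load_pack_workflows` falls out of plain `HashMap` insert semantics: a later insert with the same key overwrites the earlier one. A minimal sketch (the `resolve` helper and its inputs are hypothetical) of why the directory scanned second wins:

```rust
use std::collections::HashMap;

// Each tuple is (workflow ref, directory it was found in), in scan order.
// Later inserts with the same ref overwrite earlier ones, so the second
// directory in the scan order takes precedence on a ref collision.
fn resolve(scans: &[(&str, &str)]) -> HashMap<String, String> {
    let mut workflows = HashMap::new();
    for (ref_name, source) in scans {
        workflows.insert(ref_name.to_string(), source.to_string());
    }
    workflows
}

fn main() {
    let resolved = resolve(&[
        ("dual_pack.alpha", "workflows/"),         // legacy dir, scanned first
        ("dual_pack.beta", "workflows/"),
        ("dual_pack.alpha", "actions/workflows/"), // scanned second, overwrites
    ]);
    assert_eq!(resolved.len(), 2);
    assert_eq!(resolved["dual_pack.alpha"], "actions/workflows/");
    assert_eq!(resolved["dual_pack.beta"], "workflows/");
}
```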
@@ -185,6 +202,10 @@ impl WorkflowLoader {
    }

    /// Reload a specific workflow by reference
+    ///
+    /// Searches for the workflow file in both `workflows/` and
+    /// `actions/workflows/` directories, trying `.yaml`, `.yml`, and
+    /// `.workflow.yaml` extensions.
    pub async fn reload_workflow(&self, ref_name: &str) -> Result<LoadedWorkflow> {
        let parts: Vec<&str> = ref_name.split('.').collect();
        if parts.len() != 2 {
@@ -198,36 +219,35 @@ impl WorkflowLoader {
        let workflow_name = parts[1];

        let pack_dir = self.config.packs_base_dir.join(pack_name);
-        let workflow_path = pack_dir
-            .join("workflows")
-            .join(format!("{}.yaml", workflow_name));
-
-        if !workflow_path.exists() {
-            // Try .yml extension
-            let workflow_path_yml = pack_dir
-                .join("workflows")
-                .join(format!("{}.yml", workflow_name));
-            if workflow_path_yml.exists() {
-                let file = WorkflowFile {
-                    path: workflow_path_yml,
-                    pack: pack_name.to_string(),
-                    name: workflow_name.to_string(),
-                    ref_name: ref_name.to_string(),
-                };
-                return self.load_workflow_file(&file).await;
+        // Candidate directories and filename patterns to search
+        let dirs = [
+            pack_dir.join("actions").join("workflows"),
+            pack_dir.join("workflows"),
+        ];
+        let extensions = [
+            format!("{}.workflow.yaml", workflow_name),
+            format!("{}.yaml", workflow_name),
+            format!("{}.workflow.yml", workflow_name),
+            format!("{}.yml", workflow_name),
+        ];
+
+        for dir in &dirs {
+            for filename in &extensions {
+                let candidate = dir.join(filename);
+                if candidate.exists() {
+                    let file = WorkflowFile {
+                        path: candidate,
+                        pack: pack_name.to_string(),
+                        name: workflow_name.to_string(),
+                        ref_name: ref_name.to_string(),
+                    };
+                    return self.load_workflow_file(&file).await;
+                }
            }
-
-            return Err(Error::not_found("workflow", "ref", ref_name));
        }

-        let file = WorkflowFile {
-            path: workflow_path,
-            pack: pack_name.to_string(),
-            name: workflow_name.to_string(),
-            ref_name: ref_name.to_string(),
-        };
-
-        self.load_workflow_file(&file).await
+        Err(Error::not_found("workflow", "ref", ref_name))
    }

    /// Scan pack directories
@@ -259,6 +279,11 @@ impl WorkflowLoader {
    }

    /// Scan workflow files in a directory
+    ///
+    /// Handles both `{name}.yaml` and `{name}.workflow.yaml` naming
+    /// conventions. For files with a `.workflow.yaml` suffix (produced by
+    /// the visual workflow builder), the `.workflow` portion is stripped
+    /// when deriving the workflow name and ref.
    async fn scan_workflow_files(
        &self,
        workflows_dir: &Path,
@@ -278,7 +303,14 @@ impl WorkflowLoader {
        if path.is_file() {
            if let Some(ext) = path.extension() {
                if ext == "yaml" || ext == "yml" {
-                    if let Some(name) = path.file_stem().and_then(|n| n.to_str()) {
+                    if let Some(raw_stem) = path.file_stem().and_then(|n| n.to_str()) {
+                        // Strip `.workflow` suffix if present:
+                        // "deploy.workflow.yaml" -> stem "deploy.workflow" -> name "deploy"
+                        // "deploy.yaml" -> stem "deploy" -> name "deploy"
+                        let name = raw_stem
+                            .strip_suffix(".workflow")
+                            .unwrap_or(raw_stem);
+
                        let ref_name = format!("{}.{}", pack_name, name);
                        workflow_files.push(WorkflowFile {
                            path: path.clone(),
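The stem normalization in `scan_workflow_files` relies on `str::strip_suffix`, which returns `None` when the suffix is absent, so plain stems pass through unchanged. A minimal standalone sketch (`workflow_name` is a hypothetical helper) of the naming rule:

```rust
// Given a file stem (the filename minus its final extension), strip a
// trailing ".workflow" marker if present; otherwise keep the stem as-is.
fn workflow_name(raw_stem: &str) -> &str {
    raw_stem.strip_suffix(".workflow").unwrap_or(raw_stem)
}

fn main() {
    // "deploy.workflow.yaml" -> stem "deploy.workflow" -> name "deploy"
    assert_eq!(workflow_name("deploy.workflow"), "deploy");
    // "deploy.yaml" -> stem "deploy" -> name "deploy"
    assert_eq!(workflow_name("deploy"), "deploy");
    // The derived ref uses the normalized name, never the raw stem.
    let ref_name = format!("{}.{}", "my_pack", workflow_name("deploy.workflow"));
    assert_eq!(ref_name, "my_pack.deploy");
}
```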
@@ -475,4 +507,161 @@ tasks:
            .to_string()
            .contains("exceeds maximum size"));
    }

    /// Verify that `scan_workflow_files` strips the `.workflow` suffix from
    /// filenames like `deploy.workflow.yaml`, yielding name `deploy` and
    /// ref `pack.deploy` instead of `pack.deploy.workflow`.
    #[tokio::test]
    async fn test_scan_workflow_files_strips_workflow_suffix() {
        let temp_dir = TempDir::new().unwrap();
        let packs_dir = temp_dir.path().to_path_buf();
        let pack_dir = packs_dir.join("my_pack");
        let workflows_dir = pack_dir.join("actions").join("workflows");
        fs::create_dir_all(&workflows_dir).await.unwrap();

        let workflow_yaml = r#"
ref: my_pack.deploy
label: Deploy
version: "1.0.0"
tasks:
  - name: step1
    action: core.noop
"#;
        fs::write(workflows_dir.join("deploy.workflow.yaml"), workflow_yaml)
            .await
            .unwrap();

        let config = LoaderConfig {
            packs_base_dir: packs_dir,
            skip_validation: true,
            max_file_size: 1024 * 1024,
        };

        let loader = WorkflowLoader::new(config);
        let files = loader
            .scan_workflow_files(&workflows_dir, "my_pack")
            .await
            .unwrap();

        assert_eq!(files.len(), 1);
        assert_eq!(files[0].name, "deploy");
        assert_eq!(files[0].ref_name, "my_pack.deploy");
    }

    /// Verify that `load_pack_workflows` discovers workflow files in both
    /// `workflows/` (legacy) and `actions/workflows/` (visual builder)
    /// directories, and that `actions/workflows/` wins on ref collision.
    #[tokio::test]
    async fn test_load_pack_workflows_scans_both_directories() {
        let temp_dir = TempDir::new().unwrap();
        let packs_dir = temp_dir.path().to_path_buf();
        let pack_dir = packs_dir.join("dual_pack");

        // Legacy directory: workflows/
        let legacy_dir = pack_dir.join("workflows");
        fs::create_dir_all(&legacy_dir).await.unwrap();

        let legacy_yaml = r#"
ref: dual_pack.alpha
label: Alpha (legacy)
version: "1.0.0"
tasks:
  - name: t1
    action: core.noop
"#;
        fs::write(legacy_dir.join("alpha.yaml"), legacy_yaml)
            .await
            .unwrap();

        // Also put a workflow that only exists in the legacy dir
        let beta_yaml = r#"
ref: dual_pack.beta
label: Beta
version: "1.0.0"
tasks:
  - name: t1
    action: core.noop
"#;
        fs::write(legacy_dir.join("beta.yaml"), beta_yaml)
            .await
            .unwrap();

        // Visual builder directory: actions/workflows/
        let builder_dir = pack_dir.join("actions").join("workflows");
        fs::create_dir_all(&builder_dir).await.unwrap();

        let builder_yaml = r#"
ref: dual_pack.alpha
label: Alpha (builder)
version: "2.0.0"
tasks:
  - name: t1
    action: core.noop
"#;
        fs::write(builder_dir.join("alpha.workflow.yaml"), builder_yaml)
            .await
            .unwrap();

        let config = LoaderConfig {
            packs_base_dir: packs_dir,
            skip_validation: true,
            max_file_size: 1024 * 1024,
        };

        let loader = WorkflowLoader::new(config);
        let workflows = loader
            .load_pack_workflows("dual_pack", &pack_dir)
            .await
            .unwrap();

        // Both alpha and beta should be present
        assert_eq!(workflows.len(), 2);
        assert!(workflows.contains_key("dual_pack.alpha"));
        assert!(workflows.contains_key("dual_pack.beta"));

        // Alpha should come from actions/workflows/ (scanned second, overwrites)
        let alpha = &workflows["dual_pack.alpha"];
        assert_eq!(alpha.workflow.label, "Alpha (builder)");
        assert_eq!(alpha.workflow.version, "2.0.0");

        // Beta only exists in legacy dir
        let beta = &workflows["dual_pack.beta"];
        assert_eq!(beta.workflow.label, "Beta");
    }

    /// Verify that `reload_workflow` finds files in `actions/workflows/`
    /// with the `.workflow.yaml` extension.
    #[tokio::test]
    async fn test_reload_workflow_finds_actions_workflows_dir() {
        let temp_dir = TempDir::new().unwrap();
        let packs_dir = temp_dir.path().to_path_buf();
        let pack_dir = packs_dir.join("rp");
        let builder_dir = pack_dir.join("actions").join("workflows");
        fs::create_dir_all(&builder_dir).await.unwrap();

        let yaml = r#"
ref: rp.deploy
label: Deploy
version: "1.0.0"
tasks:
  - name: step1
    action: core.noop
"#;
        fs::write(builder_dir.join("deploy.workflow.yaml"), yaml)
            .await
            .unwrap();

        let config = LoaderConfig {
            packs_base_dir: packs_dir,
            skip_validation: true,
            max_file_size: 1024 * 1024,
        };

        let loader = WorkflowLoader::new(config);
        let loaded = loader.reload_workflow("rp.deploy").await.unwrap();

        assert_eq!(loaded.workflow.r#ref, "rp.deploy");
        assert_eq!(loaded.file.name, "deploy");
        assert_eq!(loaded.file.ref_name, "rp.deploy");
    }
}
@@ -78,14 +78,26 @@ impl From<ParseError> for crate::error::Error {
}

/// Complete workflow definition parsed from YAML
+///
+/// When loaded via an action's `workflow_file` field, the `ref` and `label`
+/// fields are optional — the action YAML is authoritative for those values.
+/// For standalone workflow files (in `workflows/`), they should be present.
#[derive(Debug, Clone, Serialize, Deserialize, Validate)]
pub struct WorkflowDefinition {
-    /// Unique reference (e.g., "my_pack.deploy_app")
-    #[validate(length(min = 1, max = 255))]
+    /// Unique reference (e.g., "my_pack.deploy_app").
+    ///
+    /// Optional for action-linked workflow files (supplied by the action YAML).
+    /// Required for standalone workflow files.
+    #[serde(default)]
+    #[validate(length(max = 255))]
    pub r#ref: String,

-    /// Human-readable label
-    #[validate(length(min = 1, max = 255))]
+    /// Human-readable label.
+    ///
+    /// Optional for action-linked workflow files (supplied by the action YAML).
+    /// Required for standalone workflow files.
+    #[serde(default)]
+    #[validate(length(max = 255))]
    pub label: String,

    /// Optional description
@@ -412,11 +424,19 @@ pub enum TaskType {
}

/// Variable publishing directive
+///
+/// Publish directives map variable names to values. Values may be template
+/// expressions (strings containing `{{ }}`), literal strings, or any other
+/// JSON-compatible type (booleans, numbers, arrays, objects). Non-string
+/// literals are preserved through the rendering pipeline so that, for example,
+/// `validation_passed: true` publishes the boolean `true`, not the string
+/// `"true"`.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PublishDirective {
-    /// Simple key-value pair
-    Simple(HashMap<String, String>),
+    /// Key-value pair where the value can be any JSON-compatible type
+    /// (string template, boolean, number, array, object, null).
+    Simple(HashMap<String, serde_json::Value>),
    /// Just a key (publishes entire result under that key)
    Key(String),
}
@@ -1315,4 +1335,175 @@ tasks:
        assert!(workflow.tasks[0].next[0].chart_meta.is_none());
        assert!(workflow.tasks[0].next[1].chart_meta.is_none());
    }

    // -----------------------------------------------------------------------
    // Action-linked workflow file (no ref/label)
    // -----------------------------------------------------------------------

    #[test]
    fn test_parse_action_linked_workflow_without_ref_and_label() {
        // Action-linked workflow files (in actions/workflows/) omit ref and
        // label — those are supplied by the companion action YAML. The
        // parser must accept such files and default the fields to empty
        // strings.
        let yaml = r#"
version: 1.0.0

vars:
  counter: 0

tasks:
  - name: step1
    action: core.echo
    input:
      message: "hello"
    next:
      - when: "{{ succeeded() }}"
        do:
          - step2
  - name: step2
    action: core.echo
    input:
      message: "world"

output_map:
  result: "{{ task.step2.result }}"
"#;

        let result = parse_workflow_yaml(yaml);
        assert!(result.is_ok(), "Parse failed: {:?}", result.err());
        let workflow = result.unwrap();

        // ref and label default to empty strings
        assert_eq!(workflow.r#ref, "");
        assert_eq!(workflow.label, "");

        // Graph fields are parsed normally
        assert_eq!(workflow.version, "1.0.0");
        assert_eq!(workflow.tasks.len(), 2);
        assert_eq!(workflow.tasks[0].name, "step1");
        assert!(workflow.vars.contains_key("counter"));
        assert!(workflow.output_map.is_some());

        // No parameters or output schema (those come from the action YAML)
        assert!(workflow.parameters.is_none());
        assert!(workflow.output.is_none());
        assert!(workflow.tags.is_empty());
    }

    #[test]
    fn test_parse_standalone_workflow_still_works_with_ref_and_label() {
        // Standalone workflow files (in workflows/) still carry ref and label.
        // Verify they continue to parse correctly.
        let yaml = r#"
ref: mypack.deploy
label: Deploy Workflow
description: Deploys the application
version: 2.0.0

parameters:
  target:
    type: string
    required: true

tags:
  - deploy
  - production

tasks:
  - name: deploy
    action: core.run
    input:
      target: "{{ parameters.target }}"
"#;

        let result = parse_workflow_yaml(yaml);
        assert!(result.is_ok(), "Parse failed: {:?}", result.err());
        let workflow = result.unwrap();

        assert_eq!(workflow.r#ref, "mypack.deploy");
        assert_eq!(workflow.label, "Deploy Workflow");
        assert_eq!(
            workflow.description.as_deref(),
            Some("Deploys the application")
        );
        assert_eq!(workflow.version, "2.0.0");
        assert!(workflow.parameters.is_some());
        assert_eq!(workflow.tags, vec!["deploy", "production"]);
    }

    #[test]
    fn test_typed_publish_values_in_transitions() {
        // Regression test: publish directive values that are booleans, numbers,
        // or null must parse successfully (not just strings). Previously
        // `PublishDirective::Simple(HashMap<String, String>)` rejected them.
        let yaml = r#"
ref: test.typed_publish
label: Typed Publish
version: 1.0.0
tasks:
  - name: validate
    action: core.echo
    next:
      - when: "{{ succeeded() }}"
        publish:
          - validation_passed: true
          - count: 42
          - ratio: 3.14
          - label: "hello"
          - template_val: "{{ result().data }}"
          - nothing: null
        do:
          - finalize
      - when: "{{ failed() }}"
        publish:
          - validation_passed: false
        do:
          - handle_error
  - name: finalize
    action: core.echo
  - name: handle_error
    action: core.echo
"#;

        let result = parse_workflow_yaml(yaml);
        assert!(result.is_ok(), "Parse failed: {:?}", result.err());
        let workflow = result.unwrap();

        let task = &workflow.tasks[0];
        assert_eq!(task.name, "validate");
        assert_eq!(task.next.len(), 2);

        // Success transition: 6 publish directives with mixed types
        let success_transition = &task.next[0];
        assert_eq!(success_transition.publish.len(), 6);

        // Verify each typed value survived parsing
        for directive in &success_transition.publish {
            if let PublishDirective::Simple(map) = directive {
                if let Some(val) = map.get("validation_passed") {
                    assert_eq!(val, &serde_json::Value::Bool(true), "boolean true");
                } else if let Some(val) = map.get("count") {
                    assert_eq!(val, &serde_json::json!(42), "integer");
                } else if let Some(val) = map.get("ratio") {
                    assert_eq!(val, &serde_json::json!(3.14), "float");
                } else if let Some(val) = map.get("label") {
                    assert_eq!(val, &serde_json::json!("hello"), "string");
                } else if let Some(val) = map.get("template_val") {
                    assert_eq!(val, &serde_json::json!("{{ result().data }}"), "template");
                } else if let Some(val) = map.get("nothing") {
                    assert!(val.is_null(), "null");
                }
            }
        }

        // Failure transition: boolean false
        let failure_transition = &task.next[1];
        assert_eq!(failure_transition.publish.len(), 1);
        if let PublishDirective::Simple(map) = &failure_transition.publish[0] {
            assert_eq!(map.get("validation_passed"), Some(&serde_json::Value::Bool(false)));
        } else {
            panic!("Expected Simple publish directive");
        }
    }
}
@@ -4,6 +4,11 @@
|
||||
//! Workflows are stored in the `workflow_definition` table with their full YAML definition
|
||||
//! as JSON. A companion action record is also created so that workflows appear in
|
||||
//! action lists and the workflow builder's action palette.
|
||||
//!
|
||||
//! Standalone workflow files (in `workflows/`) carry their own `ref` and `label`.
|
||||
//! Action-linked workflow files (in `actions/workflows/`, referenced via
|
||||
//! `workflow_file`) may omit those fields — the registrar falls back to
|
||||
//! `WorkflowFile.ref_name` / `WorkflowFile.name` derived from the filename.
|
||||
|
||||
use crate::error::{Error, Result};
|
||||
use crate::repositories::action::{ActionRepository, CreateActionInput, UpdateActionInput};
|
||||
@@ -61,6 +66,32 @@ impl WorkflowRegistrar {
|
||||
Self { pool, options }
|
||||
}
|
||||
|
||||
/// Resolve the effective ref for a workflow.
|
||||
///
|
||||
/// Prefers the value declared in the YAML; falls back to the
|
||||
/// `WorkflowFile.ref_name` derived from the filename when the YAML
|
||||
/// omits it (action-linked workflow files).
|
||||
fn effective_ref(loaded: &LoadedWorkflow) -> String {
|
||||
if loaded.workflow.r#ref.is_empty() {
|
||||
loaded.file.ref_name.clone()
|
||||
} else {
|
||||
loaded.workflow.r#ref.clone()
|
||||
}
|
||||
}
|
||||
|
||||
/// Resolve the effective label for a workflow.
|
||||
///
|
||||
/// Prefers the value declared in the YAML; falls back to the
|
||||
/// `WorkflowFile.name` (human-readable filename stem) when the YAML
|
||||
/// omits it.
|
||||
fn effective_label(loaded: &LoadedWorkflow) -> String {
|
||||
if loaded.workflow.label.is_empty() {
|
||||
loaded.file.name.clone()
|
||||
} else {
|
||||
loaded.workflow.label.clone()
|
||||
}
|
||||
}
|
||||
|
||||
/// Register a single workflow
|
||||
pub async fn register_workflow(&self, loaded: &LoadedWorkflow) -> Result<RegistrationResult> {
|
||||
debug!("Registering workflow: {}", loaded.file.ref_name);
|
||||
@@ -91,6 +122,12 @@ impl WorkflowRegistrar {
|
||||
warnings.push(err.clone());
|
||||
}
|
||||
|
||||
// Resolve effective ref/label — prefer workflow YAML values, fall
|
||||
// back to filename-derived values for action-linked workflow files
|
||||
// that omit action-level metadata.
|
||||
let effective_ref = Self::effective_ref(loaded);
|
||||
let effective_label = Self::effective_label(loaded);
|
||||
|
||||
let (workflow_def_id, created) = if let Some(existing) = existing_workflow {
|
||||
if !self.options.update_existing {
|
||||
return Err(Error::already_exists(
|
||||
@@ -102,7 +139,13 @@ impl WorkflowRegistrar {
|
||||
|
||||
info!("Updating existing workflow: {}", loaded.file.ref_name);
|
||||
let workflow_def_id = self
|
||||
.update_workflow(&existing.id, &loaded.workflow, &pack.r#ref)
|
||||
.update_workflow(
|
||||
&existing.id,
|
||||
&loaded.workflow,
|
||||
&pack.r#ref,
|
||||
&effective_ref,
|
||||
&effective_label,
|
||||
)
|
||||
.await?;
|
||||
|
||||
// Update or create the companion action record
|
||||
@@ -112,6 +155,8 @@ impl WorkflowRegistrar {
|
||||
pack.id,
|
||||
&pack.r#ref,
|
||||
&loaded.file.name,
|
||||
&effective_ref,
|
||||
&effective_label,
|
||||
)
|
||||
.await?;
|
||||
|
||||
@@ -119,7 +164,14 @@ impl WorkflowRegistrar {
|
||||
} else {
|
||||
info!("Creating new workflow: {}", loaded.file.ref_name);
|
||||
let workflow_def_id = self
|
||||
.create_workflow(&loaded.workflow, &loaded.file.pack, pack.id, &pack.r#ref)
|
||||
.create_workflow(
|
||||
&loaded.workflow,
|
||||
&loaded.file.pack,
|
||||
pack.id,
|
||||
&pack.r#ref,
|
||||
&effective_ref,
|
||||
&effective_label,
|
||||
)
|
||||
.await?;
|
||||
|
||||
// Create a companion action record so the workflow appears in action lists
|
||||
@@ -129,6 +181,8 @@ impl WorkflowRegistrar {
|
||||
pack.id,
|
||||
&pack.r#ref,
|
||||
&loaded.file.name,
|
||||
&effective_ref,
|
||||
&effective_label,
|
||||
)
|
||||
.await?;
|
||||
|
||||
@@ -195,6 +249,9 @@ impl WorkflowRegistrar {
|
||||
/// This ensures the workflow appears in action lists and the action palette
|
||||
/// in the workflow builder. The action is linked to the workflow definition
|
||||
/// via the `workflow_def` FK.
|
||||
///
|
||||
/// `effective_ref` and `effective_label` are the resolved values (which may
|
||||
/// have been derived from the filename when the workflow YAML omits them).
|
||||
async fn create_companion_action(
|
||||
&self,
|
||||
workflow_def_id: i64,
|
||||
@@ -202,14 +259,16 @@ impl WorkflowRegistrar {
|
||||
pack_id: i64,
|
||||
pack_ref: &str,
|
||||
workflow_name: &str,
effective_ref: &str,
effective_label: &str,
) -> Result<()> {
let entrypoint = format!("workflows/{}.workflow.yaml", workflow_name);

let action_input = CreateActionInput {
r#ref: workflow.r#ref.clone(),
r#ref: effective_ref.to_string(),
pack: pack_id,
pack_ref: pack_ref.to_string(),
label: workflow.label.clone(),
label: effective_label.to_string(),
description: workflow.description.clone().unwrap_or_default(),
entrypoint,
runtime: None,
@@ -226,7 +285,7 @@ impl WorkflowRegistrar {

info!(
"Created companion action '{}' (ID: {}) for workflow definition (ID: {})",
workflow.r#ref, action.id, workflow_def_id
effective_ref, action.id, workflow_def_id
);

Ok(())
@@ -236,6 +295,9 @@ impl WorkflowRegistrar {
///
/// If the action already exists, update it. If it doesn't exist (e.g., for
/// workflows registered before the companion-action fix), create it.
///
/// `effective_ref` and `effective_label` are the resolved values (which may
/// have been derived from the filename when the workflow YAML omits them).
async fn ensure_companion_action(
&self,
workflow_def_id: i64,
@@ -243,6 +305,8 @@ impl WorkflowRegistrar {
pack_id: i64,
pack_ref: &str,
workflow_name: &str,
effective_ref: &str,
effective_label: &str,
) -> Result<()> {
let existing_action =
ActionRepository::find_by_workflow_def(&self.pool, workflow_def_id).await?;
@@ -250,7 +314,7 @@ impl WorkflowRegistrar {
if let Some(action) = existing_action {
// Update the existing companion action to stay in sync
let update_input = UpdateActionInput {
label: Some(workflow.label.clone()),
label: Some(effective_label.to_string()),
description: workflow.description.clone(),
entrypoint: Some(format!("workflows/{}.workflow.yaml", workflow_name)),
runtime: None,
@@ -276,6 +340,8 @@ impl WorkflowRegistrar {
pack_id,
pack_ref,
workflow_name,
effective_ref,
effective_label,
)
.await?;
}
@@ -284,27 +350,32 @@ impl WorkflowRegistrar {
}

/// Create a new workflow definition
///
/// `effective_ref` and `effective_label` are the resolved values (which may
/// have been derived from the filename when the workflow YAML omits them).
async fn create_workflow(
&self,
workflow: &WorkflowYaml,
_pack_name: &str,
pack_id: i64,
pack_ref: &str,
effective_ref: &str,
effective_label: &str,
) -> Result<i64> {
// Convert the parsed workflow back to JSON for storage
let definition = serde_json::to_value(workflow)
.map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;

let input = CreateWorkflowDefinitionInput {
r#ref: workflow.r#ref.clone(),
r#ref: effective_ref.to_string(),
pack: pack_id,
pack_ref: pack_ref.to_string(),
label: workflow.label.clone(),
label: effective_label.to_string(),
description: workflow.description.clone(),
version: workflow.version.clone(),
param_schema: workflow.parameters.clone(),
out_schema: workflow.output.clone(),
definition: definition,
definition,
tags: workflow.tags.clone(),
enabled: true,
};
@@ -315,18 +386,23 @@ impl WorkflowRegistrar {
}

/// Update an existing workflow definition
///
/// `effective_ref` and `effective_label` are the resolved values (which may
/// have been derived from the filename when the workflow YAML omits them).
async fn update_workflow(
&self,
workflow_id: &i64,
workflow: &WorkflowYaml,
_pack_ref: &str,
_effective_ref: &str,
effective_label: &str,
) -> Result<i64> {
// Convert the parsed workflow back to JSON for storage
let definition = serde_json::to_value(workflow)
.map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;

let input = UpdateWorkflowDefinitionInput {
label: Some(workflow.label.clone()),
label: Some(effective_label.to_string()),
description: workflow.description.clone(),
version: Some(workflow.version.clone()),
param_schema: workflow.parameters.clone(),

@@ -42,27 +42,12 @@ use crate::workflow::graph::TaskGraph;

/// Extract workflow parameters from an execution's `config` field.
///
/// The config may be stored in two formats:
/// 1. Wrapped: `{"parameters": {"n": 5, ...}}` — used by child task executions
/// 2. Flat: `{"n": 5, ...}` — used by the API for manual executions
///
/// This helper checks for a `"parameters"` key first, and if absent treats
/// the entire config object as the parameters (matching the worker's logic
/// in `ActionExecutor::prepare_execution_context`).
/// All executions store config in flat format: `{"n": 5, ...}`.
/// The config object itself IS the parameters map.
fn extract_workflow_params(config: &Option<JsonValue>) -> JsonValue {
match config {
Some(c) => {
// Prefer the wrapped format if present
if let Some(params) = c.get("parameters") {
params.clone()
} else if c.is_object() {
// Flat format — the config itself is the parameters
c.clone()
} else {
serde_json::json!({})
}
}
None => serde_json::json!({}),
Some(c) if c.is_object() => c.clone(),
_ => serde_json::json!({}),
}
}

@@ -100,10 +85,7 @@ fn apply_param_defaults(params: JsonValue, param_schema: &Option<JsonValue>) ->
};
if needs_default {
if let Some(default_val) = prop.get("default") {
debug!(
"Applying default for parameter '{}': {}",
key, default_val
);
debug!("Applying default for parameter '{}': {}", key, default_val);
obj.insert(key.clone(), default_val.clone());
}
}
@@ -234,8 +216,25 @@ impl ExecutionScheduler {
worker.id, execution_id
);

// Apply parameter defaults from the action's param_schema.
// This mirrors what `process_workflow_execution` does for workflows
// so that non-workflow executions also get missing parameters filled
// in from the action's declared defaults.
let execution_config = {
let raw_config = execution.config.clone();
let params = extract_workflow_params(&raw_config);
let params_with_defaults = apply_param_defaults(params, &action.param_schema);
// Config is already flat — just use the defaults-applied version
if params_with_defaults.is_object()
&& !params_with_defaults.as_object().unwrap().is_empty()
{
Some(params_with_defaults)
} else {
raw_config
}
};

// Update execution status to scheduled
let execution_config = execution.config.clone();
let mut execution_for_update = execution;
execution_for_update.status = ExecutionStatus::Scheduled;
ExecutionRepository::update(pool, execution_for_update.id, execution_for_update.into())
@@ -391,6 +390,7 @@ impl ExecutionScheduler {
&workflow_execution.id,
task_node,
&wf_ctx,
None, // entry-point task — no predecessor
)
.await?;
} else {
@@ -407,6 +407,10 @@ impl ExecutionScheduler {
/// Create a child execution for a single workflow task and dispatch it to
/// a worker. The child execution references the parent workflow execution
/// via `workflow_task` metadata.
///
/// `triggered_by` is the name of the predecessor task whose completion
/// caused this task to be scheduled. Pass `None` for entry-point tasks
/// dispatched at workflow start.
async fn dispatch_workflow_task(
pool: &PgPool,
publisher: &Publisher,
@@ -415,6 +419,7 @@ impl ExecutionScheduler {
workflow_execution_id: &i64,
task_node: &crate::workflow::graph::TaskNode,
wf_ctx: &WorkflowContext,
triggered_by: Option<&str>,
) -> Result<()> {
let action_ref: String = match &task_node.action {
Some(a) => a.clone(),
@@ -461,6 +466,7 @@ impl ExecutionScheduler {
&action_ref,
with_items_expr,
wf_ctx,
triggered_by,
)
.await;
}
@@ -484,12 +490,12 @@ impl ExecutionScheduler {
task_node.input.clone()
};

// Build task config from the (rendered) input
// Build task config from the (rendered) input.
// Store as flat parameters (consistent with manual and rule-triggered
// executions) — no {"parameters": ...} wrapper.
let task_config: Option<JsonValue> =
if rendered_input.is_object() && !rendered_input.as_object().unwrap().is_empty() {
Some(serde_json::json!({
"parameters": rendered_input
}))
Some(rendered_input.clone())
} else if let Some(parent_config) = &parent_execution.config {
Some(parent_config.clone())
} else {
@@ -500,6 +506,7 @@ impl ExecutionScheduler {
let workflow_task = WorkflowTaskMetadata {
workflow_execution: *workflow_execution_id,
task_name: task_node.name.clone(),
triggered_by: triggered_by.map(String::from),
task_index: None,
task_batch: None,
retry_count: 0,
@@ -587,6 +594,7 @@ impl ExecutionScheduler {
action_ref: &str,
with_items_expr: &str,
wf_ctx: &WorkflowContext,
triggered_by: Option<&str>,
) -> Result<()> {
// Resolve the with_items expression to a JSON array
let items_value = wf_ctx
@@ -647,9 +655,11 @@ impl ExecutionScheduler {
task_node.input.clone()
};

// Store as flat parameters (consistent with manual and rule-triggered
// executions) — no {"parameters": ...} wrapper.
let task_config: Option<JsonValue> =
if rendered_input.is_object() && !rendered_input.as_object().unwrap().is_empty() {
Some(serde_json::json!({ "parameters": rendered_input }))
Some(rendered_input.clone())
} else if let Some(parent_config) = &parent_execution.config {
Some(parent_config.clone())
} else {
@@ -659,6 +669,7 @@ impl ExecutionScheduler {
let workflow_task = WorkflowTaskMetadata {
workflow_execution: *workflow_execution_id,
task_name: task_node.name.clone(),
triggered_by: triggered_by.map(String::from),
task_index: Some(index as i32),
task_batch: None,
retry_count: 0,
@@ -961,8 +972,7 @@ impl ExecutionScheduler {
.and_then(|n| n.concurrency)
.unwrap_or(1);

let free_slots =
concurrency_limit.saturating_sub(in_flight_count.0 as usize);
let free_slots = concurrency_limit.saturating_sub(in_flight_count.0 as usize);

if free_slots > 0 {
if let Err(e) = Self::publish_pending_with_items_children(
@@ -1009,6 +1019,39 @@ impl ExecutionScheduler {
return Ok(());
}

// ---------------------------------------------------------
// Race-condition guard: when multiple with_items children
// complete nearly simultaneously, the worker updates their
// DB status to Completed *before* the completion MQ message
// is processed. This means several advance_workflow calls
// (processed sequentially by the completion listener) can
// each see "0 siblings remaining" and fall through to
// transition evaluation, dispatching successor tasks
// multiple times.
//
// To prevent this we re-check the *persisted*
// completed/failed task lists that were loaded from the
// workflow_execution record at the top of this function.
// If `task_name` is already present, a previous
// advance_workflow invocation already handled the final
// completion of this with_items task and dispatched its
// successors — we can safely return.
// ---------------------------------------------------------
if workflow_execution
.completed_tasks
.contains(&task_name.to_string())
|| workflow_execution
.failed_tasks
.contains(&task_name.to_string())
{
debug!(
"with_items task '{}' already in persisted completed/failed list — \
another advance_workflow call already handled final completion, skipping",
task_name,
);
return Ok(());
}

// All items done — check if any failed
let any_failed: Vec<(i64,)> = sqlx::query_as(
"SELECT id \
@@ -1129,10 +1172,10 @@ impl ExecutionScheduler {
if should_fire {
// Process publish directives from this transition
if !transition.publish.is_empty() {
let publish_map: HashMap<String, String> = transition
let publish_map: HashMap<String, JsonValue> = transition
.publish
.iter()
.map(|p| (p.name.clone(), p.expression.clone()))
.map(|p| (p.name.clone(), p.value.clone()))
.collect();
if let Err(e) = wf_ctx.publish_from_result(
&serde_json::json!({}),
@@ -1161,6 +1204,41 @@ impl ExecutionScheduler {
continue;
}

// Guard against dispatching a task that has already
// been dispatched (exists as a child execution in
// this workflow). This catches edge cases where
// the persisted completed/failed lists haven't been
// updated yet but a child execution was already
// created by a prior advance_workflow call.
//
// This is critical for with_items predecessors:
// workers update DB status to Completed before the
// completion MQ message is processed, so multiple
// with_items items can each see "0 siblings
// remaining" and attempt to dispatch the same
// successor. The query covers both regular tasks
// (task_index IS NULL) and with_items tasks
// (task_index IS NOT NULL) so that neither case
// can be double-dispatched.
let already_dispatched: (i64,) = sqlx::query_as(
"SELECT COUNT(*) \
FROM execution \
WHERE workflow_task->>'workflow_execution' = $1::text \
AND workflow_task->>'task_name' = $2",
)
.bind(workflow_execution_id.to_string())
.bind(next_task_name.as_str())
.fetch_one(pool)
.await?;

if already_dispatched.0 > 0 {
debug!(
"Skipping task '{}' — already dispatched ({} existing execution(s))",
next_task_name, already_dispatched.0
);
continue;
}

// Check join barrier: if the task has a `join` count,
// only schedule it when enough predecessors are done.
if let Some(next_node) = graph.get_task(next_task_name) {
@@ -1210,6 +1288,7 @@ impl ExecutionScheduler {
&workflow_execution_id,
task_node,
&wf_ctx,
Some(task_name), // predecessor that triggered this task
)
.await
{
@@ -1716,19 +1795,8 @@ mod tests {
assert_eq!(free, 0);
}

#[test]
fn test_extract_workflow_params_wrapped_format() {
// Child task executions store config as {"parameters": {...}}
let config = Some(serde_json::json!({
"parameters": {"n": 5, "name": "test"}
}));
let params = extract_workflow_params(&config);
assert_eq!(params, serde_json::json!({"n": 5, "name": "test"}));
}

#[test]
fn test_extract_workflow_params_flat_format() {
// API manual executions store config as flat {"n": 5, ...}
let config = Some(serde_json::json!({"n": 5, "name": "test"}));
let params = extract_workflow_params(&config);
assert_eq!(params, serde_json::json!({"n": 5, "name": "test"}));
@@ -1742,7 +1810,6 @@ mod tests {

#[test]
fn test_extract_workflow_params_non_object() {
// Edge case: config is a non-object JSON value
let config = Some(serde_json::json!("not an object"));
let params = extract_workflow_params(&config);
assert_eq!(params, serde_json::json!({}));
@@ -1756,14 +1823,17 @@ mod tests {
}

#[test]
fn test_extract_workflow_params_wrapped_takes_precedence() {
// If config has a "parameters" key, that value is used even if
// the config object also has other top-level keys
fn test_extract_workflow_params_with_parameters_key() {
// A "parameters" key is just a regular parameter — not unwrapped
let config = Some(serde_json::json!({
"parameters": {"n": 5},
"context": {"rule": "test"}
}));
let params = extract_workflow_params(&config);
assert_eq!(params, serde_json::json!({"n": 5}));
// Returns the whole object as-is — "parameters" is treated as a normal key
assert_eq!(
params,
serde_json::json!({"parameters": {"n": 5}, "context": {"rule": "test"}})
);
}
}

@@ -412,24 +412,26 @@ impl WorkflowContext {

/// Publish variables from a task result.
///
/// Each publish directive is a `(name, expression)` pair where the
/// expression is a template string like `"{{ result().data.items }}"`.
/// The expression is rendered with `render_json`-style type preservation
/// so that non-string values (arrays, numbers, booleans) keep their type.
/// Each publish directive is a `(name, value)` pair where the value is
/// any JSON-compatible type. String values are treated as template
/// expressions (e.g. `"{{ result().data.items }}"`) and rendered with
/// type preservation. Non-string values (booleans, numbers, arrays,
/// objects, null) pass through `render_json` unchanged, preserving
/// their original type.
pub fn publish_from_result(
&mut self,
result: &JsonValue,
publish_vars: &[String],
publish_map: Option<&HashMap<String, String>>,
publish_map: Option<&HashMap<String, JsonValue>>,
) -> ContextResult<()> {
// If publish map is provided, use it
if let Some(map) = publish_map {
for (var_name, template) in map {
// Use type-preserving rendering: if the entire template is a
// single expression like `{{ result().data.items }}`, preserve
// the underlying JsonValue type (e.g. an array stays an array).
let json_value = JsonValue::String(template.clone());
let value = self.render_json(&json_value)?;
for (var_name, json_value) in map {
// render_json handles all types: strings are template-rendered
// (with type preservation for pure `{{ }}` expressions), while
// booleans, numbers, arrays, objects, and null pass through
// unchanged.
let value = self.render_json(json_value)?;
self.set_var(var_name, value);
}
} else {
@@ -1095,7 +1097,7 @@ mod tests {
let mut publish_map = HashMap::new();
publish_map.insert(
"number_list".to_string(),
"{{ result().data.items }}".to_string(),
JsonValue::String("{{ result().data.items }}".to_string()),
);

ctx.publish_from_result(&json!({}), &[], Some(&publish_map))
@@ -1117,6 +1119,52 @@ mod tests {
assert_eq!(ctx.get_var("my_var").unwrap(), result);
}

#[test]
fn test_publish_typed_values() {
// Non-string publish values (booleans, numbers, null) should pass
// through render_json unchanged and be stored with their original type.
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
ctx.set_last_task_outcome(json!({"status": "ok"}), TaskOutcome::Succeeded);

let mut publish_map = HashMap::new();
publish_map.insert("flag".to_string(), JsonValue::Bool(true));
publish_map.insert("count".to_string(), json!(42));
publish_map.insert("ratio".to_string(), json!(3.14));
publish_map.insert("nothing".to_string(), JsonValue::Null);
publish_map.insert(
"template".to_string(),
JsonValue::String("{{ result().status }}".to_string()),
);
publish_map.insert(
"plain_str".to_string(),
JsonValue::String("hello".to_string()),
);

ctx.publish_from_result(&json!({}), &[], Some(&publish_map))
.unwrap();

// Boolean preserved as boolean (not string "true")
assert_eq!(ctx.get_var("flag").unwrap(), json!(true));
assert!(ctx.get_var("flag").unwrap().is_boolean());

// Integer preserved
assert_eq!(ctx.get_var("count").unwrap(), json!(42));
assert!(ctx.get_var("count").unwrap().is_number());

// Float preserved
assert_eq!(ctx.get_var("ratio").unwrap(), json!(3.14));

// Null preserved
assert_eq!(ctx.get_var("nothing").unwrap(), json!(null));
assert!(ctx.get_var("nothing").unwrap().is_null());

// Template expression rendered with type preservation
assert_eq!(ctx.get_var("template").unwrap(), json!("ok"));

// Plain string stays as string
assert_eq!(ctx.get_var("plain_str").unwrap(), json!("hello"));
}

#[test]
fn test_published_var_accessible_via_workflow_namespace() {
let mut ctx = WorkflowContext::new(json!({}), HashMap::new());

@@ -11,6 +11,7 @@
//! - `do` — next tasks to invoke when the condition is met

use attune_common::workflow::{Task, TaskType, WorkflowDefinition};
use serde_json::Value as JsonValue;
use std::collections::{HashMap, HashSet};

/// Result type for graph operations
@@ -101,11 +102,23 @@ pub struct GraphTransition {
pub do_tasks: Vec<String>,
}

/// A single publish variable (key = expression)
/// A single publish variable (key = value).
///
/// The `value` field holds either a template expression (as a `JsonValue::String`
/// containing `{{ }}`), a literal string, or any other JSON-compatible type
/// (boolean, number, array, object, null). The workflow context's `render_json`
/// method handles all of these: strings are template-rendered (with type
/// preservation for pure expressions), while non-string values pass through
/// unchanged.
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct PublishVar {
pub name: String,
pub expression: String,
/// The publish value — may be a template string, literal boolean, number,
/// array, object, or null. Renamed from `expression` (which only supported
/// strings); the serde alias ensures existing serialized task graphs that
/// use the old field name still deserialize correctly.
#[serde(alias = "expression")]
pub value: JsonValue,
}

/// Retry configuration
@@ -463,14 +476,14 @@ fn extract_publish_vars(publish: &[attune_common::workflow::PublishDirective]) -
for (key, value) in map {
vars.push(PublishVar {
name: key.clone(),
expression: value.clone(),
value: value.clone(),
});
}
}
PublishDirective::Key(key) => {
vars.push(PublishVar {
name: key.clone(),
expression: "{{ result() }}".to_string(),
value: JsonValue::String("{{ result() }}".to_string()),
});
}
}
@@ -678,7 +691,7 @@ tasks:
assert_eq!(transitions.len(), 1);
assert_eq!(transitions[0].publish.len(), 1);
assert_eq!(transitions[0].publish[0].name, "msg");
assert_eq!(transitions[0].publish[0].expression, "task1 done");
assert_eq!(transitions[0].publish[0].value, JsonValue::String("task1 done".to_string()));
}

#[test]
@@ -932,4 +945,82 @@ tasks:
assert!(next.contains(&"failure_task".to_string()));
assert!(next.contains(&"always_task".to_string()));
}

#[test]
fn test_typed_publish_values() {
// Verify that non-string publish values (booleans, numbers, null)
// are preserved through parsing and graph construction.
let yaml = r#"
ref: test.typed_publish
label: Typed Publish Test
version: 1.0.0
tasks:
  - name: task1
    action: core.echo
    next:
      - when: "{{ succeeded() }}"
        publish:
          - validation_passed: true
          - count: 42
          - ratio: 3.14
          - label: "hello"
          - template_val: "{{ result().data }}"
          - nothing: null
        do:
          - task2
      - when: "{{ failed() }}"
        publish:
          - validation_passed: false
        do:
          - task2
  - name: task2
    action: core.echo
"#;

let workflow = workflow::parse_workflow_yaml(yaml).unwrap();
let graph = TaskGraph::from_workflow(&workflow).unwrap();

let task1 = graph.get_task("task1").unwrap();
assert_eq!(task1.transitions.len(), 2);

// Success transition should have 6 publish vars
let success_publish = &task1.transitions[0].publish;
assert_eq!(success_publish.len(), 6);

// Build a lookup map for easier assertions
let publish_map: HashMap<&str, &JsonValue> = success_publish
.iter()
.map(|p| (p.name.as_str(), &p.value))
.collect();

// Boolean true is preserved as a JSON boolean
assert_eq!(publish_map["validation_passed"], &JsonValue::Bool(true));

// Integer is preserved as a JSON number
assert_eq!(publish_map["count"], &serde_json::json!(42));

// Float is preserved as a JSON number
assert_eq!(publish_map["ratio"], &serde_json::json!(3.14));

// Plain string stays as a string
assert_eq!(
publish_map["label"],
&JsonValue::String("hello".to_string())
);

// Template expression stays as a string (rendered later by context)
assert_eq!(
publish_map["template_val"],
&JsonValue::String("{{ result().data }}".to_string())
);

// Null is preserved
assert_eq!(publish_map["nothing"], &JsonValue::Null);

// Failure transition should have boolean false
let failure_publish = &task1.transitions[1].publish;
assert_eq!(failure_publish.len(), 1);
assert_eq!(failure_publish[0].name, "validation_passed");
assert_eq!(failure_publish[0].value, JsonValue::Bool(false));
}
}

@@ -162,11 +162,16 @@ pub enum TaskType {
}

/// Variable publishing directive
///
/// Values may be template expressions (strings containing `{{ }}`), literal
/// strings, or any other JSON-compatible type (booleans, numbers, arrays,
/// objects). Non-string literals are preserved through the rendering pipeline.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PublishDirective {
/// Simple key-value pair
Simple(HashMap<String, String>),
/// Key-value pair where the value can be any JSON-compatible type
/// (string template, boolean, number, array, object, null).
Simple(HashMap<String, serde_json::Value>),
/// Just a key (publishes entire result under that key)
Key(String),
}

@@ -4,6 +4,11 @@
//! Workflows are stored in the `workflow_definition` table with their full YAML definition
//! as JSON. A companion action record is also created so that workflows appear in
//! action lists and the workflow builder's action palette.
//!
//! Standalone workflow files (in `workflows/`) carry their own `ref` and `label`.
//! Action-linked workflow files (in `actions/workflows/`, referenced via
//! `workflow_file`) may omit those fields — the registrar falls back to
//! `WorkflowFile.ref_name` / `WorkflowFile.name` derived from the filename.

use attune_common::error::{Error, Result};
use attune_common::repositories::action::{ActionRepository, CreateActionInput, UpdateActionInput};
@@ -63,6 +68,32 @@ impl WorkflowRegistrar {
Self { pool, options }
}

/// Resolve the effective ref for a workflow.
///
/// Prefers the value declared in the YAML; falls back to the
/// `WorkflowFile.ref_name` derived from the filename when the YAML
/// omits it (action-linked workflow files).
fn effective_ref(loaded: &LoadedWorkflow) -> String {
if loaded.workflow.r#ref.is_empty() {
loaded.file.ref_name.clone()
} else {
loaded.workflow.r#ref.clone()
}
}

/// Resolve the effective label for a workflow.
///
/// Prefers the value declared in the YAML; falls back to the
/// `WorkflowFile.name` (human-readable filename stem) when the YAML
/// omits it.
fn effective_label(loaded: &LoadedWorkflow) -> String {
if loaded.workflow.label.is_empty() {
loaded.file.name.clone()
} else {
loaded.workflow.label.clone()
}
}

/// Register a single workflow
pub async fn register_workflow(&self, loaded: &LoadedWorkflow) -> Result<RegistrationResult> {
debug!("Registering workflow: {}", loaded.file.ref_name);
@@ -93,6 +124,12 @@ impl WorkflowRegistrar {
warnings.push(err.clone());
}

// Resolve effective ref/label — prefer workflow YAML values, fall
// back to filename-derived values for action-linked workflow files
// that omit action-level metadata.
let effective_ref = Self::effective_ref(loaded);
let effective_label = Self::effective_label(loaded);

let (workflow_def_id, created) = if let Some(existing) = existing_workflow {
if !self.options.update_existing {
return Err(Error::already_exists(
@@ -104,7 +141,13 @@ impl WorkflowRegistrar {

info!("Updating existing workflow: {}", loaded.file.ref_name);
let workflow_def_id = self
.update_workflow(&existing.id, &loaded.workflow, &pack.r#ref)
.update_workflow(
&existing.id,
&loaded.workflow,
&pack.r#ref,
&effective_ref,
&effective_label,
)
.await?;

// Update or create the companion action record
@@ -114,6 +157,8 @@ impl WorkflowRegistrar {
pack.id,
&pack.r#ref,
&loaded.file.name,
&effective_ref,
&effective_label,
)
.await?;

@@ -121,7 +166,14 @@ impl WorkflowRegistrar {
} else {
info!("Creating new workflow: {}", loaded.file.ref_name);
let workflow_def_id = self
.create_workflow(&loaded.workflow, &loaded.file.pack, pack.id, &pack.r#ref)
.create_workflow(
&loaded.workflow,
&loaded.file.pack,
pack.id,
&pack.r#ref,
&effective_ref,
&effective_label,
)
.await?;

// Create a companion action record so the workflow appears in action lists
@@ -131,6 +183,8 @@ impl WorkflowRegistrar {
pack.id,
&pack.r#ref,
&loaded.file.name,
&effective_ref,
&effective_label,
)
.await?;

@@ -197,6 +251,9 @@ impl WorkflowRegistrar {
|
||||
/// This ensures the workflow appears in action lists and the action palette
|
||||
/// in the workflow builder. The action is linked to the workflow definition
|
||||
/// via the `workflow_def` FK.
|
||||
///
|
||||
/// `effective_ref` and `effective_label` are the resolved values (which may
|
||||
/// have been derived from the filename when the workflow YAML omits them).
|
||||
async fn create_companion_action(
|
||||
&self,
|
||||
workflow_def_id: i64,
|
||||
@@ -204,14 +261,16 @@ impl WorkflowRegistrar {
|
||||
pack_id: i64,
|
||||
pack_ref: &str,
|
||||
workflow_name: &str,
|
||||
effective_ref: &str,
|
||||
effective_label: &str,
|
||||
) -> Result<()> {
|
||||
let entrypoint = format!("workflows/{}.workflow.yaml", workflow_name);
|
||||
|
||||
let action_input = CreateActionInput {
|
||||
r#ref: workflow.r#ref.clone(),
|
||||
r#ref: effective_ref.to_string(),
|
||||
pack: pack_id,
|
||||
pack_ref: pack_ref.to_string(),
|
||||
label: workflow.label.clone(),
|
||||
label: effective_label.to_string(),
|
||||
description: workflow.description.clone().unwrap_or_default(),
|
||||
entrypoint,
|
||||
runtime: None,
|
||||
@@ -228,7 +287,7 @@ impl WorkflowRegistrar {
|
||||
|
||||
info!(
|
||||
"Created companion action '{}' (ID: {}) for workflow definition (ID: {})",
|
||||
workflow.r#ref, action.id, workflow_def_id
|
||||
effective_ref, action.id, workflow_def_id
|
||||
);
|
||||
|
||||
Ok(())
|
||||
@@ -238,6 +297,9 @@ impl WorkflowRegistrar {
     ///
     /// If the action already exists, update it. If it doesn't exist (e.g., for
     /// workflows registered before the companion-action fix), create it.
+    ///
+    /// `effective_ref` and `effective_label` are the resolved values (which may
+    /// have been derived from the filename when the workflow YAML omits them).
     async fn ensure_companion_action(
         &self,
         workflow_def_id: i64,
@@ -245,6 +307,8 @@ impl WorkflowRegistrar {
         pack_id: i64,
         pack_ref: &str,
         workflow_name: &str,
+        effective_ref: &str,
+        effective_label: &str,
     ) -> Result<()> {
         let existing_action =
             ActionRepository::find_by_workflow_def(&self.pool, workflow_def_id).await?;
@@ -252,7 +316,7 @@ impl WorkflowRegistrar {
         if let Some(action) = existing_action {
             // Update the existing companion action to stay in sync
             let update_input = UpdateActionInput {
-                label: Some(workflow.label.clone()),
+                label: Some(effective_label.to_string()),
                 description: workflow.description.clone(),
                 entrypoint: Some(format!("workflows/{}.workflow.yaml", workflow_name)),
                 runtime: None,
@@ -278,6 +342,8 @@ impl WorkflowRegistrar {
                 pack_id,
                 pack_ref,
                 workflow_name,
+                effective_ref,
+                effective_label,
             )
             .await?;
         }
@@ -286,27 +352,32 @@ impl WorkflowRegistrar {
     }

     /// Create a new workflow definition
+    ///
+    /// `effective_ref` and `effective_label` are the resolved values (which may
+    /// have been derived from the filename when the workflow YAML omits them).
     async fn create_workflow(
         &self,
         workflow: &WorkflowYaml,
         _pack_name: &str,
         pack_id: i64,
         pack_ref: &str,
+        effective_ref: &str,
+        effective_label: &str,
     ) -> Result<i64> {
         // Convert the parsed workflow back to JSON for storage
         let definition = serde_json::to_value(workflow)
             .map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;

         let input = CreateWorkflowDefinitionInput {
-            r#ref: workflow.r#ref.clone(),
+            r#ref: effective_ref.to_string(),
             pack: pack_id,
             pack_ref: pack_ref.to_string(),
-            label: workflow.label.clone(),
+            label: effective_label.to_string(),
             description: workflow.description.clone(),
             version: workflow.version.clone(),
             param_schema: workflow.parameters.clone(),
             out_schema: workflow.output.clone(),
-            definition: definition,
+            definition,
             tags: workflow.tags.clone(),
             enabled: true,
         };
@@ -317,18 +388,23 @@ impl WorkflowRegistrar {
     }

     /// Update an existing workflow definition
+    ///
+    /// `effective_ref` and `effective_label` are the resolved values (which may
+    /// have been derived from the filename when the workflow YAML omits them).
     async fn update_workflow(
         &self,
         workflow_id: &i64,
         workflow: &WorkflowYaml,
         _pack_ref: &str,
+        _effective_ref: &str,
+        effective_label: &str,
     ) -> Result<i64> {
         // Convert the parsed workflow back to JSON for storage
         let definition = serde_json::to_value(workflow)
             .map_err(|e| Error::validation(format!("Failed to serialize workflow: {}", e)))?;

         let input = UpdateWorkflowDefinitionInput {
-            label: Some(workflow.label.clone()),
+            label: Some(effective_label.to_string()),
             description: workflow.description.clone(),
             version: Some(workflow.version.clone()),
             param_schema: workflow.parameters.clone(),
@@ -257,41 +257,27 @@ impl ActionExecutor {
             execution.id
         );

-        // Extract parameters from execution config
+        // Extract parameters from execution config.
+        // Config is always stored in flat format: the config object itself
+        // is the parameters map (e.g. {"url": "...", "method": "GET"}).
         let mut parameters = HashMap::new();

         if let Some(config) = &execution.config {
-            info!("Execution config present: {:?}", config);
+            debug!("Execution config present: {:?}", config);

-            // Try to get parameters from config.parameters first
-            if let Some(params) = config.get("parameters") {
-                info!("Found config.parameters key");
-                if let JsonValue::Object(map) = params {
-                    for (key, value) in map {
-                        parameters.insert(key.clone(), value.clone());
-                    }
-                }
-            } else if let JsonValue::Object(map) = config {
-                info!("No config.parameters key, treating entire config as parameters");
-                // If no parameters key, treat entire config as parameters
-                // (this handles rule action_params being placed at root level)
+            if let JsonValue::Object(map) = config {
                 for (key, value) in map {
-                    // Skip special keys that aren't action parameters
-                    if key != "context" && key != "env" {
-                        info!("Adding parameter: {} = {:?}", key, value);
-                        parameters.insert(key.clone(), value.clone());
-                    } else {
-                        info!("Skipping special key: {}", key);
-                    }
+                    debug!("Adding parameter: {} = {:?}", key, value);
+                    parameters.insert(key.clone(), value.clone());
                 }
             } else {
                 info!("Config is not an Object, cannot extract parameters");
             }
         } else {
-            info!("No execution config present");
+            debug!("No execution config present");
         }

-        info!(
+        debug!(
             "Extracted {} parameters: {:?}",
             parameters.len(),
             parameters
@@ -56,19 +56,14 @@ pub async fn execute_streaming(
     let mut error = None;

     // Write parameters first if using stdin delivery.
-    // Skip empty/trivial content ("{}","","[]") to avoid polluting stdin
-    // before secrets — scripts that read secrets via readline() expect
-    // the secrets JSON as the first line.
-    let has_real_params = parameters_stdin
-        .map(|s| !matches!(s.trim(), "" | "{}" | "[]"))
-        .unwrap_or(false);
+    // When the caller provides parameters_stdin (i.e. the action uses
+    // stdin delivery), always write the content — even if it's "{}" —
+    // because the script expects to read valid JSON from stdin.
     if let Some(params_data) = parameters_stdin {
-        if has_real_params {
-            if let Err(e) = stdin.write_all(params_data.as_bytes()).await {
-                error = Some(format!("Failed to write parameters to stdin: {}", e));
-            } else if let Err(e) = stdin.write_all(b"\n---ATTUNE_PARAMS_END---\n").await {
-                error = Some(format!("Failed to write parameter delimiter: {}", e));
-            }
-        }
+        if let Err(e) = stdin.write_all(params_data.as_bytes()).await {
+            error = Some(format!("Failed to write parameters to stdin: {}", e));
+        } else if let Err(e) = stdin.write_all(b"\n---ATTUNE_PARAMS_END---\n").await {
+            error = Some(format!("Failed to write parameter delimiter: {}", e));
+        }
     }
@@ -98,6 +98,7 @@ services:
       - ./docker/init-packs.sh:/init-packs.sh:ro
       - packs_data:/opt/attune/packs
       - runtime_envs:/opt/attune/runtime_envs
+      - artifacts_data:/opt/attune/artifacts
     environment:
       DB_HOST: postgres
       DB_PORT: 5432
@@ -25,6 +25,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy workspace manifests and source code
 COPY Cargo.toml Cargo.lock ./
 COPY crates/ ./crates/

@@ -65,6 +68,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy workspace files
 COPY Cargo.toml Cargo.lock ./
 COPY crates/ ./crates/

@@ -27,6 +27,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy dependency metadata first so `cargo fetch` layer is cached
 # when only source code changes (Cargo.toml/Cargo.lock stay the same)
 COPY Cargo.toml Cargo.lock ./

@@ -29,6 +29,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy workspace configuration
 COPY Cargo.toml Cargo.lock ./

@@ -30,6 +30,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy workspace files
 COPY Cargo.toml Cargo.lock ./
 COPY crates/ ./crates/

@@ -30,6 +30,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy dependency metadata first so `cargo fetch` layer is cached
 # when only source code changes (Cargo.toml/Cargo.lock stay the same)
 COPY Cargo.toml Cargo.lock ./

@@ -27,6 +27,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy workspace manifests and source code
 COPY Cargo.toml Cargo.lock ./
 COPY crates/ ./crates/

@@ -35,6 +35,9 @@ RUN apt-get update && apt-get install -y \

 WORKDIR /build

+# Increase rustc stack size to prevent SIGSEGV during release builds
+ENV RUST_MIN_STACK=16777216
+
 # Copy dependency metadata first so `cargo fetch` layer is cached
 # when only source code changes (Cargo.toml/Cargo.lock stay the same)
 COPY Cargo.toml Cargo.lock ./
@@ -78,6 +78,17 @@ else
     echo -e "${YELLOW}⚠${NC} Runtime environments directory not mounted, skipping"
 fi

+# Initialise artifacts volume with correct ownership.
+# The API service (creates directories for file-backed artifact versions) and
+# workers (write artifact files during execution) both run as attune uid 1000.
+ARTIFACTS_DIR="${ARTIFACTS_DIR:-/opt/attune/artifacts}"
+if [ -d "$ARTIFACTS_DIR" ] || mkdir -p "$ARTIFACTS_DIR" 2>/dev/null; then
+    chown -R 1000:1000 "$ARTIFACTS_DIR"
+    echo -e "${GREEN}✓${NC} Artifacts directory ready at: $ARTIFACTS_DIR"
+else
+    echo -e "${YELLOW}⚠${NC} Artifacts directory not mounted, skipping"
+fi
+
 # Check if source packs directory exists
 if [ ! -d "$SOURCE_PACKS_DIR" ]; then
     echo -e "${RED}✗${NC} Source packs directory not found: $SOURCE_PACKS_DIR"
@@ -35,6 +35,7 @@ BEGIN
         'parent', NEW.parent,
         'result', NEW.result,
         'started_at', NEW.started_at,
+        'workflow_task', NEW.workflow_task,
         'created', NEW.created,
         'updated', NEW.updated
     );

@@ -77,6 +78,7 @@ BEGIN
         'parent', NEW.parent,
         'result', NEW.result,
         'started_at', NEW.started_at,
+        'workflow_task', NEW.workflow_task,
         'created', NEW.created,
         'updated', NEW.updated
     );
Submodule packs.external/python_example updated: 9414ee34e2...4df156f210
@@ -314,8 +314,117 @@ class PackLoader:
             print(f"  ⚠ Could not resolve runtime for runner_type '{runner_type}'")
         return None

+    def upsert_workflow_definition(
+        self,
+        cursor,
+        workflow_file_path: str,
+        action_ref: str,
+        action_data: Dict[str, Any],
+    ) -> Optional[int]:
+        """Load a workflow definition file and upsert it in the database.
+
+        When an action YAML contains a `workflow_file` field, this method reads
+        the referenced workflow YAML, creates or updates the corresponding
+        `workflow_definition` row, and returns its ID so the action can be linked
+        via the `workflow_def` FK.
+
+        The action YAML's `parameters` and `output` fields take precedence over
+        the workflow file's own schemas (allowing the action to customise the
+        exposed interface without touching the workflow graph).
+
+        Args:
+            cursor: Database cursor.
+            workflow_file_path: Path to the workflow file relative to the
+                ``actions/`` directory (e.g. ``workflows/deploy.workflow.yaml``).
+            action_ref: The ref of the action that references this workflow.
+            action_data: The parsed action YAML dict (used for schema overrides).
+
+        Returns:
+            The database ID of the workflow_definition row, or None on failure.
+        """
+        actions_dir = self.pack_dir / "actions"
+        full_path = actions_dir / workflow_file_path
+        if not full_path.exists():
+            print(f"  ⚠ Workflow file '{workflow_file_path}' not found at {full_path}")
+            return None
+
+        try:
+            workflow_data = self.load_yaml(full_path)
+        except Exception as e:
+            print(f"  ⚠ Failed to parse workflow file '{workflow_file_path}': {e}")
+            return None
+
+        # The workflow file's own metadata takes precedence when present
+        # (standalone workflow files in workflows/ still carry it); fall
+        # back to the action YAML otherwise.
+        workflow_ref = workflow_data.get("ref") or action_ref
+        label = workflow_data.get("label") or action_data.get("label", "")
+        description = workflow_data.get("description") or action_data.get(
+            "description", ""
+        )
+        version = workflow_data.get("version", "1.0.0")
+        tags = workflow_data.get("tags") or action_data.get("tags", [])
+
+        # The action YAML is authoritative for param_schema / out_schema.
+        # Fall back to the workflow file's own schemas only if the action
+        # YAML doesn't define them.
+        param_schema = action_data.get("parameters") or workflow_data.get("parameters")
+        out_schema = action_data.get("output") or workflow_data.get("output")
+
+        param_schema_json = json.dumps(param_schema) if param_schema else None
+        out_schema_json = json.dumps(out_schema) if out_schema else None
+
+        # Store the full workflow definition as JSON
+        definition_json = json.dumps(workflow_data)
+        tags_list = tags if isinstance(tags, list) else []
+
+        cursor.execute(
+            """
+            INSERT INTO workflow_definition (
+                ref, pack, pack_ref, label, description, version,
+                param_schema, out_schema, definition, tags, enabled
+            )
+            VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
+            ON CONFLICT (ref) DO UPDATE SET
+                label = EXCLUDED.label,
+                description = EXCLUDED.description,
+                version = EXCLUDED.version,
+                param_schema = EXCLUDED.param_schema,
+                out_schema = EXCLUDED.out_schema,
+                definition = EXCLUDED.definition,
+                tags = EXCLUDED.tags,
+                enabled = EXCLUDED.enabled,
+                updated = NOW()
+            RETURNING id
+            """,
+            (
+                workflow_ref,
+                self.pack_id,
+                self.pack_ref,
+                label,
+                description,
+                version,
+                param_schema_json,
+                out_schema_json,
+                definition_json,
+                tags_list,
+                True,
+            ),
+        )
+
+        workflow_def_id = cursor.fetchone()[0]
+        print(f"  ✓ Workflow definition '{workflow_ref}' (ID: {workflow_def_id})")
+        return workflow_def_id
+
     def upsert_actions(self, runtime_ids: Dict[str, int]) -> Dict[str, int]:
-        """Load action definitions"""
+        """Load action definitions.
+
+        When an action YAML contains a ``workflow_file`` field, the loader reads
+        the referenced workflow definition, upserts a ``workflow_definition``
+        record, and links the action to it via ``action.workflow_def``. This
+        allows the action YAML to control action-level metadata independently
+        of the workflow graph, and lets multiple actions share a workflow file.
+        """
         print("\n→ Loading actions...")

         actions_dir = self.pack_dir / "actions"
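The precedence rules in `upsert_workflow_definition` can be condensed into a small standalone helper. This is a sketch, not part of the loader: `resolve_workflow_metadata` is a hypothetical name, but the `or`-fallback logic mirrors the diff.

```python
from typing import Any, Dict

def resolve_workflow_metadata(
    action_data: Dict[str, Any],
    workflow_data: Dict[str, Any],
    action_ref: str,
) -> Dict[str, Any]:
    """Merge action YAML and workflow YAML the way the loader does:
    ref/label come from the workflow file first, schemas from the
    action YAML first."""
    return {
        # workflow file wins for ref/label; action YAML is the fallback
        "ref": workflow_data.get("ref") or action_ref,
        "label": workflow_data.get("label") or action_data.get("label", ""),
        # action YAML wins for the exposed parameter/output schemas
        "param_schema": action_data.get("parameters")
        or workflow_data.get("parameters"),
        "out_schema": action_data.get("output") or workflow_data.get("output"),
        "version": workflow_data.get("version", "1.0.0"),
    }
```

This asymmetry lets an action expose a customised interface (schemas) over a shared workflow graph while the workflow file stays authoritative for its own identity.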
@@ -324,6 +433,7 @@ class PackLoader:
             return {}

         action_ids = {}
+        workflow_count = 0
         cursor = self.conn.cursor()

         for yaml_file in sorted(actions_dir.glob("*.yaml")):
@@ -340,18 +450,36 @@ class PackLoader:
             label = action_data.get("label") or generate_label(name)
             description = action_data.get("description", "")

-            # Determine entrypoint
-            entrypoint = action_data.get("entry_point", "")
-            if not entrypoint:
-                # Try to find corresponding script file
-                for ext in [".sh", ".py"]:
-                    script_path = actions_dir / f"{name}{ext}"
-                    if script_path.exists():
-                        entrypoint = str(script_path.relative_to(self.packs_dir))
-                        break
+            # ── Workflow file handling ───────────────────────────────────
+            workflow_file = action_data.get("workflow_file")
+            workflow_def_id: Optional[int] = None

-            # Resolve runtime ID for this action
-            runtime_id = self.resolve_action_runtime(action_data, runtime_ids)
+            if workflow_file:
+                workflow_def_id = self.upsert_workflow_definition(
+                    cursor, workflow_file, ref, action_data
+                )
+                if workflow_def_id is not None:
+                    workflow_count += 1
+
+            # For workflow actions the entrypoint is the workflow file path;
+            # for regular actions it comes from entry_point in the YAML.
+            if workflow_file:
+                entrypoint = workflow_file
+            else:
+                entrypoint = action_data.get("entry_point", "")
+                if not entrypoint:
+                    # Try to find corresponding script file
+                    for ext in [".sh", ".py"]:
+                        script_path = actions_dir / f"{name}{ext}"
+                        if script_path.exists():
+                            entrypoint = str(script_path.relative_to(self.packs_dir))
+                            break
+
+            # Resolve runtime ID (workflow actions have no runtime)
+            if workflow_file:
+                runtime_id = None
+            else:
+                runtime_id = self.resolve_action_runtime(action_data, runtime_ids)

             param_schema = json.dumps(action_data.get("parameters", {}))
             out_schema = json.dumps(action_data.get("output", {}))
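The entrypoint branch above reduces to: workflow actions use the workflow file path verbatim, and script actions fall back to a `<name>.sh` / `<name>.py` lookup beside the action YAML. A standalone sketch (the function name is invented; the real loader stores a packs-relative path rather than a bare filename):

```python
from pathlib import Path
from typing import Optional

def resolve_entrypoint(actions_dir: Path, name: str, workflow_file: Optional[str]) -> str:
    """Mirror the loader's branch: the workflow file path wins;
    otherwise look for a sibling script named after the action."""
    if workflow_file:
        return workflow_file
    for ext in (".sh", ".py"):
        script = actions_dir / f"{name}{ext}"
        if script.exists():
            # simplified: the real loader stores a packs-relative path
            return script.name
    return ""
```

An empty return marks an action with no resolvable entrypoint, which the loader surfaces rather than guessing.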
@@ -423,9 +551,25 @@ class PackLoader:

             action_id = cursor.fetchone()[0]
             action_ids[ref] = action_id
-            print(f"  ✓ Action '{ref}' (ID: {action_id})")
+
+            # Link action to workflow definition if present
+            if workflow_def_id is not None:
+                cursor.execute(
+                    """
+                    UPDATE action SET workflow_def = %s, updated = NOW()
+                    WHERE id = %s
+                    """,
+                    (workflow_def_id, action_id),
+                )
+                print(
+                    f"  ✓ Action '{ref}' (ID: {action_id}) → workflow def {workflow_def_id}"
+                )
+            else:
+                print(f"  ✓ Action '{ref}' (ID: {action_id})")

         cursor.close()
+        if workflow_count > 0:
+            print(f"  ({workflow_count} workflow definition(s) registered)")
         return action_ids

     def upsert_sensors(
@@ -561,7 +705,15 @@ class PackLoader:
         return sensor_ids

     def load_pack(self):
-        """Main loading process"""
+        """Main loading process.
+
+        Components are loaded in dependency order:
+        1. Runtimes (no dependencies)
+        2. Triggers (no dependencies)
+        3. Actions (depend on runtime; workflow actions also create
+           workflow_definition records)
+        4. Sensors (depend on triggers and runtime)
+        """
         print("=" * 60)
         print(f"Pack Loader - {self.pack_name}")
         print("=" * 60)
@@ -581,7 +733,7 @@ class PackLoader:
         # Load triggers
         trigger_ids = self.upsert_triggers()

-        # Load actions (with runtime resolution)
+        # Load actions (with runtime resolution + workflow definitions)
         action_ids = self.upsert_actions(runtime_ids)

         # Load sensors
@@ -62,6 +62,7 @@ export type ExecutionResponse = {
   workflow_task?: {
     workflow_execution: number;
     task_name: string;
+    triggered_by?: string | null;
     task_index?: number | null;
     task_batch?: number | null;
     retry_count: number;

@@ -54,6 +54,7 @@ export type ExecutionSummary = {
   workflow_task?: {
     workflow_execution: number;
     task_name: string;
+    triggered_by?: string | null;
     task_index?: number | null;
     task_batch?: number | null;
     retry_count: number;
@@ -1,326 +0,0 @@
import { useState, useMemo } from "react";
import { Link } from "react-router-dom";
import { formatDistanceToNow } from "date-fns";
import {
  ChevronDown,
  ChevronRight,
  Workflow,
  CheckCircle2,
  XCircle,
  Clock,
  Loader2,
  AlertTriangle,
  Ban,
  CircleDot,
  RotateCcw,
} from "lucide-react";
import { useChildExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";

interface WorkflowTasksPanelProps {
  /** The parent (workflow) execution ID */
  parentExecutionId: number;
  /** Whether the panel starts collapsed (default: false — open by default for workflows) */
  defaultCollapsed?: boolean;
}

/** Format a duration in ms to a human-readable string. */
function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  const mins = Math.floor(secs / 60);
  const remainSecs = Math.round(secs % 60);
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}

function getStatusIcon(status: string) {
  switch (status) {
    case "completed":
      return <CheckCircle2 className="h-4 w-4 text-green-500" />;
    case "failed":
      return <XCircle className="h-4 w-4 text-red-500" />;
    case "running":
      return <Loader2 className="h-4 w-4 text-blue-500 animate-spin" />;
    case "requested":
    case "scheduling":
    case "scheduled":
      return <Clock className="h-4 w-4 text-yellow-500" />;
    case "timeout":
      return <AlertTriangle className="h-4 w-4 text-orange-500" />;
    case "canceling":
    case "cancelled":
      return <Ban className="h-4 w-4 text-gray-400" />;
    case "abandoned":
      return <AlertTriangle className="h-4 w-4 text-red-400" />;
    default:
      return <CircleDot className="h-4 w-4 text-gray-400" />;
  }
}

function getStatusBadgeClasses(status: string): string {
  switch (status) {
    case "completed":
      return "bg-green-100 text-green-800";
    case "failed":
      return "bg-red-100 text-red-800";
    case "running":
      return "bg-blue-100 text-blue-800";
    case "requested":
    case "scheduling":
    case "scheduled":
      return "bg-yellow-100 text-yellow-800";
    case "timeout":
      return "bg-orange-100 text-orange-800";
    case "canceling":
    case "cancelled":
      return "bg-gray-100 text-gray-800";
    case "abandoned":
      return "bg-red-100 text-red-600";
    default:
      return "bg-gray-100 text-gray-800";
  }
}

/**
 * Panel that displays workflow task (child) executions for a parent
 * workflow execution. Shows each task's name, action, status, and timing.
 */
export default function WorkflowTasksPanel({
  parentExecutionId,
  defaultCollapsed = false,
}: WorkflowTasksPanelProps) {
  const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed);
  const { data, isLoading, error } = useChildExecutions(parentExecutionId);

  // Subscribe to the unfiltered execution stream so that child execution
  // WebSocket notifications update the ["executions", { parent }] query cache
  // in real-time (the detail page only subscribes filtered by its own ID).
  useExecutionStream({ enabled: true });

  const tasks = useMemo(() => {
    if (!data?.data) return [];
    return data.data;
  }, [data]);

  const summary = useMemo(() => {
    const total = tasks.length;
    const completed = tasks.filter((t) => t.status === "completed").length;
    const failed = tasks.filter((t) => t.status === "failed").length;
    const running = tasks.filter(
      (t) =>
        t.status === "running" ||
        t.status === "requested" ||
        t.status === "scheduling" ||
        t.status === "scheduled",
    ).length;
    const other = total - completed - failed - running;
    return { total, completed, failed, running, other };
  }, [tasks]);

  if (!isLoading && tasks.length === 0 && !error) {
    // No child tasks — nothing to show
    return null;
  }

  return (
    <div className="bg-white shadow rounded-lg">
      {/* Header */}
      <button
        onClick={() => setIsCollapsed(!isCollapsed)}
        className="w-full flex items-center justify-between p-6 text-left hover:bg-gray-50 rounded-lg transition-colors"
      >
        <div className="flex items-center gap-3">
          {isCollapsed ? (
            <ChevronRight className="h-5 w-5 text-gray-400" />
          ) : (
            <ChevronDown className="h-5 w-5 text-gray-400" />
          )}
          <Workflow className="h-5 w-5 text-indigo-500" />
          <h2 className="text-xl font-semibold">Workflow Tasks</h2>
          {!isLoading && (
            <span className="text-sm text-gray-500">
              ({summary.total} task{summary.total !== 1 ? "s" : ""})
            </span>
          )}
        </div>

        {/* Summary badges */}
        {!isCollapsed || !isLoading ? (
          <div className="flex items-center gap-2">
            {summary.completed > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800">
                <CheckCircle2 className="h-3 w-3" />
                {summary.completed}
              </span>
            )}
            {summary.running > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800">
                <Loader2 className="h-3 w-3 animate-spin" />
                {summary.running}
              </span>
            )}
            {summary.failed > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-red-100 text-red-800">
                <XCircle className="h-3 w-3" />
                {summary.failed}
              </span>
            )}
            {summary.other > 0 && (
              <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-gray-100 text-gray-700">
                {summary.other}
              </span>
            )}
          </div>
        ) : null}
      </button>

      {/* Content */}
      {!isCollapsed && (
        <div className="px-6 pb-6">
          {isLoading && (
            <div className="flex items-center justify-center py-8">
              <Loader2 className="h-5 w-5 animate-spin text-gray-400" />
              <span className="ml-2 text-sm text-gray-500">
                Loading workflow tasks…
              </span>
            </div>
          )}

          {error && (
            <div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded text-sm">
              Error loading workflow tasks:{" "}
              {error instanceof Error ? error.message : "Unknown error"}
            </div>
          )}

          {!isLoading && !error && tasks.length > 0 && (
            <div className="space-y-2">
              {/* Column headers */}
              <div className="grid grid-cols-12 gap-3 px-3 py-2 text-xs font-medium text-gray-500 uppercase tracking-wider border-b border-gray-100">
                <div className="col-span-1">#</div>
                <div className="col-span-3">Task</div>
                <div className="col-span-3">Action</div>
                <div className="col-span-2">Status</div>
                <div className="col-span-2">Duration</div>
                <div className="col-span-1">Retry</div>
              </div>

              {/* Task rows */}
              {tasks.map((task, idx) => {
                const wt = task.workflow_task;
                const taskName = wt?.task_name ?? `Task ${idx + 1}`;
                const retryCount = wt?.retry_count ?? 0;
                const maxRetries = wt?.max_retries ?? 0;
                const timedOut = wt?.timed_out ?? false;

                // Compute duration from started_at → updated (actual run time)
                const startedAt = task.started_at
                  ? new Date(task.started_at)
                  : null;
                const created = new Date(task.created);
                const updated = new Date(task.updated);
                const isTerminal =
                  task.status === "completed" ||
                  task.status === "failed" ||
                  task.status === "timeout";
                const durationMs =
                  wt?.duration_ms ??
                  (isTerminal && startedAt
                    ? updated.getTime() - startedAt.getTime()
                    : null);

                return (
                  <Link
                    key={task.id}
                    to={`/executions/${task.id}`}
                    className="grid grid-cols-12 gap-3 px-3 py-3 rounded-lg hover:bg-gray-50 transition-colors items-center group"
                  >
                    {/* Index */}
                    <div className="col-span-1 text-sm text-gray-400 font-mono">
                      {idx + 1}
                    </div>

                    {/* Task name */}
                    <div className="col-span-3 flex items-center gap-2 min-w-0">
                      {getStatusIcon(task.status)}
                      <span
                        className="text-sm font-medium text-gray-900 truncate group-hover:text-blue-600"
                        title={taskName}
                      >
                        {taskName}
                      </span>
                      {wt?.task_index != null && (
                        <span className="text-xs text-gray-400 flex-shrink-0">
                          [{wt.task_index}]
                        </span>
                      )}
                    </div>

                    {/* Action ref */}
                    <div className="col-span-3 min-w-0">
                      <span
                        className="text-sm text-gray-600 truncate block"
                        title={task.action_ref}
                      >
                        {task.action_ref}
                      </span>
                    </div>

                    {/* Status badge */}
                    <div className="col-span-2 flex items-center gap-1.5">
                      <span
                        className={`inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium ${getStatusBadgeClasses(task.status)}`}
                      >
                        {task.status}
                      </span>
                      {timedOut && (
                        <span title="Timed out">
                          <AlertTriangle className="h-3.5 w-3.5 text-orange-500" />
                        </span>
                      )}
                    </div>

                    {/* Duration */}
                    <div className="col-span-2 text-sm text-gray-500">
                      {task.status === "running" ? (
                        <span className="text-blue-600">
                          {formatDistanceToNow(startedAt ?? created, {
                            addSuffix: false,
                          })}
                          …
                        </span>
                      ) : durationMs != null && durationMs > 0 ? (
                        formatDuration(durationMs)
                      ) : (
                        <span className="text-gray-300">—</span>
                      )}
                    </div>

                    {/* Retry info */}
                    <div className="col-span-1 text-sm text-gray-500">
                      {maxRetries > 0 ? (
                        <span
                          className="inline-flex items-center gap-0.5"
                          title={`Attempt ${retryCount + 1} of ${maxRetries + 1}`}
                        >
                          <RotateCcw className="h-3 w-3" />
                          {retryCount}/{maxRetries}
                        </span>
                      ) : (
                        <span className="text-gray-300">—</span>
                      )}
                    </div>
                  </Link>
                );
              })}
            </div>
          )}
        </div>
      )}
    </div>
  );
}
@@ -1,4 +1,4 @@
-import { useState, useMemo, useEffect, useCallback } from "react";
+import { useState, useMemo, useEffect, useCallback, useRef } from "react";
 import { formatDistanceToNow } from "date-fns";
 import {
   ChevronDown,
@@ -14,6 +14,7 @@ import {
   Download,
   Eye,
   X,
+  Radio,
 } from "lucide-react";
 import {
   useExecutionArtifacts,
@@ -136,7 +137,110 @@ function TextFileDetail({
   const [content, setContent] = useState<string | null>(null);
   const [loadError, setLoadError] = useState<string | null>(null);
   const [isLoadingContent, setIsLoadingContent] = useState(true);
+  const [isStreaming, setIsStreaming] = useState(false);
+  const [isWaiting, setIsWaiting] = useState(false);
+  const [streamDone, setStreamDone] = useState(false);
+  const preRef = useRef<HTMLPreElement>(null);
+  const eventSourceRef = useRef<EventSource | null>(null);
+  // Track whether the user has scrolled away from the bottom so we can
+  // auto-scroll only when they're already at the end.
+  const userScrolledAwayRef = useRef(false);
+
+  // Auto-scroll the <pre> to the bottom when new content arrives,
+  // unless the user has deliberately scrolled up.
+  const scrollToBottom = useCallback(() => {
+    const el = preRef.current;
+    if (el && !userScrolledAwayRef.current) {
+      el.scrollTop = el.scrollHeight;
+    }
+  }, []);
+
+  // Detect whether the user has scrolled away from the bottom.
+  const handleScroll = useCallback(() => {
+    const el = preRef.current;
+    if (!el) return;
+    const atBottom = el.scrollHeight - el.scrollTop - el.clientHeight < 24;
+    userScrolledAwayRef.current = !atBottom;
+  }, []);
+
+  // ---- SSE streaming path (used when execution is running) ----
+  useEffect(() => {
+    if (!isRunning) return;
+
+    const token = localStorage.getItem("access_token");
+    if (!token) {
+      setLoadError("No authentication token available");
+      setIsLoadingContent(false);
+      return;
+    }
+
+    const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/stream?token=${encodeURIComponent(token)}`;
+    const es = new EventSource(url);
+    eventSourceRef.current = es;
|
||||
setIsStreaming(true);
|
||||
setStreamDone(false);
|
||||
|
||||
es.addEventListener("waiting", (e: MessageEvent) => {
|
||||
setIsWaiting(true);
|
||||
setIsLoadingContent(false);
|
||||
// If the message says "File found", the next event will be content
|
||||
if (e.data?.includes("File found")) {
|
||||
setIsWaiting(false);
|
||||
}
|
||||
});
|
||||
|
||||
es.addEventListener("content", (e: MessageEvent) => {
|
||||
setContent(e.data);
|
||||
setLoadError(null);
|
||||
setIsLoadingContent(false);
|
||||
setIsWaiting(false);
|
||||
// Scroll after React renders the new content
|
||||
requestAnimationFrame(scrollToBottom);
|
||||
});
|
||||
|
||||
es.addEventListener("append", (e: MessageEvent) => {
|
||||
setContent((prev) => (prev ?? "") + e.data);
|
||||
setLoadError(null);
|
||||
requestAnimationFrame(scrollToBottom);
|
||||
});
|
||||
|
||||
es.addEventListener("done", () => {
|
||||
setStreamDone(true);
|
||||
setIsStreaming(false);
|
||||
es.close();
|
||||
});
|
||||
|
||||
es.addEventListener("error", (e: MessageEvent) => {
|
||||
// SSE spec fires generic error events on connection close.
|
||||
// Only show user-facing errors if the server sent an explicit event.
|
||||
if (e.data) {
|
||||
setLoadError(e.data);
|
||||
}
|
||||
});
|
||||
|
||||
es.onerror = () => {
|
||||
// Connection dropped — EventSource will auto-reconnect, but if it
|
||||
// reaches CLOSED state we fall back to the download endpoint.
|
||||
if (es.readyState === EventSource.CLOSED) {
|
||||
setIsStreaming(false);
|
||||
// If we never got any content via SSE, fall back to download
|
||||
setContent((prev) => {
|
||||
if (prev === null) {
|
||||
// Will be handled by the fetch fallback below
|
||||
}
|
||||
return prev;
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
return () => {
|
||||
es.close();
|
||||
eventSourceRef.current = null;
|
||||
setIsStreaming(false);
|
||||
};
|
||||
}, [artifactId, isRunning, scrollToBottom]);
|
||||
|
||||
// ---- Fetch fallback (used when not running, or SSE never connected) ----
|
||||
const fetchContent = useCallback(async () => {
|
||||
const token = localStorage.getItem("access_token");
|
||||
const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;
|
||||
@@ -159,16 +263,10 @@ function TextFileDetail({
|
||||
}
|
||||
}, [artifactId]);
|
||||
|
||||
// Initial load
|
||||
// When NOT running (execution completed), use download endpoint once.
|
||||
  useEffect(() => {
    if (isRunning) return;
    fetchContent();
  }, [isRunning, fetchContent]);

  // Poll while running to pick up new file versions
  useEffect(() => {
    if (!isRunning) return;
    const interval = setInterval(fetchContent, 3000);
    return () => clearInterval(interval);
  }, [isRunning, fetchContent]);

  return (
@@ -179,10 +277,19 @@ function TextFileDetail({
        {artifactName ?? "Text File"}
      </h4>
      <div className="flex items-center gap-2">
        {isRunning && (
          <div className="flex items-center gap-1 text-xs text-blue-600">
        {isStreaming && !streamDone && (
          <div className="flex items-center gap-1 text-xs text-green-600">
            <Radio className="h-3 w-3 animate-pulse" />
            <span>Streaming</span>
          </div>
        )}
        {streamDone && (
          <span className="text-xs text-gray-500">Stream complete</span>
        )}
        {isWaiting && (
          <div className="flex items-center gap-1 text-xs text-amber-600">
            <Loader2 className="h-3 w-3 animate-spin" />
            <span>Live</span>
            <span>Waiting for file…</span>
          </div>
        )}
        <button
@@ -194,7 +301,7 @@ function TextFileDetail({
      </div>
    </div>

    {isLoadingContent && (
    {isLoadingContent && !isWaiting && (
      <div className="flex items-center gap-2 py-2 text-sm text-gray-500">
        <Loader2 className="h-4 w-4 animate-spin" />
        Loading content…
@@ -206,10 +313,20 @@ function TextFileDetail({
    )}

    {!isLoadingContent && !loadError && content !== null && (
      <pre className="max-h-64 overflow-y-auto bg-gray-900 text-gray-100 rounded p-3 text-xs font-mono whitespace-pre-wrap break-all">
      <pre
        ref={preRef}
        onScroll={handleScroll}
        className="max-h-64 overflow-y-auto bg-gray-900 text-gray-100 rounded p-3 text-xs font-mono whitespace-pre-wrap break-all"
      >
        {content || <span className="text-gray-500 italic">(empty)</span>}
      </pre>
    )}

    {isWaiting && content === null && !loadError && (
      <div className="bg-gray-900 rounded p-3 text-xs text-gray-500 italic">
        Waiting for the worker to write the file…
      </div>
    )}
  </div>
);
}
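
The SSE handlers above treat a `content` event as a full snapshot and an `append` event as a delta on the current buffer. That accumulation rule can be sketched as a pure reducer (the `StreamEvent` type and `applyStreamEvent` name are illustrative, not part of the component):

```typescript
// Illustrative reducer mirroring the "content" (replace) and "append" (delta)
// handling in the SSE effect above. `prev` is null until the first event arrives.
type StreamEvent = { kind: "content" | "append"; data: string };

function applyStreamEvent(prev: string | null, ev: StreamEvent): string {
  if (ev.kind === "content") return ev.data; // full snapshot replaces the buffer
  return (prev ?? "") + ev.data; // delta appends to the buffer
}
```

In the component this is folded through React state (`setContent`), but the event semantics are the same.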

web/src/components/executions/WorkflowDetailsPanel.tsx (new file, 446 lines)
@@ -0,0 +1,446 @@
import { useState, useMemo } from "react";
import { Link } from "react-router-dom";
import { formatDistanceToNow } from "date-fns";
import {
  ChevronDown,
  ChevronRight,
  Workflow,
  ChartGantt,
  List,
  CheckCircle2,
  XCircle,
  Clock,
  Loader2,
  AlertTriangle,
  Ban,
  CircleDot,
  RotateCcw,
} from "lucide-react";
import type { ExecutionSummary } from "@/api";
import { useChildExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import WorkflowTimelineDAG, {
  type ParentExecutionInfo,
} from "@/components/executions/workflow-timeline";

// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------

type TabId = "timeline" | "tasks";

interface WorkflowDetailsPanelProps {
  /** The parent (workflow) execution */
  parentExecution: ParentExecutionInfo;
  /** The action_ref of the parent execution (used to fetch workflow def) */
  actionRef: string;
  /** Whether the panel starts collapsed (default: false) */
  defaultCollapsed?: boolean;
  /** Which tab to show initially (default: "timeline") */
  defaultTab?: TabId;
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  const mins = Math.floor(secs / 60);
  const remainSecs = Math.round(secs % 60);
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}

function getStatusIcon(status: string) {
  switch (status) {
    case "completed":
      return <CheckCircle2 className="h-4 w-4 text-green-500" />;
    case "failed":
      return <XCircle className="h-4 w-4 text-red-500" />;
    case "running":
      return <Loader2 className="h-4 w-4 text-blue-500 animate-spin" />;
    case "requested":
    case "scheduling":
    case "scheduled":
      return <Clock className="h-4 w-4 text-yellow-500" />;
    case "timeout":
      return <AlertTriangle className="h-4 w-4 text-orange-500" />;
    case "canceling":
    case "cancelled":
      return <Ban className="h-4 w-4 text-gray-400" />;
    case "abandoned":
      return <AlertTriangle className="h-4 w-4 text-red-400" />;
    default:
      return <CircleDot className="h-4 w-4 text-gray-400" />;
  }
}

function getStatusBadgeClasses(status: string): string {
  switch (status) {
    case "completed":
      return "bg-green-100 text-green-800";
    case "failed":
      return "bg-red-100 text-red-800";
    case "running":
      return "bg-blue-100 text-blue-800";
    case "requested":
    case "scheduling":
    case "scheduled":
      return "bg-yellow-100 text-yellow-800";
    case "timeout":
      return "bg-orange-100 text-orange-800";
    case "canceling":
    case "cancelled":
      return "bg-gray-100 text-gray-800";
    case "abandoned":
      return "bg-red-100 text-red-600";
    default:
      return "bg-gray-100 text-gray-800";
  }
}

// ---------------------------------------------------------------------------
// Component
// ---------------------------------------------------------------------------

/**
 * Combined "Workflow Details" panel that sits at the top of the execution
 * detail page for workflow executions. Contains two tabs:
 * - **Timeline** — the Gantt-style WorkflowTimelineDAG
 * - **Tasks** — the tabular list of child task executions
 */
export default function WorkflowDetailsPanel({
  parentExecution,
  actionRef,
  defaultCollapsed = false,
  defaultTab = "timeline",
}: WorkflowDetailsPanelProps) {
  const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed);
  const [activeTab, setActiveTab] = useState<TabId>(defaultTab);

  // Fetch child executions (shared between both tabs' summary badges)
  const { data, isLoading, error } = useChildExecutions(parentExecution.id);

  // Subscribe to unfiltered execution stream so child execution WebSocket
  // notifications update the query cache in real-time.
  useExecutionStream({ enabled: true });

  const tasks = useMemo(() => data?.data ?? [], [data]);

  const summary = useMemo(() => {
    const total = tasks.length;
    const completed = tasks.filter((t) => t.status === "completed").length;
    const failed = tasks.filter((t) => t.status === "failed").length;
    const running = tasks.filter(
      (t) =>
        t.status === "running" ||
        t.status === "requested" ||
        t.status === "scheduling" ||
        t.status === "scheduled",
    ).length;
    const other = total - completed - failed - running;
    return { total, completed, failed, running, other };
  }, [tasks]);

  // Don't render at all if there are no children and we're done loading
  if (!isLoading && tasks.length === 0 && !error) {
    return null;
  }

  return (
    <div className="bg-white shadow rounded-lg">
      {/* ----------------------------------------------------------------- */}
      {/* Header row: collapse toggle + title + summary badges */}
      {/* ----------------------------------------------------------------- */}
      <button
        onClick={() => setIsCollapsed(!isCollapsed)}
        className="w-full flex items-center justify-between px-6 py-4 text-left hover:bg-gray-50 rounded-t-lg transition-colors"
      >
        <div className="flex items-center gap-3">
          {isCollapsed ? (
            <ChevronRight className="h-5 w-5 text-gray-400" />
          ) : (
            <ChevronDown className="h-5 w-5 text-gray-400" />
          )}
          <Workflow className="h-5 w-5 text-indigo-500" />
          <h2 className="text-xl font-semibold">Workflow Details</h2>
          {!isLoading && (
            <span className="text-sm text-gray-500">
              ({summary.total} task{summary.total !== 1 ? "s" : ""})
            </span>
          )}
        </div>

        {/* Summary badges (always visible) */}
        <div className="flex items-center gap-2">
          {summary.completed > 0 && (
            <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800">
              <CheckCircle2 className="h-3 w-3" />
              {summary.completed}
            </span>
          )}
          {summary.running > 0 && (
            <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800">
              <Loader2 className="h-3 w-3 animate-spin" />
              {summary.running}
            </span>
          )}
          {summary.failed > 0 && (
            <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-red-100 text-red-800">
              <XCircle className="h-3 w-3" />
              {summary.failed}
            </span>
          )}
          {summary.other > 0 && (
            <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-gray-100 text-gray-700">
              {summary.other}
            </span>
          )}
        </div>
      </button>

      {/* ----------------------------------------------------------------- */}
      {/* Body (collapsible) */}
      {/* ----------------------------------------------------------------- */}
      {!isCollapsed && (
        <div className="border-t border-gray-100">
          {/* Tab bar */}
          <div className="flex items-center gap-1 px-6 pt-3 pb-0">
            <TabButton
              active={activeTab === "timeline"}
              onClick={() => setActiveTab("timeline")}
              icon={<ChartGantt className="h-4 w-4" />}
              label="Timeline"
            />
            <TabButton
              active={activeTab === "tasks"}
              onClick={() => setActiveTab("tasks")}
              icon={<List className="h-4 w-4" />}
              label="Tasks"
            />
          </div>

          {/* Tab content — both tabs stay mounted so the timeline's
              ResizeObserver remains active and containerWidth never resets. */}
          <div className={activeTab === "timeline" ? "" : "hidden"}>
            <WorkflowTimelineDAG
              parentExecution={parentExecution}
              actionRef={actionRef}
              embedded
            />
          </div>
          <div className={activeTab === "tasks" ? "" : "hidden"}>
            <TasksTab tasks={tasks} isLoading={isLoading} error={error} />
          </div>
        </div>
      )}
    </div>
  );
}

// ---------------------------------------------------------------------------
// Tab Button
// ---------------------------------------------------------------------------

function TabButton({
  active,
  onClick,
  icon,
  label,
}: {
  active: boolean;
  onClick: () => void;
  icon: React.ReactNode;
  label: string;
}) {
  return (
    <button
      onClick={(e) => {
        e.stopPropagation();
        onClick();
      }}
      className={`
        flex items-center gap-1.5 px-3 py-2 text-sm font-medium rounded-t-md
        transition-colors border-b-2
        ${
          active
            ? "text-indigo-700 border-indigo-500 bg-indigo-50/50"
            : "text-gray-500 border-transparent hover:text-gray-700 hover:bg-gray-50"
        }
      `}
    >
      {icon}
      {label}
    </button>
  );
}

// ---------------------------------------------------------------------------
// Tasks Tab — table of child task executions
// ---------------------------------------------------------------------------

function TasksTab({
  tasks,
  isLoading,
  error,
}: {
  tasks: ExecutionSummary[];
  isLoading: boolean;
  error: unknown;
}) {
  if (isLoading) {
    return (
      <div className="flex items-center justify-center py-8">
        <Loader2 className="h-5 w-5 animate-spin text-gray-400" />
        <span className="ml-2 text-sm text-gray-500">
          Loading workflow tasks…
        </span>
      </div>
    );
  }

  if (error) {
    return (
      <div className="mx-6 my-4 bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded text-sm">
        Error loading workflow tasks:{" "}
        {error instanceof Error ? error.message : "Unknown error"}
      </div>
    );
  }

  if (tasks.length === 0) {
    return (
      <div className="flex items-center justify-center py-8 text-sm text-gray-500">
        No workflow tasks yet.
      </div>
    );
  }

  return (
    <div className="px-6 pb-6 pt-2">
      <div className="space-y-2">
        {/* Column headers */}
        <div className="grid grid-cols-12 gap-3 px-3 py-2 text-xs font-medium text-gray-500 uppercase tracking-wider border-b border-gray-100">
          <div className="col-span-1">#</div>
          <div className="col-span-3">Task</div>
          <div className="col-span-3">Action</div>
          <div className="col-span-2">Status</div>
          <div className="col-span-2">Duration</div>
          <div className="col-span-1">Retry</div>
        </div>

        {/* Task rows */}
        {tasks.map((task, idx) => {
          const wt = task.workflow_task;
          const taskName = wt?.task_name ?? `Task ${idx + 1}`;
          const retryCount = wt?.retry_count ?? 0;
          const maxRetries = wt?.max_retries ?? 0;
          const timedOut = wt?.timed_out ?? false;

          // Compute duration from started_at → updated (actual run time)
          const startedAt = task.started_at ? new Date(task.started_at) : null;
          const created = new Date(task.created);
          const updated = new Date(task.updated);
          const isTerminal =
            task.status === "completed" ||
            task.status === "failed" ||
            task.status === "timeout";
          const durationMs =
            wt?.duration_ms ??
            (isTerminal && startedAt
              ? updated.getTime() - startedAt.getTime()
              : null);

          return (
            <Link
              key={task.id}
              to={`/executions/${task.id}`}
              className="grid grid-cols-12 gap-3 px-3 py-3 rounded-lg hover:bg-gray-50 transition-colors items-center group"
            >
              {/* Index */}
              <div className="col-span-1 text-sm text-gray-400 font-mono">
                {idx + 1}
              </div>

              {/* Task name */}
              <div className="col-span-3 flex items-center gap-2 min-w-0">
                {getStatusIcon(task.status)}
                <span
                  className="text-sm font-medium text-gray-900 truncate group-hover:text-blue-600"
                  title={taskName}
                >
                  {taskName}
                </span>
                {wt?.task_index != null && (
                  <span className="text-xs text-gray-400 flex-shrink-0">
                    [{wt.task_index}]
                  </span>
                )}
              </div>

              {/* Action ref */}
              <div className="col-span-3 min-w-0">
                <span
                  className="text-sm text-gray-600 truncate block"
                  title={task.action_ref}
                >
                  {task.action_ref}
                </span>
              </div>

              {/* Status badge */}
              <div className="col-span-2 flex items-center gap-1.5">
                <span
                  className={`inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium ${getStatusBadgeClasses(task.status)}`}
                >
                  {task.status}
                </span>
                {timedOut && (
                  <span title="Timed out">
                    <AlertTriangle className="h-3.5 w-3.5 text-orange-500" />
                  </span>
                )}
              </div>

              {/* Duration */}
              <div className="col-span-2 text-sm text-gray-500">
                {task.status === "running" ? (
                  <span className="text-blue-600">
                    {formatDistanceToNow(startedAt ?? created, {
                      addSuffix: false,
                    })}
                    …
                  </span>
                ) : durationMs != null && durationMs > 0 ? (
                  formatDuration(durationMs)
                ) : (
                  <span className="text-gray-300">—</span>
                )}
              </div>

              {/* Retry info */}
              <div className="col-span-1 text-sm text-gray-500">
                {maxRetries > 0 ? (
                  <span
                    className="inline-flex items-center gap-0.5"
                    title={`Attempt ${retryCount + 1} of ${maxRetries + 1}`}
                  >
                    <RotateCcw className="h-3 w-3" />
                    {retryCount}/{maxRetries}
                  </span>
                ) : (
                  <span className="text-gray-300">—</span>
                )}
              </div>
            </Link>
          );
        })}
      </div>
    </div>
  );
}
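
The `formatDuration` helper in the file above picks a unit by magnitude (ms, fractional seconds, minutes+seconds, hours+minutes). Restated standalone so its thresholds can be exercised in isolation:

```typescript
// Standalone copy of the formatDuration helper from the file above,
// so its unit thresholds can be checked outside the component.
function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`; // sub-second: raw milliseconds
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`; // under a minute: one decimal
  const mins = Math.floor(secs / 60);
  const remainSecs = Math.round(secs % 60);
  if (mins < 60) return `${mins}m ${remainSecs}s`; // under an hour
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}
```

Note the rounding at each tier: `remainSecs` is rounded, so e.g. 90 000 ms renders as "1m 30s".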

@@ -0,0 +1,461 @@
/**
 * TimelineModal — Full-screen modal for the Workflow Timeline DAG.
 *
 * Opens as a portal overlay with:
 * - A much larger vertical layout (more lane height, bigger bars)
 * - A timescale zoom slider that re-computes the layout at wider widths
 * - Horizontal scroll for zoomed-in views
 * - All the same interactions as the inline renderer (hover, click, double-click)
 * - Escape key / close button to dismiss
 */

import { useState, useRef, useCallback, useMemo, useEffect } from "react";
import { createPortal } from "react-dom";
import { X, ZoomIn, ZoomOut, RotateCcw, GitBranch } from "lucide-react";

import type {
  TimelineTask,
  TimelineEdge,
  TimelineMilestone,
  LayoutConfig,
  ComputedLayout,
} from "./types";
import { DEFAULT_LAYOUT } from "./types";
import { computeLayout } from "./layout";
import TimelineRenderer from "./TimelineRenderer";

// ---------------------------------------------------------------------------
// Props
// ---------------------------------------------------------------------------

interface TimelineModalProps {
  /** Whether the modal is open */
  isOpen: boolean;
  /** Callback to close the modal */
  onClose: () => void;
  /** Timeline tasks */
  tasks: TimelineTask[];
  /** Structural dependency edges between tasks */
  taskEdges: TimelineEdge[];
  /** Synthetic milestone nodes */
  milestones: TimelineMilestone[];
  /** Edges connecting milestones */
  milestoneEdges: TimelineEdge[];
  /** Direct task→task edge keys replaced by milestone-routed paths */
  suppressedEdgeKeys?: Set<string>;
  /** Callback when a task is double-clicked (navigate to execution) */
  onTaskClick?: (task: TimelineTask) => void;
  /** Summary stats for the header */
  summary: {
    total: number;
    completed: number;
    failed: number;
    running: number;
    other: number;
    durationMs: number | null;
  };
}

// ---------------------------------------------------------------------------
// Constants
// ---------------------------------------------------------------------------

/** The modal layout uses more generous spacing */
const MODAL_LAYOUT: LayoutConfig = {
  ...DEFAULT_LAYOUT,
  laneHeight: 44,
  barHeight: 28,
  lanePadding: 8,
  milestoneSize: 12,
  paddingTop: 44,
  paddingBottom: 24,
  paddingLeft: 24,
  paddingRight: 24,
  minBarWidth: 12,
};

const MIN_ZOOM = 1;
const MAX_ZOOM = 8;
const ZOOM_STEP = 0.25;

// ---------------------------------------------------------------------------
// Component
// ---------------------------------------------------------------------------

export default function TimelineModal({
  isOpen,
  onClose,
  tasks,
  taskEdges,
  milestones,
  milestoneEdges,
  suppressedEdgeKeys,
  onTaskClick,
  summary,
}: TimelineModalProps) {
  const [zoom, setZoom] = useState(1);
  const scrollRef = useRef<HTMLDivElement>(null);
  const containerRef = useRef<HTMLDivElement>(null);
  const [containerWidth, setContainerWidth] = useState(1200);

  // ---- Observe container width ----
  useEffect(() => {
    if (!isOpen) return;
    const el = containerRef.current;
    if (!el) return;

    // Initial measurement
    setContainerWidth(el.clientWidth);

    const observer = new ResizeObserver((entries) => {
      for (const entry of entries) {
        if (entry.contentRect.width > 0) {
          setContainerWidth(entry.contentRect.width);
        }
      }
    });
    observer.observe(el);
    return () => observer.disconnect();
  }, [isOpen]);

  // ---- Keyboard handling (Escape to close) ----
  useEffect(() => {
    if (!isOpen) return;
    const handleKey = (e: KeyboardEvent) => {
      if (e.key === "Escape") {
        onClose();
      }
    };
    window.addEventListener("keydown", handleKey);
    return () => window.removeEventListener("keydown", handleKey);
  }, [isOpen, onClose]);

  // ---- Prevent body scroll when modal is open ----
  useEffect(() => {
    if (!isOpen) return;
    const prev = document.body.style.overflow;
    document.body.style.overflow = "hidden";
    return () => {
      document.body.style.overflow = prev;
    };
  }, [isOpen]);

  // ---- Adjust layout config based on task count ----
  const layoutConfig: LayoutConfig = useMemo(() => {
    const taskCount = tasks.length;
    if (taskCount > 80) {
      return {
        ...MODAL_LAYOUT,
        laneHeight: 32,
        barHeight: 20,
        lanePadding: 6,
      };
    }
    if (taskCount > 40) {
      return {
        ...MODAL_LAYOUT,
        laneHeight: 38,
        barHeight: 24,
        lanePadding: 7,
      };
    }
    return MODAL_LAYOUT;
  }, [tasks.length]);

  // ---- Compute layout at the zoomed width ----
  const layout: ComputedLayout | null = useMemo(() => {
    if (tasks.length === 0) return null;
    // Zoom stretches the timeline horizontally
    const effectiveWidth = Math.max(containerWidth * zoom, 600);
    return computeLayout(
      tasks,
      taskEdges,
      milestones,
      milestoneEdges,
      effectiveWidth,
      layoutConfig,
      suppressedEdgeKeys,
    );
  }, [
    tasks,
    taskEdges,
    milestones,
    milestoneEdges,
    containerWidth,
    zoom,
    layoutConfig,
    suppressedEdgeKeys,
  ]);

  // ---- Zoom handlers ----
  const handleZoomIn = useCallback(() => {
    setZoom((z) => Math.min(MAX_ZOOM, z + ZOOM_STEP));
  }, []);

  const handleZoomOut = useCallback(() => {
    setZoom((z) => Math.max(MIN_ZOOM, z - ZOOM_STEP));
  }, []);

  const handleZoomReset = useCallback(() => {
    setZoom(1);
    if (scrollRef.current) {
      scrollRef.current.scrollLeft = 0;
    }
  }, []);

  const handleZoomSlider = useCallback(
    (e: React.ChangeEvent<HTMLInputElement>) => {
      setZoom(parseFloat(e.target.value));
    },
    [],
  );

  // ---- Wheel zoom on the timeline area ----
  const handleWheel = useCallback((e: React.WheelEvent) => {
    // Only zoom on Ctrl+wheel or meta+wheel to avoid interfering with normal scroll
    if (!e.ctrlKey && !e.metaKey) return;

    e.preventDefault();
    const delta = e.deltaY > 0 ? -ZOOM_STEP : ZOOM_STEP;
    setZoom((z) => {
      const newZoom = Math.max(MIN_ZOOM, Math.min(MAX_ZOOM, z + delta));
      return newZoom;
    });
  }, []);

  if (!isOpen) return null;

  const content = (
    <div
      className="fixed inset-0 z-50 flex flex-col"
      style={{ backgroundColor: "rgba(0, 0, 0, 0.6)" }}
      onClick={(e) => {
        // Close on backdrop click
        if (e.target === e.currentTarget) onClose();
      }}
    >
      {/* Modal container */}
      <div className="flex flex-col m-4 md:m-6 lg:m-8 bg-white rounded-xl shadow-2xl overflow-hidden flex-1 min-h-0">
        {/* ---- Header ---- */}
        <div className="flex items-center justify-between px-5 py-3 border-b border-gray-200 bg-gray-50/80 flex-shrink-0">
          <div className="flex items-center gap-3">
            <GitBranch className="h-4 w-4 text-indigo-500" />
            <h2 className="text-sm font-semibold text-gray-800">
              Workflow Timeline
            </h2>
            <span className="text-xs text-gray-400">
              {summary.total} task{summary.total !== 1 ? "s" : ""}
              {summary.durationMs != null && (
                <> · {formatDurationShort(summary.durationMs)}</>
              )}
            </span>

            {/* Summary badges */}
            <div className="flex items-center gap-1.5 ml-2">
              {summary.completed > 0 && (
                <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-green-100 text-green-700">
                  {summary.completed} ✓
                </span>
              )}
              {summary.running > 0 && (
                <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-blue-100 text-blue-700">
                  {summary.running} ⟳
                </span>
              )}
              {summary.failed > 0 && (
                <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-red-100 text-red-700">
                  {summary.failed} ✗
                </span>
              )}
              {summary.other > 0 && (
                <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-gray-100 text-gray-500">
                  {summary.other}
                </span>
              )}
            </div>
          </div>

          {/* Right: zoom controls + close */}
          <div className="flex items-center gap-3">
            {/* Zoom controls */}
            <div className="flex items-center gap-2 bg-white border border-gray-200 rounded-lg px-2.5 py-1.5 shadow-sm">
              <button
                onClick={handleZoomOut}
                disabled={zoom <= MIN_ZOOM}
                className="p-0.5 text-gray-500 hover:text-gray-800 disabled:text-gray-300 disabled:cursor-not-allowed"
                title="Zoom out"
              >
                <ZoomOut className="h-3.5 w-3.5" />
              </button>

              <input
                type="range"
                min={MIN_ZOOM}
                max={MAX_ZOOM}
                step={ZOOM_STEP}
                value={zoom}
                onChange={handleZoomSlider}
                className="w-24 h-1 accent-indigo-500 cursor-pointer"
                title={`Timescale: ${Math.round(zoom * 100)}%`}
              />

              <button
                onClick={handleZoomIn}
                disabled={zoom >= MAX_ZOOM}
                className="p-0.5 text-gray-500 hover:text-gray-800 disabled:text-gray-300 disabled:cursor-not-allowed"
                title="Zoom in"
              >
                <ZoomIn className="h-3.5 w-3.5" />
              </button>

              <span className="text-xs text-gray-500 font-mono tabular-nums w-10 text-center">
                {Math.round(zoom * 100)}%
              </span>

              {zoom !== 1 && (
                <button
                  onClick={handleZoomReset}
                  className="p-0.5 text-gray-400 hover:text-gray-700"
                  title="Reset zoom"
                >
                  <RotateCcw className="h-3 w-3" />
                </button>
              )}
            </div>

            {/* Close button */}
            <button
              onClick={onClose}
              className="p-1.5 text-gray-400 hover:text-gray-700 hover:bg-gray-100 rounded-lg transition-colors"
              title="Close (Esc)"
            >
              <X className="h-5 w-5" />
            </button>
          </div>
        </div>

        {/* ---- Legend ---- */}
        <div className="flex items-center gap-3 px-5 py-2 text-[10px] text-gray-400 border-b border-gray-100 flex-shrink-0">
          <LegendItem color="#22c55e" label="Completed" />
          <LegendItem color="#3b82f6" label="Running" />
          <LegendItem color="#ef4444" label="Failed" dashed />
          <LegendItem color="#f97316" label="Timeout" dotted />
          <LegendItem color="#9ca3af" label="Pending" />
          <span className="ml-2 text-gray-300">|</span>
          <EdgeLegendItem color="#22c55e" label="Succeeded" />
          <EdgeLegendItem color="#ef4444" label="Failed" dashed />
          <EdgeLegendItem color="#9ca3af" label="Always" />
          <span className="ml-auto text-gray-300">
            Ctrl+scroll to zoom · Click task to highlight path · Double-click to
            view
          </span>
        </div>

        {/* ---- Timeline body ---- */}
        <div
          ref={containerRef}
          className="flex-1 min-h-0 overflow-auto"
|
||||
onWheel={handleWheel}
|
||||
>
|
||||
{layout ? (
|
||||
<div ref={scrollRef} className="min-h-full">
|
||||
<TimelineRenderer
|
||||
layout={layout}
|
||||
tasks={tasks}
|
||||
config={layoutConfig}
|
||||
onTaskClick={onTaskClick}
|
||||
idPrefix="modal-"
|
||||
/>
|
||||
</div>
|
||||
) : (
|
||||
<div className="flex items-center justify-center h-full">
|
||||
<span className="text-sm text-gray-400">No tasks to display</span>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
|
||||
return createPortal(content, document.body);
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
// Legend sub-components (duplicated from WorkflowTimelineDAG to keep the modal
// self-contained — these are tiny presentational helpers)
// ---------------------------------------------------------------------------

function LegendItem({
  color,
  label,
  dashed,
  dotted,
}: {
  color: string;
  label: string;
  dashed?: boolean;
  dotted?: boolean;
}) {
  return (
    <span className="flex items-center gap-1">
      <span
        className="inline-block w-5 h-2.5 rounded-sm"
        style={{
          backgroundColor: color,
          opacity: 0.7,
          border: dashed
            ? `1px dashed ${color}`
            : dotted
              ? `1px dotted ${color}`
              : undefined,
        }}
      />
      <span>{label}</span>
    </span>
  );
}

function EdgeLegendItem({
  color,
  label,
  dashed,
}: {
  color: string;
  label: string;
  dashed?: boolean;
}) {
  return (
    <span className="flex items-center gap-1">
      <svg width="16" height="8" viewBox="0 0 16 8">
        <line
          x1="0"
          y1="4"
          x2="16"
          y2="4"
          stroke={color}
          strokeWidth="1.5"
          strokeDasharray={dashed ? "3 2" : undefined}
          opacity="0.7"
        />
        <polygon points="12,1 16,4 12,7" fill={color} opacity="0.6" />
      </svg>
      <span>{label}</span>
    </span>
  );
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function formatDurationShort(ms: number): string {
  if (ms < 1000) return `${Math.round(ms)}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  // Round to whole seconds before splitting into minutes so that e.g.
  // 119.6s renders as "2m 0s", not "1m 60s".
  const totalSecs = Math.round(secs);
  const mins = Math.floor(totalSecs / 60);
  const remainSecs = totalSecs % 60;
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}

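For reference, the formatter's thresholds behave like this. The sketch below re-declares the function standalone (as `fmt`, a hypothetical local name) with whole seconds computed before the minute split, so a value such as 119.6 s carries to "2m 0s" rather than "1m 60s":

```typescript
// Same thresholds as formatDurationShort: ms below 1s, one-decimal seconds
// below 1m, then "Xm Ys", then "Xh Ym".
function fmt(ms: number): string {
  if (ms < 1000) return `${Math.round(ms)}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  const totalSecs = Math.round(secs); // round once, then split
  const mins = Math.floor(totalSecs / 60);
  if (mins < 60) return `${mins}m ${totalSecs % 60}s`;
  const hrs = Math.floor(mins / 60);
  return `${hrs}h ${mins % 60}m`;
}

console.log(fmt(950)); // "950ms"
console.log(fmt(59_400)); // "59.4s"
console.log(fmt(119_600)); // "2m 0s" (rounded carry)
console.log(fmt(3_725_000)); // "1h 2m"
```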
web/src/components/executions/workflow-timeline/TimelineRenderer.tsx (1053 lines, new file): diff suppressed because it is too large

@@ -0,0 +1,572 @@
/**
 * WorkflowTimelineDAG — Orchestrator component for the Prefect-style
 * workflow run timeline visualization.
 *
 * This component:
 *   1. Fetches the workflow definition (for transition metadata)
 *   2. Transforms child execution summaries into timeline structures
 *   3. Computes the DAG layout (lanes, positions, edges)
 *   4. Delegates rendering to TimelineRenderer
 *
 * It is designed to be embedded in the ExecutionDetailPage for workflow
 * executions, receiving child execution data from the parent.
 */

import { useMemo, useRef, useCallback, useState, useEffect } from "react";
import { useNavigate } from "react-router-dom";
import type { ExecutionSummary } from "@/api";
import { useWorkflow } from "@/hooks/useWorkflows";
import { useChildExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import {
  ChartGantt,
  ChevronDown,
  ChevronRight,
  Loader2,
  Maximize2,
} from "lucide-react";

import type {
  TimelineTask,
  TimelineEdge,
  TimelineMilestone,
  WorkflowDefinition,
  LayoutConfig,
} from "./types";
import { DEFAULT_LAYOUT } from "./types";
import {
  buildTimelineTasks,
  collapseWithItemsGroups,
  buildEdges,
  buildMilestones,
} from "./data";
import { computeLayout } from "./layout";
import TimelineRenderer from "./TimelineRenderer";
import TimelineModal from "./TimelineModal";

// ---------------------------------------------------------------------------
// Minimal parent execution shape accepted by this component.
// Both ExecutionResponse and ExecutionSummary satisfy this interface,
// so callers don't need an ugly cast.
// ---------------------------------------------------------------------------

export interface ParentExecutionInfo {
  id: number;
  action_ref: string;
  status: string;
  created: string;
  updated: string;
  started_at?: string | null;
}

// ---------------------------------------------------------------------------
// Props
// ---------------------------------------------------------------------------

interface WorkflowTimelineDAGProps {
  /** The parent (workflow) execution — accepts ExecutionResponse or ExecutionSummary */
  parentExecution: ParentExecutionInfo;
  /** The action_ref of the parent execution (used to fetch workflow def) */
  actionRef: string;
  /** Whether the panel starts collapsed */
  defaultCollapsed?: boolean;
  /**
   * When true, renders only the timeline content (legend, renderer, modal)
   * without the outer card wrapper, header button, or collapse toggle.
   * Used when the component is embedded inside another panel (e.g. WorkflowDetailsPanel).
   */
  embedded?: boolean;
}

// ---------------------------------------------------------------------------
// Component
// ---------------------------------------------------------------------------

export default function WorkflowTimelineGraph({
  parentExecution,
  actionRef,
  defaultCollapsed = false,
  embedded = false,
}: WorkflowTimelineDAGProps) {
  const navigate = useNavigate();
  const containerRef = useRef<HTMLDivElement>(null);
  const [isCollapsed, setIsCollapsed] = useState(
    embedded ? false : defaultCollapsed,
  );
  const [isModalOpen, setIsModalOpen] = useState(false);
  const [containerWidth, setContainerWidth] = useState(900);
  const [nowMs, setNowMs] = useState(Date.now);

  // ---- Determine if the workflow is still in-flight ----
  const isTerminal = [
    "completed",
    "failed",
    "timeout",
    "cancelled",
    "abandoned",
  ].includes(parentExecution.status);

  // ---- Smooth animation via requestAnimationFrame ----
  // While the workflow is running and the panel is visible, tick at display
  // refresh rate (~60fps) so running task bars and the time axis grow smoothly.
  useEffect(() => {
    if (isTerminal || (!embedded && isCollapsed)) return;
    let rafId: number;
    const tick = () => {
      setNowMs(Date.now());
      rafId = requestAnimationFrame(tick);
    };
    rafId = requestAnimationFrame(tick);
    return () => cancelAnimationFrame(rafId);
  }, [isTerminal, isCollapsed, embedded]);

  // ---- Data fetching ----

  // Fetch child executions
  const { data: childData, isLoading: childrenLoading } = useChildExecutions(
    parentExecution.id,
  );

  // Subscribe to real-time execution updates so child tasks update live
  useExecutionStream({ enabled: true });

  // Fetch workflow definition for transition metadata.
  // The workflow ref matches the action ref for workflow actions.
  const { data: workflowData } = useWorkflow(actionRef);

  const childExecutions: ExecutionSummary[] = useMemo(() => {
    return childData?.data ?? [];
  }, [childData]);

  const workflowDef: WorkflowDefinition | null = useMemo(() => {
    if (!workflowData?.data?.definition) return null;
    return workflowData.data.definition as WorkflowDefinition;
  }, [workflowData]);

  // ---- Observe container width for responsive layout ----
  useEffect(() => {
    const el = containerRef.current;
    if (!el) return;

    const observer = new ResizeObserver((entries) => {
      for (const entry of entries) {
        const w = entry.contentRect.width;
        if (w > 0) setContainerWidth(w);
      }
    });

    observer.observe(el);
    return () => observer.disconnect();
  }, [isCollapsed]);

  // ---- Build timeline data structures ----
  // Split into two phases:
  //   1. Structural memo — edges and upstream/downstream links. These depend
  //      only on the set of child executions and the workflow definition, NOT
  //      on the current time. Recomputes only when real data changes.
  //   2. Per-frame memo — task time positions, milestones, and layout. These
  //      depend on `nowMs` so they update every animation frame (~60fps) while
  //      the workflow is running, giving smooth bar growth.

  // Phase 1: Build tasks (without time-dependent endMs) and compute edges.
  // `buildEdges` mutates tasks' upstreamIds/downstreamIds, so we must call
  // it in the same memo that creates the task objects.
  const { structuralTasks, taskEdges } = useMemo(() => {
    if (childExecutions.length === 0) {
      return {
        structuralTasks: [] as TimelineTask[],
        taskEdges: [] as TimelineEdge[],
      };
    }

    // Build individual tasks, then collapse large with_items groups into
    // single synthetic nodes before computing edges.
    const rawTasks = buildTimelineTasks(childExecutions, workflowDef);
    const { tasks: structuralTasks, memberToGroup } = collapseWithItemsGroups(
      rawTasks,
      childExecutions,
      workflowDef,
    );

    // Derive dependency edges (purely structural — no time dependency).
    // Pass the collapse mapping so edges redirect to group nodes.
    const taskEdges = buildEdges(
      structuralTasks,
      childExecutions,
      workflowDef,
      memberToGroup,
    );

    return { structuralTasks, taskEdges };
  }, [childExecutions, workflowDef]);

  // Phase 2: Patch running-task time positions and build milestones.
  // This runs every animation frame while the workflow is active.
  const { tasks, milestones, milestoneEdges, suppressedEdgeKeys } =
    useMemo(() => {
      if (structuralTasks.length === 0) {
        return {
          tasks: [] as TimelineTask[],
          milestones: [] as TimelineMilestone[],
          milestoneEdges: [] as TimelineEdge[],
          suppressedEdgeKeys: new Set<string>(),
        };
      }

      // Patch endMs / durationMs for running tasks so bars grow in real time.
      // We shallow-clone each task that needs updating to keep React diffing
      // efficient (unchanged tasks keep the same object identity).
      const tasks = structuralTasks.map((t) => {
        if (t.state === "running" && t.startMs != null) {
          const endMs = nowMs;
          return { ...t, endMs, durationMs: endMs - t.startMs };
        }
        return t;
      });

      // Build milestones (start/end diamonds, merge/fork junctions)
      const parentAsSummary: ExecutionSummary = {
        id: parentExecution.id,
        action_ref: parentExecution.action_ref,
        status: parentExecution.status as ExecutionSummary["status"],
        created: parentExecution.created,
        updated: parentExecution.updated,
        started_at: parentExecution.started_at,
      };
      const { milestones, milestoneEdges, suppressedEdgeKeys } =
        buildMilestones(tasks, parentAsSummary);

      return { tasks, milestones, milestoneEdges, suppressedEdgeKeys };
    }, [structuralTasks, parentExecution, nowMs]);

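The shallow-clone trick in phase 2 (fresh objects only for tasks whose values actually change, so React's referential diffing can skip the rest) can be demonstrated in isolation. The `Task` shape below is a trimmed-down stand-in for `TimelineTask`, not the real type:

```typescript
// Identity-preserving per-frame patch: only running tasks get a new object;
// everything else keeps its original reference.
type Task = {
  id: string;
  state: string;
  startMs: number | null;
  endMs?: number;
  durationMs?: number;
};

function patchRunning(tasks: Task[], nowMs: number): Task[] {
  return tasks.map((t) =>
    t.state === "running" && t.startMs != null
      ? { ...t, endMs: nowMs, durationMs: nowMs - t.startMs }
      : t,
  );
}

const input: Task[] = [
  { id: "done", state: "completed", startMs: 0, endMs: 500 },
  { id: "live", state: "running", startMs: 1000 },
];
const patched = patchRunning(input, 4000);
// The completed task keeps object identity (patched[0] === input[0]);
// the running task is a clone stamped with endMs/durationMs.
```

Because unchanged elements keep the same reference, a memoized row component receiving `patched[0]` sees equal props and skips re-rendering, even though the patch runs on every animation frame.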
  // ---- Compute layout ----

  const layoutConfig: LayoutConfig = useMemo(() => {
    // Adjust layout based on task count for readability
    const taskCount = tasks.length;
    if (taskCount > 50) {
      return {
        ...DEFAULT_LAYOUT,
        laneHeight: 26,
        barHeight: 16,
        lanePadding: 5,
      };
    }
    if (taskCount > 20) {
      return {
        ...DEFAULT_LAYOUT,
        laneHeight: 30,
        barHeight: 18,
        lanePadding: 6,
      };
    }
    return DEFAULT_LAYOUT;
  }, [tasks.length]);

  const layout = useMemo(() => {
    if (tasks.length === 0) return null;

    return computeLayout(
      tasks,
      taskEdges,
      milestones,
      milestoneEdges,
      containerWidth,
      layoutConfig,
      suppressedEdgeKeys,
    );
  }, [
    tasks,
    taskEdges,
    milestones,
    milestoneEdges,
    containerWidth,
    layoutConfig,
    suppressedEdgeKeys,
  ]);

  // ---- Handlers ----

  const handleTaskClick = useCallback(
    (task: TimelineTask) => {
      navigate(`/executions/${task.id}`);
    },
    [navigate],
  );

  // ---- Summary stats ----

  const summary = useMemo(() => {
    const total = childExecutions.length;
    const completed = childExecutions.filter(
      (e) => e.status === "completed",
    ).length;
    const failed = childExecutions.filter((e) => e.status === "failed").length;
    const running = childExecutions.filter(
      (e) =>
        e.status === "running" ||
        e.status === "requested" ||
        e.status === "scheduling" ||
        e.status === "scheduled",
    ).length;
    const other = total - completed - failed - running;

    // Compute overall duration from the already-patched tasks array so we
    // get the live running-task endMs values for free.
    let durationMs: number | null = null;
    const taskStartTimes = tasks
      .filter((t) => t.startMs != null)
      .map((t) => t.startMs!);
    const taskEndTimes = tasks
      .filter((t) => t.endMs != null)
      .map((t) => t.endMs!);

    if (taskStartTimes.length > 0 && taskEndTimes.length > 0) {
      durationMs = Math.max(...taskEndTimes) - Math.min(...taskStartTimes);
    }

    return { total, completed, failed, running, other, durationMs };
  }, [childExecutions, tasks]);

  // ---- Early returns ----

  if (childrenLoading && childExecutions.length === 0) {
    return (
      <div className={embedded ? "" : "bg-white shadow rounded-lg"}>
        <div className="flex items-center gap-3 p-4">
          <Loader2 className="h-4 w-4 animate-spin text-gray-400" />
          <span className="text-sm text-gray-500">
            Loading workflow timeline…
          </span>
        </div>
      </div>
    );
  }

  if (childExecutions.length === 0) {
    if (embedded) {
      return (
        <div className="flex items-center justify-center py-8 text-sm text-gray-500">
          No workflow tasks yet.
        </div>
      );
    }
    return null; // No child tasks to display
  }

  // ---- Shared content (legend + renderer + modal) ----
  const timelineContent = (
    <>
      {/* Expand to modal */}
      <div className="flex justify-end px-3 py-1">
        <button
          onClick={(e) => {
            e.stopPropagation();
            setIsModalOpen(true);
          }}
          className="flex items-center gap-1 text-[10px] text-gray-400 hover:text-gray-600 transition-colors"
          title="Open expanded timeline with zoom"
        >
          <Maximize2 className="h-3 w-3" />
          Expand
        </button>
      </div>

      {/* Legend */}
      <div className="flex items-center gap-3 px-5 pb-2 text-[10px] text-gray-400">
        <LegendItem color="#22c55e" label="Completed" />
        <LegendItem color="#3b82f6" label="Running" />
        <LegendItem color="#ef4444" label="Failed" dashed />
        <LegendItem color="#f97316" label="Timeout" dotted />
        <LegendItem color="#9ca3af" label="Pending" />
        <span className="ml-2 text-gray-300">|</span>
        <EdgeLegendItem color="#22c55e" label="Succeeded" />
        <EdgeLegendItem color="#ef4444" label="Failed" dashed />
        <EdgeLegendItem color="#9ca3af" label="Always" />
      </div>

      {/* Timeline renderer */}
      {layout ? (
        <div
          className={embedded ? "pb-3" : "px-2 pb-3"}
          style={{
            minHeight: layout.totalHeight + 8,
          }}
        >
          <TimelineRenderer
            layout={layout}
            tasks={tasks}
            config={layoutConfig}
            onTaskClick={handleTaskClick}
          />
        </div>
      ) : (
        <div className="flex items-center justify-center py-8">
          <Loader2 className="h-4 w-4 animate-spin text-gray-300" />
          <span className="ml-2 text-xs text-gray-400">Computing layout…</span>
        </div>
      )}

      {/* ---- Expanded modal ---- */}
      {isModalOpen && (
        <TimelineModal
          isOpen
          onClose={() => setIsModalOpen(false)}
          tasks={tasks}
          taskEdges={taskEdges}
          milestones={milestones}
          milestoneEdges={milestoneEdges}
          suppressedEdgeKeys={suppressedEdgeKeys}
          onTaskClick={handleTaskClick}
          summary={summary}
        />
      )}
    </>
  );

  // ---- Embedded mode: no card, no header, just the content ----
  if (embedded) {
    return (
      <div ref={containerRef} className="pt-1">
        {timelineContent}
      </div>
    );
  }

  // ---- Standalone mode: full card with header + collapse ----
  return (
    <div className="bg-white shadow rounded-lg" ref={containerRef}>
      {/* ---- Header ---- */}
      <button
        onClick={() => setIsCollapsed(!isCollapsed)}
        className="w-full flex items-center justify-between px-5 py-3 text-left hover:bg-gray-50 rounded-t-lg transition-colors"
      >
        <div className="flex items-center gap-2.5">
          {isCollapsed ? (
            <ChevronRight className="h-4 w-4 text-gray-400" />
          ) : (
            <ChevronDown className="h-4 w-4 text-gray-400" />
          )}
          <ChartGantt className="h-4 w-4 text-indigo-500" />
          <h3 className="text-sm font-semibold text-gray-800">
            Workflow Timeline
          </h3>
          <span className="text-xs text-gray-400">
            {summary.total} task{summary.total !== 1 ? "s" : ""}
            {summary.durationMs != null && (
              <> · {formatDurationShort(summary.durationMs)}</>
            )}
          </span>
        </div>

        {/* Summary badges */}
        <div className="flex items-center gap-1.5">
          {summary.completed > 0 && (
            <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-green-100 text-green-700">
              {summary.completed} ✓
            </span>
          )}
          {summary.running > 0 && (
            <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-blue-100 text-blue-700">
              {summary.running} ⟳
            </span>
          )}
          {summary.failed > 0 && (
            <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-red-100 text-red-700">
              {summary.failed} ✗
            </span>
          )}
          {summary.other > 0 && (
            <span className="inline-flex items-center gap-0.5 px-1.5 py-0.5 rounded-full text-[10px] font-medium bg-gray-100 text-gray-500">
              {summary.other}
            </span>
          )}
        </div>
      </button>

      {/* ---- Body ---- */}
      {!isCollapsed && (
        <div className="border-t border-gray-100">{timelineContent}</div>
      )}
    </div>
  );
}

// ---------------------------------------------------------------------------
// Legend sub-components
// ---------------------------------------------------------------------------

function LegendItem({
  color,
  label,
  dashed,
  dotted,
}: {
  color: string;
  label: string;
  dashed?: boolean;
  dotted?: boolean;
}) {
  return (
    <span className="flex items-center gap-1">
      <span
        className="inline-block w-5 h-2.5 rounded-sm"
        style={{
          backgroundColor: color,
          opacity: 0.7,
          border: dashed
            ? `1px dashed ${color}`
            : dotted
              ? `1px dotted ${color}`
              : undefined,
        }}
      />
      <span>{label}</span>
    </span>
  );
}

function EdgeLegendItem({
  color,
  label,
  dashed,
}: {
  color: string;
  label: string;
  dashed?: boolean;
}) {
  return (
    <span className="flex items-center gap-1">
      <svg width="16" height="8" viewBox="0 0 16 8">
        <line
          x1="0"
          y1="4"
          x2="16"
          y2="4"
          stroke={color}
          strokeWidth="1.5"
          strokeDasharray={dashed ? "3 2" : undefined}
          opacity="0.7"
        />
        <polygon points="12,1 16,4 12,7" fill={color} opacity="0.6" />
      </svg>
      <span>{label}</span>
    </span>
  );
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function formatDurationShort(ms: number): string {
  if (ms < 1000) return `${Math.round(ms)}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  // Round to whole seconds before splitting into minutes so that e.g.
  // 119.6s renders as "2m 0s", not "1m 60s".
  const totalSecs = Math.round(secs);
  const mins = Math.floor(totalSecs / 60);
  const remainSecs = totalSecs % 60;
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}

web/src/components/executions/workflow-timeline/data.ts (1376 lines, new file): diff suppressed because it is too large

web/src/components/executions/workflow-timeline/index.ts (41 lines, new file):

@@ -0,0 +1,41 @@
/**
 * Workflow Timeline DAG — barrel exports.
 *
 * Usage:
 *   import WorkflowTimelineDAG from "@/components/executions/workflow-timeline";
 */

export { default } from "./WorkflowTimelineGraph";
export type { ParentExecutionInfo } from "./WorkflowTimelineGraph";
export { default as TimelineRenderer } from "./TimelineRenderer";
export { default as TimelineModal } from "./TimelineModal";

// Re-export types consumers might need
export type {
  TimelineTask,
  TimelineEdge,
  TimelineMilestone,
  TimelineNode,
  ComputedLayout,
  TaskState,
  EdgeKind,
  MilestoneKind,
  TooltipData,
  LayoutConfig,
  WorkflowDefinition,
  WithItemsGroupInfo,
} from "./types";

export { WITH_ITEMS_COLLAPSE_THRESHOLD } from "./types";

// Re-export data utilities for testing / advanced usage
export {
  buildTimelineTasks,
  buildEdges,
  buildMilestones,
  findConnectedPath,
  edgeKey,
} from "./data";

// Re-export layout utilities
export { computeLayout, computeGridLines, computeEdgePath } from "./layout";

web/src/components/executions/workflow-timeline/layout.ts (673 lines, new file):

@@ -0,0 +1,673 @@
/**
 * Layout Engine for the Workflow Timeline DAG.
 *
 * Responsible for:
 *   1. Computing the time→pixel x-scale from task time bounds.
 *   2. Assigning tasks to non-overlapping y-lanes (greedy packing).
 *   3. Positioning milestone nodes.
 *   4. Producing the final ComputedLayout consumed by the SVG renderer.
 */

import type {
  TimelineTask,
  TimelineEdge,
  TimelineMilestone,
  TimelineNode,
  ComputedLayout,
  LayoutConfig,
} from "./types";
import { DEFAULT_LAYOUT } from "./types";

// ---------------------------------------------------------------------------
// Time scale helpers
// ---------------------------------------------------------------------------

interface TimeScale {
  /** Minimum time (epoch ms) */
  minMs: number;
  /** Maximum time (epoch ms) */
  maxMs: number;
  /** Available pixel width for the time axis */
  axisWidth: number;
  /** Pixels per millisecond */
  pxPerMs: number;
}

function buildTimeScale(
  tasks: TimelineTask[],
  milestones: TimelineMilestone[],
  chartWidth: number,
  config: LayoutConfig,
): TimeScale {
  // Collect all time values
  const times: number[] = [];
  for (const t of tasks) {
    if (t.startMs != null) times.push(t.startMs);
    if (t.endMs != null) times.push(t.endMs);
  }
  for (const m of milestones) {
    times.push(m.timeMs);
  }

  if (times.length === 0) {
    // Fallback: a 10-second window around now
    const now = Date.now();
    times.push(now - 5000, now + 5000);
  }

  let minMs = Math.min(...times);
  let maxMs = Math.max(...times);

  // Add a small buffer so nodes at the edges aren't right on the border
  const rangeMs = maxMs - minMs;
  const bufferMs = Math.max(rangeMs * 0.04, 200); // at least 200ms buffer
  minMs -= bufferMs;
  maxMs += bufferMs;

  const axisWidth = chartWidth - config.paddingLeft - config.paddingRight;
  const pxPerMs = axisWidth / Math.max(maxMs - minMs, 1);

  return { minMs, maxMs, axisWidth, pxPerMs };
}

/** Convert a timestamp (epoch ms) to an x pixel position */
function timeToPx(ms: number, scale: TimeScale, config: LayoutConfig): number {
  return config.paddingLeft + (ms - scale.minMs) * scale.pxPerMs;
}

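As a quick sanity check of the time→pixel mapping above, here it is re-declared standalone with illustrative numbers. The `paddingLeft`/`paddingRight` values are made up for the example (the real ones live in `DEFAULT_LAYOUT`, which is not shown in this hunk), and the min-buffer step is skipped for clarity:

```typescript
// A 10 s run rendered into a 900 px chart: the axis gets the chart width
// minus both paddings, and pxPerMs is axis width over the time range.
const config = { paddingLeft: 40, paddingRight: 20 }; // illustrative values
const minMs = 0;
const maxMs = 10_000;

const axisWidth = 900 - config.paddingLeft - config.paddingRight; // 840 px
const pxPerMs = axisWidth / (maxMs - minMs); // 0.084 px per ms

const toPx = (ms: number) => config.paddingLeft + (ms - minMs) * pxPerMs;

console.log(toPx(0)); // 40: the earliest time lands exactly at paddingLeft
console.log(toPx(10_000)); // ≈ 880: the latest time lands at 900 - paddingRight
```

So a task bar from 2 s to 5 s would span `toPx(2000)` to `toPx(5000)`, i.e. roughly 208 px to 460 px.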
// ---------------------------------------------------------------------------
// Lane assignment (greedy packing)
// ---------------------------------------------------------------------------

interface LaneInterval {
  /** Left x pixel (inclusive) */
  left: number;
  /** Right x pixel (inclusive) */
  right: number;
}

/**
 * Assign each task to the first lane where it doesn't overlap with
 * any existing task bar in that lane.
 *
 * Tasks are sorted by startTime (earliest first), then by duration
 * descending (longer bars first) to maximise packing efficiency.
 *
 * After initial packing we optionally reorder lanes so tasks with
 * shared upstream dependencies are adjacent.
 */
function assignLanes(
  tasks: TimelineTask[],
  scale: TimeScale,
  config: LayoutConfig,
): Map<string, number> {
  // Build a sortable list with pixel extents
  type Entry = {
    task: TimelineTask;
    left: number;
    right: number;
  };

  const entries: Entry[] = tasks.map((t) => {
    const left = t.startMs != null ? timeToPx(t.startMs, scale, config) : 0;
    let right =
      t.endMs != null
        ? timeToPx(t.endMs, scale, config)
        : left + config.minBarWidth;
    // Ensure minimum width
    if (right - left < config.minBarWidth) {
      right = left + config.minBarWidth;
    }
    return { task: t, left, right };
  });

  // Sort: by start position, then by width descending (longer bars first)
  entries.sort((a, b) => {
    if (a.left !== b.left) return a.left - b.left;
    return b.right - b.left - (a.right - a.left);
  });

  // Greedy lane packing
  const lanes: LaneInterval[][] = []; // lanes[laneIndex] = list of intervals
  const assignment = new Map<string, number>();

  for (const entry of entries) {
    let placed = false;
    const gap = 4; // minimum px gap between bars in the same lane

    for (let lane = 0; lane < lanes.length; lane++) {
      const intervals = lanes[lane];
      const overlaps = intervals.some(
        (iv) => entry.left < iv.right + gap && entry.right + gap > iv.left,
      );
      if (!overlaps) {
        intervals.push({ left: entry.left, right: entry.right });
        assignment.set(entry.task.id, lane);
        placed = true;
        break;
      }
    }

    if (!placed) {
      // Open a new lane
      lanes.push([{ left: entry.left, right: entry.right }]);
      assignment.set(entry.task.id, lanes.length - 1);
    }
  }

  // --- Optional lane reordering to cluster related tasks ---
  // Build a lane affinity score based on shared upstream dependencies.
  // We do a simple bubble-pass: for each pair of adjacent lanes,
  // if swapping them increases the total number of adjacent upstream-sharing
  // task pairs, do the swap.
  const laneCount = lanes.length;
  if (laneCount > 2) {
    const laneIds: number[] = Array.from({ length: laneCount }, (_, i) => i);

    // Build lane→taskIds mapping
    const tasksByLane = new Map<number, string[]>();
    for (const [taskId, lane] of assignment) {
      const list = tasksByLane.get(lane) ?? [];
      list.push(taskId);
      tasksByLane.set(lane, list);
    }

    // Build a task→upstreams lookup
    const taskUpstreams = new Map<string, Set<string>>();
    for (const t of tasks) {
      taskUpstreams.set(t.id, new Set(t.upstreamIds));
    }

    // Affinity between two lanes: count of task pairs that share upstream deps
    function laneAffinity(laneA: number, laneB: number): number {
      const aTasks = tasksByLane.get(laneA) ?? [];
      const bTasks = tasksByLane.get(laneB) ?? [];
      let score = 0;
      for (const a of aTasks) {
        const aUp = taskUpstreams.get(a);
        if (!aUp || aUp.size === 0) continue;
        for (const b of bTasks) {
          const bUp = taskUpstreams.get(b);
          if (!bUp || bUp.size === 0) continue;
          // Count shared upstreams
          for (const u of aUp) {
            if (bUp.has(u)) {
              score++;
              break; // one shared upstream is enough for this pair
            }
          }
        }
      }
      return score;
    }

    // Simple bubble sort passes (max 3 passes for stability)
    for (let pass = 0; pass < 3; pass++) {
      let swapped = false;
      for (let i = 0; i < laneIds.length - 1; i++) {
        const curr = laneIds[i];
        const next = laneIds[i + 1];

        // Check if swapping improves adjacency with neighbours
        const prev = i > 0 ? laneIds[i - 1] : -1;
        const after = i + 2 < laneIds.length ? laneIds[i + 2] : -1;

        let scoreBefore = 0;
        let scoreAfter = 0;

        if (prev >= 0) {
          scoreBefore += laneAffinity(prev, curr);
|
||||
scoreAfter += laneAffinity(prev, next);
|
||||
}
|
||||
if (after >= 0) {
|
||||
scoreBefore += laneAffinity(next, after);
|
||||
scoreAfter += laneAffinity(curr, after);
|
||||
}
|
||||
scoreBefore += laneAffinity(curr, next);
|
||||
scoreAfter += laneAffinity(next, curr); // same, symmetric
|
||||
|
||||
if (scoreAfter > scoreBefore) {
|
||||
laneIds[i] = next;
|
||||
laneIds[i + 1] = curr;
|
||||
swapped = true;
|
||||
}
|
||||
}
|
||||
if (!swapped) break;
|
||||
}
|
||||
|
||||
// Remap lane assignments to the reordered indices
|
||||
const reorderMap = new Map<number, number>();
|
||||
for (let newIdx = 0; newIdx < laneIds.length; newIdx++) {
|
||||
reorderMap.set(laneIds[newIdx], newIdx);
|
||||
}
|
||||
for (const [taskId, oldLane] of assignment) {
|
||||
assignment.set(taskId, reorderMap.get(oldLane) ?? oldLane);
|
||||
}
|
||||
}
|
||||
|
||||
return assignment;
|
||||
}
|
||||
|
||||
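The lane assignment above is first-fit greedy interval packing: bars sorted by start (wider first) drop into the first lane where they keep a small pixel gap from every existing bar. A minimal standalone sketch of that core loop — `Bar` and `packLanes` are illustrative names, not identifiers from this codebase:

```typescript
interface Bar {
  id: string;
  left: number; // px start
  right: number; // px end
}

// First-fit greedy packing: each bar goes into the lowest-index lane where it
// keeps at least `gap` px of clearance from every bar already in that lane.
function packLanes(bars: Bar[], gap = 4): Map<string, number> {
  // Sort by start position, then wider bars first, mirroring the layout code.
  const sorted = [...bars].sort((a, b) =>
    a.left !== b.left ? a.left - b.left : b.right - b.left - (a.right - a.left),
  );

  const lanes: Bar[][] = [];
  const assignment = new Map<string, number>();

  for (const bar of sorted) {
    const lane = lanes.findIndex(
      (ivs) =>
        !ivs.some((iv) => bar.left < iv.right + gap && bar.right + gap > iv.left),
    );
    if (lane >= 0) {
      lanes[lane].push(bar);
      assignment.set(bar.id, lane);
    } else {
      lanes.push([bar]);
      assignment.set(bar.id, lanes.length - 1);
    }
  }
  return assignment;
}
```

This is the classic greedy coloring of an interval graph; because bars are processed in start order, the number of lanes it opens equals the maximum number of simultaneously overlapping bars.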
// ---------------------------------------------------------------------------
// Milestone lane assignment
// ---------------------------------------------------------------------------

/**
 * Position milestones in a lane that centres them vertically relative to
 * the tasks they connect to. Start and end milestones go to a middle lane.
 * Internal merge/fork milestones are placed at the median lane of their
 * connected tasks.
 */
function assignMilestoneLanes(
  milestones: TimelineMilestone[],
  milestoneEdges: TimelineEdge[],
  taskLanes: Map<string, number>,
  laneCount: number,
): Map<string, number> {
  const assignment = new Map<string, number>();
  const midLane = Math.max(0, Math.floor((laneCount - 1) / 2));

  for (const ms of milestones) {
    if (ms.kind === "start" || ms.kind === "end") {
      assignment.set(ms.id, midLane);
      continue;
    }

    // Gather lanes of connected tasks
    const connectedLanes: number[] = [];
    for (const e of milestoneEdges) {
      if (e.from === ms.id) {
        const lane = taskLanes.get(e.to);
        if (lane != null) connectedLanes.push(lane);
      }
      if (e.to === ms.id) {
        const lane = taskLanes.get(e.from);
        if (lane != null) connectedLanes.push(lane);
      }
    }

    if (connectedLanes.length > 0) {
      connectedLanes.sort((a, b) => a - b);
      const median = connectedLanes[Math.floor(connectedLanes.length / 2)];
      assignment.set(ms.id, median);
    } else {
      assignment.set(ms.id, midLane);
    }
  }

  return assignment;
}

// ---------------------------------------------------------------------------
// Build TimelineNode array
// ---------------------------------------------------------------------------

function buildNodes(
  tasks: TimelineTask[],
  milestones: TimelineMilestone[],
  taskLanes: Map<string, number>,
  milestoneLanes: Map<string, number>,
  scale: TimeScale,
  config: LayoutConfig,
): TimelineNode[] {
  const nodes: TimelineNode[] = [];

  // Task nodes
  for (const task of tasks) {
    const lane = taskLanes.get(task.id) ?? 0;
    const left =
      task.startMs != null
        ? timeToPx(task.startMs, scale, config)
        : timeToPx(
            scale.maxMs - (scale.maxMs - scale.minMs) * 0.05,
            scale,
            config,
          );
    let right =
      task.endMs != null
        ? timeToPx(task.endMs, scale, config)
        : left + config.minBarWidth;

    if (right - left < config.minBarWidth) {
      right = left + config.minBarWidth;
    }

    const y =
      config.paddingTop +
      lane * config.laneHeight +
      (config.laneHeight - config.barHeight) / 2;

    nodes.push({
      type: "task",
      id: task.id,
      lane,
      x: left,
      y,
      width: right - left,
      task,
    });
  }

  // Milestone nodes
  for (const ms of milestones) {
    const lane = milestoneLanes.get(ms.id) ?? 0;
    const x = timeToPx(ms.timeMs, scale, config);
    const y =
      config.paddingTop + lane * config.laneHeight + config.laneHeight / 2;

    nodes.push({
      type: "milestone",
      id: ms.id,
      lane,
      x,
      y,
      width: config.milestoneSize,
      milestone: ms,
    });
  }

  return nodes;
}

// ---------------------------------------------------------------------------
// Grid line computation
// ---------------------------------------------------------------------------

export interface GridLine {
  /** X pixel position */
  x: number;
  /** Human-readable label */
  label: string;
  /** Whether this is a major gridline (gets a label) */
  major: boolean;
}

/**
 * Compute vertical gridlines at "nice" time intervals.
 *
 * Picks an interval that gives roughly 6–12 major gridlines across
 * the visible chart width.
 */
export function computeGridLines(
  scale: TimeScale,
  config: LayoutConfig,
): GridLine[] {
  const rangeMs = scale.maxMs - scale.minMs;
  if (rangeMs <= 0) return [];

  // Target ~8 major gridlines
  const targetCount = 8;
  const rawInterval = rangeMs / targetCount;

  // Snap to a "nice" interval
  const niceIntervals = [
    100, 200, 500, // sub-second
    1000, 2000, 5000, // seconds
    10_000, 15_000, 30_000, // tens of seconds
    60_000, 120_000, 300_000, // minutes
    600_000, 900_000, 1_800_000, // tens of minutes
    3_600_000, 7_200_000, // hours
    14_400_000, 28_800_000, 43_200_000, // multi-hour
    86_400_000, // day
  ];

  let interval = niceIntervals[0];
  for (const ni of niceIntervals) {
    interval = ni;
    if (ni >= rawInterval) break;
  }

  const lines: GridLine[] = [];

  // Start at the first "nice" multiple >= minMs
  const firstTick = Math.ceil(scale.minMs / interval) * interval;

  for (let ms = firstTick; ms <= scale.maxMs; ms += interval) {
    const x = timeToPx(ms, scale, config);
    lines.push({
      x,
      label: formatTimeLabel(ms, interval),
      major: true,
    });

    // Add a minor gridline halfway if the interval is large enough
    if (interval >= 2000) {
      const midMs = ms + interval / 2;
      if (midMs < scale.maxMs) {
        lines.push({
          x: timeToPx(midMs, scale, config),
          label: "",
          major: false,
        });
      }
    }
  }

  return lines;
}
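The interval selection above can be isolated into a tiny pure function: divide the visible range by the target tick count, then walk a preset ladder and stop at the first preset at least that large. A standalone sketch — `snapInterval` and the trimmed preset list are illustrative, not part of this codebase:

```typescript
// Preset "nice" tick intervals in milliseconds, ascending.
const NICE_INTERVALS_MS = [
  100, 200, 500, // sub-second
  1_000, 2_000, 5_000, 10_000, 15_000, 30_000, // seconds
  60_000, 300_000, 900_000, 1_800_000, // minutes
  3_600_000, 14_400_000, 43_200_000, // hours
  86_400_000, // day
];

// Pick the smallest preset >= rangeMs / targetCount.
// Falls through to the largest preset (1 day) for very wide ranges.
function snapInterval(rangeMs: number, targetCount = 8): number {
  const raw = rangeMs / targetCount;
  let interval = NICE_INTERVALS_MS[0];
  for (const ni of NICE_INTERVALS_MS) {
    interval = ni;
    if (ni >= raw) break;
  }
  return interval;
}
```

Because the loop assigns before testing, an over-wide range simply keeps the last preset rather than producing an unbounded interval, matching the behavior of the loop in `computeGridLines`.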

/** Format an epoch-ms timestamp as a short axis label; the level of detail depends on the gridline interval */
function formatTimeLabel(ms: number, intervalMs: number): string {
  const date = new Date(ms);

  if (intervalMs >= 86_400_000) {
    // Days — show date
    return date.toLocaleDateString(undefined, {
      month: "short",
      day: "numeric",
    });
  }

  if (intervalMs >= 3_600_000) {
    // Hours — show HH:MM
    return date.toLocaleTimeString(undefined, {
      hour: "2-digit",
      minute: "2-digit",
    });
  }

  if (intervalMs >= 60_000) {
    // Minutes — show HH:MM:SS
    return date.toLocaleTimeString(undefined, {
      hour: "2-digit",
      minute: "2-digit",
      second: "2-digit",
    });
  }

  if (intervalMs >= 1000) {
    // Seconds — show HH:MM:SS
    return date.toLocaleTimeString(undefined, {
      hour: "2-digit",
      minute: "2-digit",
      second: "2-digit",
    });
  }

  // Sub-second — show with milliseconds
  return (
    date.toLocaleTimeString(undefined, {
      hour: "2-digit",
      minute: "2-digit",
      second: "2-digit",
    }) +
    "." +
    String(date.getMilliseconds()).padStart(3, "0")
  );
}

// ---------------------------------------------------------------------------
// Public API: computeLayout
// ---------------------------------------------------------------------------

export function computeLayout(
  tasks: TimelineTask[],
  taskEdges: TimelineEdge[],
  milestones: TimelineMilestone[],
  milestoneEdges: TimelineEdge[],
  /** Desired chart width (pixels). The layout will use this for the x-scale. */
  chartWidth: number,
  configOverrides?: Partial<LayoutConfig>,
  /** Direct task→task edge keys that are replaced by milestone-routed paths.
   * These are filtered out of `taskEdges` to avoid duplicate rendering. */
  suppressedEdgeKeys?: Set<string>,
): ComputedLayout {
  const config: LayoutConfig = { ...DEFAULT_LAYOUT, ...configOverrides };

  // Use a reasonable minimum width
  const effectiveWidth = Math.max(chartWidth, 400);

  // 1. Build time scale
  const scale = buildTimeScale(tasks, milestones, effectiveWidth, config);

  // 2. Assign task lanes
  const taskLanes = assignLanes(tasks, scale, config);

  // Count lanes
  let laneCount = 0;
  for (const lane of taskLanes.values()) {
    laneCount = Math.max(laneCount, lane + 1);
  }
  // Ensure at least 1 lane even if there are no tasks
  laneCount = Math.max(laneCount, 1);

  // 3. Assign milestone lanes
  const milestoneLanes = assignMilestoneLanes(
    milestones,
    milestoneEdges,
    taskLanes,
    laneCount,
  );

  // 4. Build node positions
  const nodes = buildNodes(
    tasks,
    milestones,
    taskLanes,
    milestoneLanes,
    scale,
    config,
  );

  // 5. Merge all edges, filtering out any task edges that have been
  // replaced by milestone-routed paths (e.g. A→C replaced by A→merge→C).
  const filteredTaskEdges = suppressedEdgeKeys?.size
    ? taskEdges.filter((e) => !suppressedEdgeKeys.has(`${e.from}→${e.to}`))
    : taskEdges;
  const allEdges = [...filteredTaskEdges, ...milestoneEdges];

  // Deduplicate edges (same from→to)
  const edgeSet = new Set<string>();
  const dedupedEdges: TimelineEdge[] = [];
  for (const e of allEdges) {
    const key = `${e.from}→${e.to}`;
    if (!edgeSet.has(key)) {
      edgeSet.add(key);
      dedupedEdges.push(e);
    }
  }

  // 6. Compute total dimensions
  const totalWidth = effectiveWidth;
  const totalHeight =
    config.paddingTop + laneCount * config.laneHeight + config.paddingBottom;

  return {
    nodes,
    edges: dedupedEdges,
    totalWidth,
    totalHeight,
    laneCount,
    minTimeMs: scale.minMs,
    maxTimeMs: scale.maxMs,
    pxPerMs: scale.pxPerMs,
  };
}

// ---------------------------------------------------------------------------
// Bezier edge path generation
// ---------------------------------------------------------------------------

/**
 * Generate an SVG cubic Bezier path string for an edge between two nodes.
 *
 * Edges flow left→right. The control points bend horizontally so curves
 * are smooth and mostly follow the x-axis direction.
 *
 * Anchoring:
 *  - Task nodes: outgoing from right-center, incoming at left-center
 *  - Milestones: connect at center
 */
export function computeEdgePath(
  fromNode: TimelineNode,
  toNode: TimelineNode,
  config: LayoutConfig = DEFAULT_LAYOUT,
): string {
  let x1: number, y1: number, x2: number, y2: number;

  // Source anchor
  if (fromNode.type === "task") {
    x1 = fromNode.x + fromNode.width; // right edge
    y1 = fromNode.y + config.barHeight / 2; // vertical center
  } else {
    x1 = fromNode.x;
    y1 = fromNode.y;
  }

  // Target anchor
  if (toNode.type === "task") {
    x2 = toNode.x; // left edge
    y2 = toNode.y + config.barHeight / 2; // vertical center
  } else {
    x2 = toNode.x;
    y2 = toNode.y;
  }

  // Handle edge case where target is to the left of source (e.g., timing quirks)
  // In that case, draw a slight arc that loops
  const dx = x2 - x1;
  const dy = y2 - y1;

  if (dx < 5) {
    // Target is to the left or very close — use an S-curve that goes
    // slightly below/above and loops back
    const loopOffset = Math.max(30, Math.abs(dx) + 20);
    const yMid = (y1 + y2) / 2 + (dy >= 0 ? 20 : -20);

    return [
      `M ${x1} ${y1}`,
      `C ${x1 + loopOffset} ${y1}, ${x2 - loopOffset} ${yMid}, ${(x1 + x2) / 2} ${yMid}`,
      `C ${(x1 + x2) / 2 + loopOffset} ${yMid}, ${x2 - loopOffset} ${y2}, ${x2} ${y2}`,
    ].join(" ");
  }

  // Normal left→right Bezier
  // Control point offset: 40% of horizontal distance, clamped
  const cpOffset = Math.min(Math.max(dx * 0.4, 20), 120);

  const cx1 = x1 + cpOffset;
  const cy1 = y1;
  const cx2 = x2 - cpOffset;
  const cy2 = y2;

  return `M ${x1} ${y1} C ${cx1} ${cy1}, ${cx2} ${cy2}, ${x2} ${y2}`;
}

// ---------------------------------------------------------------------------
// Export timeToPx for use by the renderer (gridlines etc.)
// ---------------------------------------------------------------------------

export { timeToPx, type TimeScale };
285
web/src/components/executions/workflow-timeline/types.ts
Normal file
@@ -0,0 +1,285 @@
/**
 * Workflow Timeline DAG Types
 *
 * Types for the Prefect-style workflow run timeline visualization.
 * This component renders workflow task executions as horizontal duration bars
 * on a time axis with curved dependency edges showing the DAG structure.
 */

import type { ExecutionSummary } from "@/api";

// ---------------------------------------------------------------------------
// Core data types
// ---------------------------------------------------------------------------

export type TaskState =
  | "completed"
  | "running"
  | "failed"
  | "pending"
  | "timeout"
  | "cancelled"
  | "abandoned";

/**
 * Metadata for a collapsed with_items group node.
 * When a with_items task has ≥ WITH_ITEMS_COLLAPSE_THRESHOLD items, all
 * individual item executions are merged into a single TimelineTask carrying
 * this info so the renderer can display a compact "task ×N" bar.
 */
export interface WithItemsGroupInfo {
  /** Total number of items in the group */
  totalItems: number;
  /** Per-state item counts */
  completed: number;
  failed: number;
  running: number;
  pending: number;
  timedOut: number;
  cancelled: number;
  /** Concurrency limit declared on the task (0 = unlimited / unknown) */
  concurrency: number;
  /** IDs of all member executions (for upstream/downstream tracking) */
  memberIds: string[];
}

/** Threshold at which with_items children are collapsed into a single node */
export const WITH_ITEMS_COLLAPSE_THRESHOLD = 10;

/** A single task run positioned on the timeline */
export interface TimelineTask {
  /** Unique identifier (execution ID as string) */
  id: string;
  /** Display name (task_name from workflow_task metadata) */
  name: string;
  /** Action reference */
  actionRef: string;
  /** Visual state for coloring */
  state: TaskState;
  /** Start time as epoch ms (null if not yet started) */
  startMs: number | null;
  /** End time as epoch ms (null if still running or not started) */
  endMs: number | null;
  /** IDs of upstream tasks this depends on */
  upstreamIds: string[];
  /** IDs of downstream tasks that depend on this */
  downstreamIds: string[];
  /** with_items task index (null if not a with_items expansion) */
  taskIndex: number | null;
  /** Whether this task timed out */
  timedOut: boolean;
  /** Retry info */
  retryCount: number;
  maxRetries: number;
  /** Duration in ms (from metadata or computed) */
  durationMs: number | null;
  /** Original execution summary for tooltip details */
  execution: ExecutionSummary;
  /**
   * Present only on collapsed with_items group nodes.
   * When set, this task represents multiple item executions merged into one.
   */
  groupInfo?: WithItemsGroupInfo;
}

// ---------------------------------------------------------------------------
// Synthetic milestone / junction nodes
// ---------------------------------------------------------------------------

export type MilestoneKind = "start" | "end" | "merge" | "fork";

export interface TimelineMilestone {
  id: string;
  kind: MilestoneKind;
  /** Position on the time axis (epoch ms) */
  timeMs: number;
  /** Human-readable label */
  label: string;
}

// ---------------------------------------------------------------------------
// Unified node type (task bar OR milestone)
// ---------------------------------------------------------------------------

export type TimelineNodeType = "task" | "milestone";

export interface TimelineNode {
  type: TimelineNodeType;
  /** Unique ID */
  id: string;
  /** Assigned lane (y index) */
  lane: number;
  /** Pixel positions (computed by layout) */
  x: number;
  y: number;
  width: number;
  /** Original data */
  task?: TimelineTask;
  milestone?: TimelineMilestone;
}

// ---------------------------------------------------------------------------
// Edges
// ---------------------------------------------------------------------------

export type EdgeKind = "success" | "failure" | "always" | "timeout" | "custom";

export interface TimelineEdge {
  /** Source node ID */
  from: string;
  /** Target node ID */
  to: string;
  /** Visual classification for coloring */
  kind: EdgeKind;
  /** Optional transition label (e.g. "succeeded", "failed") */
  label?: string;
  /** Optional custom color from workflow definition */
  color?: string;
}

// ---------------------------------------------------------------------------
// Layout constants
// ---------------------------------------------------------------------------

export interface LayoutConfig {
  /** Height of each lane in pixels */
  laneHeight: number;
  /** Height of a task bar in pixels */
  barHeight: number;
  /** Vertical padding within each lane */
  lanePadding: number;
  /** Size of milestone diamond/square in pixels */
  milestoneSize: number;
  /** Left padding for the chart area (px) */
  paddingLeft: number;
  /** Right padding for the chart area (px) */
  paddingRight: number;
  /** Top padding for the time axis area (px) */
  paddingTop: number;
  /** Bottom padding (px) */
  paddingBottom: number;
  /** Minimum bar width for very short tasks (px) */
  minBarWidth: number;
  /** Horizontal gap between milestone and adjacent bars (px) */
  milestoneGap: number;
}

export const DEFAULT_LAYOUT: LayoutConfig = {
  laneHeight: 32,
  barHeight: 20,
  lanePadding: 6,
  milestoneSize: 10,
  paddingLeft: 20,
  paddingRight: 20,
  paddingTop: 36,
  paddingBottom: 16,
  minBarWidth: 8,
  milestoneGap: 12,
};

// ---------------------------------------------------------------------------
// Computed layout result
// ---------------------------------------------------------------------------

export interface ComputedLayout {
  nodes: TimelineNode[];
  edges: TimelineEdge[];
  /** Total width needed (px) */
  totalWidth: number;
  /** Total height needed (px) */
  totalHeight: number;
  /** Number of lanes used */
  laneCount: number;
  /** Time bounds */
  minTimeMs: number;
  maxTimeMs: number;
  /** The linear scale factor: px per ms */
  pxPerMs: number;
}

// ---------------------------------------------------------------------------
// Interaction state
// ---------------------------------------------------------------------------

export interface TooltipData {
  task: TimelineTask;
  x: number;
  y: number;
}

export interface ViewState {
  /** Horizontal scroll offset (px) */
  scrollX: number;
  /** Zoom level (1.0 = default) */
  zoom: number;
}

// ---------------------------------------------------------------------------
// Workflow definition transition types (for edge extraction)
// ---------------------------------------------------------------------------

export interface WorkflowDefinitionTransition {
  when?: string;
  publish?: Record<string, string>[];
  do?: string[];
  __chart_meta__?: {
    label?: string;
    color?: string;
    line_style?: string;
  };
}

export interface WorkflowDefinitionTask {
  name: string;
  action?: string;
  next?: WorkflowDefinitionTransition[];
  /** Number of inbound tasks that must complete before this task runs */
  join?: number;
  /** with_items expression (present when the task fans out over a list) */
  with_items?: string;
  /** Max concurrent items for with_items (default 1 = serial) */
  concurrency?: number;
  // Legacy fields (auto-converted to next)
  on_success?: string | string[];
  on_failure?: string | string[];
  on_complete?: string | string[];
  on_timeout?: string | string[];
}

export interface WorkflowDefinition {
  ref?: string;
  label?: string;
  tasks?: WorkflowDefinitionTask[];
}

// ---------------------------------------------------------------------------
// Color constants
// ---------------------------------------------------------------------------

export const STATE_COLORS: Record<
  TaskState,
  { bg: string; border: string; text: string }
> = {
  completed: { bg: "#dcfce7", border: "#22c55e", text: "#15803d" },
  running: { bg: "#dbeafe", border: "#3b82f6", text: "#1d4ed8" },
  failed: { bg: "#fee2e2", border: "#ef4444", text: "#b91c1c" },
  pending: { bg: "#f3f4f6", border: "#9ca3af", text: "#6b7280" },
  timeout: { bg: "#ffedd5", border: "#f97316", text: "#c2410c" },
  cancelled: { bg: "#f3f4f6", border: "#9ca3af", text: "#6b7280" },
  abandoned: { bg: "#fee2e2", border: "#f87171", text: "#b91c1c" },
};

export const EDGE_KIND_COLORS: Record<EdgeKind, string> = {
  success: "#22c55e",
  failure: "#ef4444",
  always: "#9ca3af",
  timeout: "#f97316",
  custom: "#8b5cf6",
};

export const MILESTONE_COLORS: Record<MilestoneKind, string> = {
  start: "#6b7280",
  end: "#6b7280",
  merge: "#8b5cf6",
  fork: "#8b5cf6",
};
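The per-state counters in `WithItemsGroupInfo` are a single reduction over the member executions' states. A hypothetical aggregation helper, not part of the codebase — `summarizeGroup` and the choice to fold `abandoned` into `failed` are assumptions for illustration:

```typescript
type TaskState =
  | "completed" | "running" | "failed" | "pending"
  | "timeout" | "cancelled" | "abandoned";

interface GroupCounts {
  totalItems: number;
  completed: number;
  failed: number;
  running: number;
  pending: number;
  timedOut: number;
  cancelled: number;
}

// One pass over member states, bucketing each into the group counters.
function summarizeGroup(states: TaskState[]): GroupCounts {
  const c: GroupCounts = {
    totalItems: states.length,
    completed: 0, failed: 0, running: 0,
    pending: 0, timedOut: 0, cancelled: 0,
  };
  for (const s of states) {
    if (s === "completed") c.completed++;
    else if (s === "failed" || s === "abandoned") c.failed++; // assumption: abandoned counts as failed
    else if (s === "running") c.running++;
    else if (s === "timeout") c.timedOut++;
    else if (s === "cancelled") c.cancelled++;
    else c.pending++;
  }
  return c;
}
```

The renderer can then derive the compact "task ×N" bar label and a per-state progress strip from these counts alone, without touching the individual member executions.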
@@ -160,6 +160,10 @@ export function useExecutionStream(options: UseExecutionStreamOptions = {}) {
   // Extract query params from the query key (format: ["executions", params])
   const queryParams = queryKey[1];
 
+  // Child execution queries (keyed by { parent: id }) fetch all pages
+  // and must not be capped — the timeline DAG needs every child.
+  const isChildQuery = !!(queryParams as any)?.parent;
+
   const old = oldData as any;
 
   // Check if execution already exists in the list
@@ -224,7 +228,9 @@ export function useExecutionStream(options: UseExecutionStreamOptions = {}) {
         if (hasUnsupportedFilters(queryParams)) {
           return;
         }
-        updatedData = [executionData, ...old.data].slice(0, 50);
+        updatedData = isChildQuery
+          ? [...old.data, executionData]
+          : [executionData, ...old.data].slice(0, 50);
         totalItemsDelta = 1;
       } else {
         // No boundary crossing: either both match (execution was
@@ -240,8 +246,11 @@ export function useExecutionStream(options: UseExecutionStreamOptions = {}) {
       }
 
       if (matchesQuery) {
-        // Add to beginning and cap at 50 items to prevent unbounded growth
-        updatedData = [executionData, ...old.data].slice(0, 50);
+        // Add to the list. Child queries keep all items (no cap);
+        // other lists cap at 50 to prevent unbounded growth.
+        updatedData = isChildQuery
+          ? [...old.data, executionData]
+          : [executionData, ...old.data].slice(0, 50);
         totalItemsDelta = 1;
       } else {
         return;
@@ -116,11 +116,34 @@ export function useChildExecutions(parentId: number | undefined) {
   return useQuery({
     queryKey: ["executions", { parent: parentId }],
     queryFn: async () => {
-      const response = await ExecutionsService.listExecutions({
+      // Fetch page 1 with max page size (API caps at 100)
+      const first = await ExecutionsService.listExecutions({
         parent: parentId,
         perPage: 100,
+        page: 1,
       });
-      return response;
+
+      const { total_pages } = first.pagination;
+      if (total_pages <= 1) return first;
+
+      // Fetch remaining pages in parallel
+      const remaining = await Promise.all(
+        Array.from({ length: total_pages - 1 }, (_, i) =>
+          ExecutionsService.listExecutions({
+            parent: parentId,
+            perPage: 100,
+            page: i + 2,
+          }),
+        ),
+      );
+
+      // Merge all pages into the first response
+      for (const page of remaining) {
+        first.data.push(...page.data);
+      }
+      first.pagination.total_pages = 1;
+      first.pagination.page_size = first.data.length;
+      return first;
     },
     enabled: !!parentId,
     staleTime: 5000,
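The pagination pattern in this hunk — fetch page 1 to learn `total_pages`, fetch the rest in parallel, then merge into the first response — can be expressed generically with a pluggable fetcher so it runs standalone. A sketch under that assumption (`Page` and `fetchAllPages` are illustrative names, not from the codebase):

```typescript
interface Page<T> {
  data: T[];
  pagination: { total_pages: number; page_size: number };
}

// Fetch page 1, then pages 2..N concurrently, and merge everything into the
// first response object so callers see a single flat page.
async function fetchAllPages<T>(
  fetchPage: (page: number) => Promise<Page<T>>,
): Promise<Page<T>> {
  const first = await fetchPage(1);
  const { total_pages } = first.pagination;
  if (total_pages <= 1) return first;

  // Promise.all preserves page order, so merged data stays in order even
  // though the requests run in parallel.
  const rest = await Promise.all(
    Array.from({ length: total_pages - 1 }, (_, i) => fetchPage(i + 2)),
  );
  for (const page of rest) first.data.push(...page.data);
  first.pagination.total_pages = 1;
  first.pagination.page_size = first.data.length;
  return first;
}
```

Mutating and returning `first` mirrors the hook above: downstream consumers (here, the timeline DAG) see one page containing every child execution.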
@@ -45,6 +45,8 @@ import {
   generateUniqueTaskName,
   generateTaskId,
   builderStateToDefinition,
+  builderStateToGraph,
+  builderStateToActionYaml,
   definitionToBuilderState,
   validateWorkflow,
   addTransitionTarget,
@@ -585,12 +587,14 @@ export default function WorkflowBuilderPage() {
     doSave();
   }, [startNodeWarning, doSave]);
 
-  // YAML preview — generate proper YAML from builder state
-  const yamlPreview = useMemo(() => {
+  // YAML previews — two separate panels for the two-file model:
+  //  1. Action YAML (ref, label, parameters, output, tags, workflow_file)
+  //  2. Workflow YAML (version, vars, tasks, output_map — graph only)
+  const actionYamlPreview = useMemo(() => {
     if (!showYamlPreview) return "";
     try {
-      const definition = builderStateToDefinition(state, actionSchemaMap);
-      return yaml.dump(definition, {
+      const actionDef = builderStateToActionYaml(state);
+      return yaml.dump(actionDef, {
         indent: 2,
         lineWidth: 120,
         noRefs: true,
@@ -599,7 +603,24 @@ export default function WorkflowBuilderPage() {
         forceQuotes: false,
       });
     } catch {
-      return "# Error generating YAML preview";
+      return "# Error generating action YAML preview";
     }
   }, [state, showYamlPreview]);
+
+  const workflowYamlPreview = useMemo(() => {
+    if (!showYamlPreview) return "";
+    try {
+      const graphDef = builderStateToGraph(state, actionSchemaMap);
+      return yaml.dump(graphDef, {
+        indent: 2,
+        lineWidth: 120,
+        noRefs: true,
+        sortKeys: false,
+        quotingType: '"',
+        forceQuotes: false,
+      });
+    } catch {
+      return "# Error generating workflow YAML preview";
+    }
+  }, [state, showYamlPreview, actionSchemaMap]);
 
@@ -854,26 +875,64 @@ export default function WorkflowBuilderPage() {
       {/* Main content area */}
       <div className="flex-1 flex overflow-hidden">
         {showYamlPreview ? (
-          /* Raw YAML mode — full-width YAML view */
-          <div className="flex-1 flex flex-col overflow-hidden bg-gray-900">
-            <div className="flex items-center gap-2 px-4 py-2 bg-gray-800 border-b border-gray-700 flex-shrink-0">
-              <FileCode className="w-4 h-4 text-gray-400" />
-              <span className="text-sm font-medium text-gray-300">
-                Workflow Definition
-              </span>
-              <span className="text-[10px] text-gray-500 ml-1">
-                (read-only preview of the generated YAML)
-              </span>
-              <div className="ml-auto">
+          /* Raw YAML mode — two-panel view: Action YAML + Workflow YAML */
+          <div className="flex-1 flex overflow-hidden">
+            {/* Left panel: Action YAML */}
+            <div className="w-2/5 flex flex-col overflow-hidden bg-gray-900 border-r border-gray-700">
+              <div className="flex items-center gap-2 px-4 py-2 bg-gray-800 border-b border-gray-700 flex-shrink-0">
+                <FileCode className="w-4 h-4 text-blue-400" />
+                <span className="text-sm font-medium text-gray-300">
+                  Action YAML
+                </span>
+                <span className="text-[10px] text-gray-500 ml-1">
+                  actions/{state.name}.yaml
+                </span>
+                <div className="flex-1" />
                 <button
                   onClick={() => {
-                    navigator.clipboard.writeText(yamlPreview).then(() => {
-                      setYamlCopied(true);
-                      setTimeout(() => setYamlCopied(false), 2000);
-                    });
+                    navigator.clipboard.writeText(actionYamlPreview);
                   }}
-                  className="flex items-center gap-1.5 px-2.5 py-1 text-xs font-medium rounded transition-colors text-gray-400 hover:text-gray-200 hover:bg-gray-700"
-                  title="Copy YAML to clipboard"
+                  className="flex items-center gap-1 px-2 py-1 text-xs text-gray-400 hover:text-gray-200 bg-gray-700 hover:bg-gray-600 rounded transition-colors"
+                  title="Copy action YAML to clipboard"
                 >
                   <Copy className="w-3.5 h-3.5" />
                   <span>Copy</span>
                 </button>
               </div>
+              <div className="px-4 py-2 bg-gray-800/50 border-b border-gray-700/50 flex-shrink-0">
+                <p className="text-[10px] text-gray-500 leading-relaxed">
+                  Defines the action identity, parameters, and output schema.
+                  References the workflow file via{" "}
+                  <code className="text-gray-400">workflow_file</code>.
+                </p>
+              </div>
+              <pre className="flex-1 overflow-auto p-4 text-sm font-mono text-blue-300 whitespace-pre leading-relaxed">
+                {actionYamlPreview}
+              </pre>
+            </div>
+
+            {/* Right panel: Workflow YAML (graph only) */}
+            <div className="flex-1 flex flex-col overflow-hidden bg-gray-900">
+              <div className="flex items-center gap-2 px-4 py-2 bg-gray-800 border-b border-gray-700 flex-shrink-0">
+                <FileCode className="w-4 h-4 text-green-400" />
+                <span className="text-sm font-medium text-gray-300">
|
||||
Workflow YAML
|
||||
</span>
|
||||
<span className="text-[10px] text-gray-500 ml-1">
|
||||
actions/workflows/{state.name}.workflow.yaml
|
||||
</span>
|
||||
<div className="flex-1" />
|
||||
<button
|
||||
onClick={() => {
|
||||
navigator.clipboard
|
||||
.writeText(workflowYamlPreview)
|
||||
.then(() => {
|
||||
setYamlCopied(true);
|
||||
setTimeout(() => setYamlCopied(false), 2000);
|
||||
});
|
||||
}}
|
||||
className="flex items-center gap-1 px-2 py-1 text-xs text-gray-400 hover:text-gray-200 bg-gray-700 hover:bg-gray-600 rounded transition-colors"
|
||||
title="Copy workflow YAML to clipboard"
|
||||
>
|
||||
{yamlCopied ? (
|
||||
<>
|
||||
@@ -883,15 +942,21 @@ export default function WorkflowBuilderPage() {
|
||||
) : (
|
||||
<>
|
||||
<Copy className="w-3.5 h-3.5" />
|
||||
Copy
|
||||
<span>Copy</span>
|
||||
</>
|
||||
)}
|
||||
</button>
|
||||
</div>
|
||||
<div className="px-4 py-2 bg-gray-800/50 border-b border-gray-700/50 flex-shrink-0">
|
||||
<p className="text-[10px] text-gray-500 leading-relaxed">
|
||||
Execution graph only — tasks, transitions, variables. No
|
||||
action-level metadata (those are in the action YAML).
|
||||
</p>
|
||||
</div>
|
||||
<pre className="flex-1 overflow-auto p-4 text-sm font-mono text-green-400 whitespace-pre leading-relaxed">
|
||||
{workflowYamlPreview}
|
||||
</pre>
|
||||
</div>
|
||||
<pre className="flex-1 overflow-auto p-6 text-sm font-mono text-green-400 whitespace-pre leading-relaxed">
|
||||
{yamlPreview}
|
||||
</pre>
|
||||
</div>
|
||||
) : (
|
||||
<>
|
||||
|
||||
@@ -22,9 +22,9 @@ import { useState, useMemo } from "react";
import { RotateCcw, Loader2 } from "lucide-react";
import ExecuteActionModal from "@/components/common/ExecuteActionModal";
import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";
import WorkflowTasksPanel from "@/components/common/WorkflowTasksPanel";
import ExecutionArtifactsPanel from "@/components/executions/ExecutionArtifactsPanel";
import ExecutionProgressBar from "@/components/executions/ExecutionProgressBar";
import WorkflowDetailsPanel from "@/components/executions/WorkflowDetailsPanel";

const getStatusColor = (status: string) => {
switch (status) {
@@ -279,6 +279,16 @@ export default function ExecutionDetailPage() {
)}
</div>

{/* Workflow Details — combined timeline + tasks panel (top of page for workflows) */}
{isWorkflow && (
<div className="mb-6">
<WorkflowDetailsPanel
parentExecution={execution}
actionRef={execution.action_ref}
/>
</div>
)}

{/* Re-Run Modal */}
{showRerunModal && actionData?.data && (
<ExecuteActionModal
@@ -542,13 +552,6 @@ export default function ExecutionDetailPage() {
</div>
</div>

{/* Workflow Tasks (shown only for workflow executions) */}
{isWorkflow && (
<div className="mt-6">
<WorkflowTasksPanel parentExecutionId={execution.id} />
</div>
)}

{/* Artifacts */}
<div className="mt-6">
<ExecutionArtifactsPanel

@@ -222,6 +222,10 @@ export interface ParamDefinition {
}

/** Workflow definition as stored in the YAML file / API */
/**
* Full workflow definition — used for DB storage and the save API payload.
* Contains both action-level metadata AND the execution graph.
*/
export interface WorkflowYamlDefinition {
ref: string;
label: string;
@@ -235,6 +239,37 @@ export interface WorkflowYamlDefinition {
tags?: string[];
}

/**
* Graph-only workflow definition — written to the `.workflow.yaml` file on disk.
*
* Action-linked workflow files contain only the execution graph. The companion
* action YAML (`actions/{name}.yaml`) is authoritative for `ref`, `label`,
* `description`, `parameters`, `output`, and `tags`.
*/
export interface WorkflowGraphDefinition {
version: string;
vars?: Record<string, unknown>;
tasks: WorkflowYamlTask[];
output_map?: Record<string, string>;
}

/**
* Action YAML definition — written to the companion `actions/{name}.yaml` file.
*
* Controls the action's identity and exposed interface. References the workflow
* file via `workflow_file`.
*/
export interface ActionYamlDefinition {
ref: string;
label: string;
description?: string;
enabled: boolean;
workflow_file: string;
parameters?: Record<string, unknown>;
output?: Record<string, unknown>;
tags?: string[];
}

/** Chart-only metadata for a transition edge (not consumed by the backend) */
export interface TransitionChartMeta {
/** Custom display label for the transition */
@@ -382,6 +417,52 @@ export function builderStateToDefinition(
state: WorkflowBuilderState,
actionSchemas?: Map<string, Record<string, unknown> | null>,
): WorkflowYamlDefinition {
const graph = builderStateToGraph(state, actionSchemas);
const definition: WorkflowYamlDefinition = {
ref: `${state.packRef}.${state.name}`,
label: state.label,
version: state.version,
tasks: graph.tasks,
};

if (state.description) {
definition.description = state.description;
}

if (Object.keys(state.parameters).length > 0) {
definition.parameters = state.parameters;
}

if (Object.keys(state.output).length > 0) {
definition.output = state.output;
}

if (graph.vars && Object.keys(graph.vars).length > 0) {
definition.vars = graph.vars;
}

if (graph.output_map) {
definition.output_map = graph.output_map;
}

if (state.tags.length > 0) {
definition.tags = state.tags;
}

return definition;
}

/**
* Extract the graph-only workflow definition from builder state.
*
* This produces the content that should be written to the `.workflow.yaml`
* file on disk — no `ref`, `label`, `description`, `parameters`, `output`,
* or `tags`. Those belong in the companion action YAML.
*/
export function builderStateToGraph(
state: WorkflowBuilderState,
actionSchemas?: Map<string, Record<string, unknown> | null>,
): WorkflowGraphDefinition {
const tasks: WorkflowYamlTask[] = state.tasks.map((task) => {
const yamlTask: WorkflowYamlTask = {
name: task.name,
@@ -446,34 +527,51 @@ export function builderStateToDefinition(
return yamlTask;
});

const definition: WorkflowYamlDefinition = {
ref: `${state.packRef}.${state.name}`,
label: state.label,
const graph: WorkflowGraphDefinition = {
version: state.version,
tasks,
};

if (Object.keys(state.vars).length > 0) {
graph.vars = state.vars;
}

return graph;
}

/**
* Extract the action YAML definition from builder state.
*
* This produces the content for the companion `actions/{name}.yaml` file
* that owns action-level metadata and references the workflow file.
*/
export function builderStateToActionYaml(
state: WorkflowBuilderState,
): ActionYamlDefinition {
const action: ActionYamlDefinition = {
ref: `${state.packRef}.${state.name}`,
label: state.label,
enabled: state.enabled,
workflow_file: `workflows/${state.name}.workflow.yaml`,
};

if (state.description) {
definition.description = state.description;
action.description = state.description;
}

if (Object.keys(state.parameters).length > 0) {
definition.parameters = state.parameters;
action.parameters = state.parameters;
}

if (Object.keys(state.output).length > 0) {
definition.output = state.output;
}

if (Object.keys(state.vars).length > 0) {
definition.vars = state.vars;
action.output = state.output;
}

if (state.tags.length > 0) {
definition.tags = state.tags;
action.tags = state.tags;
}

return definition;
return action;
}

// ---------------------------------------------------------------------------

@@ -0,0 +1,120 @@
# RUST_MIN_STACK Fix & Workflow File Metadata Separation

**Date**: 2026-02-05

## Summary

Three related changes: (1) fixed `rustc` SIGSEGV crashes during Docker release builds by increasing the compiler stack size, (2) enforced the separation of concerns between action YAML and workflow YAML files across the parser, loaders, and registrars, and (3) updated the workflow builder UI and API save endpoints to produce the correct two-file layout.

## Problem 1: rustc SIGSEGV in Docker Builds

Docker Compose builds were failing with `rustc interrupted by SIGSEGV` during release compilation. The error message suggested increasing `RUST_MIN_STACK` to 16 MiB.

### Fix

Added `ENV RUST_MIN_STACK=16777216` to the build stage of all 7 Rust Dockerfiles:

- `docker/Dockerfile` (both build stages)
- `docker/Dockerfile.optimized`
- `docker/Dockerfile.worker`
- `docker/Dockerfile.worker.optimized`
- `docker/Dockerfile.sensor.optimized`
- `docker/Dockerfile.pack-binaries`
- `docker/Dockerfile.pack-builder`

Also added `export RUST_MIN_STACK := 16777216` to the `Makefile` for local builds.
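
As a minimal sketch (base image, stage name, and build commands here are illustrative, not copied from the repo's Dockerfiles), the change in each build stage looks like:

```dockerfile
# Build stage — image tag and layout are illustrative
FROM rust:1.75 AS builder
# 16 MiB stack for rustc threads; prevents SIGSEGV during release codegen
ENV RUST_MIN_STACK=16777216
WORKDIR /app
COPY . .
RUN cargo build --release
```

Because `ENV` persists for all subsequent `RUN` instructions in the stage, a single line covers every `cargo build` invocation that follows it.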

## Problem 2: Workflow File Metadata Duplication

The `timeline_demo.yaml` workflow file (in `actions/workflows/`) redundantly defined `ref`, `label`, `description`, `parameters`, `output`, and `tags` — all of which are action-level concerns that belong exclusively in the companion action YAML (`actions/timeline_demo.yaml`). This violated the design principle that action YAML owns the interface and workflow YAML owns the execution graph.

The root cause was that `WorkflowDefinition` required `ref` and `label` as mandatory fields, forcing even action-linked workflow files to include them.

### Backend Parser & Loader Changes

**`crates/common/src/workflow/parser.rs`**:
- Made `ref` and `label` optional with `#[serde(default)]` and removed `min = 1` validators
- Added two new tests: `test_parse_action_linked_workflow_without_ref_and_label` and `test_parse_standalone_workflow_still_works_with_ref_and_label`

**`crates/common/src/pack_registry/loader.rs`**:
- `load_workflow_for_action()` now fills in `ref`/`label`/`description`/`tags` from the action YAML when the workflow file omits them (action YAML is authoritative)

**`crates/common/src/workflow/registrar.rs`** and **`crates/executor/src/workflow/registrar.rs`**:
- Added `effective_ref()` and `effective_label()` helper methods that fall back to `WorkflowFile.ref_name` / `WorkflowFile.name` (derived from filename) when the workflow YAML omits them
- Threaded effective values through `create_workflow`, `update_workflow`, `create_companion_action`, and `ensure_companion_action`

**`scripts/load_core_pack.py`**:
- `upsert_workflow_definition()` now derives `ref`/`label`/`description`/`tags` from the action YAML when the workflow file omits them

**`packs.external/python_example/actions/workflows/timeline_demo.yaml`**:
- Stripped `ref`, `label`, `description`, `parameters`, `output`, and `tags` — file now contains only `version`, `vars`, `tasks`, and `output_map`

## Problem 3: Workflow Builder Wrote Full Definition to Disk

The visual workflow builder's save endpoints (`POST /api/v1/packs/{pack_ref}/workflow-files` and `PUT /api/v1/workflows/{ref}/file`) were writing the full `WorkflowYamlDefinition` — including action-level metadata — to the `.workflow.yaml` file on disk. The YAML viewer also showed a single monolithic preview.

### API Save Endpoint Changes

**`crates/api/src/routes/workflows.rs`** — `write_workflow_yaml()`:
- Now writes **two files** per save:
  1. **Workflow YAML** (`actions/workflows/{name}.workflow.yaml`) — graph-only via `strip_action_level_fields()` which removes `ref`, `label`, `description`, `parameters`, `output`, `tags`
  2. **Action YAML** (`actions/{name}.yaml`) — action-level metadata via `build_action_yaml()` which produces `ref`, `label`, `description`, `enabled`, `workflow_file`, `parameters`, `output`, `tags`
- Added `strip_action_level_fields()` helper — extracts only `version`, `vars`, `tasks`, `output_map` from the definition JSON
- Added `build_action_yaml()` helper — constructs the companion action YAML with proper formatting and comments
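
The key-filtering idea behind `strip_action_level_fields()` can be sketched as follows (a hypothetical simplification: the real helper operates on the parsed definition JSON, not a plain string map):

```rust
use std::collections::BTreeMap;

// Keep only the graph-level keys; action-level keys (ref, label,
// description, parameters, output, tags) belong in the action YAML.
fn strip_action_level_fields(def: BTreeMap<String, String>) -> BTreeMap<String, String> {
    const GRAPH_KEYS: [&str; 4] = ["version", "vars", "tasks", "output_map"];
    def.into_iter()
        .filter(|(key, _)| GRAPH_KEYS.contains(&key.as_str()))
        .collect()
}
```

An allow-list (rather than a deny-list) means any future action-level field is excluded from the workflow file by default.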

### Frontend Changes

**`web/src/types/workflow.ts`**:
- Added `WorkflowGraphDefinition` interface (graph-only: `version`, `vars`, `tasks`, `output_map`)
- Added `ActionYamlDefinition` interface (action metadata: `ref`, `label`, `description`, `enabled`, `workflow_file`, `parameters`, `output`, `tags`)
- Added `builderStateToGraph()` — extracts graph-only definition from builder state
- Added `builderStateToActionYaml()` — extracts action metadata from builder state
- Refactored `builderStateToDefinition()` to delegate to `builderStateToGraph()` internally

**`web/src/pages/actions/WorkflowBuilderPage.tsx`**:
- YAML viewer now shows **two side-by-side panels** instead of a single preview:
  - **Left panel (blue, 2/5 width)**: Action YAML — shows `actions/{name}.yaml` content with ref, label, parameters, workflow_file reference
  - **Right panel (green, 3/5 width)**: Workflow YAML — shows `actions/workflows/{name}.workflow.yaml` with graph-only content (version, vars, tasks)
- Each panel has its own copy button, filename label, and description bar explaining the file's role
- Separate `actionYamlPreview` and `workflowYamlPreview` memos replace the old `yamlPreview`

## Design: Two Valid Workflow File Conventions

1. **Standalone workflows** (`workflows/*.yaml`) — no companion action YAML, so they carry their own `ref`, `label`, `parameters`, etc. Loaded by `WorkflowLoader.sync_pack_workflows()`.

2. **Action-linked workflows** (`actions/workflows/*.yaml`) — referenced via `workflow_file` from an action YAML. The action YAML is the single authoritative source for `ref`, `label`, `description`, `parameters`, `output`, and `tags`. The workflow file contains only the execution graph: `version`, `vars`, `tasks`, `output_map`.

The visual workflow builder and API save endpoints now produce the action-linked layout (convention 2) with properly separated files.
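
For illustration, a hypothetical `deploy` workflow saved through the builder would produce a pair of files shaped like this (all names and field values invented):

```yaml
# actions/deploy.yaml — authoritative for identity and interface
ref: mypack.deploy
label: Deploy
enabled: true
workflow_file: workflows/deploy.workflow.yaml
parameters:
  target:
    type: string
```

```yaml
# actions/workflows/deploy.workflow.yaml — execution graph only
version: "1.0"
vars:
  retries: 0
tasks:
  - name: rollout
    action: mypack.rollout
```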

## Files Changed

| File | Change |
|------|--------|
| `docker/Dockerfile` | Added `RUST_MIN_STACK=16777216` (both stages) |
| `docker/Dockerfile.optimized` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.worker` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.worker.optimized` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.sensor.optimized` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.pack-binaries` | Added `RUST_MIN_STACK=16777216` |
| `docker/Dockerfile.pack-builder` | Added `RUST_MIN_STACK=16777216` |
| `Makefile` | Added `export RUST_MIN_STACK` |
| `crates/common/src/workflow/parser.rs` | Optional `ref`/`label`, 2 new tests |
| `crates/common/src/pack_registry/loader.rs` | Action YAML fallback for metadata |
| `crates/common/src/workflow/registrar.rs` | `effective_ref()`/`effective_label()` |
| `crates/executor/src/workflow/registrar.rs` | `effective_ref()`/`effective_label()` |
| `scripts/load_core_pack.py` | Action YAML fallback for metadata |
| `crates/api/src/routes/workflows.rs` | Two-file write, `strip_action_level_fields()`, `build_action_yaml()` |
| `web/src/types/workflow.ts` | `WorkflowGraphDefinition`, `ActionYamlDefinition`, `builderStateToGraph()`, `builderStateToActionYaml()` |
| `web/src/pages/actions/WorkflowBuilderPage.tsx` | Two-panel YAML viewer |
| `packs.external/python_example/actions/workflows/timeline_demo.yaml` | Stripped action-level metadata |
| `AGENTS.md` | Updated Workflow File Storage, YAML viewer, Docker Build Optimization sections |

## Test Results

- All 23 parser tests pass (including 2 new)
- All 9 loader tests pass
- All 2 registrar tests pass
- All 598 workspace lib tests pass
- Zero TypeScript errors
- Zero compiler warnings
- Zero build errors
56
work-summary/2026-03-04-cli-workflow-upload.md
Normal file
@@ -0,0 +1,56 @@

# CLI Workflow Upload Command

**Date**: 2026-03-04

## Summary

Added a `workflow` subcommand group to the Attune CLI, enabling users to upload individual workflow actions to existing packs without requiring a full pack upload. Also fixed a pre-existing `-y` short flag conflict across multiple CLI subcommands.

## Changes Made

### New File: `crates/cli/src/commands/workflow.rs`

New CLI subcommand module with four commands:

- **`attune workflow upload <action-yaml-path>`** — Reads a local action YAML file, extracts the `workflow_file` field to locate the companion workflow YAML, determines the pack from the action ref (e.g., `mypack.deploy` → pack `mypack`), and uploads both files to the API via `POST /api/v1/packs/{pack_ref}/workflow-files`. On 409 Conflict, fails unless `--force` is passed, which triggers a `PUT /api/v1/workflows/{ref}/file` update instead.
- **`attune workflow list`** — Lists workflows with optional `--pack`, `--tags`, and `--search` filters.
- **`attune workflow show <ref>`** — Shows workflow details including a task summary table (name, action, transition count).
- **`attune workflow delete <ref>`** — Deletes a workflow with `--yes` confirmation bypass.

### Modified Files

| File | Change |
|------|--------|
| `crates/cli/src/commands/mod.rs` | Added `pub mod workflow` |
| `crates/cli/src/main.rs` | Added `Workflow` variant to `Commands` enum, import, and dispatch |
| `crates/cli/src/commands/action.rs` | Fixed `-y` short flag conflict on `Delete.yes` |
| `crates/cli/src/commands/trigger.rs` | Fixed `-y` short flag conflict on `Delete.yes` |
| `crates/cli/src/commands/pack.rs` | Fixed `-y` short flag conflict on `Uninstall.yes` |
| `AGENTS.md` | Added workflow CLI documentation to CLI Tool section |

### New Test File: `crates/cli/tests/test_workflows.rs`

21 integration tests covering:
- List (authenticated, by pack, JSON/YAML output, empty, unauthenticated)
- Show (table, JSON, not found)
- Delete (with `--yes`, JSON output)
- Upload (success, JSON output, conflict without force, conflict with force, missing action file, missing workflow file, non-workflow action, invalid YAML)
- Help text (workflow help, upload help)

### Bug Fix: `-y` Short Flag Conflict

The global `--yaml` flag uses `-y` as its short form. Three existing subcommands (`action delete`, `trigger delete`, `pack uninstall`) also defined `-y` as a short flag for `--yes`. This caused a clap runtime panic when both flags were in scope (e.g., `attune --yaml action delete ref --yes`). Fixed by removing the short flag from all `yes` arguments — they now only accept `--yes` (long form).

## Design Decisions

- **Reuses existing API endpoints** — No new server-side code needed. The CLI constructs a `SaveWorkflowFileRequest` JSON payload from the two local YAML files and posts to the existing workflow-file endpoints.
- **Pack determined from action ref** — The pack ref is extracted from the action's `ref` field using the last-dot convention (e.g., `org.infra.deploy` → pack `org.infra`, name `deploy`).
- **Workflow path resolution** — The `workflow_file` value is resolved relative to the action YAML's parent directory, matching how the pack loader resolves it relative to the `actions/` directory.
- **Create-or-update pattern** — Upload attempts create first; on 409 with `--force`, falls back to update. This matches the `pack upload --force` UX pattern.
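
The last-dot convention can be sketched as follows (an illustrative stand-in; the real `split_action_ref` in the CLI crate may differ in signature and error handling):

```rust
/// Split a fully-qualified action ref into (pack_ref, action_name)
/// at the last dot, e.g. "org.infra.deploy" -> ("org.infra", "deploy").
fn split_action_ref(action_ref: &str) -> Option<(&str, &str)> {
    action_ref.rsplit_once('.')
}
```

Splitting on the last dot (rather than the first) is what lets pack refs themselves contain dots.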

## Test Results

- **Unit tests**: 6 new (split_action_ref, resolve_workflow_path variants)
- **Integration tests**: 21 new
- **Total CLI tests**: 160 passed, 0 failed, 1 ignored (pre-existing)
- **Compiler warnings**: 0
82
work-summary/2026-03-04-typed-publish-directives.md
Normal file
@@ -0,0 +1,82 @@

# Typed Publish Directives in Workflow Definitions

**Date**: 2026-03-04

## Problem

The `python_example.timeline_demo` workflow action failed to execute with:

```
Runtime not found: No runtime found for action: python_example.timeline_demo
(available: node.js, python, shell)
```

This error was misleading — the real issue was that the workflow definition YAML
failed to parse during pack registration, so the `workflow_definition` record was
never created and the action's `workflow_def` FK remained NULL. Without a linked
workflow definition, the executor treated it as a regular action and dispatched it
to a worker, which couldn't find a runtime for a workflow action.

### Root Cause

The YAML parsing error was:

```
tasks[7].next[0].publish: data did not match any variant of untagged enum
PublishDirective at line 234 column 11
```

The `PublishDirective::Simple` variant was defined as `HashMap<String, String>`,
but the workflow YAML contained non-string publish values:

```yaml
publish:
  - validation_passed: true # boolean, not a string
  - validation_passed: false # boolean, not a string
```

YAML parses `true`/`false` as booleans, which couldn't deserialize into `String`.

## Solution

Changed `PublishDirective::Simple` from `HashMap<String, String>` to
`HashMap<String, serde_json::Value>` so publish directives can carry any
JSON-compatible type: strings (including template expressions), booleans,
numbers, arrays, objects, and null.

### Files Modified

| File | Change |
|------|--------|
| `crates/common/src/workflow/parser.rs` | `PublishDirective::Simple` value type → `serde_json::Value` |
| `crates/executor/src/workflow/parser.rs` | Same change (executor's local copy) |
| `crates/executor/src/workflow/graph.rs` | Renamed `PublishVar.expression: String` → `PublishVar.value: JsonValue` with `#[serde(alias = "expression")]` for backward compat with stored task graphs; imported `serde_json::Value` |
| `crates/executor/src/scheduler.rs` | Updated publish map from `HashMap<String, String>` to `HashMap<String, JsonValue>` |
| `crates/executor/src/workflow/context.rs` | `publish_from_result` accepts `HashMap<String, JsonValue>`, passes values directly to `render_json` (strings get template-rendered, non-strings pass through unchanged) |
| `crates/common/src/workflow/expression_validator.rs` | Only validates string values as templates; non-string literals are skipped |
| `packs.external/python_example/actions/workflows/timeline_demo.yaml` | Fixed `result().items` → `result().data.items` (secondary bug in workflow definition) |

### Type Preservation

The rendering pipeline now correctly preserves types end-to-end:

- **String values** (e.g., `"{{ result().data }}"`) → rendered through expression engine with type preservation
- **Boolean values** (e.g., `true`) → stored as `JsonValue::Bool(true)`, pass through `render_json` unchanged
- **Numeric values** (e.g., `42`, `3.14`) → stored as `JsonValue::Number`, pass through unchanged
- **Null** → stored as `JsonValue::Null`, passes through unchanged
- **Arrays/Objects** → stored as-is, with any nested string templates rendered recursively
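
The pass-through rule can be sketched with a stand-in value type (hypothetical names; the real code uses `serde_json::Value` and the expression engine's renderer):

```rust
// Stand-in for serde_json::Value: only strings are template-rendered,
// every other variant passes through publish unchanged.
#[derive(Debug, Clone, PartialEq)]
enum PublishValue {
    Str(String),
    Bool(bool),
    Int(i64),
    Null,
}

fn render_publish(value: PublishValue, render: impl Fn(&str) -> String) -> PublishValue {
    match value {
        PublishValue::Str(s) => PublishValue::Str(render(&s)),
        other => other, // Bool/Int/Null are never stringified
    }
}
```

This is what guarantees `validation_passed: true` reaches workflow variables as a boolean rather than the string `"true"`.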

### Tests Added

- `parser::tests::test_typed_publish_values_in_transitions` — verifies YAML parsing of booleans, numbers, strings, templates, and null in publish directives
- `graph::tests::test_typed_publish_values` — verifies typed values survive graph construction
- `context::tests::test_publish_typed_values` — verifies typed values pass through `publish_from_result` with correct types (boolean stays boolean, not string "true")

## Verification

After deploying the fix:

1. Re-registered `python_example` pack — workflow definition created successfully (ID: 2)
2. Action `python_example.timeline_demo` linked to `workflow_def = 2`
3. Executed the workflow — executor correctly identified it as a workflow action and orchestrated 15 child task executions through all stages: initialize → parallel fan-out (build/lint/scan) → merge join → generate items → with_items(×3) → validate → finalize
4. Workflow variables confirmed type preservation: `validation_passed: true` (boolean), `items_processed: 3` (integer), `number_list: [1, 2, 3]` (array)
55
work-summary/2026-03-04-with-items-race-condition-fix.md
Normal file
@@ -0,0 +1,55 @@

# Fix: with_items Race Condition Causing Duplicate Task Dispatches

**Date**: 2026-03-04
**Component**: Executor service (`crates/executor/src/scheduler.rs`)
**Issue**: Workflow tasks downstream of `with_items` tasks were being dispatched multiple times

## Problem

When a `with_items` task (e.g., `process_items` with `concurrency: 3`) had multiple items completing nearly simultaneously, the downstream successor task (e.g., `validate`) would be dispatched once per concurrently-completing item instead of once total.

**Root cause**: Workers update execution status in the database to `Completed` *before* publishing the `ExecutionCompleted` MQ message. The completion listener processes MQ messages sequentially, but by the time it processes item N's completion message, items N+1, N+2, etc. may already be marked `Completed` in the database. This means the `siblings_remaining` query (which checks DB status) returns 0 for multiple items, and each one falls through to transition evaluation and dispatches the successor task.

### Concrete Scenario

With `process_items` (5 items, `concurrency: 3`) → `validate`:

1. Items 3 and 4 finish on separate workers nearly simultaneously
2. Worker for item 3 updates DB: status = Completed, then publishes MQ message
3. Worker for item 4 updates DB: status = Completed, then publishes MQ message
4. Completion listener processes item 3's message:
   - `siblings_remaining` query: item 4 is already Completed in DB → **0 remaining**
   - Falls through → dispatches `validate` ✓
5. Completion listener processes item 4's message:
   - `siblings_remaining` query: all items completed → **0 remaining**
   - Falls through → dispatches `validate` **again** ✗

With `concurrency: 3` and tasks of equal duration, up to 3 items could complete simultaneously, causing the successor to be dispatched 3 times.

## Fix

Two-layer defense added to `advance_workflow()`:

### Layer 1: Persisted state check (with_items early return)

After the `siblings_remaining` check passes (all items done), but before evaluating transitions, the fix checks whether `task_name` is already present in the *persisted* `completed_tasks` or `failed_tasks` from the `workflow_execution` record. If so, a previous `advance_workflow` invocation already handled this task's final completion — return early.

This is efficient because it uses data already loaded at the top of the function.

### Layer 2: Already-dispatched DB check (all successor tasks)

Before dispatching ANY successor task, the fix queries the `execution` table for existing child executions with the same `workflow_execution` ID and `task_name`. If any exist, the successor has already been dispatched by a prior call — skip it.

This belt-and-suspenders guard catches edge cases regardless of how the race manifests, including scenarios where the persisted `completed_tasks` list hasn't been updated yet.
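
Simplified, the two guards combine as below (names are illustrative; the real checks read the `workflow_execution` record and query the `execution` table rather than in-memory sets):

```rust
use std::collections::HashSet;

// Layer 1: if this task is already in the persisted completed/failed lists,
// a prior advance_workflow call finalized it — do nothing.
// Layer 2: if a child execution for the successor already exists, skip dispatch.
fn should_dispatch_successor(
    task_name: &str,
    completed_tasks: &HashSet<String>,
    failed_tasks: &HashSet<String>,
    existing_child_task_names: &HashSet<String>,
    successor: &str,
) -> bool {
    if completed_tasks.contains(task_name) || failed_tasks.contains(task_name) {
        return false;
    }
    !existing_child_task_names.contains(successor)
}
```

Either layer alone closes the common race; together they make duplicate dispatch require two independent failures.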

## Files Changed

- `crates/executor/src/scheduler.rs` — Added two guards in `advance_workflow()`:
  1. Lines ~1035-1066: Early return for with_items tasks already in persisted completed/failed lists
  2. Lines ~1220-1250: DB existence check before dispatching any successor task

## Testing

- All 601 unit tests pass across the workspace (0 failures, 8 intentionally ignored)
- Zero compiler warnings
- The fix is defensive and backward-compatible — no changes to data models, APIs, or MQ protocols
# Workflow Action `workflow_file` Field & Timeline Demo Workflow

**Date**: 2026-03-04
**Scope**: Pack loading architecture, workflow file discovery, demo workflow

## Summary

Introduced a `workflow_file` field for action YAML definitions that separates action-level metadata from workflow graph definitions. This enables a clean conceptual divide: the action YAML controls ref, label, parameters, policies, and tags, while the workflow file contains the execution graph (tasks, transitions, variables). Multiple actions can reference the same workflow file with different configurations, which has implications for policies and parameter mapping.

Also created a comprehensive demo workflow in the `python_example` pack that exercises the Workflow Timeline DAG visualizer.

## Architecture Change

### Before

Workflows could be registered two ways, each with limitations:

1. **`workflows/` directory** (pack root) — scanned by `WorkflowLoader`, registered by `WorkflowRegistrar`, which auto-creates a companion action. No separation of action metadata from workflow definition.
2. **API endpoints** (`POST /api/v1/packs/{ref}/workflow-files`) — writes to `actions/workflows/`, creates both `workflow_definition` and companion `action` records. Only available via the visual builder, not during pack file loading.

The `PackComponentLoader` had no awareness of workflow files at all — it only loaded actions, triggers, runtimes, and sensors from their respective directories.
### After

A third path is now supported, bridging both worlds:

3. **Action YAML with `workflow_file` field** — an action YAML in `actions/*.yaml` can include `workflow_file: workflows/timeline_demo.yaml` (path relative to `actions/`). During pack loading, the `PackComponentLoader`:
   - Reads and parses the referenced workflow YAML
   - Creates/updates a `workflow_definition` record
   - Creates the action record with the `workflow_def` FK linked
   - Skips runtime resolution (workflow actions have no `runner_type`)
   - Uses the workflow file path as the entrypoint

This preserves the clean separation the visual builder already uses (action metadata in one place, workflow graph in another) while making it work with the pack file loading pipeline.
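The branching a loader does on `workflow_file` can be sketched like this (illustrative Python with hypothetical field and helper names; the real logic is in the Rust `PackComponentLoader` and mirrored in `scripts/load_core_pack.py`):

```python
def resolve_action(action_yaml, load_workflow_for_action):
    workflow_file = action_yaml.get("workflow_file")
    if workflow_file is not None:
        # Workflow-backed action: the YAML carries metadata only; the graph
        # lives in the referenced workflow file (path relative to actions/).
        workflow_def_id = load_workflow_for_action(workflow_file)
        return {
            "ref": action_yaml["ref"],
            "entrypoint": workflow_file,  # workflow file path as entrypoint
            "workflow_def": workflow_def_id,
            "runner_type": None,          # runtime resolution skipped
        }
    # Ordinary action: resolve a runtime as before (elided).
    return {"ref": action_yaml["ref"], "entrypoint": action_yaml["entrypoint"]}

action = {"ref": "python_example.timeline_demo",
          "workflow_file": "workflows/timeline_demo.yaml"}
resolved = resolve_action(action, lambda path: 42)  # stubbed definition upsert
assert resolved["entrypoint"] == "workflows/timeline_demo.yaml"
assert resolved["runner_type"] is None
```

The key design point: the same `workflow_file` value serves as both the link to the workflow definition and the action's entrypoint, so no separate entrypoint field is needed for workflow actions.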
### Dual-Directory Workflow Scanning

The `WorkflowLoader` now scans **two** directories:

1. `{pack_dir}/workflows/` — legacy standalone workflow files
2. `{pack_dir}/actions/workflows/` — visual-builder and action-linked workflow files

Files with a `.workflow.yaml` suffix have the `.workflow` portion stripped when deriving the name/ref (e.g., `deploy.workflow.yaml` → name `deploy`, ref `pack.deploy`). If the same ref appears in both directories, `actions/workflows/` wins. The `reload_workflow` method searches `actions/workflows/` first with all extension variants.
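The name/ref derivation described above amounts to stripping the extension and then a trailing `.workflow` marker (a minimal sketch; the real code is in `scan_workflow_files()` in the Rust loader):

```python
def derive_workflow_name(filename, pack_ref):
    # "deploy.workflow.yaml" -> name "deploy", ref "pack.deploy"
    name = filename
    for ext in (".yaml", ".yml"):
        if name.endswith(ext):
            name = name[: -len(ext)]
            break
    if name.endswith(".workflow"):
        name = name[: -len(".workflow")]
    return name, f"{pack_ref}.{name}"

assert derive_workflow_name("deploy.workflow.yaml", "pack") == ("deploy", "pack.deploy")
assert derive_workflow_name("deploy.yaml", "pack") == ("deploy", "pack.deploy")
```

Because both `deploy.workflow.yaml` and `deploy.yaml` derive the same ref, the precedence rule (`actions/workflows/` wins on collision) is what keeps the registry deterministic.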
## Files Changed

### Rust (`crates/common/src/pack_registry/loader.rs`)

- Added imports for `WorkflowDefinitionRepository`, `CreateWorkflowDefinitionInput`, `UpdateWorkflowDefinitionInput`, and `parse_workflow_yaml`
- **`load_actions()`**: When action YAML contains `workflow_file`, calls `load_workflow_for_action()` to create/update the workflow definition, sets the entrypoint to the workflow file path, skips runtime resolution, and links the action to the workflow definition after creation/update
- **`load_workflow_for_action()`** (new): Reads and parses the workflow YAML, creates or updates the `workflow_definition` record, and respects action YAML schema overrides (the action's `parameters`/`output` take precedence over the workflow file's own schemas)

### Rust (`crates/common/src/workflow/loader.rs`)

- **`load_pack_workflows()`**: Now scans both `workflows/` and `actions/workflows/`, with the latter taking precedence on ref collision
- **`reload_workflow()`**: Searches `actions/workflows/` first, trying `.workflow.yaml`, `.yaml`, `.workflow.yml`, and `.yml` extensions before falling back to `workflows/`
- **`scan_workflow_files()`**: Strips the `.workflow` suffix from filenames (e.g., `deploy.workflow.yaml` → name `deploy`)
- **3 new tests**: `test_scan_workflow_files_strips_workflow_suffix`, `test_load_pack_workflows_scans_both_directories`, `test_reload_workflow_finds_actions_workflows_dir`

### Python (`scripts/load_core_pack.py`)

- **`upsert_workflow_definition()`** (new): Reads a workflow YAML file, upserts it into the `workflow_definition` table, and returns the ID
- **`upsert_actions()`**: Detects the `workflow_file` field, calls `upsert_workflow_definition()`, sets the entrypoint to the workflow file path, skips runtime resolution for workflow actions, and links the action to the workflow definition via `UPDATE action SET workflow_def = ...`

### Demo Pack Files (`packs.external/python_example/`)

- **`actions/simulate_work.py`** + **`actions/simulate_work.yaml`**: New action that simulates a unit of work with configurable duration, optional failure simulation, and structured JSON output
- **`actions/timeline_demo.yaml`**: Action YAML with `workflow_file: workflows/timeline_demo.yaml` — controls action-level metadata
- **`actions/workflows/timeline_demo.yaml`**: Workflow definition with 11 tasks and 18 transition edges, exercising parallel fan-out/fan-in, `with_items` + concurrency, failure paths, retries, timeouts, publish directives, and custom edge styling via `__chart_meta__`

### Documentation

- **`AGENTS.md`**: Updated Pack Component Loading Order, added a Workflow Action YAML (`workflow_file` field) section, added a Workflow File Discovery (dual-directory scanning) section, added pitfall #7 (never put workflow content directly in action YAML), and renumbered subsequent items
- **`packs.external/python_example/README.md`**: Added docs for `simulate_work`, the `timeline_demo` workflow, and usage examples

## Test Results

- **596 unit tests passing**, 0 failures
- **0 compiler warnings** across the workspace
- 3 new tests for the workflow loader changes, all passing
- Integration tests require the `attune_test` database (pre-existing infrastructure issue, unrelated)
## Timeline Demo Workflow Features

The `python_example.timeline_demo` workflow creates this execution shape:

```
initialize ─┬─► build_artifacts(6s) ────────────────┐
            ├─► run_linter(3s) ─────┐               ├─► merge_results ─► generate_items ─► process_items(×5, 3∥) ─► validate ─┬─► finalize_success
            └─► security_scan(4s) ──┘               │                                                                         └─► handle_failure ─► finalize_failure
                                    └───────────────┘
```

| Feature | How Exercised |
|---------|--------------|
| Parallel fan-out | `initialize` → 3 branches with different durations |
| Fan-in / join | `merge_results` with `join: 3` |
| `with_items` + concurrency | `process_items` expands to N items, `concurrency: 3` |
| Failure paths | Every task has `{{ failed() }}` transitions |
| Timeout handling | `security_scan` has `timeout: 30` + `{{ timed_out() }}` |
| Retries | `build_artifacts` and `validate` with retry configs |
| Publish directives | Results passed between stages |
| Custom edge colors/labels | Via `__chart_meta__` on transitions |
| Configurable failure | `fail_validation=true` exercises the error path |
# Workflow Timeline DAG Visualization

**Date**: 2026-02-05
**Component**: `web/src/components/executions/workflow-timeline/`
**Integration**: `web/src/pages/executions/ExecutionDetailPage.tsx`

## Overview

Added a Prefect-style workflow run timeline DAG visualization to the execution detail page for workflow executions. The component renders child task executions as horizontal duration bars on a time axis, connected by curved dependency edges that reflect the actual workflow definition transitions.

## Architecture

The implementation is a pure SVG renderer with no additional dependencies — it uses React, TypeScript, and inline SVG only (no D3, no React Flow, no new npm packages).

### Module Structure

```
web/src/components/executions/workflow-timeline/
├── index.ts                # Barrel exports
├── types.ts                # Type definitions, color constants, layout config
├── data.ts                 # Data transformation (executions → timeline structures)
├── layout.ts               # Layout engine (lane assignment, time scaling, edge paths)
├── TimelineRenderer.tsx    # SVG renderer with interactions
└── WorkflowTimelineDAG.tsx # Orchestrator component (data fetching + layout + render)
```

### Data Flow

1. **WorkflowTimelineDAG** (orchestrator) fetches child executions via `useChildExecutions` and the workflow definition via `useWorkflow(actionRef)`.
2. **data.ts** transforms `ExecutionSummary[]` + `WorkflowDefinition` into `TimelineTask[]`, `TimelineEdge[]`, and `TimelineMilestone[]`.
3. **layout.ts** computes lane assignments (greedy packing), time→pixel scale, node positions, grid lines, and cubic Bezier edge paths.
4. **TimelineRenderer** renders everything as SVG with interactive features.
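A cubic Bezier edge path of the kind step 3 computes can be sketched as a tiny path-string builder (illustrative Python; the real code is TypeScript in `layout.ts`, and the function name here is hypothetical):

```python
def edge_path(x1, y1, x2, y2):
    # SVG cubic Bezier from a source bar's right edge to a target bar's left
    # edge; horizontal control points produce a smooth S-curve between lanes.
    dx = max((x2 - x1) / 2, 24)  # minimum bow so short edges stay visible
    return f"M {x1} {y1} C {x1 + dx} {y1}, {x2 - dx} {y2}, {x2} {y2}"

assert edge_path(100, 40, 200, 80) == "M 100 40 C 150.0 40, 150.0 80, 200 80"
```

Keeping both control points horizontal means the curve leaves and enters the bars flat, which reads well when many edges fan into one node.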
## Key Features

### Visualization

- **Task bars**: Horizontal rounded rectangles colored by state (green = completed, blue = running, red = failed, gray = pending, orange = timeout). A left accent bar indicates state. Running tasks pulse.
- **Milestones**: Synthetic start/end diamond nodes plus merge/fork junctions inserted when fan-in/fan-out exceeds 3 tasks.
- **Edges**: Curved cubic Bezier dependency lines with transition-aware coloring and labels derived from the workflow definition (`succeeded`, `failed`, `timed out`, custom expressions). Failure edges are dashed; timeout edges use a dash-dot pattern.
- **Time axis**: Vertical gridlines at "nice" intervals with timestamp labels along the top.
- **Lane packing**: A greedy algorithm assigns tasks to non-overlapping y-lanes, with optional lane reordering to cluster tasks with shared upstream dependencies.
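The greedy lane packing in the last bullet is classic interval packing: iterate tasks by start time and reuse the first lane whose previous bar has already ended (a minimal sketch in Python; the real implementation is TypeScript in `layout.ts`):

```python
def assign_lanes(tasks):
    lane_ends = []  # lane_ends[i] = end time of the last bar placed in lane i
    placement = {}
    for task in sorted(tasks, key=lambda t: t["start"]):
        for i, end in enumerate(lane_ends):
            if task["start"] >= end:   # lane i is free again
                lane_ends[i] = task["end"]
                placement[task["name"]] = i
                break
        else:                          # every lane overlaps: open a new one
            lane_ends.append(task["end"])
            placement[task["name"]] = len(lane_ends) - 1
    return placement

tasks = [{"name": "build", "start": 0, "end": 6},
         {"name": "lint", "start": 0, "end": 3},
         {"name": "merge", "start": 6, "end": 7}]
lanes = assign_lanes(tasks)
assert lanes["build"] != lanes["lint"]   # overlapping bars get separate lanes
assert lanes["merge"] == lanes["build"]  # merge reuses build's lane once free
```

This uses the minimum number of lanes needed for the overlap at any instant, which keeps the chart height proportional to peak parallelism rather than total task count.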
### Workflow Metadata Integration

- Fetches the workflow definition to extract the `next` transition array from each task definition.
- Maps definition task names to execution IDs (handles `with_items` expansions with multiple executions per task name).
- Classifies `when` expressions (`{{ succeeded() }}`, `{{ failed() }}`, `{{ timed_out() }}`) into edge kinds with appropriate colors.
- Reads `__chart_meta__` labels and custom colors from workflow definition transitions.
- Falls back to timing-based heuristic edge inference when no workflow definition is available.
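The `when`-expression classification amounts to matching the three well-known expressions and treating everything else as custom (a sketch in Python; the actual code is TypeScript in `data.ts`, and the treatment of an empty `when` as success is an assumption here):

```python
def classify_edge(when):
    # Map a transition's `when` expression onto an edge kind; a missing or
    # empty expression is assumed to be the default success transition.
    expr = (when or "").strip()
    if expr in ("", "{{ succeeded() }}"):
        return "success"
    if expr == "{{ failed() }}":
        return "failure"
    if expr == "{{ timed_out() }}":
        return "timeout"
    return "custom"

assert classify_edge("{{ failed() }}") == "failure"
assert classify_edge(None) == "success"
assert classify_edge("{{ result.count > 3 }}") == "custom"
```

The edge kind then drives both color and stroke pattern (dashed for failure, dash-dot for timeout, as described above).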
### Interactions

- **Hover tooltip**: Shows task name, state, action ref, start/end times, duration, retry info, and upstream/downstream counts.
- **Click selection**: Clicking a task highlights its full upstream/downstream path (BFS traversal) and dims unrelated nodes/edges.
- **Double-click navigation**: Navigates to the child execution's detail page.
- **Horizontal zoom**: The mouse wheel zooms the x-axis while keeping y-lanes stable. Zoom anchors to the cursor position.
- **Pan**: Alt+drag or middle-mouse-drag pans horizontally via native scroll.
- **Expand/compact toggle**: The expand button widens the chart for complex workflows.
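Cursor-anchored zoom comes down to one invariant: the time value under the cursor must be the same before and after rescaling. A sketch of that arithmetic (Python for illustration; the renderer does this in TypeScript, and the variable names are hypothetical):

```python
def zoom_at_cursor(scale, offset, cursor_x, factor):
    # Invariant: t = (cursor_x - offset) / scale is unchanged by the zoom,
    # so solve for the new offset after multiplying the scale by `factor`.
    new_scale = scale * factor
    new_offset = cursor_x - (cursor_x - offset) * factor
    return new_scale, new_offset

s2, o2 = zoom_at_cursor(scale=2.0, offset=10.0, cursor_x=110.0, factor=1.5)
t_before = (110.0 - 10.0) / 2.0
t_after = (110.0 - o2) / s2
assert abs(t_before - t_after) < 1e-9  # the cursor stays pinned to one time
```

Because only `scale` and `offset` change, the y-lane layout is untouched, which matches the "y-lanes stable" behavior above.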
### Performance

- Edge paths are memoized per layout computation.
- Node lookups use a `Map<string, TimelineNode>` for O(1) access.
- Grid lines and highlighted paths are memoized with stable dependency arrays.
- A ResizeObserver tracks container width for responsive layout without polling.
- No additional npm dependencies; SVG rendering handles 300+ tasks efficiently.

## Integration Point

The `WorkflowTimelineDAG` component is rendered on the execution detail page (`ExecutionDetailPage.tsx`) above the existing `WorkflowTasksPanel`, conditioned on `isWorkflow` (the action has a `workflow_def`).

Both components share a single TanStack Query cache entry for child executions (`["executions", { parent: id }]`), and both subscribe to WebSocket execution streams for real-time updates.

The `WorkflowTimelineDAG` accepts a `ParentExecutionInfo` interface (satisfied by both `ExecutionResponse` and `ExecutionSummary`) to avoid type casting at the integration point.

## Files Changed

| File | Change |
|------|--------|
| `web/src/components/executions/workflow-timeline/types.ts` | New — type definitions |
| `web/src/components/executions/workflow-timeline/data.ts` | New — data transformation |
| `web/src/components/executions/workflow-timeline/layout.ts` | New — layout engine |
| `web/src/components/executions/workflow-timeline/TimelineRenderer.tsx` | New — SVG renderer |
| `web/src/components/executions/workflow-timeline/WorkflowTimelineDAG.tsx` | New — orchestrator |
| `web/src/pages/executions/ExecutionDetailPage.tsx` | Modified — import + render WorkflowTimelineDAG |