Compare commits: `42a9f1d31a...8299e5efcb` (2 commits: `8299e5efcb`, `5da940639a`)

AGENTS.md (47 changes)
@@ -57,7 +57,12 @@ attune/
 2. **attune-executor**: Manages execution lifecycle, scheduling, policy enforcement, workflow orchestration
 3. **attune-worker**: Executes actions in multiple runtimes (Python/Node.js/containers)
 4. **attune-sensor**: Monitors triggers, generates events
-5. **attune-notifier**: Real-time notifications via PostgreSQL LISTEN/NOTIFY + WebSocket
+5. **attune-notifier**: Real-time notifications via PostgreSQL LISTEN/NOTIFY + WebSocket (port 8081)
+- **PostgreSQL listener**: Uses `PgListener::listen_all()` (single batch command) to subscribe to all 11 channels. **Do NOT use individual `listen()` calls in a loop** — this leaves the listener in a broken state where it stops receiving after the last call.
+- **Artifact notifications**: `artifact_created` and `artifact_updated` channels. The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry in the `data` JSONB array for progress-type artifacts, enabling inline progress bars without extra API calls. The Web UI uses the `useArtifactStream` hook to subscribe to `entity_type:artifact` notifications, invalidate React Query caches, and push progress summaries to an `artifact_progress` cache key.
+- **WebSocket protocol** (client → server): `{"type":"subscribe","filter":"entity:execution:<id>"}` — filter formats: `all`, `entity_type:<type>`, `entity:<type>:<id>`, `user:<id>`, `notification_type:<type>`
+- **WebSocket protocol** (server → client): All messages use `#[serde(tag="type")]` — `{"type":"welcome","client_id":"...","message":"..."}` on connect; `{"type":"notification","notification_type":"...","entity_type":"...","entity_id":...,"payload":{...},"user_id":null,"timestamp":"..."}` for notifications; `{"type":"error","message":"..."}` for errors
+- **Key invariant**: The outgoing task in `websocket_server.rs` MUST wrap `Notification` in `ClientMessage::Notification(notification)` before serializing — bare `Notification` serialization omits the `"type"` field and breaks clients
 **Communication**: Services communicate via RabbitMQ for async operations
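The subscribe/dispatch protocol above can be sketched from the client side. This is a minimal Python illustration over plain dicts (the real Web UI client is the TypeScript `useArtifactStream` hook; the function names here are hypothetical):

```python
import json

def make_subscribe(filter_expr: str) -> str:
    # Client -> server: subscribe with one of the documented filter formats:
    # all | entity_type:<type> | entity:<type>:<id> | user:<id> | notification_type:<type>
    return json.dumps({"type": "subscribe", "filter": filter_expr})

def dispatch(raw: str) -> str:
    # Server -> client: every message carries a "type" tag (the
    # #[serde(tag = "type")] representation), which is why the server
    # must serialize ClientMessage::Notification, never a bare Notification.
    msg = json.loads(raw)
    kind = msg["type"]
    if kind == "welcome":
        return f"connected as {msg['client_id']}"
    if kind == "notification":
        return f"{msg['notification_type']} on {msg['entity_type']}:{msg['entity_id']}"
    if kind == "error":
        return f"server error: {msg['message']}"
    raise ValueError(f"unknown message type: {kind}")
```

A client that forgot the `"type"` tag check would have no way to distinguish the three server message shapes, which is exactly the failure mode the key invariant above guards against.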
@@ -66,7 +71,7 @@ attune/
 **All Attune services run via Docker Compose.**
 - **Compose file**: `docker-compose.yaml` (root directory)
-- **Configuration**: `config.docker.yaml` (Docker-specific settings)
+- **Configuration**: `config.docker.yaml` (Docker-specific settings, including `artifacts_dir: /opt/attune/artifacts`)
 - **Default user**: `test@attune.local` / `TestPass123!` (auto-created)
 **Services**:
@@ -74,6 +79,13 @@ attune/
 - **Init** (run-once): migrations, init-user, init-packs
 - **Application**: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000)
+**Volumes** (named):
+- `postgres_data`, `rabbitmq_data`, `redis_data` — infrastructure state
+- `packs_data` — pack files (shared across all services)
+- `runtime_envs` — isolated runtime environments (virtualenvs, node_modules)
+- `artifacts_data` — file-backed artifact storage (shared between API rw, workers rw, executor ro)
+- `*_logs` — per-service log volumes
 **Commands**:
 ```bash
 docker compose up -d # Start all services
@@ -148,6 +160,8 @@ Completion listener advances workflow → Schedules successor tasks → Complete
 - **Inquiry**: Human-in-the-loop async interaction (approvals, inputs)
 - **Identity**: User/service account with RBAC permissions
 - **Key**: Encrypted secrets storage
+- **Artifact**: Tracked output from executions (files, logs, progress indicators). Metadata + optional structured `data` (JSONB). Linked to execution via plain BIGINT (no FK). Supports retention policies (version-count or time-based). File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use disk-based storage on a shared volume; Progress and Url artifacts use DB storage. Each artifact has a `visibility` field (`ArtifactVisibility` enum: `public` or `private`, DB default `private`). Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. **Type-aware API default**: when `visibility` is omitted from `POST /api/v1/artifacts`, the API defaults to `public` for Progress artifacts (informational status indicators anyone watching an execution should see) and `private` for all other types. Callers can always override by explicitly setting `visibility`. Full RBAC enforcement is deferred — the column and basic filtering are in place for future permission checks.
+- **ArtifactVersion**: Immutable content snapshot for an artifact. File-type versions store a `file_path` (relative path on shared volume) with `content` BYTEA left NULL. DB-stored versions use `content` BYTEA and/or `content_json` JSONB. Version number auto-assigned via `next_artifact_version()`. Retention trigger auto-deletes oldest versions beyond limit. Invariant: exactly one of `content`, `content_json`, or `file_path` should be non-NULL per row.
 ## Key Tools & Libraries
@@ -166,6 +180,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete
 - **OpenAPI**: utoipa, utoipa-swagger-ui
 - **Message Queue**: lapin (RabbitMQ)
 - **HTTP Client**: reqwest
+- **Archive/Compression**: tar, flate2 (used for pack upload/extraction)
 - **Testing**: mockall, tempfile, serial_test
 ### Web UI Dependencies
@@ -186,6 +201,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete
 - **Key Settings**:
 - `packs_base_dir` - Where pack files are stored (default: `/opt/attune/packs`)
 - `runtime_envs_dir` - Where isolated runtime environments are created (default: `/opt/attune/runtime_envs`)
+- `artifacts_dir` - Where file-backed artifacts are stored (default: `/opt/attune/artifacts`). Shared volume between API and workers.
 ## Authentication & Security
 - **Auth Type**: JWT (access tokens: 1h, refresh tokens: 7d)
@@ -222,8 +238,10 @@ Completion listener advances workflow → Schedules successor tasks → Complete
 - **Entity History Tracking (TimescaleDB)**: Append-only `<table>_history` hypertables track field-level changes to `execution` and `worker` tables. Populated by PostgreSQL `AFTER INSERT OR UPDATE OR DELETE` triggers — no Rust code changes needed for recording. Uses JSONB diff format (`old_values`/`new_values`) with a `changed_fields TEXT[]` column for efficient filtering. Worker heartbeat-only updates are excluded. There are **no `event_history` or `enforcement_history` tables** — events are immutable and enforcements have a single deterministic status transition, so both tables are hypertables themselves. See `docs/plans/timescaledb-entity-history.md` for full design. The execution history trigger tracks: `status`, `result`, `executor`, `workflow_task`, `env_vars`, `started_at`.
 - **History Large-Field Guardrails**: The `execution` history trigger stores a compact **digest summary** instead of the full value for the `result` column (which can be arbitrarily large). The digest is produced by the `_jsonb_digest_summary(JSONB)` helper function and has the shape `{"digest": "md5:<hex>", "size": <bytes>, "type": "<jsonb_typeof>"}`. This preserves change-detection semantics while avoiding history table bloat. The full result is always available on the live `execution` row. When adding new large JSONB columns to history triggers, use `_jsonb_digest_summary()` instead of storing the raw value.
 - **Nullable FK Fields**: `rule.action` and `rule.trigger` are nullable (`Option<Id>` in Rust) — a rule with NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, `execution.started_at`, and `event.source` are also nullable. `enforcement.event` is nullable but has no FK constraint (event is a hypertable). `execution.enforcement` is nullable but has no FK constraint (enforcement is a hypertable). All FK columns on the execution table (`action`, `parent`, `original_execution`, `enforcement`, `executor`, `workflow_def`) have no FK constraints (execution is a hypertable). `inquiry.execution` and `workflow_execution.execution` also have no FK constraints. `enforcement.resolved_at` is nullable — `None` while status is `created`, set when resolved. `execution.started_at` is nullable — `None` until the worker sets status to `running`.
-**Table Count**: 20 tables total in the schema (including `runtime_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, + `execution` hypertables)
+**Table Count**: 21 tables total in the schema (including `runtime_version`, `artifact_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, + `execution` hypertables)
-**Migration Count**: 9 migrations (`000001` through `000009`) — see `migrations/` directory
+**Migration Count**: 10 migrations (`000001` through `000010`) — see `migrations/` directory
+- **Artifact System**: The `artifact` table stores metadata + structured data (progress entries via JSONB `data` column). The `artifact_version` table stores immutable content snapshots — either on disk (via `file_path` column) or in DB (via `content` BYTEA / `content_json` JSONB). Version numbering is auto-assigned via `next_artifact_version()` SQL function. A DB trigger (`enforce_artifact_retention`) auto-deletes oldest versions when count exceeds the artifact's `retention_limit`. `artifact.execution` is a plain BIGINT (no FK — execution is a hypertable). Progress-type artifacts use `artifact.data` (atomic JSON array append); file-type artifacts use `artifact_version` rows with `file_path` set. Binary content is excluded from default queries for performance (`SELECT_COLUMNS` vs `SELECT_COLUMNS_WITH_CONTENT`). **Visibility**: Each artifact has a `visibility` column (`artifact_visibility_enum`: `public` or `private`, DB default `private`). The `CreateArtifactRequest` DTO accepts `visibility` as `Option<ArtifactVisibility>` — when omitted the API route handler applies a **type-aware default**: `public` for Progress artifacts (informational status indicators), `private` for all other types. Callers can always override explicitly. Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. The visibility field is filterable via the search/list API (`?visibility=public`). Full RBAC enforcement is deferred — the column and basic query filtering are in place for future permission checks. **Notifications**: `artifact_created` and `artifact_updated` DB triggers (in migration `000008`) fire PostgreSQL NOTIFY with entity_type `artifact` and include `visibility` in the payload. The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry of the `data` JSONB array for progress-type artifacts. The Web UI `ExecutionProgressBar` component (`web/src/components/executions/ExecutionProgressBar.tsx`) renders an inline progress bar in the Execution Details card using the `useArtifactStream` hook (`web/src/hooks/useArtifactStream.ts`) for real-time WebSocket updates, with polling fallback via `useExecutionArtifacts`.
+- **File-Based Artifact Storage**: File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use a shared filesystem volume instead of PostgreSQL BYTEA. The `artifact_version.file_path` column stores the relative path from the `artifacts_dir` root (e.g., `mypack/build_log/v1.txt`). Pattern: `{ref_with_dots_as_dirs}/v{version}.{ext}`. The artifact ref (globally unique) is used as the directory key — no execution ID in the path, so artifacts can outlive executions and be shared across them. **Endpoint**: `POST /api/v1/artifacts/{id}/versions/file` allocates a version number and file path without any file content; the execution process writes the file to `$ATTUNE_ARTIFACTS_DIR/{file_path}`. **Download**: `GET /api/v1/artifacts/{id}/download` and version-specific downloads check `file_path` first (read from disk), fall back to DB BYTEA/JSON. **Finalization**: After execution exits, the worker stats all file-backed versions for that execution and updates `size_bytes` on both `artifact_version` and parent `artifact` rows via direct DB access. **Cleanup**: Delete endpoints remove disk files before deleting DB rows; empty parent directories are cleaned up. **Backward compatible**: Existing DB-stored artifacts (`file_path = NULL`) continue to work unchanged.
 - **Pack Component Loading Order**: Runtimes → Triggers → Actions → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order.
 ### Workflow Execution Orchestration
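Two of the mechanisms above can be sketched in Python. These are illustrative analogues, not the actual SQL helpers: `artifact_file_path` mirrors the documented path pattern, and `jsonb_digest_summary` mirrors the shape produced by `_jsonb_digest_summary(JSONB)` (the real function hashes PostgreSQL's text rendering, so digests will not match byte-for-byte):

```python
import hashlib
import json

def artifact_file_path(ref: str, version: int, ext: str) -> str:
    # File-backed versions live at {ref_with_dots_as_dirs}/v{version}.{ext},
    # keyed by the globally unique artifact ref (no execution ID in the path).
    return f"{ref.replace('.', '/')}/v{version}.{ext}"

def jsonb_digest_summary(value) -> dict:
    # Compact stand-in stored in history rows instead of the full
    # (possibly huge) JSONB value: {"digest": "md5:<hex>", "size": ..., "type": ...}
    raw = json.dumps(value, separators=(",", ":"))
    kind = ("object" if isinstance(value, dict) else
            "array" if isinstance(value, list) else
            "string" if isinstance(value, str) else
            "boolean" if isinstance(value, bool) else
            "null" if value is None else "number")
    return {"digest": "md5:" + hashlib.md5(raw.encode()).hexdigest(),
            "size": len(raw), "type": kind}
```

The digest preserves change detection (same value, same digest; different value, almost certainly a different digest) while keeping history rows small.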
@@ -303,6 +321,8 @@ Completion listener advances workflow → Schedules successor tasks → Complete
 - `ATTUNE_ACTION` - Action ref (always present)
 - `ATTUNE_EXEC_ID` - Execution database ID (always present)
 - `ATTUNE_API_TOKEN` - Execution-scoped API token (always present)
+- `ATTUNE_API_URL` - API base URL (always present)
+- `ATTUNE_ARTIFACTS_DIR` - Absolute path to shared artifact volume (always present, e.g., `/opt/attune/artifacts`)
 - `ATTUNE_RULE` - Rule ref (if triggered by rule)
 - `ATTUNE_TRIGGER` - Trigger ref (if triggered by event/trigger)
 - **Custom Environment Variables**: Optional, set via `execution.env_vars` JSONB field (for debug flags, runtime config only)
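An action process can read this contract straight from its environment. A hedged Python sketch (the helper name is hypothetical; only the variable names come from the list above):

```python
import os

def attune_context() -> dict:
    # The always-present variables raise KeyError if missing (a worker bug);
    # rule/trigger are optional and only set for rule/event-triggered runs.
    ctx = {
        "action": os.environ["ATTUNE_ACTION"],
        "exec_id": os.environ["ATTUNE_EXEC_ID"],
        "api_token": os.environ["ATTUNE_API_TOKEN"],
        "api_url": os.environ["ATTUNE_API_URL"],
        "artifacts_dir": os.environ["ATTUNE_ARTIFACTS_DIR"],
    }
    ctx["rule"] = os.environ.get("ATTUNE_RULE")
    ctx["trigger"] = os.environ.get("ATTUNE_TRIGGER")
    return ctx
```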
@@ -489,10 +509,23 @@ make db-reset # Drop & recreate DB
 cargo install --path crates/cli # Install CLI
 attune auth login # Login
 attune pack list # List packs
+attune pack upload ./path/to/pack # Upload local pack to API (works with Docker)
+attune pack register /opt/attune/packs/mypak # Register from API-visible path
 attune action execute <ref> --param key=value
 attune execution list # Monitor executions
 ```
+**Pack Upload vs Register**:
+- `attune pack upload <local-path>` — Tarballs the local directory and POSTs it to `POST /api/v1/packs/upload`. Works regardless of whether the API is local or in Docker. This is the primary way to install packs from your local machine into a Dockerized system.
+- `attune pack register <server-path>` — Sends a filesystem path string to the API (`POST /api/v1/packs/register`). Only works if the path is accessible from inside the API container (e.g. `/opt/attune/packs/...` or `/opt/attune/packs.dev/...`).
+**Pack Upload API endpoint**: `POST /api/v1/packs/upload` — accepts `multipart/form-data` with:
+- `pack` (required): a `.tar.gz` archive of the pack directory
+- `force` (optional, text): `"true"` to overwrite an existing pack with the same ref
+- `skip_tests` (optional, text): `"true"` to skip test execution after registration
+The server extracts the archive to a temp directory, finds the `pack.yaml` (at root or one level deep), then moves it to `{packs_base_dir}/{pack_ref}/` and calls `register_pack_internal`.
 ## Test Failure Protocol
 **Proactively investigate and fix test failures when discovered, even if unrelated to the current task.**
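The client half of the upload flow (building the `.tar.gz` that becomes the `pack` multipart part) can be sketched like this. The HTTP POST itself is omitted, and `pack_tarball` is a hypothetical helper, not the CLI's actual code:

```python
import io
import tarfile
from pathlib import Path

def pack_tarball(pack_dir: str) -> bytes:
    # First step of `attune pack upload`: gzip-compressed tarball of the
    # local pack directory. The server accepts pack.yaml at the archive
    # root or one level deep, so archiving under the directory name works.
    buf = io.BytesIO()
    root = Path(pack_dir)
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(root, arcname=root.name)
    return buf.getvalue()
```

The returned bytes would then be sent as the `pack` field of the `multipart/form-data` request, optionally alongside `force` and `skip_tests` text fields.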
@@ -597,9 +630,9 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
 - **Web UI**: Static files served separately or via API service
 ## Current Development Status
-- ✅ **Complete**: Database migrations (20 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, `system`)
+- ✅ **Complete**: Database migrations (21 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, `system`), Artifact content system (versioned file/JSON storage, progress-append semantics, binary upload/download, retention enforcement, execution-linked artifacts, 18 API endpoints under `/api/v1/artifacts/`, file-backed disk storage via shared volume for file-type artifacts), CLI `--wait` flag (WebSocket-first with polling fallback — connects to notifier on port 8081, subscribes to execution, returns immediately on terminal status; falls back to exponential-backoff REST polling if WS unavailable; polling always gets at least 10s budget regardless of how long WS path ran)
-- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management
+- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management, Artifact UI (web UI for browsing/downloading artifacts), Notifier service WebSocket (functional but lacks auth — the WS connection is unauthenticated; the subscribe filter controls visibility)
-- 📋 **Planned**: Notifier service, execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
+- 📋 **Planned**: Execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
 ## Quick Reference
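The polling-fallback behaviour of `--wait` described above can be sketched as a generic backoff loop. This is a simplification with hypothetical names; the real CLI runs the WebSocket path first and only then falls back to REST polling:

```python
import time

def poll_with_backoff(check, budget_s=10.0, initial_s=0.5, factor=2.0, max_s=5.0):
    # Poll `check` (returns a terminal status string or None) with
    # exponential backoff between attempts. The budget models the
    # documented guarantee: polling gets at least `budget_s` seconds
    # regardless of how long the WebSocket path already ran.
    deadline = time.monotonic() + budget_s
    delay = initial_s
    while True:
        status = check()
        if status is not None:
            return status
        if time.monotonic() >= deadline:
            return None  # caller decides how to report the timeout
        time.sleep(min(delay, max_s))
        delay *= factor
```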
Cargo.toml (17 changes)
@@ -73,6 +73,9 @@ jsonschema = "0.38"
 # OpenAPI/Swagger
 utoipa = { version = "5.4", features = ["chrono", "uuid"] }
 
+# JWT
+jsonwebtoken = { version = "10.2", features = ["rust_crypto"] }
+
 # Encryption
 argon2 = "0.5"
 ring = "0.17"
@@ -91,6 +94,16 @@ hyper = { version = "1.0", features = ["full"] }
 # File system utilities
 walkdir = "2.4"
 
+# Archive/compression
+tar = "0.4"
+flate2 = "1.0"
+
+# WebSocket client
+tokio-tungstenite = { version = "0.26", features = ["native-tls"] }
+
+# URL parsing
+url = "2.5"
+
 # Async utilities
 async-trait = "0.1"
 futures = "0.3"
@@ -98,9 +111,11 @@ futures = "0.3"
 # Version matching
 semver = { version = "1.0", features = ["serde"] }
 
+# Temp files
+tempfile = "3.8"
+
 # Testing
 mockall = "0.14"
-tempfile = "3.8"
 serial_test = "3.2"
 
 # Concurrent data structures
@@ -55,6 +55,11 @@ packs_base_dir: ./packs
 # Pattern: {runtime_envs_dir}/{pack_ref}/{runtime_name}
 runtime_envs_dir: ./runtime_envs
 
+# Artifacts directory (shared volume for file-based artifact storage).
+# File-type artifacts are written here by execution processes and served by the API.
+# Pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}
+artifacts_dir: ./artifacts
+
 # Worker service configuration
 worker:
   service_name: attune-worker-e2e
@@ -26,7 +26,7 @@ async-trait = { workspace = true }
 futures = { workspace = true }
 
 # Web framework
-axum = { workspace = true }
+axum = { workspace = true, features = ["multipart"] }
 tower = { workspace = true }
 tower-http = { workspace = true }
@@ -68,8 +68,14 @@ jsonschema = { workspace = true }
 # HTTP client
 reqwest = { workspace = true }
 
+# Archive/compression
+tar = { workspace = true }
+flate2 = { workspace = true }
+
+# Temp files (used for pack upload extraction)
+tempfile = { workspace = true }
+
 # Authentication
-jsonwebtoken = { version = "10.2", features = ["rust_crypto"] }
 argon2 = { workspace = true }
 rand = "0.9"
@@ -1,389 +1,11 @@
 //! JWT token generation and validation
+//!
+//! This module re-exports all JWT functionality from `attune_common::auth::jwt`.
+//! The canonical implementation lives in the common crate so that all services
+//! (API, worker, sensor) share the same token types and signing logic.
 
-use chrono::{Duration, Utc};
-use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
-use serde::{Deserialize, Serialize};
-use thiserror::Error;
+pub use attune_common::auth::jwt::{
+    extract_token_from_header, generate_access_token, generate_execution_token,
+    generate_refresh_token, generate_sensor_token, generate_token, validate_token, Claims,
+    JwtConfig, JwtError, TokenType,
+};
-
-#[derive(Debug, Error)]
-pub enum JwtError {
-    #[error("Failed to encode JWT: {0}")]
-    EncodeError(String),
-    #[error("Failed to decode JWT: {0}")]
-    DecodeError(String),
-    #[error("Token has expired")]
-    Expired,
-    #[error("Invalid token")]
-    Invalid,
-}
-
-/// JWT Claims structure
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Claims {
-    /// Subject (identity ID)
-    pub sub: String,
-    /// Identity login
-    pub login: String,
-    /// Issued at (Unix timestamp)
-    pub iat: i64,
-    /// Expiration time (Unix timestamp)
-    pub exp: i64,
-    /// Token type (access or refresh)
-    #[serde(default)]
-    pub token_type: TokenType,
-    /// Optional scope (e.g., "sensor", "service")
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub scope: Option<String>,
-    /// Optional metadata (e.g., trigger_types for sensors)
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub metadata: Option<serde_json::Value>,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
-#[serde(rename_all = "lowercase")]
-pub enum TokenType {
-    Access,
-    Refresh,
-    Sensor,
-}
-
-impl Default for TokenType {
-    fn default() -> Self {
-        Self::Access
-    }
-}
-
-/// Configuration for JWT tokens
-#[derive(Debug, Clone)]
-pub struct JwtConfig {
-    /// Secret key for signing tokens
-    pub secret: String,
-    /// Access token expiration duration (in seconds)
-    pub access_token_expiration: i64,
-    /// Refresh token expiration duration (in seconds)
-    pub refresh_token_expiration: i64,
-}
-
-impl Default for JwtConfig {
-    fn default() -> Self {
-        Self {
-            secret: "insecure_default_secret_change_in_production".to_string(),
-            access_token_expiration: 3600, // 1 hour
-            refresh_token_expiration: 604800, // 7 days
-        }
-    }
-}
-
-/// Generate a JWT access token
-///
-/// # Arguments
-/// * `identity_id` - The identity ID
-/// * `login` - The identity login
-/// * `config` - JWT configuration
-///
-/// # Returns
-/// * `Result<String, JwtError>` - The encoded JWT token
-pub fn generate_access_token(
-    identity_id: i64,
-    login: &str,
-    config: &JwtConfig,
-) -> Result<String, JwtError> {
|
|
||||||
generate_token(identity_id, login, config, TokenType::Access)
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Generate a JWT refresh token
|
|
||||||
///
|
|
||||||
/// # Arguments
|
|
||||||
/// * `identity_id` - The identity ID
|
|
||||||
/// * `login` - The identity login
|
|
||||||
/// * `config` - JWT configuration
|
|
||||||
///
|
|
||||||
/// # Returns
|
|
||||||
/// * `Result<String, JwtError>` - The encoded JWT token
|
|
||||||
pub fn generate_refresh_token(
|
|
||||||
identity_id: i64,
|
|
||||||
login: &str,
|
|
||||||
config: &JwtConfig,
|
|
||||||
) -> Result<String, JwtError> {
|
|
||||||
generate_token(identity_id, login, config, TokenType::Refresh)
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Generate a JWT token
|
|
||||||
///
|
|
||||||
/// # Arguments
|
|
||||||
/// * `identity_id` - The identity ID
|
|
||||||
/// * `login` - The identity login
|
|
||||||
/// * `config` - JWT configuration
|
|
||||||
/// * `token_type` - Type of token to generate
|
|
||||||
///
|
|
||||||
/// # Returns
|
|
||||||
/// * `Result<String, JwtError>` - The encoded JWT token
|
|
||||||
pub fn generate_token(
|
|
||||||
identity_id: i64,
|
|
||||||
login: &str,
|
|
||||||
config: &JwtConfig,
|
|
||||||
token_type: TokenType,
|
|
||||||
) -> Result<String, JwtError> {
|
|
||||||
let now = Utc::now();
|
|
||||||
let expiration = match token_type {
|
|
||||||
TokenType::Access => config.access_token_expiration,
|
|
||||||
TokenType::Refresh => config.refresh_token_expiration,
|
|
||||||
TokenType::Sensor => 86400, // Sensor tokens handled separately via generate_sensor_token()
|
|
||||||
};
|
|
||||||
|
|
||||||
let exp = (now + Duration::seconds(expiration)).timestamp();
|
|
||||||
|
|
||||||
let claims = Claims {
|
|
||||||
sub: identity_id.to_string(),
|
|
||||||
login: login.to_string(),
|
|
||||||
iat: now.timestamp(),
|
|
||||||
exp,
|
|
||||||
token_type,
|
|
||||||
scope: None,
|
|
||||||
metadata: None,
|
|
||||||
};
|
|
||||||
|
|
||||||
encode(
|
|
||||||
&Header::default(),
|
|
||||||
&claims,
|
|
||||||
&EncodingKey::from_secret(config.secret.as_bytes()),
|
|
||||||
)
|
|
||||||
.map_err(|e| JwtError::EncodeError(e.to_string()))
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Generate a sensor token with specific trigger types
|
|
||||||
///
|
|
||||||
/// # Arguments
|
|
||||||
/// * `identity_id` - The identity ID for the sensor
|
|
||||||
/// * `sensor_ref` - The sensor reference (e.g., "sensor:core.timer")
|
|
||||||
/// * `trigger_types` - List of trigger types this sensor can create events for
|
|
||||||
/// * `config` - JWT configuration
|
|
||||||
/// * `ttl_seconds` - Time to live in seconds (default: 24 hours)
|
|
||||||
///
|
|
||||||
/// # Returns
|
|
||||||
/// * `Result<String, JwtError>` - The encoded JWT token
|
|
||||||
pub fn generate_sensor_token(
|
|
||||||
identity_id: i64,
|
|
||||||
sensor_ref: &str,
|
|
||||||
trigger_types: Vec<String>,
|
|
||||||
config: &JwtConfig,
|
|
||||||
ttl_seconds: Option<i64>,
|
|
||||||
) -> Result<String, JwtError> {
|
|
||||||
let now = Utc::now();
|
|
||||||
let expiration = ttl_seconds.unwrap_or(86400); // Default: 24 hours
|
|
||||||
let exp = (now + Duration::seconds(expiration)).timestamp();
|
|
||||||
|
|
||||||
let metadata = serde_json::json!({
|
|
||||||
"trigger_types": trigger_types,
|
|
||||||
});
|
|
||||||
|
|
||||||
let claims = Claims {
|
|
||||||
sub: identity_id.to_string(),
|
|
||||||
login: sensor_ref.to_string(),
|
|
||||||
iat: now.timestamp(),
|
|
||||||
exp,
|
|
||||||
token_type: TokenType::Sensor,
|
|
||||||
scope: Some("sensor".to_string()),
|
|
||||||
metadata: Some(metadata),
|
|
||||||
};
|
|
||||||
|
|
||||||
encode(
|
|
||||||
&Header::default(),
|
|
||||||
&claims,
|
|
||||||
&EncodingKey::from_secret(config.secret.as_bytes()),
|
|
||||||
)
|
|
||||||
.map_err(|e| JwtError::EncodeError(e.to_string()))
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Validate and decode a JWT token
|
|
||||||
///
|
|
||||||
/// # Arguments
|
|
||||||
/// * `token` - The JWT token string
|
|
||||||
/// * `config` - JWT configuration
|
|
||||||
///
|
|
||||||
/// # Returns
|
|
||||||
/// * `Result<Claims, JwtError>` - The decoded claims if valid
|
|
||||||
pub fn validate_token(token: &str, config: &JwtConfig) -> Result<Claims, JwtError> {
|
|
||||||
let validation = Validation::default();
|
|
||||||
|
|
||||||
decode::<Claims>(
|
|
||||||
token,
|
|
||||||
&DecodingKey::from_secret(config.secret.as_bytes()),
|
|
||||||
&validation,
|
|
||||||
)
|
|
||||||
.map(|data| data.claims)
|
|
||||||
.map_err(|e| {
|
|
||||||
if e.to_string().contains("ExpiredSignature") {
|
|
||||||
JwtError::Expired
|
|
||||||
} else {
|
|
||||||
JwtError::DecodeError(e.to_string())
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Extract token from Authorization header
|
|
||||||
///
|
|
||||||
/// # Arguments
|
|
||||||
/// * `auth_header` - The Authorization header value
|
|
||||||
///
|
|
||||||
/// # Returns
|
|
||||||
/// * `Option<&str>` - The token if present and valid format
|
|
||||||
pub fn extract_token_from_header(auth_header: &str) -> Option<&str> {
|
|
||||||
if auth_header.starts_with("Bearer ") {
|
|
||||||
Some(&auth_header[7..])
|
|
||||||
} else {
|
|
||||||
None
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
#[cfg(test)]
|
|
||||||
mod tests {
|
|
||||||
use super::*;
|
|
||||||
|
|
||||||
fn test_config() -> JwtConfig {
|
|
||||||
JwtConfig {
|
|
||||||
secret: "test_secret_key_for_testing".to_string(),
|
|
||||||
access_token_expiration: 3600,
|
|
||||||
refresh_token_expiration: 604800,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_generate_and_validate_access_token() {
|
|
||||||
let config = test_config();
|
|
||||||
let token =
|
|
||||||
generate_access_token(123, "testuser", &config).expect("Failed to generate token");
|
|
||||||
|
|
||||||
let claims = validate_token(&token, &config).expect("Failed to validate token");
|
|
||||||
|
|
||||||
assert_eq!(claims.sub, "123");
|
|
||||||
assert_eq!(claims.login, "testuser");
|
|
||||||
assert_eq!(claims.token_type, TokenType::Access);
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_generate_and_validate_refresh_token() {
|
|
||||||
let config = test_config();
|
|
||||||
let token =
|
|
||||||
generate_refresh_token(456, "anotheruser", &config).expect("Failed to generate token");
|
|
||||||
|
|
||||||
let claims = validate_token(&token, &config).expect("Failed to validate token");
|
|
||||||
|
|
||||||
assert_eq!(claims.sub, "456");
|
|
||||||
assert_eq!(claims.login, "anotheruser");
|
|
||||||
assert_eq!(claims.token_type, TokenType::Refresh);
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_invalid_token() {
|
|
||||||
let config = test_config();
|
|
||||||
let result = validate_token("invalid.token.here", &config);
|
|
||||||
assert!(result.is_err());
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_token_with_wrong_secret() {
|
|
||||||
let config = test_config();
|
|
||||||
let token = generate_access_token(789, "user", &config).expect("Failed to generate token");
|
|
||||||
|
|
||||||
let wrong_config = JwtConfig {
|
|
||||||
secret: "different_secret".to_string(),
|
|
||||||
..config
|
|
||||||
};
|
|
||||||
|
|
||||||
let result = validate_token(&token, &wrong_config);
|
|
||||||
assert!(result.is_err());
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_expired_token() {
|
|
||||||
// Create a token that's already expired by setting exp in the past
|
|
||||||
let now = Utc::now().timestamp();
|
|
||||||
let expired_claims = Claims {
|
|
||||||
sub: "999".to_string(),
|
|
||||||
login: "expireduser".to_string(),
|
|
||||||
iat: now - 3600,
|
|
||||||
exp: now - 1800, // Expired 30 minutes ago
|
|
||||||
token_type: TokenType::Access,
|
|
||||||
scope: None,
|
|
||||||
metadata: None,
|
|
||||||
};
|
|
||||||
|
|
||||||
let config = test_config();
|
|
||||||
|
|
||||||
let expired_token = encode(
|
|
||||||
&Header::default(),
|
|
||||||
&expired_claims,
|
|
||||||
&EncodingKey::from_secret(config.secret.as_bytes()),
|
|
||||||
)
|
|
||||||
.expect("Failed to encode token");
|
|
||||||
|
|
||||||
// Validate the expired token
|
|
||||||
let result = validate_token(&expired_token, &config);
|
|
||||||
assert!(matches!(result, Err(JwtError::Expired)));
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_extract_token_from_header() {
|
|
||||||
let header = "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9";
|
|
||||||
let token = extract_token_from_header(header);
|
|
||||||
assert_eq!(token, Some("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"));
|
|
||||||
|
|
||||||
let invalid_header = "Token abc123";
|
|
||||||
let token = extract_token_from_header(invalid_header);
|
|
||||||
assert_eq!(token, None);
|
|
||||||
|
|
||||||
let no_token = "Bearer ";
|
|
||||||
let token = extract_token_from_header(no_token);
|
|
||||||
assert_eq!(token, Some(""));
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_claims_serialization() {
|
|
||||||
let claims = Claims {
|
|
||||||
sub: "123".to_string(),
|
|
||||||
login: "testuser".to_string(),
|
|
||||||
iat: 1234567890,
|
|
||||||
exp: 1234571490,
|
|
||||||
token_type: TokenType::Access,
|
|
||||||
scope: None,
|
|
||||||
metadata: None,
|
|
||||||
};
|
|
||||||
|
|
||||||
let json = serde_json::to_string(&claims).expect("Failed to serialize");
|
|
||||||
let deserialized: Claims = serde_json::from_str(&json).expect("Failed to deserialize");
|
|
||||||
|
|
||||||
assert_eq!(claims.sub, deserialized.sub);
|
|
||||||
assert_eq!(claims.login, deserialized.login);
|
|
||||||
assert_eq!(claims.token_type, deserialized.token_type);
|
|
||||||
}
|
|
||||||
|
|
||||||
#[test]
|
|
||||||
fn test_generate_sensor_token() {
|
|
||||||
let config = test_config();
|
|
||||||
let trigger_types = vec!["core.timer".to_string(), "core.webhook".to_string()];
|
|
||||||
|
|
||||||
let token = generate_sensor_token(
|
|
||||||
999,
|
|
||||||
"sensor:core.timer",
|
|
||||||
trigger_types.clone(),
|
|
||||||
&config,
|
|
||||||
Some(86400),
|
|
||||||
)
|
|
||||||
.expect("Failed to generate sensor token");
|
|
||||||
|
|
||||||
let claims = validate_token(&token, &config).expect("Failed to validate token");
|
|
||||||
|
|
||||||
assert_eq!(claims.sub, "999");
|
|
||||||
assert_eq!(claims.login, "sensor:core.timer");
|
|
||||||
assert_eq!(claims.token_type, TokenType::Sensor);
|
|
||||||
assert_eq!(claims.scope, Some("sensor".to_string()));
|
|
||||||
|
|
||||||
let metadata = claims.metadata.expect("Metadata should be present");
|
|
||||||
let trigger_types_from_token = metadata["trigger_types"]
|
|
||||||
.as_array()
|
|
||||||
.expect("trigger_types should be an array");
|
|
||||||
|
|
||||||
assert_eq!(trigger_types_from_token.len(), 2);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
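The header-parsing rule removed above (and now re-exported from `attune_common::auth::jwt`) is small enough to sketch standalone with only the standard library. `extract_bearer` is a hypothetical name for illustration; note the edge case pinned down by the removed tests, where a bare `"Bearer "` header yields `Some("")` rather than `None`:

```rust
/// Hypothetical standalone sketch of `extract_token_from_header` above.
/// `strip_prefix` is equivalent to the `starts_with` + `[7..]` slice pair in
/// the removed implementation, including the `"Bearer "` -> Some("") edge case.
pub fn extract_bearer(auth_header: &str) -> Option<&str> {
    auth_header.strip_prefix("Bearer ")
}
```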
@@ -10,7 +10,9 @@ use axum::{
 use serde_json::json;
 use std::sync::Arc;
 
-use super::jwt::{extract_token_from_header, validate_token, Claims, JwtConfig, TokenType};
+use attune_common::auth::jwt::{
+    extract_token_from_header, validate_token, Claims, JwtConfig, TokenType,
+};
 
 /// Authentication middleware state
 #[derive(Clone)]
@@ -105,8 +107,11 @@ impl axum::extract::FromRequestParts<crate::state::SharedState> for RequireAuth
             _ => AuthError::InvalidToken,
         })?;
 
-        // Allow both access tokens and sensor tokens
-        if claims.token_type != TokenType::Access && claims.token_type != TokenType::Sensor {
+        // Allow access, sensor, and execution-scoped tokens
+        if claims.token_type != TokenType::Access
+            && claims.token_type != TokenType::Sensor
+            && claims.token_type != TokenType::Execution
+        {
             return Err(AuthError::InvalidToken);
         }
 
@@ -154,7 +159,7 @@ mod tests {
             login: "testuser".to_string(),
             iat: 1234567890,
             exp: 1234571490,
-            token_type: super::super::jwt::TokenType::Access,
+            token_type: TokenType::Access,
             scope: None,
             metadata: None,
         };
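The middleware change above widens the allow-list from two token types to three. A minimal standalone sketch of the gate, using a hypothetical local `TokenType` (the real enum lives in `attune_common::auth::jwt` and carries serde derives; `is_allowed` is an illustrative helper, not the crate's API):

```rust
/// Hypothetical stand-in for the shared token-type enum.
#[derive(Debug, PartialEq, Eq)]
pub enum TokenType {
    Access,
    Refresh,
    Sensor,
    Execution,
}

/// Mirrors the RequireAuth gate after this change: access, sensor, and
/// execution tokens pass; refresh tokens are rejected.
pub fn is_allowed(token_type: &TokenType) -> bool {
    matches!(
        token_type,
        TokenType::Access | TokenType::Sensor | TokenType::Execution
    )
}
```

The `matches!` form is equivalent to the chained `!=` comparisons in the diff, but keeps the allow-list in one place as new variants are added.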
crates/api/src/dto/artifact.rs (new file, 526 lines)
@@ -0,0 +1,526 @@
//! Artifact DTOs for API requests and responses

use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};

use attune_common::models::enums::{
    ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType,
};

// ============================================================================
// Artifact DTOs
// ============================================================================

/// Request DTO for creating a new artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreateArtifactRequest {
    /// Artifact reference (unique identifier, e.g. "build.log", "test.results")
    #[schema(example = "mypack.build_log")]
    pub r#ref: String,

    /// Owner scope type
    #[schema(example = "action")]
    pub scope: OwnerType,

    /// Owner identifier (ref string of the owning entity)
    #[schema(example = "mypack.deploy")]
    pub owner: String,

    /// Artifact type
    #[schema(example = "file_text")]
    pub r#type: ArtifactType,

    /// Visibility level (public = all users, private = scope/owner restricted).
    /// If omitted, defaults to `public` for progress artifacts and `private` for all others.
    pub visibility: Option<ArtifactVisibility>,

    /// Retention policy type
    #[serde(default = "default_retention_policy")]
    #[schema(example = "versions")]
    pub retention_policy: RetentionPolicyType,

    /// Retention limit (number of versions, days, hours, or minutes depending on policy)
    #[serde(default = "default_retention_limit")]
    #[schema(example = 5)]
    pub retention_limit: i32,

    /// Human-readable name
    #[schema(example = "Build Log")]
    pub name: Option<String>,

    /// Optional description
    #[schema(example = "Output log from the build action")]
    pub description: Option<String>,

    /// MIME content type (e.g. "text/plain", "application/json")
    #[schema(example = "text/plain")]
    pub content_type: Option<String>,

    /// Execution ID that produced this artifact
    #[schema(example = 42)]
    pub execution: Option<i64>,

    /// Initial structured data (for progress-type artifacts or metadata)
    #[schema(value_type = Option<Object>)]
    pub data: Option<JsonValue>,
}

fn default_retention_policy() -> RetentionPolicyType {
    RetentionPolicyType::Versions
}

fn default_retention_limit() -> i32 {
    5
}

/// Request DTO for updating an existing artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct UpdateArtifactRequest {
    /// Updated owner scope
    pub scope: Option<OwnerType>,

    /// Updated owner identifier
    pub owner: Option<String>,

    /// Updated artifact type
    pub r#type: Option<ArtifactType>,

    /// Updated visibility
    pub visibility: Option<ArtifactVisibility>,

    /// Updated retention policy
    pub retention_policy: Option<RetentionPolicyType>,

    /// Updated retention limit
    pub retention_limit: Option<i32>,

    /// Updated name
    pub name: Option<String>,

    /// Updated description
    pub description: Option<String>,

    /// Updated content type
    pub content_type: Option<String>,

    /// Updated structured data (replaces existing data entirely)
    pub data: Option<JsonValue>,
}

/// Request DTO for appending to a progress-type artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct AppendProgressRequest {
    /// The entry to append to the progress data array.
    /// Can be any JSON value (string, object, number, etc.)
    #[schema(value_type = Object, example = json!({"step": "compile", "status": "done", "duration_ms": 1234}))]
    pub entry: JsonValue,
}

/// Request DTO for setting the full data payload on an artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct SetDataRequest {
    /// The data to set (replaces existing data entirely)
    #[schema(value_type = Object)]
    pub data: JsonValue,
}

/// Response DTO for artifact information
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactResponse {
    /// Artifact ID
    #[schema(example = 1)]
    pub id: i64,

    /// Artifact reference
    #[schema(example = "mypack.build_log")]
    pub r#ref: String,

    /// Owner scope type
    pub scope: OwnerType,

    /// Owner identifier
    #[schema(example = "mypack.deploy")]
    pub owner: String,

    /// Artifact type
    pub r#type: ArtifactType,

    /// Visibility level
    pub visibility: ArtifactVisibility,

    /// Retention policy
    pub retention_policy: RetentionPolicyType,

    /// Retention limit
    #[schema(example = 5)]
    pub retention_limit: i32,

    /// Human-readable name
    #[schema(example = "Build Log")]
    pub name: Option<String>,

    /// Description
    pub description: Option<String>,

    /// MIME content type
    #[schema(example = "text/plain")]
    pub content_type: Option<String>,

    /// Size of the latest version in bytes
    pub size_bytes: Option<i64>,

    /// Execution that produced this artifact
    pub execution: Option<i64>,

    /// Structured data (progress entries, metadata, etc.)
    #[serde(skip_serializing_if = "Option::is_none")]
    pub data: Option<JsonValue>,

    /// Creation timestamp
    pub created: DateTime<Utc>,

    /// Last update timestamp
    pub updated: DateTime<Utc>,
}

/// Simplified artifact for list endpoints
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactSummary {
    /// Artifact ID
    pub id: i64,

    /// Artifact reference
    pub r#ref: String,

    /// Artifact type
    pub r#type: ArtifactType,

    /// Visibility level
    pub visibility: ArtifactVisibility,

    /// Human-readable name
    pub name: Option<String>,

    /// MIME content type
    pub content_type: Option<String>,

    /// Size of latest version in bytes
    pub size_bytes: Option<i64>,

    /// Execution that produced this artifact
    pub execution: Option<i64>,

    /// Owner scope
    pub scope: OwnerType,

    /// Owner identifier
    pub owner: String,

    /// Creation timestamp
    pub created: DateTime<Utc>,

    /// Last update timestamp
    pub updated: DateTime<Utc>,
}

/// Query parameters for filtering artifacts
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct ArtifactQueryParams {
    /// Filter by owner scope type
    pub scope: Option<OwnerType>,

    /// Filter by owner identifier
    pub owner: Option<String>,

    /// Filter by artifact type
    pub r#type: Option<ArtifactType>,

    /// Filter by visibility
    pub visibility: Option<ArtifactVisibility>,

    /// Filter by execution ID
    pub execution: Option<i64>,

    /// Search by name (case-insensitive substring match)
    pub name: Option<String>,

    /// Page number (1-based)
    #[serde(default = "default_page")]
    #[param(example = 1, minimum = 1)]
    pub page: u32,

    /// Items per page
    #[serde(default = "default_per_page")]
    #[param(example = 20, minimum = 1, maximum = 100)]
    pub per_page: u32,
}

impl ArtifactQueryParams {
    pub fn offset(&self) -> u32 {
        (self.page.saturating_sub(1)) * self.per_page
    }

    pub fn limit(&self) -> u32 {
        self.per_page.min(100)
    }
}

fn default_page() -> u32 {
    1
}

fn default_per_page() -> u32 {
    20
}

// ============================================================================
// ArtifactVersion DTOs
// ============================================================================

/// Request DTO for creating a new artifact version with JSON content
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreateVersionJsonRequest {
    /// Structured JSON content for this version
    #[schema(value_type = Object)]
    pub content: JsonValue,

    /// MIME content type override (defaults to "application/json")
    pub content_type: Option<String>,

    /// Free-form metadata about this version
    #[schema(value_type = Option<Object>)]
    pub meta: Option<JsonValue>,

    /// Who created this version (e.g. action ref, identity, "system")
    pub created_by: Option<String>,
}

/// Request DTO for creating a new file-backed artifact version.
/// No file content is included — the caller writes the file directly to
/// `$ATTUNE_ARTIFACTS_DIR/{file_path}` after receiving the response.
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreateFileVersionRequest {
    /// MIME content type (e.g. "text/plain", "application/octet-stream")
    #[schema(example = "text/plain")]
    pub content_type: Option<String>,

    /// Free-form metadata about this version
    #[schema(value_type = Option<Object>)]
    pub meta: Option<JsonValue>,

    /// Who created this version (e.g. action ref, identity, "system")
    pub created_by: Option<String>,
}

/// Response DTO for an artifact version (without binary content)
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactVersionResponse {
    /// Version ID
    pub id: i64,

    /// Parent artifact ID
    pub artifact: i64,

    /// Version number (1-based)
    pub version: i32,

    /// MIME content type
    pub content_type: Option<String>,

    /// Size of content in bytes
    pub size_bytes: Option<i64>,

    /// Structured JSON content (if this version has JSON data)
    #[serde(skip_serializing_if = "Option::is_none")]
    pub content_json: Option<JsonValue>,

    /// Relative file path for disk-backed versions (from artifacts_dir root).
    /// When present, the file content lives on the shared volume, not in the DB.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub file_path: Option<String>,

    /// Free-form metadata
    #[serde(skip_serializing_if = "Option::is_none")]
    pub meta: Option<JsonValue>,

    /// Who created this version
    pub created_by: Option<String>,

    /// Creation timestamp
    pub created: DateTime<Utc>,
}

/// Simplified version for list endpoints
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactVersionSummary {
    /// Version ID
    pub id: i64,

    /// Version number
    pub version: i32,

    /// MIME content type
    pub content_type: Option<String>,

    /// Size of content in bytes
    pub size_bytes: Option<i64>,

    /// Relative file path for disk-backed versions
    #[serde(skip_serializing_if = "Option::is_none")]
    pub file_path: Option<String>,

    /// Who created this version
    pub created_by: Option<String>,

    /// Creation timestamp
    pub created: DateTime<Utc>,
}

// ============================================================================
// Conversions
// ============================================================================

impl From<attune_common::models::artifact::Artifact> for ArtifactResponse {
    fn from(a: attune_common::models::artifact::Artifact) -> Self {
        Self {
            id: a.id,
            r#ref: a.r#ref,
            scope: a.scope,
            owner: a.owner,
            r#type: a.r#type,
            visibility: a.visibility,
            retention_policy: a.retention_policy,
            retention_limit: a.retention_limit,
            name: a.name,
            description: a.description,
            content_type: a.content_type,
            size_bytes: a.size_bytes,
            execution: a.execution,
            data: a.data,
            created: a.created,
            updated: a.updated,
        }
    }
}

impl From<attune_common::models::artifact::Artifact> for ArtifactSummary {
    fn from(a: attune_common::models::artifact::Artifact) -> Self {
        Self {
            id: a.id,
            r#ref: a.r#ref,
            r#type: a.r#type,
            visibility: a.visibility,
            name: a.name,
            content_type: a.content_type,
            size_bytes: a.size_bytes,
            execution: a.execution,
            scope: a.scope,
            owner: a.owner,
            created: a.created,
            updated: a.updated,
        }
    }
}

impl From<attune_common::models::artifact_version::ArtifactVersion> for ArtifactVersionResponse {
    fn from(v: attune_common::models::artifact_version::ArtifactVersion) -> Self {
        Self {
            id: v.id,
            artifact: v.artifact,
            version: v.version,
            content_type: v.content_type,
            size_bytes: v.size_bytes,
            content_json: v.content_json,
            file_path: v.file_path,
            meta: v.meta,
            created_by: v.created_by,
            created: v.created,
        }
    }
}

impl From<attune_common::models::artifact_version::ArtifactVersion> for ArtifactVersionSummary {
    fn from(v: attune_common::models::artifact_version::ArtifactVersion) -> Self {
        Self {
            id: v.id,
            version: v.version,
            content_type: v.content_type,
            size_bytes: v.size_bytes,
            file_path: v.file_path,
            created_by: v.created_by,
            created: v.created,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_query_params_defaults() {
        let json = r#"{}"#;
        let params: ArtifactQueryParams = serde_json::from_str(json).unwrap();
        assert_eq!(params.page, 1);
        assert_eq!(params.per_page, 20);
        assert!(params.scope.is_none());
        assert!(params.r#type.is_none());
        assert!(params.visibility.is_none());
    }

    #[test]
    fn test_query_params_offset() {
        let params = ArtifactQueryParams {
            scope: None,
            owner: None,
            r#type: None,
            visibility: None,
            execution: None,
            name: None,
            page: 3,
            per_page: 20,
        };
        assert_eq!(params.offset(), 40);
    }

    #[test]
    fn test_query_params_limit_cap() {
        let params = ArtifactQueryParams {
            scope: None,
            owner: None,
            r#type: None,
            visibility: None,
            execution: None,
            name: None,
            page: 1,
            per_page: 200,
        };
        assert_eq!(params.limit(), 100);
    }

    #[test]
    fn test_create_request_defaults() {
        let json = r#"{
            "ref": "test.artifact",
            "scope": "system",
            "owner": "",
            "type": "file_text"
        }"#;
        let req: CreateArtifactRequest = serde_json::from_str(json).unwrap();
        assert_eq!(req.retention_policy, RetentionPolicyType::Versions);
        assert_eq!(req.retention_limit, 5);
        assert!(
            req.visibility.is_none(),
            "Omitting visibility should deserialize as None (server applies type-aware default)"
        );
    }

    #[test]
    fn test_append_progress_request() {
        let json = r#"{"entry": {"step": "build", "status": "done"}}"#;
        let req: AppendProgressRequest = serde_json::from_str(json).unwrap();
        assert!(req.entry.is_object());
    }
}
@@ -2,6 +2,7 @@
 
 pub mod action;
 pub mod analytics;
+pub mod artifact;
 pub mod auth;
 pub mod common;
 pub mod event;
@@ -21,6 +22,11 @@ pub use analytics::{
     ExecutionStatusTimeSeriesResponse, ExecutionThroughputResponse, FailureRateResponse,
     TimeSeriesPoint,
 };
+pub use artifact::{
+    AppendProgressRequest, ArtifactQueryParams, ArtifactResponse, ArtifactSummary,
+    ArtifactVersionResponse, ArtifactVersionSummary, CreateArtifactRequest,
+    CreateVersionJsonRequest, SetDataRequest, UpdateArtifactRequest,
+};
 pub use auth::{
     ChangePasswordRequest, CurrentUserResponse, LoginRequest, RefreshTokenRequest, RegisterRequest,
     TokenResponse,
@@ -33,6 +33,86 @@ struct Args {
     port: Option<u16>,
 }
 
+/// Attempt to connect to RabbitMQ and create a publisher.
+/// Returns the publisher on success.
+async fn try_connect_publisher(mq_url: &str) -> Result<Publisher> {
+    let mq_connection = Connection::connect(mq_url).await?;
+
+    // Setup common message queue infrastructure (exchanges and DLX)
+    let mq_setup_config = attune_common::mq::MessageQueueConfig::default();
+    if let Err(e) = mq_connection
+        .setup_common_infrastructure(&mq_setup_config)
+        .await
+    {
+        warn!(
+            "Failed to setup common MQ infrastructure (may already exist): {}",
+            e
+        );
+    }
+
+    let publisher = Publisher::new(
+        &mq_connection,
+        PublisherConfig {
+            confirm_publish: true,
+            timeout_secs: 30,
+            exchange: "attune.executions".to_string(),
+        },
+    )
+    .await?;
+
+    Ok(publisher)
+}
+
+/// Background task that keeps trying to establish the MQ publisher connection.
+/// Once connected it installs the publisher into `state`, then monitors the
+/// connection health and reconnects if it drops.
+async fn mq_reconnect_loop(state: Arc<AppState>, mq_url: String) {
+    // Retry delay sequence (seconds): 1, 2, 4, 8, 16, 30, 30, …
+    let delays: &[u64] = &[1, 2, 4, 8, 16, 30];
+    let mut attempt: usize = 0;
+
+    loop {
+        let delay = delays.get(attempt).copied().unwrap_or(30);
+
+        match try_connect_publisher(&mq_url).await {
+            Ok(publisher) => {
+                info!(
+                    "Message queue publisher connected (attempt {})",
+                    attempt + 1
+                );
+                state.set_publisher(Arc::new(publisher)).await;
+                attempt = 0; // reset backoff after a successful connect
+
+                // Poll liveness: the publisher will error on use when the
+                // underlying channel is gone. We do a lightweight wait here so
+                // we notice disconnections and attempt to reconnect.
+                loop {
+                    tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
+                    if state.get_publisher().await.is_none() {
+                        // Something cleared the publisher externally; re-enter
+                        // the outer connect loop.
+                        break;
+                    }
+                    // TODO: add a real health-check ping when the lapin API
+                    // exposes one (e.g. channel.basic_noop). For now a broken
+                    // publisher will be detected on the first failed publish and
+                    // can be cleared by the handler to trigger reconnection here.
+                }
+            }
+            Err(e) => {
+                warn!(
+                    "Failed to connect to message queue (attempt {}, retrying in {}s): {}",
+                    attempt + 1,
+                    delay,
+                    e
+                );
+                tokio::time::sleep(tokio::time::Duration::from_secs(delay)).await;
+                attempt = attempt.saturating_add(1);
+            }
+        }
+    }
+}
+
 #[tokio::main]
 async fn main() -> Result<()> {
     // Initialize tracing subscriber
@@ -66,59 +146,21 @@ async fn main() -> Result<()> {
     let database = Database::new(&config.database).await?;
     info!("Database connection established");
 
-    // Initialize message queue connection and publisher (optional)
-    let mut state = AppState::new(database.pool().clone(), config.clone());
+    // Initialize application state (publisher starts as None)
+    let state = Arc::new(AppState::new(database.pool().clone(), config.clone()));
 
+    // Spawn background MQ reconnect loop if a message queue is configured.
+    // The loop will keep retrying until it connects, then install the publisher
+    // into the shared state so request handlers can use it immediately.
     if let Some(ref mq_config) = config.message_queue {
-        info!("Connecting to message queue...");
-        match Connection::connect(&mq_config.url).await {
-            Ok(mq_connection) => {
-                info!("Message queue connection established");
-
-                // Setup common message queue infrastructure (exchanges and DLX)
-                let mq_setup_config = attune_common::mq::MessageQueueConfig::default();
-                match mq_connection
-                    .setup_common_infrastructure(&mq_setup_config)
-                    .await
-                {
-                    Ok(_) => info!("Common message queue infrastructure setup completed"),
-                    Err(e) => {
-                        warn!(
-                            "Failed to setup common MQ infrastructure (may already exist): {}",
-                            e
-                        );
-                    }
-                }
-
-                // Create publisher
-                match Publisher::new(
-                    &mq_connection,
-                    PublisherConfig {
-                        confirm_publish: true,
-                        timeout_secs: 30,
-                        exchange: "attune.executions".to_string(),
-                    },
-                )
-                .await
-                {
-                    Ok(publisher) => {
-                        info!("Message queue publisher initialized");
-                        state = state.with_publisher(Arc::new(publisher));
-                    }
-                    Err(e) => {
-                        warn!("Failed to create publisher: {}", e);
-                        warn!("Executions will not be queued for processing");
-                    }
-                }
-            }
-            Err(e) => {
-                warn!("Failed to connect to message queue: {}", e);
-                warn!("Executions will not be queued for processing");
-            }
-        }
+        info!("Message queue configured – starting background connection loop...");
+        let mq_url = mq_config.url.clone();
+        let state_clone = state.clone();
+        tokio::spawn(async move {
+            mq_reconnect_loop(state_clone, mq_url).await;
+        });
     } else {
-        warn!("Message queue not configured");
-        warn!("Executions will not be queued for processing");
+        warn!("Message queue not configured – executions will not be queued for processing");
     }
 
     info!(
@@ -143,7 +185,7 @@ async fn main() -> Result<()> {
     info!("PostgreSQL notification listener started");
 
     // Create and start server
-    let server = Server::new(std::sync::Arc::new(state));
+    let server = Server::new(state.clone());
 
     info!("Attune API Service is ready");
 
crates/api/src/routes/artifacts.rs (new file, 1326 lines)
File diff suppressed because it is too large
@@ -170,7 +170,7 @@ pub async fn create_event(
     let event = EventRepository::create(&state.db, input).await?;
 
     // Publish EventCreated message to message queue if publisher is available
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let message_payload = EventCreatedPayload {
             event_id: event.id,
             trigger_id: event.trigger,
@@ -99,7 +99,7 @@ pub async fn create_execution(
         .with_source("api-service")
         .with_correlation_id(uuid::Uuid::new_v4());
 
-    if let Some(publisher) = &state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         publisher.publish_envelope(&message).await.map_err(|e| {
             ApiError::InternalServerError(format!("Failed to publish message: {}", e))
         })?;
@@ -403,7 +403,7 @@ pub async fn respond_to_inquiry(
     let updated_inquiry = InquiryRepository::update(&state.db, id, update_input).await?;
 
     // Publish InquiryResponded message if publisher is available
-    if let Some(publisher) = &state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let user_id = user
             .0
             .identity_id()
@@ -2,6 +2,7 @@
 
 pub mod actions;
 pub mod analytics;
+pub mod artifacts;
 pub mod auth;
 pub mod events;
 pub mod executions;
@@ -17,6 +18,7 @@ pub mod workflows;
 
 pub use actions::routes as action_routes;
 pub use analytics::routes as analytics_routes;
+pub use artifacts::routes as artifact_routes;
 pub use auth::routes as auth_routes;
 pub use events::routes as event_routes;
 pub use executions::routes as execution_routes;
@@ -1,7 +1,7 @@
 //! Pack management API routes
 
 use axum::{
-    extract::{Path, Query, State},
+    extract::{Multipart, Path, Query, State},
     http::StatusCode,
     response::IntoResponse,
     routing::get,
@@ -448,6 +448,190 @@ async fn execute_and_store_pack_tests(
     Some(Ok(result))
 }
 
+/// Upload and register a pack from a tar.gz archive (multipart/form-data)
+///
+/// The archive should be a gzipped tar containing the pack directory at its root
+/// (i.e. the archive should unpack to files like `pack.yaml`, `actions/`, etc.).
+/// The multipart field name must be `pack`.
+///
+/// Optional form fields:
+/// - `force`: `"true"` to overwrite an existing pack with the same ref
+/// - `skip_tests`: `"true"` to skip test execution after registration
+#[utoipa::path(
+    post,
+    path = "/api/v1/packs/upload",
+    tag = "packs",
+    request_body(content = String, content_type = "multipart/form-data"),
+    responses(
+        (status = 201, description = "Pack uploaded and registered successfully", body = inline(ApiResponse<PackInstallResponse>)),
+        (status = 400, description = "Invalid archive or missing pack.yaml"),
+        (status = 409, description = "Pack already exists (use force=true to overwrite)"),
+    ),
+    security(("bearer_auth" = []))
+)]
+pub async fn upload_pack(
+    State(state): State<Arc<AppState>>,
+    RequireAuth(user): RequireAuth,
+    mut multipart: Multipart,
+) -> ApiResult<impl IntoResponse> {
+    use std::io::Cursor;
+
+    const MAX_PACK_SIZE: usize = 100 * 1024 * 1024; // 100 MB
+
+    let mut pack_bytes: Option<Vec<u8>> = None;
+    let mut force = false;
+    let mut skip_tests = false;
+
+    // Parse multipart fields
+    while let Some(field) = multipart
+        .next_field()
+        .await
+        .map_err(|e| ApiError::BadRequest(format!("Multipart error: {}", e)))?
+    {
+        match field.name() {
+            Some("pack") => {
+                let data = field.bytes().await.map_err(|e| {
+                    ApiError::BadRequest(format!("Failed to read pack data: {}", e))
+                })?;
+                if data.len() > MAX_PACK_SIZE {
+                    return Err(ApiError::BadRequest(format!(
+                        "Pack archive too large: {} bytes (max {} bytes)",
+                        data.len(),
+                        MAX_PACK_SIZE
+                    )));
+                }
+                pack_bytes = Some(data.to_vec());
+            }
+            Some("force") => {
+                let val = field.text().await.map_err(|e| {
+                    ApiError::BadRequest(format!("Failed to read force field: {}", e))
+                })?;
+                force = val.trim().eq_ignore_ascii_case("true");
+            }
+            Some("skip_tests") => {
+                let val = field.text().await.map_err(|e| {
+                    ApiError::BadRequest(format!("Failed to read skip_tests field: {}", e))
+                })?;
+                skip_tests = val.trim().eq_ignore_ascii_case("true");
+            }
+            _ => {
+                // Consume and ignore unknown fields
+                let _ = field.bytes().await;
+            }
+        }
+    }
+
+    let pack_data = pack_bytes.ok_or_else(|| {
+        ApiError::BadRequest("Missing required 'pack' field in multipart upload".to_string())
+    })?;
+
+    // Extract the tar.gz archive into a temporary directory
+    let temp_extract_dir = tempfile::tempdir().map_err(|e| {
+        ApiError::InternalServerError(format!("Failed to create temp directory: {}", e))
+    })?;
+
+    {
+        let cursor = Cursor::new(&pack_data[..]);
+        let gz = flate2::read::GzDecoder::new(cursor);
+        let mut archive = tar::Archive::new(gz);
+        archive.unpack(temp_extract_dir.path()).map_err(|e| {
+            ApiError::BadRequest(format!(
+                "Failed to extract pack archive (must be a valid .tar.gz): {}",
+                e
+            ))
+        })?;
+    }
+
+    // Find pack.yaml — it may be at the root or inside a single subdirectory
+    // (e.g. when GitHub tarballs add a top-level directory)
+    let pack_root = find_pack_root(temp_extract_dir.path()).ok_or_else(|| {
+        ApiError::BadRequest(
+            "Could not find pack.yaml in the uploaded archive. \
+             Ensure the archive contains pack.yaml at its root or in a single top-level directory."
+                .to_string(),
+        )
+    })?;
+
+    // Read pack ref from pack.yaml to determine the final storage path
+    let pack_yaml_path = pack_root.join("pack.yaml");
+    let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)
+        .map_err(|e| ApiError::InternalServerError(format!("Failed to read pack.yaml: {}", e)))?;
+    let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)
+        .map_err(|e| ApiError::BadRequest(format!("Failed to parse pack.yaml: {}", e)))?;
+    let pack_ref = pack_yaml
+        .get("ref")
+        .and_then(|v| v.as_str())
+        .ok_or_else(|| ApiError::BadRequest("Missing 'ref' field in pack.yaml".to_string()))?
+        .to_string();
+
+    // Move pack to permanent storage
+    use attune_common::pack_registry::PackStorage;
+    let storage = PackStorage::new(&state.config.packs_base_dir);
+    let final_path = storage
+        .install_pack(&pack_root, &pack_ref, None)
+        .map_err(|e| {
+            ApiError::InternalServerError(format!("Failed to move pack to storage: {}", e))
+        })?;
+
+    tracing::info!(
+        "Pack '{}' uploaded and stored at {:?}",
+        pack_ref,
+        final_path
+    );
+
+    // Register the pack in the database
+    let pack_id = register_pack_internal(
+        state.clone(),
+        user.claims.sub,
+        final_path.to_string_lossy().to_string(),
+        force,
+        skip_tests,
+    )
+    .await
+    .map_err(|e| {
+        // Clean up permanent storage on failure
+        let _ = std::fs::remove_dir_all(&final_path);
+        e
+    })?;
+
+    // Fetch the registered pack
+    let pack = PackRepository::find_by_id(&state.db, pack_id)
+        .await?
+        .ok_or_else(|| ApiError::NotFound(format!("Pack with ID {} not found", pack_id)))?;
+
+    let response = ApiResponse::with_message(
+        PackInstallResponse {
+            pack: PackResponse::from(pack),
+            test_result: None,
+            tests_skipped: skip_tests,
+        },
+        "Pack uploaded and registered successfully",
+    );
+
+    Ok((StatusCode::CREATED, Json(response)))
+}
+
+/// Walk the extracted directory and find the directory that contains `pack.yaml`.
+/// Returns the path of the directory containing `pack.yaml`, or `None` if not found.
+fn find_pack_root(base: &std::path::Path) -> Option<PathBuf> {
+    // Check root first
+    if base.join("pack.yaml").exists() {
+        return Some(base.to_path_buf());
+    }
+
+    // Check one level deep (e.g. GitHub tarballs: repo-main/pack.yaml)
+    if let Ok(entries) = std::fs::read_dir(base) {
+        for entry in entries.flatten() {
+            let path = entry.path();
+            if path.is_dir() && path.join("pack.yaml").exists() {
+                return Some(path);
+            }
+        }
+    }
+
+    None
+}
+
 /// Register a pack from local filesystem
 #[utoipa::path(
     post,
@@ -1051,7 +1235,7 @@ async fn register_pack_internal(
 
     // Publish pack.registered event so workers can proactively set up
     // runtime environments (virtualenvs, node_modules, etc.).
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let runtime_names = attune_common::pack_environment::collect_runtime_names_for_pack(
             &state.db, pack.id, &pack_path,
         )
@@ -2241,6 +2425,7 @@ pub fn routes() -> Router<Arc<AppState>> {
             axum::routing::post(register_packs_batch),
         )
         .route("/packs/install", axum::routing::post(install_pack))
+        .route("/packs/upload", axum::routing::post(upload_pack))
        .route("/packs/download", axum::routing::post(download_packs))
        .route(
            "/packs/dependencies",
@@ -341,7 +341,7 @@ pub async fn create_rule(
     let rule = RuleRepository::create(&state.db, rule_input).await?;
 
     // Publish RuleCreated message to notify sensor service
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let payload = RuleCreatedPayload {
             rule_id: rule.id,
             rule_ref: rule.r#ref.clone(),
@@ -440,7 +440,7 @@ pub async fn update_rule(
     // If the rule is enabled and trigger params changed, publish RuleEnabled message
     // to notify sensors to restart with new parameters
     if rule.enabled && trigger_params_changed {
-        if let Some(ref publisher) = state.publisher {
+        if let Some(publisher) = state.get_publisher().await {
             let payload = RuleEnabledPayload {
                 rule_id: rule.id,
                 rule_ref: rule.r#ref.clone(),
@@ -543,7 +543,7 @@ pub async fn enable_rule(
     let rule = RuleRepository::update(&state.db, existing_rule.id, update_input).await?;
 
     // Publish RuleEnabled message to notify sensor service
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let payload = RuleEnabledPayload {
             rule_id: rule.id,
             rule_ref: rule.r#ref.clone(),
@@ -606,7 +606,7 @@ pub async fn disable_rule(
     let rule = RuleRepository::update(&state.db, existing_rule.id, update_input).await?;
 
     // Publish RuleDisabled message to notify sensor service
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let payload = RuleDisabledPayload {
             rule_id: rule.id,
             rule_ref: rule.r#ref.clone(),
@@ -650,7 +650,7 @@ pub async fn receive_webhook(
         "Webhook event {} created, attempting to publish EventCreated message",
         event.id
     );
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let message_payload = EventCreatedPayload {
             event_id: event.id,
             trigger_id: event.trigger,
@@ -57,8 +57,7 @@ impl Server {
             .merge(routes::webhook_routes())
             .merge(routes::history_routes())
             .merge(routes::analytics_routes())
-            // TODO: Add more route modules here
-            // etc.
+            .merge(routes::artifact_routes())
             .with_state(self.state.clone());
 
         // Auth routes at root level (not versioned for frontend compatibility)
@@ -2,7 +2,7 @@
 
 use sqlx::PgPool;
 use std::sync::Arc;
-use tokio::sync::broadcast;
+use tokio::sync::{broadcast, RwLock};
 
 use crate::auth::jwt::JwtConfig;
 use attune_common::{config::Config, mq::Publisher};
@@ -18,8 +18,8 @@ pub struct AppState {
     pub cors_origins: Vec<String>,
     /// Application configuration
    pub config: Arc<Config>,
-    /// Optional message queue publisher
-    pub publisher: Option<Arc<Publisher>>,
+    /// Optional message queue publisher (shared, swappable after reconnection)
+    pub publisher: Arc<RwLock<Option<Arc<Publisher>>>>,
     /// Broadcast channel for SSE notifications
     pub broadcast_tx: broadcast::Sender<String>,
 }
@@ -50,15 +50,20 @@ impl AppState {
             jwt_config: Arc::new(jwt_config),
             cors_origins,
             config: Arc::new(config),
-            publisher: None,
+            publisher: Arc::new(RwLock::new(None)),
             broadcast_tx,
         }
     }
 
-    /// Set the message queue publisher
-    pub fn with_publisher(mut self, publisher: Arc<Publisher>) -> Self {
-        self.publisher = Some(publisher);
-        self
+    /// Set the message queue publisher (called once at startup or after reconnection)
+    pub async fn set_publisher(&self, publisher: Arc<Publisher>) {
+        let mut guard = self.publisher.write().await;
+        *guard = Some(publisher);
+    }
+
+    /// Get a clone of the current publisher, if available
+    pub async fn get_publisher(&self) -> Option<Arc<Publisher>> {
+        self.publisher.read().await.clone()
     }
 }
|||||||
@@ -16,12 +16,13 @@ attune-common = { path = "../common" }
|
|||||||
|
|
||||||
# Async runtime
|
# Async runtime
|
||||||
tokio = { workspace = true }
|
tokio = { workspace = true }
|
||||||
|
futures = { workspace = true }
|
||||||
|
|
||||||
# CLI framework
|
# CLI framework
|
||||||
clap = { workspace = true, features = ["derive", "env", "string"] }
|
clap = { workspace = true, features = ["derive", "env", "string"] }
|
||||||
|
|
||||||
# HTTP client
|
# HTTP client
|
||||||
reqwest = { workspace = true }
|
reqwest = { workspace = true, features = ["multipart", "stream"] }
|
||||||
|
|
||||||
# Serialization
|
# Serialization
|
||||||
serde = { workspace = true }
|
serde = { workspace = true }
|
||||||
@@ -41,6 +42,14 @@ dirs = "5.0"
|
|||||||
|
|
||||||
# URL encoding
|
# URL encoding
|
||||||
urlencoding = "2.1"
|
urlencoding = "2.1"
|
||||||
|
url = { workspace = true }
|
||||||
|
|
||||||
|
# Archive/compression
|
||||||
|
tar = { workspace = true }
|
||||||
|
flate2 = { workspace = true }
|
||||||
|
|
||||||
|
# WebSocket client (for notifier integration)
|
||||||
|
tokio-tungstenite = { workspace = true }
|
||||||
|
|
||||||
# Terminal UI
|
# Terminal UI
|
||||||
colored = "2.1"
|
colored = "2.1"
|
||||||
|
|||||||
@@ -1,5 +1,5 @@
|
|||||||
use anyhow::{Context, Result};
|
use anyhow::{Context, Result};
|
||||||
use reqwest::{Client as HttpClient, Method, RequestBuilder, Response, StatusCode};
|
use reqwest::{multipart, Client as HttpClient, Method, RequestBuilder, Response, StatusCode};
|
||||||
use serde::{de::DeserializeOwned, Serialize};
|
use serde::{de::DeserializeOwned, Serialize};
|
||||||
use std::path::PathBuf;
|
use std::path::PathBuf;
|
||||||
use std::time::Duration;
|
use std::time::Duration;
|
||||||
@@ -39,7 +39,7 @@ impl ApiClient {
|
|||||||
|
|
||||||
Self {
|
Self {
|
||||||
client: HttpClient::builder()
|
client: HttpClient::builder()
|
||||||
.timeout(Duration::from_secs(30))
|
.timeout(Duration::from_secs(300)) // longer timeout for uploads
|
||||||
                .build()
                .expect("Failed to build HTTP client"),
            base_url,
@@ -50,10 +50,15 @@ impl ApiClient {
     }
 
+    /// Return the base URL this client is configured to talk to.
+    pub fn base_url(&self) -> &str {
+        &self.base_url
+    }
+
     /// Create a new API client
     #[cfg(test)]
     pub fn new(base_url: String, auth_token: Option<String>) -> Self {
         let client = HttpClient::builder()
-            .timeout(Duration::from_secs(30))
+            .timeout(Duration::from_secs(300))
             .build()
             .expect("Failed to build HTTP client");
 
@@ -296,6 +301,55 @@ impl ApiClient {
             anyhow::bail!("API error ({}): {}", status, error_text);
         }
     }
 
+    /// POST a multipart/form-data request with a file field and optional text fields.
+    ///
+    /// - `file_field_name`: the multipart field name for the file
+    /// - `file_bytes`: raw bytes of the file content
+    /// - `file_name`: filename hint sent in the Content-Disposition header
+    /// - `mime_type`: MIME type of the file (e.g. `"application/gzip"`)
+    /// - `extra_fields`: additional text key/value fields to include in the form
+    pub async fn multipart_post<T: DeserializeOwned>(
+        &mut self,
+        path: &str,
+        file_field_name: &str,
+        file_bytes: Vec<u8>,
+        file_name: &str,
+        mime_type: &str,
+        extra_fields: Vec<(&str, String)>,
+    ) -> Result<T> {
+        let url = format!("{}/api/v1{}", self.base_url, path);
+
+        let file_part = multipart::Part::bytes(file_bytes)
+            .file_name(file_name.to_string())
+            .mime_str(mime_type)
+            .context("Invalid MIME type")?;
+
+        let mut form = multipart::Form::new().part(file_field_name.to_string(), file_part);
+
+        for (key, value) in extra_fields {
+            form = form.text(key.to_string(), value);
+        }
+
+        let mut req = self.client.post(&url).multipart(form);
+
+        if let Some(token) = &self.auth_token {
+            req = req.bearer_auth(token);
+        }
+
+        let response = req.send().await.context("Failed to send multipart request to API")?;
+
+        // Handle 401 + refresh (same pattern as execute())
+        if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
+            if self.refresh_auth_token().await? {
+                return Err(anyhow::anyhow!(
+                    "Token expired and was refreshed. Please retry your command."
+                ));
+            }
+        }
+
+        self.handle_response(response).await
+    }
 }
 
 #[cfg(test)]
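The new `multipart_post` helper hands body construction to reqwest's `multipart::Form`, so the wire format never appears in the diff. As a reference point, here is a minimal hand-rolled sketch of the multipart/form-data layout such a request carries (an assumption about the on-wire shape, not reqwest's actual implementation; the `pack` and `force` field names come from the upload code below):

```rust
// Sketch of a multipart/form-data body for a call like
// multipart_post("/packs/upload", "pack", bytes, "demo.tar.gz",
//                "application/gzip", vec![("force", "true".to_string())]).
fn build_multipart_body(
    boundary: &str,
    file_field: &str,
    file_name: &str,
    mime_type: &str,
    file_bytes: &[u8],
    extra_fields: &[(&str, &str)],
) -> Vec<u8> {
    let mut body = Vec::new();
    // File part: Content-Disposition carries the field name and the filename hint.
    body.extend_from_slice(
        format!(
            "--{boundary}\r\nContent-Disposition: form-data; name=\"{file_field}\"; \
             filename=\"{file_name}\"\r\nContent-Type: {mime_type}\r\n\r\n"
        )
        .as_bytes(),
    );
    body.extend_from_slice(file_bytes);
    body.extend_from_slice(b"\r\n");
    // One plain text part per extra field (e.g. force=true, skip_tests=true).
    for (key, value) in extra_fields {
        body.extend_from_slice(
            format!(
                "--{boundary}\r\nContent-Disposition: form-data; name=\"{key}\"\r\n\r\n{value}\r\n"
            )
            .as_bytes(),
        );
    }
    // The closing boundary terminates the form.
    body.extend_from_slice(format!("--{boundary}--\r\n").as_bytes());
    body
}

fn main() {
    let body = build_multipart_body(
        "BOUNDARY",
        "pack",
        "demo.tar.gz",
        "application/gzip",
        b"gzip-bytes",
        &[("force", "true")],
    );
    let text = String::from_utf8(body).unwrap();
    assert!(text.contains("name=\"pack\"; filename=\"demo.tar.gz\""));
    assert!(text.contains("name=\"force\"\r\n\r\ntrue"));
    assert!(text.ends_with("--BOUNDARY--\r\n"));
}
```

The actual boundary string is chosen by reqwest at send time; only the part structure matters here.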
@@ -6,6 +6,7 @@ use std::collections::HashMap;
 use crate::client::ApiClient;
 use crate::config::CliConfig;
 use crate::output::{self, OutputFormat};
+use crate::wait::{wait_for_execution, WaitOptions};
 
 #[derive(Subcommand)]
 pub enum ActionCommands {
@@ -74,6 +75,11 @@ pub enum ActionCommands {
         /// Timeout in seconds when waiting (default: 300)
         #[arg(long, default_value = "300", requires = "wait")]
         timeout: u64,
+
+        /// Notifier WebSocket base URL (e.g. ws://localhost:8081).
+        /// Derived from --api-url automatically when not set.
+        #[arg(long, requires = "wait")]
+        notifier_url: Option<String>,
     },
 }
 
@@ -182,6 +188,7 @@ pub async fn handle_action_command(
             params_json,
             wait,
             timeout,
+            notifier_url,
         } => {
             handle_execute(
                 action_ref,
@@ -191,6 +198,7 @@ pub async fn handle_action_command(
                 api_url,
                 wait,
                 timeout,
+                notifier_url,
                 output_format,
             )
             .await
@@ -415,6 +423,7 @@ async fn handle_execute(
     api_url: &Option<String>,
     wait: bool,
     timeout: u64,
+    notifier_url: Option<String>,
     output_format: OutputFormat,
 ) -> Result<()> {
     let config = CliConfig::load_with_profile(profile.as_deref())?;
@@ -453,62 +462,61 @@ async fn handle_execute(
     }
 
     let path = "/executions/execute".to_string();
-    let mut execution: Execution = client.post(&path, &request).await?;
+    let execution: Execution = client.post(&path, &request).await?;
 
-    if wait {
+    if !wait {
         match output_format {
+            OutputFormat::Json | OutputFormat::Yaml => {
+                output::print_output(&execution, output_format)?;
+            }
             OutputFormat::Table => {
-                output::print_info(&format!(
-                    "Waiting for execution {} to complete...",
-                    execution.id
-                ));
+                output::print_success(&format!("Execution {} started", execution.id));
+                output::print_key_value_table(vec![
+                    ("Execution ID", execution.id.to_string()),
+                    ("Action", execution.action_ref.clone()),
+                    ("Status", output::format_status(&execution.status)),
+                ]);
             }
-            _ => {}
         }
-
-        // Poll for completion
-        let start = std::time::Instant::now();
-        let timeout_duration = std::time::Duration::from_secs(timeout);
-
-        loop {
-            if start.elapsed() > timeout_duration {
-                anyhow::bail!("Execution timed out after {} seconds", timeout);
-            }
-
-            let exec_path = format!("/executions/{}", execution.id);
-            execution = client.get(&exec_path).await?;
-
-            if execution.status == "succeeded"
-                || execution.status == "failed"
-                || execution.status == "canceled"
-            {
-                break;
-            }
-
-            tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
-        }
+        return Ok(());
     }
 
+    match output_format {
+        OutputFormat::Table => {
+            output::print_info(&format!(
+                "Waiting for execution {} to complete...",
+                execution.id
+            ));
+        }
+        _ => {}
+    }
+
+    let verbose = matches!(output_format, OutputFormat::Table);
+    let summary = wait_for_execution(WaitOptions {
+        execution_id: execution.id,
+        timeout_secs: timeout,
+        api_client: &mut client,
+        notifier_ws_url: notifier_url,
+        verbose,
+    })
+    .await?;
+
     match output_format {
         OutputFormat::Json | OutputFormat::Yaml => {
-            output::print_output(&execution, output_format)?;
+            output::print_output(&summary, output_format)?;
         }
         OutputFormat::Table => {
-            output::print_success(&format!(
-                "Execution {} {}",
-                execution.id,
-                if wait { "completed" } else { "started" }
-            ));
+            output::print_success(&format!("Execution {} completed", summary.id));
             output::print_section("Execution Details");
             output::print_key_value_table(vec![
-                ("Execution ID", execution.id.to_string()),
-                ("Action", execution.action_ref.clone()),
-                ("Status", output::format_status(&execution.status)),
-                ("Created", output::format_timestamp(&execution.created)),
-                ("Updated", output::format_timestamp(&execution.updated)),
+                ("Execution ID", summary.id.to_string()),
+                ("Action", summary.action_ref.clone()),
+                ("Status", output::format_status(&summary.status)),
+                ("Created", output::format_timestamp(&summary.created)),
+                ("Updated", output::format_timestamp(&summary.updated)),
             ]);
 
-            if let Some(result) = execution.result {
+            if let Some(result) = summary.result {
                 if !result.is_null() {
                     output::print_section("Result");
                     println!("{}", serde_json::to_string_pretty(&result)?);
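The `--notifier-url` help text says the WebSocket URL is derived from `--api-url` when the flag is unset, but the deriving code (`resolve_ws_url` in wait.rs) is cut off at the end of this diff. The sketch below is therefore hypothetical: it assumes the scheme swaps `http` for `ws` (`https` for `wss`) and that the notifier's documented port 8081 replaces the API port:

```rust
// Hypothetical derivation of a notifier WebSocket URL from an API URL.
// Assumptions (not shown in the diff): scheme swap http->ws / https->wss,
// host preserved, port replaced by the notifier's default 8081.
fn derive_notifier_url(api_url: &str) -> Option<String> {
    let (ws_scheme, rest) = if let Some(r) = api_url.strip_prefix("https://") {
        ("wss", r)
    } else if let Some(r) = api_url.strip_prefix("http://") {
        ("ws", r)
    } else {
        // Unrecognised scheme: caller falls back to polling.
        return None;
    };
    // Keep only the host portion: drop any path, then drop any explicit port.
    let authority = rest.split('/').next()?;
    let host = authority.split(':').next()?;
    Some(format!("{}://{}:8081", ws_scheme, host))
}

fn main() {
    assert_eq!(
        derive_notifier_url("http://localhost:8080").as_deref(),
        Some("ws://localhost:8081")
    );
    assert_eq!(
        derive_notifier_url("https://attune.example.com:8080").as_deref(),
        Some("wss://attune.example.com:8081")
    );
    assert_eq!(derive_notifier_url("ftp://x"), None);
}
```

Returning `None` for anything unparseable matches the wait module's behaviour of falling back to REST polling when no notifier URL is available.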
@@ -17,6 +17,14 @@ pub enum AuthCommands {
         /// Password (will prompt if not provided)
         #[arg(long)]
         password: Option<String>,
+
+        /// API URL to log in to (saved into the profile for future use)
+        #[arg(long)]
+        url: Option<String>,
+
+        /// Save credentials into a named profile (creates it if it doesn't exist)
+        #[arg(long)]
+        save_profile: Option<String>,
     },
     /// Log out and clear authentication tokens
     Logout,
@@ -53,8 +61,22 @@ pub async fn handle_auth_command(
     output_format: OutputFormat,
 ) -> Result<()> {
     match command {
-        AuthCommands::Login { username, password } => {
-            handle_login(username, password, profile, api_url, output_format).await
+        AuthCommands::Login {
+            username,
+            password,
+            url,
+            save_profile,
+        } => {
+            // --url is a convenient alias for --api-url at login time
+            let effective_api_url = url.or_else(|| api_url.clone());
+            handle_login(
+                username,
+                password,
+                save_profile.as_ref().or(profile.as_ref()),
+                &effective_api_url,
+                output_format,
+            )
+            .await
         }
         AuthCommands::Logout => handle_logout(profile, output_format).await,
         AuthCommands::Whoami => handle_whoami(profile, api_url, output_format).await,
@@ -65,11 +87,44 @@ pub async fn handle_auth_command(
 async fn handle_login(
     username: String,
     password: Option<String>,
-    profile: &Option<String>,
+    profile: Option<&String>,
     api_url: &Option<String>,
     output_format: OutputFormat,
 ) -> Result<()> {
-    let config = CliConfig::load_with_profile(profile.as_deref())?;
+    // Determine which profile name will own these credentials.
+    // If --save-profile / --profile was given, use that; otherwise use the
+    // currently-active profile.
+    let mut config = CliConfig::load()?;
+    let target_profile_name = profile
+        .cloned()
+        .unwrap_or_else(|| config.current_profile.clone());
+
+    // If a URL was provided and the target profile doesn't exist yet, create it.
+    if !config.profiles.contains_key(&target_profile_name) {
+        let url = api_url.clone().unwrap_or_else(|| "http://localhost:8080".to_string());
+        use crate::config::Profile;
+        config.set_profile(
+            target_profile_name.clone(),
+            Profile {
+                api_url: url,
+                auth_token: None,
+                refresh_token: None,
+                output_format: None,
+                description: None,
+            },
+        )?;
+    } else if let Some(url) = api_url {
+        // Profile exists — update its api_url if an explicit URL was provided.
+        if let Some(p) = config.profiles.get_mut(&target_profile_name) {
+            p.api_url = url.clone();
+        }
+        config.save()?;
+    }
+
+    // Build a temporary config view that points at the target profile so
+    // ApiClient uses the right base URL.
+    let mut login_config = CliConfig::load()?;
+    login_config.current_profile = target_profile_name.clone();
 
     // Prompt for password if not provided
     let password = match password {
@@ -82,7 +137,7 @@ async fn handle_login(
         }
     };
 
-    let mut client = ApiClient::from_config(&config, api_url);
+    let mut client = ApiClient::from_config(&login_config, api_url);
 
     let login_req = LoginRequest {
         login: username,
@@ -91,12 +146,17 @@ async fn handle_login(
 
     let response: LoginResponse = client.post("/auth/login", &login_req).await?;
 
-    // Save tokens to config
+    // Persist tokens into the target profile.
     let mut config = CliConfig::load()?;
-    config.set_auth(
-        response.access_token.clone(),
-        response.refresh_token.clone(),
-    )?;
+    // Ensure the profile exists (it may have just been created above and saved).
+    if let Some(p) = config.profiles.get_mut(&target_profile_name) {
+        p.auth_token = Some(response.access_token.clone());
+        p.refresh_token = Some(response.refresh_token.clone());
+        config.save()?;
+    } else {
+        // Fallback: set_auth writes to the current profile.
+        config.set_auth(response.access_token.clone(), response.refresh_token.clone())?;
+    }
 
     match output_format {
         OutputFormat::Json | OutputFormat::Yaml => {
@@ -105,6 +165,12 @@ async fn handle_login(
         OutputFormat::Table => {
             output::print_success("Successfully logged in");
             output::print_info(&format!("Token expires in {} seconds", response.expires_in));
+            if target_profile_name != config.current_profile {
+                output::print_info(&format!(
+                    "Credentials saved to profile '{}'",
+                    target_profile_name
+                ));
+            }
         }
     }
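The login rework above routes credentials via `save_profile.as_ref().or(profile.as_ref())`, then falls back to the active profile inside `handle_login`. That precedence can be distilled into a pure function (the function and parameter names here are illustrative, not part of the CLI):

```rust
// Precedence distilled from the diff: --save-profile wins over --profile,
// which wins over the currently-active profile from the config file.
fn target_profile<'a>(
    save_profile: Option<&'a str>,
    profile_flag: Option<&'a str>,
    current: &'a str,
) -> &'a str {
    save_profile.or(profile_flag).unwrap_or(current)
}

fn main() {
    assert_eq!(target_profile(Some("prod"), Some("dev"), "default"), "prod");
    assert_eq!(target_profile(None, Some("dev"), "default"), "dev");
    assert_eq!(target_profile(None, None, "default"), "default");
}
```

Keeping the rule in one place like this is what lets `attune auth login --save-profile prod --url https://...` create and populate a profile without first switching to it.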
@@ -1,5 +1,6 @@
-use anyhow::Result;
+use anyhow::{Context, Result};
 use clap::Subcommand;
+use flate2::{write::GzEncoder, Compression};
 use serde::{Deserialize, Serialize};
 use std::path::Path;
 
@@ -77,9 +78,9 @@ pub enum PackCommands {
         #[arg(short = 'y', long)]
         yes: bool,
     },
-    /// Register a pack from a local directory
+    /// Register a pack from a local directory (path must be accessible by the API server)
     Register {
-        /// Path to pack directory
+        /// Path to pack directory (must be a path the API server can access)
         path: String,
 
         /// Force re-registration if pack already exists
@@ -90,6 +91,22 @@ pub enum PackCommands {
         #[arg(long)]
         skip_tests: bool,
     },
+    /// Upload a local pack directory to the API server and register it
+    ///
+    /// This command tarballs the local directory and streams it to the API,
+    /// so it works regardless of whether the API is local or running in Docker.
+    Upload {
+        /// Path to the local pack directory (must contain pack.yaml)
+        path: String,
+
+        /// Force re-registration if a pack with the same ref already exists
+        #[arg(short, long)]
+        force: bool,
+
+        /// Skip running pack tests after upload
+        #[arg(long)]
+        skip_tests: bool,
+    },
     /// Test a pack's test suite
     Test {
         /// Pack reference (name) or path to pack directory
@@ -256,6 +273,15 @@ struct RegisterPackRequest {
     skip_tests: bool,
 }
 
+#[derive(Debug, Serialize, Deserialize)]
+struct UploadPackResponse {
+    pack: Pack,
+    #[serde(default)]
+    test_result: Option<serde_json::Value>,
+    #[serde(default)]
+    tests_skipped: bool,
+}
+
 pub async fn handle_pack_command(
     profile: &Option<String>,
     command: PackCommands,
@@ -296,6 +322,11 @@ pub async fn handle_pack_command(
             force,
             skip_tests,
         } => handle_register(profile, path, force, skip_tests, api_url, output_format).await,
+        PackCommands::Upload {
+            path,
+            force,
+            skip_tests,
+        } => handle_upload(profile, path, force, skip_tests, api_url, output_format).await,
         PackCommands::Test {
             pack,
             verbose,
@@ -593,6 +624,160 @@ async fn handle_uninstall(
     Ok(())
 }
 
+async fn handle_upload(
+    profile: &Option<String>,
+    path: String,
+    force: bool,
+    skip_tests: bool,
+    api_url: &Option<String>,
+    output_format: OutputFormat,
+) -> Result<()> {
+    let pack_dir = Path::new(&path);
+
+    // Validate the directory exists and contains pack.yaml
+    if !pack_dir.exists() {
+        anyhow::bail!("Path does not exist: {}", path);
+    }
+    if !pack_dir.is_dir() {
+        anyhow::bail!("Path is not a directory: {}", path);
+    }
+    let pack_yaml_path = pack_dir.join("pack.yaml");
+    if !pack_yaml_path.exists() {
+        anyhow::bail!("No pack.yaml found in: {}", path);
+    }
+
+    // Read pack ref from pack.yaml so we can display it
+    let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)
+        .context("Failed to read pack.yaml")?;
+    let pack_yaml: serde_yaml_ng::Value =
+        serde_yaml_ng::from_str(&pack_yaml_content).context("Failed to parse pack.yaml")?;
+    let pack_ref = pack_yaml
+        .get("ref")
+        .and_then(|v| v.as_str())
+        .unwrap_or("unknown");
+
+    match output_format {
+        OutputFormat::Table => {
+            output::print_info(&format!(
+                "Uploading pack '{}' from: {}",
+                pack_ref, path
+            ));
+            output::print_info("Creating archive...");
+        }
+        _ => {}
+    }
+
+    // Build an in-memory tar.gz of the pack directory
+    let tar_gz_bytes = {
+        let buf = Vec::new();
+        let enc = GzEncoder::new(buf, Compression::default());
+        let mut tar = tar::Builder::new(enc);
+
+        // Walk the directory and add files to the archive.
+        // We strip the leading path so the archive root is the pack directory contents.
+        let abs_pack_dir = pack_dir
+            .canonicalize()
+            .context("Failed to resolve pack directory path")?;
+
+        append_dir_to_tar(&mut tar, &abs_pack_dir, &abs_pack_dir)?;
+
+        let encoder = tar.into_inner().context("Failed to finalise tar archive")?;
+        encoder.finish().context("Failed to flush gzip stream")?
+    };
+
+    let archive_size_kb = tar_gz_bytes.len() / 1024;
+
+    match output_format {
+        OutputFormat::Table => {
+            output::print_info(&format!(
+                "Archive ready ({} KB), uploading...",
+                archive_size_kb
+            ));
+        }
+        _ => {}
+    }
+
+    let config = CliConfig::load_with_profile(profile.as_deref())?;
+    let mut client = ApiClient::from_config(&config, api_url);
+
+    let mut extra_fields = Vec::new();
+    if force {
+        extra_fields.push(("force", "true".to_string()));
+    }
+    if skip_tests {
+        extra_fields.push(("skip_tests", "true".to_string()));
+    }
+
+    let archive_name = format!("{}.tar.gz", pack_ref);
+    let response: UploadPackResponse = client
+        .multipart_post(
+            "/packs/upload",
+            "pack",
+            tar_gz_bytes,
+            &archive_name,
+            "application/gzip",
+            extra_fields,
+        )
+        .await?;
+
+    match output_format {
+        OutputFormat::Json | OutputFormat::Yaml => {
+            output::print_output(&response, output_format)?;
+        }
+        OutputFormat::Table => {
+            println!();
+            output::print_success(&format!(
+                "✓ Pack '{}' uploaded and registered successfully",
+                response.pack.pack_ref
+            ));
+            output::print_info(&format!("  Version: {}", response.pack.version));
+            output::print_info(&format!("  ID: {}", response.pack.id));
+
+            if response.tests_skipped {
+                output::print_info("  ⚠ Tests were skipped");
+            } else if let Some(test_result) = &response.test_result {
+                if let Some(status) = test_result.get("status").and_then(|s| s.as_str()) {
+                    if status == "passed" {
+                        output::print_success("  ✓ All tests passed");
+                    } else if status == "failed" {
+                        output::print_error("  ✗ Some tests failed");
+                    }
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+/// Recursively append a directory's contents to a tar archive.
+/// `base` is the root directory being archived; `dir` is the current directory
+/// being walked. Files are stored with paths relative to `base`.
+fn append_dir_to_tar<W: std::io::Write>(
+    tar: &mut tar::Builder<W>,
+    base: &Path,
+    dir: &Path,
+) -> Result<()> {
+    for entry in std::fs::read_dir(dir).context("Failed to read directory")? {
+        let entry = entry.context("Failed to read directory entry")?;
+        let entry_path = entry.path();
+        let relative_path = entry_path
+            .strip_prefix(base)
+            .context("Failed to compute relative path")?;
+
+        if entry_path.is_dir() {
+            append_dir_to_tar(tar, base, &entry_path)?;
+        } else if entry_path.is_file() {
+            tar.append_path_with_name(&entry_path, relative_path)
+                .with_context(|| {
+                    format!("Failed to add {} to archive", entry_path.display())
+                })?;
+        }
+        // symlinks are intentionally skipped
+    }
+    Ok(())
+}
+
 async fn handle_register(
     profile: &Option<String>,
     path: String,
@@ -604,19 +789,39 @@ async fn handle_register(
     let config = CliConfig::load_with_profile(profile.as_deref())?;
     let mut client = ApiClient::from_config(&config, api_url);
 
+    // Warn if the path looks like a local filesystem path that the API server
+    // probably can't see (i.e. not a known container mount point).
+    let looks_local = !path.starts_with("/opt/attune/")
+        && !path.starts_with("/app/")
+        && !path.starts_with("/packs");
+    if looks_local {
+        match output_format {
+            OutputFormat::Table => {
+                output::print_info(&format!("Registering pack from: {}", path));
+                eprintln!(
+                    "⚠ Warning: '{}' looks like a local path. If the API is running in \
+                     Docker it may not be able to access this path.\n  \
+                     Use `attune pack upload {}` instead to upload the pack directly.",
+                    path, path
+                );
+            }
+            _ => {}
+        }
+    } else {
+        match output_format {
+            OutputFormat::Table => {
+                output::print_info(&format!("Registering pack from: {}", path));
+            }
+            _ => {}
+        }
+    }
+
     let request = RegisterPackRequest {
         path: path.clone(),
         force,
         skip_tests,
     };
 
-    match output_format {
-        OutputFormat::Table => {
-            output::print_info(&format!("Registering pack from: {}", path));
-        }
-        _ => {}
-    }
-
     let response: PackInstallResponse = client.post("/packs/register", &request).await?;
 
     match output_format {
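`append_dir_to_tar` stores each file under its path relative to the pack root, so the archive unpacks to the directory's contents rather than a nested folder. The relativization step can be shown in isolation with just the standard library (the helper name here is illustrative):

```rust
use std::path::Path;

// Mirrors the strip_prefix step in append_dir_to_tar: an entry's archive name
// is its path relative to the pack root.
fn archive_name(base: &Path, file: &Path) -> Option<String> {
    file.strip_prefix(base)
        .ok()
        .map(|rel| rel.to_string_lossy().into_owned())
}

fn main() {
    let base = Path::new("/home/me/mypack");
    let file = Path::new("/home/me/mypack/actions/run.py");
    assert_eq!(archive_name(base, file).as_deref(), Some("actions/run.py"));
    // A path outside the base yields None rather than a bogus archive name.
    assert_eq!(archive_name(base, Path::new("/etc/passwd")), None);
}
```

This is also why the code canonicalizes the pack directory first: `strip_prefix` is purely lexical, so both paths must be in the same (absolute) form.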
@@ -5,6 +5,7 @@ mod client;
 mod commands;
 mod config;
 mod output;
+mod wait;
 
 use commands::{
     action::{handle_action_command, ActionCommands},
@@ -112,6 +113,11 @@ enum Commands {
         /// Timeout in seconds when waiting (default: 300)
         #[arg(long, default_value = "300", requires = "wait")]
         timeout: u64,
+
+        /// Notifier WebSocket base URL (e.g. ws://localhost:8081).
+        /// Derived from --api-url automatically when not set.
+        #[arg(long, requires = "wait")]
+        notifier_url: Option<String>,
     },
 }
 
@@ -193,6 +199,7 @@ async fn main() {
             params_json,
             wait,
             timeout,
+            notifier_url,
         } => {
             // Delegate to action execute command
             handle_action_command(
@@ -203,6 +210,7 @@ async fn main() {
                 params_json,
                 wait,
                 timeout,
+                notifier_url,
             },
             &cli.api_url,
             output_format,
556
crates/cli/src/wait.rs
Normal file
556
crates/cli/src/wait.rs
Normal file
@@ -0,0 +1,556 @@
|
|||||||
|
//! Waiting for execution completion.
|
||||||
|
//!
|
||||||
|
//! Tries to connect to the notifier WebSocket first so the CLI reacts
|
||||||
|
//! *immediately* when the execution reaches a terminal state. If the
|
||||||
|
//! notifier is unreachable (not configured, different port, Docker network
|
||||||
|
//! boundary, etc.) it transparently falls back to REST polling.
|
||||||
|
//!
|
||||||
|
//! Public surface:
|
||||||
|
//! - [`WaitOptions`] – caller-supplied parameters
|
||||||
|
//! - [`wait_for_execution`] – the single entry point
|
||||||
|
|
||||||
|
use anyhow::Result;
|
||||||
|
use futures::{SinkExt, StreamExt};
|
||||||
|
use serde::{Deserialize, Serialize};
|
||||||
|
use std::time::{Duration, Instant};
|
||||||
|
use tokio_tungstenite::{connect_async, tungstenite::Message};
|
||||||
|
|
||||||
|
use crate::client::ApiClient;
|
||||||
|
|
||||||
|
// ── terminal status helpers ───────────────────────────────────────────────────
|
||||||
|
|
||||||
|
fn is_terminal(status: &str) -> bool {
|
||||||
|
matches!(
|
||||||
|
status,
|
||||||
|
"completed" | "succeeded" | "failed" | "canceled" | "cancelled" | "timeout" | "timed_out"
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── public types ─────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
/// Result returned when the wait completes.
|
||||||
|
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||||
|
pub struct ExecutionSummary {
|
||||||
|
pub id: i64,
|
||||||
|
pub status: String,
|
||||||
|
pub action_ref: String,
|
||||||
|
pub result: Option<serde_json::Value>,
|
||||||
|
pub created: String,
|
||||||
|
pub updated: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Parameters that control how we wait.
|
||||||
|
pub struct WaitOptions<'a> {
|
||||||
|
/// Execution ID to watch.
|
||||||
|
pub execution_id: i64,
|
||||||
|
/// Overall wall-clock limit (seconds). Defaults to 300 if `None`.
|
||||||
|
pub timeout_secs: u64,
|
||||||
|
/// REST API client (already authenticated).
|
||||||
|
pub api_client: &'a mut ApiClient,
|
||||||
|
/// Base URL of the *notifier* WebSocket service, e.g. `ws://localhost:8081`.
|
||||||
|
/// Derived from the API URL when not explicitly set.
|
||||||
|
pub notifier_ws_url: Option<String>,
|
||||||
|
/// If `true`, print progress lines to stderr.
|
||||||
|
pub verbose: bool,
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── notifier WebSocket messages (mirrors websocket_server.rs) ────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Serialize)]
|
||||||
|
#[serde(tag = "type")]
|
||||||
|
enum ClientMsg {
|
||||||
|
#[serde(rename = "subscribe")]
|
||||||
|
Subscribe { filter: String },
|
||||||
|
#[serde(rename = "ping")]
|
||||||
|
Ping,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
#[serde(tag = "type")]
|
||||||
|
enum ServerMsg {
|
||||||
|
#[serde(rename = "welcome")]
|
||||||
|
Welcome {
|
||||||
|
client_id: String,
|
||||||
|
#[allow(dead_code)]
|
||||||
|
message: String,
|
||||||
|
},
|
||||||
|
#[serde(rename = "notification")]
|
||||||
|
Notification(NotifierNotification),
|
||||||
|
#[serde(rename = "error")]
|
||||||
|
Error { message: String },
|
||||||
|
#[serde(other)]
|
||||||
|
Unknown,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
struct NotifierNotification {
|
||||||
|
    pub notification_type: String,
    pub entity_type: String,
    pub entity_id: i64,
    pub payload: serde_json::Value,
}

// ── REST execution shape ──────────────────────────────────────────────────────

#[derive(Debug, Deserialize)]
struct RestExecution {
    id: i64,
    action_ref: String,
    status: String,
    result: Option<serde_json::Value>,
    created: String,
    updated: String,
}

impl From<RestExecution> for ExecutionSummary {
    fn from(e: RestExecution) -> Self {
        Self {
            id: e.id,
            status: e.status,
            action_ref: e.action_ref,
            result: e.result,
            created: e.created,
            updated: e.updated,
        }
    }
}

// ── entry point ───────────────────────────────────────────────────────────────

/// Wait for `execution_id` to reach a terminal status.
///
/// 1. Attempts a WebSocket connection to the notifier and subscribes to the
///    specific execution with the filter `entity:execution:<id>`.
/// 2. If the connection fails (or the notifier URL can't be derived), it falls
///    back to polling `GET /executions/<id>` with exponential back-off
///    (500 ms growing to a 2 s cap).
/// 3. In both cases, an overall `timeout_secs` wall-clock limit is enforced.
///
/// Returns the final [`ExecutionSummary`] on success, or an error if the
/// timeout is exceeded or a fatal error occurs.
pub async fn wait_for_execution(opts: WaitOptions<'_>) -> Result<ExecutionSummary> {
    let overall_deadline = Instant::now() + Duration::from_secs(opts.timeout_secs);

    // Reserve at least this long for polling after WebSocket gives up.
    // This ensures the polling fallback always gets a fair chance even when
    // the WS path consumes most of the timeout budget.
    const MIN_POLL_BUDGET: Duration = Duration::from_secs(10);

    // Try WebSocket path first; fall through to polling on any connection error.
    if let Some(ws_url) = resolve_ws_url(&opts) {
        // Give WS at most (timeout - MIN_POLL_BUDGET) so polling always has headroom.
        let ws_deadline = if overall_deadline > Instant::now() + MIN_POLL_BUDGET {
            overall_deadline - MIN_POLL_BUDGET
        } else {
            // Timeout is very short; give WS the full (short) deadline and let
            // polling reuse the same overall deadline afterwards.
            overall_deadline
        };

        match wait_via_websocket(
            &ws_url,
            opts.execution_id,
            ws_deadline,
            opts.verbose,
            opts.api_client,
        )
        .await
        {
            Ok(summary) => return Ok(summary),
            Err(ws_err) => {
                if opts.verbose {
                    eprintln!(" [notifier: {}] falling back to polling", ws_err);
                }
                // Fall through to polling below.
            }
        }
    } else if opts.verbose {
        eprintln!(" [notifier URL not configured] using polling");
    }

    // Polling always uses the full overall deadline, so at minimum MIN_POLL_BUDGET
    // remains (and often the full timeout if WS failed at connect time).
    wait_via_polling(
        opts.api_client,
        opts.execution_id,
        overall_deadline,
        opts.verbose,
    )
    .await
}
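The deadline split above can be sketched in isolation. This is a minimal, self-contained model of the `MIN_POLL_BUDGET` arithmetic (not the actual `wait_for_execution` signature): the WebSocket path gets the overall timeout minus the reserved polling budget, except when the timeout is already shorter than the reserve.

```rust
use std::time::Duration;

// Mirrors MIN_POLL_BUDGET from the function above.
const MIN_POLL_BUDGET: Duration = Duration::from_secs(10);

/// How much of the overall timeout the WebSocket attempt may consume.
fn ws_budget(overall: Duration) -> Duration {
    if overall > MIN_POLL_BUDGET {
        overall - MIN_POLL_BUDGET
    } else {
        // Very short timeout: hand it to WS unchanged; polling then reuses
        // the same overall deadline.
        overall
    }
}

fn main() {
    // A 300 s timeout leaves 290 s for the WebSocket path.
    assert_eq!(ws_budget(Duration::from_secs(300)), Duration::from_secs(290));
    // A 5 s timeout is below the reserve, so WS gets all of it.
    assert_eq!(ws_budget(Duration::from_secs(5)), Duration::from_secs(5));
}
```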

// ── WebSocket path ────────────────────────────────────────────────────────────

async fn wait_via_websocket(
    ws_base_url: &str,
    execution_id: i64,
    deadline: Instant,
    verbose: bool,
    api_client: &mut ApiClient,
) -> Result<ExecutionSummary> {
    // Build the full WS endpoint URL.
    let ws_url = format!("{}/ws", ws_base_url.trim_end_matches('/'));

    let connect_timeout = Duration::from_secs(5);
    let remaining = deadline.saturating_duration_since(Instant::now());
    if remaining.is_zero() {
        anyhow::bail!("WS budget exhausted before connect");
    }
    let effective_connect_timeout = connect_timeout.min(remaining);

    let connect_result =
        tokio::time::timeout(effective_connect_timeout, connect_async(&ws_url)).await;

    let (ws_stream, _response) = match connect_result {
        Ok(Ok(pair)) => pair,
        Ok(Err(e)) => anyhow::bail!("WebSocket connect failed: {}", e),
        Err(_) => anyhow::bail!("WebSocket connect timed out"),
    };

    if verbose {
        eprintln!(" [notifier] connected to {}", ws_url);
    }

    let (mut write, mut read) = ws_stream.split();

    // Wait for the welcome message before subscribing.
    tokio::time::timeout(Duration::from_secs(5), async {
        while let Some(msg) = read.next().await {
            if let Ok(Message::Text(txt)) = msg {
                if let Ok(ServerMsg::Welcome { client_id, .. }) =
                    serde_json::from_str::<ServerMsg>(&txt)
                {
                    if verbose {
                        eprintln!(" [notifier] session id {}", client_id);
                    }
                    return Ok(());
                }
            }
        }
        anyhow::bail!("connection closed before welcome")
    })
    .await
    .map_err(|_| anyhow::anyhow!("timed out waiting for welcome message"))??;

    // Subscribe to this specific execution.
    let subscribe_msg = ClientMsg::Subscribe {
        filter: format!("entity:execution:{}", execution_id),
    };
    let subscribe_json = serde_json::to_string(&subscribe_msg)?;
    SinkExt::send(&mut write, Message::Text(subscribe_json.into())).await?;

    if verbose {
        eprintln!(
            " [notifier] subscribed to entity:execution:{}",
            execution_id
        );
    }

    // ── Race-condition guard ──────────────────────────────────────────────
    // The execution may have already completed in the window between the
    // initial POST and when the WS subscription became active. Check once
    // with the REST API *after* subscribing so there is no gap: either the
    // notification arrives after this check (and we'll catch it in the loop
    // below) or we catch the terminal state here.
    {
        let path = format!("/executions/{}", execution_id);
        if let Ok(exec) = api_client.get::<RestExecution>(&path).await {
            if is_terminal(&exec.status) {
                if verbose {
                    eprintln!(
                        " [notifier] execution {} already terminal ('{}') — caught by post-subscribe check",
                        execution_id, exec.status
                    );
                }
                return Ok(exec.into());
            }
        }
    }

    // Periodically ping to keep the connection alive and check the deadline.
    let ping_interval = Duration::from_secs(15);
    let mut next_ping = Instant::now() + ping_interval;

    loop {
        let remaining = deadline.saturating_duration_since(Instant::now());
        if remaining.is_zero() {
            anyhow::bail!("timed out waiting for execution {}", execution_id);
        }

        // Wait up to the earlier of: next ping time or deadline.
        let wait_for = remaining.min(next_ping.saturating_duration_since(Instant::now()));

        let msg_result = tokio::time::timeout(wait_for, read.next()).await;

        match msg_result {
            // Received a message within the window.
            Ok(Some(Ok(Message::Text(txt)))) => {
                match serde_json::from_str::<ServerMsg>(&txt) {
                    Ok(ServerMsg::Notification(n)) => {
                        if n.entity_type == "execution" && n.entity_id == execution_id {
                            if verbose {
                                eprintln!(
                                    " [notifier] {} for execution {} — status={:?}",
                                    n.notification_type,
                                    execution_id,
                                    n.payload.get("status").and_then(|s| s.as_str()),
                                );
                            }

                            // Extract status from the notification payload.
                            // The notifier broadcasts the full execution row in
                            // `payload`, so we can read the status directly.
                            if let Some(status) = n.payload.get("status").and_then(|s| s.as_str()) {
                                if is_terminal(status) {
                                    // Build a summary from the payload; fall
                                    // back to defaults for missing fields.
                                    return build_summary_from_payload(execution_id, &n.payload);
                                }
                            }
                        }
                        // Not our execution or not yet terminal — keep waiting.
                    }
                    Ok(ServerMsg::Error { message }) => {
                        anyhow::bail!("notifier error: {}", message);
                    }
                    Ok(ServerMsg::Welcome { .. } | ServerMsg::Unknown) => {
                        // Ignore unexpected / unrecognised messages.
                    }
                    Err(e) => {
                        // Parse failures can happen if the server sends a
                        // message format we don't recognise yet; surface them
                        // only in verbose mode.
                        if verbose {
                            eprintln!(" [notifier] ignoring unrecognised message: {}", e);
                        }
                    }
                }
            }
            // Connection closed before a terminal notification arrived.
            Ok(Some(Ok(Message::Close(_)))) | Ok(None) => {
                anyhow::bail!("notifier WebSocket closed unexpectedly");
            }
            // Ping/pong and other non-text frames — ignore.
            Ok(Some(Ok(
                Message::Ping(_) | Message::Pong(_) | Message::Binary(_) | Message::Frame(_),
            ))) => {}
            // WebSocket transport error.
            Ok(Some(Err(e))) => {
                anyhow::bail!("WebSocket error: {}", e);
            }
            // Timeout waiting for a message — time to ping.
            Err(_timeout) => {
                let now = Instant::now();
                if now >= next_ping {
                    let _ = SinkExt::send(
                        &mut write,
                        Message::Text(serde_json::to_string(&ClientMsg::Ping)?.into()),
                    )
                    .await;
                    next_ping = now + ping_interval;
                }
            }
        }
    }
}
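For reference, the subscribe frame sent above looks like this on the wire. The snippet hand-assembles the JSON with `format!` so it stays dependency-free; the real code serializes `ClientMsg` with serde's internally tagged (`tag = "type"`) representation, which emits the `type` field first.

```rust
/// Hand-assembled sketch of the client→server subscribe frame.
/// Field order mirrors the tagged serde output: "type", then "filter".
fn subscribe_frame(execution_id: i64) -> String {
    format!(
        r#"{{"type":"subscribe","filter":"entity:execution:{}"}}"#,
        execution_id
    )
}

fn main() {
    assert_eq!(
        subscribe_frame(42),
        r#"{"type":"subscribe","filter":"entity:execution:42"}"#
    );
}
```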

/// Build an [`ExecutionSummary`] from the notification payload.
/// The notifier payload matches the REST execution shape closely enough that
/// we can deserialize it directly.
fn build_summary_from_payload(
    execution_id: i64,
    payload: &serde_json::Value,
) -> Result<ExecutionSummary> {
    // Try a full deserialize first.
    if let Ok(exec) = serde_json::from_value::<RestExecution>(payload.clone()) {
        return Ok(exec.into());
    }

    // Partial payload — assemble what we can.
    Ok(ExecutionSummary {
        id: execution_id,
        status: payload
            .get("status")
            .and_then(|s| s.as_str())
            .unwrap_or("unknown")
            .to_string(),
        action_ref: payload
            .get("action_ref")
            .and_then(|s| s.as_str())
            .unwrap_or("")
            .to_string(),
        result: payload.get("result").cloned(),
        created: payload
            .get("created")
            .and_then(|s| s.as_str())
            .unwrap_or("")
            .to_string(),
        updated: payload
            .get("updated")
            .and_then(|s| s.as_str())
            .unwrap_or("")
            .to_string(),
    })
}

// ── polling fallback ──────────────────────────────────────────────────────────

const POLL_INTERVAL: Duration = Duration::from_millis(500);
const POLL_INTERVAL_MAX: Duration = Duration::from_secs(2);
/// How quickly the poll interval grows on each successive check.
const POLL_BACKOFF_FACTOR: f64 = 1.5;

async fn wait_via_polling(
    client: &mut ApiClient,
    execution_id: i64,
    deadline: Instant,
    verbose: bool,
) -> Result<ExecutionSummary> {
    if verbose {
        eprintln!(" [poll] watching execution {}", execution_id);
    }

    let mut interval = POLL_INTERVAL;

    loop {
        // Poll immediately first, before sleeping — catches the case where the
        // execution already finished while we were connecting to the notifier.
        let path = format!("/executions/{}", execution_id);
        match client.get::<RestExecution>(&path).await {
            Ok(exec) => {
                if is_terminal(&exec.status) {
                    if verbose {
                        eprintln!(" [poll] execution {} is {}", execution_id, exec.status);
                    }
                    return Ok(exec.into());
                }
                if verbose {
                    eprintln!(
                        " [poll] status = {} — checking again in {:.1}s",
                        exec.status,
                        interval.as_secs_f64()
                    );
                }
            }
            Err(e) => {
                if verbose {
                    eprintln!(" [poll] request failed ({}), retrying…", e);
                }
            }
        }

        // Check the deadline *after* the poll attempt so we always do at least one check.
        if Instant::now() >= deadline {
            anyhow::bail!("timed out waiting for execution {}", execution_id);
        }

        // Sleep, but wake up if we'd overshoot the deadline.
        let sleep_for = interval.min(deadline.saturating_duration_since(Instant::now()));
        tokio::time::sleep(sleep_for).await;

        // Exponential back-off up to the cap.
        interval = Duration::from_secs_f64(
            (interval.as_secs_f64() * POLL_BACKOFF_FACTOR).min(POLL_INTERVAL_MAX.as_secs_f64()),
        );
    }
}
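With the constants above (500 ms start, 1.5× growth, 2 s cap), the back-off produces the sequence 0.5 s, 0.75 s, 1.125 s, 1.6875 s, then holds at 2 s. A standalone sketch of that interval update:

```rust
use std::time::Duration;

// Same values as the constants in wait_via_polling above.
const POLL_INTERVAL: Duration = Duration::from_millis(500);
const POLL_INTERVAL_MAX: Duration = Duration::from_secs(2);
const POLL_BACKOFF_FACTOR: f64 = 1.5;

/// Next poll interval: grow by the back-off factor, capped at the max.
fn next_interval(cur: Duration) -> Duration {
    Duration::from_secs_f64(
        (cur.as_secs_f64() * POLL_BACKOFF_FACTOR).min(POLL_INTERVAL_MAX.as_secs_f64()),
    )
}

fn main() {
    let mut interval = POLL_INTERVAL;
    let mut seq = vec![interval.as_secs_f64()];
    for _ in 0..4 {
        interval = next_interval(interval);
        seq.push(interval.as_secs_f64());
    }
    // All values are exact dyadic fractions, so the comparison is safe.
    assert_eq!(seq, vec![0.5, 0.75, 1.125, 1.6875, 2.0]);
}
```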

// ── URL resolution ────────────────────────────────────────────────────────────

/// Derive the notifier WebSocket base URL.
///
/// Priority:
/// 1. Explicit `notifier_ws_url` in [`WaitOptions`].
/// 2. Replace the API base URL scheme (`http` → `ws`) and port (`8080` → `8081`).
///    This covers the standard single-host layout where both services share the
///    same hostname.
fn resolve_ws_url(opts: &WaitOptions<'_>) -> Option<String> {
    if let Some(url) = &opts.notifier_ws_url {
        return Some(url.clone());
    }

    // No explicit URL configured — derive one from the API client's base URL.
    let api_url = opts.api_client.base_url();

    // Transform http(s)://host:PORT/... → ws(s)://host:8081
    let ws_url = derive_notifier_url(&api_url)?;
    Some(ws_url)
}

/// Convert an HTTP API base URL into the expected notifier WebSocket URL.
///
/// - `http://localhost:8080` → `ws://localhost:8081`
/// - `https://api.example.com` → `wss://api.example.com:8081`
/// - `http://api.example.com:9000` → `ws://api.example.com:8081`
fn derive_notifier_url(api_url: &str) -> Option<String> {
    let url = url::Url::parse(api_url).ok()?;
    let ws_scheme = match url.scheme() {
        "https" => "wss",
        _ => "ws",
    };
    let host = url.host_str()?;
    Some(format!("{}://{}:8081", ws_scheme, host))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_is_terminal() {
        assert!(is_terminal("completed"));
        assert!(is_terminal("succeeded"));
        assert!(is_terminal("failed"));
        assert!(is_terminal("canceled"));
        assert!(is_terminal("cancelled"));
        assert!(is_terminal("timeout"));
        assert!(is_terminal("timed_out"));
        assert!(!is_terminal("requested"));
        assert!(!is_terminal("scheduled"));
        assert!(!is_terminal("running"));
    }

    #[test]
    fn test_derive_notifier_url() {
        assert_eq!(
            derive_notifier_url("http://localhost:8080"),
            Some("ws://localhost:8081".to_string())
        );
        assert_eq!(
            derive_notifier_url("https://api.example.com"),
            Some("wss://api.example.com:8081".to_string())
        );
        assert_eq!(
            derive_notifier_url("http://api.example.com:9000"),
            Some("ws://api.example.com:8081".to_string())
        );
        assert_eq!(
            derive_notifier_url("http://10.0.0.5:8080"),
            Some("ws://10.0.0.5:8081".to_string())
        );
    }

    #[test]
    fn test_build_summary_from_full_payload() {
        let payload = serde_json::json!({
            "id": 42,
            "action_ref": "core.echo",
            "status": "completed",
            "result": { "stdout": "hi" },
            "created": "2026-01-01T00:00:00Z",
            "updated": "2026-01-01T00:00:01Z"
        });
        let summary = build_summary_from_payload(42, &payload).unwrap();
        assert_eq!(summary.id, 42);
        assert_eq!(summary.status, "completed");
        assert_eq!(summary.action_ref, "core.echo");
    }

    #[test]
    fn test_build_summary_from_partial_payload() {
        let payload = serde_json::json!({ "status": "failed" });
        let summary = build_summary_from_payload(7, &payload).unwrap();
        assert_eq!(summary.id, 7);
        assert_eq!(summary.status, "failed");
        assert_eq!(summary.action_ref, "");
    }
}

@@ -53,6 +53,9 @@ jsonschema = { workspace = true }
# OpenAPI
utoipa = { workspace = true }

# JWT
jsonwebtoken = { workspace = true }

# Encryption
argon2 = { workspace = true }
ring = { workspace = true }

crates/common/src/auth/jwt.rs — Normal file, 460 lines
@@ -0,0 +1,460 @@
//! JWT token generation and validation
//!
//! Shared across all Attune services. Token types:
//! - **Access**: Standard user login tokens (1h default)
//! - **Refresh**: Long-lived refresh tokens (7d default)
//! - **Sensor**: Sensor service tokens with trigger type metadata (24h default)
//! - **Execution**: Short-lived tokens scoped to a single execution (matching the execution timeout)

use chrono::{Duration, Utc};
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use serde::{Deserialize, Serialize};
use thiserror::Error;

#[derive(Debug, Error)]
pub enum JwtError {
    #[error("Failed to encode JWT: {0}")]
    EncodeError(String),
    #[error("Failed to decode JWT: {0}")]
    DecodeError(String),
    #[error("Token has expired")]
    Expired,
    #[error("Invalid token")]
    Invalid,
}

/// JWT Claims structure
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Claims {
    /// Subject (identity ID)
    pub sub: String,
    /// Identity login (or a descriptor like "execution:123")
    pub login: String,
    /// Issued at (Unix timestamp)
    pub iat: i64,
    /// Expiration time (Unix timestamp)
    pub exp: i64,
    /// Token type (access, refresh, sensor, or execution)
    #[serde(default)]
    pub token_type: TokenType,
    /// Optional scope (e.g., "sensor", "execution")
    #[serde(skip_serializing_if = "Option::is_none")]
    pub scope: Option<String>,
    /// Optional metadata (e.g., trigger_types for sensors, execution_id for execution tokens)
    #[serde(skip_serializing_if = "Option::is_none")]
    pub metadata: Option<serde_json::Value>,
}

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum TokenType {
    Access,
    Refresh,
    Sensor,
    Execution,
}

impl Default for TokenType {
    fn default() -> Self {
        Self::Access
    }
}

/// Configuration for JWT tokens
#[derive(Debug, Clone)]
pub struct JwtConfig {
    /// Secret key for signing tokens
    pub secret: String,
    /// Access token expiration duration (in seconds)
    pub access_token_expiration: i64,
    /// Refresh token expiration duration (in seconds)
    pub refresh_token_expiration: i64,
}

impl Default for JwtConfig {
    fn default() -> Self {
        Self {
            secret: "insecure_default_secret_change_in_production".to_string(),
            access_token_expiration: 3600,    // 1 hour
            refresh_token_expiration: 604800, // 7 days
        }
    }
}

/// Generate a JWT access token
pub fn generate_access_token(
    identity_id: i64,
    login: &str,
    config: &JwtConfig,
) -> Result<String, JwtError> {
    generate_token(identity_id, login, config, TokenType::Access)
}

/// Generate a JWT refresh token
pub fn generate_refresh_token(
    identity_id: i64,
    login: &str,
    config: &JwtConfig,
) -> Result<String, JwtError> {
    generate_token(identity_id, login, config, TokenType::Refresh)
}

/// Generate a JWT token with a specific type
pub fn generate_token(
    identity_id: i64,
    login: &str,
    config: &JwtConfig,
    token_type: TokenType,
) -> Result<String, JwtError> {
    let now = Utc::now();
    let expiration = match token_type {
        TokenType::Access => config.access_token_expiration,
        TokenType::Refresh => config.refresh_token_expiration,
        // Sensor and Execution tokens are generated via their own dedicated functions
        // with explicit TTLs; this fallback should not normally be reached.
        TokenType::Sensor => 86400,
        TokenType::Execution => 300,
    };

    let exp = (now + Duration::seconds(expiration)).timestamp();

    let claims = Claims {
        sub: identity_id.to_string(),
        login: login.to_string(),
        iat: now.timestamp(),
        exp,
        token_type,
        scope: None,
        metadata: None,
    };

    encode(
        &Header::default(),
        &claims,
        &EncodingKey::from_secret(config.secret.as_bytes()),
    )
    .map_err(|e| JwtError::EncodeError(e.to_string()))
}

/// Generate a sensor token with specific trigger types
///
/// # Arguments
/// * `identity_id` - The identity ID for the sensor
/// * `sensor_ref` - The sensor reference (e.g., "sensor:core.timer")
/// * `trigger_types` - List of trigger types this sensor can create events for
/// * `config` - JWT configuration
/// * `ttl_seconds` - Time to live in seconds (default: 24 hours)
pub fn generate_sensor_token(
    identity_id: i64,
    sensor_ref: &str,
    trigger_types: Vec<String>,
    config: &JwtConfig,
    ttl_seconds: Option<i64>,
) -> Result<String, JwtError> {
    let now = Utc::now();
    let expiration = ttl_seconds.unwrap_or(86400); // Default: 24 hours
    let exp = (now + Duration::seconds(expiration)).timestamp();

    let metadata = serde_json::json!({
        "trigger_types": trigger_types,
    });

    let claims = Claims {
        sub: identity_id.to_string(),
        login: sensor_ref.to_string(),
        iat: now.timestamp(),
        exp,
        token_type: TokenType::Sensor,
        scope: Some("sensor".to_string()),
        metadata: Some(metadata),
    };

    encode(
        &Header::default(),
        &claims,
        &EncodingKey::from_secret(config.secret.as_bytes()),
    )
    .map_err(|e| JwtError::EncodeError(e.to_string()))
}

/// Generate an execution-scoped token.
///
/// These tokens are short-lived (matching the execution timeout) and scoped
/// to a single execution. They allow actions to call back into the Attune API
/// (e.g., to create artifacts, update progress) without full user credentials.
///
/// The token is automatically invalidated when it expires. The TTL defaults to
/// the execution timeout plus a 60-second grace period to account for cleanup.
///
/// # Arguments
/// * `identity_id` - The identity ID that triggered the execution
/// * `execution_id` - The execution ID this token is scoped to
/// * `action_ref` - The action reference for audit/logging
/// * `config` - JWT configuration (uses the same signing secret as all tokens)
/// * `ttl_seconds` - Time to live in seconds (defaults to 360 = 5 min timeout + 60 s grace)
pub fn generate_execution_token(
    identity_id: i64,
    execution_id: i64,
    action_ref: &str,
    config: &JwtConfig,
    ttl_seconds: Option<i64>,
) -> Result<String, JwtError> {
    let now = Utc::now();
    let expiration = ttl_seconds.unwrap_or(360); // Default: 6 minutes (5 min timeout + grace)
    let exp = (now + Duration::seconds(expiration)).timestamp();

    let metadata = serde_json::json!({
        "execution_id": execution_id,
        "action_ref": action_ref,
    });

    let claims = Claims {
        sub: identity_id.to_string(),
        login: format!("execution:{}", execution_id),
        iat: now.timestamp(),
        exp,
        token_type: TokenType::Execution,
        scope: Some("execution".to_string()),
        metadata: Some(metadata),
    };

    encode(
        &Header::default(),
        &claims,
        &EncodingKey::from_secret(config.secret.as_bytes()),
    )
    .map_err(|e| JwtError::EncodeError(e.to_string()))
}
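The default TTL arithmetic from the doc comment above (360 s = the 5-minute default execution timeout plus a 60-second cleanup grace) can be sketched as a tiny pure function; the constant names here are illustrative, not from the module:

```rust
/// Execution-token TTL: an explicit override wins, otherwise the default
/// execution timeout plus the cleanup grace period (values from the doc
/// comment on generate_execution_token; names are hypothetical).
fn execution_token_ttl(ttl_override: Option<i64>) -> i64 {
    const DEFAULT_EXECUTION_TIMEOUT_SECS: i64 = 300; // 5 minutes
    const GRACE_SECS: i64 = 60;
    ttl_override.unwrap_or(DEFAULT_EXECUTION_TIMEOUT_SECS + GRACE_SECS)
}

fn main() {
    assert_eq!(execution_token_ttl(None), 360);
    assert_eq!(execution_token_ttl(Some(600)), 600);
}
```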

/// Validate and decode a JWT token
pub fn validate_token(token: &str, config: &JwtConfig) -> Result<Claims, JwtError> {
    let validation = Validation::default();

    decode::<Claims>(
        token,
        &DecodingKey::from_secret(config.secret.as_bytes()),
        &validation,
    )
    .map(|data| data.claims)
    .map_err(|e| {
        // Match on the error kind rather than its string rendering.
        if matches!(e.kind(), jsonwebtoken::errors::ErrorKind::ExpiredSignature) {
            JwtError::Expired
        } else {
            JwtError::DecodeError(e.to_string())
        }
    })
}

/// Extract the token from an Authorization header
pub fn extract_token_from_header(auth_header: &str) -> Option<&str> {
    // Equivalent to checking starts_with("Bearer ") and slicing off the prefix.
    auth_header.strip_prefix("Bearer ")
}

#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
fn test_config() -> JwtConfig {
|
||||||
|
JwtConfig {
|
||||||
|
secret: "test_secret_key_for_testing".to_string(),
|
||||||
|
access_token_expiration: 3600,
|
||||||
|
refresh_token_expiration: 604800,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_generate_and_validate_access_token() {
|
||||||
|
let config = test_config();
|
||||||
|
let token =
|
||||||
|
generate_access_token(123, "testuser", &config).expect("Failed to generate token");
|
||||||
|
|
||||||
|
let claims = validate_token(&token, &config).expect("Failed to validate token");
|
||||||
|
|
||||||
|
assert_eq!(claims.sub, "123");
|
||||||
|
assert_eq!(claims.login, "testuser");
|
||||||
|
assert_eq!(claims.token_type, TokenType::Access);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_generate_and_validate_refresh_token() {
|
||||||
|
let config = test_config();
|
||||||
|
let token =
|
||||||
|
generate_refresh_token(456, "anotheruser", &config).expect("Failed to generate token");
|
||||||
|
|
||||||
|
let claims = validate_token(&token, &config).expect("Failed to validate token");
|
||||||
|
|
||||||
|
assert_eq!(claims.sub, "456");
|
||||||
|
assert_eq!(claims.login, "anotheruser");
|
||||||
|
assert_eq!(claims.token_type, TokenType::Refresh);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_invalid_token() {
|
||||||
|
let config = test_config();
|
||||||
|
let result = validate_token("invalid.token.here", &config);
|
||||||
|
assert!(result.is_err());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_token_with_wrong_secret() {
|
||||||
|
let config = test_config();
|
||||||
|
let token = generate_access_token(789, "user", &config).expect("Failed to generate token");
|
||||||
|
|
||||||
|
let wrong_config = JwtConfig {
|
||||||
|
secret: "different_secret".to_string(),
|
||||||
|
..config
|
||||||
|
};
|
||||||
|
|
||||||
|
let result = validate_token(&token, &wrong_config);
|
||||||
|
assert!(result.is_err());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_expired_token() {
|
||||||
|
let now = Utc::now().timestamp();
|
||||||
|
let expired_claims = Claims {
|
||||||
|
sub: "999".to_string(),
|
||||||
|
login: "expireduser".to_string(),
|
||||||
|
iat: now - 3600,
|
||||||
|
exp: now - 1800,
|
||||||
|
token_type: TokenType::Access,
|
||||||
|
scope: None,
|
||||||
|
metadata: None,
|
||||||
|
};
|
||||||
|
|
||||||
|
let config = test_config();
|
||||||
|
|
||||||
|
let expired_token = encode(
|
||||||
|
&Header::default(),
|
||||||
|
&expired_claims,
|
||||||
|
&EncodingKey::from_secret(config.secret.as_bytes()),
|
||||||
|
)
|
||||||
|
.expect("Failed to encode token");
|
||||||
|
|
||||||
|
let result = validate_token(&expired_token, &config);
|
||||||
|
assert!(matches!(result, Err(JwtError::Expired)));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_extract_token_from_header() {
|
||||||
|
let header = "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9";
|
||||||
|
let token = extract_token_from_header(header);
|
||||||
|
assert_eq!(token, Some("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"));
|
||||||
|
|
||||||
|
let invalid_header = "Token abc123";
|
||||||
|
let token = extract_token_from_header(invalid_header);
|
||||||
|
assert_eq!(token, None);
|
||||||
|
|
||||||
|
let no_token = "Bearer ";
|
||||||
|
let token = extract_token_from_header(no_token);
|
||||||
|
assert_eq!(token, Some(""));
|
||||||
|
}
    #[test]
    fn test_claims_serialization() {
        let claims = Claims {
            sub: "123".to_string(),
            login: "testuser".to_string(),
            iat: 1234567890,
            exp: 1234571490,
            token_type: TokenType::Access,
            scope: None,
            metadata: None,
        };

        let json = serde_json::to_string(&claims).expect("Failed to serialize");
        let deserialized: Claims = serde_json::from_str(&json).expect("Failed to deserialize");

        assert_eq!(claims.sub, deserialized.sub);
        assert_eq!(claims.login, deserialized.login);
        assert_eq!(claims.token_type, deserialized.token_type);
    }

    #[test]
    fn test_generate_sensor_token() {
        let config = test_config();
        let trigger_types = vec!["core.timer".to_string(), "core.webhook".to_string()];

        let token = generate_sensor_token(
            999,
            "sensor:core.timer",
            trigger_types.clone(),
            &config,
            Some(86400),
        )
        .expect("Failed to generate sensor token");

        let claims = validate_token(&token, &config).expect("Failed to validate token");

        assert_eq!(claims.sub, "999");
        assert_eq!(claims.login, "sensor:core.timer");
        assert_eq!(claims.token_type, TokenType::Sensor);
        assert_eq!(claims.scope, Some("sensor".to_string()));

        let metadata = claims.metadata.expect("Metadata should be present");
        let trigger_types_from_token = metadata["trigger_types"]
            .as_array()
            .expect("trigger_types should be an array");

        assert_eq!(trigger_types_from_token.len(), 2);
    }

    #[test]
    fn test_generate_execution_token() {
        let config = test_config();

        let token =
            generate_execution_token(42, 12345, "python_example.artifact_demo", &config, None)
                .expect("Failed to generate execution token");

        let claims = validate_token(&token, &config).expect("Failed to validate token");

        assert_eq!(claims.sub, "42");
        assert_eq!(claims.login, "execution:12345");
        assert_eq!(claims.token_type, TokenType::Execution);
        assert_eq!(claims.scope, Some("execution".to_string()));

        let metadata = claims.metadata.expect("Metadata should be present");
        assert_eq!(metadata["execution_id"], 12345);
        assert_eq!(metadata["action_ref"], "python_example.artifact_demo");
    }

    #[test]
    fn test_execution_token_custom_ttl() {
        let config = test_config();

        let token = generate_execution_token(1, 100, "core.echo", &config, Some(600))
            .expect("Failed to generate execution token");

        let claims = validate_token(&token, &config).expect("Failed to validate token");

        // Should expire roughly 600 seconds from now
        let now = Utc::now().timestamp();
        let diff = claims.exp - now;
        assert!(
            diff > 590 && diff <= 600,
            "TTL should be ~600s, got {}s",
            diff
        );
    }

    #[test]
    fn test_token_type_serialization() {
        // Ensure all token types round-trip through JSON correctly
        for tt in [
            TokenType::Access,
            TokenType::Refresh,
            TokenType::Sensor,
            TokenType::Execution,
        ] {
            let json = serde_json::to_string(&tt).expect("Failed to serialize");
            let deserialized: TokenType =
                serde_json::from_str(&json).expect("Failed to deserialize");
            assert_eq!(tt, deserialized);
        }
    }
}
crates/common/src/auth/mod.rs (new file, 13 lines)
@@ -0,0 +1,13 @@
+//! Authentication primitives shared across Attune services.
+//!
+//! This module provides JWT token types, generation, and validation functions
+//! that are used by the API (for all token types), the worker (for execution-scoped
+//! tokens), and the sensor service (for sensor tokens).
+
+pub mod jwt;
+
+pub use jwt::{
+    extract_token_from_header, generate_access_token, generate_execution_token,
+    generate_refresh_token, generate_sensor_token, generate_token, validate_token, Claims,
+    JwtConfig, JwtError, TokenType,
+};
@@ -582,6 +582,13 @@ pub struct Config {
     #[serde(default = "default_runtime_envs_dir")]
     pub runtime_envs_dir: String,

+    /// Artifacts directory (shared volume for file-based artifact storage).
+    /// File-type artifacts (FileBinary, FileDatatable, FileText, Log) are stored
+    /// on disk at this location rather than in the database.
+    /// Pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}
+    #[serde(default = "default_artifacts_dir")]
+    pub artifacts_dir: String,
+
     /// Notifier configuration (optional, for notifier service)
     pub notifier: Option<NotifierConfig>,
@@ -609,6 +616,10 @@ fn default_runtime_envs_dir() -> String {
     "/opt/attune/runtime_envs".to_string()
 }

+fn default_artifacts_dir() -> String {
+    "/opt/attune/artifacts".to_string()
+}
+
 impl Default for DatabaseConfig {
     fn default() -> Self {
         Self {
@@ -844,6 +855,7 @@ mod tests {
             sensor: None,
             packs_base_dir: default_packs_base_dir(),
             runtime_envs_dir: default_runtime_envs_dir(),
+            artifacts_dir: default_artifacts_dir(),
             notifier: None,
             pack_registry: PackRegistryConfig::default(),
             executor: None,
@@ -917,6 +929,7 @@ mod tests {
             sensor: None,
             packs_base_dir: default_packs_base_dir(),
             runtime_envs_dir: default_runtime_envs_dir(),
+            artifacts_dir: default_artifacts_dir(),
             notifier: None,
             pack_registry: PackRegistryConfig::default(),
             executor: None,
@@ -6,6 +6,7 @@
 //! - Configuration
 //! - Utilities

+pub mod auth;
 pub mod config;
 pub mod crypto;
 pub mod db;
@@ -10,6 +10,8 @@ use sqlx::FromRow;

 // Re-export common types
 pub use action::*;
+pub use artifact::Artifact;
+pub use artifact_version::ArtifactVersion;
 pub use entity_history::*;
 pub use enums::*;
 pub use event::*;
@@ -355,7 +357,7 @@ pub mod enums {
         Url,
     }

-    #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Type)]
+    #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Type, ToSchema)]
     #[sqlx(type_name = "artifact_retention_enum", rename_all = "lowercase")]
     #[serde(rename_all = "lowercase")]
     pub enum RetentionPolicyType {
@@ -365,6 +367,24 @@ pub mod enums {
         Minutes,
     }

+    /// Visibility level for artifacts.
+    /// - `Public`: viewable by all authenticated users on the platform.
+    /// - `Private`: restricted based on the artifact's `scope` and `owner` fields.
+    /// Full RBAC enforcement is deferred; for now the field enables filtering.
+    #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Type, ToSchema)]
+    #[sqlx(type_name = "artifact_visibility_enum", rename_all = "lowercase")]
+    #[serde(rename_all = "lowercase")]
+    pub enum ArtifactVisibility {
+        Public,
+        Private,
+    }
+
+    impl Default for ArtifactVisibility {
+        fn default() -> Self {
+            Self::Private
+        }
+    }
+
     #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Type, ToSchema)]
     #[sqlx(type_name = "workflow_task_status_enum", rename_all = "lowercase")]
     #[serde(rename_all = "lowercase")]
@@ -1266,11 +1286,73 @@ pub mod artifact {
         pub scope: OwnerType,
         pub owner: String,
         pub r#type: ArtifactType,
+        pub visibility: ArtifactVisibility,
         pub retention_policy: RetentionPolicyType,
         pub retention_limit: i32,
+        /// Human-readable name (e.g. "Build Log", "Test Results")
+        pub name: Option<String>,
+        /// Optional longer description
+        pub description: Option<String>,
+        /// MIME content type (e.g. "application/json", "text/plain")
+        pub content_type: Option<String>,
+        /// Size of the latest version's content in bytes
+        pub size_bytes: Option<i64>,
+        /// Execution that produced this artifact (no FK — execution is a hypertable)
+        pub execution: Option<Id>,
+        /// Structured JSONB data for progress artifacts or metadata
+        pub data: Option<serde_json::Value>,
         pub created: DateTime<Utc>,
         pub updated: DateTime<Utc>,
     }
+
+    /// Select columns for Artifact queries (excludes DB-only columns if any arise).
+    /// Must be kept in sync with the Artifact struct field order.
+    pub const SELECT_COLUMNS: &str =
+        "id, ref, scope, owner, type, visibility, retention_policy, retention_limit, \
+         name, description, content_type, size_bytes, execution, data, \
+         created, updated";
+}
+
+/// Artifact version model — immutable content snapshots
+pub mod artifact_version {
+    use super::*;
+
+    #[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
+    pub struct ArtifactVersion {
+        pub id: Id,
+        /// Parent artifact
+        pub artifact: Id,
+        /// Version number (1-based, monotonically increasing per artifact)
+        pub version: i32,
+        /// MIME content type for this version
+        pub content_type: Option<String>,
+        /// Size of content in bytes
+        pub size_bytes: Option<i64>,
+        /// Binary content (file data) — not included in default queries for performance
+        #[serde(skip_serializing)]
+        pub content: Option<Vec<u8>>,
+        /// Structured JSON content
+        pub content_json: Option<serde_json::Value>,
+        /// Relative path from `artifacts_dir` root for disk-stored content.
+        /// When set, `content` BYTEA is NULL — the file lives on a shared volume.
+        /// Pattern: `{ref_slug}/v{version}.{ext}`
+        pub file_path: Option<String>,
+        /// Free-form metadata about this version
+        pub meta: Option<serde_json::Value>,
+        /// Who created this version
+        pub created_by: Option<String>,
+        pub created: DateTime<Utc>,
+    }
+
+    /// Select columns WITHOUT the potentially large `content` BYTEA column.
+    /// Use `SELECT_COLUMNS_WITH_CONTENT` when you need the binary payload.
+    pub const SELECT_COLUMNS: &str = "id, artifact, version, content_type, size_bytes, \
+        NULL::bytea AS content, content_json, file_path, meta, created_by, created";
+
+    /// Select columns INCLUDING the binary `content` column.
+    pub const SELECT_COLUMNS_WITH_CONTENT: &str =
+        "id, artifact, version, content_type, size_bytes, \
+         content, content_json, file_path, meta, created_by, created";
+}

 /// Workflow orchestration models
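The `SELECT_COLUMNS` constants above exist so that every repository query selects the same column list, in the same order as the struct fields sqlx maps onto. A small dependency-free sketch of the composition pattern, using a shortened stand-in column list rather than the real one:

```rust
// Shortened stand-in for the crate's real SELECT_COLUMNS constant.
const SELECT_COLUMNS: &str = "id, ref, scope, owner, created, updated";

/// Compose a full query from the shared column list, as the repository
/// methods do with format! before handing the string to sqlx.
fn select_by_id_sql() -> String {
    format!("SELECT {} FROM artifact WHERE id = $1", SELECT_COLUMNS)
}

fn main() {
    let sql = select_by_id_sql();
    assert_eq!(
        sql,
        "SELECT id, ref, scope, owner, created, updated FROM artifact WHERE id = $1"
    );
    println!("{}", sql);
}
```

Centralizing the column list means adding a field (such as `visibility` in this change) touches one constant instead of every inline SQL string.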
@@ -5,7 +5,7 @@
 //! with headers and payload.

 use chrono::{DateTime, Utc};
-use serde::{Deserialize, Serialize};
+use serde::{Deserialize, Deserializer, Serialize};
 use serde_json::Value as JsonValue;
 use uuid::Uuid;

@@ -124,6 +124,17 @@ impl MessageType {
     }
 }

+/// Deserialize a UUID, substituting a freshly-generated one when the value is
+/// null or absent. This keeps envelope parsing tolerant of messages that were
+/// hand-crafted or produced by older tooling.
+fn deserialize_uuid_default<'de, D>(deserializer: D) -> Result<Uuid, D::Error>
+where
+    D: Deserializer<'de>,
+{
+    let opt: Option<Uuid> = Option::deserialize(deserializer)?;
+    Ok(opt.unwrap_or_else(Uuid::new_v4))
+}
+
 /// Message envelope that wraps all messages with metadata
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct MessageEnvelope<T>
@@ -131,9 +142,17 @@ where
     T: Clone,
 {
     /// Unique message identifier
+    #[serde(
+        default = "Uuid::new_v4",
+        deserialize_with = "deserialize_uuid_default"
+    )]
     pub message_id: Uuid,

     /// Correlation ID for tracing related messages
+    #[serde(
+        default = "Uuid::new_v4",
+        deserialize_with = "deserialize_uuid_default"
+    )]
     pub correlation_id: Uuid,

     /// Message type
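The helper above makes `message_id` and `correlation_id` tolerant of both absent fields (`default`) and explicit `null` (`deserialize_with`). Stripped of the serde machinery, the core is just `Option::unwrap_or_else` with a generator function; a dependency-free sketch where a fixed stand-in value plays the role of `Uuid::new_v4`:

```rust
/// Stand-in generator playing the role of Uuid::new_v4 in this sketch.
fn fresh_id() -> u64 {
    42
}

/// Core of deserialize_uuid_default: keep the parsed value when present,
/// otherwise substitute a freshly generated one.
fn id_or_fresh(parsed: Option<u64>) -> u64 {
    parsed.unwrap_or_else(fresh_id)
}

fn main() {
    assert_eq!(id_or_fresh(Some(7)), 7); // explicit id wins
    assert_eq!(id_or_fresh(None), 42); // null/absent gets a generated id
    println!("ok");
}
```

The `unwrap_or_else` form only invokes the generator when the value is actually missing, which matters when generation has a cost or a side effect.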
@@ -1,14 +1,19 @@
-//! Artifact repository for database operations
+//! Artifact and ArtifactVersion repositories for database operations

 use crate::models::{
     artifact::*,
-    enums::{ArtifactType, OwnerType, RetentionPolicyType},
+    artifact_version::ArtifactVersion,
+    enums::{ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType},
 };
 use crate::Result;
 use sqlx::{Executor, Postgres, QueryBuilder};

 use super::{Create, Delete, FindById, FindByRef, List, Repository, Update};

+// ============================================================================
+// ArtifactRepository
+// ============================================================================
+
 pub struct ArtifactRepository;

 impl Repository for ArtifactRepository {
@@ -24,8 +29,14 @@ pub struct CreateArtifactInput {
     pub scope: OwnerType,
     pub owner: String,
     pub r#type: ArtifactType,
+    pub visibility: ArtifactVisibility,
     pub retention_policy: RetentionPolicyType,
     pub retention_limit: i32,
+    pub name: Option<String>,
+    pub description: Option<String>,
+    pub content_type: Option<String>,
+    pub execution: Option<i64>,
+    pub data: Option<serde_json::Value>,
 }

 #[derive(Debug, Clone, Default)]
@@ -34,8 +45,33 @@ pub struct UpdateArtifactInput {
     pub scope: Option<OwnerType>,
     pub owner: Option<String>,
     pub r#type: Option<ArtifactType>,
+    pub visibility: Option<ArtifactVisibility>,
     pub retention_policy: Option<RetentionPolicyType>,
     pub retention_limit: Option<i32>,
+    pub name: Option<String>,
+    pub description: Option<String>,
+    pub content_type: Option<String>,
+    pub size_bytes: Option<i64>,
+    pub data: Option<serde_json::Value>,
+}
+
+/// Filters for searching artifacts
+#[derive(Debug, Clone, Default)]
+pub struct ArtifactSearchFilters {
+    pub scope: Option<OwnerType>,
+    pub owner: Option<String>,
+    pub r#type: Option<ArtifactType>,
+    pub visibility: Option<ArtifactVisibility>,
+    pub execution: Option<i64>,
+    pub name_contains: Option<String>,
+    pub limit: u32,
+    pub offset: u32,
+}
+
+/// Search result with total count
+pub struct ArtifactSearchResult {
+    pub rows: Vec<Artifact>,
+    pub total: i64,
 }

 #[async_trait::async_trait]
@@ -44,15 +80,12 @@ impl FindById for ArtifactRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Artifact>(
-            "SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
-             FROM artifact
-             WHERE id = $1",
-        )
-        .bind(id)
-        .fetch_optional(executor)
-        .await
-        .map_err(Into::into)
+        let query = format!("SELECT {} FROM artifact WHERE id = $1", SELECT_COLUMNS);
+        sqlx::query_as::<_, Artifact>(&query)
+            .bind(id)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
     }
 }
@@ -62,15 +95,12 @@ impl FindByRef for ArtifactRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Artifact>(
-            "SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
-             FROM artifact
-             WHERE ref = $1",
-        )
-        .bind(ref_str)
-        .fetch_optional(executor)
-        .await
-        .map_err(Into::into)
+        let query = format!("SELECT {} FROM artifact WHERE ref = $1", SELECT_COLUMNS);
+        sqlx::query_as::<_, Artifact>(&query)
+            .bind(ref_str)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
     }
 }
@@ -80,15 +110,14 @@ impl List for ArtifactRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Artifact>(
-            "SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
-             FROM artifact
-             ORDER BY created DESC
-             LIMIT 1000",
-        )
-        .fetch_all(executor)
-        .await
-        .map_err(Into::into)
+        let query = format!(
+            "SELECT {} FROM artifact ORDER BY created DESC LIMIT 1000",
+            SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, Artifact>(&query)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
     }
 }
@@ -100,20 +129,29 @@ impl Create for ArtifactRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Artifact>(
-            "INSERT INTO artifact (ref, scope, owner, type, retention_policy, retention_limit)
-             VALUES ($1, $2, $3, $4, $5, $6)
-             RETURNING id, ref, scope, owner, type, retention_policy, retention_limit, created, updated",
-        )
-        .bind(&input.r#ref)
-        .bind(input.scope)
-        .bind(&input.owner)
-        .bind(input.r#type)
-        .bind(input.retention_policy)
-        .bind(input.retention_limit)
-        .fetch_one(executor)
-        .await
-        .map_err(Into::into)
+        let query = format!(
+            "INSERT INTO artifact (ref, scope, owner, type, visibility, retention_policy, retention_limit, \
+             name, description, content_type, execution, data) \
+             VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) \
+             RETURNING {}",
+            SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, Artifact>(&query)
+            .bind(&input.r#ref)
+            .bind(input.scope)
+            .bind(&input.owner)
+            .bind(input.r#type)
+            .bind(input.visibility)
+            .bind(input.retention_policy)
+            .bind(input.retention_limit)
+            .bind(&input.name)
+            .bind(&input.description)
+            .bind(&input.content_type)
+            .bind(input.execution)
+            .bind(&input.data)
+            .fetch_one(executor)
+            .await
+            .map_err(Into::into)
     }
 }
@@ -125,59 +163,41 @@ impl Update for ArtifactRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        // Build update query dynamically
         let mut query = QueryBuilder::new("UPDATE artifact SET ");
         let mut has_updates = false;

-        if let Some(ref_value) = &input.r#ref {
-            query.push("ref = ").push_bind(ref_value);
-            has_updates = true;
-        }
-        if let Some(scope) = input.scope {
-            if has_updates {
-                query.push(", ");
-            }
-            query.push("scope = ").push_bind(scope);
-            has_updates = true;
-        }
-        if let Some(owner) = &input.owner {
-            if has_updates {
-                query.push(", ");
-            }
-            query.push("owner = ").push_bind(owner);
-            has_updates = true;
-        }
-        if let Some(artifact_type) = input.r#type {
-            if has_updates {
-                query.push(", ");
-            }
-            query.push("type = ").push_bind(artifact_type);
-            has_updates = true;
-        }
-        if let Some(retention_policy) = input.retention_policy {
-            if has_updates {
-                query.push(", ");
-            }
-            query
-                .push("retention_policy = ")
-                .push_bind(retention_policy);
-            has_updates = true;
-        }
-        if let Some(retention_limit) = input.retention_limit {
-            if has_updates {
-                query.push(", ");
-            }
-            query.push("retention_limit = ").push_bind(retention_limit);
-            has_updates = true;
-        }
+        macro_rules! push_field {
+            ($field:expr, $col:expr) => {
+                if let Some(val) = $field {
+                    if has_updates {
+                        query.push(", ");
+                    }
+                    query.push(concat!($col, " = ")).push_bind(val);
+                    has_updates = true;
+                }
+            };
+        }
+
+        push_field!(&input.r#ref, "ref");
+        push_field!(input.scope, "scope");
+        push_field!(&input.owner, "owner");
+        push_field!(input.r#type, "type");
+        push_field!(input.visibility, "visibility");
+        push_field!(input.retention_policy, "retention_policy");
+        push_field!(input.retention_limit, "retention_limit");
+        push_field!(&input.name, "name");
+        push_field!(&input.description, "description");
+        push_field!(&input.content_type, "content_type");
+        push_field!(input.size_bytes, "size_bytes");
+        push_field!(&input.data, "data");

         if !has_updates {
-            // No updates requested, fetch and return existing entity
             return Self::get_by_id(executor, id).await;
         }

         query.push(", updated = NOW() WHERE id = ").push_bind(id);
-        query.push(" RETURNING id, ref, scope, owner, type, retention_policy, retention_limit, created, updated");
+        query.push(" RETURNING ");
+        query.push(SELECT_COLUMNS);

         query
             .build_query_as::<Artifact>()
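The `push_field!` macro in the update path collapses the repeated `if let Some(...)` blocks into one declaration per column. The comma-placement logic it encapsulates can be sketched without sqlx by appending to a plain `String` (a stand-in for `QueryBuilder`, with `?` standing in for `push_bind`):

```rust
/// Sketch of the push_field! pattern: append "col = ?" fragments to a
/// SET clause, inserting ", " before every fragment except the first.
fn build_set_clause(fields: &[(&str, Option<&str>)]) -> String {
    let mut sql = String::from("UPDATE artifact SET ");
    let mut has_updates = false;
    for (col, val) in fields {
        if val.is_some() {
            if has_updates {
                sql.push_str(", ");
            }
            // The real code calls push_bind here; we emit a placeholder.
            sql.push_str(col);
            sql.push_str(" = ?");
            has_updates = true;
        }
    }
    sql
}

fn main() {
    let sql = build_set_clause(&[
        ("ref", None),
        ("owner", Some("alice")),
        ("name", Some("Build Log")),
    ]);
    assert_eq!(sql, "UPDATE artifact SET owner = ?, name = ?");
    println!("{}", sql);
}
```

The `has_updates` flag does double duty: it gates the separator and, in the real method, triggers the early `get_by_id` return when no field was set at all.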
@@ -202,21 +222,123 @@ impl Delete for ArtifactRepository {
     }
 }

 impl ArtifactRepository {
+    /// Search artifacts with filters and pagination
+    pub async fn search<'e, E>(
+        executor: E,
+        filters: &ArtifactSearchFilters,
+    ) -> Result<ArtifactSearchResult>
+    where
+        E: Executor<'e, Database = Postgres> + Copy + 'e,
+    {
+        // Build WHERE clauses
+        let mut conditions: Vec<String> = Vec::new();
+        let mut param_idx: usize = 0;
+
+        if filters.scope.is_some() {
+            param_idx += 1;
+            conditions.push(format!("scope = ${}", param_idx));
+        }
+        if filters.owner.is_some() {
+            param_idx += 1;
+            conditions.push(format!("owner = ${}", param_idx));
+        }
+        if filters.r#type.is_some() {
+            param_idx += 1;
+            conditions.push(format!("type = ${}", param_idx));
+        }
+        if filters.visibility.is_some() {
+            param_idx += 1;
+            conditions.push(format!("visibility = ${}", param_idx));
+        }
+        if filters.execution.is_some() {
+            param_idx += 1;
+            conditions.push(format!("execution = ${}", param_idx));
+        }
+        if filters.name_contains.is_some() {
+            param_idx += 1;
+            conditions.push(format!("name ILIKE '%' || ${} || '%'", param_idx));
+        }
+
+        let where_clause = if conditions.is_empty() {
+            String::new()
+        } else {
+            format!("WHERE {}", conditions.join(" AND "))
+        };
+
+        // Count query
+        let count_sql = format!("SELECT COUNT(*) AS cnt FROM artifact {}", where_clause);
+        let mut count_query = sqlx::query_scalar::<_, i64>(&count_sql);
+
+        // Bind params for count
+        if let Some(scope) = filters.scope {
+            count_query = count_query.bind(scope);
+        }
+        if let Some(ref owner) = filters.owner {
+            count_query = count_query.bind(owner.clone());
+        }
+        if let Some(r#type) = filters.r#type {
+            count_query = count_query.bind(r#type);
+        }
+        if let Some(visibility) = filters.visibility {
+            count_query = count_query.bind(visibility);
+        }
+        if let Some(execution) = filters.execution {
+            count_query = count_query.bind(execution);
+        }
+        if let Some(ref name) = filters.name_contains {
+            count_query = count_query.bind(name.clone());
+        }
+
+        let total = count_query.fetch_one(executor).await?;
+
+        // Data query
+        let limit = filters.limit.min(1000);
+        let offset = filters.offset;
+        let data_sql = format!(
+            "SELECT {} FROM artifact {} ORDER BY created DESC LIMIT {} OFFSET {}",
+            SELECT_COLUMNS, where_clause, limit, offset
+        );
+
+        let mut data_query = sqlx::query_as::<_, Artifact>(&data_sql);
+
+        if let Some(scope) = filters.scope {
+            data_query = data_query.bind(scope);
+        }
+        if let Some(ref owner) = filters.owner {
+            data_query = data_query.bind(owner.clone());
+        }
+        if let Some(r#type) = filters.r#type {
+            data_query = data_query.bind(r#type);
+        }
+        if let Some(visibility) = filters.visibility {
+            data_query = data_query.bind(visibility);
+        }
+        if let Some(execution) = filters.execution {
+            data_query = data_query.bind(execution);
+        }
+        if let Some(ref name) = filters.name_contains {
+            data_query = data_query.bind(name.clone());
+        }
+
+        let rows = data_query.fetch_all(executor).await?;
+
+        Ok(ArtifactSearchResult { rows, total })
+    }
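The `search` method numbers its placeholders by hand (`param_idx`) and must then bind values in exactly the same order for both the count and data queries. The clause-building half of that contract, isolated from sqlx and using a reduced filter set for illustration:

```rust
/// Sketch of the WHERE-clause assembly in ArtifactRepository::search:
/// each present filter claims the next $n placeholder, in a fixed order
/// that the bind calls must mirror exactly.
fn build_where(scope: Option<&str>, owner: Option<&str>, name_contains: Option<&str>) -> String {
    let mut conditions: Vec<String> = Vec::new();
    let mut param_idx = 0;

    if scope.is_some() {
        param_idx += 1;
        conditions.push(format!("scope = ${}", param_idx));
    }
    if owner.is_some() {
        param_idx += 1;
        conditions.push(format!("owner = ${}", param_idx));
    }
    if name_contains.is_some() {
        param_idx += 1;
        conditions.push(format!("name ILIKE '%' || ${} || '%'", param_idx));
    }

    if conditions.is_empty() {
        String::new()
    } else {
        format!("WHERE {}", conditions.join(" AND "))
    }
}

fn main() {
    assert_eq!(build_where(None, None, None), "");
    assert_eq!(
        build_where(Some("user"), None, Some("log")),
        "WHERE scope = $1 AND name ILIKE '%' || $2 || '%'"
    );
    println!("ok");
}
```

Because the same `where_clause` feeds both the COUNT and data queries, any drift between the condition order and the bind order would silently attach values to the wrong placeholders, so the two `if let` bind ladders must stay in lockstep with the `is_some()` ladder above them.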
|
||||||
|
|
||||||
/// Find artifacts by scope
|
/// Find artifacts by scope
|
||||||
pub async fn find_by_scope<'e, E>(executor: E, scope: OwnerType) -> Result<Vec<Artifact>>
|
pub async fn find_by_scope<'e, E>(executor: E, scope: OwnerType) -> Result<Vec<Artifact>>
|
||||||
where
|
where
|
||||||
E: Executor<'e, Database = Postgres> + 'e,
|
E: Executor<'e, Database = Postgres> + 'e,
|
||||||
{
|
{
|
||||||
sqlx::query_as::<_, Artifact>(
|
let query = format!(
|
||||||
"SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
|
"SELECT {} FROM artifact WHERE scope = $1 ORDER BY created DESC",
|
||||||
FROM artifact
|
SELECT_COLUMNS
|
||||||
WHERE scope = $1
|
);
|
||||||
ORDER BY created DESC",
|
sqlx::query_as::<_, Artifact>(&query)
|
||||||
)
|
.bind(scope)
|
||||||
.bind(scope)
|
.fetch_all(executor)
|
||||||
.fetch_all(executor)
|
.await
|
||||||
.await
|
.map_err(Into::into)
|
||||||
.map_err(Into::into)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Find artifacts by owner
|
/// Find artifacts by owner
|
||||||
@@ -224,16 +346,15 @@ impl ArtifactRepository {
|
|||||||
where
|
where
|
||||||
E: Executor<'e, Database = Postgres> + 'e,
|
E: Executor<'e, Database = Postgres> + 'e,
|
||||||
{
|
{
|
||||||
sqlx::query_as::<_, Artifact>(
|
let query = format!(
|
||||||
"SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
|
"SELECT {} FROM artifact WHERE owner = $1 ORDER BY created DESC",
|
||||||
FROM artifact
|
SELECT_COLUMNS
|
||||||
WHERE owner = $1
|
);
|
||||||
ORDER BY created DESC",
|
sqlx::query_as::<_, Artifact>(&query)
|
||||||
)
|
.bind(owner)
|
||||||
.bind(owner)
|
.fetch_all(executor)
|
||||||
.fetch_all(executor)
|
.await
|
||||||
.await
|
.map_err(Into::into)
|
||||||
.map_err(Into::into)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Find artifacts by type
|
/// Find artifacts by type
|
||||||
@@ -244,19 +365,18 @@ impl ArtifactRepository {
|
|||||||
where
|
where
|
||||||
E: Executor<'e, Database = Postgres> + 'e,
|
E: Executor<'e, Database = Postgres> + 'e,
|
||||||
{
|
{
|
||||||
sqlx::query_as::<_, Artifact>(
|
let query = format!(
|
||||||
"SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
|
"SELECT {} FROM artifact WHERE type = $1 ORDER BY created DESC",
|
||||||
FROM artifact
|
SELECT_COLUMNS
|
||||||
WHERE type = $1
|
);
|
||||||
ORDER BY created DESC",
|
sqlx::query_as::<_, Artifact>(&query)
|
||||||
)
|
.bind(artifact_type)
|
||||||
.bind(artifact_type)
|
.fetch_all(executor)
|
||||||
.fetch_all(executor)
|
.await
|
||||||
.await
|
.map_err(Into::into)
|
||||||
.map_err(Into::into)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Find artifacts by scope and owner (common query pattern)
|
/// Find artifacts by scope and owner
|
||||||
pub async fn find_by_scope_and_owner<'e, E>(
|
pub async fn find_by_scope_and_owner<'e, E>(
|
||||||
executor: E,
|
executor: E,
|
||||||
scope: OwnerType,
|
scope: OwnerType,
|
||||||
@@ -265,17 +385,32 @@ impl ArtifactRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Artifact>(
-            "SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
-             FROM artifact
-             WHERE scope = $1 AND owner = $2
-             ORDER BY created DESC",
-        )
+        let query = format!(
+            "SELECT {} FROM artifact WHERE scope = $1 AND owner = $2 ORDER BY created DESC",
+            SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, Artifact>(&query)
             .bind(scope)
             .bind(owner)
             .fetch_all(executor)
             .await
             .map_err(Into::into)
     }

+    /// Find artifacts by execution ID
+    pub async fn find_by_execution<'e, E>(executor: E, execution_id: i64) -> Result<Vec<Artifact>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact WHERE execution = $1 ORDER BY created DESC",
+            SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, Artifact>(&query)
+            .bind(execution_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
+    }
+
     /// Find artifacts by retention policy
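The conversions above all follow the same pattern: a shared `SELECT_COLUMNS` const is spliced into each query with `format!`, so the column list lives in one place instead of being repeated per method. A minimal standalone sketch of that pattern (the column list here is a hypothetical subset, not the crate's real one):

```rust
// One const drives every SELECT; adding a column means editing a single place.
// Hypothetical subset of the artifact columns, for illustration only.
const SELECT_COLUMNS: &str = "id, ref, scope, owner, type, created, updated";

fn find_by_type_query() -> String {
    // Mirrors the diff's `let query = format!(...)` construction.
    format!(
        "SELECT {} FROM artifact WHERE type = $1 ORDER BY created DESC",
        SELECT_COLUMNS
    )
}

fn main() {
    println!("{}", find_by_type_query());
}
```

The trade-off is that the query string is now built at runtime, so `sqlx::query_as` must borrow it (`&query`) rather than take a `'static` literal; only the column list varies, the `$1` placeholders still carry all user data.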
@@ -286,15 +421,377 @@ impl ArtifactRepository {
     where
         E: Executor<'e, Database = Postgres> + 'e,
     {
-        sqlx::query_as::<_, Artifact>(
-            "SELECT id, ref, scope, owner, type, retention_policy, retention_limit, created, updated
-             FROM artifact
-             WHERE retention_policy = $1
-             ORDER BY created DESC",
-        )
+        let query = format!(
+            "SELECT {} FROM artifact WHERE retention_policy = $1 ORDER BY created DESC",
+            SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, Artifact>(&query)
             .bind(retention_policy)
             .fetch_all(executor)
             .await
             .map_err(Into::into)
     }

+    /// Append data to a progress-type artifact.
+    ///
+    /// If `artifact.data` is currently NULL, it is initialized as a JSON array
+    /// containing the new entry. Otherwise the entry is appended to the existing
+    /// array. This is done atomically in a single SQL statement.
+    pub async fn append_progress<'e, E>(
+        executor: E,
+        id: i64,
+        entry: &serde_json::Value,
+    ) -> Result<Artifact>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "UPDATE artifact \
+             SET data = CASE \
+                 WHEN data IS NULL THEN jsonb_build_array($2::jsonb) \
+                 ELSE data || jsonb_build_array($2::jsonb) \
+             END, \
+             updated = NOW() \
+             WHERE id = $1 AND type = 'progress' \
+             RETURNING {}",
+            SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, Artifact>(&query)
+            .bind(id)
+            .bind(entry)
+            .fetch_one(executor)
+            .await
+            .map_err(Into::into)
+    }
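The `CASE` in `append_progress` exists because `NULL || jsonb_build_array(...)` would yield NULL in PostgreSQL, silently dropping the entry. A plain-Rust sketch of the same branch logic, with `Vec<String>` standing in for the JSONB array (this mirrors the SQL semantics, it is not the SQL itself):

```rust
// NULL data column -> one-element array; existing array -> append the entry.
fn append_entry(data: Option<Vec<String>>, entry: &str) -> Vec<String> {
    match data {
        // WHEN data IS NULL THEN jsonb_build_array($2::jsonb)
        None => vec![entry.to_string()],
        // ELSE data || jsonb_build_array($2::jsonb)
        Some(mut arr) => {
            arr.push(entry.to_string());
            arr
        }
    }
}

fn main() {
    let first = append_entry(None, "10% done");
    let second = append_entry(Some(first), "50% done");
    println!("{:?}", second);
}
```

Doing the branch inside one `UPDATE ... RETURNING` keeps the read-modify-write atomic; two round trips (SELECT then UPDATE) would race with concurrent progress writers.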
+
+    /// Replace the full data payload on a progress-type artifact (for "set" semantics).
+    pub async fn set_data<'e, E>(executor: E, id: i64, data: &serde_json::Value) -> Result<Artifact>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "UPDATE artifact SET data = $2, updated = NOW() \
+             WHERE id = $1 RETURNING {}",
+            SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, Artifact>(&query)
+            .bind(id)
+            .bind(data)
+            .fetch_one(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Update the size_bytes of an artifact (used by worker finalization to sync
+    /// the parent artifact's size with the latest file-based version).
+    pub async fn update_size_bytes<'e, E>(executor: E, id: i64, size_bytes: i64) -> Result<bool>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let result =
+            sqlx::query("UPDATE artifact SET size_bytes = $1, updated = NOW() WHERE id = $2")
+                .bind(size_bytes)
+                .bind(id)
+                .execute(executor)
+                .await?;
+        Ok(result.rows_affected() > 0)
+    }
+}
+
+// ============================================================================
+// ArtifactVersionRepository
+// ============================================================================
+
+use crate::models::artifact_version;
+
+pub struct ArtifactVersionRepository;
+
+impl Repository for ArtifactVersionRepository {
+    type Entity = ArtifactVersion;
+    fn table_name() -> &'static str {
+        "artifact_version"
+    }
+}
+
+#[derive(Debug, Clone)]
+pub struct CreateArtifactVersionInput {
+    pub artifact: i64,
+    pub content_type: Option<String>,
+    pub content: Option<Vec<u8>>,
+    pub content_json: Option<serde_json::Value>,
+    pub file_path: Option<String>,
+    pub meta: Option<serde_json::Value>,
+    pub created_by: Option<String>,
+}
+
+impl ArtifactVersionRepository {
+    /// Find a version by ID (without binary content for performance)
+    pub async fn find_by_id<'e, E>(executor: E, id: i64) -> Result<Option<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE id = $1",
+            artifact_version::SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(id)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Find a version by ID including binary content
+    pub async fn find_by_id_with_content<'e, E>(
+        executor: E,
+        id: i64,
+    ) -> Result<Option<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE id = $1",
+            artifact_version::SELECT_COLUMNS_WITH_CONTENT
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(id)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// List all versions for an artifact (without binary content), newest first
+    pub async fn list_by_artifact<'e, E>(
+        executor: E,
+        artifact_id: i64,
+    ) -> Result<Vec<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE artifact = $1 ORDER BY version DESC",
+            artifact_version::SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(artifact_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Get the latest version for an artifact (without binary content)
+    pub async fn find_latest<'e, E>(
+        executor: E,
+        artifact_id: i64,
+    ) -> Result<Option<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE artifact = $1 ORDER BY version DESC LIMIT 1",
+            artifact_version::SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(artifact_id)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Get the latest version for an artifact (with binary content)
+    pub async fn find_latest_with_content<'e, E>(
+        executor: E,
+        artifact_id: i64,
+    ) -> Result<Option<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE artifact = $1 ORDER BY version DESC LIMIT 1",
+            artifact_version::SELECT_COLUMNS_WITH_CONTENT
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(artifact_id)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Get a specific version by artifact and version number (without binary content)
+    pub async fn find_by_version<'e, E>(
+        executor: E,
+        artifact_id: i64,
+        version: i32,
+    ) -> Result<Option<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE artifact = $1 AND version = $2",
+            artifact_version::SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(artifact_id)
+            .bind(version)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Get a specific version by artifact and version number (with binary content)
+    pub async fn find_by_version_with_content<'e, E>(
+        executor: E,
+        artifact_id: i64,
+        version: i32,
+    ) -> Result<Option<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE artifact = $1 AND version = $2",
+            artifact_version::SELECT_COLUMNS_WITH_CONTENT
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(artifact_id)
+            .bind(version)
+            .fetch_optional(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Create a new artifact version. The version number is auto-assigned
+    /// (MAX(version) + 1) and the retention trigger fires after insert.
+    pub async fn create<'e, E>(
+        executor: E,
+        input: CreateArtifactVersionInput,
+    ) -> Result<ArtifactVersion>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let size_bytes = input.content.as_ref().map(|c| c.len() as i64).or_else(|| {
+            input
+                .content_json
+                .as_ref()
+                .map(|j| serde_json::to_string(j).unwrap_or_default().len() as i64)
+        });
+
+        let query = format!(
+            "INSERT INTO artifact_version \
+             (artifact, version, content_type, size_bytes, content, content_json, file_path, meta, created_by) \
+             VALUES ($1, next_artifact_version($1), $2, $3, $4, $5, $6, $7, $8) \
+             RETURNING {}",
+            artifact_version::SELECT_COLUMNS_WITH_CONTENT
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(input.artifact)
+            .bind(&input.content_type)
+            .bind(size_bytes)
+            .bind(&input.content)
+            .bind(&input.content_json)
+            .bind(&input.file_path)
+            .bind(&input.meta)
+            .bind(&input.created_by)
+            .fetch_one(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Delete a specific version by ID
+    pub async fn delete<'e, E>(executor: E, id: i64) -> Result<bool>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let result = sqlx::query("DELETE FROM artifact_version WHERE id = $1")
+            .bind(id)
+            .execute(executor)
+            .await?;
+        Ok(result.rows_affected() > 0)
+    }
+
+    /// Delete all versions for an artifact
+    pub async fn delete_all_for_artifact<'e, E>(executor: E, artifact_id: i64) -> Result<u64>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let result = sqlx::query("DELETE FROM artifact_version WHERE artifact = $1")
+            .bind(artifact_id)
+            .execute(executor)
+            .await?;
+        Ok(result.rows_affected())
+    }
+
+    /// Count versions for an artifact
+    pub async fn count_by_artifact<'e, E>(executor: E, artifact_id: i64) -> Result<i64>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        sqlx::query_scalar::<_, i64>("SELECT COUNT(*) FROM artifact_version WHERE artifact = $1")
+            .bind(artifact_id)
+            .fetch_one(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Update the size_bytes of a specific artifact version (used by worker finalization).
+    pub async fn update_size_bytes<'e, E>(
+        executor: E,
+        version_id: i64,
+        size_bytes: i64,
+    ) -> Result<bool>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let result = sqlx::query("UPDATE artifact_version SET size_bytes = $1 WHERE id = $2")
+            .bind(size_bytes)
+            .bind(version_id)
+            .execute(executor)
+            .await?;
+        Ok(result.rows_affected() > 0)
+    }
+
+    /// Find all file-backed versions linked to an execution.
+    /// Joins artifact_version → artifact on artifact.execution to find all
+    /// file-based versions produced by a given execution.
+    pub async fn find_file_versions_by_execution<'e, E>(
+        executor: E,
+        execution_id: i64,
+    ) -> Result<Vec<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT av.{} \
+             FROM artifact_version av \
+             JOIN artifact a ON av.artifact = a.id \
+             WHERE a.execution = $1 AND av.file_path IS NOT NULL",
+            artifact_version::SELECT_COLUMNS
+                .split(", ")
+                .collect::<Vec<_>>()
+                .join(", av.")
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(execution_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Find all file-backed versions for a specific artifact (used for disk cleanup on delete).
+    pub async fn find_file_versions_by_artifact<'e, E>(
+        executor: E,
+        artifact_id: i64,
+    ) -> Result<Vec<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE artifact = $1 AND file_path IS NOT NULL",
+            artifact_version::SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(artifact_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
+    }
     }
 }
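The join query in `find_file_versions_by_execution` must alias-qualify every column as `av.<col>` because `artifact` and `artifact_version` share column names such as `id`. The diff derives the qualified list from `SELECT_COLUMNS` by re-joining on `", av."`, with the format string supplying the leading `av.` for the first column. A standalone sketch of that string transformation (the column list here is a hypothetical subset, and note the simple split assumes no column expression itself contains `", "`):

```rust
// Turn "id, artifact, version" into "id, av.artifact, av.version"; the caller
// prepends "av." for the first column inside the format string.
fn prefix_columns(select_columns: &str) -> String {
    select_columns
        .split(", ")
        .collect::<Vec<_>>()
        .join(", av.")
}

fn main() {
    let cols = prefix_columns("id, artifact, version, file_path");
    let query = format!(
        "SELECT av.{} FROM artifact_version av \
         JOIN artifact a ON av.artifact = a.id \
         WHERE a.execution = $1 AND av.file_path IS NOT NULL",
        cols
    );
    println!("{}", query);
}
```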
@@ -49,7 +49,7 @@ pub mod workflow;
 // Re-export repository types
 pub use action::{ActionRepository, PolicyRepository};
 pub use analytics::AnalyticsRepository;
-pub use artifact::ArtifactRepository;
+pub use artifact::{ArtifactRepository, ArtifactVersionRepository};
 pub use entity_history::EntityHistoryRepository;
 pub use event::{EnforcementRepository, EventRepository};
 pub use execution::ExecutionRepository;
@@ -3,7 +3,9 @@
 //! Tests cover CRUD operations, specialized queries, constraints,
 //! enum handling, timestamps, and edge cases.

-use attune_common::models::enums::{ArtifactType, OwnerType, RetentionPolicyType};
+use attune_common::models::enums::{
+    ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType,
+};
 use attune_common::repositories::artifact::{
     ArtifactRepository, CreateArtifactInput, UpdateArtifactInput,
 };

@@ -65,8 +67,14 @@ impl ArtifactFixture {
             scope: OwnerType::System,
             owner: self.unique_owner("system"),
             r#type: ArtifactType::FileText,
+            visibility: ArtifactVisibility::default(),
             retention_policy: RetentionPolicyType::Versions,
             retention_limit: 5,
+            name: None,
+            description: None,
+            content_type: None,
+            execution: None,
+            data: None,
         }
     }
 }

@@ -247,8 +255,14 @@ async fn test_update_artifact_all_fields() {
         scope: Some(OwnerType::Identity),
         owner: Some(fixture.unique_owner("identity")),
         r#type: Some(ArtifactType::FileImage),
+        visibility: Some(ArtifactVisibility::Public),
         retention_policy: Some(RetentionPolicyType::Days),
         retention_limit: Some(30),
+        name: Some("Updated Name".to_string()),
+        description: Some("Updated description".to_string()),
+        content_type: Some("image/png".to_string()),
+        size_bytes: Some(12345),
+        data: Some(serde_json::json!({"key": "value"})),
     };

     let updated = ArtifactRepository::update(&pool, created.id, update_input.clone())
@@ -31,7 +31,6 @@ clap = { workspace = true }
 lapin = { workspace = true }
 redis = { workspace = true }
 dashmap = { workspace = true }
-tera = "1.19"
 serde_yaml_ng = { workspace = true }
 validator = { workspace = true }
 futures = { workspace = true }
@@ -28,7 +28,3 @@ pub use queue_manager::{ExecutionQueueManager, QueueConfig, QueueStats};
 pub use retry_manager::{RetryAnalysis, RetryConfig, RetryManager, RetryReason};
 pub use timeout_monitor::{ExecutionTimeoutMonitor, TimeoutMonitorConfig};
 pub use worker_health::{HealthMetrics, HealthProbeConfig, HealthStatus, WorkerHealthProbe};
-pub use workflow::{
-    parse_workflow_yaml, BackoffStrategy, ParseError, TemplateEngine, VariableContext,
-    WorkflowDefinition, WorkflowValidator,
-};
@@ -61,9 +61,6 @@ pub type ContextResult<T> = Result<T, ContextError>;
 /// Errors that can occur during context operations
 #[derive(Debug, Error)]
 pub enum ContextError {
-    #[error("Template rendering error: {0}")]
-    TemplateError(String),
-
     #[error("Variable not found: {0}")]
     VariableNotFound(String),

@@ -200,16 +197,19 @@ impl WorkflowContext {
     }

     /// Get a workflow-scoped variable by name.
+    #[allow(dead_code)] // Part of complete context API; used in tests
     pub fn get_var(&self, name: &str) -> Option<JsonValue> {
         self.variables.get(name).map(|entry| entry.value().clone())
     }

     /// Store a completed task's result (accessible as `task.<name>.*`).
+    #[allow(dead_code)] // Part of complete context API; used in tests
     pub fn set_task_result(&mut self, task_name: &str, result: JsonValue) {
         self.task_results.insert(task_name.to_string(), result);
     }

     /// Get a task result by task name.
+    #[allow(dead_code)] // Part of complete context API; used in tests
     pub fn get_task_result(&self, task_name: &str) -> Option<JsonValue> {
         self.task_results
             .get(task_name)

@@ -217,11 +217,13 @@ impl WorkflowContext {
     }

     /// Set the pack configuration (accessible as `config.<key>`).
+    #[allow(dead_code)] // Part of complete context API; used in tests
     pub fn set_pack_config(&mut self, config: JsonValue) {
         self.pack_config = Arc::new(config);
     }

     /// Set the keystore secrets (accessible as `keystore.<key>`).
+    #[allow(dead_code)] // Part of complete context API; used in tests
     pub fn set_keystore(&mut self, secrets: JsonValue) {
         self.keystore = Arc::new(secrets);
     }

@@ -233,6 +235,7 @@ impl WorkflowContext {
     }

     /// Clear current item
+    #[allow(dead_code)] // Part of complete context API; symmetric with set_current_item
     pub fn clear_current_item(&mut self) {
         self.current_item = None;
         self.current_index = None;

@@ -440,6 +443,7 @@ impl WorkflowContext {
     }

     /// Export context for storage
+    #[allow(dead_code)] // Part of complete context API; used in tests
     pub fn export(&self) -> JsonValue {
         let variables: HashMap<String, JsonValue> = self
             .variables

@@ -470,6 +474,7 @@ impl WorkflowContext {
     }

     /// Import context from stored data
+    #[allow(dead_code)] // Part of complete context API; used in tests
     pub fn import(data: JsonValue) -> ContextResult<Self> {
         let variables = DashMap::new();
         if let Some(obj) = data["variables"].as_object() {

@@ -677,7 +682,9 @@ mod tests {
         ctx.set_var("greeting", json!("Hello"));

         // Canonical: workflow.<name>
-        let result = ctx.render_template("{{ workflow.greeting }} World").unwrap();
+        let result = ctx
+            .render_template("{{ workflow.greeting }} World")
+            .unwrap();
         assert_eq!(result, "Hello World");
     }

@@ -699,7 +706,9 @@ mod tests {
         let ctx = WorkflowContext::new(json!({}), vars);

         // Backward-compat alias: variables.<name>
-        let result = ctx.render_template("{{ variables.greeting }} World").unwrap();
+        let result = ctx
+            .render_template("{{ variables.greeting }} World")
+            .unwrap();
         assert_eq!(result, "Hello World");
     }

@@ -735,7 +744,9 @@ mod tests {
         let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
         ctx.set_task_result("fetch", json!({"result": {"data": {"id": 42}}}));

-        let val = ctx.evaluate_expression("task.fetch.result.data.id").unwrap();
+        let val = ctx
+            .evaluate_expression("task.fetch.result.data.id")
+            .unwrap();
         assert_eq!(val, json!(42));
     }

@@ -744,7 +755,9 @@ mod tests {
         let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
         ctx.set_task_result("run_cmd", json!({"result": {"stdout": "hello world"}}));

-        let val = ctx.evaluate_expression("task.run_cmd.result.stdout").unwrap();
+        let val = ctx
+            .evaluate_expression("task.run_cmd.result.stdout")
+            .unwrap();
         assert_eq!(val, json!("hello world"));
     }

@@ -755,14 +768,14 @@ mod tests {
     #[test]
     fn test_config_namespace() {
         let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
-        ctx.set_pack_config(json!({"api_token": "tok_abc123", "base_url": "https://api.example.com"}));
+        ctx.set_pack_config(
+            json!({"api_token": "tok_abc123", "base_url": "https://api.example.com"}),
+        );

         let val = ctx.evaluate_expression("config.api_token").unwrap();
         assert_eq!(val, json!("tok_abc123"));

-        let result = ctx
-            .render_template("URL: {{ config.base_url }}")
-            .unwrap();
+        let result = ctx.render_template("URL: {{ config.base_url }}").unwrap();
         assert_eq!(result, "URL: https://api.example.com");
     }

@@ -796,7 +809,9 @@ mod tests {
         let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
         ctx.set_keystore(json!({"My Secret Key": "value123"}));

-        let val = ctx.evaluate_expression("keystore[\"My Secret Key\"]").unwrap();
+        let val = ctx
+            .evaluate_expression("keystore[\"My Secret Key\"]")
+            .unwrap();
         assert_eq!(val, json!("value123"));
     }

@@ -850,9 +865,7 @@ mod tests {
         assert!(ctx
             .evaluate_condition("parameters.x > 50 or parameters.y > 15")
             .unwrap());
-        assert!(ctx
-            .evaluate_condition("not parameters.x > 50")
-            .unwrap());
+        assert!(ctx.evaluate_condition("not parameters.x > 50").unwrap());
     }

     #[test]
@@ -863,16 +876,15 @@ mod tests {
         assert!(ctx.evaluate_condition("\"admin\" in roles").unwrap());
         assert!(!ctx.evaluate_condition("\"root\" in roles").unwrap());
         // Via canonical workflow namespace
-        assert!(ctx.evaluate_condition("\"admin\" in workflow.roles").unwrap());
+        assert!(ctx
+            .evaluate_condition("\"admin\" in workflow.roles")
+            .unwrap());
     }

     #[test]
     fn test_condition_with_function_calls() {
         let mut ctx = WorkflowContext::new(json!({}), HashMap::new());
-        ctx.set_last_task_outcome(
-            json!({"status": "ok", "code": 200}),
-            TaskOutcome::Succeeded,
-        );
+        ctx.set_last_task_outcome(json!({"status": "ok", "code": 200}), TaskOutcome::Succeeded);
         assert!(ctx.evaluate_condition("succeeded()").unwrap());
         assert!(!ctx.evaluate_condition("failed()").unwrap());
         assert!(ctx

@@ -889,9 +901,7 @@ mod tests {
         ctx.set_var("items", json!([1, 2, 3, 4, 5]));
         assert!(ctx.evaluate_condition("length(items) > 3").unwrap());
         assert!(!ctx.evaluate_condition("length(items) > 10").unwrap());
-        assert!(ctx
-            .evaluate_condition("length(items) == 5")
-            .unwrap());
+        assert!(ctx.evaluate_condition("length(items) == 5").unwrap());
     }

     #[test]
@@ -916,10 +926,8 @@ mod tests {

     #[test]
     fn test_expression_string_concat() {
-        let ctx = WorkflowContext::new(
-            json!({"first": "Hello", "second": "World"}),
-            HashMap::new(),
-        );
+        let ctx =
+            WorkflowContext::new(json!({"first": "Hello", "second": "World"}), HashMap::new());
         let input = json!({"msg": "{{ parameters.first + \" \" + parameters.second }}"});
         let result = ctx.render_json(&input).unwrap();
         assert_eq!(result["msg"], json!("Hello World"));
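The context tests above exercise dotted expressions such as `task.fetch.result.data.id`, which resolve segment by segment through nested data. A minimal sketch of that traversal idea, using a toy tree type instead of the crate's JSON values and Tera engine (the type and function names here are illustrative assumptions, not the real API):

```rust
use std::collections::HashMap;

// Toy stand-in for a JSON tree: either a leaf integer or a nested map.
enum Node {
    Leaf(i64),
    Map(HashMap<String, Node>),
}

// Walk a dotted path like "fetch.id" segment by segment; any missing key
// or premature leaf short-circuits to None.
fn resolve<'a>(root: &'a Node, path: &str) -> Option<&'a i64> {
    let mut cur = root;
    for seg in path.split('.') {
        match cur {
            Node::Map(m) => cur = m.get(seg)?,
            Node::Leaf(_) => return None,
        }
    }
    match cur {
        Node::Leaf(v) => Some(v),
        _ => None,
    }
}

fn main() {
    let mut fetch = HashMap::new();
    fetch.insert("id".to_string(), Node::Leaf(42));
    let mut top = HashMap::new();
    top.insert("fetch".to_string(), Node::Map(fetch));
    let root = Node::Map(top);
    println!("{:?}", resolve(&root, "fetch.id"));
}
```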
|||||||
@@ -1,776 +0,0 @@ (entire file deleted)
//! Workflow Execution Coordinator
//!
//! This module orchestrates workflow execution, managing task dependencies,
//! parallel execution, state transitions, and error handling.

use crate::workflow::context::WorkflowContext;
use crate::workflow::graph::{TaskGraph, TaskNode};
use crate::workflow::task_executor::{TaskExecutionResult, TaskExecutionStatus, TaskExecutor};
use attune_common::error::{Error, Result};
use attune_common::models::{
    execution::{Execution, WorkflowTaskMetadata},
    ExecutionStatus, Id, WorkflowExecution,
};
use attune_common::mq::MessageQueue;
use attune_common::workflow::WorkflowDefinition;
use chrono::Utc;
use serde_json::{json, Value as JsonValue};
use sqlx::PgPool;
use std::collections::{HashMap, HashSet};
use std::sync::Arc;
use tokio::sync::Mutex;
use tracing::{debug, error, info, warn};

/// Workflow execution coordinator
pub struct WorkflowCoordinator {
    db_pool: PgPool,
    mq: MessageQueue,
    task_executor: TaskExecutor,
}

impl WorkflowCoordinator {
    /// Create a new workflow coordinator
    pub fn new(db_pool: PgPool, mq: MessageQueue) -> Self {
        let task_executor = TaskExecutor::new(db_pool.clone(), mq.clone());

        Self {
            db_pool,
            mq,
            task_executor,
        }
    }

    /// Start a new workflow execution
    pub async fn start_workflow(
        &self,
        workflow_ref: &str,
        parameters: JsonValue,
        parent_execution_id: Option<Id>,
    ) -> Result<WorkflowExecutionHandle> {
        info!(
            "Starting workflow: {} with params: {:?}",
            workflow_ref, parameters
        );

        // Load workflow definition
        let workflow_def = sqlx::query_as::<_, attune_common::models::WorkflowDefinition>(
            "SELECT * FROM attune.workflow_definition WHERE ref = $1",
        )
        .bind(workflow_ref)
        .fetch_optional(&self.db_pool)
        .await?
        .ok_or_else(|| Error::not_found("workflow_definition", "ref", workflow_ref))?;

        if !workflow_def.enabled {
            return Err(Error::validation("Workflow is disabled"));
        }

        // Parse workflow definition
        let definition: WorkflowDefinition = serde_json::from_value(workflow_def.definition)
            .map_err(|e| Error::validation(format!("Invalid workflow definition: {}", e)))?;

        // Build task graph
        let graph = TaskGraph::from_workflow(&definition)
            .map_err(|e| Error::validation(format!("Failed to build task graph: {}", e)))?;

        // Create parent execution record
        // TODO: Implement proper execution creation
        let _parent_execution_id_temp = parent_execution_id.unwrap_or(1); // Placeholder

        let parent_execution = sqlx::query_as::<_, attune_common::models::Execution>(
            r#"
            INSERT INTO attune.execution (action_ref, pack, input, parent, status)
            VALUES ($1, $2, $3, $4, $5)
            RETURNING *
            "#,
        )
        .bind(workflow_ref)
        .bind(workflow_def.pack)
        .bind(&parameters)
        .bind(parent_execution_id)
        .bind(ExecutionStatus::Running)
        .fetch_one(&self.db_pool)
        .await?;

        // Initialize workflow context
        let initial_vars: HashMap<String, JsonValue> = definition
            .vars
            .iter()
            .map(|(k, v)| (k.clone(), v.clone()))
            .collect();
        let context = WorkflowContext::new(parameters, initial_vars);

        // Create workflow execution record
        let workflow_execution = self
            .create_workflow_execution_record(
                parent_execution.id,
                workflow_def.id,
                &graph,
                &context,
            )
            .await?;

        info!(
            "Created workflow execution {} for workflow {}",
            workflow_execution.id, workflow_ref
        );

        // Create execution handle
        let handle = WorkflowExecutionHandle {
            coordinator: Arc::new(self.clone_ref()),
            execution_id: workflow_execution.id,
            parent_execution_id: parent_execution.id,
            workflow_def_id: workflow_def.id,
            graph,
            state: Arc::new(Mutex::new(WorkflowExecutionState {
                context,
                status: ExecutionStatus::Running,
                completed_tasks: HashSet::new(),
                failed_tasks: HashSet::new(),
                skipped_tasks: HashSet::new(),
                executing_tasks: HashSet::new(),
                scheduled_tasks: HashSet::new(),
                join_state: HashMap::new(),
                task_executions: HashMap::new(),
                paused: false,
                pause_reason: None,
                error_message: None,
            })),
        };

        // Update execution status to running
        self.update_workflow_execution_status(workflow_execution.id, ExecutionStatus::Running)
            .await?;

        Ok(handle)
    }

    /// Create workflow execution record in database
    async fn create_workflow_execution_record(
        &self,
        execution_id: Id,
        workflow_def_id: Id,
        graph: &TaskGraph,
        context: &WorkflowContext,
    ) -> Result<WorkflowExecution> {
        let task_graph_json = serde_json::to_value(graph)
            .map_err(|e| Error::internal(format!("Failed to serialize task graph: {}", e)))?;

        let variables = context.export();

        sqlx::query_as::<_, WorkflowExecution>(
            r#"
            INSERT INTO attune.workflow_execution (
                execution, workflow_def, current_tasks, completed_tasks,
                failed_tasks, skipped_tasks, variables, task_graph,
                status, paused
            )
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
            RETURNING *
            "#,
        )
        .bind(execution_id)
        .bind(workflow_def_id)
        .bind(&[] as &[String])
        .bind(&[] as &[String])
        .bind(&[] as &[String])
        .bind(&[] as &[String])
        .bind(variables)
        .bind(task_graph_json)
        .bind(ExecutionStatus::Running)
        .bind(false)
        .fetch_one(&self.db_pool)
        .await
        .map_err(Into::into)
    }

    /// Update workflow execution status
    async fn update_workflow_execution_status(
        &self,
        workflow_execution_id: Id,
        status: ExecutionStatus,
    ) -> Result<()> {
        sqlx::query(
            r#"
            UPDATE attune.workflow_execution
            SET status = $1, updated = NOW()
            WHERE id = $2
            "#,
        )
        .bind(status)
        .bind(workflow_execution_id)
        .execute(&self.db_pool)
        .await?;

        Ok(())
    }

    /// Update workflow execution state
    async fn update_workflow_execution_state(
        &self,
        workflow_execution_id: Id,
        state: &WorkflowExecutionState,
    ) -> Result<()> {
        let current_tasks: Vec<String> = state.executing_tasks.iter().cloned().collect();
        let completed_tasks: Vec<String> = state.completed_tasks.iter().cloned().collect();
        let failed_tasks: Vec<String> = state.failed_tasks.iter().cloned().collect();
        let skipped_tasks: Vec<String> = state.skipped_tasks.iter().cloned().collect();

        sqlx::query(
            r#"
            UPDATE attune.workflow_execution
            SET
                current_tasks = $1,
                completed_tasks = $2,
                failed_tasks = $3,
                skipped_tasks = $4,
                variables = $5,
                status = $6,
                paused = $7,
                pause_reason = $8,
                error_message = $9,
                updated = NOW()
            WHERE id = $10
            "#,
        )
        .bind(&current_tasks)
        .bind(&completed_tasks)
        .bind(&failed_tasks)
        .bind(&skipped_tasks)
        .bind(state.context.export())
        .bind(state.status)
        .bind(state.paused)
        .bind(&state.pause_reason)
        .bind(&state.error_message)
        .bind(workflow_execution_id)
        .execute(&self.db_pool)
        .await?;

        Ok(())
    }

    /// Create a task execution record
    async fn create_task_execution_record(
        &self,
        workflow_execution_id: Id,
        parent_execution_id: Id,
        task: &TaskNode,
        task_index: Option<i32>,
        task_batch: Option<i32>,
    ) -> Result<Execution> {
        let max_retries = task.retry.as_ref().map(|r| r.count as i32).unwrap_or(0);
        let timeout = task.timeout.map(|t| t as i32);

        // Create workflow task metadata
        let workflow_task = WorkflowTaskMetadata {
            workflow_execution: workflow_execution_id,
            task_name: task.name.clone(),
            task_index,
            task_batch,
            retry_count: 0,
            max_retries,
            next_retry_at: None,
            timeout_seconds: timeout,
            timed_out: false,
            duration_ms: None,
            started_at: Some(Utc::now()),
            completed_at: None,
        };

        sqlx::query_as::<_, Execution>(
            r#"
            INSERT INTO attune.execution (
                action_ref, parent, status, workflow_task
            )
            VALUES ($1, $2, $3, $4)
            RETURNING *
            "#,
        )
        .bind(&task.name)
        .bind(parent_execution_id)
        .bind(ExecutionStatus::Running)
        .bind(sqlx::types::Json(&workflow_task))
        .fetch_one(&self.db_pool)
        .await
        .map_err(Into::into)
    }

    /// Update task execution record
    async fn update_task_execution_record(
        &self,
        task_execution_id: Id,
        result: &TaskExecutionResult,
    ) -> Result<()> {
        let status = match result.status {
            TaskExecutionStatus::Success => ExecutionStatus::Completed,
            TaskExecutionStatus::Failed => ExecutionStatus::Failed,
            TaskExecutionStatus::Timeout => ExecutionStatus::Timeout,
            TaskExecutionStatus::Skipped => ExecutionStatus::Cancelled,
        };

        // Fetch current execution to get workflow_task metadata
        let execution =
            sqlx::query_as::<_, Execution>("SELECT * FROM attune.execution WHERE id = $1")
                .bind(task_execution_id)
                .fetch_one(&self.db_pool)
                .await?;

        // Update workflow_task metadata
        if let Some(mut workflow_task) = execution.workflow_task {
            workflow_task.completed_at = if result.status == TaskExecutionStatus::Success {
                Some(Utc::now())
            } else {
                None
            };
            workflow_task.duration_ms = Some(result.duration_ms);
            workflow_task.retry_count = result.retry_count;
            workflow_task.next_retry_at = result.next_retry_at;
            workflow_task.timed_out = result.status == TaskExecutionStatus::Timeout;

            let _error_json = result.error.as_ref().map(|e| {
                json!({
                    "message": e.message,
                    "type": e.error_type,
                    "details": e.details
                })
            });

            sqlx::query(
                r#"
                UPDATE attune.execution
                SET
                    status = $1,
                    result = $2,
                    workflow_task = $3,
                    updated = NOW()
                WHERE id = $4
                "#,
            )
            .bind(status)
            .bind(&result.output)
            .bind(sqlx::types::Json(&workflow_task))
            .bind(task_execution_id)
            .execute(&self.db_pool)
            .await?;
        }

        Ok(())
    }

    /// Clone reference for Arc sharing
    fn clone_ref(&self) -> Self {
        Self {
            db_pool: self.db_pool.clone(),
            mq: self.mq.clone(),
            task_executor: TaskExecutor::new(self.db_pool.clone(), self.mq.clone()),
        }
    }
}

/// Workflow execution state
#[derive(Debug, Clone)]
pub struct WorkflowExecutionState {
    pub context: WorkflowContext,
    pub status: ExecutionStatus,
    pub completed_tasks: HashSet<String>,
    pub failed_tasks: HashSet<String>,
    pub skipped_tasks: HashSet<String>,
    /// Tasks currently executing
    pub executing_tasks: HashSet<String>,
    /// Tasks scheduled but not yet executing
    pub scheduled_tasks: HashSet<String>,
    /// Join state tracking: task_name -> set of completed predecessor tasks
    pub join_state: HashMap<String, HashSet<String>>,
    pub task_executions: HashMap<String, Vec<Id>>,
    pub paused: bool,
    pub pause_reason: Option<String>,
    pub error_message: Option<String>,
}

/// Handle for managing a workflow execution
pub struct WorkflowExecutionHandle {
    coordinator: Arc<WorkflowCoordinator>,
    execution_id: Id,
    parent_execution_id: Id,
    #[allow(dead_code)]
    workflow_def_id: Id,
    graph: TaskGraph,
    state: Arc<Mutex<WorkflowExecutionState>>,
}

impl WorkflowExecutionHandle {
    /// Execute the workflow to completion
    pub async fn execute(&self) -> Result<WorkflowExecutionResult> {
        info!("Executing workflow {}", self.execution_id);

        // Start with entry point tasks
        {
            let mut state = self.state.lock().await;
            for task_name in &self.graph.entry_points {
                info!("Scheduling entry point task: {}", task_name);
                state.scheduled_tasks.insert(task_name.clone());
            }
        }

        // Wait for all tasks to complete
        loop {
            // Check for and spawn scheduled tasks
            let tasks_to_spawn = {
                let mut state = self.state.lock().await;
                let mut to_spawn = Vec::new();
                for task_name in state.scheduled_tasks.iter() {
                    to_spawn.push(task_name.clone());
                }
                // Clear scheduled tasks as we're about to spawn them
                state.scheduled_tasks.clear();
                to_spawn
            };

            // Spawn scheduled tasks
            for task_name in tasks_to_spawn {
                self.spawn_task_execution(task_name).await;
            }

            tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

            let state = self.state.lock().await;

            // Check if workflow is paused
            if state.paused {
                info!("Workflow {} is paused", self.execution_id);
                break;
            }

            // Check if workflow is complete (nothing executing and nothing scheduled)
            if state.executing_tasks.is_empty() && state.scheduled_tasks.is_empty() {
                info!("Workflow {} completed", self.execution_id);
                drop(state);

                let mut state = self.state.lock().await;
                if state.failed_tasks.is_empty() {
                    state.status = ExecutionStatus::Completed;
                } else {
                    state.status = ExecutionStatus::Failed;
                    state.error_message = Some(format!(
                        "Workflow failed: {} tasks failed",
                        state.failed_tasks.len()
                    ));
                }
                self.coordinator
                    .update_workflow_execution_state(self.execution_id, &state)
                    .await?;
                break;
            }
        }

        let state = self.state.lock().await;
        Ok(WorkflowExecutionResult {
            status: state.status,
            output: state.context.export(),
            completed_tasks: state.completed_tasks.len(),
            failed_tasks: state.failed_tasks.len(),
            skipped_tasks: state.skipped_tasks.len(),
            error_message: state.error_message.clone(),
        })
    }

    /// Spawn a task execution in a new tokio task
    async fn spawn_task_execution(&self, task_name: String) {
        let coordinator = self.coordinator.clone();
        let state_arc = self.state.clone();
        let workflow_execution_id = self.execution_id;
        let parent_execution_id = self.parent_execution_id;
        let graph = self.graph.clone();

        tokio::spawn(async move {
            if let Err(e) = Self::execute_task_async(
                coordinator,
                state_arc,
                workflow_execution_id,
                parent_execution_id,
                graph,
                task_name,
            )
            .await
            {
                error!("Task execution failed: {}", e);
            }
        });
    }

    /// Execute a single task asynchronously
    async fn execute_task_async(
        coordinator: Arc<WorkflowCoordinator>,
        state: Arc<Mutex<WorkflowExecutionState>>,
        workflow_execution_id: Id,
        parent_execution_id: Id,
        graph: TaskGraph,
        task_name: String,
    ) -> Result<()> {
        // Move task from scheduled to executing
        let task = {
            let mut state = state.lock().await;
            state.scheduled_tasks.remove(&task_name);
            state.executing_tasks.insert(task_name.clone());

            // Get the task node
            match graph.get_task(&task_name) {
                Some(task) => task.clone(),
                None => {
                    error!("Task {} not found in graph", task_name);
                    return Ok(());
                }
            }
        };

        info!("Executing task: {}", task.name);

        // Create task execution record
        let task_execution = coordinator
            .create_task_execution_record(
                workflow_execution_id,
                parent_execution_id,
                &task,
                None,
                None,
            )
            .await?;

        // Get context for execution
        let mut context = {
            let state = state.lock().await;
            state.context.clone()
        };

        // Execute task
        let result = coordinator
            .task_executor
            .execute_task(
                &task,
                &mut context,
                workflow_execution_id,
                parent_execution_id,
            )
            .await?;

        // Update task execution record
        coordinator
            .update_task_execution_record(task_execution.id, &result)
            .await?;

        // Update workflow state based on result
        let success = matches!(result.status, TaskExecutionStatus::Success);

        {
            let mut state = state.lock().await;
            state.executing_tasks.remove(&task.name);

            match result.status {
                TaskExecutionStatus::Success => {
                    state.completed_tasks.insert(task.name.clone());
                    // Update context with task result
                    if let Some(output) = result.output {
                        state.context.set_task_result(&task.name, output);
                    }
                }
                TaskExecutionStatus::Failed => {
                    if result.should_retry {
                        // Task will be retried, keep it in scheduled
                        info!("Task {} will be retried", task.name);
                        state.scheduled_tasks.insert(task.name.clone());
                        // TODO: Schedule retry with delay
                    } else {
                        state.failed_tasks.insert(task.name.clone());
                        if let Some(ref error) = result.error {
                            warn!("Task {} failed: {}", task.name, error.message);
                        }
                    }
                }
                TaskExecutionStatus::Timeout => {
                    state.failed_tasks.insert(task.name.clone());
                    warn!("Task {} timed out", task.name);
                }
                TaskExecutionStatus::Skipped => {
                    state.skipped_tasks.insert(task.name.clone());
                    debug!("Task {} skipped", task.name);
                }
            }

            // Persist state
            coordinator
                .update_workflow_execution_state(workflow_execution_id, &state)
                .await?;
        }

        // Evaluate transitions and schedule next tasks
        Self::on_task_completion(state.clone(), graph.clone(), task.name.clone(), success).await?;

        Ok(())
    }

    /// Handle task completion by evaluating transitions and scheduling next tasks
    async fn on_task_completion(
        state: Arc<Mutex<WorkflowExecutionState>>,
        graph: TaskGraph,
        completed_task: String,
        success: bool,
    ) -> Result<()> {
        // Get next tasks based on transitions
        let next_tasks = graph.next_tasks(&completed_task, success);

        info!(
            "Task {} completed (success={}), next tasks: {:?}",
            completed_task, success, next_tasks
        );

        // Collect tasks to schedule
        let mut tasks_to_schedule = Vec::new();

        for next_task_name in next_tasks {
            let mut state = state.lock().await;

            // Check if task already scheduled or executing
            if state.scheduled_tasks.contains(&next_task_name)
                || state.executing_tasks.contains(&next_task_name)
            {
                continue;
            }

            if let Some(task_node) = graph.get_task(&next_task_name) {
                // Check join conditions
                if let Some(join_count) = task_node.join {
                    // Update join state
                    let join_completions = state
                        .join_state
                        .entry(next_task_name.clone())
                        .or_insert_with(HashSet::new);
                    join_completions.insert(completed_task.clone());

                    // Check if join is satisfied
                    if join_completions.len() >= join_count {
                        info!(
                            "Join condition satisfied for task {}: {}/{} completed",
                            next_task_name,
                            join_completions.len(),
                            join_count
                        );
                        state.scheduled_tasks.insert(next_task_name.clone());
                        tasks_to_schedule.push(next_task_name);
                    } else {
                        info!(
                            "Join condition not yet satisfied for task {}: {}/{} completed",
                            next_task_name,
                            join_completions.len(),
                            join_count
                        );
                    }
                } else {
                    // No join, schedule immediately
                    state.scheduled_tasks.insert(next_task_name.clone());
                    tasks_to_schedule.push(next_task_name);
                }
            } else {
                error!("Next task {} not found in graph", next_task_name);
            }
        }

        Ok(())
    }

    /// Pause workflow execution
    pub async fn pause(&self, reason: Option<String>) -> Result<()> {
        let mut state = self.state.lock().await;
        state.paused = true;
        state.pause_reason = reason;

        self.coordinator
            .update_workflow_execution_state(self.execution_id, &state)
            .await?;

        info!("Workflow {} paused", self.execution_id);
        Ok(())
    }

    /// Resume workflow execution
    pub async fn resume(&self) -> Result<()> {
        let mut state = self.state.lock().await;
        state.paused = false;
        state.pause_reason = None;

        self.coordinator
            .update_workflow_execution_state(self.execution_id, &state)
            .await?;

        info!("Workflow {} resumed", self.execution_id);
        Ok(())
    }

    /// Cancel workflow execution
    pub async fn cancel(&self) -> Result<()> {
        let mut state = self.state.lock().await;
        state.status = ExecutionStatus::Cancelled;

        self.coordinator
            .update_workflow_execution_state(self.execution_id, &state)
            .await?;

        info!("Workflow {} cancelled", self.execution_id);
        Ok(())
    }

    /// Get current execution status
    pub async fn status(&self) -> WorkflowExecutionStatus {
        let state = self.state.lock().await;
        WorkflowExecutionStatus {
            execution_id: self.execution_id,
            status: state.status,
            completed_tasks: state.completed_tasks.len(),
            failed_tasks: state.failed_tasks.len(),
            skipped_tasks: state.skipped_tasks.len(),
            executing_tasks: state.executing_tasks.iter().cloned().collect(),
            scheduled_tasks: state.scheduled_tasks.iter().cloned().collect(),
            total_tasks: self.graph.nodes.len(),
            paused: state.paused,
        }
    }
}

/// Result of workflow execution
#[derive(Debug, Clone)]
pub struct WorkflowExecutionResult {
    pub status: ExecutionStatus,
    pub output: JsonValue,
    pub completed_tasks: usize,
    pub failed_tasks: usize,
    pub skipped_tasks: usize,
    pub error_message: Option<String>,
}

/// Current status of workflow execution
#[derive(Debug, Clone)]
pub struct WorkflowExecutionStatus {
    pub execution_id: Id,
    pub status: ExecutionStatus,
    pub completed_tasks: usize,
    pub failed_tasks: usize,
    pub skipped_tasks: usize,
    pub executing_tasks: Vec<String>,
    pub scheduled_tasks: Vec<String>,
    pub total_tasks: usize,
    pub paused: bool,
}

#[cfg(test)]
mod tests {

    // Note: These tests require a database connection and are integration tests
    // They should be run with `cargo test --features integration-tests`

    #[tokio::test]
    #[ignore] // Requires database
    async fn test_workflow_coordinator_creation() {
        // This is a placeholder test
        // Actual tests would require database setup
        assert!(true);
    }
}
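The deleted coordinator's join handling (a task declared with `join = n` is scheduled only once `n` distinct predecessors have completed, tracked in `join_state: HashMap<String, HashSet<String>>`) can be sketched in isolation. This is a minimal stand-alone illustration; `JoinTracker` and `record_completion` are hypothetical names, not types from the crate:

```rust
use std::collections::{HashMap, HashSet};

/// Join-state bookkeeping in the style of the coordinator above:
/// task name -> set of predecessors that have completed so far.
struct JoinTracker {
    join_state: HashMap<String, HashSet<String>>,
}

impl JoinTracker {
    fn new() -> Self {
        Self { join_state: HashMap::new() }
    }

    /// Record that `predecessor` finished for `task`; returns true once
    /// `join_count` distinct predecessors have completed.
    fn record_completion(&mut self, task: &str, predecessor: &str, join_count: usize) -> bool {
        let completions = self.join_state.entry(task.to_string()).or_default();
        completions.insert(predecessor.to_string());
        completions.len() >= join_count
    }
}

fn main() {
    let mut tracker = JoinTracker::new();
    // "merge" requires both "fetch_a" and "fetch_b" to finish first.
    assert!(!tracker.record_completion("merge", "fetch_a", 2));
    // A duplicate completion from the same predecessor does not count twice,
    // because the HashSet deduplicates it.
    assert!(!tracker.record_completion("merge", "fetch_a", 2));
    assert!(tracker.record_completion("merge", "fetch_b", 2));
    println!("join satisfied");
}
```

Using a set rather than a counter is what makes a retried predecessor safe: re-reporting the same task name cannot prematurely satisfy the join.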
@@ -21,9 +21,6 @@ pub type GraphResult<T> = Result<T, GraphError>;
 pub enum GraphError {
     #[error("Invalid task reference: {0}")]
     InvalidTaskReference(String),
-
-    #[error("Graph building error: {0}")]
-    BuildError(String),
 }

 /// Executable task graph
@@ -197,6 +194,7 @@ impl TaskGraph {
     }

     /// Get all tasks that can transition into the given task (inbound edges)
+    #[allow(dead_code)] // Part of complete graph API; used in tests
     pub fn get_inbound_tasks(&self, task_name: &str) -> Vec<String> {
         self.inbound_edges
             .get(task_name)
@@ -221,7 +219,8 @@ impl TaskGraph {
     /// * `success` - Whether the task succeeded
     ///
     /// # Returns
-    /// A vector of (task_name, publish_vars) tuples to schedule next
+    /// A vector of task names to schedule next
+    #[allow(dead_code)] // Part of complete graph API; used in tests
     pub fn next_tasks(&self, task_name: &str, success: bool) -> Vec<String> {
         let mut next = Vec::new();

@@ -251,7 +250,8 @@ impl TaskGraph {
     /// Get the next tasks with full transition information.
     ///
     /// Returns matching transitions with their publish directives and targets,
-    /// giving the coordinator full context for variable publishing.
+    /// giving the caller full context for variable publishing.
+    #[allow(dead_code)] // Part of complete graph API; used in tests
     pub fn matching_transitions(&self, task_name: &str, success: bool) -> Vec<&GraphTransition> {
         let mut matching = Vec::new();

@@ -275,6 +275,7 @@ impl TaskGraph {
     }

     /// Collect all unique target task names from all transitions of a given task.
+    #[allow(dead_code)] // Part of complete graph API; used in tests
     pub fn all_transition_targets(&self, task_name: &str) -> HashSet<String> {
         let mut targets = HashSet::new();
         if let Some(node) = self.nodes.get(task_name) {
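The `next_tasks` contract documented above (return the target task names of every transition matching the completed task's success flag) can be illustrated with a small stand-alone sketch. The `Transition` struct and `next_tasks` function here are simplifications for illustration, not the crate's `GraphTransition` or `TaskGraph` API:

```rust
// Simplified stand-in for a graph transition: fires either on success or on failure.
struct Transition {
    on_success: bool,     // true: fires when the task succeeded; false: on failure
    targets: Vec<String>, // tasks to schedule when the transition fires
}

// Collect targets of every transition whose condition matches the outcome.
fn next_tasks(transitions: &[Transition], success: bool) -> Vec<String> {
    let mut next = Vec::new();
    for t in transitions {
        if t.on_success == success {
            next.extend(t.targets.iter().cloned());
        }
    }
    next
}

fn main() {
    let transitions = vec![
        Transition { on_success: true, targets: vec!["deploy".into()] },
        Transition { on_success: false, targets: vec!["notify_failure".into()] },
    ];
    // A successful task follows its success edges only, and vice versa.
    assert_eq!(next_tasks(&transitions, true), vec!["deploy".to_string()]);
    assert_eq!(next_tasks(&transitions, false), vec!["notify_failure".to_string()]);
    println!("ok");
}
```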
@@ -1,60 +1,12 @@
 //! Workflow orchestration module
 //!
-//! This module provides workflow execution, orchestration, parsing, validation,
-//! and template rendering capabilities for the Attune workflow orchestration system.
+//! This module provides workflow execution context, graph building, and
+//! orchestration capabilities for the Attune workflow engine.
 //!
 //! # Modules
 //!
-//! - `parser`: Parse YAML workflow definitions into structured types
-//! - `graph`: Build executable task graphs from workflow definitions
 //! - `context`: Manage workflow execution context and variables
-//! - `task_executor`: Execute individual workflow tasks
-//! - `coordinator`: Orchestrate workflow execution with state management
-//! - `template`: Template engine for variable interpolation (Jinja2-like syntax)
-//!
-//! # Example
-//!
-//! ```no_run
-//! use attune_executor::workflow::{parse_workflow_yaml, WorkflowCoordinator};
-//!
-//! // Parse a workflow YAML file
-//! let yaml = r#"
-//! ref: my_pack.my_workflow
-//! label: My Workflow
-//! version: 1.0.0
-//! tasks:
-//!   - name: hello
-//!     action: core.echo
-//!     input:
-//!       message: "{{ parameters.name }}"
-//! "#;
-//!
-//! let workflow = parse_workflow_yaml(yaml).expect("Failed to parse workflow");
-//! ```
+//! - `graph`: Build executable task graphs from workflow definitions
 
-// Phase 2: Workflow Execution Engine
 pub mod context;
-pub mod coordinator;
 pub mod graph;
-pub mod task_executor;
-pub mod template;
-
-// Re-export workflow utilities from common crate
-pub use attune_common::workflow::{
-    parse_workflow_file, parse_workflow_yaml, workflow_to_json, BackoffStrategy, DecisionBranch,
-    LoadedWorkflow, LoaderConfig, ParseError, ParseResult, PublishDirective, RegistrationOptions,
-    RegistrationResult, RetryConfig, Task, TaskType, ValidationError, ValidationResult,
-    WorkflowDefinition, WorkflowFile, WorkflowLoader, WorkflowRegistrar, WorkflowValidator,
-};
-
-// Re-export Phase 2 components
-pub use context::{ContextError, ContextResult, WorkflowContext};
-pub use coordinator::{
-    WorkflowCoordinator, WorkflowExecutionHandle, WorkflowExecutionResult, WorkflowExecutionState,
-    WorkflowExecutionStatus,
-};
-pub use graph::{GraphError, GraphResult, GraphTransition, TaskGraph, TaskNode};
-pub use task_executor::{
-    TaskExecutionError, TaskExecutionResult, TaskExecutionStatus, TaskExecutor,
-};
-pub use template::{TemplateEngine, TemplateError, TemplateResult, VariableContext, VariableScope};
@@ -1,871 +0,0 @@
//! Task Executor
//!
//! This module handles the execution of individual workflow tasks,
//! including action invocation, retries, timeouts, and with-items iteration.

use crate::workflow::context::WorkflowContext;
use crate::workflow::graph::{BackoffStrategy, RetryConfig, TaskNode};
use attune_common::error::{Error, Result};
use attune_common::models::Id;
use attune_common::mq::MessageQueue;
use chrono::{DateTime, Utc};
use serde_json::{json, Value as JsonValue};
use sqlx::PgPool;
use std::time::Duration;
use tokio::time::timeout;
use tracing::{debug, error, info, warn};

/// Task execution result
#[derive(Debug, Clone)]
pub struct TaskExecutionResult {
    /// Execution status
    pub status: TaskExecutionStatus,

    /// Task output/result
    pub output: Option<JsonValue>,

    /// Error information
    pub error: Option<TaskExecutionError>,

    /// Execution duration in milliseconds
    pub duration_ms: i64,

    /// Whether the task should be retried
    pub should_retry: bool,

    /// Next retry time (if applicable)
    pub next_retry_at: Option<DateTime<Utc>>,

    /// Number of retries performed
    pub retry_count: i32,
}

/// Task execution status
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TaskExecutionStatus {
    Success,
    Failed,
    Timeout,
    Skipped,
}

/// Task execution error
#[derive(Debug, Clone)]
pub struct TaskExecutionError {
    pub message: String,
    pub error_type: String,
    pub details: Option<JsonValue>,
}

/// Task executor
pub struct TaskExecutor {
    db_pool: PgPool,
    mq: MessageQueue,
}

impl TaskExecutor {
    /// Create a new task executor
    pub fn new(db_pool: PgPool, mq: MessageQueue) -> Self {
        Self { db_pool, mq }
    }

    /// Execute a task
    pub async fn execute_task(
        &self,
        task: &TaskNode,
        context: &mut WorkflowContext,
        workflow_execution_id: Id,
        parent_execution_id: Id,
    ) -> Result<TaskExecutionResult> {
        info!("Executing task: {}", task.name);

        let start_time = Utc::now();

        // Check if task should be skipped (when condition)
        if let Some(ref condition) = task.when {
            match context.evaluate_condition(condition) {
                Ok(should_run) => {
                    if !should_run {
                        info!("Task {} skipped due to when condition", task.name);
                        return Ok(TaskExecutionResult {
                            status: TaskExecutionStatus::Skipped,
                            output: None,
                            error: None,
                            duration_ms: 0,
                            should_retry: false,
                            next_retry_at: None,
                            retry_count: 0,
                        });
                    }
                }
                Err(e) => {
                    warn!(
                        "Failed to evaluate when condition for task {}: {}",
                        task.name, e
                    );
                    // Continue execution if condition evaluation fails
                }
            }
        }

        // Check if this is a with-items task
        if let Some(ref with_items_expr) = task.with_items {
            return self
                .execute_with_items(
                    task,
                    context,
                    workflow_execution_id,
                    parent_execution_id,
                    with_items_expr,
                )
                .await;
        }

        // Execute single task
        let result = self
            .execute_single_task(task, context, workflow_execution_id, parent_execution_id, 0)
            .await?;

        let duration_ms = (Utc::now() - start_time).num_milliseconds();

        // Store task result in context
        if let Some(ref output) = result.output {
            context.set_task_result(&task.name, output.clone());

            // Publish variables from matching transitions
            let success = matches!(result.status, TaskExecutionStatus::Success);
            for transition in &task.transitions {
                let should_fire = match transition.kind() {
                    super::graph::TransitionKind::Succeeded => success,
                    super::graph::TransitionKind::Failed => !success,
                    super::graph::TransitionKind::TimedOut => !success,
                    super::graph::TransitionKind::Always => true,
                    super::graph::TransitionKind::Custom => true,
                };
                if should_fire && !transition.publish.is_empty() {
                    let var_names: Vec<String> =
                        transition.publish.iter().map(|p| p.name.clone()).collect();
                    if let Err(e) = context.publish_from_result(output, &var_names, None) {
                        warn!("Failed to publish variables for task {}: {}", task.name, e);
                    }
                }
            }
        }

        Ok(TaskExecutionResult {
            duration_ms,
            ..result
        })
    }

    /// Execute a single task (without with-items iteration)
    async fn execute_single_task(
        &self,
        task: &TaskNode,
        context: &WorkflowContext,
        workflow_execution_id: Id,
        parent_execution_id: Id,
        retry_count: i32,
    ) -> Result<TaskExecutionResult> {
        let start_time = Utc::now();

        // Render task input
        let input = match context.render_json(&task.input) {
            Ok(rendered) => rendered,
            Err(e) => {
                error!("Failed to render task input for {}: {}", task.name, e);
                return Ok(TaskExecutionResult {
                    status: TaskExecutionStatus::Failed,
                    output: None,
                    error: Some(TaskExecutionError {
                        message: format!("Failed to render task input: {}", e),
                        error_type: "template_error".to_string(),
                        details: None,
                    }),
                    duration_ms: 0,
                    should_retry: false,
                    next_retry_at: None,
                    retry_count,
                });
            }
        };

        // Execute based on task type
        let result = match task.task_type {
            attune_common::workflow::TaskType::Action => {
                self.execute_action(task, input, workflow_execution_id, parent_execution_id)
                    .await
            }
            attune_common::workflow::TaskType::Parallel => {
                self.execute_parallel(task, context, workflow_execution_id, parent_execution_id)
                    .await
            }
            attune_common::workflow::TaskType::Workflow => {
                self.execute_workflow(task, input, workflow_execution_id, parent_execution_id)
                    .await
            }
        };

        let duration_ms = (Utc::now() - start_time).num_milliseconds();

        // Apply timeout if specified
        let result = if let Some(timeout_secs) = task.timeout {
            self.apply_timeout(result, timeout_secs).await
        } else {
            result
        };

        // Handle retries
        let mut result = result?;
        result.retry_count = retry_count;

        if result.status == TaskExecutionStatus::Failed {
            if let Some(ref retry_config) = task.retry {
                if retry_count < retry_config.count as i32 {
                    // Check if we should retry based on error condition
                    let should_retry = if let Some(ref _on_error) = retry_config.on_error {
                        // TODO: Evaluate error condition
                        true
                    } else {
                        true
                    };

                    if should_retry {
                        result.should_retry = true;
                        result.next_retry_at =
                            Some(calculate_retry_time(retry_config, retry_count));
                        info!(
                            "Task {} failed, will retry (attempt {}/{})",
                            task.name,
                            retry_count + 1,
                            retry_config.count
                        );
                    }
                }
            }
        }

        result.duration_ms = duration_ms;
        Ok(result)
    }

    /// Execute an action task
    async fn execute_action(
        &self,
        task: &TaskNode,
        input: JsonValue,
        _workflow_execution_id: Id,
        parent_execution_id: Id,
    ) -> Result<TaskExecutionResult> {
        let action_ref = match &task.action {
            Some(action) => action,
            None => {
                return Ok(TaskExecutionResult {
                    status: TaskExecutionStatus::Failed,
                    output: None,
                    error: Some(TaskExecutionError {
                        message: "Action task missing action reference".to_string(),
                        error_type: "configuration_error".to_string(),
                        details: None,
                    }),
                    duration_ms: 0,
                    should_retry: false,
                    next_retry_at: None,
                    retry_count: 0,
                });
            }
        };

        debug!("Executing action: {} with input: {:?}", action_ref, input);

        // Create execution record in database
        let execution = sqlx::query_as::<_, attune_common::models::Execution>(
            r#"
            INSERT INTO attune.execution (action_ref, input, parent, status)
            VALUES ($1, $2, $3, $4)
            RETURNING *
            "#,
        )
        .bind(action_ref)
        .bind(&input)
        .bind(parent_execution_id)
        .bind(attune_common::models::ExecutionStatus::Scheduled)
        .fetch_one(&self.db_pool)
        .await?;

        // Queue action for execution by worker
        // TODO: Implement proper message queue publishing
        info!(
            "Created action execution {} for task {} (queuing not yet implemented)",
            execution.id, task.name
        );

        // For now, return pending status
        // In a real implementation, we would wait for completion via message queue
        Ok(TaskExecutionResult {
            status: TaskExecutionStatus::Success,
            output: Some(json!({
                "execution_id": execution.id,
                "status": "queued"
            })),
            error: None,
            duration_ms: 0,
            should_retry: false,
            next_retry_at: None,
            retry_count: 0,
        })
    }

    /// Execute parallel tasks
    async fn execute_parallel(
        &self,
        task: &TaskNode,
        context: &WorkflowContext,
        workflow_execution_id: Id,
        parent_execution_id: Id,
    ) -> Result<TaskExecutionResult> {
        let sub_tasks = match &task.sub_tasks {
            Some(tasks) => tasks,
            None => {
                return Ok(TaskExecutionResult {
                    status: TaskExecutionStatus::Failed,
                    output: None,
                    error: Some(TaskExecutionError {
                        message: "Parallel task missing sub-tasks".to_string(),
                        error_type: "configuration_error".to_string(),
                        details: None,
                    }),
                    duration_ms: 0,
                    should_retry: false,
                    next_retry_at: None,
                    retry_count: 0,
                });
            }
        };

        info!("Executing {} parallel tasks", sub_tasks.len());

        // Execute all sub-tasks in parallel
        let mut futures = Vec::new();

        for subtask in sub_tasks {
            let subtask_clone = subtask.clone();
            let subtask_name = subtask.name.clone();
            let context = context.clone();
            let db_pool = self.db_pool.clone();
            let mq = self.mq.clone();

            let future = async move {
                let executor = TaskExecutor::new(db_pool, mq);
                let result = executor
                    .execute_single_task(
                        &subtask_clone,
                        &context,
                        workflow_execution_id,
                        parent_execution_id,
                        0,
                    )
                    .await;
                (subtask_name, result)
            };

            futures.push(future);
        }

        // Wait for all tasks to complete
        let task_results = futures::future::join_all(futures).await;

        let mut results = Vec::new();
        let mut all_succeeded = true;
        let mut errors = Vec::new();

        for (task_name, result) in task_results {
            match result {
                Ok(result) => {
                    if result.status != TaskExecutionStatus::Success {
                        all_succeeded = false;
                        if let Some(error) = &result.error {
                            errors.push(json!({
                                "task": task_name,
                                "error": error.message
                            }));
                        }
                    }
                    results.push(json!({
                        "task": task_name,
                        "status": format!("{:?}", result.status),
                        "output": result.output
                    }));
                }
                Err(e) => {
                    all_succeeded = false;
                    errors.push(json!({
                        "task": task_name,
                        "error": e.to_string()
                    }));
                }
            }
        }

        let status = if all_succeeded {
            TaskExecutionStatus::Success
        } else {
            TaskExecutionStatus::Failed
        };

        Ok(TaskExecutionResult {
            status,
            output: Some(json!({
                "results": results
            })),
            error: if errors.is_empty() {
                None
            } else {
                Some(TaskExecutionError {
                    message: format!("{} parallel tasks failed", errors.len()),
                    error_type: "parallel_execution_error".to_string(),
                    details: Some(json!({"errors": errors})),
                })
            },
            duration_ms: 0,
            should_retry: false,
            next_retry_at: None,
            retry_count: 0,
        })
    }

    /// Execute a workflow task (nested workflow)
    async fn execute_workflow(
        &self,
        _task: &TaskNode,
        _input: JsonValue,
        _workflow_execution_id: Id,
        _parent_execution_id: Id,
    ) -> Result<TaskExecutionResult> {
        // TODO: Implement nested workflow execution
        // For now, return not implemented
        warn!("Workflow task execution not yet implemented");

        Ok(TaskExecutionResult {
            status: TaskExecutionStatus::Failed,
            output: None,
            error: Some(TaskExecutionError {
                message: "Nested workflow execution not yet implemented".to_string(),
                error_type: "not_implemented".to_string(),
                details: None,
            }),
            duration_ms: 0,
            should_retry: false,
            next_retry_at: None,
            retry_count: 0,
        })
    }

    /// Execute task with with-items iteration
    async fn execute_with_items(
        &self,
        task: &TaskNode,
        context: &mut WorkflowContext,
        workflow_execution_id: Id,
        parent_execution_id: Id,
        items_expr: &str,
    ) -> Result<TaskExecutionResult> {
        // Render items expression
        let items_str = context.render_template(items_expr).map_err(|e| {
            Error::validation(format!("Failed to render with-items expression: {}", e))
        })?;

        // Parse items (should be a JSON array)
        let items: Vec<JsonValue> = serde_json::from_str(&items_str).map_err(|e| {
            Error::validation(format!(
                "with-items expression did not produce valid JSON array: {}",
                e
            ))
        })?;

        info!("Executing task {} with {} items", task.name, items.len());

        let items_len = items.len(); // Store length before consuming items
        let concurrency = task.concurrency.unwrap_or(10);

        let mut all_results = Vec::new();
        let mut all_succeeded = true;
        let mut errors = Vec::new();

        // Check if batch processing is enabled
        if let Some(batch_size) = task.batch_size {
            // Batch mode: split items into batches and pass as arrays
            debug!(
                "Processing {} items in batches of {} (batch mode)",
                items.len(),
                batch_size
            );

            let batches: Vec<Vec<JsonValue>> = items
                .chunks(batch_size)
                .map(|chunk| chunk.to_vec())
                .collect();

            debug!("Created {} batches", batches.len());

            // Execute batches with concurrency limit
            let mut handles = Vec::new();
            let semaphore = std::sync::Arc::new(tokio::sync::Semaphore::new(concurrency));

            for (batch_idx, batch) in batches.into_iter().enumerate() {
                let permit = semaphore.clone().acquire_owned().await.unwrap();

                let executor = TaskExecutor::new(self.db_pool.clone(), self.mq.clone());
                let task = task.clone();
                let mut batch_context = context.clone();

                // Set current_item to the batch array
                batch_context.set_current_item(json!(batch), batch_idx);

                let handle = tokio::spawn(async move {
                    let result = executor
                        .execute_single_task(
                            &task,
                            &batch_context,
                            workflow_execution_id,
                            parent_execution_id,
                            0,
                        )
                        .await;
                    drop(permit);
                    (batch_idx, result)
                });

                handles.push(handle);
            }

            // Wait for all batches to complete
            for handle in handles {
                match handle.await {
                    Ok((batch_idx, Ok(result))) => {
                        if result.status != TaskExecutionStatus::Success {
                            all_succeeded = false;
                            if let Some(error) = &result.error {
                                errors.push(json!({
                                    "batch": batch_idx,
                                    "error": error.message
                                }));
                            }
                        }
                        all_results.push(json!({
                            "batch": batch_idx,
                            "status": format!("{:?}", result.status),
                            "output": result.output
                        }));
                    }
                    Ok((batch_idx, Err(e))) => {
                        all_succeeded = false;
                        errors.push(json!({
                            "batch": batch_idx,
                            "error": e.to_string()
                        }));
                    }
                    Err(e) => {
                        all_succeeded = false;
                        errors.push(json!({
                            "error": format!("Task panicked: {}", e)
                        }));
                    }
                }
            }
        } else {
            // Individual mode: process each item separately
            debug!(
                "Processing {} items individually (no batch_size specified)",
                items.len()
            );

            // Execute items with concurrency limit
            let mut handles = Vec::new();
            let semaphore = std::sync::Arc::new(tokio::sync::Semaphore::new(concurrency));

            for (item_idx, item) in items.into_iter().enumerate() {
                let permit = semaphore.clone().acquire_owned().await.unwrap();

                let executor = TaskExecutor::new(self.db_pool.clone(), self.mq.clone());
                let task = task.clone();
                let mut item_context = context.clone();

                // Set current_item to the individual item
                item_context.set_current_item(item, item_idx);

                let handle = tokio::spawn(async move {
                    let result = executor
                        .execute_single_task(
                            &task,
                            &item_context,
                            workflow_execution_id,
                            parent_execution_id,
                            0,
                        )
                        .await;
                    drop(permit);
                    (item_idx, result)
                });

                handles.push(handle);
            }

            // Wait for all items to complete
            for handle in handles {
                match handle.await {
                    Ok((idx, Ok(result))) => {
                        if result.status != TaskExecutionStatus::Success {
                            all_succeeded = false;
                            if let Some(error) = &result.error {
                                errors.push(json!({
                                    "index": idx,
                                    "error": error.message
                                }));
                            }
                        }
                        all_results.push(json!({
                            "index": idx,
                            "status": format!("{:?}", result.status),
                            "output": result.output
                        }));
                    }
                    Ok((idx, Err(e))) => {
                        all_succeeded = false;
                        errors.push(json!({
                            "index": idx,
                            "error": e.to_string()
                        }));
                    }
                    Err(e) => {
                        all_succeeded = false;
                        errors.push(json!({
                            "error": format!("Task panicked: {}", e)
                        }));
                    }
                }
            }
        }

        context.clear_current_item();

        let status = if all_succeeded {
            TaskExecutionStatus::Success
        } else {
            TaskExecutionStatus::Failed
        };

        Ok(TaskExecutionResult {
            status,
            output: Some(json!({
                "results": all_results,
                "total": items_len
            })),
            error: if errors.is_empty() {
                None
            } else {
                Some(TaskExecutionError {
                    message: format!("{} items failed", errors.len()),
                    error_type: "with_items_error".to_string(),
                    details: Some(json!({"errors": errors})),
                })
            },
            duration_ms: 0,
            should_retry: false,
            next_retry_at: None,
            retry_count: 0,
        })
    }

    /// Apply timeout to task execution
    async fn apply_timeout(
        &self,
        result_future: Result<TaskExecutionResult>,
        timeout_secs: u32,
    ) -> Result<TaskExecutionResult> {
        match timeout(Duration::from_secs(timeout_secs as u64), async {
            result_future
        })
        .await
        {
            Ok(result) => result,
            Err(_) => {
                warn!("Task execution timed out after {} seconds", timeout_secs);
                Ok(TaskExecutionResult {
                    status: TaskExecutionStatus::Timeout,
                    output: None,
                    error: Some(TaskExecutionError {
                        message: format!("Task timed out after {} seconds", timeout_secs),
                        error_type: "timeout".to_string(),
                        details: None,
                    }),
                    duration_ms: (timeout_secs * 1000) as i64,
                    should_retry: false,
                    next_retry_at: None,
                    retry_count: 0,
                })
            }
        }
    }
}

/// Calculate next retry time based on retry configuration
fn calculate_retry_time(config: &RetryConfig, retry_count: i32) -> DateTime<Utc> {
    let delay_secs = match config.backoff {
        BackoffStrategy::Constant => config.delay,
        BackoffStrategy::Linear => config.delay * (retry_count as u32 + 1),
        BackoffStrategy::Exponential => {
            let exp_delay = config.delay * 2_u32.pow(retry_count as u32);
            if let Some(max_delay) = config.max_delay {
                exp_delay.min(max_delay)
            } else {
                exp_delay
            }
        }
    };

    Utc::now() + chrono::Duration::seconds(delay_secs as i64)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_calculate_retry_time_constant() {
        let config = RetryConfig {
            count: 3,
            delay: 10,
            backoff: BackoffStrategy::Constant,
            max_delay: None,
            on_error: None,
        };

        let now = Utc::now();
        let retry_time = calculate_retry_time(&config, 0);
        let diff = (retry_time - now).num_seconds();

        assert!(diff >= 9 && diff <= 11); // Allow 1 second tolerance
    }

    #[test]
    fn test_calculate_retry_time_exponential() {
        let config = RetryConfig {
            count: 3,
            delay: 10,
            backoff: BackoffStrategy::Exponential,
            max_delay: Some(100),
            on_error: None,
        };

        let now = Utc::now();

        // First retry: 10 * 2^0 = 10
        let retry1 = calculate_retry_time(&config, 0);
        assert!((retry1 - now).num_seconds() >= 9 && (retry1 - now).num_seconds() <= 11);

        // Second retry: 10 * 2^1 = 20
        let retry2 = calculate_retry_time(&config, 1);
        assert!((retry2 - now).num_seconds() >= 19 && (retry2 - now).num_seconds() <= 21);

        // Third retry: 10 * 2^2 = 40
        let retry3 = calculate_retry_time(&config, 2);
        assert!((retry3 - now).num_seconds() >= 39 && (retry3 - now).num_seconds() <= 41);
    }

    #[test]
    fn test_calculate_retry_time_exponential_with_max() {
        let config = RetryConfig {
            count: 10,
            delay: 10,
            backoff: BackoffStrategy::Exponential,
            max_delay: Some(100),
            on_error: None,
        };

        let now = Utc::now();

        // Retry with high count should be capped at max_delay
        let retry = calculate_retry_time(&config, 10);
        assert!((retry - now).num_seconds() >= 99 && (retry - now).num_seconds() <= 101);
    }

    #[test]
    fn test_with_items_batch_creation() {
        use serde_json::json;

        // Test batch_size=3 with 7 items
        let items = vec![
            json!({"id": 1}),
            json!({"id": 2}),
            json!({"id": 3}),
            json!({"id": 4}),
            json!({"id": 5}),
            json!({"id": 6}),
            json!({"id": 7}),
        ];

        let batch_size = 3;
        let batches: Vec<Vec<JsonValue>> = items
            .chunks(batch_size)
            .map(|chunk| chunk.to_vec())
            .collect();

        // Should create 3 batches: [1,2,3], [4,5,6], [7]
        assert_eq!(batches.len(), 3);
        assert_eq!(batches[0].len(), 3);
        assert_eq!(batches[1].len(), 3);
        assert_eq!(batches[2].len(), 1); // Last batch can be smaller

        // Verify content - batches are arrays
        assert_eq!(batches[0][0], json!({"id": 1}));
        assert_eq!(batches[2][0], json!({"id": 7}));
    }

    #[test]
    fn test_with_items_no_batch_size_individual_processing() {
        use serde_json::json;

        // Without batch_size, items are processed individually
        let items = vec![json!({"id": 1}), json!({"id": 2}), json!({"id": 3})];

        // Each item should be processed separately (not as batches)
        assert_eq!(items.len(), 3);

        // Verify individual items
        assert_eq!(items[0], json!({"id": 1}));
        assert_eq!(items[1], json!({"id": 2}));
        assert_eq!(items[2], json!({"id": 3}));
    }

    #[test]
    fn test_with_items_batch_vs_individual() {
        use serde_json::json;

        let items = vec![json!({"id": 1}), json!({"id": 2}), json!({"id": 3})];

        // With batch_size: items are grouped into batches (arrays)
        let batch_size = Some(2);
        if let Some(bs) = batch_size {
            let batches: Vec<Vec<JsonValue>> = items
                .clone()
                .chunks(bs)
                .map(|chunk| chunk.to_vec())
                .collect();

            // 2 batches: [1,2], [3]
            assert_eq!(batches.len(), 2);
            assert_eq!(batches[0], vec![json!({"id": 1}), json!({"id": 2})]);
            assert_eq!(batches[1], vec![json!({"id": 3})]);
        }

        // Without batch_size: items processed individually
        let batch_size: Option<usize> = None;
        if batch_size.is_none() {
            // Each item is a single value, not wrapped in array
            for (idx, item) in items.iter().enumerate() {
                assert_eq!(item["id"], idx + 1);
            }
        }
    }
}
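The backoff arithmetic in the deleted `calculate_retry_time` above can be exercised on its own. The following is a minimal, editor-added sketch, not part of the diff: a hypothetical `retry_delay_secs` helper that isolates the delay computation (dropping the chrono timestamp), with `RetryConfig`/`BackoffStrategy` trimmed to the fields the deleted code actually uses.

```rust
// Sketch of the retry-delay logic only (assumed shapes, not the crate's types).
#[derive(Clone, Copy)]
enum BackoffStrategy {
    Constant,
    Linear,
    Exponential,
}

struct RetryConfig {
    delay: u32,               // base delay in seconds
    backoff: BackoffStrategy, // how the delay grows per attempt
    max_delay: Option<u32>,   // cap applied to exponential backoff
}

fn retry_delay_secs(config: &RetryConfig, retry_count: u32) -> u32 {
    match config.backoff {
        BackoffStrategy::Constant => config.delay,
        // Linear: delay * (n + 1), matching the deleted code
        BackoffStrategy::Linear => config.delay * (retry_count + 1),
        // Exponential: delay * 2^n, capped at max_delay when present
        BackoffStrategy::Exponential => {
            let exp = config.delay.saturating_mul(2u32.saturating_pow(retry_count));
            match config.max_delay {
                Some(max) => exp.min(max),
                None => exp,
            }
        }
    }
}

fn main() {
    let exp = RetryConfig {
        delay: 10,
        backoff: BackoffStrategy::Exponential,
        max_delay: Some(100),
    };
    assert_eq!(retry_delay_secs(&exp, 0), 10); // 10 * 2^0
    assert_eq!(retry_delay_secs(&exp, 1), 20); // 10 * 2^1
    assert_eq!(retry_delay_secs(&exp, 10), 100); // 10 * 2^10, capped at 100
    println!("ok");
}
```

This mirrors the unit tests in the deleted file, which check the same 10/20/40 progression and the `max_delay` cap.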
@@ -1,360 +0,0 @@
//! Template engine for workflow variable interpolation
//!
//! This module provides template rendering using Tera (Jinja2-like syntax)
//! with support for multi-scope variable contexts.

use serde_json::Value as JsonValue;
use std::collections::HashMap;
use tera::{Context, Tera};

/// Result type for template operations
pub type TemplateResult<T> = Result<T, TemplateError>;

/// Errors that can occur during template rendering
#[derive(Debug, thiserror::Error)]
pub enum TemplateError {
    #[error("Template rendering error: {0}")]
    RenderError(#[from] tera::Error),

    #[error("Invalid template syntax: {0}")]
    SyntaxError(String),

    #[error("Variable not found: {0}")]
    VariableNotFound(String),

    #[error("JSON serialization error: {0}")]
    JsonError(#[from] serde_json::Error),

    #[error("Invalid scope: {0}")]
    InvalidScope(String),
}

/// Variable scope priority (higher number = higher priority)
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum VariableScope {
    /// System-level variables (lowest priority)
    System = 1,
    /// Key-value store variables
    KeyValue = 2,
    /// Pack configuration
    PackConfig = 3,
    /// Workflow parameters (input)
    Parameters = 4,
    /// Workflow vars (defined in workflow)
    Vars = 5,
    /// Task-specific variables (highest priority)
    Task = 6,
}

/// Template engine with multi-scope variable context
pub struct TemplateEngine {
    // Note: We can't use custom filters with Tera::one_off, so we need to keep tera instance
    // But Tera doesn't expose a way to register templates without files in the new() constructor
    // So we'll just use one_off for now and skip custom filters in basic rendering
}

impl Default for TemplateEngine {
    fn default() -> Self {
        Self::new()
    }
}

impl TemplateEngine {
    /// Create a new template engine
    pub fn new() -> Self {
        Self {}
    }

    /// Render a template string with the given context
    pub fn render(&self, template: &str, context: &VariableContext) -> TemplateResult<String> {
        let tera_context = context.to_tera_context()?;

        // Use one-off template rendering
        // Note: Custom filters are not supported with one_off rendering
        Tera::one_off(template, &tera_context, true).map_err(TemplateError::from)
    }

    /// Render a template and parse result as JSON
    pub fn render_json(
        &self,
        template: &str,
        context: &VariableContext,
    ) -> TemplateResult<JsonValue> {
        let rendered = self.render(template, context)?;
        serde_json::from_str(&rendered).map_err(TemplateError::from)
    }

    /// Check if a template string contains valid syntax
    pub fn validate_template(&self, template: &str) -> TemplateResult<()> {
        Tera::one_off(template, &Context::new(), true)
            .map(|_| ())
            .map_err(TemplateError::from)
    }
}

/// Multi-scope variable context for template rendering
#[derive(Debug, Clone)]
pub struct VariableContext {
    /// System-level variables
    system: HashMap<String, JsonValue>,
    /// Key-value store variables
    kv: HashMap<String, JsonValue>,
    /// Pack configuration
    pack_config: HashMap<String, JsonValue>,
    /// Workflow parameters (input)
    parameters: HashMap<String, JsonValue>,
    /// Workflow vars
    vars: HashMap<String, JsonValue>,
    /// Task results and metadata
    task: HashMap<String, JsonValue>,
}

impl Default for VariableContext {
    fn default() -> Self {
        Self::new()
    }
}

impl VariableContext {
    /// Create a new empty variable context
    pub fn new() -> Self {
        Self {
            system: HashMap::new(),
            kv: HashMap::new(),
            pack_config: HashMap::new(),
            parameters: HashMap::new(),
            vars: HashMap::new(),
            task: HashMap::new(),
        }
    }

    /// Set system variables
    pub fn with_system(mut self, vars: HashMap<String, JsonValue>) -> Self {
        self.system = vars;
        self
    }

    /// Set key-value store variables
    pub fn with_kv(mut self, vars: HashMap<String, JsonValue>) -> Self {
        self.kv = vars;
        self
    }

    /// Set pack configuration
    pub fn with_pack_config(mut self, config: HashMap<String, JsonValue>) -> Self {
        self.pack_config = config;
        self
    }

    /// Set workflow parameters
    pub fn with_parameters(mut self, params: HashMap<String, JsonValue>) -> Self {
        self.parameters = params;
        self
    }

    /// Set workflow vars
    pub fn with_vars(mut self, vars: HashMap<String, JsonValue>) -> Self {
        self.vars = vars;
        self
    }

    /// Set task variables
    pub fn with_task(mut self, task_vars: HashMap<String, JsonValue>) -> Self {
        self.task = task_vars;
        self
    }

    /// Add a single variable to a scope
    pub fn set(&mut self, scope: VariableScope, key: String, value: JsonValue) {
        match scope {
            VariableScope::System => self.system.insert(key, value),
            VariableScope::KeyValue => self.kv.insert(key, value),
            VariableScope::PackConfig => self.pack_config.insert(key, value),
            VariableScope::Parameters => self.parameters.insert(key, value),
            VariableScope::Vars => self.vars.insert(key, value),
            VariableScope::Task => self.task.insert(key, value),
        };
    }

    /// Get a variable from any scope (respects priority)
    pub fn get(&self, key: &str) -> Option<&JsonValue> {
        // Check scopes in priority order (highest to lowest)
        self.task
            .get(key)
            .or_else(|| self.vars.get(key))
            .or_else(|| self.parameters.get(key))
            .or_else(|| self.pack_config.get(key))
            .or_else(|| self.kv.get(key))
            .or_else(|| self.system.get(key))
    }

    /// Convert to Tera context for rendering
    pub fn to_tera_context(&self) -> TemplateResult<Context> {
        let mut context = Context::new();

        // Insert scopes as nested objects
        context.insert("system", &self.system);
        context.insert("kv", &self.kv);
        context.insert("pack", &serde_json::json!({ "config": self.pack_config }));
        context.insert("parameters", &self.parameters);
        context.insert("vars", &self.vars);
        context.insert("task", &self.task);

        Ok(context)
    }

    /// Merge another context into this one (preserves priority)
    pub fn merge(&mut self, other: &VariableContext) {
        self.system.extend(other.system.clone());
        self.kv.extend(other.kv.clone());
        self.pack_config.extend(other.pack_config.clone());
        self.parameters.extend(other.parameters.clone());
        self.vars.extend(other.vars.clone());
        self.task.extend(other.task.clone());
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn test_basic_template_rendering() {
        let engine = TemplateEngine::new();
        let mut context = VariableContext::new();
        context.set(
            VariableScope::Parameters,
            "name".to_string(),
            json!("World"),
        );

        let result = engine.render("Hello {{ parameters.name }}!", &context);
        assert!(result.is_ok());
        assert_eq!(result.unwrap(), "Hello World!");
    }

    #[test]
    fn test_scope_priority() {
        let engine = TemplateEngine::new();
        let mut context = VariableContext::new();

        // Set same variable in multiple scopes
        context.set(VariableScope::System, "value".to_string(), json!("system"));
        context.set(VariableScope::Vars, "value".to_string(), json!("vars"));
        context.set(VariableScope::Task, "value".to_string(), json!("task"));

        // Task scope should win (highest priority)
        let result = engine.render("{{ task.value }}", &context);
        assert_eq!(result.unwrap(), "task");
    }

    #[test]
    fn test_nested_variables() {
        let engine = TemplateEngine::new();
        let mut context = VariableContext::new();
        context.set(
            VariableScope::Parameters,
            "config".to_string(),
            json!({"database": {"host": "localhost", "port": 5432}}),
        );

        let result = engine.render(
            "postgres://{{ parameters.config.database.host }}:{{ parameters.config.database.port }}",
            &context,
        );
        assert_eq!(result.unwrap(), "postgres://localhost:5432");
    }

    // Note: Custom filter tests are disabled since we're using Tera::one_off
    // which doesn't support custom filters. In production, we would need to
    // use a pre-configured Tera instance with templates registered.

    #[test]
    fn test_json_operations() {
        let engine = TemplateEngine::new();
        let mut context = VariableContext::new();
        context.set(
            VariableScope::Parameters,
            "data".to_string(),
            json!({"key": "value"}),
        );

        // Test accessing JSON properties
        let result = engine.render("{{ parameters.data.key }}", &context);
        assert_eq!(result.unwrap(), "value");
    }

    #[test]
    fn test_conditional_rendering() {
        let engine = TemplateEngine::new();
        let mut context = VariableContext::new();
        context.set(
            VariableScope::Parameters,
            "env".to_string(),
            json!("production"),
        );

        let result = engine.render(
            "{% if parameters.env == 'production' %}prod{% else %}dev{% endif %}",
            &context,
        );
        assert_eq!(result.unwrap(), "prod");
    }

    #[test]
    fn test_loop_rendering() {
        let engine = TemplateEngine::new();
        let mut context = VariableContext::new();
        context.set(
            VariableScope::Parameters,
            "items".to_string(),
            json!(["a", "b", "c"]),
        );

        let result = engine.render(
            "{% for item in parameters.items %}{{ item }}{% endfor %}",
            &context,
        );
        assert_eq!(result.unwrap(), "abc");
    }

    #[test]
    fn test_context_merge() {
        let mut ctx1 = VariableContext::new();
        ctx1.set(VariableScope::Vars, "a".to_string(), json!(1));
        ctx1.set(VariableScope::Vars, "b".to_string(), json!(2));

        let mut ctx2 = VariableContext::new();
        ctx2.set(VariableScope::Vars, "b".to_string(), json!(3));
        ctx2.set(VariableScope::Vars, "c".to_string(), json!(4));

        ctx1.merge(&ctx2);

        assert_eq!(ctx1.get("a"), Some(&json!(1)));
        assert_eq!(ctx1.get("b"), Some(&json!(3))); // ctx2 overwrites
        assert_eq!(ctx1.get("c"), Some(&json!(4)));
    }

    #[test]
    fn test_all_scopes() {
        let engine = TemplateEngine::new();
        let context = VariableContext::new()
            .with_system(HashMap::from([("sys_var".to_string(), json!("system"))]))
            .with_kv(HashMap::from([("kv_var".to_string(), json!("keyvalue"))]))
            .with_pack_config(HashMap::from([("setting".to_string(), json!("config"))]))
            .with_parameters(HashMap::from([("param".to_string(), json!("parameter"))]))
            .with_vars(HashMap::from([("var".to_string(), json!("variable"))]))
            .with_task(HashMap::from([(
                "result".to_string(),
                json!("task_result"),
            )]));

        let template = "{{ system.sys_var }}-{{ kv.kv_var }}-{{ pack.config.setting }}-{{ parameters.param }}-{{ vars.var }}-{{ task.result }}";
        let result = engine.render(template, &context);
        assert_eq!(
            result.unwrap(),
            "system-keyvalue-config-parameter-variable-task_result"
        );
    }
}
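The priority chain in `VariableContext::get` above (task, then vars, parameters, pack_config, kv, system) can be illustrated without Tera or serde_json (a std-only sketch; `lookup` and the plain string maps are illustrative stand-ins, not the module's API):

```rust
use std::collections::HashMap;

// Scopes ordered highest -> lowest priority; the first scope containing
// the key wins, mirroring the or_else chain in VariableContext::get.
fn lookup<'a>(scopes: &'a [HashMap<String, String>], key: &str) -> Option<&'a str> {
    scopes
        .iter()
        .find_map(|scope| scope.get(key).map(String::as_str))
}

fn main() {
    let task = HashMap::from([("value".to_string(), "task".to_string())]);
    let vars = HashMap::from([
        ("value".to_string(), "vars".to_string()),
        ("region".to_string(), "eu-west-1".to_string()),
    ]);
    let system = HashMap::from([("value".to_string(), "system".to_string())]);

    let scopes = [task, vars, system];
    assert_eq!(lookup(&scopes, "value"), Some("task")); // highest scope wins
    assert_eq!(lookup(&scopes, "region"), Some("eu-west-1")); // falls through
    assert_eq!(lookup(&scopes, "missing"), None);
    println!("ok");
}
```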
@@ -2,8 +2,9 @@
 use anyhow::{Context, Result};
 use sqlx::postgres::PgListener;
+use std::time::Duration;
 use tokio::sync::broadcast;
-use tracing::{debug, error, info, warn};
+use tracing::{debug, error, info, trace, warn};

 use crate::service::Notification;

@@ -18,6 +19,8 @@ const NOTIFICATION_CHANNELS: &[&str] = &[
     "enforcement_status_changed",
     "event_created",
     "workflow_execution_status_changed",
+    "artifact_created",
+    "artifact_updated",
 ];

 /// PostgreSQL listener that receives NOTIFY events and broadcasts them

@@ -46,70 +49,111 @@ impl PostgresListener {
         );

         // Create a dedicated listener connection
+        let mut listener = self.create_listener().await?;
+
+        info!("PostgreSQL listener ready — entering recv loop");
+
+        // Periodic heartbeat so we can confirm the task is alive even when idle.
+        let heartbeat_interval = Duration::from_secs(60);
+        let mut next_heartbeat = tokio::time::Instant::now() + heartbeat_interval;
+
+        // Process notifications in a loop
+        loop {
+            // Log a heartbeat if no notification has arrived for a while.
+            let now = tokio::time::Instant::now();
+            if now >= next_heartbeat {
+                info!("PostgreSQL listener heartbeat — still waiting for notifications");
+                next_heartbeat = now + heartbeat_interval;
+            }
+
+            trace!("Calling listener.recv() — waiting for next notification");
+
+            // Use a timeout so the heartbeat fires even during long idle periods.
+            match tokio::time::timeout(heartbeat_interval, listener.recv()).await {
+                // Timed out waiting — loop back and log the heartbeat above.
+                Err(_timeout) => {
+                    trace!("listener.recv() timed out — re-entering loop");
+                    continue;
+                }
+                Ok(recv_result) => match recv_result {
+                    Ok(pg_notification) => {
+                        let channel = pg_notification.channel();
+                        let payload = pg_notification.payload();
+                        debug!(
+                            "Received PostgreSQL notification: channel={}, payload_len={}",
+                            channel,
+                            payload.len()
+                        );
+                        debug!("Notification payload: {}", payload);
+
+                        // Parse and broadcast notification
+                        if let Err(e) = self.process_notification(channel, payload) {
+                            error!(
+                                "Failed to process notification from channel '{}': {}",
+                                channel, e
+                            );
+                        }
+                    }
+                    Err(e) => {
+                        error!("Error receiving PostgreSQL notification: {}", e);
+
+                        // Sleep briefly before retrying to avoid tight loop on persistent errors
+                        tokio::time::sleep(Duration::from_secs(1)).await;
+
+                        // Try to reconnect
+                        warn!("Attempting to reconnect PostgreSQL listener...");
+                        match self.create_listener().await {
+                            Ok(new_listener) => {
+                                listener = new_listener;
+                                next_heartbeat = tokio::time::Instant::now() + heartbeat_interval;
+                                info!("PostgreSQL listener reconnected successfully");
+                            }
+                            Err(e) => {
+                                error!("Failed to reconnect PostgreSQL listener: {}", e);
+                                tokio::time::sleep(Duration::from_secs(5)).await;
+                            }
+                        }
+                    }
+                }, // end Ok(recv_result)
+            } // end timeout match
+        }
+    }
+
+    /// Create a fresh [`PgListener`] subscribed to all notification channels.
+    async fn create_listener(&self) -> Result<PgListener> {
+        info!("Connecting PostgreSQL LISTEN connection to {}", {
+            // Mask the password for logging
+            let url = &self.database_url;
+            if let Some(at) = url.rfind('@') {
+                if let Some(colon) = url[..at].rfind(':') {
+                    format!("{}:****{}", &url[..colon], &url[at..])
+                } else {
+                    url.clone()
+                }
+            } else {
+                url.clone()
+            }
+        });
+
         let mut listener = PgListener::connect(&self.database_url)
             .await
             .context("Failed to connect PostgreSQL listener")?;

-        // Listen on all notification channels
-        for channel in NOTIFICATION_CHANNELS {
-            listener
-                .listen(channel)
-                .await
-                .context(format!("Failed to LISTEN on channel '{}'", channel))?;
-            info!("Listening on PostgreSQL channel: {}", channel);
-        }
+        info!("PostgreSQL LISTEN connection established — subscribing to channels");

-        // Process notifications in a loop
-        loop {
-            match listener.recv().await {
-                Ok(pg_notification) => {
-                    debug!(
-                        "Received PostgreSQL notification: channel={}, payload={}",
-                        pg_notification.channel(),
-                        pg_notification.payload()
-                    );
-
-                    // Parse and broadcast notification
-                    if let Err(e) = self
-                        .process_notification(pg_notification.channel(), pg_notification.payload())
-                    {
-                        error!(
-                            "Failed to process notification from channel '{}': {}",
-                            pg_notification.channel(),
-                            e
-                        );
-                    }
-                }
-                Err(e) => {
-                    error!("Error receiving PostgreSQL notification: {}", e);
-
-                    // Sleep briefly before retrying to avoid tight loop on persistent errors
-                    tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
-
-                    // Try to reconnect
-                    warn!("Attempting to reconnect PostgreSQL listener...");
-                    match PgListener::connect(&self.database_url).await {
-                        Ok(new_listener) => {
-                            listener = new_listener;
-                            // Re-subscribe to all channels
-                            for channel in NOTIFICATION_CHANNELS {
-                                if let Err(e) = listener.listen(channel).await {
-                                    error!(
-                                        "Failed to re-subscribe to channel '{}': {}",
-                                        channel, e
-                                    );
-                                }
-                            }
-                            info!("PostgreSQL listener reconnected successfully");
-                        }
-                        Err(e) => {
-                            error!("Failed to reconnect PostgreSQL listener: {}", e);
-                            tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
-                        }
-                    }
-                }
-            }
-        }
-    }
+        // Use listen_all for a single round-trip instead of N separate commands
+        listener
+            .listen_all(NOTIFICATION_CHANNELS.iter().copied())
+            .await
+            .context("Failed to LISTEN on notification channels")?;
+
+        info!(
+            "Subscribed to {} PostgreSQL channels: {:?}",
+            NOTIFICATION_CHANNELS.len(),
+            NOTIFICATION_CHANNELS
+        );
+
+        Ok(listener)
+    }

 /// Process a PostgreSQL notification and broadcast it to WebSocket clients

@@ -171,6 +215,8 @@ mod tests {
     assert!(NOTIFICATION_CHANNELS.contains(&"enforcement_created"));
     assert!(NOTIFICATION_CHANNELS.contains(&"enforcement_status_changed"));
     assert!(NOTIFICATION_CHANNELS.contains(&"inquiry_created"));
+    assert!(NOTIFICATION_CHANNELS.contains(&"artifact_created"));
+    assert!(NOTIFICATION_CHANNELS.contains(&"artifact_updated"));
 }

 #[test]
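The URL-masking expression inlined in `create_listener`'s `info!` call above can be factored out and checked on its own (a std-only sketch; `mask_db_url` is an illustrative name, the real code keeps this logic inline):

```rust
// Stand-alone version of the password-masking logic used when logging the
// database URL: everything between the last ':' before the final '@' and
// the '@' itself is replaced with "****".
fn mask_db_url(url: &str) -> String {
    if let Some(at) = url.rfind('@') {
        if let Some(colon) = url[..at].rfind(':') {
            return format!("{}:****{}", &url[..colon], &url[at..]);
        }
    }
    url.to_string()
}

fn main() {
    assert_eq!(
        mask_db_url("postgres://user:secret@db.example.com:5432/attune"),
        "postgres://user:****@db.example.com:5432/attune"
    );
    // No credentials in the URL: returned unchanged.
    assert_eq!(mask_db_url("postgres://localhost/attune"), "postgres://localhost/attune");
    println!("ok");
}
```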
@@ -3,7 +3,7 @@
 use anyhow::Result;
 use std::sync::Arc;
 use tokio::sync::broadcast;
-use tracing::{error, info};
+use tracing::{debug, error, info};

 use attune_common::config::Config;

@@ -108,8 +108,25 @@ impl NotifierService {
         tokio::spawn(async move {
             loop {
                 tokio::select! {
-                    Ok(notification) = notification_rx.recv() => {
-                        subscriber_manager.broadcast(notification);
+                    recv_result = notification_rx.recv() => {
+                        match recv_result {
+                            Ok(notification) => {
+                                debug!(
+                                    "Broadcasting notification: type={}, entity_type={}, entity_id={}",
+                                    notification.notification_type,
+                                    notification.entity_type,
+                                    notification.entity_id,
+                                );
+                                subscriber_manager.broadcast(notification);
+                            }
+                            Err(tokio::sync::broadcast::error::RecvError::Lagged(n)) => {
+                                error!("Notification broadcaster lagged — dropped {} messages", n);
+                            }
+                            Err(tokio::sync::broadcast::error::RecvError::Closed) => {
+                                error!("Notification broadcast channel closed — broadcaster exiting");
+                                break;
+                            }
+                        }
                     }
                     _ = shutdown_rx.recv() => {
                         info!("Notification broadcaster shutting down");
@@ -180,6 +180,7 @@ impl SubscriberManager {
                     // Channel closed, client disconnected
                     failed_count += 1;
                     to_remove.push(client_id.clone());
+                    debug!("Client {} disconnected — removing", client_id);
                 }
             }
         }
@@ -191,8 +192,12 @@ impl SubscriberManager {
         if sent_count > 0 {
             debug!(
-                "Broadcast notification: sent={}, failed={}, type={}",
-                sent_count, failed_count, notification.notification_type
+                "Broadcast notification: sent={}, failed={}, type={}, entity_type={}, entity_id={}",
+                sent_count,
+                failed_count,
+                notification.notification_type,
+                notification.entity_type,
+                notification.entity_id,
             );
         }
     }
@@ -157,8 +157,10 @@ async fn handle_websocket(socket: WebSocket, state: Arc<AppState>) {
     let subscriber_manager_clone = state.subscriber_manager.clone();
     let outgoing_task = tokio::spawn(async move {
         while let Some(notification) = rx.recv().await {
-            // Serialize notification to JSON
-            match serde_json::to_string(&notification) {
+            // Wrap in the tagged ClientMessage envelope so the client sees
+            // {"type":"notification", "notification_type":..., "entity_type":..., ...}
+            let envelope = ClientMessage::Notification(notification);
+            match serde_json::to_string(&envelope) {
                 Ok(json) => {
                     if let Err(e) = ws_sender.send(Message::Text(json.into())).await {
                         error!("Failed to send notification to {}: {}", client_id_clone, e);
@@ -33,5 +33,6 @@ aes-gcm = { workspace = true }
 sha2 = { workspace = true }
 base64 = { workspace = true }
 tempfile = { workspace = true }
+jsonwebtoken = { workspace = true }

 [dev-dependencies]
@@ -13,9 +13,11 @@
 //! so the `ProcessRuntime` uses version-specific interpreter binaries,
 //! environment commands, etc.

+use attune_common::auth::jwt::{generate_execution_token, JwtConfig};
 use attune_common::error::{Error, Result};
 use attune_common::models::runtime::RuntimeExecutionConfig;
 use attune_common::models::{runtime::Runtime as RuntimeModel, Action, Execution, ExecutionStatus};
+use attune_common::repositories::artifact::{ArtifactRepository, ArtifactVersionRepository};
 use attune_common::repositories::execution::{ExecutionRepository, UpdateExecutionInput};
 use attune_common::repositories::runtime_version::RuntimeVersionRepository;
 use attune_common::repositories::{FindById, Update};
@@ -41,7 +43,20 @@ pub struct ActionExecutor {
     max_stdout_bytes: usize,
     max_stderr_bytes: usize,
     packs_base_dir: PathBuf,
+    artifacts_dir: PathBuf,
     api_url: String,
+    jwt_config: JwtConfig,
+}
+
+/// Normalize a server bind address into a connectable URL.
+///
+/// When the server binds to `0.0.0.0` (all interfaces) or `::` (IPv6 any),
+/// we substitute `127.0.0.1` so that actions running on the same host can
+/// reach the API.
+fn normalize_api_url(raw_url: &str) -> String {
+    raw_url
+        .replace("://0.0.0.0", "://127.0.0.1")
+        .replace("://[::]", "://127.0.0.1")
 }

 impl ActionExecutor {
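Since `normalize_api_url` is pure string rewriting, its behaviour is easy to pin down in isolation (the function body below is copied from the hunk above; the `main` checks are illustrative):

```rust
/// Normalize a server bind address into a connectable URL: `0.0.0.0` and
/// `[::]` are bind-to-all addresses, not destinations, so swap in loopback.
fn normalize_api_url(raw_url: &str) -> String {
    raw_url
        .replace("://0.0.0.0", "://127.0.0.1")
        .replace("://[::]", "://127.0.0.1")
}

fn main() {
    assert_eq!(normalize_api_url("http://0.0.0.0:8080"), "http://127.0.0.1:8080");
    assert_eq!(normalize_api_url("http://[::]:8080"), "http://127.0.0.1:8080");
    // Concrete hosts pass through untouched.
    assert_eq!(normalize_api_url("http://10.0.0.5:8080"), "http://10.0.0.5:8080");
    println!("ok");
}
```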
@@ -54,8 +69,11 @@ impl ActionExecutor {
         max_stdout_bytes: usize,
         max_stderr_bytes: usize,
         packs_base_dir: PathBuf,
+        artifacts_dir: PathBuf,
         api_url: String,
+        jwt_config: JwtConfig,
     ) -> Self {
+        let api_url = normalize_api_url(&api_url);
         Self {
             pool,
             runtime_registry,

@@ -64,7 +82,9 @@ impl ActionExecutor {
             max_stdout_bytes,
             max_stderr_bytes,
             packs_base_dir,
+            artifacts_dir,
             api_url,
+            jwt_config,
         }
     }
@@ -126,6 +146,15 @@ impl ActionExecutor {
             // Don't fail the execution just because artifact storage failed
         }

+        // Finalize file-backed artifacts (stat files on disk and update size_bytes)
+        if let Err(e) = self.finalize_file_artifacts(execution_id).await {
+            warn!(
+                "Failed to finalize file-backed artifacts for execution {}: {}",
+                execution_id, e
+            );
+            // Don't fail the execution just because artifact finalization failed
+        }
+
         // Update execution with result
         let is_success = result.is_success();
         debug!(
@@ -275,10 +304,39 @@ impl ActionExecutor {
         env.insert("ATTUNE_EXEC_ID".to_string(), execution.id.to_string());
         env.insert("ATTUNE_ACTION".to_string(), execution.action_ref.clone());
         env.insert("ATTUNE_API_URL".to_string(), self.api_url.clone());
+        env.insert(
+            "ATTUNE_ARTIFACTS_DIR".to_string(),
+            self.artifacts_dir.to_string_lossy().to_string(),
+        );

-        // TODO: Generate execution-scoped API token
-        // For now, set placeholder to maintain interface compatibility
-        env.insert("ATTUNE_API_TOKEN".to_string(), "".to_string());
+        // Generate execution-scoped API token.
+        // The identity that triggered the execution is derived from the `sub` claim
+        // of the original token; for rule-triggered executions we use identity 1
+        // (the system identity) as a reasonable default.
+        let identity_id: i64 = 1; // System identity fallback
+        // Default timeout is 300s; add 60s grace period for cleanup.
+        // The actual `timeout` variable is computed later in this function,
+        // but the token TTL just needs a reasonable upper bound.
+        let token_ttl = Some(360_i64);
+        match generate_execution_token(
+            identity_id,
+            execution.id,
+            &execution.action_ref,
+            &self.jwt_config,
+            token_ttl,
+        ) {
+            Ok(token) => {
+                env.insert("ATTUNE_API_TOKEN".to_string(), token);
+            }
+            Err(e) => {
+                warn!(
+                    "Failed to generate execution token for execution {}: {}. \
+                     Actions that call back to the API will not authenticate.",
+                    execution.id, e
+                );
+                env.insert("ATTUNE_API_TOKEN".to_string(), String::new());
+            }
+        }

         // Add rule and trigger context if execution was triggered by enforcement
         if let Some(enforcement_id) = execution.enforcement {
@@ -616,6 +674,95 @@ impl ActionExecutor {
    Ok(())
}

/// Finalize file-backed artifacts after execution completes.
///
/// Scans all artifact versions linked to this execution that have a `file_path`,
/// stats each file on disk, and updates `size_bytes` on both the version row
/// and the parent artifact row.
async fn finalize_file_artifacts(&self, execution_id: i64) -> Result<()> {
    let versions =
        ArtifactVersionRepository::find_file_versions_by_execution(&self.pool, execution_id)
            .await?;

    if versions.is_empty() {
        return Ok(());
    }

    info!(
        "Finalizing {} file-backed artifact version(s) for execution {}",
        versions.len(),
        execution_id,
    );

    // Track the latest version per artifact so we can update parent size_bytes
    let mut latest_size_per_artifact: HashMap<i64, (i32, i64)> = HashMap::new();

    for ver in &versions {
        let file_path = match &ver.file_path {
            Some(fp) => fp,
            None => continue,
        };

        let full_path = self.artifacts_dir.join(file_path);
        let size_bytes = match tokio::fs::metadata(&full_path).await {
            Ok(metadata) => metadata.len() as i64,
            Err(e) => {
                warn!(
                    "Could not stat artifact file '{}' for version {}: {}. Setting size_bytes=0.",
                    full_path.display(),
                    ver.id,
                    e,
                );
                0
            }
        };

        // Update the version row
        if let Err(e) =
            ArtifactVersionRepository::update_size_bytes(&self.pool, ver.id, size_bytes).await
        {
            warn!(
                "Failed to update size_bytes for artifact version {}: {}",
                ver.id, e,
            );
        }

        // Track the highest version number per artifact for parent update
        let entry = latest_size_per_artifact
            .entry(ver.artifact)
            .or_insert((ver.version, size_bytes));
        if ver.version > entry.0 {
            *entry = (ver.version, size_bytes);
        }

        debug!(
            "Finalized artifact version {} (artifact {}): file='{}', size={}",
            ver.id, ver.artifact, file_path, size_bytes,
        );
    }

    // Update parent artifact size_bytes to reflect the latest version's size
    for (artifact_id, (_version, size_bytes)) in &latest_size_per_artifact {
        if let Err(e) =
            ArtifactRepository::update_size_bytes(&self.pool, *artifact_id, *size_bytes).await
        {
            warn!(
                "Failed to update size_bytes for artifact {}: {}",
                artifact_id, e,
            );
        }
    }

    info!(
        "Finalized file-backed artifacts for execution {}: {} version(s), {} artifact(s)",
        execution_id,
        versions.len(),
        latest_size_per_artifact.len(),
    );

    Ok(())
}

/// Handle successful execution
async fn handle_execution_success(
    &self,
@@ -136,7 +136,7 @@ impl WorkerService {
// Initialize worker registration
let registration = Arc::new(RwLock::new(WorkerRegistration::new(pool.clone(), &config)));

// Initialize artifact manager (legacy, for stdout/stderr log storage)
let artifact_base_dir = std::path::PathBuf::from(
    config
        .worker
@@ -148,6 +148,22 @@ impl WorkerService {
let artifact_manager = ArtifactManager::new(artifact_base_dir);
artifact_manager.initialize().await?;

// Initialize artifacts directory for file-backed artifact storage (shared volume).
// Execution processes write artifact files here; the API serves them from the same path.
let artifacts_dir = std::path::PathBuf::from(&config.artifacts_dir);
if let Err(e) = tokio::fs::create_dir_all(&artifacts_dir).await {
    warn!(
        "Failed to create artifacts directory '{}': {}. File-backed artifacts may not work.",
        artifacts_dir.display(),
        e,
    );
} else {
    info!(
        "Artifacts directory initialized at: {}",
        artifacts_dir.display()
    );
}

let packs_base_dir = std::path::PathBuf::from(&config.packs_base_dir);
let runtime_envs_dir = std::path::PathBuf::from(&config.runtime_envs_dir);
@@ -285,6 +301,17 @@ impl WorkerService {
|
|||||||
let api_url = std::env::var("ATTUNE_API_URL")
|
let api_url = std::env::var("ATTUNE_API_URL")
|
||||||
.unwrap_or_else(|_| format!("http://{}:{}", config.server.host, config.server.port));
|
.unwrap_or_else(|_| format!("http://{}:{}", config.server.host, config.server.port));
|
||||||
|
|
||||||
|
// Build JWT config for generating execution-scoped tokens
|
||||||
|
let jwt_config = attune_common::auth::jwt::JwtConfig {
|
||||||
|
secret: config
|
||||||
|
.security
|
||||||
|
.jwt_secret
|
||||||
|
.clone()
|
||||||
|
.unwrap_or_else(|| "insecure_default_secret_change_in_production".to_string()),
|
||||||
|
access_token_expiration: config.security.jwt_access_expiration as i64,
|
||||||
|
refresh_token_expiration: config.security.jwt_refresh_expiration as i64,
|
||||||
|
};
|
||||||
|
|
||||||
let executor = Arc::new(ActionExecutor::new(
|
let executor = Arc::new(ActionExecutor::new(
|
||||||
pool.clone(),
|
pool.clone(),
|
||||||
runtime_registry,
|
runtime_registry,
|
||||||
@@ -293,7 +320,9 @@ impl WorkerService {
|
|||||||
max_stdout_bytes,
|
max_stdout_bytes,
|
||||||
max_stderr_bytes,
|
max_stderr_bytes,
|
||||||
packs_base_dir.clone(),
|
packs_base_dir.clone(),
|
||||||
|
artifacts_dir,
|
||||||
api_url,
|
api_url,
|
||||||
|
jwt_config,
|
||||||
));
|
));
|
||||||
|
|
||||||
// Initialize heartbeat manager
|
// Initialize heartbeat manager
|
||||||
|
|||||||
@@ -189,6 +189,7 @@ services:
      - packs_data:/opt/attune/packs:rw
      - ./packs.dev:/opt/attune/packs.dev:rw
      - runtime_envs:/opt/attune/runtime_envs
      - artifacts_data:/opt/attune/artifacts
      - api_logs:/opt/attune/logs
    depends_on:
      init-packs:

@@ -233,6 +234,7 @@ services:
    volumes:
      - packs_data:/opt/attune/packs:ro
      - ./packs.dev:/opt/attune/packs.dev:rw
      - artifacts_data:/opt/attune/artifacts:ro
      - executor_logs:/opt/attune/logs
    depends_on:
      init-packs:

@@ -279,10 +281,12 @@ services:
      ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
      ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
      ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
      ATTUNE_API_URL: http://attune-api:8080
    volumes:
      - packs_data:/opt/attune/packs:ro
      - ./packs.dev:/opt/attune/packs.dev:rw
      - runtime_envs:/opt/attune/runtime_envs
      - artifacts_data:/opt/attune/artifacts
      - worker_shell_logs:/opt/attune/logs
    depends_on:
      init-packs:

@@ -325,10 +329,12 @@ services:
      ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
      ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
      ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
      ATTUNE_API_URL: http://attune-api:8080
    volumes:
      - packs_data:/opt/attune/packs:ro
      - ./packs.dev:/opt/attune/packs.dev:rw
      - runtime_envs:/opt/attune/runtime_envs
      - artifacts_data:/opt/attune/artifacts
      - worker_python_logs:/opt/attune/logs
    depends_on:
      init-packs:

@@ -371,10 +377,12 @@ services:
      ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
      ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
      ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
      ATTUNE_API_URL: http://attune-api:8080
    volumes:
      - packs_data:/opt/attune/packs:ro
      - ./packs.dev:/opt/attune/packs.dev:rw
      - runtime_envs:/opt/attune/runtime_envs
      - artifacts_data:/opt/attune/artifacts
      - worker_node_logs:/opt/attune/logs
    depends_on:
      init-packs:

@@ -417,10 +425,12 @@ services:
      ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
      ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
      ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
      ATTUNE_API_URL: http://attune-api:8080
    volumes:
      - packs_data:/opt/attune/packs:ro
      - ./packs.dev:/opt/attune/packs.dev:rw
      - runtime_envs:/opt/attune/runtime_envs
      - artifacts_data:/opt/attune/artifacts
      - worker_full_logs:/opt/attune/logs
    depends_on:
      init-packs:

@@ -594,6 +604,8 @@ volumes:
    driver: local
  runtime_envs:
    driver: local
  artifacts_data:
    driver: local

# ============================================================================
# Networks
docs/plans/file-based-artifact-storage.md (new file, 330 lines)
@@ -0,0 +1,330 @@
# File-Based Artifact Storage Plan

## Overview

Replace PostgreSQL BYTEA storage for file-type artifacts with a shared filesystem volume. Execution processes write artifact files directly to disk via paths assigned by the API; the API serves those files from disk on download. The database stores only metadata (path, size, content type) — no binary content for file-based artifacts.

**Motivation:**

- Eliminates PostgreSQL bloat from large binary artifacts
- Enables executions to write files incrementally (streaming logs, large outputs) without buffering in memory for an API upload
- Artifacts can be retained independently of execution records (executions are hypertables with 90-day retention)
- Decouples artifact lifecycle from execution lifecycle — artifacts created by one execution can be accessed by others or by external systems

## Artifact Type Classification

| Type | Storage | Notes |
|------|---------|-------|
| `FileBinary` | **Disk** (shared volume) | Binary files produced by executions |
| `FileDatatable` | **Disk** (shared volume) | Tabular data files (CSV, etc.) |
| `FileText` | **Disk** (shared volume) | Text files, logs |
| `Log` | **Disk** (shared volume) | Execution stdout/stderr logs |
| `Progress` | **DB** (`artifact.data` JSONB) | Small structured progress entries — unchanged |
| `Url` | **DB** (`artifact.data` JSONB) | URL references — unchanged |

## Directory Structure

```
/opt/attune/artifacts/        # artifacts_dir (configurable)
└── {artifact_ref_slug}/      # derived from artifact ref (globally unique)
    ├── v1.txt                # version 1
    ├── v2.txt                # version 2
    └── v3.txt                # version 3
```

**Key decisions:**

- **No execution ID in the path.** Artifacts may outlive execution records (hypertable retention) and may be shared across executions or created externally.
- **Keyed by artifact ref.** The `ref` column has a unique index, making it a stable, globally unique identifier. Dots in refs become directory separators (e.g., `mypack.build_log` → `mypack/build_log/`).
- **Version files named `v{N}.{ext}`**, where `N` is the version number from `next_artifact_version()` and `ext` is derived from `content_type`.

## End-to-End Flow

### Happy Path

```
┌──────────┐     ┌──────────┐     ┌──────────┐     ┌────────────────┐
│  Worker  │────▶│Execution │────▶│   API    │────▶│ Shared Volume  │
│ Service  │     │ Process  │     │ Service  │     │ /opt/attune/   │
│          │     │(Py/Node/ │     │          │     │   artifacts/   │
│          │     │  Shell)  │     │          │     │                │
└──────────┘     └──────────┘     └──────────┘     └────────────────┘
      │                │                │                    │
      │ 1. Start exec  │                │                    │
      │    Set ATTUNE_ │                │                    │
      │    ARTIFACTS_DIR                │                    │
      │───────────────▶│                │                    │
      │                │                │                    │
      │                │ 2. POST /api/v1/artifacts           │
      │                │    {ref, type, execution}           │
      │                │───────────────▶│                    │
      │                │                │ 3. Create artifact │
      │                │                │    row in DB       │
      │                │                │                    │
      │                │◀───────────────│                    │
      │                │   {id, ref, ...}                    │
      │                │                │                    │
      │                │ 4. POST /api/v1/artifacts/{id}/versions
      │                │    {content_type}                   │
      │                │───────────────▶│                    │
      │                │                │ 5. Create version  │
      │                │                │    row (file_path, │
      │                │                │    no BYTEA content)
      │                │                │    + mkdir on disk │
      │                │◀───────────────│                    │
      │                │  {id, version, │                    │
      │                │   file_path}   │                    │
      │                │                │                    │
      │                │ 6. Write file to                    │
      │                │    $ATTUNE_ARTIFACTS_DIR/file_path  │
      │                │─────────────────────────────────────▶
      │                │                │                    │
      │ 7. Exec exits  │                │                    │
      │◀───────────────│                │                    │
      │                                 │                    │
      │ 8. Finalize: stat files,        │                    │
      │    update size_bytes in DB      │                    │
      │    (direct DB access)           │                    │
      │─────────────────────────────────┘                    │
      │                                                      │
      ▼                                                      │
┌──────────┐                                                 │
│  Client  │ 9. GET /api/v1/artifacts/{id}/download          │
│   (UI)   │──────────────────▶ API reads from disk ◀────────┘
└──────────┘
```

### Step-by-Step

1. **Worker receives execution from MQ**, prepares `ExecutionContext`, sets the `ATTUNE_ARTIFACTS_DIR` environment variable.
2. **Execution process** calls `POST /api/v1/artifacts` to create the artifact record (ref, type, execution ID, content_type).
3. **API** creates the `artifact` row in the DB and returns the artifact ID.
4. **Execution process** calls `POST /api/v1/artifacts/{id}/versions` to create a new version. For file-type artifacts, the request body contains content_type and optional metadata — **no file content**.
5. **API** creates the `artifact_version` row with a computed `file_path` (e.g., `mypack/build_log/v1.txt`), leaving the `content` BYTEA NULL. Creates the parent directory on disk. Returns the version ID and `file_path`.
6. **Execution process** writes file content to `$ATTUNE_ARTIFACTS_DIR/{file_path}`. It can write incrementally (append, stream, etc.).
7. **Execution process exits.**
8. **Worker finalizes**: scans artifact versions linked to this execution, `stat()`s each file on disk, and updates `artifact_version.size_bytes` and `artifact.size_bytes` in the DB via direct repository access.
9. **Client requests download**: the API reads from `{artifacts_dir}/{file_path}` on disk and streams the response.

## Implementation Phases

### Phase 1: Configuration & Volume Infrastructure

**`crates/common/src/config.rs`**
- Add `artifacts_dir: String` to the `Config` struct with default `/opt/attune/artifacts`
- Add a `default_artifacts_dir()` function

**`config.development.yaml`**
- Add `artifacts_dir: ./artifacts`

**`config.docker.yaml`**
- Add `artifacts_dir: /opt/attune/artifacts`

**`docker-compose.yaml`**
- Add an `artifacts_data` named volume
- Mount `artifacts_data:/opt/attune/artifacts` in: api (rw), all workers (rw), executor (ro)
- Add `ATTUNE__ARTIFACTS_DIR: /opt/attune/artifacts` to service environments where needed

### Phase 2: Database Schema Changes

**New migration: `migrations/20250101000011_artifact_file_storage.sql`**

```sql
-- Add file_path to artifact_version for disk-based storage
ALTER TABLE artifact_version ADD COLUMN IF NOT EXISTS file_path TEXT;

-- Index for finding versions by file_path (orphan cleanup)
CREATE INDEX IF NOT EXISTS idx_artifact_version_file_path
    ON artifact_version(file_path) WHERE file_path IS NOT NULL;

COMMENT ON COLUMN artifact_version.file_path IS
    'Relative path from artifacts_dir root for disk-stored content. '
    'When set, content BYTEA is NULL — file lives on shared volume.';
```

**`crates/common/src/models.rs`** — `artifact_version` module:
- Add `file_path: Option<String>` to the `ArtifactVersion` struct
- Update `SELECT_COLUMNS` and `SELECT_COLUMNS_WITH_CONTENT` to include `file_path`

**`crates/common/src/repositories/artifact.rs`** — `ArtifactVersionRepository`:
- Add `file_path: Option<String>` to `CreateArtifactVersionInput`
- Wire `file_path` through the `create` query
- Add an `update_size_bytes(executor, version_id, size_bytes)` method for worker finalization
- Add a `find_file_versions_by_execution(executor, execution_id)` method — joins `artifact_version` → `artifact` on `artifact.execution` to find all file-based versions for an execution

### Phase 3: API Changes

#### Create Version Endpoint (modified)

`POST /api/v1/artifacts/{id}/versions` — currently `create_version_json`

Add a new endpoint or modify existing behavior:

**`POST /api/v1/artifacts/{id}/versions/file`** (new endpoint)
- Request body: `CreateFileVersionRequest { content_type: Option<String>, meta: Option<Value>, created_by: Option<String> }`
- **No file content in the request** — this is the key difference from `upload_version`
- The API computes `file_path` from artifact ref + version number + content_type extension
- Creates the `artifact_version` row with `file_path` set and `content` NULL
- Creates the parent directory on disk: `{artifacts_dir}/{file_path_parent}/`
- Returns `ArtifactVersionResponse` **with `file_path` included**

**File path computation logic:**

```rust
fn compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
    // "mypack.build_log" → "mypack/build_log"
    let ref_path = artifact_ref.replace('.', "/");
    let ext = extension_from_content_type(content_type);
    format!("{}/v{}.{}", ref_path, version, ext)
}
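The snippet above assumes an `extension_from_content_type` helper that the plan does not define. A minimal sketch of such a helper — the specific MIME-to-extension table here is an assumption, and a real implementation would cover more types:

```rust
// Hypothetical helper assumed by compute_file_path: map a MIME type to a
// file extension, falling back to "bin" for anything unrecognized.
fn extension_from_content_type(content_type: &str) -> &'static str {
    // Strip any parameters such as "; charset=utf-8" before matching.
    let base = content_type.split(';').next().unwrap_or("").trim();
    match base {
        "text/plain" => "txt",
        "text/csv" => "csv",
        "application/json" => "json",
        "application/gzip" => "gz",
        _ => "bin",
    }
}

fn compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
    // "mypack.build_log" → "mypack/build_log"
    let ref_path = artifact_ref.replace('.', "/");
    let ext = extension_from_content_type(content_type);
    format!("{}/v{}.{}", ref_path, version, ext)
}
```

With these two functions, `compute_file_path("mypack.build_log", 1, "text/plain")` yields `mypack/build_log/v1.txt`, matching the directory layout described earlier.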
#### Download Endpoints (modified)

`GET /api/v1/artifacts/{id}/download` and `GET /api/v1/artifacts/{id}/versions/{v}/download`:
- If `artifact_version.file_path` is set:
  - Resolve the absolute path: `{artifacts_dir}/{file_path}`
  - Verify the file exists; return 404 if not
  - `stat()` the file for the Content-Length header
  - Stream the file content as the response body
- If `file_path` is NULL:
  - Fall back to the existing BYTEA/JSON content from the DB (backward compatible)
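The disk-vs-DB dispatch above can be sketched as a small pure function. Everything here is illustrative rather than the actual handler code: the `ContentSource` enum and the `has_content_json` flag (standing in for a check of the `content_json` column) are assumptions:

```rust
use std::path::PathBuf;

// Illustrative sketch of download dispatch: where a version's bytes come from.
#[derive(Debug, PartialEq)]
enum ContentSource {
    Disk(PathBuf), // file-backed: stream from the shared volume
    DbJson,        // JSON content stored in content_json
    DbBytes,       // legacy BYTEA content
}

fn resolve_content_source(
    artifacts_dir: &str,
    file_path: Option<&str>,
    has_content_json: bool,
) -> ContentSource {
    match file_path {
        // file_path takes precedence: content BYTEA is NULL for file-backed rows
        Some(rel) => ContentSource::Disk(PathBuf::from(artifacts_dir).join(rel)),
        None if has_content_json => ContentSource::DbJson,
        None => ContentSource::DbBytes,
    }
}
```

Keeping the branch order explicit like this is what makes the change backward compatible: rows with `file_path = NULL` behave exactly as before.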
#### Upload Endpoint (unchanged for now)

`POST /api/v1/artifacts/{id}/versions/upload` (multipart) — continues to store in DB BYTEA. This remains available for non-execution uploads (external systems, small files, etc.).

#### Response DTO Changes

**`crates/api/src/dto/artifact.rs`**:
- Add `file_path: Option<String>` to `ArtifactVersionResponse`
- Add `file_path: Option<String>` to `ArtifactVersionSummary`
- Add a `CreateFileVersionRequest` DTO

### Phase 4: Worker Changes

#### Environment Variable Injection

**`crates/worker/src/executor.rs`** — `prepare_execution_context()`:
- Add `ATTUNE_ARTIFACTS_DIR` to the standard env vars block:

```rust
env.insert("ATTUNE_ARTIFACTS_DIR".to_string(), self.artifacts_dir.clone());
```

- The `ActionExecutor` struct needs to hold the `artifacts_dir` value (sourced from config)

#### Post-Execution Finalization

**`crates/worker/src/executor.rs`** — after execution completes (success or failure):

```
async fn finalize_artifacts(&self, execution_id: i64) -> Result<()>
```

1. Query `artifact_version` rows joined through `artifact.execution = execution_id` where `file_path IS NOT NULL`
2. For each version with a `file_path`:
   - Resolve the absolute path: `{artifacts_dir}/{file_path}`
   - `tokio::fs::metadata(path).await` to get the file size
   - If the file exists: update `artifact_version.size_bytes` via the repository
   - If the file doesn't exist: set `size_bytes = 0` (the execution didn't produce the file)
3. For each parent artifact: update `artifact.size_bytes` to the latest version's `size_bytes`

This runs after every execution regardless of success/failure status, since even failed executions may have written partial artifacts.
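Step 3 needs the latest version's size per artifact. The core of that bookkeeping can be sketched as a pure fold over `(artifact_id, version, size_bytes)` tuples — the tuple shape is an assumption that mirrors the repository rows, and input order deliberately does not matter:

```rust
use std::collections::HashMap;

// For each artifact, keep the size_bytes of its highest version number.
// Input tuples are (artifact_id, version, size_bytes), in any order.
fn latest_size_per_artifact(versions: &[(i64, i32, i64)]) -> HashMap<i64, i64> {
    let mut latest: HashMap<i64, (i32, i64)> = HashMap::new();
    for &(artifact, version, size) in versions {
        let entry = latest.entry(artifact).or_insert((version, size));
        if version > entry.0 {
            *entry = (version, size);
        }
    }
    // Drop the version number once the winner per artifact is known.
    latest.into_iter().map(|(a, (_v, s))| (a, s)).collect()
}
```

Tracking the `(version, size)` pair per artifact avoids a second query to find the latest version after finalization.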
#### Simplify Old ArtifactManager

**`crates/worker/src/artifacts.rs`**:
- The existing `ArtifactManager` is a standalone prototype disconnected from the DB-backed system. It can be simplified to only handle `artifacts_dir` path resolution and directory creation, or removed entirely since the API now manages paths.
- Keep the struct as a thin wrapper if it's useful for the finalization logic, but remove the `store_logs`, `store_result`, and `store_file` methods that duplicate what the API does.

### Phase 5: Retention & Cleanup

#### DB Trigger (existing, minor update)

The `enforce_artifact_retention` trigger fires `AFTER INSERT ON artifact_version` and deletes old version rows when the count exceeds the limit. This continues to work for row deletion. However, it **cannot** delete files on disk (triggers can't do filesystem I/O).

#### Orphan File Cleanup (new)

Add an async cleanup mechanism — either a periodic task in the worker/executor or a dedicated CLI command:

**`attune artifact cleanup`** (CLI) or periodic task:
1. Scan all files under `{artifacts_dir}/`
2. For each file, check whether a matching `artifact_version.file_path` row exists
3. If no row exists (orphaned file), delete the file
4. Also delete empty directories

This handles:
- Files left behind after the retention trigger deletes version rows
- Files from crashed executions that created directories but whose version rows were cleaned up
- Manual DB cleanup scenarios

**Frequency:** Daily or on-demand via the CLI. Orphaned files are not harmful (just wasted disk space), so aggressive cleanup isn't critical.
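The heart of steps 1–3 is a set difference between paths found on disk and `file_path` values known to the DB. A sketch over pre-collected listings (the directory walk and SQL query feeding it are omitted, and `find_orphans` is a hypothetical name):

```rust
use std::collections::HashSet;

// Given relative paths found on disk and the file_path values known to the DB,
// return the orphans: on-disk files with no matching artifact_version row.
fn find_orphans(on_disk: &[&str], db_paths: &[&str]) -> Vec<String> {
    let known: HashSet<&str> = db_paths.iter().copied().collect();
    on_disk
        .iter()
        .copied()
        .filter(|p| !known.contains(p))
        .map(str::to_string)
        .collect()
}
```

Comparing relative paths (as stored in `file_path`) rather than absolute paths keeps the check independent of where `artifacts_dir` is mounted.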
#### Artifact Deletion Endpoint

The existing `DELETE /api/v1/artifacts/{id}` cascades to `artifact_version` rows via FK. Enhance it to also delete files on disk:
- Before deleting the DB row, query all versions with `file_path IS NOT NULL`
- Delete each file from disk
- Then delete the DB row (cascades to version rows)
- Clean up empty parent directories

Similarly for `DELETE /api/v1/artifacts/{id}/versions/{v}`.

## Schema Summary

### artifact table (unchanged)

Existing columns remain. `size_bytes` continues to reflect the latest version's size (updated by worker finalization for file-based artifacts, updated by DB trigger for DB-stored artifacts).

### artifact_version table (modified)

| Column | Type | Notes |
|--------|------|-------|
| `id` | BIGSERIAL | PK |
| `artifact` | BIGINT | FK → artifact(id) ON DELETE CASCADE |
| `version` | INTEGER | Auto-assigned by `next_artifact_version()` |
| `content_type` | TEXT | MIME type |
| `size_bytes` | BIGINT | Set by worker finalization for file-based; set at insert for DB-stored |
| `content` | BYTEA | NULL for file-based artifacts; populated for DB-stored uploads |
| `content_json` | JSONB | For JSON content versions (unchanged) |
| **`file_path`** | **TEXT** | **NEW — relative path from `artifacts_dir`. When set, `content` is NULL** |
| `meta` | JSONB | Free-form metadata |
| `created_by` | TEXT | Who created this version |
| `created` | TIMESTAMPTZ | Immutable |

**Invariant:** Exactly one of `content`, `content_json`, or `file_path` should be non-NULL for a given version row.
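If desired, this invariant could also be enforced at the database level with a CHECK constraint. The constraint below is a sketch, not part of the planned migration — the plan as written relies on application-level discipline instead:

```sql
-- Hypothetical hardening of the one-of-three invariant.
-- NOT VALID skips checking existing rows, which may predate the rule.
ALTER TABLE artifact_version
    ADD CONSTRAINT chk_artifact_version_one_content CHECK (
        (content IS NOT NULL)::int
      + (content_json IS NOT NULL)::int
      + (file_path IS NOT NULL)::int = 1
    ) NOT VALID;
```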
## Files Changed

| File | Changes |
|------|---------|
| `crates/common/src/config.rs` | Add `artifacts_dir` field with default |
| `crates/common/src/models.rs` | Add `file_path` to `ArtifactVersion` |
| `crates/common/src/repositories/artifact.rs` | Wire `file_path` through create; add `update_size_bytes`, `find_file_versions_by_execution` |
| `crates/api/src/dto/artifact.rs` | Add `file_path` to version response DTOs; add `CreateFileVersionRequest` |
| `crates/api/src/routes/artifacts.rs` | New `create_version_file` endpoint; modify download endpoints for disk reads |
| `crates/api/src/state.rs` | No change needed — `config` already accessible via `AppState.config` |
| `crates/worker/src/executor.rs` | Inject `ATTUNE_ARTIFACTS_DIR` env var; add `finalize_artifacts()` post-execution |
| `crates/worker/src/service.rs` | Pass `artifacts_dir` config to `ActionExecutor` |
| `crates/worker/src/artifacts.rs` | Simplify or remove old `ArtifactManager` |
| `migrations/20250101000011_artifact_file_storage.sql` | Add `file_path` column to `artifact_version` |
| `config.development.yaml` | Add `artifacts_dir: ./artifacts` |
| `config.docker.yaml` | Add `artifacts_dir: /opt/attune/artifacts` |
| `docker-compose.yaml` | Add `artifacts_data` volume; mount in api + worker services |

## Environment Variables

| Variable | Set By | Available To | Value |
|----------|--------|--------------|-------|
| `ATTUNE_ARTIFACTS_DIR` | Worker | Execution process | Absolute path to artifacts volume (e.g., `/opt/attune/artifacts`) |
| `ATTUNE__ARTIFACTS_DIR` | Docker Compose | API / worker services | Config override for `artifacts_dir` |

## Backward Compatibility

- **Existing DB-stored artifacts continue to work.** Download endpoints check `file_path` first, then fall back to BYTEA/JSON content.
- **Existing multipart upload endpoint unchanged.** External systems can still upload small files via `POST /artifacts/{id}/versions/upload` — those go to the DB as before.
- **Progress and URL artifacts unchanged.** They don't use `artifact_version` content at all.
- **No data migration needed.** Existing artifacts have `file_path = NULL` and continue to serve from the DB.

## Future Considerations

- **External object storage (S3/MinIO):** The `file_path` abstraction makes it straightforward to swap the local filesystem for S3 later — the path becomes an object key, and the download endpoint proxies or redirects.
- **Streaming writes:** With disk-based storage, a future enhancement could allow the API to stream large file uploads directly to disk instead of buffering in memory.
- **Artifact garbage collection:** The orphan cleanup could be integrated into the executor's periodic maintenance loop alongside execution timeout monitoring.
- **Cross-execution artifact access:** Since artifacts are keyed by ref (not execution ID), a future enhancement could let actions declare artifact dependencies, with the worker resolving and mounting those paths.
@@ -186,6 +186,18 @@ END $$;
|
|||||||
|
|
||||||
COMMENT ON TYPE artifact_retention_enum IS 'Type of retention policy';
|
COMMENT ON TYPE artifact_retention_enum IS 'Type of retention policy';
|
||||||
|
|
||||||
|
-- ArtifactVisibility enum
|
||||||
|
DO $$ BEGIN
|
||||||
|
CREATE TYPE artifact_visibility_enum AS ENUM (
|
||||||
|
'public',
|
||||||
|
'private'
|
||||||
|
);
|
||||||
|
EXCEPTION
|
||||||
|
WHEN duplicate_object THEN null;
|
||||||
|
END $$;
|
||||||
|
|
||||||
|
COMMENT ON TYPE artifact_visibility_enum IS 'Visibility of an artifact (public = viewable by all users, private = scoped by owner)';
|
||||||
|
|
||||||
|
|
||||||
-- PackEnvironmentStatus enum
|
-- PackEnvironmentStatus enum
|
||||||
DO $$ BEGIN
|
DO $$ BEGIN
|
||||||
|
|||||||
@@ -143,6 +143,7 @@ CREATE TABLE artifact (
     scope owner_type_enum NOT NULL DEFAULT 'system',
     owner TEXT NOT NULL DEFAULT '',
     type artifact_type_enum NOT NULL,
+    visibility artifact_visibility_enum NOT NULL DEFAULT 'private',
     retention_policy artifact_retention_enum NOT NULL DEFAULT 'versions',
     retention_limit INTEGER NOT NULL DEFAULT 1,
     created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
@@ -157,6 +158,8 @@ CREATE INDEX idx_artifact_type ON artifact(type);
 CREATE INDEX idx_artifact_created ON artifact(created DESC);
 CREATE INDEX idx_artifact_scope_owner ON artifact(scope, owner);
 CREATE INDEX idx_artifact_type_created ON artifact(type, created DESC);
+CREATE INDEX idx_artifact_visibility ON artifact(visibility);
+CREATE INDEX idx_artifact_visibility_scope ON artifact(visibility, scope, owner);

 -- Trigger
 CREATE TRIGGER update_artifact_updated
@@ -170,6 +173,7 @@ COMMENT ON COLUMN artifact.ref IS 'Artifact reference/path';
 COMMENT ON COLUMN artifact.scope IS 'Owner type (system, identity, pack, action, sensor)';
 COMMENT ON COLUMN artifact.owner IS 'Owner identifier';
 COMMENT ON COLUMN artifact.type IS 'Artifact type (file, url, progress, etc.)';
+COMMENT ON COLUMN artifact.visibility IS 'Visibility level: public (all users) or private (scoped by scope/owner)';
 COMMENT ON COLUMN artifact.retention_policy IS 'How to retain artifacts (versions, days, hours, minutes)';
 COMMENT ON COLUMN artifact.retention_limit IS 'Numeric limit for retention policy';
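The intended semantics of the new `visibility` column (public readable by any authenticated user, private gated by the artifact's scope/owner pair) can be sketched as a check like the one below. This is a hypothetical helper: the schema comments note that full RBAC enforcement is deferred, so the exact matching rule is an assumption.

```python
def can_view(artifact: dict, viewer_scope: str, viewer_owner: str) -> bool:
    # Public artifacts are viewable by any authenticated user
    if artifact["visibility"] == "public":
        return True
    # Private artifacts: viewer must match the artifact's scope/owner pair
    return (artifact["scope"], artifact["owner"]) == (viewer_scope, viewer_owner)

art = {"visibility": "private", "scope": "pack", "owner": "mypack"}
print(can_view(art, "pack", "mypack"))   # True
print(can_view(art, "identity", "bob"))  # False
```

The `idx_artifact_visibility_scope` composite index above matches this access pattern: listings filter on `visibility` first, then narrow by `(scope, owner)`.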
@@ -329,3 +329,98 @@ CREATE TRIGGER workflow_execution_status_changed_notify
     EXECUTE FUNCTION notify_workflow_execution_status_changed();

 COMMENT ON FUNCTION notify_workflow_execution_status_changed() IS 'Sends workflow execution status change notifications via PostgreSQL LISTEN/NOTIFY';
+
+-- ============================================================================
+-- ARTIFACT NOTIFICATIONS
+-- ============================================================================
+
+-- Function to notify on artifact creation
+CREATE OR REPLACE FUNCTION notify_artifact_created()
+RETURNS TRIGGER AS $$
+DECLARE
+    payload JSON;
+BEGIN
+    payload := json_build_object(
+        'entity_type', 'artifact',
+        'entity_id', NEW.id,
+        'id', NEW.id,
+        'ref', NEW.ref,
+        'type', NEW.type,
+        'visibility', NEW.visibility,
+        'name', NEW.name,
+        'execution', NEW.execution,
+        'scope', NEW.scope,
+        'owner', NEW.owner,
+        'content_type', NEW.content_type,
+        'size_bytes', NEW.size_bytes,
+        'created', NEW.created
+    );
+
+    PERFORM pg_notify('artifact_created', payload::text);
+
+    RETURN NEW;
+END;
+$$ LANGUAGE plpgsql;
+
+-- Trigger on artifact table for creation
+CREATE TRIGGER artifact_created_notify
+    AFTER INSERT ON artifact
+    FOR EACH ROW
+    EXECUTE FUNCTION notify_artifact_created();
+
+COMMENT ON FUNCTION notify_artifact_created() IS 'Sends artifact creation notifications via PostgreSQL LISTEN/NOTIFY';
+
+-- Function to notify on artifact updates (progress appends, data changes)
+CREATE OR REPLACE FUNCTION notify_artifact_updated()
+RETURNS TRIGGER AS $$
+DECLARE
+    payload JSON;
+    latest_percent DOUBLE PRECISION;
+    latest_message TEXT;
+    entry_count INTEGER;
+BEGIN
+    -- Only notify on actual changes
+    IF TG_OP = 'UPDATE' THEN
+        -- Extract progress summary from data array if this is a progress artifact
+        IF NEW.type = 'progress' AND NEW.data IS NOT NULL AND jsonb_typeof(NEW.data) = 'array' THEN
+            entry_count := jsonb_array_length(NEW.data);
+            IF entry_count > 0 THEN
+                latest_percent := (NEW.data -> (entry_count - 1) ->> 'percent')::DOUBLE PRECISION;
+                latest_message := NEW.data -> (entry_count - 1) ->> 'message';
+            END IF;
+        END IF;
+
+        payload := json_build_object(
+            'entity_type', 'artifact',
+            'entity_id', NEW.id,
+            'id', NEW.id,
+            'ref', NEW.ref,
+            'type', NEW.type,
+            'visibility', NEW.visibility,
+            'name', NEW.name,
+            'execution', NEW.execution,
+            'scope', NEW.scope,
+            'owner', NEW.owner,
+            'content_type', NEW.content_type,
+            'size_bytes', NEW.size_bytes,
+            'progress_percent', latest_percent,
+            'progress_message', latest_message,
+            'progress_entries', entry_count,
+            'created', NEW.created,
+            'updated', NEW.updated
+        );
+
+        PERFORM pg_notify('artifact_updated', payload::text);
+    END IF;
+
+    RETURN NEW;
+END;
+$$ LANGUAGE plpgsql;
+
+-- Trigger on artifact table for updates
+CREATE TRIGGER artifact_updated_notify
+    AFTER UPDATE ON artifact
+    FOR EACH ROW
+    EXECUTE FUNCTION notify_artifact_updated();
+
+COMMENT ON FUNCTION notify_artifact_updated() IS 'Sends artifact update notifications via PostgreSQL LISTEN/NOTIFY (includes progress summary for progress-type artifacts)';
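The progress-summary extraction performed by the `notify_artifact_updated` trigger (last entry of the `data` JSONB array wins) can be mirrored in Python to make the behaviour concrete. This is a sketch of the trigger's logic, not code that exists in the repo:

```python
def progress_summary(artifact_type: str, data) -> dict:
    """Mirror of the trigger logic: for progress artifacts, pull
    percent/message from the LAST entry of the JSONB data array;
    everything stays None (SQL NULL) otherwise."""
    percent = message = entries = None
    if artifact_type == "progress" and isinstance(data, list):
        entries = len(data)
        if entries > 0:
            last = data[-1]
            raw = last.get("percent")
            # ->> returns text in SQL, then cast to DOUBLE PRECISION
            percent = float(raw) if raw is not None else None
            message = last.get("message")
    return {
        "progress_percent": percent,
        "progress_message": message,
        "progress_entries": entries,
    }
```

This is why the Web UI can render inline progress bars from the notification alone: the payload already carries the latest percent/message, so no follow-up API call is needed.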
202
migrations/20250101000010_artifact_content.sql
Normal file
@@ -0,0 +1,202 @@
-- Migration: Artifact Content System
-- Description: Enhances the artifact table with content fields (name, description,
-- content_type, size_bytes, execution link, structured data, visibility)
-- and creates the artifact_version table for versioned file/data storage.
--
-- The artifact table now serves as the "header" for a logical artifact,
-- while artifact_version rows hold the actual immutable content snapshots.
-- Progress-type artifacts store their live state directly in artifact.data
-- (append-style updates without creating new versions).
--
-- Version: 20250101000010

-- ============================================================================
-- ENHANCE ARTIFACT TABLE
-- ============================================================================

-- Human-readable name (e.g. "Build Log", "Test Results")
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS name TEXT;

-- Optional longer description
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS description TEXT;

-- MIME content type (e.g. "application/json", "text/plain", "image/png")
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS content_type TEXT;

-- Total size in bytes of the latest version's content (NULL for progress artifacts)
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS size_bytes BIGINT;

-- Execution that produced/owns this artifact (plain BIGINT, no FK — execution is a hypertable)
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS execution BIGINT;

-- Structured data for progress-type artifacts and small structured payloads.
-- Progress artifacts append entries here; file artifacts may store parsed metadata.
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS data JSONB;

-- Visibility: public artifacts are viewable by all authenticated users;
-- private artifacts are restricted based on the artifact's scope/owner.
-- The scope (identity, action, pack, etc.) + owner fields define who can access
-- a private artifact. Full RBAC enforcement is deferred — for now the column
-- enables filtering and is available for future permission checks.
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS visibility artifact_visibility_enum NOT NULL DEFAULT 'private';

-- New indexes for the added columns
CREATE INDEX IF NOT EXISTS idx_artifact_execution ON artifact(execution);
CREATE INDEX IF NOT EXISTS idx_artifact_name ON artifact(name);
CREATE INDEX IF NOT EXISTS idx_artifact_execution_type ON artifact(execution, type);
CREATE INDEX IF NOT EXISTS idx_artifact_visibility ON artifact(visibility);
CREATE INDEX IF NOT EXISTS idx_artifact_visibility_scope ON artifact(visibility, scope, owner);

-- Comments for new columns
COMMENT ON COLUMN artifact.name IS 'Human-readable artifact name';
COMMENT ON COLUMN artifact.description IS 'Optional description of the artifact';
COMMENT ON COLUMN artifact.content_type IS 'MIME content type (e.g. application/json, text/plain)';
COMMENT ON COLUMN artifact.size_bytes IS 'Size of latest version content in bytes';
COMMENT ON COLUMN artifact.execution IS 'Execution that produced this artifact (no FK — execution is a hypertable)';
COMMENT ON COLUMN artifact.data IS 'Structured JSONB data for progress artifacts or metadata';
COMMENT ON COLUMN artifact.visibility IS 'Access visibility: public (all users) or private (scope/owner-restricted)';


-- ============================================================================
-- ARTIFACT_VERSION TABLE
-- ============================================================================
-- Each row is an immutable snapshot of artifact content. File-type artifacts get
-- a new version on each upload; progress-type artifacts do NOT use versions
-- (they update artifact.data directly).

CREATE TABLE artifact_version (
    id BIGSERIAL PRIMARY KEY,

    -- Parent artifact
    artifact BIGINT NOT NULL REFERENCES artifact(id) ON DELETE CASCADE,

    -- Monotonically increasing version number within the artifact (1-based)
    version INTEGER NOT NULL,

    -- MIME content type for this specific version (may differ from parent)
    content_type TEXT,

    -- Size of the content in bytes
    size_bytes BIGINT,

    -- Binary content (file uploads, DB-stored). NULL for file-backed versions.
    content BYTEA,

    -- Structured content (JSON payloads, parsed results, etc.)
    content_json JSONB,

    -- Relative path from artifacts_dir root for disk-stored content.
    -- When set, content BYTEA is NULL — file lives on shared volume.
    -- Pattern: {ref_slug}/v{version}.{ext}
    -- e.g., "mypack/build_log/v1.txt"
    file_path TEXT,

    -- Free-form metadata about this version (e.g. commit hash, build number)
    meta JSONB,

    -- Who or what created this version (identity ref, action ref, "system", etc.)
    created_by TEXT,

    -- Immutable — no updated column
    created TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Unique constraint: one version number per artifact
ALTER TABLE artifact_version
    ADD CONSTRAINT uq_artifact_version_artifact_version UNIQUE (artifact, version);

-- Indexes
CREATE INDEX idx_artifact_version_artifact ON artifact_version(artifact);
CREATE INDEX idx_artifact_version_artifact_version ON artifact_version(artifact, version DESC);
CREATE INDEX idx_artifact_version_created ON artifact_version(created DESC);
CREATE INDEX idx_artifact_version_file_path ON artifact_version(file_path) WHERE file_path IS NOT NULL;

-- Comments
COMMENT ON TABLE artifact_version IS 'Immutable content snapshots for artifacts (file uploads, structured data)';
COMMENT ON COLUMN artifact_version.artifact IS 'Parent artifact this version belongs to';
COMMENT ON COLUMN artifact_version.version IS 'Version number (1-based, monotonically increasing per artifact)';
COMMENT ON COLUMN artifact_version.content_type IS 'MIME content type for this version';
COMMENT ON COLUMN artifact_version.size_bytes IS 'Size of content in bytes';
COMMENT ON COLUMN artifact_version.content IS 'Binary content (file data)';
COMMENT ON COLUMN artifact_version.content_json IS 'Structured JSON content';
COMMENT ON COLUMN artifact_version.meta IS 'Free-form metadata about this version';
COMMENT ON COLUMN artifact_version.created_by IS 'Who created this version (identity ref, action ref, system)';
COMMENT ON COLUMN artifact_version.file_path IS 'Relative path from artifacts_dir root for disk-stored content. When set, content BYTEA is NULL — file lives on shared volume.';
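The `file_path` layout fixed by the column comment ({ref_slug}/v{version}.{ext}) is trivial to reproduce. How an artifact `ref` is slugged into a directory path is not specified in this migration, so the helper below takes the slug as given:

```python
def version_file_path(ref_slug: str, version: int, ext: str) -> str:
    # Pattern from the column comment: {ref_slug}/v{version}.{ext}
    return f"{ref_slug}/v{version}.{ext}"

print(version_file_path("mypack/build_log", 1, "txt"))  # mypack/build_log/v1.txt
```

Keeping the path relative to `artifacts_dir` is what makes the later S3 swap cheap: the same string works as a filesystem path or an object key.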

-- ============================================================================
-- HELPER FUNCTION: next_artifact_version
-- ============================================================================
-- Returns the next version number for an artifact (MAX(version) + 1, or 1 if none).

CREATE OR REPLACE FUNCTION next_artifact_version(p_artifact_id BIGINT)
RETURNS INTEGER AS $$
DECLARE
    v_next INTEGER;
BEGIN
    SELECT COALESCE(MAX(version), 0) + 1
    INTO v_next
    FROM artifact_version
    WHERE artifact = p_artifact_id;

    RETURN v_next;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION next_artifact_version IS 'Returns the next version number for the given artifact';
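The helper's `COALESCE(MAX(version), 0) + 1` behaviour pins down the 1-based numbering; a minimal Python equivalent:

```python
def next_artifact_version(existing_versions: list) -> int:
    # COALESCE(MAX(version), 0) + 1: an artifact with no versions gets 1
    return (max(existing_versions) if existing_versions else 0) + 1

print(next_artifact_version([]))         # 1
print(next_artifact_version([1, 2, 3]))  # 4
```

Note that because retention can delete old rows, the result is MAX + 1, not COUNT + 1, so version numbers keep increasing monotonically even after pruning.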

-- ============================================================================
-- RETENTION ENFORCEMENT FUNCTION
-- ============================================================================
-- Called after inserting a new version to enforce the artifact retention policy.
-- For 'versions' policy: deletes oldest versions beyond the limit.
-- Time-based policies (days/hours/minutes) are handled by a scheduled job (not this trigger).

CREATE OR REPLACE FUNCTION enforce_artifact_retention()
RETURNS TRIGGER AS $$
DECLARE
    v_policy artifact_retention_enum;
    v_limit INTEGER;
    v_count INTEGER;
BEGIN
    SELECT retention_policy, retention_limit
    INTO v_policy, v_limit
    FROM artifact
    WHERE id = NEW.artifact;

    IF v_policy = 'versions' AND v_limit > 0 THEN
        -- Count existing versions
        SELECT COUNT(*) INTO v_count
        FROM artifact_version
        WHERE artifact = NEW.artifact;

        -- If over limit, delete the oldest ones
        IF v_count > v_limit THEN
            DELETE FROM artifact_version
            WHERE id IN (
                SELECT id
                FROM artifact_version
                WHERE artifact = NEW.artifact
                ORDER BY version ASC
                LIMIT (v_count - v_limit)
            );
        END IF;
    END IF;

    -- Update parent artifact size_bytes with the new version's size
    UPDATE artifact
    SET size_bytes = NEW.size_bytes,
        content_type = COALESCE(NEW.content_type, content_type)
    WHERE id = NEW.artifact;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_enforce_artifact_retention
    AFTER INSERT ON artifact_version
    FOR EACH ROW
    EXECUTE FUNCTION enforce_artifact_retention();

COMMENT ON FUNCTION enforce_artifact_retention IS 'Enforces version-count retention policy and syncs size to parent artifact';
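The 'versions' branch of the trigger reduces to "keep the newest `limit` version numbers, drop the oldest overflow". A Python sketch of just that selection (time-based policies are out of scope here, as in the trigger itself):

```python
def enforce_retention(versions: list, policy: str, limit: int) -> list:
    """Return the version numbers that survive pruning.

    Mirrors the trigger's ORDER BY version ASC LIMIT (count - limit)
    deletion: the oldest overflow is removed, the newest `limit` kept.
    Non-'versions' policies are untouched (a scheduled job handles them).
    """
    ordered = sorted(versions)
    if policy != "versions" or limit <= 0:
        return ordered
    overflow = len(ordered) - limit
    return ordered[overflow:] if overflow > 0 else ordered

print(enforce_retention([1, 2, 3, 4, 5], "versions", 3))  # [3, 4, 5]
```

Since the trigger fires AFTER INSERT, the just-inserted version is included in the count, so with `limit = 1` each upload immediately replaces the previous version.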
Submodule packs.external/python_example updated: daf3d04395...9414ee34e2
@@ -37,7 +37,7 @@ def main():

     # Simulate failure if requested
     if should_fail:
-        raise RuntimeError(f"Action intentionally failed as requested (fail=true)")
+        raise RuntimeError("Action intentionally failed as requested (fail=true)")

     # Calculate execution time
     execution_time = time.time() - start_time
@@ -26,6 +26,10 @@ const ExecutionsPage = lazy(() => import("@/pages/executions/ExecutionsPage"));
 const ExecutionDetailPage = lazy(
   () => import("@/pages/executions/ExecutionDetailPage"),
 );
+const ArtifactsPage = lazy(() => import("@/pages/artifacts/ArtifactsPage"));
+const ArtifactDetailPage = lazy(
+  () => import("@/pages/artifacts/ArtifactDetailPage"),
+);
 const EventsPage = lazy(() => import("@/pages/events/EventsPage"));
 const EventDetailPage = lazy(() => import("@/pages/events/EventDetailPage"));
 const EnforcementsPage = lazy(
@@ -99,6 +103,11 @@ function App() {
   path="executions/:id"
   element={<ExecutionDetailPage />}
 />
+<Route path="artifacts" element={<ArtifactsPage />} />
+<Route
+  path="artifacts/:id"
+  element={<ArtifactDetailPage />}
+/>
 <Route path="events" element={<EventsPage />} />
 <Route path="events/:id" element={<EventDetailPage />} />
 <Route path="enforcements" element={<EnforcementsPage />} />
@@ -1,32 +1,32 @@
 /* generated using openapi-typescript-codegen -- do not edit */
 /* istanbul ignore file */
 /* tslint:disable */
-/* eslint-disable */
-import type { ApiRequestOptions } from './ApiRequestOptions';
+import type { ApiRequestOptions } from "./ApiRequestOptions";

 type Resolver<T> = (options: ApiRequestOptions) => Promise<T>;
 type Headers = Record<string, string>;

 export type OpenAPIConfig = {
   BASE: string;
   VERSION: string;
   WITH_CREDENTIALS: boolean;
-  CREDENTIALS: 'include' | 'omit' | 'same-origin';
+  CREDENTIALS: "include" | "omit" | "same-origin";
   TOKEN?: string | Resolver<string> | undefined;
   USERNAME?: string | Resolver<string> | undefined;
   PASSWORD?: string | Resolver<string> | undefined;
   HEADERS?: Headers | Resolver<Headers> | undefined;
   ENCODE_PATH?: ((path: string) => string) | undefined;
 };

 export const OpenAPI: OpenAPIConfig = {
-  BASE: 'http://localhost:8080',
-  VERSION: '0.1.0',
+  BASE: "http://localhost:8080",
+  VERSION: "0.1.0",
   WITH_CREDENTIALS: false,
-  CREDENTIALS: 'include',
+  CREDENTIALS: "include",
   TOKEN: undefined,
   USERNAME: undefined,
   PASSWORD: undefined,
   HEADERS: undefined,
   ENCODE_PATH: undefined,
 };
@@ -1,124 +1,124 @@
 /* generated using openapi-typescript-codegen -- do not edit */
 /* istanbul ignore file */
 /* tslint:disable */
-/* eslint-disable */
-export { ApiError } from './core/ApiError';
-export { CancelablePromise, CancelError } from './core/CancelablePromise';
-export { OpenAPI } from './core/OpenAPI';
-export type { OpenAPIConfig } from './core/OpenAPI';
-
-export type { ActionResponse } from './models/ActionResponse';
-export type { ActionSummary } from './models/ActionSummary';
-export type { ApiResponse_ActionResponse } from './models/ApiResponse_ActionResponse';
-export type { ApiResponse_CurrentUserResponse } from './models/ApiResponse_CurrentUserResponse';
-export type { ApiResponse_EnforcementResponse } from './models/ApiResponse_EnforcementResponse';
-export type { ApiResponse_EventResponse } from './models/ApiResponse_EventResponse';
-export type { ApiResponse_ExecutionResponse } from './models/ApiResponse_ExecutionResponse';
-export type { ApiResponse_InquiryResponse } from './models/ApiResponse_InquiryResponse';
-export type { ApiResponse_KeyResponse } from './models/ApiResponse_KeyResponse';
-export type { ApiResponse_PackInstallResponse } from './models/ApiResponse_PackInstallResponse';
-export type { ApiResponse_PackResponse } from './models/ApiResponse_PackResponse';
-export type { ApiResponse_QueueStatsResponse } from './models/ApiResponse_QueueStatsResponse';
-export type { ApiResponse_RuleResponse } from './models/ApiResponse_RuleResponse';
-export type { ApiResponse_SensorResponse } from './models/ApiResponse_SensorResponse';
-export type { ApiResponse_String } from './models/ApiResponse_String';
-export type { ApiResponse_TokenResponse } from './models/ApiResponse_TokenResponse';
-export type { ApiResponse_TriggerResponse } from './models/ApiResponse_TriggerResponse';
-export type { ApiResponse_WebhookReceiverResponse } from './models/ApiResponse_WebhookReceiverResponse';
-export type { ApiResponse_WorkflowResponse } from './models/ApiResponse_WorkflowResponse';
-export type { ChangePasswordRequest } from './models/ChangePasswordRequest';
-export type { CreateActionRequest } from './models/CreateActionRequest';
-export type { CreateInquiryRequest } from './models/CreateInquiryRequest';
-export type { CreateKeyRequest } from './models/CreateKeyRequest';
-export type { CreatePackRequest } from './models/CreatePackRequest';
-export type { CreateRuleRequest } from './models/CreateRuleRequest';
-export type { CreateSensorRequest } from './models/CreateSensorRequest';
-export type { CreateTriggerRequest } from './models/CreateTriggerRequest';
-export type { CreateWorkflowRequest } from './models/CreateWorkflowRequest';
-export type { CurrentUserResponse } from './models/CurrentUserResponse';
-export { EnforcementCondition } from './models/EnforcementCondition';
-export type { EnforcementResponse } from './models/EnforcementResponse';
-export { EnforcementStatus } from './models/EnforcementStatus';
-export type { EnforcementSummary } from './models/EnforcementSummary';
-export type { EventResponse } from './models/EventResponse';
-export type { EventSummary } from './models/EventSummary';
-export type { ExecutionResponse } from './models/ExecutionResponse';
-export { ExecutionStatus } from './models/ExecutionStatus';
-export type { ExecutionSummary } from './models/ExecutionSummary';
-export type { HealthResponse } from './models/HealthResponse';
-export type { i64 } from './models/i64';
-export type { InquiryRespondRequest } from './models/InquiryRespondRequest';
-export type { InquiryResponse } from './models/InquiryResponse';
-export { InquiryStatus } from './models/InquiryStatus';
-export type { InquirySummary } from './models/InquirySummary';
-export type { InstallPackRequest } from './models/InstallPackRequest';
-export type { KeyResponse } from './models/KeyResponse';
-export type { KeySummary } from './models/KeySummary';
-export type { LoginRequest } from './models/LoginRequest';
-export { OwnerType } from './models/OwnerType';
-export type { PackInstallResponse } from './models/PackInstallResponse';
-export type { PackResponse } from './models/PackResponse';
-export type { PackSummary } from './models/PackSummary';
-export type { PackTestExecution } from './models/PackTestExecution';
-export type { PackTestResult } from './models/PackTestResult';
-export type { PackTestSummary } from './models/PackTestSummary';
-export type { PackWorkflowSyncResponse } from './models/PackWorkflowSyncResponse';
-export type { PackWorkflowValidationResponse } from './models/PackWorkflowValidationResponse';
-export type { PaginatedResponse_ActionSummary } from './models/PaginatedResponse_ActionSummary';
-export type { PaginatedResponse_EnforcementSummary } from './models/PaginatedResponse_EnforcementSummary';
-export type { PaginatedResponse_EventSummary } from './models/PaginatedResponse_EventSummary';
-export type { PaginatedResponse_ExecutionSummary } from './models/PaginatedResponse_ExecutionSummary';
-export type { PaginatedResponse_InquirySummary } from './models/PaginatedResponse_InquirySummary';
-export type { PaginatedResponse_KeySummary } from './models/PaginatedResponse_KeySummary';
-export type { PaginatedResponse_PackSummary } from './models/PaginatedResponse_PackSummary';
-export type { PaginatedResponse_PackTestSummary } from './models/PaginatedResponse_PackTestSummary';
-export type { PaginatedResponse_RuleSummary } from './models/PaginatedResponse_RuleSummary';
-export type { PaginatedResponse_SensorSummary } from './models/PaginatedResponse_SensorSummary';
-export type { PaginatedResponse_TriggerSummary } from './models/PaginatedResponse_TriggerSummary';
-export type { PaginatedResponse_WorkflowSummary } from './models/PaginatedResponse_WorkflowSummary';
-export type { PaginationMeta } from './models/PaginationMeta';
-export type { QueueStatsResponse } from './models/QueueStatsResponse';
-export type { RefreshTokenRequest } from './models/RefreshTokenRequest';
-export type { RegisterPackRequest } from './models/RegisterPackRequest';
-export type { RegisterRequest } from './models/RegisterRequest';
-export type { RuleResponse } from './models/RuleResponse';
-export type { RuleSummary } from './models/RuleSummary';
-export type { SensorResponse } from './models/SensorResponse';
-export type { SensorSummary } from './models/SensorSummary';
-export type { SuccessResponse } from './models/SuccessResponse';
-export type { TestCaseResult } from './models/TestCaseResult';
-export { TestStatus } from './models/TestStatus';
-export type { TestSuiteResult } from './models/TestSuiteResult';
-export type { TokenResponse } from './models/TokenResponse';
-export type { TriggerResponse } from './models/TriggerResponse';
-export type { TriggerSummary } from './models/TriggerSummary';
-export type { UpdateActionRequest } from './models/UpdateActionRequest';
-export type { UpdateInquiryRequest } from './models/UpdateInquiryRequest';
-export type { UpdateKeyRequest } from './models/UpdateKeyRequest';
-export type { UpdatePackRequest } from './models/UpdatePackRequest';
-export type { UpdateRuleRequest } from './models/UpdateRuleRequest';
-export type { UpdateSensorRequest } from './models/UpdateSensorRequest';
-export type { UpdateTriggerRequest } from './models/UpdateTriggerRequest';
-export type { UpdateWorkflowRequest } from './models/UpdateWorkflowRequest';
-export type { UserInfo } from './models/UserInfo';
-export type { Value } from './models/Value';
-export type { WebhookReceiverRequest } from './models/WebhookReceiverRequest';
-export type { WebhookReceiverResponse } from './models/WebhookReceiverResponse';
-export type { WorkflowResponse } from './models/WorkflowResponse';
-export type { WorkflowSummary } from './models/WorkflowSummary';
-export type { WorkflowSyncResult } from './models/WorkflowSyncResult';
-
-export { ActionsService } from './services/ActionsService';
-export { AuthService } from './services/AuthService';
-export { EnforcementsService } from './services/EnforcementsService';
-export { EventsService } from './services/EventsService';
-export { ExecutionsService } from './services/ExecutionsService';
-export { HealthService } from './services/HealthService';
-export { InquiriesService } from './services/InquiriesService';
-export { PacksService } from './services/PacksService';
-export { RulesService } from './services/RulesService';
-export { SecretsService } from './services/SecretsService';
-export { SensorsService } from './services/SensorsService';
-export { TriggersService } from './services/TriggersService';
-export { WebhooksService } from './services/WebhooksService';
-export { WorkflowsService } from './services/WorkflowsService';
+export { ApiError } from "./core/ApiError";
+export { CancelablePromise, CancelError } from "./core/CancelablePromise";
+export { OpenAPI } from "./core/OpenAPI";
+export type { OpenAPIConfig } from "./core/OpenAPI";
+
+export type { ActionResponse } from "./models/ActionResponse";
+export type { ActionSummary } from "./models/ActionSummary";
+export type { ApiResponse_ActionResponse } from "./models/ApiResponse_ActionResponse";
+export type { ApiResponse_CurrentUserResponse } from "./models/ApiResponse_CurrentUserResponse";
+export type { ApiResponse_EnforcementResponse } from "./models/ApiResponse_EnforcementResponse";
+export type { ApiResponse_EventResponse } from "./models/ApiResponse_EventResponse";
+export type { ApiResponse_ExecutionResponse } from "./models/ApiResponse_ExecutionResponse";
+export type { ApiResponse_InquiryResponse } from "./models/ApiResponse_InquiryResponse";
+export type { ApiResponse_KeyResponse } from "./models/ApiResponse_KeyResponse";
+export type { ApiResponse_PackInstallResponse } from "./models/ApiResponse_PackInstallResponse";
+export type { ApiResponse_PackResponse } from "./models/ApiResponse_PackResponse";
+export type { ApiResponse_QueueStatsResponse } from "./models/ApiResponse_QueueStatsResponse";
+export type { ApiResponse_RuleResponse } from "./models/ApiResponse_RuleResponse";
+export type { ApiResponse_SensorResponse } from "./models/ApiResponse_SensorResponse";
+export type { ApiResponse_String } from "./models/ApiResponse_String";
|
||||||
|
export type { ApiResponse_TokenResponse } from "./models/ApiResponse_TokenResponse";
|
||||||
|
export type { ApiResponse_TriggerResponse } from "./models/ApiResponse_TriggerResponse";
|
||||||
|
export type { ApiResponse_WebhookReceiverResponse } from "./models/ApiResponse_WebhookReceiverResponse";
|
||||||
|
export type { ApiResponse_WorkflowResponse } from "./models/ApiResponse_WorkflowResponse";
|
||||||
|
export type { ChangePasswordRequest } from "./models/ChangePasswordRequest";
|
||||||
|
export type { CreateActionRequest } from "./models/CreateActionRequest";
|
||||||
|
export type { CreateInquiryRequest } from "./models/CreateInquiryRequest";
|
||||||
|
export type { CreateKeyRequest } from "./models/CreateKeyRequest";
|
||||||
|
export type { CreatePackRequest } from "./models/CreatePackRequest";
|
||||||
|
export type { CreateRuleRequest } from "./models/CreateRuleRequest";
|
||||||
|
export type { CreateSensorRequest } from "./models/CreateSensorRequest";
|
||||||
|
export type { CreateTriggerRequest } from "./models/CreateTriggerRequest";
|
||||||
|
export type { CreateWorkflowRequest } from "./models/CreateWorkflowRequest";
|
||||||
|
export type { CurrentUserResponse } from "./models/CurrentUserResponse";
|
||||||
|
export { EnforcementCondition } from "./models/EnforcementCondition";
|
||||||
|
export type { EnforcementResponse } from "./models/EnforcementResponse";
|
||||||
|
export { EnforcementStatus } from "./models/EnforcementStatus";
|
||||||
|
export type { EnforcementSummary } from "./models/EnforcementSummary";
|
||||||
|
export type { EventResponse } from "./models/EventResponse";
|
||||||
|
export type { EventSummary } from "./models/EventSummary";
|
||||||
|
export type { ExecutionResponse } from "./models/ExecutionResponse";
|
||||||
|
export { ExecutionStatus } from "./models/ExecutionStatus";
|
||||||
|
export type { ExecutionSummary } from "./models/ExecutionSummary";
|
||||||
|
export type { HealthResponse } from "./models/HealthResponse";
|
||||||
|
export type { i64 } from "./models/i64";
|
||||||
|
export type { InquiryRespondRequest } from "./models/InquiryRespondRequest";
|
||||||
|
export type { InquiryResponse } from "./models/InquiryResponse";
|
||||||
|
export { InquiryStatus } from "./models/InquiryStatus";
|
||||||
|
export type { InquirySummary } from "./models/InquirySummary";
|
||||||
|
export type { InstallPackRequest } from "./models/InstallPackRequest";
|
||||||
|
export type { KeyResponse } from "./models/KeyResponse";
|
||||||
|
export type { KeySummary } from "./models/KeySummary";
|
||||||
|
export type { LoginRequest } from "./models/LoginRequest";
|
||||||
|
export { OwnerType } from "./models/OwnerType";
|
||||||
|
export type { PackInstallResponse } from "./models/PackInstallResponse";
|
||||||
|
export type { PackResponse } from "./models/PackResponse";
|
||||||
|
export type { PackSummary } from "./models/PackSummary";
|
||||||
|
export type { PackTestExecution } from "./models/PackTestExecution";
|
||||||
|
export type { PackTestResult } from "./models/PackTestResult";
|
||||||
|
export type { PackTestSummary } from "./models/PackTestSummary";
|
||||||
|
export type { PackWorkflowSyncResponse } from "./models/PackWorkflowSyncResponse";
|
||||||
|
export type { PackWorkflowValidationResponse } from "./models/PackWorkflowValidationResponse";
|
||||||
|
export type { PaginatedResponse_ActionSummary } from "./models/PaginatedResponse_ActionSummary";
|
||||||
|
export type { PaginatedResponse_EnforcementSummary } from "./models/PaginatedResponse_EnforcementSummary";
|
||||||
|
export type { PaginatedResponse_EventSummary } from "./models/PaginatedResponse_EventSummary";
|
||||||
|
export type { PaginatedResponse_ExecutionSummary } from "./models/PaginatedResponse_ExecutionSummary";
|
||||||
|
export type { PaginatedResponse_InquirySummary } from "./models/PaginatedResponse_InquirySummary";
|
||||||
|
export type { PaginatedResponse_KeySummary } from "./models/PaginatedResponse_KeySummary";
|
||||||
|
export type { PaginatedResponse_PackSummary } from "./models/PaginatedResponse_PackSummary";
|
||||||
|
export type { PaginatedResponse_PackTestSummary } from "./models/PaginatedResponse_PackTestSummary";
|
||||||
|
export type { PaginatedResponse_RuleSummary } from "./models/PaginatedResponse_RuleSummary";
|
||||||
|
export type { PaginatedResponse_SensorSummary } from "./models/PaginatedResponse_SensorSummary";
|
||||||
|
export type { PaginatedResponse_TriggerSummary } from "./models/PaginatedResponse_TriggerSummary";
|
||||||
|
export type { PaginatedResponse_WorkflowSummary } from "./models/PaginatedResponse_WorkflowSummary";
|
||||||
|
export type { PaginationMeta } from "./models/PaginationMeta";
|
||||||
|
export type { QueueStatsResponse } from "./models/QueueStatsResponse";
|
||||||
|
export type { RefreshTokenRequest } from "./models/RefreshTokenRequest";
|
||||||
|
export type { RegisterPackRequest } from "./models/RegisterPackRequest";
|
||||||
|
export type { RegisterRequest } from "./models/RegisterRequest";
|
||||||
|
export type { RuleResponse } from "./models/RuleResponse";
|
||||||
|
export type { RuleSummary } from "./models/RuleSummary";
|
||||||
|
export type { SensorResponse } from "./models/SensorResponse";
|
||||||
|
export type { SensorSummary } from "./models/SensorSummary";
|
||||||
|
export type { SuccessResponse } from "./models/SuccessResponse";
|
||||||
|
export type { TestCaseResult } from "./models/TestCaseResult";
|
||||||
|
export { TestStatus } from "./models/TestStatus";
|
||||||
|
export type { TestSuiteResult } from "./models/TestSuiteResult";
|
||||||
|
export type { TokenResponse } from "./models/TokenResponse";
|
||||||
|
export type { TriggerResponse } from "./models/TriggerResponse";
|
||||||
|
export type { TriggerSummary } from "./models/TriggerSummary";
|
||||||
|
export type { UpdateActionRequest } from "./models/UpdateActionRequest";
|
||||||
|
export type { UpdateInquiryRequest } from "./models/UpdateInquiryRequest";
|
||||||
|
export type { UpdateKeyRequest } from "./models/UpdateKeyRequest";
|
||||||
|
export type { UpdatePackRequest } from "./models/UpdatePackRequest";
|
||||||
|
export type { UpdateRuleRequest } from "./models/UpdateRuleRequest";
|
||||||
|
export type { UpdateSensorRequest } from "./models/UpdateSensorRequest";
|
||||||
|
export type { UpdateTriggerRequest } from "./models/UpdateTriggerRequest";
|
||||||
|
export type { UpdateWorkflowRequest } from "./models/UpdateWorkflowRequest";
|
||||||
|
export type { UserInfo } from "./models/UserInfo";
|
||||||
|
export type { Value } from "./models/Value";
|
||||||
|
export type { WebhookReceiverRequest } from "./models/WebhookReceiverRequest";
|
||||||
|
export type { WebhookReceiverResponse } from "./models/WebhookReceiverResponse";
|
||||||
|
export type { WorkflowResponse } from "./models/WorkflowResponse";
|
||||||
|
export type { WorkflowSummary } from "./models/WorkflowSummary";
|
||||||
|
export type { WorkflowSyncResult } from "./models/WorkflowSyncResult";
|
||||||
|
|
||||||
|
export { ActionsService } from "./services/ActionsService";
|
||||||
|
export { AuthService } from "./services/AuthService";
|
||||||
|
export { EnforcementsService } from "./services/EnforcementsService";
|
||||||
|
export { EventsService } from "./services/EventsService";
|
||||||
|
export { ExecutionsService } from "./services/ExecutionsService";
|
||||||
|
export { HealthService } from "./services/HealthService";
|
||||||
|
export { InquiriesService } from "./services/InquiriesService";
|
||||||
|
export { PacksService } from "./services/PacksService";
|
||||||
|
export { RulesService } from "./services/RulesService";
|
||||||
|
export { SecretsService } from "./services/SecretsService";
|
||||||
|
export { SensorsService } from "./services/SensorsService";
|
||||||
|
export { TriggersService } from "./services/TriggersService";
|
||||||
|
export { WebhooksService } from "./services/WebhooksService";
|
||||||
|
export { WorkflowsService } from "./services/WorkflowsService";
|
||||||
|
618	web/src/components/executions/ExecutionArtifactsPanel.tsx	Normal file
@@ -0,0 +1,618 @@
import { useState, useMemo, useEffect, useCallback } from "react";
import { formatDistanceToNow } from "date-fns";
import {
  ChevronDown,
  ChevronRight,
  FileText,
  FileImage,
  File,
  BarChart3,
  Link as LinkIcon,
  Table2,
  Package,
  Loader2,
  Download,
  Eye,
  X,
} from "lucide-react";
import {
  useExecutionArtifacts,
  useArtifact,
  type ArtifactSummary,
  type ArtifactType,
} from "@/hooks/useArtifacts";
import { useArtifactStream } from "@/hooks/useArtifactStream";
import { OpenAPI } from "@/api/core/OpenAPI";

interface ExecutionArtifactsPanelProps {
  executionId: number;
  /** Whether the execution is still running (enables polling) */
  isRunning?: boolean;
  defaultCollapsed?: boolean;
}

function getArtifactTypeIcon(type: ArtifactType) {
  switch (type) {
    case "file_text":
      return <FileText className="h-4 w-4 text-blue-500" />;
    case "file_image":
      return <FileImage className="h-4 w-4 text-purple-500" />;
    case "file_binary":
      return <File className="h-4 w-4 text-gray-500" />;
    case "file_datatable":
      return <Table2 className="h-4 w-4 text-green-500" />;
    case "progress":
      return <BarChart3 className="h-4 w-4 text-amber-500" />;
    case "url":
      return <LinkIcon className="h-4 w-4 text-cyan-500" />;
    case "other":
    default:
      return <Package className="h-4 w-4 text-gray-400" />;
  }
}

function getArtifactTypeBadge(type: ArtifactType): {
  label: string;
  classes: string;
} {
  switch (type) {
    case "file_text":
      return { label: "Text File", classes: "bg-blue-100 text-blue-800" };
    case "file_image":
      return { label: "Image", classes: "bg-purple-100 text-purple-800" };
    case "file_binary":
      return { label: "Binary", classes: "bg-gray-100 text-gray-800" };
    case "file_datatable":
      return { label: "Data Table", classes: "bg-green-100 text-green-800" };
    case "progress":
      return { label: "Progress", classes: "bg-amber-100 text-amber-800" };
    case "url":
      return { label: "URL", classes: "bg-cyan-100 text-cyan-800" };
    case "other":
    default:
      return { label: "Other", classes: "bg-gray-100 text-gray-700" };
  }
}

function formatBytes(bytes: number | null): string {
  if (bytes == null || bytes === 0) return "—";
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

/** Download the latest version of an artifact using a fetch with auth token. */
async function downloadArtifact(artifactId: number, artifactRef: string) {
  const token = localStorage.getItem("access_token");
  const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;

  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
    },
  });

  if (!response.ok) {
    console.error(`Download failed: ${response.status} ${response.statusText}`);
    return;
  }

  // Extract filename from Content-Disposition header or fall back to ref
  const disposition = response.headers.get("Content-Disposition");
  let filename = artifactRef.replace(/\./g, "_") + ".bin";
  if (disposition) {
    const match = disposition.match(/filename="?([^"]+)"?/);
    if (match) filename = match[1];
  }

  const blob = await response.blob();
  const blobUrl = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = blobUrl;
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(blobUrl);
}
// ============================================================================
// Text File Artifact Detail
// ============================================================================

interface TextFileDetailProps {
  artifactId: number;
  artifactName: string | null;
  isRunning?: boolean;
  onClose: () => void;
}

function TextFileDetail({
  artifactId,
  artifactName,
  isRunning = false,
  onClose,
}: TextFileDetailProps) {
  const [content, setContent] = useState<string | null>(null);
  const [loadError, setLoadError] = useState<string | null>(null);
  const [isLoadingContent, setIsLoadingContent] = useState(true);

  const fetchContent = useCallback(async () => {
    const token = localStorage.getItem("access_token");
    const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;
    try {
      const response = await fetch(url, {
        headers: { Authorization: `Bearer ${token}` },
      });
      if (!response.ok) {
        setLoadError(`HTTP ${response.status}: ${response.statusText}`);
        setIsLoadingContent(false);
        return;
      }
      const text = await response.text();
      setContent(text);
      setLoadError(null);
    } catch (e) {
      setLoadError(e instanceof Error ? e.message : "Unknown error");
    } finally {
      setIsLoadingContent(false);
    }
  }, [artifactId]);

  // Initial load
  useEffect(() => {
    fetchContent();
  }, [fetchContent]);

  // Poll while running to pick up new file versions
  useEffect(() => {
    if (!isRunning) return;
    const interval = setInterval(fetchContent, 3000);
    return () => clearInterval(interval);
  }, [isRunning, fetchContent]);

  return (
    <div className="border border-blue-200 bg-blue-50/50 rounded-lg p-4 mt-2">
      <div className="flex items-center justify-between mb-3">
        <h4 className="text-sm font-semibold text-blue-900 flex items-center gap-2">
          <FileText className="h-4 w-4" />
          {artifactName ?? "Text File"}
        </h4>
        <div className="flex items-center gap-2">
          {isRunning && (
            <div className="flex items-center gap-1 text-xs text-blue-600">
              <Loader2 className="h-3 w-3 animate-spin" />
              <span>Live</span>
            </div>
          )}
          <button
            onClick={onClose}
            className="text-gray-400 hover:text-gray-600 p-1 rounded"
          >
            <X className="h-4 w-4" />
          </button>
        </div>
      </div>

      {isLoadingContent && (
        <div className="flex items-center gap-2 py-2 text-sm text-gray-500">
          <Loader2 className="h-4 w-4 animate-spin" />
          Loading content…
        </div>
      )}

      {loadError && (
        <p className="text-xs text-red-600 italic">Error: {loadError}</p>
      )}

      {!isLoadingContent && !loadError && content !== null && (
        <pre className="max-h-64 overflow-y-auto bg-gray-900 text-gray-100 rounded p-3 text-xs font-mono whitespace-pre-wrap break-all">
          {content || <span className="text-gray-500 italic">(empty)</span>}
        </pre>
      )}
    </div>
  );
}
// ============================================================================
// Progress Artifact Detail
// ============================================================================

interface ProgressDetailProps {
  artifactId: number;
  onClose: () => void;
}

function ProgressDetail({ artifactId, onClose }: ProgressDetailProps) {
  const { data: artifactData, isLoading } = useArtifact(artifactId);
  const artifact = artifactData?.data;

  const progressEntries = useMemo(() => {
    if (!artifact?.data || !Array.isArray(artifact.data)) return [];
    return artifact.data as Array<Record<string, unknown>>;
  }, [artifact]);

  const latestEntry =
    progressEntries.length > 0
      ? progressEntries[progressEntries.length - 1]
      : null;
  const latestPercent =
    latestEntry && typeof latestEntry.percent === "number"
      ? latestEntry.percent
      : null;

  return (
    <div className="border border-amber-200 bg-amber-50/50 rounded-lg p-4 mt-2">
      <div className="flex items-center justify-between mb-3">
        <h4 className="text-sm font-semibold text-amber-900 flex items-center gap-2">
          <BarChart3 className="h-4 w-4" />
          {artifact?.name ?? "Progress"}
        </h4>
        <button
          onClick={onClose}
          className="text-gray-400 hover:text-gray-600 p-1 rounded"
        >
          <X className="h-4 w-4" />
        </button>
      </div>

      {isLoading && (
        <div className="flex items-center gap-2 py-2 text-sm text-gray-500">
          <Loader2 className="h-4 w-4 animate-spin" />
          Loading progress…
        </div>
      )}

      {!isLoading && latestPercent != null && (
        <div className="mb-3">
          <div className="flex items-center justify-between text-xs text-gray-600 mb-1">
            <span>
              {latestEntry?.message
                ? String(latestEntry.message)
                : `${latestPercent}%`}
            </span>
            <span className="font-mono">{latestPercent}%</span>
          </div>
          <div className="w-full bg-gray-200 rounded-full h-2.5">
            <div
              className="bg-amber-500 h-2.5 rounded-full transition-all duration-300"
              style={{ width: `${Math.min(latestPercent, 100)}%` }}
            />
          </div>
        </div>
      )}

      {!isLoading && progressEntries.length > 0 && (
        <div className="max-h-48 overflow-y-auto">
          <table className="w-full text-xs">
            <thead>
              <tr className="text-left text-gray-500 border-b border-amber-200">
                <th className="pb-1 pr-2">#</th>
                <th className="pb-1 pr-2">%</th>
                <th className="pb-1 pr-2">Message</th>
                <th className="pb-1">Time</th>
              </tr>
            </thead>
            <tbody>
              {progressEntries.map((entry, idx) => (
                <tr
                  key={idx}
                  className="border-b border-amber-100 last:border-0"
                >
                  <td className="py-1 pr-2 text-gray-400 font-mono">
                    {typeof entry.iteration === "number"
                      ? entry.iteration
                      : idx + 1}
                  </td>
                  <td className="py-1 pr-2 font-mono">
                    {typeof entry.percent === "number"
                      ? `${entry.percent}%`
                      : "—"}
                  </td>
                  <td className="py-1 pr-2 text-gray-700 truncate max-w-[200px]">
                    {entry.message ? String(entry.message) : "—"}
                  </td>
                  <td className="py-1 text-gray-400 whitespace-nowrap">
                    {entry.timestamp
                      ? formatDistanceToNow(new Date(String(entry.timestamp)), {
                          addSuffix: true,
                        })
                      : "—"}
                  </td>
                </tr>
              ))}
            </tbody>
          </table>
        </div>
      )}

      {!isLoading && progressEntries.length === 0 && (
        <p className="text-xs text-gray-500 italic">No progress entries yet.</p>
      )}
    </div>
  );
}
// ============================================================================
// Main Panel
// ============================================================================

export default function ExecutionArtifactsPanel({
  executionId,
  isRunning = false,
  defaultCollapsed = false,
}: ExecutionArtifactsPanelProps) {
  const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed);
  const [expandedProgressId, setExpandedProgressId] = useState<number | null>(
    null,
  );
  const [expandedTextFileId, setExpandedTextFileId] = useState<number | null>(
    null,
  );

  // Subscribe to real-time artifact notifications for this execution.
  // WebSocket-driven cache invalidation replaces most of the polling need,
  // but we keep polling as a fallback (staleTime/refetchInterval in the hook).
  useArtifactStream({ executionId, enabled: isRunning });

  const { data, isLoading, error } = useExecutionArtifacts(
    executionId,
    isRunning,
  );

  const artifacts: ArtifactSummary[] = useMemo(() => {
    return data?.data ?? [];
  }, [data]);

  const summary = useMemo(() => {
    const total = artifacts.length;
    const files = artifacts.filter((a) =>
      ["file_text", "file_binary", "file_image", "file_datatable"].includes(
        a.type,
      ),
    ).length;
    const progress = artifacts.filter((a) => a.type === "progress").length;
    const other = total - files - progress;
    return { total, files, progress, other };
  }, [artifacts]);

  // Don't render anything if there are no artifacts and we're not loading
  if (!isLoading && artifacts.length === 0 && !error) {
    return null;
  }

  return (
    <div className="bg-white shadow rounded-lg">
      {/* Header */}
      <button
        onClick={() => setIsCollapsed(!isCollapsed)}
        className="w-full flex items-center justify-between p-6 text-left hover:bg-gray-50 rounded-lg transition-colors"
      >
        <div className="flex items-center gap-3">
          {isCollapsed ? (
            <ChevronRight className="h-5 w-5 text-gray-400" />
          ) : (
            <ChevronDown className="h-5 w-5 text-gray-400" />
          )}
          <Package className="h-5 w-5 text-indigo-500" />
          <h2 className="text-xl font-semibold">Artifacts</h2>
          {!isLoading && (
            <span className="text-sm text-gray-500">
              ({summary.total} artifact{summary.total !== 1 ? "s" : ""})
            </span>
          )}
          {isRunning && (
            <div className="flex items-center gap-1.5 text-xs text-blue-600">
              <Loader2 className="h-3 w-3 animate-spin" />
              <span>Live</span>
            </div>
          )}
        </div>

        {/* Summary badges */}
        <div className="flex items-center gap-2">
          {summary.files > 0 && (
            <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-blue-100 text-blue-800">
              <FileText className="h-3 w-3" />
              {summary.files}
            </span>
          )}
          {summary.progress > 0 && (
            <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-amber-100 text-amber-800">
              <BarChart3 className="h-3 w-3" />
              {summary.progress}
            </span>
          )}
          {summary.other > 0 && (
            <span className="inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-xs font-medium bg-gray-100 text-gray-700">
              {summary.other}
            </span>
          )}
        </div>
      </button>

      {/* Content */}
      {!isCollapsed && (
        <div className="px-6 pb-6">
          {isLoading && (
            <div className="flex items-center justify-center py-8">
              <Loader2 className="h-5 w-5 animate-spin text-gray-400" />
              <span className="ml-2 text-sm text-gray-500">
                Loading artifacts…
              </span>
            </div>
          )}

          {error && (
            <div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded text-sm">
              Error loading artifacts:{" "}
              {error instanceof Error ? error.message : "Unknown error"}
            </div>
          )}

          {!isLoading && !error && artifacts.length > 0 && (
            <div className="space-y-2">
              {/* Column headers */}
              <div className="grid grid-cols-12 gap-3 px-3 py-2 text-xs font-medium text-gray-500 uppercase tracking-wider border-b border-gray-100">
                <div className="col-span-1">Type</div>
                <div className="col-span-4">Name</div>
                <div className="col-span-3">Ref</div>
                <div className="col-span-1">Size</div>
                <div className="col-span-2">Created</div>
                <div className="col-span-1">Actions</div>
              </div>

              {/* Artifact rows */}
              {artifacts.map((artifact) => {
                const badge = getArtifactTypeBadge(artifact.type);
                const isProgress = artifact.type === "progress";
                const isTextFile = artifact.type === "file_text";
                const isFile = [
                  "file_text",
                  "file_binary",
                  "file_image",
                  "file_datatable",
                ].includes(artifact.type);
                const isProgressExpanded = expandedProgressId === artifact.id;
                const isTextExpanded = expandedTextFileId === artifact.id;

                return (
                  <div key={artifact.id}>
                    <div
                      className={`grid grid-cols-12 gap-3 px-3 py-3 rounded-lg hover:bg-gray-50 transition-colors items-center ${
                        isProgress || isTextFile ? "cursor-pointer" : ""
                      }`}
                      onClick={() => {
                        if (isProgress) {
                          setExpandedProgressId(
                            isProgressExpanded ? null : artifact.id,
                          );
                          setExpandedTextFileId(null);
                        } else if (isTextFile) {
                          setExpandedTextFileId(
                            isTextExpanded ? null : artifact.id,
                          );
                          setExpandedProgressId(null);
                        }
                      }}
                    >
                      {/* Type icon */}
                      <div className="col-span-1 flex items-center">
                        {getArtifactTypeIcon(artifact.type)}
                      </div>

                      {/* Name */}
                      <div className="col-span-4 flex items-center gap-2 min-w-0">
                        <span
                          className="text-sm font-medium text-gray-900 truncate"
                          title={artifact.name ?? artifact.ref}
                        >
                          {artifact.name ?? artifact.ref}
                        </span>
                        <span
                          className={`inline-flex px-1.5 py-0.5 rounded text-[10px] font-medium flex-shrink-0 ${badge.classes}`}
                        >
                          {badge.label}
                        </span>
                      </div>

                      {/* Ref */}
                      <div className="col-span-3 min-w-0">
                        <span
                          className="text-xs text-gray-500 truncate block font-mono"
                          title={artifact.ref}
                        >
                          {artifact.ref}
                        </span>
                      </div>

                      {/* Size */}
                      <div className="col-span-1 text-sm text-gray-500">
                        {formatBytes(artifact.size_bytes)}
                      </div>

                      {/* Created */}
                      <div className="col-span-2 text-xs text-gray-500">
                        {formatDistanceToNow(new Date(artifact.created), {
                          addSuffix: true,
                        })}
                      </div>

                      {/* Actions */}
                      <div
                        className="col-span-1 flex items-center gap-1"
                        onClick={(e) => e.stopPropagation()}
                      >
                        {isProgress && (
                          <button
                            onClick={() => {
                              setExpandedProgressId(
                                isProgressExpanded ? null : artifact.id,
                              );
                              setExpandedTextFileId(null);
                            }}
                            className="p-1 rounded hover:bg-gray-200 text-gray-500 hover:text-amber-600"
                            title="View progress"
                          >
                            <Eye className="h-4 w-4" />
                          </button>
                        )}
                        {isTextFile && (
                          <button
                            onClick={() => {
                              setExpandedTextFileId(
                                isTextExpanded ? null : artifact.id,
                              );
                              setExpandedProgressId(null);
                            }}
                            className="p-1 rounded hover:bg-gray-200 text-gray-500 hover:text-blue-600"
                            title="Preview text content"
                          >
                            <Eye className="h-4 w-4" />
                          </button>
                        )}
                        {isFile && (
                          <button
                            onClick={() =>
                              downloadArtifact(artifact.id, artifact.ref)
                            }
                            className="p-1 rounded hover:bg-gray-200 text-gray-500 hover:text-blue-600"
                            title="Download latest version"
                          >
                            <Download className="h-4 w-4" />
                          </button>
                        )}
                      </div>
                    </div>

                    {/* Expanded progress detail */}
                    {isProgress && isProgressExpanded && (
                      <div className="px-3">
                        <ProgressDetail
                          artifactId={artifact.id}
                          onClose={() => setExpandedProgressId(null)}
                        />
                      </div>
                    )}

                    {/* Expanded text file preview */}
                    {isTextFile && isTextExpanded && (
                      <div className="px-3">
                        <TextFileDetail
                          artifactId={artifact.id}
                          artifactName={artifact.name}
                          isRunning={isRunning}
                          onClose={() => setExpandedTextFileId(null)}
                        />
                      </div>
                    )}
                  </div>
                );
              })}
            </div>
          )}
        </div>
      )}
    </div>
  );
}
web/src/components/executions/ExecutionProgressBar.tsx (new file, 109 lines)
@@ -0,0 +1,109 @@
import { useMemo } from "react";
import { BarChart3 } from "lucide-react";
import {
  useExecutionArtifacts,
  type ArtifactSummary,
} from "@/hooks/useArtifacts";
import { useArtifactStream, useArtifactProgress } from "@/hooks/useArtifactStream";

interface ExecutionProgressBarProps {
  executionId: number;
  /** Whether the execution is still running (enables real-time updates) */
  isRunning: boolean;
}

/**
 * Inline progress bar for executions that have progress-type artifacts.
 *
 * Combines two data sources for responsiveness:
 * 1. **Polling**: `useExecutionArtifacts` fetches the artifact list periodically
 *    so we can detect when a progress artifact first appears and read its initial state.
 * 2. **WebSocket**: `useArtifactStream` subscribes to real-time `artifact_updated`
 *    notifications, which include the latest `progress_percent` and `progress_message`
 *    extracted by the database trigger — providing instant updates between polls.
 *
 * The WebSocket-pushed summary takes precedence when available (it's newer), with
 * the polled data as a fallback for the initial render before any WS message arrives.
 *
 * Renders nothing if no progress artifact exists for this execution.
 */
export default function ExecutionProgressBar({
  executionId,
  isRunning,
}: ExecutionProgressBarProps) {
  // Subscribe to real-time artifact updates for this execution
  useArtifactStream({ executionId, enabled: isRunning });

  // Read the latest progress pushed via WebSocket (no API call)
  const wsSummary = useArtifactProgress(executionId);

  // Poll-based artifact list (fallback + initial detection)
  const { data } = useExecutionArtifacts(
    executionId,
    isRunning,
  );

  // Find progress artifacts from the polled data
  const progressArtifact = useMemo<ArtifactSummary | null>(() => {
    const artifacts = data?.data ?? [];
    return artifacts.find((a) => a.type === "progress") ?? null;
  }, [data]);

  // If there's no progress artifact at all, render nothing
  if (!progressArtifact && !wsSummary) {
    return null;
  }

  // Prefer the WS-pushed summary (more current), fall back to indicating
  // that a progress artifact exists but we haven't received detail yet.
  const percent = wsSummary?.percent ?? null;
  const message = wsSummary?.message ?? null;
  const name = wsSummary?.name ?? progressArtifact?.name ?? "Progress";

  // If we have a progress artifact but no percent yet (first poll, no WS yet),
  // show an indeterminate state
  const hasPercent = percent != null;
  const clampedPercent = hasPercent ? Math.min(Math.max(percent, 0), 100) : 0;
  const isComplete = hasPercent && clampedPercent >= 100;

  return (
    <div className="mt-4 pt-4 border-t border-gray-100">
      <div className="flex items-center gap-2 mb-1.5">
        <BarChart3 className="h-4 w-4 text-amber-500 flex-shrink-0" />
        <span className="text-sm font-medium text-gray-700 truncate">
          {name}
        </span>
        {hasPercent && (
          <span className="text-xs font-mono text-gray-500 ml-auto flex-shrink-0">
            {Math.round(clampedPercent)}%
          </span>
        )}
      </div>

      {/* Progress bar */}
      <div className="w-full bg-gray-200 rounded-full h-2">
        {hasPercent ? (
          <div
            className={`h-2 rounded-full transition-all duration-500 ease-out ${
              isComplete
                ? "bg-green-500"
                : "bg-amber-500"
            }`}
            style={{ width: `${clampedPercent}%` }}
          />
        ) : (
          /* Indeterminate shimmer when we know a progress artifact exists
             but haven't received a percent value yet */
          <div className="h-2 rounded-full bg-amber-300 animate-pulse w-full opacity-40" />
        )}
      </div>

      {/* Message */}
      {message && (
        <p className="text-xs text-gray-500 mt-1 truncate" title={message}>
          {message}
        </p>
      )}
    </div>
  );
}
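The precedence and clamping rules in the component above can be isolated as a pure function (a minimal sketch; `resolveProgress` is a hypothetical name, not part of the diff):

```typescript
// Hypothetical helper mirroring ExecutionProgressBar's rules: the WS-pushed
// summary wins when present; a polled artifact without a percent renders an
// indeterminate bar; percent is clamped into [0, 100].
interface WsSummary {
  percent: number | null;
  message: string | null;
}

function resolveProgress(
  ws: WsSummary | null,
  polledArtifactExists: boolean,
): { hasPercent: boolean; clamped: number; indeterminate: boolean } | null {
  // Nothing to render if neither source knows about a progress artifact
  if (!ws && !polledArtifactExists) return null;
  const percent = ws?.percent ?? null;
  const clamped = percent != null ? Math.min(Math.max(percent, 0), 100) : 0;
  return { hasPercent: percent != null, clamped, indeterminate: percent == null };
}
```

With these rules, a report above 100% from a misbehaving action still renders a full bar rather than overflowing the track.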
@@ -16,6 +16,9 @@ import {
   SquareAsterisk,
   KeyRound,
   Home,
+  Paperclip,
+  FolderOpenDot,
+  FolderArchive,
 } from "lucide-react";
 
 // Color mappings for navigation items — defined outside component for stable reference
@@ -113,6 +116,12 @@ const navSections = [
   {
     items: [
       { to: "/keys", label: "Keys & Secrets", icon: KeyRound, color: "gray" },
+      {
+        to: "/artifacts",
+        label: "Artifacts",
+        icon: FolderArchive,
+        color: "gray",
+      },
       {
         to: "/packs",
         label: "Pack Management",
@@ -175,17 +184,20 @@ export default function MainLayout() {
   });
   const [showUserMenu, setShowUserMenu] = useState(false);
 
-  // Persist collapsed state to localStorage
+  // Persist collapsed state to localStorage and close user menu when expanding
   useEffect(() => {
     localStorage.setItem("sidebar-collapsed", isCollapsed.toString());
   }, [isCollapsed]);
 
-  // Close user menu when expanding sidebar
-  useEffect(() => {
-    if (!isCollapsed) {
-      setShowUserMenu(false);
-    }
-  }, [isCollapsed]);
+  const handleToggleCollapse = () => {
+    setIsCollapsed((prev) => {
+      const next = !prev;
+      if (!next) {
+        setShowUserMenu(false);
+      }
+      return next;
+    });
+  };
 
   const handleLogout = () => {
     logout();
@@ -248,7 +260,7 @@ export default function MainLayout() {
         {/* Toggle Button */}
         <div className="px-4 py-3">
           <button
-            onClick={() => setIsCollapsed(!isCollapsed)}
+            onClick={handleToggleCollapse}
             className="flex items-center w-full px-3 py-2 text-gray-400 hover:text-white hover:bg-gray-800 rounded-md transition-colors whitespace-nowrap"
             title={isCollapsed ? "Expand sidebar" : "Collapse sidebar"}
           >
web/src/hooks/useArtifactStream.ts (new file, 136 lines)
@@ -0,0 +1,136 @@
import { useCallback } from "react";
import { useQueryClient } from "@tanstack/react-query";
import { useEntityNotifications } from "@/contexts/WebSocketContext";

interface UseArtifactStreamOptions {
  /**
   * Optional execution ID to filter artifact updates for a specific execution.
   * If not provided, receives updates for all artifacts.
   */
  executionId?: number;

  /**
   * Whether the stream should be active.
   * Defaults to true.
   */
  enabled?: boolean;
}

/**
 * Hook to subscribe to real-time artifact updates via WebSocket.
 *
 * Listens to `artifact_created` and `artifact_updated` notifications from the
 * PostgreSQL LISTEN/NOTIFY system, and invalidates relevant React Query caches
 * so that artifact lists and detail views update in real time.
 *
 * For progress-type artifacts, the notification payload includes a progress
 * summary (`progress_percent`, `progress_message`, `progress_entries`) extracted
 * by the database trigger so that the UI can update inline progress indicators
 * without a separate API call.
 *
 * @example
 * ```tsx
 * // Listen to all artifact updates
 * useArtifactStream();
 *
 * // Listen to artifacts for a specific execution
 * useArtifactStream({ executionId: 123 });
 * ```
 */
export function useArtifactStream(options: UseArtifactStreamOptions = {}) {
  const { executionId, enabled = true } = options;
  const queryClient = useQueryClient();

  const handleNotification = useCallback(
    (notification: any) => {
      const payload = notification.payload as any;

      // If we're filtering by execution ID, only process matching artifacts
      if (executionId && payload?.execution !== executionId) {
        return;
      }

      const artifactId = notification.entity_id;
      const artifactExecution = payload?.execution;

      // Invalidate the specific artifact query (used by ProgressDetail, TextFileDetail)
      queryClient.invalidateQueries({
        queryKey: ["artifacts", artifactId],
      });

      // Invalidate the execution artifacts list query
      if (artifactExecution) {
        queryClient.invalidateQueries({
          queryKey: ["artifacts", "execution", artifactExecution],
        });
      }

      // For progress artifacts, also update cached data directly with the
      // summary from the notification payload to provide instant feedback
      // before the invalidation refetch completes.
      if (payload?.type === "progress" && payload?.progress_percent != null) {
        queryClient.setQueryData(
          ["artifact_progress", artifactExecution],
          (old: any) => ({
            ...old,
            artifactId,
            name: payload.name,
            percent: payload.progress_percent,
            message: payload.progress_message ?? null,
            entries: payload.progress_entries ?? 0,
            timestamp: notification.timestamp,
          }),
        );
      }
    },
    [executionId, queryClient],
  );

  const { connected } = useEntityNotifications(
    "artifact",
    handleNotification,
    enabled,
  );

  return {
    isConnected: connected,
  };
}

/**
 * Lightweight progress summary extracted from artifact WebSocket notifications.
 * Available immediately via the `artifact_progress` query key without an API call.
 */
export interface ArtifactProgressSummary {
  artifactId: number;
  name: string | null;
  percent: number;
  message: string | null;
  entries: number;
  timestamp: string;
}

/**
 * Hook to read the latest progress summary pushed by WebSocket notifications.
 *
 * This does NOT make any API calls — it only reads from the React Query cache
 * which is populated by `useArtifactStream`. Returns `null` if no progress
 * notification has been received yet for the given execution.
 *
 * For the initial load (before any WebSocket message arrives), the component
 * should fall back to the polling-based `useExecutionArtifacts` data.
 */
export function useArtifactProgress(
  executionId: number | undefined,
): ArtifactProgressSummary | null {
  const queryClient = useQueryClient();

  if (!executionId) return null;

  const data = queryClient.getQueryData<ArtifactProgressSummary>([
    "artifact_progress",
    executionId,
  ]);

  return data ?? null;
}
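Per the notifier's WebSocket protocol, the subscription that `useEntityNotifications("artifact", …)` presumably issues under the hood is a single JSON frame; a minimal sketch (the helper name is illustrative, not from the diff):

```typescript
// Builds the client→server subscribe frame for artifact notifications.
// Filter grammar (from the notifier protocol): all | entity_type:<type> |
// entity:<type>:<id> | user:<id> | notification_type:<type>
function artifactSubscribeFrame(): string {
  return JSON.stringify({ type: "subscribe", filter: "entity_type:artifact" });
}
```

Note that the hook above still filters by `executionId` on the client, since an `entity_type:artifact` subscription delivers updates for artifacts of all executions.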
web/src/hooks/useArtifacts.ts (new file, 199 lines)
@@ -0,0 +1,199 @@
import { useQuery, keepPreviousData } from "@tanstack/react-query";
import { OpenAPI } from "@/api/core/OpenAPI";
import { request as __request } from "@/api/core/request";

// Artifact types matching the backend ArtifactType enum
export type ArtifactType =
  | "file_binary"
  | "file_datatable"
  | "file_image"
  | "file_text"
  | "other"
  | "progress"
  | "url";

export type ArtifactVisibility = "public" | "private";

export type OwnerType = "system" | "pack" | "action" | "sensor" | "rule";

export type RetentionPolicyType = "versions" | "days" | "hours" | "minutes";

export interface ArtifactSummary {
  id: number;
  ref: string;
  type: ArtifactType;
  visibility: ArtifactVisibility;
  name: string | null;
  content_type: string | null;
  size_bytes: number | null;
  execution: number | null;
  scope: OwnerType;
  owner: string;
  created: string;
  updated: string;
}

export interface ArtifactResponse {
  id: number;
  ref: string;
  scope: OwnerType;
  owner: string;
  type: ArtifactType;
  visibility: ArtifactVisibility;
  retention_policy: RetentionPolicyType;
  retention_limit: number;
  name: string | null;
  description: string | null;
  content_type: string | null;
  size_bytes: number | null;
  execution: number | null;
  data?: unknown;
  created: string;
  updated: string;
}

export interface ArtifactVersionSummary {
  id: number;
  version: number;
  content_type: string | null;
  size_bytes: number | null;
  created_by: string | null;
  created: string;
}

// ============================================================================
// Search / List params
// ============================================================================

export interface ArtifactsListParams {
  page?: number;
  perPage?: number;
  scope?: OwnerType;
  owner?: string;
  type?: ArtifactType;
  visibility?: ArtifactVisibility;
  execution?: number;
  name?: string;
}

// ============================================================================
// Paginated list response shape
// ============================================================================

export interface PaginatedArtifacts {
  data: ArtifactSummary[];
  pagination: {
    page: number;
    page_size: number;
    total_items: number;
    total_pages: number;
  };
}

// ============================================================================
// Hooks
// ============================================================================

/**
 * Fetch a paginated, filterable list of all artifacts.
 *
 * Uses GET /api/v1/artifacts with query params for server-side filtering.
 */
export function useArtifactsList(params: ArtifactsListParams = {}) {
  return useQuery({
    queryKey: ["artifacts", "list", params],
    queryFn: async () => {
      const query: Record<string, string> = {};
      if (params.page) query.page = String(params.page);
      if (params.perPage) query.per_page = String(params.perPage);
      if (params.scope) query.scope = params.scope;
      if (params.owner) query.owner = params.owner;
      if (params.type) query.type = params.type;
      if (params.visibility) query.visibility = params.visibility;
      if (params.execution) query.execution = String(params.execution);
      if (params.name) query.name = params.name;

      const response = await __request<PaginatedArtifacts>(OpenAPI, {
        method: "GET",
        url: "/api/v1/artifacts",
        query,
      });
      return response;
    },
    staleTime: 10000,
    placeholderData: keepPreviousData,
  });
}

/**
 * Fetch all artifacts for a given execution ID.
 *
 * Uses the GET /api/v1/executions/{execution_id}/artifacts endpoint.
 */
export function useExecutionArtifacts(
  executionId: number | undefined,
  isRunning = false,
) {
  return useQuery({
    queryKey: ["artifacts", "execution", executionId],
    queryFn: async () => {
      const response = await __request<{ data: ArtifactSummary[] }>(OpenAPI, {
        method: "GET",
        url: "/api/v1/executions/{execution_id}/artifacts",
        path: {
          execution_id: executionId!,
        },
      });
      return response;
    },
    enabled: !!executionId,
    staleTime: isRunning ? 3000 : 10000,
    refetchInterval: isRunning ? 3000 : 10000,
  });
}

/**
 * Fetch a single artifact by ID (includes data field for progress artifacts).
 */
export function useArtifact(id: number | undefined) {
  return useQuery({
    queryKey: ["artifacts", id],
    queryFn: async () => {
      const response = await __request<{ data: ArtifactResponse }>(OpenAPI, {
        method: "GET",
        url: "/api/v1/artifacts/{id}",
        path: {
          id: id!,
        },
      });
      return response;
    },
    enabled: !!id,
    staleTime: 3000,
    refetchInterval: 3000,
  });
}

/**
 * Fetch versions for a given artifact ID.
 */
export function useArtifactVersions(artifactId: number | undefined) {
  return useQuery({
    queryKey: ["artifacts", artifactId, "versions"],
    queryFn: async () => {
      const response = await __request<{ data: ArtifactVersionSummary[] }>(
        OpenAPI,
        {
          method: "GET",
          url: "/api/v1/artifacts/{id}/versions",
          path: {
            id: artifactId!,
          },
        },
      );
      return response;
    },
    enabled: !!artifactId,
    staleTime: 10000,
  });
}
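The query-string assembly inside `useArtifactsList` can be sketched as a standalone function (hypothetical extraction for illustration; parameter names match `ArtifactsListParams`):

```typescript
// Converts typed list params into the string map the request layer expects.
// Note the camelCase→snake_case rename for perPage, matching the hook.
function buildArtifactQuery(params: {
  page?: number;
  perPage?: number;
  type?: string;
  name?: string;
}): Record<string, string> {
  const query: Record<string, string> = {};
  if (params.page) query.page = String(params.page);
  if (params.perPage) query.per_page = String(params.perPage);
  if (params.type) query.type = params.type;
  if (params.name) query.name = params.name;
  return query;
}
```

Because the params object is also the query key, any change to these filters produces a distinct cache entry, and `keepPreviousData` keeps the prior page rendered while the next one loads.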
web/src/pages/artifacts/ArtifactDetailPage.tsx (new file, 705 lines)
@@ -0,0 +1,705 @@
|
|||||||
|
import { useState, useMemo, useCallback, useEffect } from "react";
|
||||||
|
import { useParams, Link } from "react-router-dom";
|
||||||
|
import {
|
||||||
|
ArrowLeft,
|
||||||
|
Download,
|
||||||
|
Eye,
|
||||||
|
EyeOff,
|
||||||
|
Loader2,
|
||||||
|
FileText,
|
||||||
|
Clock,
|
||||||
|
Hash,
|
||||||
|
X,
|
||||||
|
} from "lucide-react";
|
||||||
|
import {
|
||||||
|
useArtifact,
|
||||||
|
useArtifactVersions,
|
||||||
|
type ArtifactResponse,
|
||||||
|
type ArtifactVersionSummary,
|
||||||
|
} from "@/hooks/useArtifacts";
|
||||||
|
import { useArtifactStream } from "@/hooks/useArtifactStream";
|
||||||
|
import { OpenAPI } from "@/api/core/OpenAPI";
|
||||||
|
import {
|
||||||
|
getArtifactTypeIcon,
|
||||||
|
getArtifactTypeBadge,
|
||||||
|
getScopeBadge,
|
||||||
|
formatBytes,
|
||||||
|
formatDate,
|
||||||
|
downloadArtifact,
|
||||||
|
isDownloadable,
|
||||||
|
} from "./artifactHelpers";
|
||||||
|
|
||||||
|
// ============================================================================
|
||||||
|
// Text content viewer
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
function TextContentViewer({
|
||||||
|
artifactId,
|
||||||
|
versionId,
|
||||||
|
label,
|
||||||
|
}: {
|
||||||
|
artifactId: number;
|
||||||
|
versionId?: number;
|
||||||
|
label: string;
|
||||||
|
}) {
|
||||||
|
// Track a fetch key so that when deps change we re-derive initial state
|
||||||
|
// instead of calling setState synchronously inside useEffect.
|
||||||
|
const fetchKey = `${artifactId}:${versionId ?? "latest"}`;
|
||||||
|
const [settledKey, setSettledKey] = useState<string | null>(null);
|
||||||
|
const [content, setContent] = useState<string | null>(null);
|
||||||
|
const [loadError, setLoadError] = useState<string | null>(null);
|
||||||
|
|
||||||
|
const isLoading = settledKey !== fetchKey;
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
let cancelled = false;
|
||||||
|
|
||||||
|
const token = localStorage.getItem("access_token");
|
||||||
|
const url = versionId
|
||||||
|
? `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/versions/${versionId}/download`
|
||||||
|
: `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;
|
||||||
|
|
||||||
|
fetch(url, { headers: { Authorization: `Bearer ${token}` } })
|
||||||
|
.then(async (response) => {
|
||||||
|
if (cancelled) return;
|
||||||
|
if (!response.ok) {
|
||||||
|
setLoadError(`HTTP ${response.status}: ${response.statusText}`);
|
||||||
|
setContent(null);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
const text = await response.text();
|
||||||
|
setContent(text);
|
||||||
|
setLoadError(null);
|
||||||
|
})
|
||||||
|
.catch((e) => {
|
||||||
|
if (!cancelled) {
|
||||||
|
setLoadError(e instanceof Error ? e.message : "Unknown error");
|
||||||
|
setContent(null);
|
||||||
|
}
|
||||||
|
})
|
||||||
|
.finally(() => {
|
||||||
|
if (!cancelled) setSettledKey(fetchKey);
|
||||||
|
});
|
||||||
|
|
||||||
|
return () => {
|
||||||
|
cancelled = true;
|
||||||
|
};
|
||||||
|
}, [artifactId, versionId, fetchKey]);
|
||||||
|
|
||||||
|
if (isLoading) {
|
||||||
|
return (
|
||||||
|
<div className="flex items-center gap-2 py-4 text-sm text-gray-500">
|
||||||
|
<Loader2 className="h-4 w-4 animate-spin" />
|
||||||
|
Loading {label}...
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (loadError) {
|
||||||
|
return <div className="py-4 text-sm text-red-600">Error: {loadError}</div>;
|
||||||
|
}
|
||||||
|
|
||||||
|
return (
|
||||||
|
<pre className="max-h-96 overflow-y-auto bg-gray-900 text-gray-100 rounded-lg p-4 text-xs font-mono whitespace-pre-wrap break-all">
|
||||||
|
{content || <span className="text-gray-500 italic">(empty)</span>}
|
||||||
|
</pre>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ============================================================================
|
||||||
|
// Progress viewer
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
function ProgressViewer({ data }: { data: unknown }) {
|
||||||
|
const entries = useMemo(() => {
|
||||||
|
if (!data || !Array.isArray(data)) return [];
|
||||||
|
return data as Array<Record<string, unknown>>;
|
||||||
|
}, [data]);
|
||||||
|
|
||||||
|
const latestEntry = entries.length > 0 ? entries[entries.length - 1] : null;
|
||||||
|
const latestPercent =
|
||||||
|
latestEntry && typeof latestEntry.percent === "number"
|
||||||
|
? latestEntry.percent
|
||||||
|
: null;
|
||||||
|
|
||||||
|
if (entries.length === 0) {
|
||||||
|
return (
|
||||||
|
<p className="text-sm text-gray-500 italic">No progress entries yet.</p>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{latestPercent != null && (
|
||||||
|
<div className="mb-4">
|
||||||
|
<div className="flex items-center justify-between text-sm text-gray-600 mb-1">
|
||||||
|
<span>
|
||||||
|
{latestEntry?.message
|
||||||
|
? String(latestEntry.message)
|
||||||
|
: `${latestPercent}%`}
|
||||||
|
</span>
|
||||||
|
<span className="font-mono">{latestPercent}%</span>
|
||||||
|
</div>
|
||||||
|
<div className="w-full bg-gray-200 rounded-full h-3">
|
||||||
|
<div
|
||||||
|
className="bg-amber-500 h-3 rounded-full transition-all duration-300"
|
||||||
|
style={{ width: `${Math.min(latestPercent, 100)}%` }}
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
<div className="max-h-64 overflow-y-auto">
|
||||||
|
<table className="w-full text-sm">
|
||||||
|
<thead>
|
||||||
|
<tr className="text-left text-gray-500 border-b border-gray-200">
|
||||||
|
<th className="pb-2 pr-3">#</th>
|
||||||
|
<th className="pb-2 pr-3">%</th>
|
||||||
|
<th className="pb-2 pr-3">Message</th>
|
||||||
|
<th className="pb-2">Time</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody>
|
||||||
|
{entries.map((entry, idx) => (
|
||||||
|
<tr key={idx} className="border-b border-gray-100 last:border-0">
|
||||||
|
<td className="py-1.5 pr-3 text-gray-400 font-mono">
|
||||||
|
{typeof entry.iteration === "number"
|
||||||
|
? entry.iteration
|
||||||
|
: idx + 1}
|
||||||
|
</td>
|
||||||
|
<td className="py-1.5 pr-3 font-mono">
|
||||||
|
{typeof entry.percent === "number"
|
||||||
|
? `${entry.percent}%`
|
||||||
|
: "\u2014"}
|
||||||
|
</td>
|
||||||
|
<td className="py-1.5 pr-3 text-gray-700 truncate max-w-[300px]">
|
||||||
|
{entry.message ? String(entry.message) : "\u2014"}
|
||||||
|
</td>
|
||||||
|
<td className="py-1.5 text-gray-400 whitespace-nowrap">
|
||||||
|
{entry.timestamp
|
||||||
|
? new Date(String(entry.timestamp)).toLocaleTimeString()
|
||||||
|
: "\u2014"}
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ============================================================================
|
||||||
|
// Version row
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
function VersionRow({
|
||||||
|
version,
|
||||||
|
artifactId,
|
||||||
|
artifactRef,
|
||||||
|
artifactType,
|
||||||
|
}: {
|
||||||
|
version: ArtifactVersionSummary;
|
||||||
|
artifactId: number;
|
||||||
|
artifactRef: string;
|
||||||
|
artifactType: string;
|
||||||
|
}) {
|
||||||
|
const [showPreview, setShowPreview] = useState(false);
|
||||||
|
const canPreview = artifactType === "file_text";
|
||||||
|
const canDownload =
|
||||||
|
artifactType === "file_text" ||
|
||||||
|
artifactType === "file_image" ||
|
||||||
|
artifactType === "file_binary" ||
|
||||||
|
artifactType === "file_datatable";
|
||||||
|
|
||||||
|
const handleDownload = useCallback(async () => {
|
||||||
|
const token = localStorage.getItem("access_token");
|
||||||
|
const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/versions/${version.id}/download`;
|
||||||
|
|
||||||
|
const response = await fetch(url, {
|
||||||
|
headers: { Authorization: `Bearer ${token}` },
|
||||||
|
});
|
||||||
|
|
||||||
|
if (!response.ok) {
|
||||||
|
console.error(
|
||||||
|
`Download failed: ${response.status} ${response.statusText}`,
|
||||||
|
);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const disposition = response.headers.get("Content-Disposition");
|
||||||
|
    let filename = `${artifactRef.replace(/\./g, "_")}_v${version.version}.bin`;
    if (disposition) {
      const match = disposition.match(/filename="?([^"]+)"?/);
      if (match) filename = match[1];
    }

    const blob = await response.blob();
    const blobUrl = URL.createObjectURL(blob);
    const a = document.createElement("a");
    a.href = blobUrl;
    a.download = filename;
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(blobUrl);
  }, [artifactId, artifactRef, version]);

  return (
    <>
      <tr className="hover:bg-gray-50">
        <td className="px-4 py-3 whitespace-nowrap text-sm font-mono text-gray-900">
          v{version.version}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {version.content_type || "\u2014"}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {formatBytes(version.size_bytes)}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {version.created_by || "\u2014"}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {formatDate(version.created)}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-right">
          <div className="flex items-center justify-end gap-2">
            {canPreview && (
              <button
                onClick={() => setShowPreview(!showPreview)}
                className="text-gray-500 hover:text-blue-600"
                title={showPreview ? "Hide preview" : "Preview content"}
              >
                {showPreview ? (
                  <X className="h-4 w-4" />
                ) : (
                  <FileText className="h-4 w-4" />
                )}
              </button>
            )}
            {canDownload && (
              <button
                onClick={handleDownload}
                className="text-gray-500 hover:text-blue-600"
                title="Download this version"
              >
                <Download className="h-4 w-4" />
              </button>
            )}
          </div>
        </td>
      </tr>
      {showPreview && (
        <tr>
          <td colSpan={6} className="px-4 py-3">
            <TextContentViewer
              artifactId={artifactId}
              versionId={version.id}
              label={`v${version.version}`}
            />
          </td>
        </tr>
      )}
    </>
  );
}

// ============================================================================
// Detail card
// ============================================================================

function MetadataField({
  label,
  children,
}: {
  label: string;
  children: React.ReactNode;
}) {
  return (
    <div>
      <dt className="text-sm font-medium text-gray-500">{label}</dt>
      <dd className="mt-1 text-sm text-gray-900">{children}</dd>
    </div>
  );
}

function ArtifactMetadata({ artifact }: { artifact: ArtifactResponse }) {
  const typeBadge = getArtifactTypeBadge(artifact.type);
  const scopeBadge = getScopeBadge(artifact.scope);

  return (
    <div className="bg-white shadow rounded-lg overflow-hidden">
      <div className="px-6 py-4 border-b border-gray-200">
        <div className="flex items-center justify-between">
          <div className="flex items-center gap-3">
            {getArtifactTypeIcon(artifact.type)}
            <div>
              <h2 className="text-xl font-bold text-gray-900">
                {artifact.name || artifact.ref}
              </h2>
              {artifact.name && (
                <p className="text-sm text-gray-500 font-mono">
                  {artifact.ref}
                </p>
              )}
            </div>
          </div>
          <div className="flex items-center gap-3">
            {isDownloadable(artifact.type) && (
              <button
                onClick={() => downloadArtifact(artifact.id, artifact.ref)}
                className="flex items-center gap-2 px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 transition-colors text-sm"
              >
                <Download className="h-4 w-4" />
                Download Latest
              </button>
            )}
          </div>
        </div>
      </div>

      <div className="px-6 py-5">
        <dl className="grid grid-cols-2 md:grid-cols-4 gap-x-6 gap-y-4">
          <MetadataField label="Type">
            <span
              className={`px-2 py-0.5 inline-flex text-xs leading-5 font-semibold rounded-full ${typeBadge.classes}`}
            >
              {typeBadge.label}
            </span>
          </MetadataField>

          <MetadataField label="Visibility">
            <div className="flex items-center gap-1.5">
              {artifact.visibility === "public" ? (
                <>
                  <Eye className="h-4 w-4 text-green-600" />
                  <span className="text-green-700">Public</span>
                </>
              ) : (
                <>
                  <EyeOff className="h-4 w-4 text-gray-400" />
                  <span className="text-gray-600">Private</span>
                </>
              )}
            </div>
          </MetadataField>

          <MetadataField label="Scope">
            <span
              className={`px-2 py-0.5 inline-flex text-xs leading-5 font-semibold rounded-full ${scopeBadge.classes}`}
            >
              {scopeBadge.label}
            </span>
          </MetadataField>

          <MetadataField label="Owner">
            <span className="font-mono text-sm">
              {artifact.owner || "\u2014"}
            </span>
          </MetadataField>

          <MetadataField label="Execution">
            {artifact.execution ? (
              <Link
                to={`/executions/${artifact.execution}`}
                className="text-blue-600 hover:text-blue-800 font-mono"
              >
                #{artifact.execution}
              </Link>
            ) : (
              <span className="text-gray-400">{"\u2014"}</span>
            )}
          </MetadataField>

          <MetadataField label="Content Type">
            <span className="font-mono text-xs">
              {artifact.content_type || "\u2014"}
            </span>
          </MetadataField>

          <MetadataField label="Size">
            {formatBytes(artifact.size_bytes)}
          </MetadataField>

          <MetadataField label="Retention">
            {artifact.retention_limit} {artifact.retention_policy}
          </MetadataField>

          <MetadataField label="Created">
            <div className="flex items-center gap-1.5">
              <Clock className="h-3.5 w-3.5 text-gray-400" />
              {formatDate(artifact.created)}
            </div>
          </MetadataField>

          <MetadataField label="Updated">
            <div className="flex items-center gap-1.5">
              <Clock className="h-3.5 w-3.5 text-gray-400" />
              {formatDate(artifact.updated)}
            </div>
          </MetadataField>

          {artifact.description && (
            <div className="col-span-2">
              <MetadataField label="Description">
                {artifact.description}
              </MetadataField>
            </div>
          )}
        </dl>
      </div>
    </div>
  );
}

// ============================================================================
// Versions list
// ============================================================================

function ArtifactVersionsList({ artifact }: { artifact: ArtifactResponse }) {
  const { data, isLoading, error } = useArtifactVersions(artifact.id);
  const versions = useMemo(() => data?.data || [], [data]);

  return (
    <div className="bg-white shadow rounded-lg overflow-hidden">
      <div className="px-6 py-4 border-b border-gray-200">
        <div className="flex items-center gap-2">
          <Hash className="h-5 w-5 text-gray-400" />
          <h3 className="text-lg font-semibold text-gray-900">
            Versions
            {versions.length > 0 && (
              <span className="ml-2 text-sm font-normal text-gray-500">
                ({versions.length})
              </span>
            )}
          </h3>
        </div>
      </div>

      {isLoading ? (
        <div className="p-8 text-center">
          <Loader2 className="h-6 w-6 animate-spin mx-auto text-blue-600" />
          <p className="mt-2 text-sm text-gray-600">Loading versions...</p>
        </div>
      ) : error ? (
        <div className="p-8 text-center">
          <p className="text-red-600">Failed to load versions</p>
          <p className="text-sm text-gray-600 mt-1">
            {error instanceof Error ? error.message : "Unknown error"}
          </p>
        </div>
      ) : versions.length === 0 ? (
        <div className="p-8 text-center">
          <p className="text-gray-500">No versions yet</p>
        </div>
      ) : (
        <div className="overflow-x-auto">
          <table className="min-w-full divide-y divide-gray-200">
            <thead className="bg-gray-50">
              <tr>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Version
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Content Type
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Size
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Created By
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Created
                </th>
                <th className="px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Actions
                </th>
              </tr>
            </thead>
            <tbody className="bg-white divide-y divide-gray-200">
              {versions.map((version) => (
                <VersionRow
                  key={version.id}
                  version={version}
                  artifactId={artifact.id}
                  artifactRef={artifact.ref}
                  artifactType={artifact.type}
                />
              ))}
            </tbody>
          </table>
        </div>
      )}
    </div>
  );
}

// ============================================================================
// Inline content preview (progress / text for latest)
// ============================================================================

function InlineContentPreview({ artifact }: { artifact: ArtifactResponse }) {
  if (artifact.type === "progress") {
    return (
      <div className="bg-white shadow rounded-lg overflow-hidden">
        <div className="px-6 py-4 border-b border-gray-200">
          <h3 className="text-lg font-semibold text-gray-900">
            Progress Details
          </h3>
        </div>
        <div className="px-6 py-5">
          <ProgressViewer data={artifact.data} />
        </div>
      </div>
    );
  }

  if (artifact.type === "file_text") {
    return (
      <div className="bg-white shadow rounded-lg overflow-hidden">
        <div className="px-6 py-4 border-b border-gray-200">
          <h3 className="text-lg font-semibold text-gray-900">
            Content Preview (Latest)
          </h3>
        </div>
        <div className="px-6 py-5">
          <TextContentViewer artifactId={artifact.id} label="content" />
        </div>
      </div>
    );
  }

  if (artifact.type === "url" && artifact.data) {
    const urlValue =
      typeof artifact.data === "string"
        ? artifact.data
        : typeof artifact.data === "object" &&
            artifact.data !== null &&
            "url" in (artifact.data as Record<string, unknown>)
          ? String((artifact.data as Record<string, unknown>).url)
          : null;

    if (urlValue) {
      return (
        <div className="bg-white shadow rounded-lg overflow-hidden">
          <div className="px-6 py-4 border-b border-gray-200">
            <h3 className="text-lg font-semibold text-gray-900">URL</h3>
          </div>
          <div className="px-6 py-5">
            <a
              href={urlValue}
              target="_blank"
              rel="noopener noreferrer"
              className="text-blue-600 hover:text-blue-800 underline break-all"
            >
              {urlValue}
            </a>
          </div>
        </div>
      );
    }
  }

  // JSON data preview for other types that have data
  if (artifact.data != null) {
    return (
      <div className="bg-white shadow rounded-lg overflow-hidden">
        <div className="px-6 py-4 border-b border-gray-200">
          <h3 className="text-lg font-semibold text-gray-900">Data</h3>
        </div>
        <div className="px-6 py-5">
          <pre className="max-h-96 overflow-y-auto bg-gray-900 text-gray-100 rounded-lg p-4 text-xs font-mono whitespace-pre-wrap break-all">
            {JSON.stringify(artifact.data, null, 2)}
          </pre>
        </div>
      </div>
    );
  }

  return null;
}

// ============================================================================
// Main page
// ============================================================================

export default function ArtifactDetailPage() {
  const { id } = useParams<{ id: string }>();
  const artifactId = id ? Number(id) : undefined;

  const { data, isLoading, error } = useArtifact(artifactId);
  const artifact = data?.data;

  // Subscribe to real-time updates for this artifact
  useArtifactStream({
    executionId: artifact?.execution ?? undefined,
    enabled: true,
  });

  if (isLoading) {
    return (
      <div className="p-6">
        <div className="flex items-center justify-center h-64">
          <Loader2 className="h-8 w-8 animate-spin text-blue-600" />
          <p className="ml-3 text-gray-600">Loading artifact...</p>
        </div>
      </div>
    );
  }

  if (error || !artifact) {
    return (
      <div className="p-6">
        <div className="mb-6">
          <Link
            to="/artifacts"
            className="flex items-center gap-2 text-gray-600 hover:text-gray-900"
          >
            <ArrowLeft className="h-4 w-4" />
            Back to Artifacts
          </Link>
        </div>
        <div className="bg-white shadow rounded-lg p-12 text-center">
          <p className="text-red-600 text-lg">
            {error ? "Failed to load artifact" : "Artifact not found"}
          </p>
          {error && (
            <p className="text-sm text-gray-600 mt-2">
              {error instanceof Error ? error.message : "Unknown error"}
            </p>
          )}
        </div>
      </div>
    );
  }

  return (
    <div className="p-6">
      {/* Back link */}
      <div className="mb-6">
        <Link
          to="/artifacts"
          className="flex items-center gap-2 text-gray-600 hover:text-gray-900 text-sm"
        >
          <ArrowLeft className="h-4 w-4" />
          Back to Artifacts
        </Link>
      </div>

      {/* Metadata card */}
      <ArtifactMetadata artifact={artifact} />

      {/* Inline content preview */}
      <div className="mt-6">
        <InlineContentPreview artifact={artifact} />
      </div>

      {/* Versions list */}
      <div className="mt-6">
        <ArtifactVersionsList artifact={artifact} />
      </div>
    </div>
  );
}
583
web/src/pages/artifacts/ArtifactsPage.tsx
Normal file
@@ -0,0 +1,583 @@
import { useState, useCallback, useMemo, useEffect, memo } from "react";
import { Link, useSearchParams } from "react-router-dom";
import { Search, X, Eye, EyeOff, Download, Package } from "lucide-react";
import {
  useArtifactsList,
  type ArtifactSummary,
  type ArtifactType,
  type ArtifactVisibility,
  type OwnerType,
} from "@/hooks/useArtifacts";
import { useArtifactStream } from "@/hooks/useArtifactStream";
import {
  TYPE_OPTIONS,
  VISIBILITY_OPTIONS,
  SCOPE_OPTIONS,
  getArtifactTypeIcon,
  getArtifactTypeBadge,
  getScopeBadge,
  formatBytes,
  formatDate,
  formatTime,
  downloadArtifact,
  isDownloadable,
} from "./artifactHelpers";

// ============================================================================
// Results Table (memoized so filter typing doesn't re-render rows)
// ============================================================================

const ArtifactsResultsTable = memo(
  ({
    artifacts,
    isLoading,
    isFetching,
    error,
    hasActiveFilters,
    clearFilters,
    page,
    setPage,
    pageSize,
    total,
  }: {
    artifacts: ArtifactSummary[];
    isLoading: boolean;
    isFetching: boolean;
    error: Error | null;
    hasActiveFilters: boolean;
    clearFilters: () => void;
    page: number;
    setPage: (page: number) => void;
    pageSize: number;
    total: number;
  }) => {
    const totalPages = total ? Math.ceil(total / pageSize) : 0;

    if (isLoading && artifacts.length === 0) {
      return (
        <div className="bg-white shadow rounded-lg">
          <div className="flex items-center justify-center h-64">
            <div className="animate-spin rounded-full h-12 w-12 border-b-2 border-blue-600" />
            <p className="ml-4 text-gray-600">Loading artifacts...</p>
          </div>
        </div>
      );
    }

    if (error && artifacts.length === 0) {
      return (
        <div className="bg-white shadow rounded-lg p-12 text-center">
          <p className="text-red-600">Failed to load artifacts</p>
          <p className="text-sm text-gray-600 mt-2">{error.message}</p>
        </div>
      );
    }

    if (artifacts.length === 0) {
      return (
        <div className="bg-white shadow rounded-lg p-12 text-center">
          <Package className="mx-auto h-12 w-12 text-gray-400" />
          <p className="mt-4 text-gray-600">No artifacts found</p>
          <p className="text-sm text-gray-500 mt-1">
            {hasActiveFilters
              ? "Try adjusting your filters"
              : "Artifacts will appear here when executions produce output"}
          </p>
          {hasActiveFilters && (
            <button
              onClick={clearFilters}
              className="mt-3 text-sm text-blue-600 hover:text-blue-800"
            >
              Clear filters
            </button>
          )}
        </div>
      );
    }

    return (
      <div className="relative">
        {isFetching && (
          <div className="absolute inset-0 bg-white/60 z-10 flex items-center justify-center rounded-lg">
            <div className="animate-spin rounded-full h-8 w-8 border-b-2 border-blue-600" />
          </div>
        )}

        {error && (
          <div className="mb-4 bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded">
            <p>Error refreshing: {error.message}</p>
          </div>
        )}

        <div className="bg-white shadow rounded-lg overflow-hidden">
          <div className="overflow-x-auto">
            <table className="min-w-full divide-y divide-gray-200">
              <thead className="bg-gray-50">
                <tr>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Artifact
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Type
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Visibility
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Scope / Owner
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Execution
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Size
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Created
                  </th>
                  <th className="px-6 py-3 text-right text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Actions
                  </th>
                </tr>
              </thead>
              <tbody className="bg-white divide-y divide-gray-200">
                {artifacts.map((artifact) => {
                  const typeBadge = getArtifactTypeBadge(artifact.type);
                  const scopeBadge = getScopeBadge(artifact.scope);
                  return (
                    <tr key={artifact.id} className="hover:bg-gray-50">
                      <td className="px-6 py-4">
                        <div className="flex items-center gap-2">
                          {getArtifactTypeIcon(artifact.type)}
                          <div className="min-w-0">
                            <Link
                              to={`/artifacts/${artifact.id}`}
                              className="text-sm font-medium text-blue-600 hover:text-blue-800 truncate block"
                            >
                              {artifact.name || artifact.ref}
                            </Link>
                            {artifact.name && (
                              <div className="text-xs text-gray-500 font-mono truncate">
                                {artifact.ref}
                              </div>
                            )}
                          </div>
                        </div>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        <span
                          className={`px-2 py-1 inline-flex text-xs leading-5 font-semibold rounded-full ${typeBadge.classes}`}
                        >
                          {typeBadge.label}
                        </span>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        <div className="flex items-center gap-1.5 text-sm">
                          {artifact.visibility === "public" ? (
                            <>
                              <Eye className="h-3.5 w-3.5 text-green-600" />
                              <span className="text-green-700">Public</span>
                            </>
                          ) : (
                            <>
                              <EyeOff className="h-3.5 w-3.5 text-gray-400" />
                              <span className="text-gray-600">Private</span>
                            </>
                          )}
                        </div>
                      </td>
                      <td className="px-6 py-4">
                        <div>
                          <span
                            className={`px-2 py-0.5 inline-flex text-xs leading-5 font-semibold rounded-full ${scopeBadge.classes}`}
                          >
                            {scopeBadge.label}
                          </span>
                          {artifact.owner && (
                            <div className="text-xs text-gray-500 mt-0.5 font-mono truncate max-w-[160px]">
                              {artifact.owner}
                            </div>
                          )}
                        </div>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        {artifact.execution ? (
                          <Link
                            to={`/executions/${artifact.execution}`}
                            className="text-sm font-mono text-blue-600 hover:text-blue-800"
                          >
                            #{artifact.execution}
                          </Link>
                        ) : (
                          <span className="text-sm text-gray-400 italic">
                            {"\u2014"}
                          </span>
                        )}
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap text-sm text-gray-700">
                        {formatBytes(artifact.size_bytes)}
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        <div className="text-sm text-gray-900">
                          {formatTime(artifact.created)}
                        </div>
                        <div className="text-xs text-gray-500">
                          {formatDate(artifact.created)}
                        </div>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap text-right">
                        <div className="flex items-center justify-end gap-2">
                          <Link
                            to={`/artifacts/${artifact.id}`}
                            className="text-gray-500 hover:text-blue-600"
                            title="View details"
                          >
                            <Eye className="h-4 w-4" />
                          </Link>
                          {isDownloadable(artifact.type) && (
                            <button
                              onClick={() =>
                                downloadArtifact(artifact.id, artifact.ref)
                              }
                              className="text-gray-500 hover:text-blue-600"
                              title="Download latest version"
                            >
                              <Download className="h-4 w-4" />
                            </button>
                          )}
                        </div>
                      </td>
                    </tr>
                  );
                })}
              </tbody>
            </table>
          </div>
        </div>

        {/* Pagination */}
        {totalPages > 1 && (
          <div className="bg-gray-50 px-6 py-4 flex items-center justify-between border-t border-gray-200 rounded-b-lg">
            <div className="flex-1 flex justify-between sm:hidden">
              <button
                onClick={() => setPage(page - 1)}
                disabled={page === 1}
                className="relative inline-flex items-center px-4 py-2 border border-gray-300 text-sm font-medium rounded-md text-gray-700 bg-white hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
              >
                Previous
              </button>
              <button
                onClick={() => setPage(page + 1)}
                disabled={page === totalPages}
                className="ml-3 relative inline-flex items-center px-4 py-2 border border-gray-300 text-sm font-medium rounded-md text-gray-700 bg-white hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
              >
                Next
              </button>
            </div>
            <div className="hidden sm:flex-1 sm:flex sm:items-center sm:justify-between">
              <div>
                <p className="text-sm text-gray-700">
                  Page <span className="font-medium">{page}</span> of{" "}
                  <span className="font-medium">{totalPages}</span>
                </p>
              </div>
              <div>
                <nav className="relative z-0 inline-flex rounded-md shadow-sm -space-x-px">
                  <button
                    onClick={() => setPage(page - 1)}
                    disabled={page === 1}
                    className="relative inline-flex items-center px-2 py-2 rounded-l-md border border-gray-300 bg-white text-sm font-medium text-gray-500 hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
                  >
                    Previous
                  </button>
                  <button
                    onClick={() => setPage(page + 1)}
                    disabled={page === totalPages}
                    className="relative inline-flex items-center px-2 py-2 rounded-r-md border border-gray-300 bg-white text-sm font-medium text-gray-500 hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
                  >
                    Next
                  </button>
                </nav>
              </div>
            </div>
          </div>
        )}
      </div>
    );
  },
);

ArtifactsResultsTable.displayName = "ArtifactsResultsTable";

// ============================================================================
// Main Page
// ============================================================================

export default function ArtifactsPage() {
  const [searchParams] = useSearchParams();

  const [page, setPage] = useState(1);
  const pageSize = 20;

  const [nameFilter, setNameFilter] = useState(searchParams.get("name") || "");
  const [typeFilter, setTypeFilter] = useState<ArtifactType | "">(
    (searchParams.get("type") as ArtifactType) || "",
  );
  const [visibilityFilter, setVisibilityFilter] = useState<
    ArtifactVisibility | ""
  >((searchParams.get("visibility") as ArtifactVisibility) || "");
  const [scopeFilter, setScopeFilter] = useState<OwnerType | "">(
    (searchParams.get("scope") as OwnerType) || "",
  );
  const [ownerFilter, setOwnerFilter] = useState(
    searchParams.get("owner") || "",
  );
  const [executionFilter, setExecutionFilter] = useState(
    searchParams.get("execution") || "",
  );

  // Debounce text inputs
  const [debouncedName, setDebouncedName] = useState(nameFilter);
  const [debouncedOwner, setDebouncedOwner] = useState(ownerFilter);
  const [debouncedExecution, setDebouncedExecution] = useState(executionFilter);

  useEffect(() => {
    const t = setTimeout(() => setDebouncedName(nameFilter), 400);
    return () => clearTimeout(t);
  }, [nameFilter]);

  useEffect(() => {
    const t = setTimeout(() => setDebouncedOwner(ownerFilter), 400);
    return () => clearTimeout(t);
  }, [ownerFilter]);

  useEffect(() => {
    const t = setTimeout(() => setDebouncedExecution(executionFilter), 400);
    return () => clearTimeout(t);
  }, [executionFilter]);

  // Build query params
  const queryParams = useMemo(() => {
    const params: Record<string, unknown> = { page, perPage: pageSize };
    if (debouncedName) params.name = debouncedName;
    if (typeFilter) params.type = typeFilter;
    if (visibilityFilter) params.visibility = visibilityFilter;
    if (scopeFilter) params.scope = scopeFilter;
    if (debouncedOwner) params.owner = debouncedOwner;
    if (debouncedExecution) {
      const n = Number(debouncedExecution);
      if (!isNaN(n)) params.execution = n;
    }
    return params;
  }, [
    page,
    pageSize,
    debouncedName,
    typeFilter,
    visibilityFilter,
    scopeFilter,
    debouncedOwner,
    debouncedExecution,
  ]);

  const { data, isLoading, isFetching, error } = useArtifactsList(queryParams);

  // Subscribe to real-time artifact updates
  useArtifactStream({ enabled: true });

  const artifacts = useMemo(() => data?.data || [], [data]);
  const total = data?.pagination?.total_items || 0;

  const hasActiveFilters =
    !!nameFilter ||
    !!typeFilter ||
    !!visibilityFilter ||
    !!scopeFilter ||
    !!ownerFilter ||
    !!executionFilter;

  const clearFilters = useCallback(() => {
    setNameFilter("");
    setTypeFilter("");
    setVisibilityFilter("");
    setScopeFilter("");
    setOwnerFilter("");
    setExecutionFilter("");
    setPage(1);
  }, []);

  return (
    <div className="p-6">
      {/* Header */}
      <div className="mb-6">
        <div className="flex items-center justify-between">
          <div>
            <h1 className="text-3xl font-bold text-gray-900">Artifacts</h1>
            <p className="mt-2 text-gray-600">
              Files, progress indicators, and data produced by executions
            </p>
          </div>
        </div>
      </div>

      {/* Filters */}
      <div className="bg-white shadow rounded-lg p-4 mb-6">
        <div className="flex items-center justify-between mb-4">
          <div className="flex items-center gap-2">
            <Search className="h-5 w-5 text-gray-400" />
            <h2 className="text-lg font-semibold">Filter Artifacts</h2>
          </div>
          {hasActiveFilters && (
            <button
              onClick={clearFilters}
              className="flex items-center gap-1 text-sm text-gray-600 hover:text-gray-900"
            >
              <X className="h-4 w-4" />
              Clear Filters
            </button>
          )}
        </div>

        <div className="grid grid-cols-1 md:grid-cols-3 lg:grid-cols-6 gap-4">
          {/* Name search */}
          <div>
            <label className="block text-sm font-medium text-gray-700 mb-1">
              Name
            </label>
            <input
              type="text"
              value={nameFilter}
              onChange={(e) => {
                setNameFilter(e.target.value);
                setPage(1);
              }}
              placeholder="Search by name..."
              className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
            />
          </div>

          {/* Type */}
          <div>
            <label className="block text-sm font-medium text-gray-700 mb-1">
              Type
            </label>
            <select
              value={typeFilter}
              onChange={(e) => {
                setTypeFilter(e.target.value as ArtifactType | "");
                setPage(1);
              }}
              className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
            >
              <option value="">All Types</option>
              {TYPE_OPTIONS.map((o) => (
                <option key={o.value} value={o.value}>
                  {o.label}
                </option>
              ))}
            </select>
          </div>

          {/* Visibility */}
          <div>
            <label className="block text-sm font-medium text-gray-700 mb-1">
              Visibility
            </label>
            <select
              value={visibilityFilter}
              onChange={(e) => {
                setVisibilityFilter(e.target.value as ArtifactVisibility | "");
                setPage(1);
              }}
              className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
            >
              <option value="">All</option>
              {VISIBILITY_OPTIONS.map((o) => (
                <option key={o.value} value={o.value}>
                  {o.label}
                </option>
              ))}
            </select>
          </div>

          {/* Scope */}
          <div>
            <label className="block text-sm font-medium text-gray-700 mb-1">
              Scope
            </label>
|
||||||
|
<select
|
||||||
|
value={scopeFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setScopeFilter(e.target.value as OwnerType | "");
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
>
|
||||||
|
<option value="">All Scopes</option>
|
||||||
|
{SCOPE_OPTIONS.map((o) => (
|
||||||
|
<option key={o.value} value={o.value}>
|
||||||
|
{o.label}
|
||||||
|
</option>
|
||||||
|
))}
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Owner */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Owner
|
||||||
|
</label>
|
||||||
|
<input
|
||||||
|
type="text"
|
||||||
|
value={ownerFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setOwnerFilter(e.target.value);
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
placeholder="e.g. mypack.deploy"
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Execution ID */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Execution
|
||||||
|
</label>
|
||||||
|
<input
|
||||||
|
type="text"
|
||||||
|
value={executionFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setExecutionFilter(e.target.value);
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
placeholder="Execution ID"
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{data && (
|
||||||
|
<div className="mt-3 text-sm text-gray-600">
|
||||||
|
Showing {artifacts.length} of {total} artifacts
|
||||||
|
{hasActiveFilters && " (filtered)"}
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Results */}
|
||||||
|
<ArtifactsResultsTable
|
||||||
|
artifacts={artifacts}
|
||||||
|
isLoading={isLoading}
|
||||||
|
isFetching={isFetching}
|
||||||
|
error={error as Error | null}
|
||||||
|
hasActiveFilters={hasActiveFilters}
|
||||||
|
clearFilters={clearFilters}
|
||||||
|
page={page}
|
||||||
|
setPage={setPage}
|
||||||
|
pageSize={pageSize}
|
||||||
|
total={total}
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
190	web/src/pages/artifacts/artifactHelpers.tsx	Normal file
@@ -0,0 +1,190 @@
import {
  FileText,
  FileImage,
  File,
  BarChart3,
  Link as LinkIcon,
  Table2,
  Package,
} from "lucide-react";
import type { ArtifactType, OwnerType } from "@/hooks/useArtifacts";
import { OpenAPI } from "@/api/core/OpenAPI";

// ============================================================================
// Filter option constants
// ============================================================================

export const TYPE_OPTIONS: { value: ArtifactType; label: string }[] = [
  { value: "file_text", label: "Text File" },
  { value: "file_image", label: "Image" },
  { value: "file_binary", label: "Binary" },
  { value: "file_datatable", label: "Data Table" },
  { value: "progress", label: "Progress" },
  { value: "url", label: "URL" },
  { value: "other", label: "Other" },
];

export const VISIBILITY_OPTIONS: { value: string; label: string }[] = [
  { value: "public", label: "Public" },
  { value: "private", label: "Private" },
];

export const SCOPE_OPTIONS: { value: OwnerType; label: string }[] = [
  { value: "system", label: "System" },
  { value: "pack", label: "Pack" },
  { value: "action", label: "Action" },
  { value: "sensor", label: "Sensor" },
  { value: "rule", label: "Rule" },
];

// ============================================================================
// Icon / badge helpers
// ============================================================================

export function getArtifactTypeIcon(type: ArtifactType) {
  switch (type) {
    case "file_text":
      return <FileText className="h-4 w-4 text-blue-500" />;
    case "file_image":
      return <FileImage className="h-4 w-4 text-purple-500" />;
    case "file_binary":
      return <File className="h-4 w-4 text-gray-500" />;
    case "file_datatable":
      return <Table2 className="h-4 w-4 text-green-500" />;
    case "progress":
      return <BarChart3 className="h-4 w-4 text-amber-500" />;
    case "url":
      return <LinkIcon className="h-4 w-4 text-cyan-500" />;
    case "other":
    default:
      return <Package className="h-4 w-4 text-gray-400" />;
  }
}

export function getArtifactTypeBadge(type: ArtifactType): {
  label: string;
  classes: string;
} {
  switch (type) {
    case "file_text":
      return { label: "Text File", classes: "bg-blue-100 text-blue-800" };
    case "file_image":
      return { label: "Image", classes: "bg-purple-100 text-purple-800" };
    case "file_binary":
      return { label: "Binary", classes: "bg-gray-100 text-gray-800" };
    case "file_datatable":
      return { label: "Data Table", classes: "bg-green-100 text-green-800" };
    case "progress":
      return { label: "Progress", classes: "bg-amber-100 text-amber-800" };
    case "url":
      return { label: "URL", classes: "bg-cyan-100 text-cyan-800" };
    case "other":
    default:
      return { label: "Other", classes: "bg-gray-100 text-gray-700" };
  }
}

export function getScopeBadge(scope: OwnerType): {
  label: string;
  classes: string;
} {
  switch (scope) {
    case "system":
      return { label: "System", classes: "bg-purple-100 text-purple-800" };
    case "pack":
      return { label: "Pack", classes: "bg-green-100 text-green-800" };
    case "action":
      return { label: "Action", classes: "bg-yellow-100 text-yellow-800" };
    case "sensor":
      return { label: "Sensor", classes: "bg-indigo-100 text-indigo-800" };
    case "rule":
      return { label: "Rule", classes: "bg-blue-100 text-blue-800" };
    default:
      return { label: scope, classes: "bg-gray-100 text-gray-700" };
  }
}

export function getVisibilityBadge(visibility: string): {
  label: string;
  classes: string;
} {
  if (visibility === "public") {
    return { label: "Public", classes: "text-green-700" };
  }
  return { label: "Private", classes: "text-gray-600" };
}

// ============================================================================
// Formatting helpers
// ============================================================================

export function formatBytes(bytes: number | null): string {
  if (bytes == null || bytes === 0) return "\u2014";
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

export function formatDate(dateString: string) {
  return new Date(dateString).toLocaleString();
}

export function formatTime(timestamp: string) {
  const date = new Date(timestamp);
  const now = new Date();
  const diff = now.getTime() - date.getTime();

  if (diff < 60000) return "just now";
  if (diff < 3600000) return `${Math.floor(diff / 60000)}m ago`;
  if (diff < 86400000) return `${Math.floor(diff / 3600000)}h ago`;
  return date.toLocaleDateString();
}

// ============================================================================
// Download helper
// ============================================================================

export async function downloadArtifact(
  artifactId: number,
  artifactRef: string,
) {
  const token = localStorage.getItem("access_token");
  const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;

  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
    },
  });

  if (!response.ok) {
    console.error(`Download failed: ${response.status} ${response.statusText}`);
    return;
  }

  const disposition = response.headers.get("Content-Disposition");
  let filename = artifactRef.replace(/\./g, "_") + ".bin";
  if (disposition) {
    const match = disposition.match(/filename="?([^"]+)"?/);
    if (match) filename = match[1];
  }

  const blob = await response.blob();
  const blobUrl = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = blobUrl;
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(blobUrl);
}

export function isDownloadable(type: ArtifactType): boolean {
  return (
    type === "file_text" ||
    type === "file_image" ||
    type === "file_binary" ||
    type === "file_datatable"
  );
}
@@ -23,6 +23,8 @@ import { RotateCcw, Loader2 } from "lucide-react";
 import ExecuteActionModal from "@/components/common/ExecuteActionModal";
 import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";
 import WorkflowTasksPanel from "@/components/common/WorkflowTasksPanel";
+import ExecutionArtifactsPanel from "@/components/executions/ExecutionArtifactsPanel";
+import ExecutionProgressBar from "@/components/executions/ExecutionProgressBar";

 const getStatusColor = (status: string) => {
   switch (status) {
@@ -359,6 +361,14 @@ export default function ExecutionDetailPage() {
             </div>
           )}
         </dl>
+
+        {/* Inline progress bar (visible when execution has progress artifacts) */}
+        {isRunning && (
+          <ExecutionProgressBar
+            executionId={execution.id}
+            isRunning={isRunning}
+          />
+        )}
       </div>

       {/* Config/Parameters */}
@@ -539,6 +549,14 @@ export default function ExecutionDetailPage() {
         </div>
       )}
+
+      {/* Artifacts */}
+      <div className="mt-6">
+        <ExecutionArtifactsPanel
+          executionId={execution.id}
+          isRunning={isRunning}
+        />
+      </div>

       {/* Change History */}
       <div className="mt-6">
         <EntityHistoryPanel
122	work-summary/2026-03-02-artifact-content-system.md	Normal file
@@ -0,0 +1,122 @@
# Artifact Content System Implementation

**Date:** 2026-03-02
**Scope:** Database migration, models, repository, API routes, DTOs

## Summary

Implemented a full artifact content management system that allows actions to create, update, and manage artifact files and progress-style artifacts through the API. This builds on the existing `artifact` table (which previously only stored metadata) by adding content storage, versioning, and progress-append semantics.

## Changes

### Database Migration (`migrations/20250101000010_artifact_content.sql`)

- **Enhanced `artifact` table** with new columns:
  - `name` (TEXT) — human-readable artifact name
  - `description` (TEXT) — optional description
  - `content_type` (TEXT) — MIME type
  - `size_bytes` (BIGINT) — size of latest version content
  - `execution` (BIGINT, no FK) — links artifact to the execution that produced it
  - `data` (JSONB) — structured data for progress-type artifacts and metadata
- **Created `artifact_version` table** for immutable content snapshots:
  - `artifact` (FK to artifact, CASCADE delete)
  - `version` (INTEGER, 1-based, monotonically increasing)
  - `content` (BYTEA) — binary file content
  - `content_json` (JSONB) — structured JSON content
  - `meta` (JSONB) — free-form metadata per version
  - `created_by` (TEXT) — who created this version
  - Unique constraint on `(artifact, version)`
- **Helper function** `next_artifact_version()` — auto-assigns next version number
- **Retention trigger** `enforce_artifact_retention()` — auto-deletes oldest versions when count exceeds the artifact's retention limit; also syncs `size_bytes` and `content_type` back to the parent artifact

### Models (`crates/common/src/models.rs`)

- Enhanced `Artifact` struct with new fields: `name`, `description`, `content_type`, `size_bytes`, `execution`, `data`
- Added `SELECT_COLUMNS` constant for consistent query column lists
- Added `ArtifactVersion` model with `SELECT_COLUMNS` (excludes binary content for performance) and `SELECT_COLUMNS_WITH_CONTENT` (includes BYTEA payload)
- Added `ToSchema` derive to `RetentionPolicyType` enum (was missing, needed for OpenAPI)
- Added re-exports for `Artifact` and `ArtifactVersion` in models module

### Repository (`crates/common/src/repositories/artifact.rs`)

- Updated all `ArtifactRepository` queries to use the `SELECT_COLUMNS` constant
- Extended `CreateArtifactInput` and `UpdateArtifactInput` with new fields
- Added `ArtifactSearchFilters` and `ArtifactSearchResult` for paginated search
- Added `search()` method with filters for scope, owner, type, execution, name
- Added `find_by_execution()` for listing artifacts by execution ID
- Added `append_progress()` — atomic JSON array append for progress artifacts
- Added `set_data()` — replace full data payload
- Used macro `push_field!` to DRY up the dynamic UPDATE query builder
- Created `ArtifactVersionRepository` with methods:
  - `find_by_id` / `find_by_id_with_content`
  - `list_by_artifact`
  - `find_latest` / `find_latest_with_content`
  - `find_by_version` / `find_by_version_with_content`
  - `create` (auto-assigns version number via `next_artifact_version()`)
  - `delete` / `delete_all_for_artifact` / `count_by_artifact`

### API DTOs (`crates/api/src/dto/artifact.rs`)

- `CreateArtifactRequest` — with defaults for retention policy (versions) and limit (5)
- `UpdateArtifactRequest` — partial update fields
- `AppendProgressRequest` — single JSON entry to append
- `SetDataRequest` — full data replacement
- `ArtifactResponse` / `ArtifactSummary` — full and list response types
- `CreateVersionJsonRequest` — JSON content for a new version
- `ArtifactVersionResponse` / `ArtifactVersionSummary` — version response types
- `ArtifactQueryParams` — filters with pagination
- Conversion `From` impls for all model → DTO conversions

### API Routes (`crates/api/src/routes/artifacts.rs`)

Endpoints mounted under `/api/v1/`:

| Method | Path | Description |
|--------|------|-------------|
| GET | `/artifacts` | List artifacts with filters and pagination |
| POST | `/artifacts` | Create a new artifact |
| GET | `/artifacts/{id}` | Get artifact by ID |
| PUT | `/artifacts/{id}` | Update artifact metadata |
| DELETE | `/artifacts/{id}` | Delete artifact (cascades to versions) |
| GET | `/artifacts/ref/{ref}` | Get artifact by reference string |
| POST | `/artifacts/{id}/progress` | Append entry to progress artifact |
| PUT | `/artifacts/{id}/data` | Set/replace artifact data |
| GET | `/artifacts/{id}/download` | Download latest version content |
| GET | `/artifacts/{id}/versions` | List all versions |
| POST | `/artifacts/{id}/versions` | Create JSON content version |
| GET | `/artifacts/{id}/versions/latest` | Get latest version metadata |
| POST | `/artifacts/{id}/versions/upload` | Upload binary file (multipart) |
| GET | `/artifacts/{id}/versions/{version}` | Get version metadata |
| DELETE | `/artifacts/{id}/versions/{version}` | Delete a version |
| GET | `/artifacts/{id}/versions/{version}/download` | Download version content |
| GET | `/executions/{execution_id}/artifacts` | List artifacts for execution |

- File upload via `multipart/form-data` with a 50 MB limit
- Content type auto-detection from multipart headers with explicit override
- Download endpoints serve binary with proper Content-Type and Content-Disposition headers
- All endpoints require authentication (`RequireAuth`)

### Wiring

- Added `axum` `multipart` feature to API crate's Cargo.toml
- Registered artifact routes in `routes/mod.rs` and `server.rs`
- Registered DTOs in `dto/mod.rs`
- Registered `ArtifactVersionRepository` in `repositories/mod.rs`

### Test Fixes

- Updated existing `repository_artifact_tests.rs` fixtures to include new fields in `CreateArtifactInput` and `UpdateArtifactInput`

## Design Decisions

1. **Progress vs File artifacts**: Progress artifacts use `artifact.data` (JSONB array, appended atomically in SQL). File artifacts use `artifact_version` rows. This avoids creating a version per progress tick.

2. **Binary in BYTEA**: For simplicity, binary content is stored in PostgreSQL BYTEA. A future enhancement could add external object storage (S3) for large files.

3. **Version auto-numbering**: Uses a SQL function (`next_artifact_version`) for safe concurrent version numbering.

4. **Retention enforcement via trigger**: The `enforce_artifact_retention` trigger runs after each version insert, keeping the version count within the configured limit automatically.

5. **No FK to execution**: Since execution is a TimescaleDB hypertable, `artifact.execution` is a plain BIGINT (consistent with other hypertable references in the project).

6. **SELECT_COLUMNS pattern**: Binary content is excluded from default queries for performance. Separate `*_with_content` methods exist for download endpoints.
45	work-summary/2026-03-02-execution-artifacts-panel.md	Normal file
@@ -0,0 +1,45 @@
# Execution Artifacts Panel & Demo Action

**Date**: 2026-03-02

## Summary

Added an artifacts panel to the execution detail page that displays artifacts created by an execution, with support for file downloads and interactive progress tracking. Also created a Python example action that demonstrates the artifact system by creating both file and progress artifacts.

## Changes

### Web UI — Execution Artifacts Panel

**New files:**
- `web/src/hooks/useArtifacts.ts` — React Query hooks for fetching artifacts by execution ID, individual artifact details (with auto-refresh for progress), and artifact versions. Typed interfaces for `ArtifactSummary`, `ArtifactResponse`, and `ArtifactVersionSummary` matching the backend DTOs.
- `web/src/components/executions/ExecutionArtifactsPanel.tsx` — Collapsible panel component that:
  - Lists all artifacts for an execution in a table with type icon, name, ref, size, and creation time
  - Shows summary badges (file count, progress count) in the header
  - Supports authenticated file download (fetches with JWT, triggers browser download)
  - Inline expandable progress detail view with progress bar, percentage, message, and timestamped entry table
  - Auto-polls for new artifacts and progress updates while execution is running
  - Auto-hides when no artifacts exist (returns `null`)

**Modified files:**
- `web/src/pages/executions/ExecutionDetailPage.tsx` — Integrated `ExecutionArtifactsPanel` between the Workflow Tasks panel and Change History panel. Passes `executionId` and `isRunning` props.

### Python Example Pack — Artifact Demo Action

**New files:**
- `packs.external/python_example/actions/artifact_demo.yaml` — Action definition for `python_example.artifact_demo` with parameters for iteration count and API credentials
- `packs.external/python_example/actions/artifact_demo.py` — Python action that:
  1. Authenticates to the Attune API using provided credentials
  2. Creates a `file_text` artifact and a `progress` artifact, both linked to the current execution ID
  3. Runs N iterations (default 50); each iteration:
     - Appends a timestamped log line and uploads the full log as a new file version
     - Appends a progress entry with iteration number, percentage (increments by `100/iterations` per step), message, and timestamp
     - Sleeps 0.5 seconds between iterations
  4. Returns artifact IDs and completion status as JSON output

## Technical Notes

- Download uses `fetch()` with an `Authorization: Bearer` header from `localStorage` since artifact endpoints require JWT auth — a plain `<a href>` would fail
- Progress artifact detail auto-refreshes every 3 seconds via `refetchInterval` on the `useArtifact` hook
- Artifact list polls every 10 seconds to pick up new artifacts created during execution
- The demo action uses `ATTUNE_API_URL` and `ATTUNE_EXEC_ID` environment variables injected by the worker, plus explicit login for auth (since `ATTUNE_API_TOKEN` is not yet implemented)
- Artifact refs include execution ID and timestamp to avoid collisions across runs
70	work-summary/2026-03-03-cli-pack-upload.md	Normal file
@@ -0,0 +1,70 @@
# CLI Pack Upload Command

**Date**: 2026-03-03
**Scope**: `crates/cli`, `crates/api`

## Problem

The `attune pack register` command requires the API server to be able to reach the pack directory at the specified filesystem path. When the API runs inside Docker, this means the path must be inside a known container mount (e.g. `/opt/attune/packs.dev/...`). There was no way to install a pack from an arbitrary local path on the developer's machine into a Dockerized Attune system.

## Solution

Added a new `pack upload` CLI command and a corresponding `POST /api/v1/packs/upload` API endpoint. The CLI creates a `.tar.gz` archive of the local pack directory in memory and streams it to the API via `multipart/form-data`. The API extracts the archive and calls the existing `register_pack_internal` function, so all normal registration logic (component loading, workflow sync, MQ notifications) still applies.

## Changes

### New API endpoint: `POST /api/v1/packs/upload`
- **File**: `crates/api/src/routes/packs.rs`
- Accepts `multipart/form-data` with:
  - `pack` (required) — `.tar.gz` archive of the pack directory
  - `force` (optional) — `"true"` to overwrite an existing pack
  - `skip_tests` (optional) — `"true"` to skip test execution
- Extracts the archive to a temp directory using `flate2` + `tar`
- Locates `pack.yaml` at the archive root or one level deep (handles GitHub-style tarballs)
- Reads the pack `ref`, moves the directory to permanent storage, then calls `register_pack_internal`
- Added helper: `find_pack_root()` walks up to one level to find `pack.yaml`

### New CLI command: `attune pack upload <path>`
- **File**: `crates/cli/src/commands/pack.rs`
- Validates the local path exists and contains `pack.yaml`
- Reads `pack.yaml` to extract the pack ref for display messages
- Builds an in-memory `.tar.gz` using `tar::Builder` + `flate2::GzEncoder`
- Helper `append_dir_to_tar()` recursively archives directory contents with paths relative to the pack root (symlinks are skipped)
- Calls `ApiClient::multipart_post()` with the archive bytes
- Flags: `--force` / `--skip-tests`

### New `ApiClient::multipart_post()` method
- **File**: `crates/cli/src/client.rs`
- Accepts a file field (name, bytes, filename, MIME type) plus a list of extra text fields
- Follows the same 401-refresh-then-error pattern as other methods
- HTTP client timeout increased from 30s to 300s for uploads

### `pack register` UX improvement
- **File**: `crates/cli/src/commands/pack.rs`
- Emits a warning when the supplied path looks like a local filesystem path (not under `/opt/attune/`, `/app/`, etc.), suggesting `pack upload` instead

### New workspace dependencies
- **Workspace** (`Cargo.toml`): `tar = "0.4"`, `flate2 = "1.0"`; `tempfile` moved from testing to runtime
- **API** (`crates/api/Cargo.toml`): added `tar`, `flate2`, `tempfile`
- **CLI** (`crates/cli/Cargo.toml`): added `tar`, `flate2`; `reqwest` gains `multipart` + `stream` features

## Usage

```bash
# Log in to the dockerized system
attune --api-url http://localhost:8080 auth login \
  --username test@attune.local --password 'TestPass123!'

# Upload and register a local pack (works from any machine)
attune --api-url http://localhost:8080 pack upload ./packs.external/python_example \
  --skip-tests --force
```

## Verification

Tested against a live Docker Compose stack:
- Pack archive created (~13 KB for `python_example`)
- API received, extracted, and stored the pack at `/opt/attune/packs/python_example`
- All 5 actions, 1 trigger, and 1 sensor were registered
- `pack.registered` MQ event published to trigger worker environment setup
- `attune action list` confirmed all components were visible
132
work-summary/2026-03-03-cli-wait-notifier-fixes.md
Normal file
132
work-summary/2026-03-03-cli-wait-notifier-fixes.md
Normal file
@@ -0,0 +1,132 @@
|
|||||||
|
# CLI `--wait` and Notifier WebSocket Fixes
|
||||||
|
|
||||||
|
**Date**: 2026-03-03
|
||||||
|
**Session type**: Bug investigation and fix
|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
Investigated and fixed a long-standing hang in `attune action execute --wait` and the underlying root-cause bugs in the notifier service. The `--wait` flag now works reliably, returning within milliseconds of execution completion via WebSocket notifications.
|
||||||
|
|
||||||
|
## Problems Found and Fixed
|
||||||
|
|
||||||
|
### Bug 1: PostgreSQL `PgListener` broken after sequential `listen()` calls (Notifier)

**File**: `crates/notifier/src/postgres_listener.rs`

**Symptom**: The notifier service never received any PostgreSQL LISTEN/NOTIFY messages after startup. Direct `pg_notify()` calls from psql also went undelivered.

**Root cause**: The notifier called `listener.listen(channel)` in a loop, once per channel, totalling 9 separate calls. In sqlx 0.8, each `listen()` call sends a `LISTEN` command and reads a `ReadyForQuery` response. The repeated calls left the connection in an unexpected state where subsequent `recv()` calls would never fire, even though the PostgreSQL backend showed the connection as actively `LISTEN`-ing.

**Fix**: Replaced the loop with a single `listener.listen_all(NOTIFICATION_CHANNELS.iter().copied()).await` call, which issues all 9 LISTEN commands in one round-trip. Extracted a `create_listener()` helper so the same single-call pattern is used on reconnect.
```crates/notifier/src/postgres_listener.rs#L93-135
async fn create_listener(&self) -> Result<PgListener> {
    let mut listener = PgListener::connect(&self.database_url)
        .await
        .context("Failed to connect PostgreSQL listener")?;

    // Use listen_all for a single round-trip instead of N separate commands
    listener
        .listen_all(NOTIFICATION_CHANNELS.iter().copied())
        .await
        .context("Failed to LISTEN on notification channels")?;

    Ok(listener)
}
```
Also added:

- A 60-second heartbeat log (`INFO: PostgreSQL listener heartbeat`) so it's easy to confirm the task is alive during idle periods
- `tokio::time::timeout` wrapper on `recv()` so the heartbeat fires even when no notifications arrive
- Improved reconnect logging
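The heartbeat pattern can be sketched without tokio or sqlx: a std `mpsc` channel stands in for the PostgreSQL connection, and `recv_timeout` plays the role of the `tokio::time::timeout`-wrapped `recv()`. The helper name and tick interval here are illustrative, not the notifier's actual API.

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError};
use std::time::Duration;

// Std-only sketch of the timeout-wrapped receive loop body: the real notifier
// wraps `PgListener::recv()` in `tokio::time::timeout`; an mpsc channel stands
// in for the connection here.
fn run_listener_once(rx: &Receiver<String>, tick: Duration) -> Option<String> {
    match rx.recv_timeout(tick) {
        // A notification arrived within the tick: hand it to the broadcaster.
        Ok(notification) => Some(notification),
        // No traffic within the tick: emit the heartbeat so idle periods are
        // distinguishable from a dead task, then let the caller loop again.
        Err(RecvTimeoutError::Timeout) => {
            eprintln!("INFO: PostgreSQL listener heartbeat");
            None
        }
        // Connection gone: the real code reconnects via create_listener().
        Err(RecvTimeoutError::Disconnected) => None,
    }
}
```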
### Bug 2: Notifications serialized without the `"type"` field (Notifier → CLI)

**File**: `crates/notifier/src/websocket_server.rs`

**Symptom**: Even after fixing Bug 1, the CLI's WebSocket loop received messages but `serde_json::from_str::<ServerMsg>(&txt)` always failed with `missing field 'type'`, silently falling through the `Err(_)` catch-all arm.

**Root cause**: The outgoing notification task serialized the raw `Notification` struct directly:

```rust
match serde_json::to_string(&notification) { ... }
```

The `Notification` struct has no `type` field. The CLI's `ServerMsg` enum uses `#[serde(tag = "type")]`, so it expects `{"type":"notification",...}`. The bare struct produces `{"notification_type":"...","entity_type":"...",...}` — no `"type"` key.

**Fix**: Wrap the notification in the `ClientMessage` tagged enum before serializing:

```rust
let envelope = ClientMessage::Notification(notification);
match serde_json::to_string(&envelope) { ... }
```

This produces the correct `{"type":"notification","notification_type":"...","entity_type":"...","entity_id":...,"payload":{...}}` format.
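Why the envelope matters can be shown by writing the internally-tagged serialization out by hand. This is a trimmed illustration, not the notifier's actual types: the real code derives the tagging with serde's `#[serde(tag = "type")]`, and the enum here carries only two variants with a couple of fields each.

```rust
// Hand-rolled sketch of internally-tagged serialization: only the envelope
// form carries the "type" discriminant the CLI's ServerMsg enum matches on.
// The real code derives this with #[serde(tag = "type")]; the variants and
// fields here are trimmed for illustration.
enum ServerMsg {
    Welcome { client_id: String },
    Notification { notification_type: String, entity_id: i64 },
}

fn to_json(msg: &ServerMsg) -> String {
    match msg {
        // The variant name becomes the "type" field: the tag serde injects.
        ServerMsg::Welcome { client_id } => {
            format!(r#"{{"type":"welcome","client_id":"{client_id}"}}"#)
        }
        ServerMsg::Notification { notification_type, entity_id } => {
            format!(
                r#"{{"type":"notification","notification_type":"{notification_type}","entity_id":{entity_id}}}"#
            )
        }
    }
}
```

Serializing a bare struct skips the `match`, and with it the `"type"` key, which is exactly the bug.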
### Bug 3: Polling fallback used exhausted deadline (CLI)

**File**: `crates/cli/src/wait.rs`

**Symptom**: When `--wait` fell back to polling (e.g. when WS notifications weren't delivered), the polling would immediately time out even though the execution had long since completed.

**Root cause**: Both the WebSocket path and the polling fallback shared a single `deadline = Instant::now() + timeout_secs`. The WS path ran until the deadline, leaving no time for polling.

**Fix**: Reserve a minimum polling budget (`MIN_POLL_BUDGET = 10s`) so the WS path exits early enough to leave polling headroom:
```rust
const MIN_POLL_BUDGET: Duration = Duration::from_secs(10);

let ws_deadline = if overall_deadline > Instant::now() + MIN_POLL_BUDGET {
    overall_deadline - MIN_POLL_BUDGET
} else {
    overall_deadline // very short timeout — skip WS, go straight to polling
};
```

Polling always uses `overall_deadline` directly (the full user-specified timeout), so at minimum `MIN_POLL_BUDGET` of polling time is guaranteed.
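The budget split can be exercised in isolation. This std-only sketch factors the computation above into a helper; the name `split_deadline` is hypothetical (the real code computes this inline in `wait.rs`):

```rust
use std::time::{Duration, Instant};

// Minimum time reserved for the polling fallback, as in the fix above.
const MIN_POLL_BUDGET: Duration = Duration::from_secs(10);

// Returns the deadline the WebSocket path should respect, given the overall
// user-specified deadline: whenever there is room, the WS path stops
// MIN_POLL_BUDGET early so polling always gets a nonzero budget.
// (Hypothetical helper; the real code computes this inline.)
fn split_deadline(now: Instant, overall_deadline: Instant) -> Instant {
    if overall_deadline > now + MIN_POLL_BUDGET {
        overall_deadline - MIN_POLL_BUDGET
    } else {
        overall_deadline
    }
}
```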
### Additional CLI improvement: poll-first in polling loop

The polling fallback now checks the execution status **immediately** on entry (before sleeping) rather than sleeping first. This catches the common case where the execution already completed while the WS path was running.

Also improved error handling in the poll loop: REST failures are logged and retried rather than propagating as fatal errors.
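Both behaviors fit in one loop shape. In this std-only sketch, `fetch_status` stands in for the REST call and the function name is illustrative, not the CLI's actual API:

```rust
use std::time::{Duration, Instant};

// Std-only sketch of the poll-first loop. `fetch_status` stands in for the
// REST status call; the name and signature are illustrative.
fn poll_until_terminal(
    mut fetch_status: impl FnMut() -> Result<String, String>,
    deadline: Instant,
    interval: Duration,
) -> Option<String> {
    loop {
        // Poll immediately on entry: this catches executions that finished
        // while the WebSocket path was still running.
        match fetch_status() {
            Ok(status) if status == "completed" || status == "failed" => {
                return Some(status);
            }
            Ok(_) => {} // still running: fall through to the sleep below
            // REST failure: log and retry instead of propagating as fatal.
            Err(e) => eprintln!("poll error (will retry): {e}"),
        }
        if Instant::now() + interval > deadline {
            return None; // polling budget exhausted
        }
        std::thread::sleep(interval);
    }
}
```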
## End-to-End Verification

```
$ attune --profile docker action execute core.echo --param message="Hello!" --wait
ℹ Executing action: core.echo
ℹ Waiting for execution 51 to complete...
[notifier] connected to ws://localhost:8081/ws
[notifier] session id client_2
[notifier] subscribed to entity:execution:51
[notifier] execution_status_changed for execution 51 — status=Some("scheduled")
[notifier] execution_status_changed for execution 51 — status=Some("running")
[notifier] execution_status_changed for execution 51 — status=Some("completed")
✓ Execution 51 completed
```

Three consecutive runs all returned via WebSocket within milliseconds; no polling fallback was triggered.
## Files Changed

| File | Change |
|------|--------|
| `crates/notifier/src/postgres_listener.rs` | Replace sequential `listen()` loop with `listen_all()`; add `create_listener()` helper; add heartbeat logging with timeout-wrapped recv |
| `crates/notifier/src/websocket_server.rs` | Wrap `Notification` in `ClientMessage::Notification(...)` before serializing for outgoing WS messages |
| `crates/notifier/src/service.rs` | Handle `RecvError::Lagged` and `RecvError::Closed` in broadcaster; add `debug` import |
| `crates/notifier/src/subscriber_manager.rs` | Scale broadcast result logging back to `debug` level |
| `crates/cli/src/wait.rs` | Fix shared-deadline bug with `MIN_POLL_BUDGET`; poll immediately on entry; improve error handling and verbose logging |
| `AGENTS.md` | Document notifier WebSocket protocol and the `listen_all` requirement |
## Key Protocol Facts (for future reference)

**Notifier WebSocket — server → client message format**:

```json
{"type":"notification","notification_type":"execution_status_changed","entity_type":"execution","entity_id":42,"user_id":null,"payload":{...execution row...},"timestamp":"..."}
```

**Notifier WebSocket — client → server subscribe format**:

```json
{"type":"subscribe","filter":"entity:execution:42"}
```

Filter formats supported: `all`, `entity_type:<type>`, `entity:<type>:<id>`, `user:<id>`, `notification_type:<type>`
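The five filter shapes can be distinguished by splitting on `:`. A minimal parser sketch follows; the `Filter` enum and `parse_filter` name are hypothetical, not the notifier's actual types:

```rust
// Hypothetical sketch of parsing the five subscribe-filter shapes listed
// above; the `Filter` enum and `parse_filter` name are illustrative only.
#[derive(Debug, PartialEq)]
enum Filter {
    All,
    EntityType(String),
    Entity(String, String),
    User(String),
    NotificationType(String),
}

fn parse_filter(s: &str) -> Option<Filter> {
    // splitn(3, ..) keeps any extra colons inside the final <id> segment.
    let parts: Vec<&str> = s.splitn(3, ':').collect();
    match parts.as_slice() {
        ["all"] => Some(Filter::All),
        ["entity_type", t] => Some(Filter::EntityType(t.to_string())),
        ["entity", t, id] => Some(Filter::Entity(t.to_string(), id.to_string())),
        ["user", id] => Some(Filter::User(id.to_string())),
        ["notification_type", t] => Some(Filter::NotificationType(t.to_string())),
        _ => None, // unrecognized filter shape
    }
}
```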
**Critical rule**: Always use `PgListener::listen_all()` for subscribing to multiple PostgreSQL channels. Individual `listen()` calls in a loop leave the listener in a broken state in sqlx 0.8.