artifacts!
AGENTS.md
@@ -57,7 +57,12 @@ attune/
2. **attune-executor**: Manages execution lifecycle, scheduling, policy enforcement, workflow orchestration
3. **attune-worker**: Executes actions in multiple runtimes (Python/Node.js/containers)
4. **attune-sensor**: Monitors triggers, generates events
5. **attune-notifier**: Real-time notifications via PostgreSQL LISTEN/NOTIFY + WebSocket (port 8081)

- **PostgreSQL listener**: Uses `PgListener::listen_all()` (a single batch command) to subscribe to all 11 channels. **Do NOT use individual `listen()` calls in a loop** — this leaves the listener in a broken state where it stops receiving after the last call.
- **Artifact notifications**: `artifact_created` and `artifact_updated` channels. The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry in the `data` JSONB array for progress-type artifacts, enabling inline progress bars without extra API calls. The Web UI uses the `useArtifactStream` hook to subscribe to `entity_type:artifact` notifications, invalidate React Query caches, and push progress summaries to an `artifact_progress` cache key.
- **WebSocket protocol** (client → server): `{"type":"subscribe","filter":"entity:execution:<id>"}` — filter formats: `all`, `entity_type:<type>`, `entity:<type>:<id>`, `user:<id>`, `notification_type:<type>`
- **WebSocket protocol** (server → client): All messages use `#[serde(tag = "type")]` — `{"type":"welcome","client_id":"...","message":"..."}` on connect; `{"type":"notification","notification_type":"...","entity_type":"...","entity_id":...,"payload":{...},"user_id":null,"timestamp":"..."}` for notifications; `{"type":"error","message":"..."}` for errors
- **Key invariant**: The outgoing task in `websocket_server.rs` MUST wrap `Notification` in `ClientMessage::Notification(notification)` before serializing — bare `Notification` serialization omits the `"type"` field and breaks clients
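The tagged-frame protocol above can be sketched client-side with only stdlib `json`. This is a minimal sketch, not the real client (helper names here are hypothetical); it also shows why the key invariant matters — a frame serialized without the `"type"` tag is unroutable:

```python
import json

def subscribe_message(filter_expr: str) -> str:
    """Build a client -> server subscribe frame for the notifier WebSocket."""
    return json.dumps({"type": "subscribe", "filter": filter_expr})

def dispatch(raw: str) -> str:
    """Route a server -> client frame on its serde 'type' tag."""
    msg = json.loads(raw)
    kind = msg.get("type")
    if kind == "welcome":
        return f"connected as {msg['client_id']}"
    if kind == "notification":
        return f"{msg['notification_type']} on {msg['entity_type']}:{msg['entity_id']}"
    if kind == "error":
        return f"server error: {msg['message']}"
    # A frame with no 'type' tag is exactly the bare-Notification bug above.
    return "unroutable frame (missing 'type' tag)"
```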

**Communication**: Services communicate via RabbitMQ for async operations
@@ -66,7 +71,7 @@ attune/

**All Attune services run via Docker Compose.**

- **Compose file**: `docker-compose.yaml` (root directory)
- **Configuration**: `config.docker.yaml` (Docker-specific settings, including `artifacts_dir: /opt/attune/artifacts`)
- **Default user**: `test@attune.local` / `TestPass123!` (auto-created)

**Services**:

@@ -74,6 +79,13 @@ attune/

- **Init** (run-once): migrations, init-user, init-packs
- **Application**: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000)

**Volumes** (named):

- `postgres_data`, `rabbitmq_data`, `redis_data` — infrastructure state
- `packs_data` — pack files (shared across all services)
- `runtime_envs` — isolated runtime environments (virtualenvs, node_modules)
- `artifacts_data` — file-backed artifact storage (shared between API rw, workers rw, executor ro)
- `*_logs` — per-service log volumes

**Commands**:

```bash
docker compose up -d   # Start all services
```
@@ -148,8 +160,8 @@ Completion listener advances workflow → Schedules successor tasks → Complete

- **Inquiry**: Human-in-the-loop async interaction (approvals, inputs)
- **Identity**: User/service account with RBAC permissions
- **Key**: Encrypted secrets storage
- **Artifact**: Tracked output from executions (files, logs, progress indicators). Metadata + optional structured `data` (JSONB). Linked to execution via plain BIGINT (no FK). Supports retention policies (version-count or time-based). File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use disk-based storage on a shared volume; Progress and Url artifacts use DB storage. Each artifact has a `visibility` field (`ArtifactVisibility` enum: `public` or `private`, DB default `private`). Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. **Type-aware API default**: when `visibility` is omitted from `POST /api/v1/artifacts`, the API defaults to `public` for Progress artifacts (informational status indicators anyone watching an execution should see) and `private` for all other types. Callers can always override by explicitly setting `visibility`. Full RBAC enforcement is deferred — the column and basic filtering are in place for future permission checks.
- **ArtifactVersion**: Immutable content snapshot for an artifact. File-type versions store a `file_path` (relative path on the shared volume) with `content` BYTEA left NULL. DB-stored versions use `content` BYTEA and/or `content_json` JSONB. Version number auto-assigned via `next_artifact_version()`. Retention trigger auto-deletes the oldest versions beyond the limit. Invariant: exactly one of `content`, `content_json`, or `file_path` should be non-NULL per row.

## Key Tools & Libraries
@@ -168,6 +180,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete

- **OpenAPI**: utoipa, utoipa-swagger-ui
- **Message Queue**: lapin (RabbitMQ)
- **HTTP Client**: reqwest
- **Archive/Compression**: tar, flate2 (used for pack upload/extraction)
- **Testing**: mockall, tempfile, serial_test

### Web UI Dependencies
@@ -188,6 +201,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete

- **Key Settings**:
  - `packs_base_dir` - Where pack files are stored (default: `/opt/attune/packs`)
  - `runtime_envs_dir` - Where isolated runtime environments are created (default: `/opt/attune/runtime_envs`)
  - `artifacts_dir` - Where file-backed artifacts are stored (default: `/opt/attune/artifacts`). Shared volume between the API and workers.

## Authentication & Security

- **Auth Type**: JWT (access tokens: 1h, refresh tokens: 7d)
@@ -226,7 +240,8 @@ Completion listener advances workflow → Schedules successor tasks → Complete

- **Nullable FK Fields**: `rule.action` and `rule.trigger` are nullable (`Option<Id>` in Rust) — a rule with a NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, `execution.started_at`, and `event.source` are also nullable. `enforcement.event` is nullable but has no FK constraint (event is a hypertable). `execution.enforcement` is nullable but has no FK constraint (enforcement is a hypertable). All FK columns on the execution table (`action`, `parent`, `original_execution`, `enforcement`, `executor`, `workflow_def`) have no FK constraints (execution is a hypertable). `inquiry.execution` and `workflow_execution.execution` also have no FK constraints. `enforcement.resolved_at` is nullable — `None` while status is `created`, set when resolved. `execution.started_at` is nullable — `None` until the worker sets status to `running`.

**Table Count**: 21 tables total in the schema (including `runtime_version`, `artifact_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, and `execution` hypertables)

**Migration Count**: 10 migrations (`000001` through `000010`) — see the `migrations/` directory

- **Artifact System**: The `artifact` table stores metadata + structured data (progress entries via the JSONB `data` column). The `artifact_version` table stores immutable content snapshots — either on disk (via the `file_path` column) or in the DB (via `content` BYTEA / `content_json` JSONB). Version numbering is auto-assigned via the `next_artifact_version()` SQL function. A DB trigger (`enforce_artifact_retention`) auto-deletes the oldest versions when the count exceeds the artifact's `retention_limit`. `artifact.execution` is a plain BIGINT (no FK — execution is a hypertable). Progress-type artifacts use `artifact.data` (atomic JSON array append); file-type artifacts use `artifact_version` rows with `file_path` set. Binary content is excluded from default queries for performance (`SELECT_COLUMNS` vs `SELECT_COLUMNS_WITH_CONTENT`). **Visibility**: Each artifact has a `visibility` column (`artifact_visibility_enum`: `public` or `private`, DB default `private`). The `CreateArtifactRequest` DTO accepts `visibility` as `Option<ArtifactVisibility>` — when omitted, the API route handler applies a **type-aware default**: `public` for Progress artifacts (informational status indicators), `private` for all other types. Callers can always override explicitly. Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. The visibility field is filterable via the search/list API (`?visibility=public`). Full RBAC enforcement is deferred — the column and basic query filtering are in place for future permission checks. **Notifications**: `artifact_created` and `artifact_updated` DB triggers (in migration `000008`) fire PostgreSQL NOTIFY with entity_type `artifact` and include `visibility` in the payload. The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry of the `data` JSONB array for progress-type artifacts. The Web UI `ExecutionProgressBar` component (`web/src/components/executions/ExecutionProgressBar.tsx`) renders an inline progress bar in the Execution Details card using the `useArtifactStream` hook (`web/src/hooks/useArtifactStream.ts`) for real-time WebSocket updates, with polling fallback via `useExecutionArtifacts`.
- **File-Based Artifact Storage**: File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use a shared filesystem volume instead of PostgreSQL BYTEA. The `artifact_version.file_path` column stores the relative path from the `artifacts_dir` root (e.g. `mypack/build_log/v1.txt`). Pattern: `{ref_with_dots_as_dirs}/v{version}.{ext}`. The artifact ref (globally unique) is used as the directory key — no execution ID in the path, so artifacts can outlive executions and be shared across them. **Endpoint**: `POST /api/v1/artifacts/{id}/versions/file` allocates a version number and file path without any file content; the execution process writes the file to `$ATTUNE_ARTIFACTS_DIR/{file_path}`. **Download**: `GET /api/v1/artifacts/{id}/download` and version-specific downloads check `file_path` first (read from disk), then fall back to DB BYTEA/JSON. **Finalization**: After an execution exits, the worker stats all file-backed versions for that execution and updates `size_bytes` on both the `artifact_version` and parent `artifact` rows via direct DB access. **Cleanup**: Delete endpoints remove disk files before deleting DB rows; empty parent directories are cleaned up. **Backward compatible**: Existing DB-stored artifacts (`file_path = NULL`) continue to work unchanged.
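The `{ref_with_dots_as_dirs}/v{version}.{ext}` layout can be sketched as below (a sketch assuming the ref's dots map one-to-one to directory separators, as the `mypack/build_log/v1.txt` example suggests):

```python
def version_file_path(artifact_ref: str, version: int, ext: str) -> str:
    """Dots in the globally unique artifact ref become directories, so
    ref 'mypack.build_log', version 1, ext 'txt' yields
    'mypack/build_log/v1.txt' relative to the artifacts_dir root."""
    return f"{artifact_ref.replace('.', '/')}/v{version}.{ext}"
```

Note the path carries no execution ID, which is what lets the artifact outlive the execution that produced it.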

- **Pack Component Loading Order**: Runtimes → Triggers → Actions → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order.

### Workflow Execution Orchestration
@@ -306,6 +321,8 @@ Completion listener advances workflow → Schedules successor tasks → Complete

- `ATTUNE_ACTION` - Action ref (always present)
- `ATTUNE_EXEC_ID` - Execution database ID (always present)
- `ATTUNE_API_TOKEN` - Execution-scoped API token (always present)
- `ATTUNE_API_URL` - API base URL (always present)
- `ATTUNE_ARTIFACTS_DIR` - Absolute path to the shared artifact volume (always present, e.g. `/opt/attune/artifacts`)
- `ATTUNE_RULE` - Rule ref (if triggered by a rule)
- `ATTUNE_TRIGGER` - Trigger ref (if triggered by an event/trigger)
- **Custom Environment Variables**: Optional, set via the `execution.env_vars` JSONB field (for debug flags and runtime config only)
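An action script can pick up the injected environment like this (a minimal sketch; only the variable names come from the list above, the helper name is hypothetical):

```python
import os

def execution_context() -> dict:
    """Collect the always-present Attune environment variables; the
    rule/trigger refs may be absent depending on how the run started."""
    ctx = {
        "action": os.environ["ATTUNE_ACTION"],
        "exec_id": os.environ["ATTUNE_EXEC_ID"],
        "api_token": os.environ["ATTUNE_API_TOKEN"],
        "api_url": os.environ["ATTUNE_API_URL"],
        "artifacts_dir": os.environ["ATTUNE_ARTIFACTS_DIR"],
    }
    ctx["rule"] = os.environ.get("ATTUNE_RULE")  # None unless rule-triggered
    return ctx
```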
@@ -492,10 +509,23 @@ make db-reset # Drop & recreate DB

```bash
cargo install --path crates/cli              # Install CLI
attune auth login                            # Login
attune pack list                             # List packs
attune pack upload ./path/to/pack            # Upload local pack to API (works with Docker)
attune pack register /opt/attune/packs/mypak # Register from API-visible path
attune action execute <ref> --param key=value
attune execution list                        # Monitor executions
```

**Pack Upload vs Register**:

- `attune pack upload <local-path>` — Tarballs the local directory and POSTs it to `POST /api/v1/packs/upload`. Works regardless of whether the API is local or in Docker. This is the primary way to install packs from your local machine into a Dockerized system.
- `attune pack register <server-path>` — Sends a filesystem path string to the API (`POST /api/v1/packs/register`). Only works if the path is accessible from inside the API container (e.g. `/opt/attune/packs/...` or `/opt/attune/packs.dev/...`).

**Pack Upload API endpoint**: `POST /api/v1/packs/upload` — accepts `multipart/form-data` with:

- `pack` (required): a `.tar.gz` archive of the pack directory
- `force` (optional, text): `"true"` to overwrite an existing pack with the same ref
- `skip_tests` (optional, text): `"true"` to skip test execution after registration

The server extracts the archive to a temp directory, finds the `pack.yaml` (at the root or one level deep), then moves it to `{packs_base_dir}/{pack_ref}/` and calls `register_pack_internal`.
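The upload path can be sketched with stdlib `tarfile` — building the `.tar.gz` that the `pack` multipart field expects (the HTTP POST itself is elided; `pack_archive` is a hypothetical helper, not the CLI's actual code):

```python
import io
import tarfile
from pathlib import Path

def pack_archive(pack_dir: str) -> bytes:
    """Tar-gzip a local pack directory, mirroring what `attune pack upload`
    sends as the required `pack` multipart field."""
    buf = io.BytesIO()
    root = Path(pack_dir)
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        # Archive relative to the parent so pack.yaml ends up one level
        # deep ({dir_name}/pack.yaml), which the server accepts.
        tar.add(root, arcname=root.name)
    return buf.getvalue()
```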

## Test Failure Protocol

**Proactively investigate and fix test failures when discovered, even if unrelated to the current task.**
@@ -600,9 +630,9 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"

- **Web UI**: Static files served separately or via API service

## Current Development Status

- ✅ **Complete**: Database migrations (21 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, `system`), Artifact content system (versioned file/JSON storage, progress-append semantics, binary upload/download, retention enforcement, execution-linked artifacts, 18 API endpoints under `/api/v1/artifacts/`, file-backed disk storage via shared volume for file-type artifacts), CLI `--wait` flag (WebSocket-first with polling fallback — connects to the notifier on port 8081, subscribes to the execution, returns immediately on terminal status; falls back to exponential-backoff REST polling if WS is unavailable; polling always gets at least a 10s budget regardless of how long the WS path ran)
- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management, Artifact UI (web UI for browsing/downloading artifacts), Notifier service WebSocket (functional but lacks auth — the WS connection is unauthenticated; the subscribe filter controls visibility)
- 📋 **Planned**: Execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
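The CLI `--wait` behavior noted above (WebSocket-first, with the polling fallback guaranteed at least a 10s budget) can be sketched as follows — a sketch with hypothetical function names, exponential-backoff details elided:

```python
import time

def wait_for_execution(ws_wait, poll, min_poll_budget=10.0, total_timeout=60.0):
    """Try the WebSocket path first; if it is unavailable, fall back to REST
    polling with at least min_poll_budget seconds, however long WS took."""
    start = time.monotonic()
    status = ws_wait()  # terminal status, or None if WS path unavailable
    if status is not None:
        return status
    remaining = total_timeout - (time.monotonic() - start)
    return poll(max(remaining, min_poll_budget))  # budget never below floor
```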

## Quick Reference
Cargo.toml
@@ -94,6 +94,16 @@ hyper = { version = "1.0", features = ["full"] }

# File system utilities
walkdir = "2.4"

# Archive/compression
tar = "0.4"
flate2 = "1.0"

# WebSocket client
tokio-tungstenite = { version = "0.26", features = ["native-tls"] }

# URL parsing
url = "2.5"

# Async utilities
async-trait = "0.1"
futures = "0.3"
@@ -101,9 +111,11 @@ futures = "0.3"

# Version matching
semver = { version = "1.0", features = ["serde"] }

# Temp files
tempfile = "3.8"

# Testing
mockall = "0.14"
serial_test = "3.2"

# Concurrent data structures
@@ -55,6 +55,11 @@ packs_base_dir: ./packs

# Pattern: {runtime_envs_dir}/{pack_ref}/{runtime_name}
runtime_envs_dir: ./runtime_envs

# Artifacts directory (shared volume for file-based artifact storage).
# File-type artifacts are written here by execution processes and served by the API.
# Pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}
artifacts_dir: ./artifacts

# Worker service configuration
worker:
  service_name: attune-worker-e2e
@@ -68,6 +68,13 @@ jsonschema = { workspace = true }

# HTTP client
reqwest = { workspace = true }

# Archive/compression
tar = { workspace = true }
flate2 = { workspace = true }

# Temp files (used for pack upload extraction)
tempfile = { workspace = true }

# Authentication
argon2 = { workspace = true }
rand = "0.9"
@@ -5,7 +5,9 @@ use serde::{Deserialize, Serialize};

use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};

use attune_common::models::enums::{
    ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType,
};

// ============================================================================
// Artifact DTOs
@@ -30,6 +32,10 @@ pub struct CreateArtifactRequest {

    #[schema(example = "file_text")]
    pub r#type: ArtifactType,

    /// Visibility level (public = all users, private = scope/owner restricted).
    /// If omitted, defaults to `public` for progress artifacts and `private` for all others.
    pub visibility: Option<ArtifactVisibility>,

    /// Retention policy type
    #[serde(default = "default_retention_policy")]
    #[schema(example = "versions")]
@@ -81,6 +87,9 @@ pub struct UpdateArtifactRequest {

    /// Updated artifact type
    pub r#type: Option<ArtifactType>,

    /// Updated visibility
    pub visibility: Option<ArtifactVisibility>,

    /// Updated retention policy
    pub retention_policy: Option<RetentionPolicyType>,
@@ -138,6 +147,9 @@ pub struct ArtifactResponse {

    /// Artifact type
    pub r#type: ArtifactType,

    /// Visibility level
    pub visibility: ArtifactVisibility,

    /// Retention policy
    pub retention_policy: RetentionPolicyType,
@@ -185,6 +197,9 @@ pub struct ArtifactSummary {

    /// Artifact type
    pub r#type: ArtifactType,

    /// Visibility level
    pub visibility: ArtifactVisibility,

    /// Human-readable name
    pub name: Option<String>,
@@ -222,6 +237,9 @@ pub struct ArtifactQueryParams {

    /// Filter by artifact type
    pub r#type: Option<ArtifactType>,

    /// Filter by visibility
    pub visibility: Option<ArtifactVisibility>,

    /// Filter by execution ID
    pub execution: Option<i64>,
@@ -279,6 +297,23 @@ pub struct CreateVersionJsonRequest {

    pub created_by: Option<String>,
}

/// Request DTO for creating a new file-backed artifact version.
/// No file content is included — the caller writes the file directly to
/// `$ATTUNE_ARTIFACTS_DIR/{file_path}` after receiving the response.
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreateFileVersionRequest {
    /// MIME content type (e.g. "text/plain", "application/octet-stream")
    #[schema(example = "text/plain")]
    pub content_type: Option<String>,

    /// Free-form metadata about this version
    #[schema(value_type = Option<Object>)]
    pub meta: Option<JsonValue>,

    /// Who created this version (e.g. action ref, identity, "system")
    pub created_by: Option<String>,
}

/// Response DTO for an artifact version (without binary content)
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactVersionResponse {
@@ -301,6 +336,11 @@ pub struct ArtifactVersionResponse {
|
|||||||
#[serde(skip_serializing_if = "Option::is_none")]
|
#[serde(skip_serializing_if = "Option::is_none")]
|
||||||
pub content_json: Option<JsonValue>,
|
pub content_json: Option<JsonValue>,
|
||||||
|
|
||||||
|
/// Relative file path for disk-backed versions (from artifacts_dir root).
|
||||||
|
/// When present, the file content lives on the shared volume, not in the DB.
|
||||||
|
#[serde(skip_serializing_if = "Option::is_none")]
|
||||||
|
pub file_path: Option<String>,
|
||||||
|
|
||||||
/// Free-form metadata
|
/// Free-form metadata
|
||||||
#[serde(skip_serializing_if = "Option::is_none")]
|
#[serde(skip_serializing_if = "Option::is_none")]
|
||||||
pub meta: Option<JsonValue>,
|
pub meta: Option<JsonValue>,
|
||||||
@@ -327,6 +367,10 @@ pub struct ArtifactVersionSummary {
|
|||||||
/// Size of content in bytes
|
/// Size of content in bytes
|
||||||
pub size_bytes: Option<i64>,
|
pub size_bytes: Option<i64>,
|
||||||
|
|
||||||
|
/// Relative file path for disk-backed versions
|
||||||
|
#[serde(skip_serializing_if = "Option::is_none")]
|
||||||
|
pub file_path: Option<String>,
|
||||||
|
|
||||||
/// Who created this version
|
/// Who created this version
|
||||||
pub created_by: Option<String>,
|
pub created_by: Option<String>,
|
||||||
|
|
||||||
@@ -346,6 +390,7 @@ impl From<attune_common::models::artifact::Artifact> for ArtifactResponse {
|
|||||||
scope: a.scope,
|
scope: a.scope,
|
||||||
owner: a.owner,
|
owner: a.owner,
|
||||||
r#type: a.r#type,
|
r#type: a.r#type,
|
||||||
|
visibility: a.visibility,
|
||||||
retention_policy: a.retention_policy,
|
retention_policy: a.retention_policy,
|
||||||
retention_limit: a.retention_limit,
|
retention_limit: a.retention_limit,
|
||||||
name: a.name,
|
name: a.name,
|
||||||
@@ -366,6 +411,7 @@ impl From<attune_common::models::artifact::Artifact> for ArtifactSummary {
|
|||||||
id: a.id,
|
id: a.id,
|
||||||
r#ref: a.r#ref,
|
r#ref: a.r#ref,
|
||||||
r#type: a.r#type,
|
r#type: a.r#type,
|
||||||
|
visibility: a.visibility,
|
||||||
name: a.name,
|
name: a.name,
|
||||||
content_type: a.content_type,
|
content_type: a.content_type,
|
||||||
size_bytes: a.size_bytes,
|
size_bytes: a.size_bytes,
|
||||||
@@ -387,6 +433,7 @@ impl From<attune_common::models::artifact_version::ArtifactVersion> for Artifact
|
|||||||
content_type: v.content_type,
|
content_type: v.content_type,
|
||||||
size_bytes: v.size_bytes,
|
size_bytes: v.size_bytes,
|
||||||
content_json: v.content_json,
|
content_json: v.content_json,
|
||||||
|
file_path: v.file_path,
|
||||||
meta: v.meta,
|
meta: v.meta,
|
||||||
created_by: v.created_by,
|
created_by: v.created_by,
|
||||||
created: v.created,
|
created: v.created,
|
||||||
@@ -401,6 +448,7 @@ impl From<attune_common::models::artifact_version::ArtifactVersion> for Artifact
|
|||||||
version: v.version,
|
version: v.version,
|
||||||
content_type: v.content_type,
|
content_type: v.content_type,
|
||||||
size_bytes: v.size_bytes,
|
size_bytes: v.size_bytes,
|
||||||
|
file_path: v.file_path,
|
||||||
created_by: v.created_by,
|
created_by: v.created_by,
|
||||||
created: v.created,
|
created: v.created,
|
||||||
}
|
}
|
||||||
@@ -419,6 +467,7 @@ mod tests {
|
|||||||
assert_eq!(params.per_page, 20);
|
assert_eq!(params.per_page, 20);
|
||||||
assert!(params.scope.is_none());
|
assert!(params.scope.is_none());
|
||||||
assert!(params.r#type.is_none());
|
assert!(params.r#type.is_none());
|
||||||
|
assert!(params.visibility.is_none());
|
||||||
}
|
}
|
||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
@@ -427,6 +476,7 @@ mod tests {
|
|||||||
scope: None,
|
scope: None,
|
||||||
owner: None,
|
owner: None,
|
||||||
r#type: None,
|
r#type: None,
|
||||||
|
visibility: None,
|
||||||
execution: None,
|
execution: None,
|
||||||
name: None,
|
name: None,
|
||||||
page: 3,
|
page: 3,
|
||||||
@@ -441,6 +491,7 @@ mod tests {
|
|||||||
scope: None,
|
scope: None,
|
||||||
owner: None,
|
owner: None,
|
||||||
r#type: None,
|
r#type: None,
|
||||||
|
visibility: None,
|
||||||
execution: None,
|
execution: None,
|
||||||
name: None,
|
name: None,
|
||||||
page: 1,
|
page: 1,
|
||||||
@@ -460,6 +511,10 @@ mod tests {
|
|||||||
let req: CreateArtifactRequest = serde_json::from_str(json).unwrap();
|
let req: CreateArtifactRequest = serde_json::from_str(json).unwrap();
|
||||||
assert_eq!(req.retention_policy, RetentionPolicyType::Versions);
|
assert_eq!(req.retention_policy, RetentionPolicyType::Versions);
|
||||||
assert_eq!(req.retention_limit, 5);
|
assert_eq!(req.retention_limit, 5);
|
||||||
|
assert!(
|
||||||
|
req.visibility.is_none(),
|
||||||
|
"Omitting visibility should deserialize as None (server applies type-aware default)"
|
||||||
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
|
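The new test above asserts that omitting `visibility` deserializes as `None`, with the server applying a type-aware default afterwards. A minimal std-only sketch of that rule, using local stand-in enums (the real types live in `attune_common::models::enums` and carry more variants):

```rust
/// Stand-in enums for illustration only; the real definitions are in
/// attune_common::models::enums.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ArtifactType {
    Progress,
    FileBinary,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum ArtifactVisibility {
    Public,
    Private,
}

/// Mirror of the server-side default the handler applies: progress artifacts
/// are public unless the client says otherwise, everything else is private.
fn default_visibility(
    requested: Option<ArtifactVisibility>,
    ty: ArtifactType,
) -> ArtifactVisibility {
    requested.unwrap_or_else(|| {
        if ty == ArtifactType::Progress {
            ArtifactVisibility::Public
        } else {
            ArtifactVisibility::Private
        }
    })
}

fn main() {
    // Omitted visibility: the default depends on the artifact type.
    assert_eq!(
        default_visibility(None, ArtifactType::Progress),
        ArtifactVisibility::Public
    );
    assert_eq!(
        default_visibility(None, ArtifactType::FileBinary),
        ArtifactVisibility::Private
    );
    // An explicit choice always wins over the default.
    assert_eq!(
        default_visibility(Some(ArtifactVisibility::Private), ArtifactType::Progress),
        ArtifactVisibility::Private
    );
}
```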
@@ -33,6 +33,86 @@ struct Args {
     port: Option<u16>,
 }

+/// Attempt to connect to RabbitMQ and create a publisher.
+/// Returns the publisher on success.
+async fn try_connect_publisher(mq_url: &str) -> Result<Publisher> {
+    let mq_connection = Connection::connect(mq_url).await?;
+
+    // Setup common message queue infrastructure (exchanges and DLX)
+    let mq_setup_config = attune_common::mq::MessageQueueConfig::default();
+    if let Err(e) = mq_connection
+        .setup_common_infrastructure(&mq_setup_config)
+        .await
+    {
+        warn!(
+            "Failed to setup common MQ infrastructure (may already exist): {}",
+            e
+        );
+    }
+
+    let publisher = Publisher::new(
+        &mq_connection,
+        PublisherConfig {
+            confirm_publish: true,
+            timeout_secs: 30,
+            exchange: "attune.executions".to_string(),
+        },
+    )
+    .await?;
+
+    Ok(publisher)
+}
+
+/// Background task that keeps trying to establish the MQ publisher connection.
+/// Once connected it installs the publisher into `state`, then monitors the
+/// connection health and reconnects if it drops.
+async fn mq_reconnect_loop(state: Arc<AppState>, mq_url: String) {
+    // Retry delay sequence (seconds): 1, 2, 4, 8, 16, 30, 30, …
+    let delays: &[u64] = &[1, 2, 4, 8, 16, 30];
+    let mut attempt: usize = 0;
+
+    loop {
+        let delay = delays.get(attempt).copied().unwrap_or(30);
+
+        match try_connect_publisher(&mq_url).await {
+            Ok(publisher) => {
+                info!(
+                    "Message queue publisher connected (attempt {})",
+                    attempt + 1
+                );
+                state.set_publisher(Arc::new(publisher)).await;
+                attempt = 0; // reset backoff after a successful connect
+
+                // Poll liveness: the publisher will error on use when the
+                // underlying channel is gone. We do a lightweight wait here so
+                // we notice disconnections and attempt to reconnect.
+                loop {
+                    tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
+                    if state.get_publisher().await.is_none() {
+                        // Something cleared the publisher externally; re-enter
+                        // the outer connect loop.
+                        break;
+                    }
+                    // TODO: add a real health-check ping when the lapin API
+                    // exposes one (e.g. channel.basic_noop). For now a broken
+                    // publisher will be detected on the first failed publish and
+                    // can be cleared by the handler to trigger reconnection here.
+                }
+            }
+            Err(e) => {
+                warn!(
+                    "Failed to connect to message queue (attempt {}, retrying in {}s): {}",
+                    attempt + 1,
+                    delay,
+                    e
+                );
+                tokio::time::sleep(tokio::time::Duration::from_secs(delay)).await;
+                attempt = attempt.saturating_add(1);
+            }
+        }
+    }
+}
+
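The retry comment in `mq_reconnect_loop` describes a capped backoff sequence: 1, 2, 4, 8, 16, then 30 seconds forever. The lookup reduces to a small pure function; this std-only sketch mirrors the `delays.get(attempt).copied().unwrap_or(30)` line so the clamping behavior can be checked in isolation:

```rust
/// Capped backoff lookup used by the reconnect loop: index into a fixed
/// delay table, clamping to the final value once attempts run past it.
fn retry_delay_secs(attempt: usize) -> u64 {
    const DELAYS: &[u64] = &[1, 2, 4, 8, 16, 30];
    DELAYS.get(attempt).copied().unwrap_or(30)
}

fn main() {
    // First six attempts walk the table; every later attempt stays at 30s.
    let observed: Vec<u64> = (0..8usize).map(retry_delay_secs).collect();
    assert_eq!(observed, vec![1, 2, 4, 8, 16, 30, 30, 30]);
}
```

Resetting `attempt` to 0 after a successful connect (as the diff does) restarts the sequence from 1s on the next disconnection.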
 #[tokio::main]
 async fn main() -> Result<()> {
     // Initialize tracing subscriber
@@ -66,59 +146,21 @@ async fn main() -> Result<()> {
     let database = Database::new(&config.database).await?;
     info!("Database connection established");

-    // Initialize message queue connection and publisher (optional)
-    let mut state = AppState::new(database.pool().clone(), config.clone());
+    // Initialize application state (publisher starts as None)
+    let state = Arc::new(AppState::new(database.pool().clone(), config.clone()));

+    // Spawn background MQ reconnect loop if a message queue is configured.
+    // The loop will keep retrying until it connects, then install the publisher
+    // into the shared state so request handlers can use it immediately.
     if let Some(ref mq_config) = config.message_queue {
-        info!("Connecting to message queue...");
-        match Connection::connect(&mq_config.url).await {
-            Ok(mq_connection) => {
-                info!("Message queue connection established");
-
-                // Setup common message queue infrastructure (exchanges and DLX)
-                let mq_setup_config = attune_common::mq::MessageQueueConfig::default();
-                match mq_connection
-                    .setup_common_infrastructure(&mq_setup_config)
-                    .await
-                {
-                    Ok(_) => info!("Common message queue infrastructure setup completed"),
-                    Err(e) => {
-                        warn!(
-                            "Failed to setup common MQ infrastructure (may already exist): {}",
-                            e
-                        );
-                    }
-                }
-
-                // Create publisher
-                match Publisher::new(
-                    &mq_connection,
-                    PublisherConfig {
-                        confirm_publish: true,
-                        timeout_secs: 30,
-                        exchange: "attune.executions".to_string(),
-                    },
-                )
-                .await
-                {
-                    Ok(publisher) => {
-                        info!("Message queue publisher initialized");
-                        state = state.with_publisher(Arc::new(publisher));
-                    }
-                    Err(e) => {
-                        warn!("Failed to create publisher: {}", e);
-                        warn!("Executions will not be queued for processing");
-                    }
-                }
-            }
-            Err(e) => {
-                warn!("Failed to connect to message queue: {}", e);
-                warn!("Executions will not be queued for processing");
-            }
-        }
+        info!("Message queue configured – starting background connection loop...");
+        let mq_url = mq_config.url.clone();
+        let state_clone = state.clone();
+        tokio::spawn(async move {
+            mq_reconnect_loop(state_clone, mq_url).await;
+        });
     } else {
-        warn!("Message queue not configured");
-        warn!("Executions will not be queued for processing");
+        warn!("Message queue not configured – executions will not be queued for processing");
     }

     info!(
@@ -143,7 +185,7 @@ async fn main() -> Result<()> {
     info!("PostgreSQL notification listener started");

     // Create and start server
-    let server = Server::new(std::sync::Arc::new(state));
+    let server = Server::new(state.clone());

     info!("Attune API Service is ready");

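The startup change above hands the reconnect loop a shared `Arc<AppState>` and lets it install the publisher later via `set_publisher`, while handlers read it via `get_publisher`. A std-only sketch of that shared-slot pattern, swapping the async accessors for `std::sync::Mutex` and a `String` stand-in for the real `Publisher` (names mirror the diff; the bodies are assumptions):

```rust
use std::sync::{Arc, Mutex};

/// A shared optional slot: starts empty, filled by a background task,
/// read (and possibly cleared) by request handlers.
struct AppState {
    publisher: Mutex<Option<Arc<String>>>,
}

impl AppState {
    fn new() -> Self {
        AppState {
            publisher: Mutex::new(None),
        }
    }
    fn set_publisher(&self, p: Arc<String>) {
        *self.publisher.lock().unwrap() = Some(p);
    }
    fn get_publisher(&self) -> Option<Arc<String>> {
        self.publisher.lock().unwrap().clone()
    }
    fn clear_publisher(&self) {
        *self.publisher.lock().unwrap() = None;
    }
}

fn main() {
    let state = Arc::new(AppState::new());
    // Before the background loop connects, handlers simply see None.
    assert!(state.get_publisher().is_none());
    // The reconnect loop installs the publisher once connected.
    state.set_publisher(Arc::new("publisher".to_string()));
    assert!(state.get_publisher().is_some());
    // A handler that hits a broken channel can clear the slot,
    // which is the signal the monitoring loop watches for.
    state.clear_publisher();
    assert!(state.get_publisher().is_none());
}
```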
@@ -2,6 +2,7 @@
 //!
 //! Provides endpoints for:
 //! - CRUD operations on artifacts (metadata + data)
+//! - File-backed version creation (execution writes file to shared volume)
 //! - File upload (binary) and download for file-type artifacts
 //! - JSON content versioning for structured artifacts
 //! - Progress append for progress-type artifacts (streaming updates)
@@ -17,8 +18,9 @@ use axum::{
     Json, Router,
 };
 use std::sync::Arc;
+use tracing::warn;

-use attune_common::models::enums::ArtifactType;
+use attune_common::models::enums::{ArtifactType, ArtifactVisibility};
 use attune_common::repositories::{
     artifact::{
         ArtifactRepository, ArtifactSearchFilters, ArtifactVersionRepository, CreateArtifactInput,
@@ -33,7 +35,8 @@ use crate::{
     artifact::{
         AppendProgressRequest, ArtifactQueryParams, ArtifactResponse, ArtifactSummary,
         ArtifactVersionResponse, ArtifactVersionSummary, CreateArtifactRequest,
-        CreateVersionJsonRequest, SetDataRequest, UpdateArtifactRequest,
+        CreateFileVersionRequest, CreateVersionJsonRequest, SetDataRequest,
+        UpdateArtifactRequest,
     },
     common::{PaginatedResponse, PaginationParams},
     ApiResponse, SuccessResponse,
@@ -66,6 +69,7 @@ pub async fn list_artifacts(
         scope: query.scope,
         owner: query.owner.clone(),
         r#type: query.r#type,
+        visibility: query.visibility,
         execution: query.execution,
         name_contains: query.name.clone(),
         limit: query.limit(),
@@ -175,11 +179,22 @@ pub async fn create_artifact(
         )));
     }

+    // Type-aware visibility default: progress artifacts are public by default
+    // (they're informational status indicators), everything else is private.
+    let visibility = request.visibility.unwrap_or_else(|| {
+        if request.r#type == ArtifactType::Progress {
+            ArtifactVisibility::Public
+        } else {
+            ArtifactVisibility::Private
+        }
+    });
+
     let input = CreateArtifactInput {
         r#ref: request.r#ref,
         scope: request.scope,
         owner: request.owner,
         r#type: request.r#type,
+        visibility,
         retention_policy: request.retention_policy,
         retention_limit: request.retention_limit,
         name: request.name,
@@ -229,6 +244,7 @@ pub async fn update_artifact(
         scope: request.scope,
         owner: request.owner,
         r#type: request.r#type,
+        visibility: request.visibility,
         retention_policy: request.retention_policy,
         retention_limit: request.retention_limit,
         name: request.name,
@@ -249,7 +265,7 @@ pub async fn update_artifact(
     ))
 }

-/// Delete an artifact (cascades to all versions)
+/// Delete an artifact (cascades to all versions, including disk files)
 #[utoipa::path(
     delete,
     path = "/api/v1/artifacts/{id}",
@@ -266,6 +282,22 @@ pub async fn delete_artifact(
     State(state): State<Arc<AppState>>,
     Path(id): Path<i64>,
 ) -> ApiResult<impl IntoResponse> {
+    let artifact = ArtifactRepository::find_by_id(&state.db, id)
+        .await?
+        .ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
+
+    // Before deleting DB rows, clean up any file-backed versions on disk
+    let file_versions =
+        ArtifactVersionRepository::find_file_versions_by_artifact(&state.db, id).await?;
+    if !file_versions.is_empty() {
+        let artifacts_dir = &state.config.artifacts_dir;
+        cleanup_version_files(artifacts_dir, &file_versions);
+        // Also try to remove the artifact's parent directory if it's now empty
+        let ref_dir = ref_to_dir_path(&artifact.r#ref);
+        let full_ref_dir = std::path::Path::new(artifacts_dir).join(&ref_dir);
+        cleanup_empty_parents(&full_ref_dir, artifacts_dir);
+    }
+
     let deleted = ArtifactRepository::delete(&state.db, id).await?;
     if !deleted {
         return Err(ApiError::NotFound(format!(
@@ -527,6 +559,7 @@ pub async fn create_version_json(
         ),
         content: None,
         content_json: Some(request.content),
+        file_path: None,
         meta: request.meta,
         created_by: request.created_by,
     };
@@ -542,6 +575,108 @@ pub async fn create_version_json(
     ))
 }

+/// Create a new file-backed version (no file content in request).
+///
+/// This endpoint allocates a version number and computes a `file_path` on the
+/// shared artifact volume. The caller (execution process) is expected to write
+/// the file content directly to `$ATTUNE_ARTIFACTS_DIR/{file_path}` after
+/// receiving the response. The worker finalizes `size_bytes` after execution.
+///
+/// Only applicable to file-type artifacts (FileBinary, FileText, FileDataTable, FileImage).
+#[utoipa::path(
+    post,
+    path = "/api/v1/artifacts/{id}/versions/file",
+    tag = "artifacts",
+    params(("id" = i64, Path, description = "Artifact ID")),
+    request_body = CreateFileVersionRequest,
+    responses(
+        (status = 201, description = "File version allocated", body = inline(ApiResponse<ArtifactVersionResponse>)),
+        (status = 400, description = "Artifact type is not file-based"),
+        (status = 404, description = "Artifact not found"),
+    ),
+    security(("bearer_auth" = []))
+)]
+pub async fn create_version_file(
+    RequireAuth(_user): RequireAuth,
+    State(state): State<Arc<AppState>>,
+    Path(id): Path<i64>,
+    Json(request): Json<CreateFileVersionRequest>,
+) -> ApiResult<impl IntoResponse> {
+    let artifact = ArtifactRepository::find_by_id(&state.db, id)
+        .await?
+        .ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
+
+    // Validate this is a file-type artifact
+    if !is_file_backed_type(artifact.r#type) {
+        return Err(ApiError::BadRequest(format!(
+            "Artifact '{}' is type {:?}, which does not support file-backed versions. \
+             Use POST /versions for JSON or POST /versions/upload for DB-stored files.",
+            artifact.r#ref, artifact.r#type,
+        )));
+    }
+
+    let content_type = request
+        .content_type
+        .unwrap_or_else(|| default_content_type_for_artifact(artifact.r#type));
+
+    // We need the version number to compute the file path. The DB function
+    // `next_artifact_version()` is called inside the INSERT, so we create the
+    // row first with file_path = NULL, then compute the path from the returned
+    // version number and update the row. This avoids a race condition where two
+    // concurrent requests could compute the same version number.
+    let input = CreateArtifactVersionInput {
+        artifact: id,
+        content_type: Some(content_type.clone()),
+        content: None,
+        content_json: None,
+        file_path: None, // Will be set in the update below
+        meta: request.meta,
+        created_by: request.created_by,
+    };
+
+    let version = ArtifactVersionRepository::create(&state.db, input).await?;
+
+    // Compute the file path from the artifact ref and version number
+    let file_path = compute_file_path(&artifact.r#ref, version.version, &content_type);
+
+    // Create the parent directory on disk
+    let artifacts_dir = &state.config.artifacts_dir;
+    let full_path = std::path::Path::new(artifacts_dir).join(&file_path);
+    if let Some(parent) = full_path.parent() {
+        tokio::fs::create_dir_all(parent).await.map_err(|e| {
+            ApiError::InternalServerError(format!(
+                "Failed to create artifact directory '{}': {}",
+                parent.display(),
+                e,
+            ))
+        })?;
+    }
+
+    // Update the version row with the computed file_path
+    sqlx::query("UPDATE artifact_version SET file_path = $1 WHERE id = $2")
+        .bind(&file_path)
+        .bind(version.id)
+        .execute(&state.db)
+        .await
+        .map_err(|e| {
+            ApiError::InternalServerError(format!(
+                "Failed to set file_path on version {}: {}",
+                version.id, e,
+            ))
+        })?;
+
+    // Return the version with file_path populated
+    let mut response = ArtifactVersionResponse::from(version);
+    response.file_path = Some(file_path);
+
+    Ok((
+        StatusCode::CREATED,
+        Json(ApiResponse::with_message(
+            response,
+            "File version allocated — write content to $ATTUNE_ARTIFACTS_DIR/<file_path>",
+        )),
+    ))
+}
+
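`create_version_file` derives `file_path` from the artifact ref, the allocated version number, and the content type via `compute_file_path`. The diff does not show that helper's body, so this is a hypothetical sketch assuming dots in the ref become directories and the extension is derived from the content type; only the helper's name and inputs come from the diff:

```rust
/// Hypothetical layout: "reports.daily" at version 3 with "text/plain"
/// becomes "reports/daily/v3.txt" relative to the artifacts_dir root.
/// The real mapping lives in the handler module and may differ.
fn compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
    let ext = match content_type {
        "text/plain" => "txt",
        "application/json" => "json",
        // Unknown content types fall back to a generic binary extension.
        _ => "bin",
    };
    format!("{}/v{}.{}", artifact_ref.replace('.', "/"), version, ext)
}

fn main() {
    assert_eq!(
        compute_file_path("reports.daily", 3, "text/plain"),
        "reports/daily/v3.txt"
    );
    assert_eq!(
        compute_file_path("run.output", 1, "application/x-unknown"),
        "run/output/v1.bin"
    );
}
```

Because the row is inserted before the path exists, concurrent allocations cannot collide on the same version number; the path is only written back once the DB has handed out a unique version.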
||||||
/// Upload a binary file as a new version (multipart/form-data)
|
/// Upload a binary file as a new version (multipart/form-data)
|
||||||
///
|
///
|
||||||
/// The file is sent as a multipart form field named `file`. Optional fields:
|
/// The file is sent as a multipart form field named `file`. Optional fields:
|
||||||
@@ -656,6 +791,7 @@ pub async fn upload_version(
|
|||||||
content_type: Some(resolved_ct),
|
content_type: Some(resolved_ct),
|
||||||
content: Some(file_bytes),
|
content: Some(file_bytes),
|
||||||
content_json: None,
|
content_json: None,
|
||||||
|
file_path: None,
|
||||||
meta,
|
meta,
|
||||||
created_by,
|
created_by,
|
||||||
};
|
};
|
||||||
@@ -671,7 +807,10 @@ pub async fn upload_version(
|
|||||||
))
|
))
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Download the binary content of a specific version
|
/// Download the binary content of a specific version.
|
||||||
|
///
|
||||||
|
/// For file-backed versions, reads from the shared artifact volume on disk.
|
||||||
|
/// For DB-stored versions, reads from the BYTEA/JSON content column.
|
||||||
#[utoipa::path(
|
#[utoipa::path(
|
||||||
get,
|
get,
|
||||||
path = "/api/v1/artifacts/{id}/versions/{version}/download",
|
path = "/api/v1/artifacts/{id}/versions/{version}/download",
|
||||||
@@ -695,69 +834,33 @@ pub async fn download_version(
|
|||||||
.await?
|
.await?
|
||||||
.ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
|
.ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
|
||||||
|
|
||||||
|
// First try without content (cheaper query) to check for file_path
|
||||||
|
let ver = ArtifactVersionRepository::find_by_version(&state.db, id, version)
|
||||||
|
.await?
|
||||||
|
.ok_or_else(|| {
|
||||||
|
ApiError::NotFound(format!("Version {} not found for artifact {}", version, id))
|
||||||
|
})?;
|
||||||
|
|
||||||
|
// File-backed version: read from disk
|
||||||
|
if let Some(ref file_path) = ver.file_path {
|
||||||
|
return serve_file_from_disk(
|
||||||
|
&state.config.artifacts_dir,
|
||||||
|
file_path,
|
||||||
|
&artifact.r#ref,
|
||||||
|
version,
|
||||||
|
ver.content_type.as_deref(),
|
||||||
|
)
|
||||||
|
.await;
|
||||||
|
}
|
||||||
|
|
||||||
|
// DB-stored version: need to fetch with content
|
||||||
let ver = ArtifactVersionRepository::find_by_version_with_content(&state.db, id, version)
|
let ver = ArtifactVersionRepository::find_by_version_with_content(&state.db, id, version)
|
||||||
.await?
|
.await?
|
||||||
.ok_or_else(|| {
|
.ok_or_else(|| {
|
||||||
ApiError::NotFound(format!("Version {} not found for artifact {}", version, id))
|
ApiError::NotFound(format!("Version {} not found for artifact {}", version, id))
|
||||||
})?;
|
})?;
|
||||||
|
|
||||||
// For binary content
|
serve_db_content(&artifact.r#ref, version, &ver)
|
||||||
if let Some(bytes) = ver.content {
|
|
||||||
let ct = ver
|
|
||||||
.content_type
|
|
||||||
.unwrap_or_else(|| "application/octet-stream".to_string());
|
|
||||||
|
|
||||||
let filename = format!(
|
|
||||||
"{}_v{}.{}",
|
|
||||||
artifact.r#ref.replace('.', "_"),
|
|
||||||
version,
|
|
||||||
extension_from_content_type(&ct)
|
|
||||||
);
|
|
||||||
|
|
||||||
return Ok((
|
|
||||||
StatusCode::OK,
|
|
||||||
[
|
|
||||||
(header::CONTENT_TYPE, ct),
|
|
||||||
(
|
|
||||||
header::CONTENT_DISPOSITION,
|
|
||||||
format!("attachment; filename=\"{}\"", filename),
|
|
||||||
),
|
|
||||||
],
|
|
||||||
Body::from(bytes),
|
|
||||||
)
|
|
||||||
.into_response());
|
|
||||||
}
|
|
||||||
|
|
||||||
// For JSON content, serialize and return
|
|
||||||
if let Some(json) = ver.content_json {
|
|
||||||
let bytes = serde_json::to_vec_pretty(&json).map_err(|e| {
|
|
||||||
ApiError::InternalServerError(format!("Failed to serialize JSON: {}", e))
|
|
||||||
})?;
|
|
||||||
|
|
||||||
let ct = ver
|
|
||||||
.content_type
|
|
||||||
.unwrap_or_else(|| "application/json".to_string());
|
|
||||||
|
|
||||||
let filename = format!("{}_v{}.json", artifact.r#ref.replace('.', "_"), version,);
|
|
||||||
|
|
||||||
return Ok((
|
|
||||||
StatusCode::OK,
|
|
||||||
[
|
|
||||||
(header::CONTENT_TYPE, ct),
|
|
||||||
(
|
|
||||||
header::CONTENT_DISPOSITION,
|
|
||||||
format!("attachment; filename=\"{}\"", filename),
|
|
||||||
),
|
|
||||||
],
|
|
||||||
Body::from(bytes),
|
|
||||||
)
|
|
||||||
.into_response());
|
|
||||||
}
|
|
||||||
|
|
||||||
Err(ApiError::NotFound(format!(
|
|
||||||
"Version {} of artifact {} has no downloadable content",
|
|
||||||
version, id
|
|
||||||
)))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Download the latest version's content
|
/// Download the latest version's content
|
||||||
@@ -781,72 +884,34 @@ pub async fn download_latest(
|
|||||||
.await?
|
.await?
|
||||||
.ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
|
.ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
|
||||||
|
|
||||||
let ver = ArtifactVersionRepository::find_latest_with_content(&state.db, id)
|
// First try without content (cheaper query) to check for file_path
|
||||||
|
let ver = ArtifactVersionRepository::find_latest(&state.db, id)
|
||||||
.await?
|
.await?
|
||||||
.ok_or_else(|| ApiError::NotFound(format!("No versions found for artifact {}", id)))?;
|
.ok_or_else(|| ApiError::NotFound(format!("No versions found for artifact {}", id)))?;
|
||||||
|
|
||||||
let version = ver.version;
|
let version = ver.version;
|
||||||
|
|
||||||
// For binary content
|
// File-backed version: read from disk
|
||||||
if let Some(bytes) = ver.content {
|
if let Some(ref file_path) = ver.file_path {
|
||||||
let ct = ver
|
return serve_file_from_disk(
|
||||||
.content_type
|
&state.config.artifacts_dir,
|
||||||
.unwrap_or_else(|| "application/octet-stream".to_string());
|
file_path,
|
||||||
|
&artifact.r#ref,
|
||||||
let filename = format!(
|
|
||||||
"{}_v{}.{}",
|
|
||||||
artifact.r#ref.replace('.', "_"),
|
|
||||||
version,
|
version,
|
||||||
extension_from_content_type(&ct)
|
ver.content_type.as_deref(),
|
||||||
);
|
|
||||||
|
|
||||||
return Ok((
|
|
||||||
StatusCode::OK,
|
|
||||||
[
|
|
||||||
(header::CONTENT_TYPE, ct),
|
|
||||||
(
|
|
||||||
header::CONTENT_DISPOSITION,
|
|
||||||
format!("attachment; filename=\"{}\"", filename),
|
|
||||||
),
|
|
||||||
],
|
|
||||||
Body::from(bytes),
|
|
||||||
)
|
)
|
||||||
.into_response());
|
.await;
|
||||||
}
|
}
|
||||||
|
|
||||||
// For JSON content
|
// DB-stored version: need to fetch with content
|
||||||
if let Some(json) = ver.content_json {
|
let ver = ArtifactVersionRepository::find_latest_with_content(&state.db, id)
|
||||||
let bytes = serde_json::to_vec_pretty(&json).map_err(|e| {
|
.await?
|
||||||
ApiError::InternalServerError(format!("Failed to serialize JSON: {}", e))
|
.ok_or_else(|| ApiError::NotFound(format!("No versions found for artifact {}", id)))?;
|
||||||
})?;
|
|
||||||
|
|
||||||
let ct = ver
|
serve_db_content(&artifact.r#ref, ver.version, &ver)
|
||||||
.content_type
|
|
||||||
.unwrap_or_else(|| "application/json".to_string());
|
|
||||||
|
|
||||||
let filename = format!("{}_v{}.json", artifact.r#ref.replace('.', "_"), version,);
|
|
||||||
|
|
||||||
return Ok((
|
|
||||||
StatusCode::OK,
|
|
||||||
[
|
|
||||||
(header::CONTENT_TYPE, ct),
|
|
||||||
(
|
|
||||||
header::CONTENT_DISPOSITION,
|
|
||||||
format!("attachment; filename=\"{}\"", filename),
|
|
||||||
),
|
|
||||||
],
|
|
||||||
Body::from(bytes),
|
|
||||||
)
|
|
||||||
.into_response());
|
|
||||||
}
|
|
||||||
|
|
||||||
Err(ApiError::NotFound(format!(
|
|
||||||
"Latest version of artifact {} has no downloadable content",
|
|
||||||
id
|
|
||||||
)))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Delete a specific version by version number
|
/// Delete a specific version by version number (including disk file if file-backed)
|
||||||
#[utoipa::path(
|
#[utoipa::path(
|
||||||
delete,
|
delete,
|
||||||
path = "/api/v1/artifacts/{id}/versions/{version}",
|
path = "/api/v1/artifacts/{id}/versions/{version}",
|
||||||
@@ -867,7 +932,7 @@ pub async fn delete_version(
     Path((id, version)): Path<(i64, i32)>,
 ) -> ApiResult<impl IntoResponse> {
     // Verify artifact exists
-    ArtifactRepository::find_by_id(&state.db, id)
+    let artifact = ArtifactRepository::find_by_id(&state.db, id)
         .await?
         .ok_or_else(|| ApiError::NotFound(format!("Artifact with ID {} not found", id)))?;
 
@@ -878,6 +943,25 @@ pub async fn delete_version(
         ApiError::NotFound(format!("Version {} not found for artifact {}", version, id))
     })?;
 
+    // Clean up disk file if file-backed
+    if let Some(ref file_path) = ver.file_path {
+        let artifacts_dir = &state.config.artifacts_dir;
+        let full_path = std::path::Path::new(artifacts_dir).join(file_path);
+        if full_path.exists() {
+            if let Err(e) = tokio::fs::remove_file(&full_path).await {
+                warn!(
+                    "Failed to delete artifact file '{}': {}. DB row will still be deleted.",
+                    full_path.display(),
+                    e
+                );
+            }
+        }
+        // Try to clean up empty parent directories
+        let ref_dir = ref_to_dir_path(&artifact.r#ref);
+        let full_ref_dir = std::path::Path::new(artifacts_dir).join(&ref_dir);
+        cleanup_empty_parents(&full_ref_dir, artifacts_dir);
+    }
+
     ArtifactVersionRepository::delete(&state.db, ver.id).await?;
 
     Ok((
@@ -890,6 +974,212 @@ pub async fn delete_version(
 // Helpers
 // ============================================================================
 
+/// Returns true for artifact types that should use file-backed storage on disk.
+fn is_file_backed_type(artifact_type: ArtifactType) -> bool {
+    matches!(
+        artifact_type,
+        ArtifactType::FileBinary
+            | ArtifactType::FileText
+            | ArtifactType::FileDataTable
+            | ArtifactType::FileImage
+    )
+}
+
+/// Convert an artifact ref to a directory path by replacing dots with path separators.
+/// e.g., "mypack.build_log" -> "mypack/build_log"
+fn ref_to_dir_path(artifact_ref: &str) -> String {
+    artifact_ref.replace('.', "/")
+}
+
+/// Compute the relative file path for a file-backed artifact version.
+///
+/// Pattern: `{ref_slug}/v{version}.{ext}`
+/// e.g., `mypack/build_log/v1.txt`
+pub fn compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
+    let ref_path = ref_to_dir_path(artifact_ref);
+    let ext = extension_from_content_type(content_type);
+    format!("{}/v{}.{}", ref_path, version, ext)
+}
+
+/// Return a sensible default content type for a given artifact type.
+fn default_content_type_for_artifact(artifact_type: ArtifactType) -> String {
+    match artifact_type {
+        ArtifactType::FileText => "text/plain".to_string(),
+        ArtifactType::FileDataTable => "text/csv".to_string(),
+        ArtifactType::FileImage => "image/png".to_string(),
+        ArtifactType::FileBinary => "application/octet-stream".to_string(),
+        _ => "application/octet-stream".to_string(),
+    }
+}
+
+/// Serve a file-backed artifact version from disk.
+async fn serve_file_from_disk(
+    artifacts_dir: &str,
+    file_path: &str,
+    artifact_ref: &str,
+    version: i32,
+    content_type: Option<&str>,
+) -> ApiResult<axum::response::Response> {
+    let full_path = std::path::Path::new(artifacts_dir).join(file_path);
+
+    if !full_path.exists() {
+        return Err(ApiError::NotFound(format!(
+            "File for version {} of artifact '{}' not found on disk (expected at '{}')",
+            version, artifact_ref, file_path,
+        )));
+    }
+
+    let bytes = tokio::fs::read(&full_path).await.map_err(|e| {
+        ApiError::InternalServerError(format!(
+            "Failed to read artifact file '{}': {}",
+            full_path.display(),
+            e,
+        ))
+    })?;
+
+    let ct = content_type
+        .unwrap_or("application/octet-stream")
+        .to_string();
+    let filename = format!(
+        "{}_v{}.{}",
+        artifact_ref.replace('.', "_"),
+        version,
+        extension_from_content_type(&ct),
+    );
+
+    Ok((
+        StatusCode::OK,
+        [
+            (header::CONTENT_TYPE, ct),
+            (
+                header::CONTENT_DISPOSITION,
+                format!("attachment; filename=\"{}\"", filename),
+            ),
+        ],
+        Body::from(bytes),
+    )
+        .into_response())
+}
+
+/// Serve a DB-stored artifact version (BYTEA or JSON content).
+fn serve_db_content(
+    artifact_ref: &str,
+    version: i32,
+    ver: &attune_common::models::artifact_version::ArtifactVersion,
+) -> ApiResult<axum::response::Response> {
+    // For binary content
+    if let Some(ref bytes) = ver.content {
+        let ct = ver
+            .content_type
+            .clone()
+            .unwrap_or_else(|| "application/octet-stream".to_string());
+
+        let filename = format!(
+            "{}_v{}.{}",
+            artifact_ref.replace('.', "_"),
+            version,
+            extension_from_content_type(&ct),
+        );
+
+        return Ok((
+            StatusCode::OK,
+            [
+                (header::CONTENT_TYPE, ct),
+                (
+                    header::CONTENT_DISPOSITION,
+                    format!("attachment; filename=\"{}\"", filename),
+                ),
+            ],
+            Body::from(bytes.clone()),
+        )
+            .into_response());
+    }
+
+    // For JSON content, serialize and return
+    if let Some(ref json) = ver.content_json {
+        let bytes = serde_json::to_vec_pretty(json).map_err(|e| {
+            ApiError::InternalServerError(format!("Failed to serialize JSON: {}", e))
+        })?;
+
+        let ct = ver
+            .content_type
+            .clone()
+            .unwrap_or_else(|| "application/json".to_string());
+
+        let filename = format!("{}_v{}.json", artifact_ref.replace('.', "_"), version);
+
+        return Ok((
+            StatusCode::OK,
+            [
+                (header::CONTENT_TYPE, ct),
+                (
+                    header::CONTENT_DISPOSITION,
+                    format!("attachment; filename=\"{}\"", filename),
+                ),
+            ],
+            Body::from(bytes),
+        )
+            .into_response());
+    }
+
+    Err(ApiError::NotFound(format!(
+        "Version {} of artifact '{}' has no downloadable content",
+        version, artifact_ref,
+    )))
+}
+
+/// Delete disk files for a set of file-backed artifact versions.
+/// Logs warnings on failure but does not propagate errors.
+fn cleanup_version_files(
+    artifacts_dir: &str,
+    versions: &[attune_common::models::artifact_version::ArtifactVersion],
+) {
+    for ver in versions {
+        if let Some(ref file_path) = ver.file_path {
+            let full_path = std::path::Path::new(artifacts_dir).join(file_path);
+            if full_path.exists() {
+                if let Err(e) = std::fs::remove_file(&full_path) {
+                    warn!(
+                        "Failed to delete artifact file '{}': {}",
+                        full_path.display(),
+                        e,
+                    );
+                }
+            }
+        }
+    }
+}
+
+/// Attempt to remove empty parent directories up to (but not including) the
+/// artifacts_dir root. This is best-effort cleanup.
+fn cleanup_empty_parents(dir: &std::path::Path, stop_at: &str) {
+    let stop_path = std::path::Path::new(stop_at);
+    let mut current = dir.to_path_buf();
+    while current != stop_path && current.starts_with(stop_path) {
+        match std::fs::read_dir(&current) {
+            Ok(mut entries) => {
+                if entries.next().is_some() {
+                    // Directory is not empty, stop climbing
+                    break;
+                }
+                if let Err(e) = std::fs::remove_dir(&current) {
+                    warn!(
+                        "Failed to remove empty directory '{}': {}",
+                        current.display(),
+                        e,
+                    );
+                    break;
+                }
+            }
+            Err(_) => break,
+        }
+        match current.parent() {
+            Some(parent) => current = parent.to_path_buf(),
+            None => break,
+        }
+    }
+}
 
 /// Derive a simple file extension from a MIME content type
 fn extension_from_content_type(ct: &str) -> &str {
     match ct {
@@ -944,6 +1234,7 @@ pub fn routes() -> Router<Arc<AppState>> {
         )
         .route("/artifacts/{id}/versions/latest", get(get_latest_version))
         .route("/artifacts/{id}/versions/upload", post(upload_version))
+        .route("/artifacts/{id}/versions/file", post(create_version_file))
         .route(
             "/artifacts/{id}/versions/{version}",
             get(get_version).delete(delete_version),
@@ -975,4 +1266,61 @@ mod tests {
         assert_eq!(extension_from_content_type("image/png"), "png");
         assert_eq!(extension_from_content_type("unknown/type"), "bin");
     }
+
+    #[test]
+    fn test_compute_file_path() {
+        assert_eq!(
+            compute_file_path("mypack.build_log", 1, "text/plain"),
+            "mypack/build_log/v1.txt"
+        );
+        assert_eq!(
+            compute_file_path("mypack.build_log", 3, "application/json"),
+            "mypack/build_log/v3.json"
+        );
+        assert_eq!(
+            compute_file_path("core.test.results", 2, "text/csv"),
+            "core/test/results/v2.csv"
+        );
+        assert_eq!(
+            compute_file_path("simple", 1, "application/octet-stream"),
+            "simple/v1.bin"
+        );
+    }
+
+    #[test]
+    fn test_ref_to_dir_path() {
+        assert_eq!(ref_to_dir_path("mypack.build_log"), "mypack/build_log");
+        assert_eq!(ref_to_dir_path("simple"), "simple");
+        assert_eq!(ref_to_dir_path("a.b.c.d"), "a/b/c/d");
+    }
+
+    #[test]
+    fn test_is_file_backed_type() {
+        assert!(is_file_backed_type(ArtifactType::FileBinary));
+        assert!(is_file_backed_type(ArtifactType::FileText));
+        assert!(is_file_backed_type(ArtifactType::FileDataTable));
+        assert!(is_file_backed_type(ArtifactType::FileImage));
+        assert!(!is_file_backed_type(ArtifactType::Progress));
+        assert!(!is_file_backed_type(ArtifactType::Url));
+    }
+
+    #[test]
+    fn test_default_content_type_for_artifact() {
+        assert_eq!(
+            default_content_type_for_artifact(ArtifactType::FileText),
+            "text/plain"
+        );
+        assert_eq!(
+            default_content_type_for_artifact(ArtifactType::FileDataTable),
+            "text/csv"
+        );
+        assert_eq!(
+            default_content_type_for_artifact(ArtifactType::FileImage),
+            "image/png"
+        );
+        assert_eq!(
+            default_content_type_for_artifact(ArtifactType::FileBinary),
+            "application/octet-stream"
+        );
+    }
 }
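Together, the path helpers and their tests pin down the on-disk layout for file-backed versions. Below is a minimal, std-only sketch that can be run standalone. Note that the body of `extension_from_content_type` is elided in this commit, so the mapping used here is an assumption reverse-engineered from the tests (`text/plain` to `txt`, `application/json` to `json`, `text/csv` to `csv`, `image/png` to `png`, anything unknown to `bin`):

```rust
/// Convert an artifact ref to a directory path ("mypack.build_log" -> "mypack/build_log").
fn ref_to_dir_path(artifact_ref: &str) -> String {
    artifact_ref.replace('.', "/")
}

/// Hypothetical extension mapping: the real body is not shown in this diff;
/// this table is only chosen to be consistent with the tests above.
fn extension_from_content_type(ct: &str) -> &str {
    match ct {
        "text/plain" => "txt",
        "text/csv" => "csv",
        "application/json" => "json",
        "image/png" => "png",
        _ => "bin",
    }
}

/// Relative on-disk path for a file-backed version: {ref_slug}/v{version}.{ext}
fn compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
    format!(
        "{}/v{}.{}",
        ref_to_dir_path(artifact_ref),
        version,
        extension_from_content_type(content_type)
    )
}

fn main() {
    // Values taken from the test_compute_file_path cases in the diff.
    assert_eq!(
        compute_file_path("mypack.build_log", 1, "text/plain"),
        "mypack/build_log/v1.txt"
    );
    assert_eq!(
        compute_file_path("core.test.results", 2, "text/csv"),
        "core/test/results/v2.csv"
    );
    assert_eq!(
        compute_file_path("simple", 1, "application/octet-stream"),
        "simple/v1.bin"
    );
    println!("ok");
}
```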
@@ -170,7 +170,7 @@ pub async fn create_event(
     let event = EventRepository::create(&state.db, input).await?;
 
     // Publish EventCreated message to message queue if publisher is available
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let message_payload = EventCreatedPayload {
             event_id: event.id,
             trigger_id: event.trigger,
@@ -99,7 +99,7 @@ pub async fn create_execution(
         .with_source("api-service")
         .with_correlation_id(uuid::Uuid::new_v4());
 
-    if let Some(publisher) = &state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         publisher.publish_envelope(&message).await.map_err(|e| {
             ApiError::InternalServerError(format!("Failed to publish message: {}", e))
         })?;
@@ -403,7 +403,7 @@ pub async fn respond_to_inquiry(
     let updated_inquiry = InquiryRepository::update(&state.db, id, update_input).await?;
 
     // Publish InquiryResponded message if publisher is available
-    if let Some(publisher) = &state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let user_id = user
             .0
             .identity_id()
@@ -1,7 +1,7 @@
 //! Pack management API routes
 
 use axum::{
-    extract::{Path, Query, State},
+    extract::{Multipart, Path, Query, State},
     http::StatusCode,
     response::IntoResponse,
     routing::get,
@@ -448,6 +448,190 @@ async fn execute_and_store_pack_tests(
     Some(Ok(result))
 }
 
+/// Upload and register a pack from a tar.gz archive (multipart/form-data)
+///
+/// The archive should be a gzipped tar containing the pack directory at its root
+/// (i.e. the archive should unpack to files like `pack.yaml`, `actions/`, etc.).
+/// The multipart field name must be `pack`.
+///
+/// Optional form fields:
+/// - `force`: `"true"` to overwrite an existing pack with the same ref
+/// - `skip_tests`: `"true"` to skip test execution after registration
+#[utoipa::path(
+    post,
+    path = "/api/v1/packs/upload",
+    tag = "packs",
+    request_body(content = String, content_type = "multipart/form-data"),
+    responses(
+        (status = 201, description = "Pack uploaded and registered successfully", body = inline(ApiResponse<PackInstallResponse>)),
+        (status = 400, description = "Invalid archive or missing pack.yaml"),
+        (status = 409, description = "Pack already exists (use force=true to overwrite)"),
+    ),
+    security(("bearer_auth" = []))
+)]
+pub async fn upload_pack(
+    State(state): State<Arc<AppState>>,
+    RequireAuth(user): RequireAuth,
+    mut multipart: Multipart,
+) -> ApiResult<impl IntoResponse> {
+    use std::io::Cursor;
+
+    const MAX_PACK_SIZE: usize = 100 * 1024 * 1024; // 100 MB
+
+    let mut pack_bytes: Option<Vec<u8>> = None;
+    let mut force = false;
+    let mut skip_tests = false;
+
+    // Parse multipart fields
+    while let Some(field) = multipart
+        .next_field()
+        .await
+        .map_err(|e| ApiError::BadRequest(format!("Multipart error: {}", e)))?
+    {
+        match field.name() {
+            Some("pack") => {
+                let data = field.bytes().await.map_err(|e| {
+                    ApiError::BadRequest(format!("Failed to read pack data: {}", e))
+                })?;
+                if data.len() > MAX_PACK_SIZE {
+                    return Err(ApiError::BadRequest(format!(
+                        "Pack archive too large: {} bytes (max {} bytes)",
+                        data.len(),
+                        MAX_PACK_SIZE
+                    )));
+                }
+                pack_bytes = Some(data.to_vec());
+            }
+            Some("force") => {
+                let val = field.text().await.map_err(|e| {
+                    ApiError::BadRequest(format!("Failed to read force field: {}", e))
+                })?;
+                force = val.trim().eq_ignore_ascii_case("true");
+            }
+            Some("skip_tests") => {
+                let val = field.text().await.map_err(|e| {
+                    ApiError::BadRequest(format!("Failed to read skip_tests field: {}", e))
+                })?;
+                skip_tests = val.trim().eq_ignore_ascii_case("true");
+            }
+            _ => {
+                // Consume and ignore unknown fields
+                let _ = field.bytes().await;
+            }
+        }
+    }
+
+    let pack_data = pack_bytes.ok_or_else(|| {
+        ApiError::BadRequest("Missing required 'pack' field in multipart upload".to_string())
+    })?;
+
+    // Extract the tar.gz archive into a temporary directory
+    let temp_extract_dir = tempfile::tempdir().map_err(|e| {
+        ApiError::InternalServerError(format!("Failed to create temp directory: {}", e))
+    })?;
+
+    {
+        let cursor = Cursor::new(&pack_data[..]);
+        let gz = flate2::read::GzDecoder::new(cursor);
+        let mut archive = tar::Archive::new(gz);
+        archive.unpack(temp_extract_dir.path()).map_err(|e| {
+            ApiError::BadRequest(format!(
+                "Failed to extract pack archive (must be a valid .tar.gz): {}",
+                e
+            ))
+        })?;
+    }
+
+    // Find pack.yaml — it may be at the root or inside a single subdirectory
+    // (e.g. when GitHub tarballs add a top-level directory)
+    let pack_root = find_pack_root(temp_extract_dir.path()).ok_or_else(|| {
+        ApiError::BadRequest(
+            "Could not find pack.yaml in the uploaded archive. \
+             Ensure the archive contains pack.yaml at its root or in a single top-level directory."
+                .to_string(),
+        )
+    })?;
+
+    // Read pack ref from pack.yaml to determine the final storage path
+    let pack_yaml_path = pack_root.join("pack.yaml");
+    let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)
+        .map_err(|e| ApiError::InternalServerError(format!("Failed to read pack.yaml: {}", e)))?;
+    let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)
+        .map_err(|e| ApiError::BadRequest(format!("Failed to parse pack.yaml: {}", e)))?;
+    let pack_ref = pack_yaml
+        .get("ref")
+        .and_then(|v| v.as_str())
+        .ok_or_else(|| ApiError::BadRequest("Missing 'ref' field in pack.yaml".to_string()))?
+        .to_string();
+
+    // Move pack to permanent storage
+    use attune_common::pack_registry::PackStorage;
+    let storage = PackStorage::new(&state.config.packs_base_dir);
+    let final_path = storage
+        .install_pack(&pack_root, &pack_ref, None)
+        .map_err(|e| {
+            ApiError::InternalServerError(format!("Failed to move pack to storage: {}", e))
+        })?;
+
+    tracing::info!(
+        "Pack '{}' uploaded and stored at {:?}",
+        pack_ref,
+        final_path
+    );
+
+    // Register the pack in the database
+    let pack_id = register_pack_internal(
+        state.clone(),
+        user.claims.sub,
+        final_path.to_string_lossy().to_string(),
+        force,
+        skip_tests,
+    )
+    .await
+    .map_err(|e| {
+        // Clean up permanent storage on failure
+        let _ = std::fs::remove_dir_all(&final_path);
+        e
+    })?;
+
+    // Fetch the registered pack
+    let pack = PackRepository::find_by_id(&state.db, pack_id)
+        .await?
+        .ok_or_else(|| ApiError::NotFound(format!("Pack with ID {} not found", pack_id)))?;
+
+    let response = ApiResponse::with_message(
+        PackInstallResponse {
+            pack: PackResponse::from(pack),
+            test_result: None,
+            tests_skipped: skip_tests,
+        },
+        "Pack uploaded and registered successfully",
+    );
+
+    Ok((StatusCode::CREATED, Json(response)))
+}
+
+/// Walk the extracted directory and find the directory that contains `pack.yaml`.
+/// Returns the path of the directory containing `pack.yaml`, or `None` if not found.
+fn find_pack_root(base: &std::path::Path) -> Option<PathBuf> {
+    // Check root first
+    if base.join("pack.yaml").exists() {
+        return Some(base.to_path_buf());
+    }
+
+    // Check one level deep (e.g. GitHub tarballs: repo-main/pack.yaml)
+    if let Ok(entries) = std::fs::read_dir(base) {
+        for entry in entries.flatten() {
+            let path = entry.path();
+            if path.is_dir() && path.join("pack.yaml").exists() {
+                return Some(path);
+            }
+        }
+    }
+
+    None
+}
+
 /// Register a pack from local filesystem
 #[utoipa::path(
     post,
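The root-or-one-level-deep lookup that `upload_pack` relies on can be exercised standalone. This sketch copies `find_pack_root` out of the handler's module and drives it against a throwaway layout under the system temp dir (directory names here are illustrative, not from the commit):

```rust
use std::fs;
use std::path::{Path, PathBuf};

// `find_pack_root` as added in this commit, extracted verbatim so the
// lookup rule (root first, then one level deep) can be tested in isolation.
fn find_pack_root(base: &Path) -> Option<PathBuf> {
    if base.join("pack.yaml").exists() {
        return Some(base.to_path_buf());
    }
    if let Ok(entries) = fs::read_dir(base) {
        for entry in entries.flatten() {
            let path = entry.path();
            if path.is_dir() && path.join("pack.yaml").exists() {
                return Some(path);
            }
        }
    }
    None
}

fn main() {
    // Mimic a GitHub-style tarball: <base>/repo-main/pack.yaml
    let base = std::env::temp_dir().join("attune_find_pack_root_demo");
    let _ = fs::remove_dir_all(&base); // clean slate from any previous run
    let nested = base.join("repo-main");
    fs::create_dir_all(&nested).unwrap();
    fs::write(nested.join("pack.yaml"), "ref: demo\n").unwrap();

    // pack.yaml one level deep is found
    assert_eq!(find_pack_root(&base), Some(nested.clone()));

    // pack.yaml at the root takes precedence over the nested copy
    fs::write(base.join("pack.yaml"), "ref: demo\n").unwrap();
    assert_eq!(find_pack_root(&base), Some(base.clone()));

    fs::remove_dir_all(&base).unwrap();
    println!("ok");
}
```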
@@ -1051,7 +1235,7 @@ async fn register_pack_internal(
 
     // Publish pack.registered event so workers can proactively set up
     // runtime environments (virtualenvs, node_modules, etc.).
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let runtime_names = attune_common::pack_environment::collect_runtime_names_for_pack(
             &state.db, pack.id, &pack_path,
         )
@@ -2241,6 +2425,7 @@ pub fn routes() -> Router<Arc<AppState>> {
             axum::routing::post(register_packs_batch),
         )
         .route("/packs/install", axum::routing::post(install_pack))
+        .route("/packs/upload", axum::routing::post(upload_pack))
         .route("/packs/download", axum::routing::post(download_packs))
         .route(
             "/packs/dependencies",
@@ -341,7 +341,7 @@ pub async fn create_rule(
     let rule = RuleRepository::create(&state.db, rule_input).await?;
 
     // Publish RuleCreated message to notify sensor service
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let payload = RuleCreatedPayload {
             rule_id: rule.id,
             rule_ref: rule.r#ref.clone(),
@@ -440,7 +440,7 @@ pub async fn update_rule(
     // If the rule is enabled and trigger params changed, publish RuleEnabled message
     // to notify sensors to restart with new parameters
     if rule.enabled && trigger_params_changed {
-        if let Some(ref publisher) = state.publisher {
+        if let Some(publisher) = state.get_publisher().await {
             let payload = RuleEnabledPayload {
                 rule_id: rule.id,
                 rule_ref: rule.r#ref.clone(),
@@ -543,7 +543,7 @@ pub async fn enable_rule(
     let rule = RuleRepository::update(&state.db, existing_rule.id, update_input).await?;
 
     // Publish RuleEnabled message to notify sensor service
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let payload = RuleEnabledPayload {
             rule_id: rule.id,
             rule_ref: rule.r#ref.clone(),
@@ -606,7 +606,7 @@ pub async fn disable_rule(
     let rule = RuleRepository::update(&state.db, existing_rule.id, update_input).await?;
 
     // Publish RuleDisabled message to notify sensor service
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let payload = RuleDisabledPayload {
             rule_id: rule.id,
             rule_ref: rule.r#ref.clone(),
@@ -650,7 +650,7 @@ pub async fn receive_webhook(
         "Webhook event {} created, attempting to publish EventCreated message",
         event.id
     );
-    if let Some(ref publisher) = state.publisher {
+    if let Some(publisher) = state.get_publisher().await {
         let message_payload = EventCreatedPayload {
             event_id: event.id,
             trigger_id: event.trigger,
@@ -2,7 +2,7 @@
 
 use sqlx::PgPool;
 use std::sync::Arc;
-use tokio::sync::broadcast;
+use tokio::sync::{broadcast, RwLock};
 
 use crate::auth::jwt::JwtConfig;
 use attune_common::{config::Config, mq::Publisher};
@@ -18,8 +18,8 @@ pub struct AppState {
     pub cors_origins: Vec<String>,
     /// Application configuration
     pub config: Arc<Config>,
-    /// Optional message queue publisher
-    pub publisher: Option<Arc<Publisher>>,
+    /// Optional message queue publisher (shared, swappable after reconnection)
+    pub publisher: Arc<RwLock<Option<Arc<Publisher>>>>,
     /// Broadcast channel for SSE notifications
     pub broadcast_tx: broadcast::Sender<String>,
 }
@@ -50,15 +50,20 @@ impl AppState {
             jwt_config: Arc::new(jwt_config),
             cors_origins,
             config: Arc::new(config),
-            publisher: None,
+            publisher: Arc::new(RwLock::new(None)),
             broadcast_tx,
         }
     }
 
-    /// Set the message queue publisher
-    pub fn with_publisher(mut self, publisher: Arc<Publisher>) -> Self {
-        self.publisher = Some(publisher);
-        self
+    /// Set the message queue publisher (called once at startup or after reconnection)
+    pub async fn set_publisher(&self, publisher: Arc<Publisher>) {
+        let mut guard = self.publisher.write().await;
+        *guard = Some(publisher);
+    }
+
+    /// Get a clone of the current publisher, if available
+    pub async fn get_publisher(&self) -> Option<Arc<Publisher>> {
+        self.publisher.read().await.clone()
     }
 }
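The `AppState` change above replaces a fixed `Option<Arc<Publisher>>` with a shared, swappable slot, so a reconnect task can install a fresh publisher while request handlers keep cheap read access through `get_publisher()`. Below is a dependency-free sketch of that pattern: it uses `std::sync::RwLock` instead of the async `tokio::sync::RwLock`, and the `Publisher` struct is a stand-in, not the real `attune_common::mq::Publisher`:

```rust
use std::sync::{Arc, RwLock};

// Stand-in for attune_common::mq::Publisher (an assumption for this sketch).
#[derive(Debug, PartialEq)]
struct Publisher {
    url: String,
}

// Mirrors AppState.publisher: a shared slot that handlers read and a
// reconnect task can overwrite. The real code uses tokio::sync::RwLock
// with async set_publisher/get_publisher.
#[derive(Clone)]
struct State {
    publisher: Arc<RwLock<Option<Arc<Publisher>>>>,
}

impl State {
    fn new() -> Self {
        Self { publisher: Arc::new(RwLock::new(None)) }
    }

    /// Install (or replace) the publisher, e.g. after a reconnect.
    fn set_publisher(&self, p: Arc<Publisher>) {
        *self.publisher.write().unwrap() = Some(p);
    }

    /// Cheap clone of the current publisher, if any.
    fn get_publisher(&self) -> Option<Arc<Publisher>> {
        self.publisher.read().unwrap().clone()
    }
}

fn main() {
    let state = State::new();
    assert!(state.get_publisher().is_none()); // starts unset, like AppState::new

    state.set_publisher(Arc::new(Publisher { url: "amqp://mq-1".into() }));
    let handler_view = state.clone(); // handlers hold a clone of the state

    // A reconnect task swaps in a new publisher; existing clones observe it.
    state.set_publisher(Arc::new(Publisher { url: "amqp://mq-2".into() }));
    assert_eq!(handler_view.get_publisher().unwrap().url, "amqp://mq-2");
    println!("ok");
}
```

The payoff over the old `with_publisher` builder is that the publisher can change after startup without rebuilding `AppState`, which is why every `if let Some(ref publisher) = state.publisher` call site in this commit becomes `if let Some(publisher) = state.get_publisher().await`.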
@@ -16,12 +16,13 @@ attune-common = { path = "../common" }
 
 # Async runtime
 tokio = { workspace = true }
+futures = { workspace = true }
 
 # CLI framework
 clap = { workspace = true, features = ["derive", "env", "string"] }
 
 # HTTP client
-reqwest = { workspace = true }
+reqwest = { workspace = true, features = ["multipart", "stream"] }
 
 # Serialization
 serde = { workspace = true }
@@ -41,6 +42,14 @@ dirs = "5.0"
 
 # URL encoding
 urlencoding = "2.1"
+url = { workspace = true }
+
+# Archive/compression
+tar = { workspace = true }
+flate2 = { workspace = true }
+
+# WebSocket client (for notifier integration)
+tokio-tungstenite = { workspace = true }
 
 # Terminal UI
 colored = "2.1"
@@ -1,5 +1,5 @@
 use anyhow::{Context, Result};
-use reqwest::{Client as HttpClient, Method, RequestBuilder, Response, StatusCode};
+use reqwest::{multipart, Client as HttpClient, Method, RequestBuilder, Response, StatusCode};
 use serde::{de::DeserializeOwned, Serialize};
 use std::path::PathBuf;
 use std::time::Duration;
@@ -39,7 +39,7 @@ impl ApiClient {
 
         Self {
             client: HttpClient::builder()
-                .timeout(Duration::from_secs(30))
+                .timeout(Duration::from_secs(300)) // longer timeout for uploads
                 .build()
                 .expect("Failed to build HTTP client"),
             base_url,
@@ -50,10 +50,15 @@ impl ApiClient {
     }
 
     /// Create a new API client
+    /// Return the base URL this client is configured to talk to.
+    pub fn base_url(&self) -> &str {
+        &self.base_url
+    }
+
     #[cfg(test)]
     pub fn new(base_url: String, auth_token: Option<String>) -> Self {
         let client = HttpClient::builder()
-            .timeout(Duration::from_secs(30))
+            .timeout(Duration::from_secs(300))
             .build()
             .expect("Failed to build HTTP client");
 
@@ -296,6 +301,55 @@ impl ApiClient {
             anyhow::bail!("API error ({}): {}", status, error_text);
         }
     }
 
+    /// POST a multipart/form-data request with a file field and optional text fields.
+    ///
+    /// - `file_field_name`: the multipart field name for the file
+    /// - `file_bytes`: raw bytes of the file content
+    /// - `file_name`: filename hint sent in the Content-Disposition header
+    /// - `mime_type`: MIME type of the file (e.g. `"application/gzip"`)
+    /// - `extra_fields`: additional text key/value fields to include in the form
+    pub async fn multipart_post<T: DeserializeOwned>(
+        &mut self,
+        path: &str,
+        file_field_name: &str,
+        file_bytes: Vec<u8>,
+        file_name: &str,
+        mime_type: &str,
+        extra_fields: Vec<(&str, String)>,
+    ) -> Result<T> {
+        let url = format!("{}/api/v1{}", self.base_url, path);
+
+        let file_part = multipart::Part::bytes(file_bytes)
+            .file_name(file_name.to_string())
+            .mime_str(mime_type)
+            .context("Invalid MIME type")?;
+
+        let mut form = multipart::Form::new().part(file_field_name.to_string(), file_part);
+
+        for (key, value) in extra_fields {
+            form = form.text(key.to_string(), value);
+        }
+
+        let mut req = self.client.post(&url).multipart(form);
+
+        if let Some(token) = &self.auth_token {
+            req = req.bearer_auth(token);
+        }
+
+        let response = req.send().await.context("Failed to send multipart request to API")?;
+
+        // Handle 401 + refresh (same pattern as execute())
+        if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
+            if self.refresh_auth_token().await? {
+                return Err(anyhow::anyhow!(
+                    "Token expired and was refreshed. Please retry your command."
+                ));
+            }
+        }
+
+        self.handle_response(response).await
+    }
 }
 
 #[cfg(test)]

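As a quick illustration of the request construction in `multipart_post` above: the path argument is always joined onto the client's base URL under the `/api/v1` prefix. A minimal std-only sketch (the helper name `api_url` is not in the diff, only the `format!` pattern is):

```rust
/// Join an API path onto a base URL the way `multipart_post` does,
/// i.e. `format!("{}/api/v1{}", base_url, path)`.
fn api_url(base_url: &str, path: &str) -> String {
    format!("{}/api/v1{}", base_url, path)
}

fn demo() -> String {
    // e.g. the pack upload endpoint used later in this change set
    api_url("http://localhost:8080", "/packs/upload")
}
```

Note the caller is expected to pass a path with a leading slash and a base URL without a trailing slash; nothing in the helper normalizes either.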
@@ -6,6 +6,7 @@ use std::collections::HashMap;
 use crate::client::ApiClient;
 use crate::config::CliConfig;
 use crate::output::{self, OutputFormat};
+use crate::wait::{wait_for_execution, WaitOptions};
 
 #[derive(Subcommand)]
 pub enum ActionCommands {
@@ -74,6 +75,11 @@ pub enum ActionCommands {
         /// Timeout in seconds when waiting (default: 300)
         #[arg(long, default_value = "300", requires = "wait")]
         timeout: u64,
+
+        /// Notifier WebSocket base URL (e.g. ws://localhost:8081).
+        /// Derived from --api-url automatically when not set.
+        #[arg(long, requires = "wait")]
+        notifier_url: Option<String>,
     },
 }
 
@@ -182,6 +188,7 @@ pub async fn handle_action_command(
             params_json,
             wait,
             timeout,
+            notifier_url,
         } => {
             handle_execute(
                 action_ref,
@@ -191,6 +198,7 @@ pub async fn handle_action_command(
                 api_url,
                 wait,
                 timeout,
+                notifier_url,
                 output_format,
             )
             .await
@@ -415,6 +423,7 @@ async fn handle_execute(
     api_url: &Option<String>,
     wait: bool,
     timeout: u64,
+    notifier_url: Option<String>,
     output_format: OutputFormat,
 ) -> Result<()> {
     let config = CliConfig::load_with_profile(profile.as_deref())?;
@@ -453,9 +462,25 @@ async fn handle_execute(
     }
 
     let path = "/executions/execute".to_string();
-    let mut execution: Execution = client.post(&path, &request).await?;
+    let execution: Execution = client.post(&path, &request).await?;
+
+    if !wait {
+        match output_format {
+            OutputFormat::Json | OutputFormat::Yaml => {
+                output::print_output(&execution, output_format)?;
+            }
+            OutputFormat::Table => {
+                output::print_success(&format!("Execution {} started", execution.id));
+                output::print_key_value_table(vec![
+                    ("Execution ID", execution.id.to_string()),
+                    ("Action", execution.action_ref.clone()),
+                    ("Status", output::format_status(&execution.status)),
+                ]);
+            }
+        }
+        return Ok(());
+    }
 
-    if wait {
     match output_format {
         OutputFormat::Table => {
             output::print_info(&format!(
@@ -466,49 +491,32 @@ async fn handle_execute(
         _ => {}
     }
 
-    // Poll for completion
-    let start = std::time::Instant::now();
-    let timeout_duration = std::time::Duration::from_secs(timeout);
-
-    loop {
-        if start.elapsed() > timeout_duration {
-            anyhow::bail!("Execution timed out after {} seconds", timeout);
-        }
-
-        let exec_path = format!("/executions/{}", execution.id);
-        execution = client.get(&exec_path).await?;
-
-        if execution.status == "succeeded"
-            || execution.status == "failed"
-            || execution.status == "canceled"
-        {
-            break;
-        }
-
-        tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
-    }
-    }
+    let verbose = matches!(output_format, OutputFormat::Table);
+    let summary = wait_for_execution(WaitOptions {
+        execution_id: execution.id,
+        timeout_secs: timeout,
+        api_client: &mut client,
+        notifier_ws_url: notifier_url,
+        verbose,
+    })
+    .await?;
 
     match output_format {
         OutputFormat::Json | OutputFormat::Yaml => {
-            output::print_output(&execution, output_format)?;
+            output::print_output(&summary, output_format)?;
         }
         OutputFormat::Table => {
-            output::print_success(&format!(
-                "Execution {} {}",
-                execution.id,
-                if wait { "completed" } else { "started" }
-            ));
+            output::print_success(&format!("Execution {} completed", summary.id));
             output::print_section("Execution Details");
             output::print_key_value_table(vec![
-                ("Execution ID", execution.id.to_string()),
-                ("Action", execution.action_ref.clone()),
-                ("Status", output::format_status(&execution.status)),
-                ("Created", output::format_timestamp(&execution.created)),
-                ("Updated", output::format_timestamp(&execution.updated)),
+                ("Execution ID", summary.id.to_string()),
+                ("Action", summary.action_ref.clone()),
+                ("Status", output::format_status(&summary.status)),
+                ("Created", output::format_timestamp(&summary.created)),
+                ("Updated", output::format_timestamp(&summary.updated)),
             ]);
 
-            if let Some(result) = execution.result {
+            if let Some(result) = summary.result {
                 if !result.is_null() {
                     output::print_section("Result");
                     println!("{}", serde_json::to_string_pretty(&result)?);

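The wait path above subscribes to a single execution on the notifier using the documented filter grammar `entity:<type>:<id>`. A minimal sketch of building that filter string (the helper name `execution_filter` is illustrative, not from the diff):

```rust
/// Build the notifier subscription filter for one execution, per the
/// documented filter format `entity:<type>:<id>`.
fn execution_filter(execution_id: i64) -> String {
    format!("entity:execution:{}", execution_id)
}
```

The resulting string is what goes into the subscribe message, e.g. `{"type":"subscribe","filter":"entity:execution:42"}`.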
@@ -17,6 +17,14 @@ pub enum AuthCommands {
         /// Password (will prompt if not provided)
         #[arg(long)]
         password: Option<String>,
+
+        /// API URL to log in to (saved into the profile for future use)
+        #[arg(long)]
+        url: Option<String>,
+
+        /// Save credentials into a named profile (creates it if it doesn't exist)
+        #[arg(long)]
+        save_profile: Option<String>,
     },
     /// Log out and clear authentication tokens
     Logout,
@@ -53,8 +61,22 @@ pub async fn handle_auth_command(
     output_format: OutputFormat,
 ) -> Result<()> {
     match command {
-        AuthCommands::Login { username, password } => {
-            handle_login(username, password, profile, api_url, output_format).await
+        AuthCommands::Login {
+            username,
+            password,
+            url,
+            save_profile,
+        } => {
+            // --url is a convenient alias for --api-url at login time
+            let effective_api_url = url.or_else(|| api_url.clone());
+            handle_login(
+                username,
+                password,
+                save_profile.as_ref().or(profile.as_ref()),
+                &effective_api_url,
+                output_format,
+            )
+            .await
         }
         AuthCommands::Logout => handle_logout(profile, output_format).await,
         AuthCommands::Whoami => handle_whoami(profile, api_url, output_format).await,
@@ -65,11 +87,44 @@ pub async fn handle_auth_command(
 async fn handle_login(
     username: String,
     password: Option<String>,
-    profile: &Option<String>,
+    profile: Option<&String>,
     api_url: &Option<String>,
     output_format: OutputFormat,
 ) -> Result<()> {
-    let config = CliConfig::load_with_profile(profile.as_deref())?;
+    // Determine which profile name will own these credentials.
+    // If --save-profile / --profile was given, use that; otherwise use the
+    // currently-active profile.
+    let mut config = CliConfig::load()?;
+    let target_profile_name = profile
+        .cloned()
+        .unwrap_or_else(|| config.current_profile.clone());
+
+    // If a URL was provided and the target profile doesn't exist yet, create it.
+    if !config.profiles.contains_key(&target_profile_name) {
+        let url = api_url.clone().unwrap_or_else(|| "http://localhost:8080".to_string());
+        use crate::config::Profile;
+        config.set_profile(
+            target_profile_name.clone(),
+            Profile {
+                api_url: url,
+                auth_token: None,
+                refresh_token: None,
+                output_format: None,
+                description: None,
+            },
+        )?;
+    } else if let Some(url) = api_url {
+        // Profile exists — update its api_url if an explicit URL was provided.
+        if let Some(p) = config.profiles.get_mut(&target_profile_name) {
+            p.api_url = url.clone();
+        }
+        config.save()?;
+    }
+
+    // Build a temporary config view that points at the target profile so
+    // ApiClient uses the right base URL.
+    let mut login_config = CliConfig::load()?;
+    login_config.current_profile = target_profile_name.clone();
 
     // Prompt for password if not provided
     let password = match password {
@@ -82,7 +137,7 @@ async fn handle_login(
         }
     };
 
-    let mut client = ApiClient::from_config(&config, api_url);
+    let mut client = ApiClient::from_config(&login_config, api_url);
 
     let login_req = LoginRequest {
         login: username,
@@ -91,12 +146,17 @@ async fn handle_login(
 
     let response: LoginResponse = client.post("/auth/login", &login_req).await?;
 
-    // Save tokens to config
+    // Persist tokens into the target profile.
     let mut config = CliConfig::load()?;
-    config.set_auth(
-        response.access_token.clone(),
-        response.refresh_token.clone(),
-    )?;
+    // Ensure the profile exists (it may have just been created above and saved).
+    if let Some(p) = config.profiles.get_mut(&target_profile_name) {
+        p.auth_token = Some(response.access_token.clone());
+        p.refresh_token = Some(response.refresh_token.clone());
+        config.save()?;
+    } else {
+        // Fallback: set_auth writes to the current profile.
+        config.set_auth(response.access_token.clone(), response.refresh_token.clone())?;
+    }
 
     match output_format {
         OutputFormat::Json | OutputFormat::Yaml => {
@@ -105,6 +165,12 @@ async fn handle_login(
         OutputFormat::Table => {
             output::print_success("Successfully logged in");
             output::print_info(&format!("Token expires in {} seconds", response.expires_in));
+            if target_profile_name != config.current_profile {
+                output::print_info(&format!(
+                    "Credentials saved to profile '{}'",
+                    target_profile_name
+                ));
+            }
         }
     }

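The profile-selection rule in `handle_login` above is small but easy to get backwards: an explicit `--save-profile` (or `--profile`) name wins, otherwise the currently-active profile owns the credentials. A std-only sketch of just that rule (the helper name `resolve_target_profile` is illustrative):

```rust
/// Pick the profile that will own the login credentials: an explicitly
/// named profile wins; otherwise fall back to the active profile name.
fn resolve_target_profile(explicit: Option<&String>, current_profile: &str) -> String {
    explicit
        .cloned()
        .unwrap_or_else(|| current_profile.to_string())
}
```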
@@ -1,5 +1,6 @@
-use anyhow::Result;
+use anyhow::{Context, Result};
 use clap::Subcommand;
+use flate2::{write::GzEncoder, Compression};
 use serde::{Deserialize, Serialize};
 use std::path::Path;
 
@@ -77,9 +78,9 @@ pub enum PackCommands {
         #[arg(short = 'y', long)]
         yes: bool,
     },
-    /// Register a pack from a local directory
+    /// Register a pack from a local directory (path must be accessible by the API server)
     Register {
-        /// Path to pack directory
+        /// Path to pack directory (must be a path the API server can access)
         path: String,
 
         /// Force re-registration if pack already exists
@@ -90,6 +91,22 @@ pub enum PackCommands {
         #[arg(long)]
         skip_tests: bool,
     },
+    /// Upload a local pack directory to the API server and register it
+    ///
+    /// This command tarballs the local directory and streams it to the API,
+    /// so it works regardless of whether the API is local or running in Docker.
+    Upload {
+        /// Path to the local pack directory (must contain pack.yaml)
+        path: String,
+
+        /// Force re-registration if a pack with the same ref already exists
+        #[arg(short, long)]
+        force: bool,
+
+        /// Skip running pack tests after upload
+        #[arg(long)]
+        skip_tests: bool,
+    },
     /// Test a pack's test suite
     Test {
         /// Pack reference (name) or path to pack directory
@@ -256,6 +273,15 @@ struct RegisterPackRequest {
     skip_tests: bool,
 }
 
+#[derive(Debug, Serialize, Deserialize)]
+struct UploadPackResponse {
+    pack: Pack,
+    #[serde(default)]
+    test_result: Option<serde_json::Value>,
+    #[serde(default)]
+    tests_skipped: bool,
+}
+
 pub async fn handle_pack_command(
     profile: &Option<String>,
     command: PackCommands,
@@ -296,6 +322,11 @@ pub async fn handle_pack_command(
             force,
             skip_tests,
         } => handle_register(profile, path, force, skip_tests, api_url, output_format).await,
+        PackCommands::Upload {
+            path,
+            force,
+            skip_tests,
+        } => handle_upload(profile, path, force, skip_tests, api_url, output_format).await,
         PackCommands::Test {
             pack,
             verbose,
@@ -593,6 +624,160 @@ async fn handle_uninstall(
     Ok(())
 }
 
+async fn handle_upload(
+    profile: &Option<String>,
+    path: String,
+    force: bool,
+    skip_tests: bool,
+    api_url: &Option<String>,
+    output_format: OutputFormat,
+) -> Result<()> {
+    let pack_dir = Path::new(&path);
+
+    // Validate the directory exists and contains pack.yaml
+    if !pack_dir.exists() {
+        anyhow::bail!("Path does not exist: {}", path);
+    }
+    if !pack_dir.is_dir() {
+        anyhow::bail!("Path is not a directory: {}", path);
+    }
+    let pack_yaml_path = pack_dir.join("pack.yaml");
+    if !pack_yaml_path.exists() {
+        anyhow::bail!("No pack.yaml found in: {}", path);
+    }
+
+    // Read pack ref from pack.yaml so we can display it
+    let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)
+        .context("Failed to read pack.yaml")?;
+    let pack_yaml: serde_yaml_ng::Value =
+        serde_yaml_ng::from_str(&pack_yaml_content).context("Failed to parse pack.yaml")?;
+    let pack_ref = pack_yaml
+        .get("ref")
+        .and_then(|v| v.as_str())
+        .unwrap_or("unknown");
+
+    match output_format {
+        OutputFormat::Table => {
+            output::print_info(&format!(
+                "Uploading pack '{}' from: {}",
+                pack_ref, path
+            ));
+            output::print_info("Creating archive...");
+        }
+        _ => {}
+    }
+
+    // Build an in-memory tar.gz of the pack directory
+    let tar_gz_bytes = {
+        let buf = Vec::new();
+        let enc = GzEncoder::new(buf, Compression::default());
+        let mut tar = tar::Builder::new(enc);
+
+        // Walk the directory and add files to the archive
+        // We strip the leading path so the archive root is the pack directory contents
+        let abs_pack_dir = pack_dir
+            .canonicalize()
+            .context("Failed to resolve pack directory path")?;
+
+        append_dir_to_tar(&mut tar, &abs_pack_dir, &abs_pack_dir)?;
+
+        let encoder = tar.into_inner().context("Failed to finalise tar archive")?;
+        encoder.finish().context("Failed to flush gzip stream")?
+    };
+
+    let archive_size_kb = tar_gz_bytes.len() / 1024;
+
+    match output_format {
+        OutputFormat::Table => {
+            output::print_info(&format!(
+                "Archive ready ({} KB), uploading...",
+                archive_size_kb
+            ));
+        }
+        _ => {}
+    }
+
+    let config = CliConfig::load_with_profile(profile.as_deref())?;
+    let mut client = ApiClient::from_config(&config, api_url);
+
+    let mut extra_fields = Vec::new();
+    if force {
+        extra_fields.push(("force", "true".to_string()));
+    }
+    if skip_tests {
+        extra_fields.push(("skip_tests", "true".to_string()));
+    }
+
+    let archive_name = format!("{}.tar.gz", pack_ref);
+    let response: UploadPackResponse = client
+        .multipart_post(
+            "/packs/upload",
+            "pack",
+            tar_gz_bytes,
+            &archive_name,
+            "application/gzip",
+            extra_fields,
+        )
+        .await?;
+
+    match output_format {
+        OutputFormat::Json | OutputFormat::Yaml => {
+            output::print_output(&response, output_format)?;
+        }
+        OutputFormat::Table => {
+            println!();
+            output::print_success(&format!(
+                "✓ Pack '{}' uploaded and registered successfully",
+                response.pack.pack_ref
+            ));
+            output::print_info(&format!("  Version: {}", response.pack.version));
+            output::print_info(&format!("  ID: {}", response.pack.id));
+
+            if response.tests_skipped {
+                output::print_info("  ⚠ Tests were skipped");
+            } else if let Some(test_result) = &response.test_result {
+                if let Some(status) = test_result.get("status").and_then(|s| s.as_str()) {
+                    if status == "passed" {
+                        output::print_success("  ✓ All tests passed");
+                    } else if status == "failed" {
+                        output::print_error("  ✗ Some tests failed");
+                    }
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+/// Recursively append a directory's contents to a tar archive.
+/// `base` is the root directory being archived; `dir` is the current directory
+/// being walked. Files are stored with paths relative to `base`.
+fn append_dir_to_tar<W: std::io::Write>(
+    tar: &mut tar::Builder<W>,
+    base: &Path,
+    dir: &Path,
+) -> Result<()> {
+    for entry in std::fs::read_dir(dir).context("Failed to read directory")? {
+        let entry = entry.context("Failed to read directory entry")?;
+        let entry_path = entry.path();
+        let relative_path = entry_path
+            .strip_prefix(base)
+            .context("Failed to compute relative path")?;
+
+        if entry_path.is_dir() {
+            append_dir_to_tar(tar, base, &entry_path)?;
+        } else if entry_path.is_file() {
+            tar.append_path_with_name(&entry_path, relative_path)
+                .with_context(|| {
+                    format!("Failed to add {} to archive", entry_path.display())
+                })?;
+        }
+        // symlinks are intentionally skipped
+    }
+    Ok(())
+}
+
 async fn handle_register(
     profile: &Option<String>,
     path: String,
@@ -604,18 +789,38 @@ async fn handle_register(
     let config = CliConfig::load_with_profile(profile.as_deref())?;
     let mut client = ApiClient::from_config(&config, api_url);
 
-    let request = RegisterPackRequest {
-        path: path.clone(),
-        force,
-        skip_tests,
-    };
+    // Warn if the path looks like a local filesystem path that the API server
+    // probably can't see (i.e. not a known container mount point).
+    let looks_local = !path.starts_with("/opt/attune/")
+        && !path.starts_with("/app/")
+        && !path.starts_with("/packs");
+    if looks_local {
+        match output_format {
+            OutputFormat::Table => {
+                output::print_info(&format!("Registering pack from: {}", path));
+                eprintln!(
+                    "⚠ Warning: '{}' looks like a local path. If the API is running in \
+                     Docker it may not be able to access this path.\n  \
+                     Use `attune pack upload {}` instead to upload the pack directly.",
+                    path, path
+                );
+            }
+            _ => {}
+        }
+    } else {
         match output_format {
             OutputFormat::Table => {
                 output::print_info(&format!("Registering pack from: {}", path));
             }
             _ => {}
         }
+    }
+
+    let request = RegisterPackRequest {
+        path: path.clone(),
+        force,
+        skip_tests,
+    };
 
     let response: PackInstallResponse = client.post("/packs/register", &request).await?;

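The key detail in `append_dir_to_tar` above is the `strip_prefix` call: every file is stored under a path relative to the archived base directory, so the pack directory itself becomes the archive root. A std-only sketch of that path computation (the helper name `archive_rel_path` is illustrative; the real code uses the `tar` crate's `append_path_with_name` with the stripped path):

```rust
use std::path::{Path, PathBuf};

/// Compute the archive-relative path for an entry, as `append_dir_to_tar`
/// does: strip the base directory prefix so entries sit at the archive root.
fn archive_rel_path(base: &Path, entry: &Path) -> Option<PathBuf> {
    entry.strip_prefix(base).ok().map(|p| p.to_path_buf())
}
```

`strip_prefix` fails (here: `None`) when the entry is not under `base`, which the real code surfaces as a "Failed to compute relative path" error.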
@@ -5,6 +5,7 @@ mod client;
 mod commands;
 mod config;
 mod output;
+mod wait;
 
 use commands::{
     action::{handle_action_command, ActionCommands},
@@ -112,6 +113,11 @@ enum Commands {
         /// Timeout in seconds when waiting (default: 300)
         #[arg(long, default_value = "300", requires = "wait")]
         timeout: u64,
+
+        /// Notifier WebSocket base URL (e.g. ws://localhost:8081).
+        /// Derived from --api-url automatically when not set.
+        #[arg(long, requires = "wait")]
+        notifier_url: Option<String>,
     },
 }
 
@@ -193,6 +199,7 @@ async fn main() {
             params_json,
             wait,
             timeout,
+            notifier_url,
         } => {
             // Delegate to action execute command
             handle_action_command(
@@ -203,6 +210,7 @@ async fn main() {
                 params_json,
                 wait,
                 timeout,
+                notifier_url,
             },
             &cli.api_url,
             output_format,

crates/cli/src/wait.rs (new file, 556 lines)
@@ -0,0 +1,556 @@
|
//! Waiting for execution completion.
|
||||||
|
//!
|
||||||
|
//! Tries to connect to the notifier WebSocket first so the CLI reacts
|
||||||
|
//! *immediately* when the execution reaches a terminal state. If the
|
||||||
|
//! notifier is unreachable (not configured, different port, Docker network
|
||||||
|
//! boundary, etc.) it transparently falls back to REST polling.
|
||||||
|
//!
|
||||||
|
//! Public surface:
|
||||||
|
//! - [`WaitOptions`] – caller-supplied parameters
|
||||||
|
//! - [`wait_for_execution`] – the single entry point
|
||||||
|
|
||||||
|
use anyhow::Result;
|
||||||
|
use futures::{SinkExt, StreamExt};
|
||||||
|
use serde::{Deserialize, Serialize};
|
||||||
|
use std::time::{Duration, Instant};
|
||||||
|
use tokio_tungstenite::{connect_async, tungstenite::Message};
|
||||||
|
|
||||||
|
use crate::client::ApiClient;
|
||||||
|
|
||||||
|
// ── terminal status helpers ───────────────────────────────────────────────────
|
||||||
|
|
||||||
|
fn is_terminal(status: &str) -> bool {
|
||||||
|
matches!(
|
||||||
|
status,
|
||||||
|
"completed" | "succeeded" | "failed" | "canceled" | "cancelled" | "timeout" | "timed_out"
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── public types ─────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
/// Result returned when the wait completes.
|
||||||
|
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||||
|
pub struct ExecutionSummary {
|
||||||
|
pub id: i64,
|
||||||
|
pub status: String,
|
||||||
|
pub action_ref: String,
|
||||||
|
pub result: Option<serde_json::Value>,
|
||||||
|
pub created: String,
|
||||||
|
pub updated: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Parameters that control how we wait.
|
||||||
|
pub struct WaitOptions<'a> {
|
||||||
|
/// Execution ID to watch.
|
||||||
|
pub execution_id: i64,
|
||||||
|
/// Overall wall-clock limit (seconds). Defaults to 300 if `None`.
|
||||||
|
pub timeout_secs: u64,
|
||||||
|
/// REST API client (already authenticated).
|
||||||
|
pub api_client: &'a mut ApiClient,
|
||||||
|
/// Base URL of the *notifier* WebSocket service, e.g. `ws://localhost:8081`.
|
||||||
|
/// Derived from the API URL when not explicitly set.
|
||||||
|
pub notifier_ws_url: Option<String>,
|
||||||
|
/// If `true`, print progress lines to stderr.
|
||||||
|
pub verbose: bool,
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── notifier WebSocket messages (mirrors websocket_server.rs) ────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Serialize)]
|
||||||
|
#[serde(tag = "type")]
|
||||||
|
enum ClientMsg {
|
||||||
|
#[serde(rename = "subscribe")]
|
||||||
|
Subscribe { filter: String },
|
||||||
|
#[serde(rename = "ping")]
|
||||||
|
Ping,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
#[serde(tag = "type")]
|
||||||
|
enum ServerMsg {
|
||||||
|
#[serde(rename = "welcome")]
|
||||||
|
Welcome {
|
||||||
|
client_id: String,
|
||||||
|
#[allow(dead_code)]
|
||||||
|
message: String,
|
||||||
|
},
|
||||||
|
#[serde(rename = "notification")]
|
||||||
|
Notification(NotifierNotification),
|
||||||
|
#[serde(rename = "error")]
|
||||||
|
Error { message: String },
|
||||||
|
#[serde(other)]
|
||||||
|
Unknown,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
struct NotifierNotification {
|
||||||
|
pub notification_type: String,
|
||||||
|
pub entity_type: String,
|
||||||
|
pub entity_id: i64,
|
||||||
|
pub payload: serde_json::Value,
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── REST execution shape ──────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
struct RestExecution {
|
||||||
|
id: i64,
|
||||||
|
action_ref: String,
|
||||||
|
status: String,
|
||||||
|
result: Option<serde_json::Value>,
|
||||||
|
created: String,
|
||||||
|
updated: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl From<RestExecution> for ExecutionSummary {
|
||||||
|
fn from(e: RestExecution) -> Self {
|
||||||
|
Self {
|
||||||
|
id: e.id,
|
||||||
|
status: e.status,
|
||||||
|
action_ref: e.action_ref,
|
||||||
|
result: e.result,
|
||||||
|
created: e.created,
|
||||||
|
updated: e.updated,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||

// ── entry point ───────────────────────────────────────────────────────────────

/// Wait for `execution_id` to reach a terminal status.
///
/// 1. Attempts a WebSocket connection to the notifier and subscribes to the
///    specific execution with the filter `entity:execution:<id>`.
/// 2. If the connection fails (or the notifier URL can't be derived), falls
///    back to polling `GET /executions/<id>` with exponential back-off
///    (500 ms up to 2 s).
/// 3. In both cases, an overall `timeout_secs` wall-clock limit is enforced.
///
/// Returns the final [`ExecutionSummary`] on success, or an error if the
/// timeout is exceeded or a fatal error occurs.
pub async fn wait_for_execution(opts: WaitOptions<'_>) -> Result<ExecutionSummary> {
    let overall_deadline = Instant::now() + Duration::from_secs(opts.timeout_secs);

    // Reserve at least this long for polling after WebSocket gives up.
    // This ensures the polling fallback always gets a fair chance even when
    // the WS path consumes most of the timeout budget.
    const MIN_POLL_BUDGET: Duration = Duration::from_secs(10);

    // Try WebSocket path first; fall through to polling on any connection error.
    if let Some(ws_url) = resolve_ws_url(&opts) {
        // Give WS at most (timeout - MIN_POLL_BUDGET) so polling always has headroom.
        let ws_deadline = if overall_deadline > Instant::now() + MIN_POLL_BUDGET {
            overall_deadline - MIN_POLL_BUDGET
        } else {
            // Timeout is very short; let WS use the full budget. Polling still
            // runs afterwards with whatever wall-clock time remains.
            overall_deadline
        };

        match wait_via_websocket(
            &ws_url,
            opts.execution_id,
            ws_deadline,
            opts.verbose,
            opts.api_client,
        )
        .await
        {
            Ok(summary) => return Ok(summary),
            Err(ws_err) => {
                if opts.verbose {
                    eprintln!(" [notifier: {}] falling back to polling", ws_err);
                }
                // Fall through to polling below.
            }
        }
    } else if opts.verbose {
        eprintln!(" [notifier URL not configured] using polling");
    }

    // Polling always uses the full overall deadline, so at minimum MIN_POLL_BUDGET
    // remains (and often the full timeout if WS failed at connect time).
    wait_via_polling(
        opts.api_client,
        opts.execution_id,
        overall_deadline,
        opts.verbose,
    )
    .await
}
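The budget split above can be restated with plain `Duration` arithmetic. This is a sketch under one simplification: it works on the remaining timeout as a `Duration` rather than on `Instant` deadlines, so it can run and be checked without a clock.

```rust
use std::time::Duration;

const MIN_POLL_BUDGET: Duration = Duration::from_secs(10);

// How much of the overall timeout the WebSocket path may consume: everything
// except MIN_POLL_BUDGET, unless the timeout is too short to leave headroom,
// in which case WS may use the whole budget (polling still runs afterwards).
fn ws_budget(timeout: Duration) -> Duration {
    if timeout > MIN_POLL_BUDGET {
        timeout - MIN_POLL_BUDGET
    } else {
        timeout
    }
}

fn main() {
    // 60 s total: WS gets 50 s, polling keeps its 10 s reserve.
    assert_eq!(ws_budget(Duration::from_secs(60)), Duration::from_secs(50));
    // 8 s total: no headroom to reserve, WS sees the full 8 s.
    assert_eq!(ws_budget(Duration::from_secs(8)), Duration::from_secs(8));
}
```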

// ── WebSocket path ────────────────────────────────────────────────────────────

async fn wait_via_websocket(
    ws_base_url: &str,
    execution_id: i64,
    deadline: Instant,
    verbose: bool,
    api_client: &mut ApiClient,
) -> Result<ExecutionSummary> {
    // Build the full WS endpoint URL.
    let ws_url = format!("{}/ws", ws_base_url.trim_end_matches('/'));

    let connect_timeout = Duration::from_secs(5);
    let remaining = deadline.saturating_duration_since(Instant::now());
    if remaining.is_zero() {
        anyhow::bail!("WS budget exhausted before connect");
    }
    let effective_connect_timeout = connect_timeout.min(remaining);

    let connect_result =
        tokio::time::timeout(effective_connect_timeout, connect_async(&ws_url)).await;

    let (ws_stream, _response) = match connect_result {
        Ok(Ok(pair)) => pair,
        Ok(Err(e)) => anyhow::bail!("WebSocket connect failed: {}", e),
        Err(_) => anyhow::bail!("WebSocket connect timed out"),
    };

    if verbose {
        eprintln!(" [notifier] connected to {}", ws_url);
    }

    let (mut write, mut read) = ws_stream.split();

    // Wait for the welcome message before subscribing.
    tokio::time::timeout(Duration::from_secs(5), async {
        while let Some(msg) = read.next().await {
            if let Ok(Message::Text(txt)) = msg {
                if let Ok(ServerMsg::Welcome { client_id, .. }) =
                    serde_json::from_str::<ServerMsg>(&txt)
                {
                    if verbose {
                        eprintln!(" [notifier] session id {}", client_id);
                    }
                    return Ok(());
                }
            }
        }
        anyhow::bail!("connection closed before welcome")
    })
    .await
    .map_err(|_| anyhow::anyhow!("timed out waiting for welcome message"))??;

    // Subscribe to this specific execution.
    let subscribe_msg = ClientMsg::Subscribe {
        filter: format!("entity:execution:{}", execution_id),
    };
    let subscribe_json = serde_json::to_string(&subscribe_msg)?;
    SinkExt::send(&mut write, Message::Text(subscribe_json.into())).await?;

    if verbose {
        eprintln!(
            " [notifier] subscribed to entity:execution:{}",
            execution_id
        );
    }

    // ── Race-condition guard ──────────────────────────────────────────────
    // The execution may have already completed in the window between the
    // initial POST and when the WS subscription became active. Check once
    // with the REST API *after* subscribing so there is no gap: either the
    // notification arrives after this check (and we'll catch it in the loop
    // below) or we catch the terminal state here.
    {
        let path = format!("/executions/{}", execution_id);
        if let Ok(exec) = api_client.get::<RestExecution>(&path).await {
            if is_terminal(&exec.status) {
                if verbose {
                    eprintln!(
                        " [notifier] execution {} already terminal ('{}') — caught by post-subscribe check",
                        execution_id, exec.status
                    );
                }
                return Ok(exec.into());
            }
        }
    }

    // Periodically ping to keep the connection alive and check the deadline.
    let ping_interval = Duration::from_secs(15);
    let mut next_ping = Instant::now() + ping_interval;

    loop {
        let remaining = deadline.saturating_duration_since(Instant::now());
        if remaining.is_zero() {
            anyhow::bail!("timed out waiting for execution {}", execution_id);
        }

        // Wait up to the earlier of: next ping time or deadline.
        let wait_for = remaining.min(next_ping.saturating_duration_since(Instant::now()));

        let msg_result = tokio::time::timeout(wait_for, read.next()).await;

        match msg_result {
            // Received a message within the window.
            Ok(Some(Ok(Message::Text(txt)))) => {
                match serde_json::from_str::<ServerMsg>(&txt) {
                    Ok(ServerMsg::Notification(n)) => {
                        if n.entity_type == "execution" && n.entity_id == execution_id {
                            if verbose {
                                eprintln!(
                                    " [notifier] {} for execution {} — status={:?}",
                                    n.notification_type,
                                    execution_id,
                                    n.payload.get("status").and_then(|s| s.as_str()),
                                );
                            }

                            // Extract status from the notification payload.
                            // The notifier broadcasts the full execution row in
                            // `payload`, so we can read the status directly.
                            if let Some(status) = n.payload.get("status").and_then(|s| s.as_str()) {
                                if is_terminal(status) {
                                    // Build a summary from the payload, filling
                                    // in defaults for any fields it is missing.
                                    return build_summary_from_payload(execution_id, &n.payload);
                                }
                            }
                        }
                        // Not our execution or not yet terminal — keep waiting.
                    }
                    Ok(ServerMsg::Error { message }) => {
                        anyhow::bail!("notifier error: {}", message);
                    }
                    Ok(ServerMsg::Welcome { .. } | ServerMsg::Unknown) => {
                        // Ignore unexpected / unrecognised messages.
                    }
                    Err(e) => {
                        // Parse failures can happen if the server sends a
                        // message format we don't recognise yet; report them
                        // only in verbose mode and keep waiting.
                        if verbose {
                            eprintln!(" [notifier] ignoring unrecognised message: {}", e);
                        }
                    }
                }
            }
            // Connection closed cleanly.
            Ok(Some(Ok(Message::Close(_)))) | Ok(None) => {
                anyhow::bail!("notifier WebSocket closed unexpectedly");
            }
            // Ping/pong and other non-text frames — ignore.
            Ok(Some(Ok(
                Message::Ping(_) | Message::Pong(_) | Message::Binary(_) | Message::Frame(_),
            ))) => {}
            // WebSocket transport error.
            Ok(Some(Err(e))) => {
                anyhow::bail!("WebSocket error: {}", e);
            }
            // Timeout waiting for a message — time to ping.
            Err(_timeout) => {
                let now = Instant::now();
                if now >= next_ping {
                    let _ = SinkExt::send(
                        &mut write,
                        Message::Text(serde_json::to_string(&ClientMsg::Ping)?.into()),
                    )
                    .await;
                    next_ping = now + ping_interval;
                }
            }
        }
    }
}

/// Build an [`ExecutionSummary`] from the notification payload.
/// The notifier payload matches the REST execution shape closely enough that
/// we can deserialize it directly.
fn build_summary_from_payload(
    execution_id: i64,
    payload: &serde_json::Value,
) -> Result<ExecutionSummary> {
    // Try a full deserialize first.
    if let Ok(exec) = serde_json::from_value::<RestExecution>(payload.clone()) {
        return Ok(exec.into());
    }

    // Partial payload — assemble what we can.
    Ok(ExecutionSummary {
        id: execution_id,
        status: payload
            .get("status")
            .and_then(|s| s.as_str())
            .unwrap_or("unknown")
            .to_string(),
        action_ref: payload
            .get("action_ref")
            .and_then(|s| s.as_str())
            .unwrap_or("")
            .to_string(),
        result: payload.get("result").cloned(),
        created: payload
            .get("created")
            .and_then(|s| s.as_str())
            .unwrap_or("")
            .to_string(),
        updated: payload
            .get("updated")
            .and_then(|s| s.as_str())
            .unwrap_or("")
            .to_string(),
    })
}

// ── polling fallback ──────────────────────────────────────────────────────────

const POLL_INTERVAL: Duration = Duration::from_millis(500);
const POLL_INTERVAL_MAX: Duration = Duration::from_secs(2);
/// How quickly the poll interval grows on each successive check.
const POLL_BACKOFF_FACTOR: f64 = 1.5;

async fn wait_via_polling(
    client: &mut ApiClient,
    execution_id: i64,
    deadline: Instant,
    verbose: bool,
) -> Result<ExecutionSummary> {
    if verbose {
        eprintln!(" [poll] watching execution {}", execution_id);
    }

    let mut interval = POLL_INTERVAL;

    loop {
        // Poll immediately first, before sleeping — catches the case where the
        // execution already finished while we were connecting to the notifier.
        let path = format!("/executions/{}", execution_id);
        match client.get::<RestExecution>(&path).await {
            Ok(exec) => {
                if is_terminal(&exec.status) {
                    if verbose {
                        eprintln!(" [poll] execution {} is {}", execution_id, exec.status);
                    }
                    return Ok(exec.into());
                }
                if verbose {
                    eprintln!(
                        " [poll] status = {} — checking again in {:.1}s",
                        exec.status,
                        interval.as_secs_f64()
                    );
                }
            }
            Err(e) => {
                if verbose {
                    eprintln!(" [poll] request failed ({}), retrying…", e);
                }
            }
        }

        // Check deadline *after* the poll attempt so we always do at least one check.
        if Instant::now() >= deadline {
            anyhow::bail!("timed out waiting for execution {}", execution_id);
        }

        // Sleep, but wake up if we'd overshoot the deadline.
        let sleep_for = interval.min(deadline.saturating_duration_since(Instant::now()));
        tokio::time::sleep(sleep_for).await;

        // Exponential back-off up to the cap.
        interval = Duration::from_secs_f64(
            (interval.as_secs_f64() * POLL_BACKOFF_FACTOR).min(POLL_INTERVAL_MAX.as_secs_f64()),
        );
    }
}
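The back-off above grows the interval by `POLL_BACKOFF_FACTOR` per attempt, capped at `POLL_INTERVAL_MAX`. A minimal std-only sketch of that recurrence, showing the concrete sequence of sleep lengths:

```rust
use std::time::Duration;

const POLL_INTERVAL: Duration = Duration::from_millis(500);
const POLL_INTERVAL_MAX: Duration = Duration::from_secs(2);
const POLL_BACKOFF_FACTOR: f64 = 1.5;

// First n poll intervals (in seconds) produced by the back-off recurrence:
// interval <- min(interval * 1.5, 2.0), starting from 0.5 s.
fn backoff_sequence(n: usize) -> Vec<f64> {
    let mut interval = POLL_INTERVAL;
    let mut seq = Vec::with_capacity(n);
    for _ in 0..n {
        seq.push(interval.as_secs_f64());
        interval = Duration::from_secs_f64(
            (interval.as_secs_f64() * POLL_BACKOFF_FACTOR).min(POLL_INTERVAL_MAX.as_secs_f64()),
        );
    }
    seq
}

fn main() {
    // 0.5 → 0.75 → 1.125 → 1.6875, then pinned at the 2 s cap.
    assert_eq!(backoff_sequence(6), vec![0.5, 0.75, 1.125, 1.6875, 2.0, 2.0]);
}
```

The chosen constants mean a just-finished execution is noticed within half a second, while a long-running one settles at the old fixed 2 s cadence.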

// ── URL resolution ────────────────────────────────────────────────────────────

/// Derive the notifier WebSocket base URL.
///
/// Priority:
/// 1. Explicit `notifier_ws_url` in [`WaitOptions`].
/// 2. Replace the API base URL scheme (`http` → `ws`) and port (`8080` → `8081`).
///    This covers the standard single-host layout where both services share the
///    same hostname.
fn resolve_ws_url(opts: &WaitOptions<'_>) -> Option<String> {
    if let Some(url) = &opts.notifier_ws_url {
        return Some(url.clone());
    }

    // Derive the host from the API client's configured base URL.
    let api_url = opts.api_client.base_url();

    // Transform http(s)://host:PORT/... → ws(s)://host:8081
    let ws_url = derive_notifier_url(&api_url)?;
    Some(ws_url)
}

/// Convert an HTTP API base URL into the expected notifier WebSocket URL.
///
/// - `http://localhost:8080` → `ws://localhost:8081`
/// - `https://api.example.com` → `wss://api.example.com:8081`
/// - `http://api.example.com:9000` → `ws://api.example.com:8081`
fn derive_notifier_url(api_url: &str) -> Option<String> {
    let url = url::Url::parse(api_url).ok()?;
    let ws_scheme = match url.scheme() {
        "https" => "wss",
        _ => "ws",
    };
    let host = url.host_str()?;
    Some(format!("{}://{}:8081", ws_scheme, host))
}
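The transformation `derive_notifier_url` performs can also be written with only the standard library, which makes the rules easy to check in isolation. This is an illustrative sketch, not a replacement: the real function delegates host parsing (including IPv6 literals, userinfo, etc.) to `url::Url`, which this naive splitter does not handle.

```rust
// Std-only restatement of the URL rules: keep the host, upgrade the scheme
// (http → ws, https → wss), and pin the notifier port 8081, discarding any
// original port or path. `derive_notifier_url_sketch` is a hypothetical name.
fn derive_notifier_url_sketch(api_url: &str) -> Option<String> {
    let (scheme, rest) = api_url.split_once("://")?;
    let ws_scheme = if scheme == "https" { "wss" } else { "ws" };
    // Host is everything up to the first ':' (port) or '/' (path).
    let host = rest
        .split(|c| c == '/' || c == ':')
        .next()
        .filter(|h| !h.is_empty())?;
    Some(format!("{}://{}:8081", ws_scheme, host))
}

fn main() {
    assert_eq!(
        derive_notifier_url_sketch("http://localhost:8080"),
        Some("ws://localhost:8081".to_string())
    );
    assert_eq!(
        derive_notifier_url_sketch("https://api.example.com"),
        Some("wss://api.example.com:8081".to_string())
    );
}
```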

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_is_terminal() {
        assert!(is_terminal("completed"));
        assert!(is_terminal("succeeded"));
        assert!(is_terminal("failed"));
        assert!(is_terminal("canceled"));
        assert!(is_terminal("cancelled"));
        assert!(is_terminal("timeout"));
        assert!(is_terminal("timed_out"));
        assert!(!is_terminal("requested"));
        assert!(!is_terminal("scheduled"));
        assert!(!is_terminal("running"));
    }

    #[test]
    fn test_derive_notifier_url() {
        assert_eq!(
            derive_notifier_url("http://localhost:8080"),
            Some("ws://localhost:8081".to_string())
        );
        assert_eq!(
            derive_notifier_url("https://api.example.com"),
            Some("wss://api.example.com:8081".to_string())
        );
        assert_eq!(
            derive_notifier_url("http://api.example.com:9000"),
            Some("ws://api.example.com:8081".to_string())
        );
        assert_eq!(
            derive_notifier_url("http://10.0.0.5:8080"),
            Some("ws://10.0.0.5:8081".to_string())
        );
    }

    #[test]
    fn test_build_summary_from_full_payload() {
        let payload = serde_json::json!({
            "id": 42,
            "action_ref": "core.echo",
            "status": "completed",
            "result": { "stdout": "hi" },
            "created": "2026-01-01T00:00:00Z",
            "updated": "2026-01-01T00:00:01Z"
        });
        let summary = build_summary_from_payload(42, &payload).unwrap();
        assert_eq!(summary.id, 42);
        assert_eq!(summary.status, "completed");
        assert_eq!(summary.action_ref, "core.echo");
    }

    #[test]
    fn test_build_summary_from_partial_payload() {
        let payload = serde_json::json!({ "status": "failed" });
        let summary = build_summary_from_payload(7, &payload).unwrap();
        assert_eq!(summary.id, 7);
        assert_eq!(summary.status, "failed");
        assert_eq!(summary.action_ref, "");
    }
}
@@ -582,6 +582,13 @@ pub struct Config {
    #[serde(default = "default_runtime_envs_dir")]
    pub runtime_envs_dir: String,

    /// Artifacts directory (shared volume for file-based artifact storage).
    /// File-type artifacts (FileBinary, FileDatatable, FileText, Log) are stored
    /// on disk at this location rather than in the database.
    /// Pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}
    #[serde(default = "default_artifacts_dir")]
    pub artifacts_dir: String,

    /// Notifier configuration (optional, for notifier service)
    pub notifier: Option<NotifierConfig>,

@@ -609,6 +616,10 @@ fn default_runtime_envs_dir() -> String {
    "/opt/attune/runtime_envs".to_string()
}

fn default_artifacts_dir() -> String {
    "/opt/attune/artifacts".to_string()
}

impl Default for DatabaseConfig {
    fn default() -> Self {
        Self {

@@ -844,6 +855,7 @@ mod tests {
            sensor: None,
            packs_base_dir: default_packs_base_dir(),
            runtime_envs_dir: default_runtime_envs_dir(),
            artifacts_dir: default_artifacts_dir(),
            notifier: None,
            pack_registry: PackRegistryConfig::default(),
            executor: None,

@@ -917,6 +929,7 @@ mod tests {
            sensor: None,
            packs_base_dir: default_packs_base_dir(),
            runtime_envs_dir: default_runtime_envs_dir(),
            artifacts_dir: default_artifacts_dir(),
            notifier: None,
            pack_registry: PackRegistryConfig::default(),
            executor: None,
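The on-disk layout documented on `artifacts_dir` can be sketched as a small path builder. Only the directory pattern `{artifacts_dir}/{ref_slug}/v{version}.{ext}` comes from the config docs; the helper name is hypothetical and how `ref_slug` is derived from the artifact `ref` is not shown here.

```rust
use std::path::PathBuf;

// Assemble the storage path for one artifact version following the documented
// pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}.
fn artifact_file_path(artifacts_dir: &str, ref_slug: &str, version: u32, ext: &str) -> PathBuf {
    PathBuf::from(artifacts_dir)
        .join(ref_slug)
        .join(format!("v{}.{}", version, ext))
}

fn main() {
    let p = artifact_file_path("/opt/attune/artifacts", "build-log", 3, "txt");
    assert_eq!(p.to_str().unwrap(), "/opt/attune/artifacts/build-log/v3.txt");
}
```

Keeping the version in the file name (rather than overwriting a single file) is what lets the retention policy prune old versions without touching the current one.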
@@ -367,6 +367,24 @@ pub mod enums {
        Minutes,
    }

    /// Visibility level for artifacts.
    /// - `Public`: viewable by all authenticated users on the platform.
    /// - `Private`: restricted based on the artifact's `scope` and `owner` fields.
    ///   Full RBAC enforcement is deferred; for now the field enables filtering.
    #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Type, ToSchema)]
    #[sqlx(type_name = "artifact_visibility_enum", rename_all = "lowercase")]
    #[serde(rename_all = "lowercase")]
    pub enum ArtifactVisibility {
        Public,
        Private,
    }

    impl Default for ArtifactVisibility {
        fn default() -> Self {
            Self::Private
        }
    }

    #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Type, ToSchema)]
    #[sqlx(type_name = "workflow_task_status_enum", rename_all = "lowercase")]
    #[serde(rename_all = "lowercase")]

@@ -1268,6 +1286,7 @@ pub mod artifact {
        pub scope: OwnerType,
        pub owner: String,
        pub r#type: ArtifactType,
        pub visibility: ArtifactVisibility,
        pub retention_policy: RetentionPolicyType,
        pub retention_limit: i32,
        /// Human-readable name (e.g. "Build Log", "Test Results")

@@ -1289,7 +1308,7 @@ pub mod artifact {
    /// Select columns for Artifact queries (excludes DB-only columns if any arise).
    /// Must be kept in sync with the Artifact struct field order.
    pub const SELECT_COLUMNS: &str =
        "id, ref, scope, owner, type, visibility, retention_policy, retention_limit, \
         name, description, content_type, size_bytes, execution, data, \
         created, updated";
}

@@ -1314,6 +1333,10 @@ pub mod artifact_version {
        pub content: Option<Vec<u8>>,
        /// Structured JSON content
        pub content_json: Option<serde_json::Value>,
        /// Relative path from `artifacts_dir` root for disk-stored content.
        /// When set, `content` BYTEA is NULL — the file lives on a shared volume.
        /// Pattern: `{ref_slug}/v{version}.{ext}`
        pub file_path: Option<String>,
        /// Free-form metadata about this version
        pub meta: Option<serde_json::Value>,
        /// Who created this version

@@ -1324,12 +1347,12 @@ pub mod artifact_version {
    /// Select columns WITHOUT the potentially large `content` BYTEA column.
    /// Use `SELECT_COLUMNS_WITH_CONTENT` when you need the binary payload.
    pub const SELECT_COLUMNS: &str = "id, artifact, version, content_type, size_bytes, \
        NULL::bytea AS content, content_json, file_path, meta, created_by, created";

    /// Select columns INCLUDING the binary `content` column.
    pub const SELECT_COLUMNS_WITH_CONTENT: &str =
        "id, artifact, version, content_type, size_bytes, \
         content, content_json, file_path, meta, created_by, created";
}

/// Workflow orchestration models
@@ -5,7 +5,7 @@
//! with headers and payload.

use chrono::{DateTime, Utc};
use serde::{Deserialize, Deserializer, Serialize};
use serde_json::Value as JsonValue;
use uuid::Uuid;

@@ -124,6 +124,17 @@ impl MessageType {
    }
}

/// Deserialize a UUID, substituting a freshly-generated one when the value is
/// null or absent. This keeps envelope parsing tolerant of messages that were
/// hand-crafted or produced by older tooling.
fn deserialize_uuid_default<'de, D>(deserializer: D) -> Result<Uuid, D::Error>
where
    D: Deserializer<'de>,
{
    let opt: Option<Uuid> = Option::deserialize(deserializer)?;
    Ok(opt.unwrap_or_else(Uuid::new_v4))
}

/// Message envelope that wraps all messages with metadata
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MessageEnvelope<T>

@@ -131,9 +142,17 @@ where
    T: Clone,
{
    /// Unique message identifier
    #[serde(
        default = "Uuid::new_v4",
        deserialize_with = "deserialize_uuid_default"
    )]
    pub message_id: Uuid,

    /// Correlation ID for tracing related messages
    #[serde(
        default = "Uuid::new_v4",
        deserialize_with = "deserialize_uuid_default"
    )]
    pub correlation_id: Uuid,

    /// Message type
@@ -3,7 +3,7 @@
|
|||||||
use crate::models::{
|
use crate::models::{
|
||||||
artifact::*,
|
artifact::*,
|
||||||
artifact_version::ArtifactVersion,
|
artifact_version::ArtifactVersion,
|
||||||
enums::{ArtifactType, OwnerType, RetentionPolicyType},
|
enums::{ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType},
|
||||||
};
|
};
|
||||||
use crate::Result;
|
use crate::Result;
|
||||||
use sqlx::{Executor, Postgres, QueryBuilder};
|
use sqlx::{Executor, Postgres, QueryBuilder};
|
||||||
@@ -29,6 +29,7 @@ pub struct CreateArtifactInput {
|
|||||||
pub scope: OwnerType,
|
pub scope: OwnerType,
|
||||||
pub owner: String,
|
pub owner: String,
|
||||||
pub r#type: ArtifactType,
|
pub r#type: ArtifactType,
|
||||||
|
pub visibility: ArtifactVisibility,
|
||||||
pub retention_policy: RetentionPolicyType,
|
pub retention_policy: RetentionPolicyType,
|
||||||
pub retention_limit: i32,
|
pub retention_limit: i32,
|
||||||
pub name: Option<String>,
|
pub name: Option<String>,
|
||||||
@@ -44,6 +45,7 @@ pub struct UpdateArtifactInput {
|
|||||||
pub scope: Option<OwnerType>,
|
pub scope: Option<OwnerType>,
|
||||||
pub owner: Option<String>,
|
pub owner: Option<String>,
|
||||||
pub r#type: Option<ArtifactType>,
|
pub r#type: Option<ArtifactType>,
|
||||||
|
pub visibility: Option<ArtifactVisibility>,
|
||||||
pub retention_policy: Option<RetentionPolicyType>,
|
pub retention_policy: Option<RetentionPolicyType>,
|
||||||
pub retention_limit: Option<i32>,
|
pub retention_limit: Option<i32>,
|
||||||
pub name: Option<String>,
|
pub name: Option<String>,
|
||||||
@@ -59,6 +61,7 @@ pub struct ArtifactSearchFilters {
|
|||||||
pub scope: Option<OwnerType>,
|
pub scope: Option<OwnerType>,
|
||||||
pub owner: Option<String>,
|
pub owner: Option<String>,
|
||||||
pub r#type: Option<ArtifactType>,
|
pub r#type: Option<ArtifactType>,
|
||||||
|
pub visibility: Option<ArtifactVisibility>,
|
||||||
pub execution: Option<i64>,
|
pub execution: Option<i64>,
|
||||||
pub name_contains: Option<String>,
|
pub name_contains: Option<String>,
|
||||||
pub limit: u32,
|
pub limit: u32,
|
||||||
@@ -127,9 +130,9 @@ impl Create for ArtifactRepository {
|
|||||||
E: Executor<'e, Database = Postgres> + 'e,
|
E: Executor<'e, Database = Postgres> + 'e,
|
||||||
{
|
{
|
||||||
let query = format!(
|
let query = format!(
|
||||||
"INSERT INTO artifact (ref, scope, owner, type, retention_policy, retention_limit, \
|
"INSERT INTO artifact (ref, scope, owner, type, visibility, retention_policy, retention_limit, \
|
||||||
name, description, content_type, execution, data) \
|
name, description, content_type, execution, data) \
|
||||||
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11) \
|
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) \
|
||||||
RETURNING {}",
|
RETURNING {}",
|
||||||
SELECT_COLUMNS
|
SELECT_COLUMNS
|
||||||
);
|
);
|
||||||
@@ -138,6 +141,7 @@ impl Create for ArtifactRepository {
|
|||||||
.bind(input.scope)
|
.bind(input.scope)
|
||||||
.bind(&input.owner)
|
.bind(&input.owner)
|
||||||
.bind(input.r#type)
|
.bind(input.r#type)
|
||||||
|
.bind(input.visibility)
|
||||||
.bind(input.retention_policy)
|
.bind(input.retention_policy)
|
||||||
.bind(input.retention_limit)
|
.bind(input.retention_limit)
|
||||||
.bind(&input.name)
|
.bind(&input.name)
|
||||||
@@ -178,6 +182,7 @@ impl Update for ArtifactRepository {
|
|||||||
push_field!(input.scope, "scope");
|
push_field!(input.scope, "scope");
|
||||||
push_field!(&input.owner, "owner");
|
push_field!(&input.owner, "owner");
|
||||||
push_field!(input.r#type, "type");
|
push_field!(input.r#type, "type");
|
||||||
|
push_field!(input.visibility, "visibility");
|
||||||
push_field!(input.retention_policy, "retention_policy");
|
push_field!(input.retention_policy, "retention_policy");
|
||||||
push_field!(input.retention_limit, "retention_limit");
|
push_field!(input.retention_limit, "retention_limit");
|
||||||
push_field!(&input.name, "name");
|
push_field!(&input.name, "name");
|
||||||
@@ -241,6 +246,10 @@ impl ArtifactRepository {
             param_idx += 1;
             conditions.push(format!("type = ${}", param_idx));
         }
+        if filters.visibility.is_some() {
+            param_idx += 1;
+            conditions.push(format!("visibility = ${}", param_idx));
+        }
         if filters.execution.is_some() {
             param_idx += 1;
             conditions.push(format!("execution = ${}", param_idx));
@@ -270,6 +279,9 @@ impl ArtifactRepository {
         if let Some(r#type) = filters.r#type {
             count_query = count_query.bind(r#type);
         }
+        if let Some(visibility) = filters.visibility {
+            count_query = count_query.bind(visibility);
+        }
         if let Some(execution) = filters.execution {
             count_query = count_query.bind(execution);
         }
@@ -298,6 +310,9 @@ impl ArtifactRepository {
         if let Some(r#type) = filters.r#type {
             data_query = data_query.bind(r#type);
         }
+        if let Some(visibility) = filters.visibility {
+            data_query = data_query.bind(visibility);
+        }
         if let Some(execution) = filters.execution {
             data_query = data_query.bind(execution);
         }
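The filter hunks above rely on an invariant worth calling out: `param_idx` is bumped only for filters that are present, and the later `bind` calls must run in exactly the same presence order so each value lines up with its `$N` placeholder. A self-contained sketch of that pattern, using a hypothetical `Filters` struct and returning the clause as a string rather than driving sqlx:

```rust
// Sketch of the positional-placeholder pattern used above. The `Filters`
// struct and `build_where` helper are illustrative, not the repository's API.
#[derive(Default)]
struct Filters {
    r#type: Option<String>,
    visibility: Option<String>,
    execution: Option<i64>,
}

/// Build a WHERE clause whose $N placeholders are numbered by presence
/// order, matching the order in which the caller later binds values.
fn build_where(f: &Filters) -> String {
    let mut conditions = Vec::new();
    let mut param_idx = 0usize;
    if f.r#type.is_some() {
        param_idx += 1;
        conditions.push(format!("type = ${}", param_idx));
    }
    if f.visibility.is_some() {
        param_idx += 1;
        conditions.push(format!("visibility = ${}", param_idx));
    }
    if f.execution.is_some() {
        param_idx += 1;
        conditions.push(format!("execution = ${}", param_idx));
    }
    if conditions.is_empty() {
        String::new()
    } else {
        format!("WHERE {}", conditions.join(" AND "))
    }
}

fn main() {
    let f = Filters {
        visibility: Some("public".to_string()),
        execution: Some(42),
        ..Default::default()
    };
    // Absent filters consume no placeholder number.
    println!("{}", build_where(&f)); // WHERE visibility = $1 AND execution = $2
}
```

Adding `visibility` in the middle of the chain is safe precisely because both the condition building and the binding branch on the same `is_some()` checks.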
@@ -466,6 +481,21 @@ impl ArtifactRepository {
             .await
             .map_err(Into::into)
     }
+
+    /// Update the size_bytes of an artifact (used by worker finalization to sync
+    /// the parent artifact's size with the latest file-based version).
+    pub async fn update_size_bytes<'e, E>(executor: E, id: i64, size_bytes: i64) -> Result<bool>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let result =
+            sqlx::query("UPDATE artifact SET size_bytes = $1, updated = NOW() WHERE id = $2")
+                .bind(size_bytes)
+                .bind(id)
+                .execute(executor)
+                .await?;
+        Ok(result.rows_affected() > 0)
+    }
 }

 // ============================================================================
@@ -489,6 +519,7 @@ pub struct CreateArtifactVersionInput {
     pub content_type: Option<String>,
     pub content: Option<Vec<u8>>,
     pub content_json: Option<serde_json::Value>,
+    pub file_path: Option<String>,
     pub meta: Option<serde_json::Value>,
     pub created_by: Option<String>,
 }
@@ -646,8 +677,8 @@ impl ArtifactVersionRepository {

         let query = format!(
             "INSERT INTO artifact_version \
-             (artifact, version, content_type, size_bytes, content, content_json, meta, created_by) \
-             VALUES ($1, next_artifact_version($1), $2, $3, $4, $5, $6, $7) \
+             (artifact, version, content_type, size_bytes, content, content_json, file_path, meta, created_by) \
+             VALUES ($1, next_artifact_version($1), $2, $3, $4, $5, $6, $7, $8) \
             RETURNING {}",
             artifact_version::SELECT_COLUMNS_WITH_CONTENT
         );
@@ -657,6 +688,7 @@ impl ArtifactVersionRepository {
             .bind(size_bytes)
             .bind(&input.content)
             .bind(&input.content_json)
+            .bind(&input.file_path)
             .bind(&input.meta)
             .bind(&input.created_by)
             .fetch_one(executor)
@@ -699,4 +731,67 @@ impl ArtifactVersionRepository {
             .await
             .map_err(Into::into)
     }
+
+    /// Update the size_bytes of a specific artifact version (used by worker finalization).
+    pub async fn update_size_bytes<'e, E>(
+        executor: E,
+        version_id: i64,
+        size_bytes: i64,
+    ) -> Result<bool>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let result = sqlx::query("UPDATE artifact_version SET size_bytes = $1 WHERE id = $2")
+            .bind(size_bytes)
+            .bind(version_id)
+            .execute(executor)
+            .await?;
+        Ok(result.rows_affected() > 0)
+    }
+
+    /// Find all file-backed versions linked to an execution.
+    /// Joins artifact_version → artifact on artifact.execution to find all
+    /// file-based versions produced by a given execution.
+    pub async fn find_file_versions_by_execution<'e, E>(
+        executor: E,
+        execution_id: i64,
+    ) -> Result<Vec<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT av.{} \
+             FROM artifact_version av \
+             JOIN artifact a ON av.artifact = a.id \
+             WHERE a.execution = $1 AND av.file_path IS NOT NULL",
+            artifact_version::SELECT_COLUMNS
+                .split(", ")
+                .collect::<Vec<_>>()
+                .join(", av.")
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(execution_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
+    }
+
+    /// Find all file-backed versions for a specific artifact (used for disk cleanup on delete).
+    pub async fn find_file_versions_by_artifact<'e, E>(
+        executor: E,
+        artifact_id: i64,
+    ) -> Result<Vec<ArtifactVersion>>
+    where
+        E: Executor<'e, Database = Postgres> + 'e,
+    {
+        let query = format!(
+            "SELECT {} FROM artifact_version WHERE artifact = $1 AND file_path IS NOT NULL",
+            artifact_version::SELECT_COLUMNS
+        );
+        sqlx::query_as::<_, ArtifactVersion>(&query)
+            .bind(artifact_id)
+            .fetch_all(executor)
+            .await
+            .map_err(Into::into)
+    }
 }
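The `find_file_versions_by_execution` query above qualifies every column with the `av.` alias by splitting the shared column list and re-joining with `, av.`; the literal `av.` in `SELECT av.{}` covers the first column. That string transform in isolation (the column list here is illustrative, not the real `artifact_version::SELECT_COLUMNS`):

```rust
/// Qualify a comma-separated column list with a table alias, mirroring the
/// split/join trick used in the JOIN query above: the caller-supplied alias
/// prefixes the first column, and the join inserts it before each remaining one.
fn qualify(columns: &str, alias: &str) -> String {
    let joined = columns
        .split(", ")
        .collect::<Vec<_>>()
        .join(&format!(", {}.", alias));
    format!("{}.{}", alias, joined)
}

fn main() {
    // Illustrative column list only.
    let cols = "id, artifact, version, file_path";
    println!("{}", qualify(cols, "av")); // av.id, av.artifact, av.version, av.file_path
}
```

This keeps the column list defined in one place while still producing alias-qualified names for queries that join against another table.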
@@ -3,7 +3,9 @@
 //! Tests cover CRUD operations, specialized queries, constraints,
 //! enum handling, timestamps, and edge cases.

-use attune_common::models::enums::{ArtifactType, OwnerType, RetentionPolicyType};
+use attune_common::models::enums::{
+    ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType,
+};
 use attune_common::repositories::artifact::{
     ArtifactRepository, CreateArtifactInput, UpdateArtifactInput,
 };
@@ -65,6 +67,7 @@ impl ArtifactFixture {
             scope: OwnerType::System,
             owner: self.unique_owner("system"),
             r#type: ArtifactType::FileText,
+            visibility: ArtifactVisibility::default(),
             retention_policy: RetentionPolicyType::Versions,
             retention_limit: 5,
             name: None,
@@ -252,6 +255,7 @@ async fn test_update_artifact_all_fields() {
         scope: Some(OwnerType::Identity),
         owner: Some(fixture.unique_owner("identity")),
         r#type: Some(ArtifactType::FileImage),
+        visibility: Some(ArtifactVisibility::Public),
         retention_policy: Some(RetentionPolicyType::Days),
         retention_limit: Some(30),
         name: Some("Updated Name".to_string()),
@@ -2,8 +2,9 @@

 use anyhow::{Context, Result};
 use sqlx::postgres::PgListener;
+use std::time::Duration;
 use tokio::sync::broadcast;
-use tracing::{debug, error, info, warn};
+use tracing::{debug, error, info, trace, warn};

 use crate::service::Notification;

@@ -18,6 +19,8 @@ const NOTIFICATION_CHANNELS: &[&str] = &[
     "enforcement_status_changed",
     "event_created",
     "workflow_execution_status_changed",
+    "artifact_created",
+    "artifact_updated",
 ];

 /// PostgreSQL listener that receives NOTIFY events and broadcasts them
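The channel names in `NOTIFICATION_CHANNELS` follow an `<entity>_<event>` convention (`artifact_created`, `workflow_execution_status_changed`, ...). As a sketch of how a consumer might recover the entity portion from a channel name, here is a hypothetical suffix-stripping helper; the suffix list and the helper itself are assumptions, and the real notifier derives entity data from the JSON payload rather than the channel name:

```rust
/// Derive the entity portion of a NOTIFY channel name by stripping a known
/// event suffix. Hypothetical helper for illustration only.
fn entity_of(channel: &str) -> Option<&str> {
    const SUFFIXES: &[&str] = &["_status_changed", "_created", "_updated"];
    SUFFIXES.iter().find_map(|s| channel.strip_suffix(*s))
}

fn main() {
    assert_eq!(entity_of("artifact_created"), Some("artifact"));
    assert_eq!(entity_of("workflow_execution_status_changed"), Some("workflow_execution"));
    // Channel names outside the convention yield no entity.
    assert_eq!(entity_of("heartbeat"), None);
}
```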
@@ -46,37 +49,48 @@ impl PostgresListener {
         );

         // Create a dedicated listener connection
-        let mut listener = PgListener::connect(&self.database_url)
-            .await
-            .context("Failed to connect PostgreSQL listener")?;
+        let mut listener = self.create_listener().await?;

-        // Listen on all notification channels
-        for channel in NOTIFICATION_CHANNELS {
-            listener
-                .listen(channel)
-                .await
-                .context(format!("Failed to LISTEN on channel '{}'", channel))?;
-            info!("Listening on PostgreSQL channel: {}", channel);
-        }
+        info!("PostgreSQL listener ready — entering recv loop");
+
+        // Periodic heartbeat so we can confirm the task is alive even when idle.
+        let heartbeat_interval = Duration::from_secs(60);
+        let mut next_heartbeat = tokio::time::Instant::now() + heartbeat_interval;

         // Process notifications in a loop
         loop {
-            match listener.recv().await {
+            // Log a heartbeat if no notification has arrived for a while.
+            let now = tokio::time::Instant::now();
+            if now >= next_heartbeat {
+                info!("PostgreSQL listener heartbeat — still waiting for notifications");
+                next_heartbeat = now + heartbeat_interval;
+            }
+
+            trace!("Calling listener.recv() — waiting for next notification");
+
+            // Use a timeout so the heartbeat fires even during long idle periods.
+            match tokio::time::timeout(heartbeat_interval, listener.recv()).await {
+                // Timed out waiting — loop back and log the heartbeat above.
+                Err(_timeout) => {
+                    trace!("listener.recv() timed out — re-entering loop");
+                    continue;
+                }
+                Ok(recv_result) => match recv_result {
                 Ok(pg_notification) => {
+                    let channel = pg_notification.channel();
+                    let payload = pg_notification.payload();
                     debug!(
-                        "Received PostgreSQL notification: channel={}, payload={}",
-                        pg_notification.channel(),
-                        pg_notification.payload()
+                        "Received PostgreSQL notification: channel={}, payload_len={}",
+                        channel,
+                        payload.len()
                     );
+                    debug!("Notification payload: {}", payload);

                     // Parse and broadcast notification
-                    if let Err(e) = self
-                        .process_notification(pg_notification.channel(), pg_notification.payload())
-                    {
+                    if let Err(e) = self.process_notification(channel, payload) {
                         error!(
                             "Failed to process notification from channel '{}': {}",
-                            pg_notification.channel(),
-                            e
+                            channel, e
                         );
                     }
                 }
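The loop above guarantees a wake-up at least every `heartbeat_interval` by wrapping `recv()` in `tokio::time::timeout`, then checks whether a heartbeat log is due before blocking again. The scheduling bookkeeping can be isolated as a pure function (a sketch; the function name is illustrative):

```rust
use std::time::{Duration, Instant};

/// Decide whether a heartbeat is due and, if so, schedule the next one.
/// Mirrors the `next_heartbeat` bookkeeping in the recv loop above.
fn heartbeat_due(now: Instant, next: &mut Instant, interval: Duration) -> bool {
    if now >= *next {
        *next = now + interval;
        true
    } else {
        false
    }
}

fn main() {
    let interval = Duration::from_secs(60);
    let start = Instant::now();
    let mut next = start + interval;
    // Not due immediately after scheduling...
    assert!(!heartbeat_due(start, &mut next, interval));
    // ...but due once the interval has elapsed (simulated clock).
    assert!(heartbeat_due(start + interval, &mut next, interval));
    assert_eq!(next, start + interval + interval);
}
```

Basing `next` on "now + interval" rather than "previous deadline + interval" means a long notification burst cannot queue up a backlog of heartbeat logs.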
@@ -84,32 +98,62 @@ impl PostgresListener {
                     error!("Error receiving PostgreSQL notification: {}", e);

                     // Sleep briefly before retrying to avoid tight loop on persistent errors
-                    tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
+                    tokio::time::sleep(Duration::from_secs(1)).await;

                     // Try to reconnect
                     warn!("Attempting to reconnect PostgreSQL listener...");
-                    match PgListener::connect(&self.database_url).await {
+                    match self.create_listener().await {
                         Ok(new_listener) => {
                             listener = new_listener;
-                            // Re-subscribe to all channels
-                            for channel in NOTIFICATION_CHANNELS {
-                                if let Err(e) = listener.listen(channel).await {
-                                    error!(
-                                        "Failed to re-subscribe to channel '{}': {}",
-                                        channel, e
-                                    );
-                                }
-                            }
+                            next_heartbeat = tokio::time::Instant::now() + heartbeat_interval;
                             info!("PostgreSQL listener reconnected successfully");
                         }
                         Err(e) => {
                             error!("Failed to reconnect PostgreSQL listener: {}", e);
-                            tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
+                            tokio::time::sleep(Duration::from_secs(5)).await;
                         }
                     }
                 }
+                }, // end Ok(recv_result)
+            } // end timeout match
         }
     }
+
+    /// Create a fresh [`PgListener`] subscribed to all notification channels.
+    async fn create_listener(&self) -> Result<PgListener> {
+        info!("Connecting PostgreSQL LISTEN connection to {}", {
+            // Mask the password for logging
+            let url = &self.database_url;
+            if let Some(at) = url.rfind('@') {
+                if let Some(colon) = url[..at].rfind(':') {
+                    format!("{}:****{}", &url[..colon], &url[at..])
+                } else {
+                    url.clone()
+                }
+            } else {
+                url.clone()
+            }
+        });
+
+        let mut listener = PgListener::connect(&self.database_url)
+            .await
+            .context("Failed to connect PostgreSQL listener")?;
+
+        info!("PostgreSQL LISTEN connection established — subscribing to channels");
+
+        // Use listen_all for a single round-trip instead of N separate commands
+        listener
+            .listen_all(NOTIFICATION_CHANNELS.iter().copied())
+            .await
+            .context("Failed to LISTEN on notification channels")?;
+
+        info!(
+            "Subscribed to {} PostgreSQL channels: {:?}",
+            NOTIFICATION_CHANNELS.len(),
+            NOTIFICATION_CHANNELS
+        );
+
+        Ok(listener)
+    }
 }

 /// Process a PostgreSQL notification and broadcast it to WebSocket clients
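`create_listener` masks the URL password before logging by splitting at the last `@` and the last `:` before it. Extracted into a standalone function for clarity (the helper name is illustrative; the inline block above does the same thing):

```rust
/// Mask the password portion of a `user:password@host` style database URL
/// for logging; returns the URL unchanged when no credentials are present.
fn mask_password(url: &str) -> String {
    if let Some(at) = url.rfind('@') {
        // Last ':' before the '@' separates user from password.
        if let Some(colon) = url[..at].rfind(':') {
            return format!("{}:****{}", &url[..colon], &url[at..]);
        }
    }
    url.to_string()
}

fn main() {
    println!("{}", mask_password("postgresql://attune:s3cret@postgres:5432/attune"));
    // postgresql://attune:****@postgres:5432/attune
}
```

Note the edge case this tolerates: a URL with no `@` (no credentials) falls through untouched, as does a malformed `user@host` URL with no password colon.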
@@ -171,6 +215,8 @@ mod tests {
         assert!(NOTIFICATION_CHANNELS.contains(&"enforcement_created"));
         assert!(NOTIFICATION_CHANNELS.contains(&"enforcement_status_changed"));
         assert!(NOTIFICATION_CHANNELS.contains(&"inquiry_created"));
+        assert!(NOTIFICATION_CHANNELS.contains(&"artifact_created"));
+        assert!(NOTIFICATION_CHANNELS.contains(&"artifact_updated"));
     }

     #[test]
@@ -3,7 +3,7 @@
 use anyhow::Result;
 use std::sync::Arc;
 use tokio::sync::broadcast;
-use tracing::{error, info};
+use tracing::{debug, error, info};

 use attune_common::config::Config;

@@ -108,9 +108,26 @@ impl NotifierService {
         tokio::spawn(async move {
             loop {
                 tokio::select! {
-                    Ok(notification) = notification_rx.recv() => {
+                    recv_result = notification_rx.recv() => {
+                        match recv_result {
+                            Ok(notification) => {
+                                debug!(
+                                    "Broadcasting notification: type={}, entity_type={}, entity_id={}",
+                                    notification.notification_type,
+                                    notification.entity_type,
+                                    notification.entity_id,
+                                );
                                 subscriber_manager.broadcast(notification);
                             }
+                            Err(tokio::sync::broadcast::error::RecvError::Lagged(n)) => {
+                                error!("Notification broadcaster lagged — dropped {} messages", n);
+                            }
+                            Err(tokio::sync::broadcast::error::RecvError::Closed) => {
+                                error!("Notification broadcast channel closed — broadcaster exiting");
+                                break;
+                            }
+                        }
+                    }
                     _ = shutdown_rx.recv() => {
                         info!("Notification broadcaster shutting down");
                         break;
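The rewritten `select!` arm above distinguishes the two broadcast receive errors: `Lagged(n)` means the receiver fell behind and `n` messages were dropped, so the loop should log and keep going; `Closed` means every sender is gone and the task should exit. That policy in isolation, using a stand-in enum mirroring tokio's `broadcast::error::RecvError` so no async runtime is needed:

```rust
/// Stand-in for tokio's `broadcast::error::RecvError`, for illustration only.
enum RecvError {
    Lagged(u64),
    Closed,
}

/// Returns true when the broadcaster loop should keep running.
/// Lagged only drops messages; Closed means no sender will ever come back.
fn should_continue(err: &RecvError) -> bool {
    match err {
        RecvError::Lagged(_) => true,
        RecvError::Closed => false,
    }
}

fn main() {
    assert!(should_continue(&RecvError::Lagged(7)));
    assert!(!should_continue(&RecvError::Closed));
}
```

This matters because the original `Ok(notification) = rx.recv()` pattern silently skips the arm on any error, which hides lag and can spin on a closed channel.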
@@ -180,6 +180,7 @@ impl SubscriberManager {
                 // Channel closed, client disconnected
                 failed_count += 1;
                 to_remove.push(client_id.clone());
+                debug!("Client {} disconnected — removing", client_id);
             }
         }
     }
@@ -191,8 +192,12 @@ impl SubscriberManager {

         if sent_count > 0 {
             debug!(
-                "Broadcast notification: sent={}, failed={}, type={}",
-                sent_count, failed_count, notification.notification_type
+                "Broadcast notification: sent={}, failed={}, type={}, entity_type={}, entity_id={}",
+                sent_count,
+                failed_count,
+                notification.notification_type,
+                notification.entity_type,
+                notification.entity_id,
             );
         }
     }
@@ -157,8 +157,10 @@ async fn handle_websocket(socket: WebSocket, state: Arc<AppState>) {
     let subscriber_manager_clone = state.subscriber_manager.clone();
     let outgoing_task = tokio::spawn(async move {
         while let Some(notification) = rx.recv().await {
-            // Serialize notification to JSON
-            match serde_json::to_string(&notification) {
+            // Wrap in the tagged ClientMessage envelope so the client sees
+            // {"type":"notification", "notification_type":..., "entity_type":..., ...}
+            let envelope = ClientMessage::Notification(notification);
+            match serde_json::to_string(&envelope) {
                 Ok(json) => {
                     if let Err(e) = ws_sender.send(Message::Text(json.into())).await {
                         error!("Failed to send notification to {}: {}", client_id_clone, e);
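With `#[serde(tag = "type")]`, the envelope serializes as a single flat object in which the `type` discriminant sits alongside the variant's own fields. A sketch of the resulting wire shape, built by hand with `format!` so it runs without serde (the field values are illustrative):

```rust
/// Hand-rolled version of the tagged envelope the serde derive produces:
/// the `type` tag lives in the same object as the variant's fields, not in
/// a nested wrapper.
fn notification_envelope(notification_type: &str, entity_type: &str, entity_id: i64) -> String {
    format!(
        r#"{{"type":"notification","notification_type":"{}","entity_type":"{}","entity_id":{}}}"#,
        notification_type, entity_type, entity_id
    )
}

fn main() {
    println!("{}", notification_envelope("artifact_updated", "artifact", 7));
    // {"type":"notification","notification_type":"artifact_updated","entity_type":"artifact","entity_id":7}
}
```

Sending the bare notification instead of the envelope (the pre-fix behavior) would omit the `"type"` tag, so clients dispatching on that field would silently drop every message.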
@@ -17,6 +17,7 @@ use attune_common::auth::jwt::{generate_execution_token, JwtConfig};
 use attune_common::error::{Error, Result};
 use attune_common::models::runtime::RuntimeExecutionConfig;
 use attune_common::models::{runtime::Runtime as RuntimeModel, Action, Execution, ExecutionStatus};
+use attune_common::repositories::artifact::{ArtifactRepository, ArtifactVersionRepository};
 use attune_common::repositories::execution::{ExecutionRepository, UpdateExecutionInput};
 use attune_common::repositories::runtime_version::RuntimeVersionRepository;
 use attune_common::repositories::{FindById, Update};
@@ -42,6 +43,7 @@ pub struct ActionExecutor {
     max_stdout_bytes: usize,
     max_stderr_bytes: usize,
     packs_base_dir: PathBuf,
+    artifacts_dir: PathBuf,
     api_url: String,
     jwt_config: JwtConfig,
 }
@@ -67,6 +69,7 @@ impl ActionExecutor {
         max_stdout_bytes: usize,
         max_stderr_bytes: usize,
         packs_base_dir: PathBuf,
+        artifacts_dir: PathBuf,
         api_url: String,
         jwt_config: JwtConfig,
     ) -> Self {
@@ -79,6 +82,7 @@ impl ActionExecutor {
             max_stdout_bytes,
             max_stderr_bytes,
             packs_base_dir,
+            artifacts_dir,
             api_url,
             jwt_config,
         }
@@ -142,6 +146,15 @@ impl ActionExecutor {
             // Don't fail the execution just because artifact storage failed
         }

+        // Finalize file-backed artifacts (stat files on disk and update size_bytes)
+        if let Err(e) = self.finalize_file_artifacts(execution_id).await {
+            warn!(
+                "Failed to finalize file-backed artifacts for execution {}: {}",
+                execution_id, e
+            );
+            // Don't fail the execution just because artifact finalization failed
+        }
+
         // Update execution with result
         let is_success = result.is_success();
         debug!(
@@ -291,6 +304,10 @@ impl ActionExecutor {
         env.insert("ATTUNE_EXEC_ID".to_string(), execution.id.to_string());
         env.insert("ATTUNE_ACTION".to_string(), execution.action_ref.clone());
         env.insert("ATTUNE_API_URL".to_string(), self.api_url.clone());
+        env.insert(
+            "ATTUNE_ARTIFACTS_DIR".to_string(),
+            self.artifacts_dir.to_string_lossy().to_string(),
+        );

         // Generate execution-scoped API token.
         // The identity that triggered the execution is derived from the `sub` claim
@@ -657,6 +674,95 @@ impl ActionExecutor {
         Ok(())
     }

+    /// Finalize file-backed artifacts after execution completes.
+    ///
+    /// Scans all artifact versions linked to this execution that have a `file_path`,
+    /// stats each file on disk, and updates `size_bytes` on both the version row
+    /// and the parent artifact row.
+    async fn finalize_file_artifacts(&self, execution_id: i64) -> Result<()> {
+        let versions =
+            ArtifactVersionRepository::find_file_versions_by_execution(&self.pool, execution_id)
+                .await?;
+
+        if versions.is_empty() {
+            return Ok(());
+        }
+
+        info!(
+            "Finalizing {} file-backed artifact version(s) for execution {}",
+            versions.len(),
+            execution_id,
+        );
+
+        // Track the latest version per artifact so we can update parent size_bytes
+        let mut latest_size_per_artifact: HashMap<i64, (i32, i64)> = HashMap::new();
+
+        for ver in &versions {
+            let file_path = match &ver.file_path {
+                Some(fp) => fp,
+                None => continue,
+            };
+
+            let full_path = self.artifacts_dir.join(file_path);
+            let size_bytes = match tokio::fs::metadata(&full_path).await {
+                Ok(metadata) => metadata.len() as i64,
+                Err(e) => {
+                    warn!(
+                        "Could not stat artifact file '{}' for version {}: {}. Setting size_bytes=0.",
+                        full_path.display(),
+                        ver.id,
+                        e,
+                    );
+                    0
+                }
+            };
+
+            // Update the version row
+            if let Err(e) =
+                ArtifactVersionRepository::update_size_bytes(&self.pool, ver.id, size_bytes).await
+            {
+                warn!(
+                    "Failed to update size_bytes for artifact version {}: {}",
+                    ver.id, e,
+                );
+            }
+
+            // Track the highest version number per artifact for parent update
+            let entry = latest_size_per_artifact
+                .entry(ver.artifact)
+                .or_insert((ver.version, size_bytes));
+            if ver.version > entry.0 {
+                *entry = (ver.version, size_bytes);
+            }
+
+            debug!(
+                "Finalized artifact version {} (artifact {}): file='{}', size={}",
+                ver.id, ver.artifact, file_path, size_bytes,
+            );
+        }
+
+        // Update parent artifact size_bytes to reflect the latest version's size
+        for (artifact_id, (_version, size_bytes)) in &latest_size_per_artifact {
+            if let Err(e) =
+                ArtifactRepository::update_size_bytes(&self.pool, *artifact_id, *size_bytes).await
+            {
+                warn!(
+                    "Failed to update size_bytes for artifact {}: {}",
+                    artifact_id, e,
+                );
+            }
+        }
+
+        info!(
+            "Finalized file-backed artifacts for execution {}: {} version(s), {} artifact(s)",
+            execution_id,
+            versions.len(),
+            latest_size_per_artifact.len(),
+        );
+
+        Ok(())
+    }
+
     /// Handle successful execution
     async fn handle_execution_success(
         &self,
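The `latest_size_per_artifact` map above keeps one `(version, size)` pair per artifact and replaces it only when a higher version number appears, so the parent row always ends up with the newest version's size even if versions arrive out of order. That reduction in isolation (tuple types simplified from the repository structs):

```rust
use std::collections::HashMap;

/// For each artifact id, keep the size of its highest-numbered version.
/// Mirrors the `latest_size_per_artifact` bookkeeping in the finalizer above.
fn latest_sizes(versions: &[(i64, i32, i64)]) -> HashMap<i64, (i32, i64)> {
    let mut latest: HashMap<i64, (i32, i64)> = HashMap::new();
    for &(artifact, version, size) in versions {
        let entry = latest.entry(artifact).or_insert((version, size));
        // Only a strictly higher version number displaces the stored pair.
        if version > entry.0 {
            *entry = (version, size);
        }
    }
    latest
}

fn main() {
    // (artifact id, version number, size in bytes) — illustrative data.
    let versions = [(1, 1, 100), (1, 3, 300), (1, 2, 200), (2, 1, 50)];
    let latest = latest_sizes(&versions);
    assert_eq!(latest[&1], (3, 300)); // highest version wins, regardless of order
    assert_eq!(latest[&2], (1, 50));
}
```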
@@ -136,7 +136,7 @@ impl WorkerService {
         // Initialize worker registration
         let registration = Arc::new(RwLock::new(WorkerRegistration::new(pool.clone(), &config)));

-        // Initialize artifact manager
+        // Initialize artifact manager (legacy, for stdout/stderr log storage)
         let artifact_base_dir = std::path::PathBuf::from(
             config
                 .worker
@@ -148,6 +148,22 @@ impl WorkerService {
         let artifact_manager = ArtifactManager::new(artifact_base_dir);
         artifact_manager.initialize().await?;

+        // Initialize artifacts directory for file-backed artifact storage (shared volume).
+        // Execution processes write artifact files here; the API serves them from the same path.
+        let artifacts_dir = std::path::PathBuf::from(&config.artifacts_dir);
+        if let Err(e) = tokio::fs::create_dir_all(&artifacts_dir).await {
+            warn!(
+                "Failed to create artifacts directory '{}': {}. File-backed artifacts may not work.",
+                artifacts_dir.display(),
+                e,
+            );
+        } else {
+            info!(
+                "Artifacts directory initialized at: {}",
+                artifacts_dir.display()
+            );
+        }
+
         let packs_base_dir = std::path::PathBuf::from(&config.packs_base_dir);
         let runtime_envs_dir = std::path::PathBuf::from(&config.runtime_envs_dir);

@@ -304,6 +320,7 @@ impl WorkerService {
             max_stdout_bytes,
             max_stderr_bytes,
             packs_base_dir.clone(),
+            artifacts_dir,
             api_url,
             jwt_config,
         ));
@@ -189,6 +189,7 @@ services:
|
|||||||
- packs_data:/opt/attune/packs:rw
|
- packs_data:/opt/attune/packs:rw
|
||||||
- ./packs.dev:/opt/attune/packs.dev:rw
|
- ./packs.dev:/opt/attune/packs.dev:rw
|
||||||
- runtime_envs:/opt/attune/runtime_envs
|
- runtime_envs:/opt/attune/runtime_envs
|
||||||
|
- artifacts_data:/opt/attune/artifacts
|
||||||
- api_logs:/opt/attune/logs
|
- api_logs:/opt/attune/logs
|
||||||
depends_on:
|
depends_on:
|
||||||
init-packs:
|
init-packs:
|
||||||
@@ -233,6 +234,7 @@ services:
|
|||||||
volumes:
|
volumes:
|
||||||
- packs_data:/opt/attune/packs:ro
|
- packs_data:/opt/attune/packs:ro
|
||||||
- ./packs.dev:/opt/attune/packs.dev:rw
|
- ./packs.dev:/opt/attune/packs.dev:rw
|
||||||
|
- artifacts_data:/opt/attune/artifacts:ro
|
||||||
- executor_logs:/opt/attune/logs
|
- executor_logs:/opt/attune/logs
|
||||||
depends_on:
|
depends_on:
|
||||||
init-packs:
|
init-packs:
|
||||||
@@ -279,10 +281,12 @@ services:
|
|||||||
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
|
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
|
||||||
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
|
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
|
||||||
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
|
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
|
||||||
|
ATTUNE_API_URL: http://attune-api:8080
|
||||||
volumes:
|
volumes:
|
||||||
- packs_data:/opt/attune/packs:ro
|
- packs_data:/opt/attune/packs:ro
|
||||||
- ./packs.dev:/opt/attune/packs.dev:rw
|
- ./packs.dev:/opt/attune/packs.dev:rw
|
||||||
- runtime_envs:/opt/attune/runtime_envs
|
- runtime_envs:/opt/attune/runtime_envs
|
||||||
|
- artifacts_data:/opt/attune/artifacts
|
||||||
- worker_shell_logs:/opt/attune/logs
|
- worker_shell_logs:/opt/attune/logs
|
||||||
depends_on:
|
depends_on:
|
||||||
init-packs:
|
init-packs:
|
||||||
@@ -325,10 +329,12 @@ services:
       ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
       ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
       ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
+      ATTUNE_API_URL: http://attune-api:8080
     volumes:
       - packs_data:/opt/attune/packs:ro
       - ./packs.dev:/opt/attune/packs.dev:rw
       - runtime_envs:/opt/attune/runtime_envs
+      - artifacts_data:/opt/attune/artifacts
       - worker_python_logs:/opt/attune/logs
     depends_on:
       init-packs:
@@ -371,10 +377,12 @@ services:
       ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
       ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
       ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
+      ATTUNE_API_URL: http://attune-api:8080
     volumes:
       - packs_data:/opt/attune/packs:ro
       - ./packs.dev:/opt/attune/packs.dev:rw
       - runtime_envs:/opt/attune/runtime_envs
+      - artifacts_data:/opt/attune/artifacts
       - worker_node_logs:/opt/attune/logs
     depends_on:
       init-packs:
@@ -417,10 +425,12 @@ services:
       ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
       ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
       ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
+      ATTUNE_API_URL: http://attune-api:8080
     volumes:
       - packs_data:/opt/attune/packs:ro
       - ./packs.dev:/opt/attune/packs.dev:rw
       - runtime_envs:/opt/attune/runtime_envs
+      - artifacts_data:/opt/attune/artifacts
       - worker_full_logs:/opt/attune/logs
     depends_on:
       init-packs:
@@ -594,6 +604,8 @@ volumes:
     driver: local
   runtime_envs:
     driver: local
+  artifacts_data:
+    driver: local
 
 # ============================================================================
 # Networks
docs/plans/file-based-artifact-storage.md (new file, 330 lines)
@@ -0,0 +1,330 @@
# File-Based Artifact Storage Plan

## Overview

Replace PostgreSQL BYTEA storage for file-type artifacts with a shared filesystem volume. Execution processes write artifact files directly to disk via paths assigned by the API; the API serves those files from disk on download. The database stores only metadata (path, size, content type) — no binary content for file-based artifacts.

**Motivation:**

- Eliminates PostgreSQL bloat from large binary artifacts
- Enables executions to write files incrementally (streaming logs, large outputs) without buffering in memory for an API upload
- Artifacts can be retained independently of execution records (executions are hypertables with 90-day retention)
- Decouples artifact lifecycle from execution lifecycle — artifacts created by one execution can be accessed by others or by external systems

## Artifact Type Classification

| Type | Storage | Notes |
|------|---------|-------|
| `FileBinary` | **Disk** (shared volume) | Binary files produced by executions |
| `FileDatatable` | **Disk** (shared volume) | Tabular data files (CSV, etc.) |
| `FileText` | **Disk** (shared volume) | Text files, logs |
| `Log` | **Disk** (shared volume) | Execution stdout/stderr logs |
| `Progress` | **DB** (`artifact.data` JSONB) | Small structured progress entries — unchanged |
| `Url` | **DB** (`artifact.data` JSONB) | URL references — unchanged |

## Directory Structure

```
/opt/attune/artifacts/          # artifacts_dir (configurable)
└── {artifact_ref_slug}/        # derived from artifact ref (globally unique)
    ├── v1.txt                  # version 1
    ├── v2.txt                  # version 2
    └── v3.txt                  # version 3
```

**Key decisions:**

- **No execution ID in the path.** Artifacts may outlive execution records (hypertable retention) and may be shared across executions or created externally.
- **Keyed by artifact ref.** The `ref` column has a unique index, making it a stable, globally unique identifier. Dots in refs become directory separators (e.g., `mypack.build_log` → `mypack/build_log/`).
- **Version files named `v{N}.{ext}`** where `N` is the version number from `next_artifact_version()` and `ext` is derived from `content_type`.
## End-to-End Flow

### Happy Path

```
┌──────────┐     ┌──────────┐     ┌──────────┐     ┌────────────────┐
│  Worker  │────▶│Execution │────▶│   API    │────▶│ Shared Volume  │
│ Service  │     │ Process  │     │ Service  │     │ /opt/attune/   │
│          │     │(Py/Node/ │     │          │     │  artifacts/    │
│          │     │ Shell)   │     │          │     │                │
└──────────┘     └──────────┘     └──────────┘     └────────────────┘
     │                │                │                    │
     │ 1. Start exec  │                │                    │
     │    Set ATTUNE_ │                │                    │
     │    ARTIFACTS_DIR                │                    │
     │───────────────▶│                │                    │
     │                │                │                    │
     │                │ 2. POST /api/v1/artifacts           │
     │                │    {ref, type, execution}           │
     │                │───────────────▶│                    │
     │                │                │ 3. Create artifact │
     │                │                │    row in DB       │
     │                │                │                    │
     │                │◀───────────────│                    │
     │                │  {id, ref, ...}│                    │
     │                │                │                    │
     │                │ 4. POST /api/v1/artifacts/{id}/versions
     │                │    {content_type}                   │
     │                │───────────────▶│                    │
     │                │                │ 5. Create version  │
     │                │                │    row (file_path, │
     │                │                │    no BYTEA content)
     │                │                │    + mkdir on disk │
     │                │◀───────────────│                    │
     │                │  {id, version, │                    │
     │                │   file_path}   │                    │
     │                │                │                    │
     │                │ 6. Write file to                    │
     │                │    $ATTUNE_ARTIFACTS_DIR/file_path  │
     │                │─────────────────────────────────────▶│
     │                │                │                    │
     │ 7. Exec exits  │                │                    │
     │◀───────────────│                │                    │
     │                                 │                    │
     │ 8. Finalize: stat files,        │                    │
     │    update size_bytes in DB      │                    │
     │    (direct DB access)           │                    │
     │─────────────────────────────────┘                    │
     │                                                      │
     ▼                                                      │
┌──────────┐                                                │
│  Client  │  9. GET /api/v1/artifacts/{id}/download        │
│  (UI)    │──────────────────▶ API reads from disk ◀──────┘
└──────────┘
```

### Step-by-Step

1. **Worker receives execution from MQ**, prepares `ExecutionContext`, sets `ATTUNE_ARTIFACTS_DIR` environment variable.
2. **Execution process** calls `POST /api/v1/artifacts` to create the artifact record (ref, type, execution ID, content_type).
3. **API** creates the `artifact` row in DB, returns the artifact ID.
4. **Execution process** calls `POST /api/v1/artifacts/{id}/versions` to create a new version. For file-type artifacts, the request body contains content_type and optional metadata — **no file content**.
5. **API** creates the `artifact_version` row with a computed `file_path` (e.g., `mypack/build_log/v1.txt`), `content` BYTEA left NULL. Creates the parent directory on disk. Returns version ID and `file_path`.
6. **Execution process** writes file content to `$ATTUNE_ARTIFACTS_DIR/{file_path}`. Can write incrementally (append, stream, etc.).
7. **Execution process exits.**
8. **Worker finalizes**: scans artifact versions linked to this execution, `stat()`s each file on disk, updates `artifact_version.size_bytes` and `artifact.size_bytes` in the DB via direct repository access.
9. **Client requests download**: API reads from `{artifacts_dir}/{file_path}` on disk and streams the response.
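The write side of step 6 can be sketched in a few lines. This is a minimal, std-only illustration (the helper names `resolve_artifact_path` and `append_chunk` are not from the plan); it shows the key property that the execution process appends to the assigned path incrementally instead of buffering content for an upload.

```rust
use std::fs::{self, OpenOptions};
use std::io::Write;
use std::path::{Path, PathBuf};

/// Join the volume root (from $ATTUNE_ARTIFACTS_DIR) with the relative
/// `file_path` returned by the create-version API call.
fn resolve_artifact_path(artifacts_dir: &str, file_path: &str) -> PathBuf {
    Path::new(artifacts_dir).join(file_path)
}

/// Append a chunk to the artifact file; create parent directories in case
/// the API-side mkdir has not happened (e.g. local runs outside Docker).
fn append_chunk(target: &Path, chunk: &[u8]) -> std::io::Result<()> {
    if let Some(parent) = target.parent() {
        fs::create_dir_all(parent)?;
    }
    let mut f = OpenOptions::new().create(true).append(true).open(target)?;
    f.write_all(chunk)
}

fn main() -> std::io::Result<()> {
    // Stand-in for $ATTUNE_ARTIFACTS_DIR in a real execution.
    let dir = std::env::temp_dir().join("attune-artifacts-demo");
    let target = resolve_artifact_path(dir.to_str().unwrap(), "mypack/build_log/v1.txt");
    let _ = fs::remove_file(&target); // keep the demo idempotent
    append_chunk(&target, b"line 1\n")?;
    append_chunk(&target, b"line 2\n")?; // incremental append, no buffering
    assert_eq!(fs::read_to_string(&target)?, "line 1\nline 2\n");
    Ok(())
}
```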
## Implementation Phases

### Phase 1: Configuration & Volume Infrastructure

**`crates/common/src/config.rs`**
- Add `artifacts_dir: String` to `Config` struct with default `/opt/attune/artifacts`
- Add `default_artifacts_dir()` function

**`config.development.yaml`**
- Add `artifacts_dir: ./artifacts`

**`config.docker.yaml`**
- Add `artifacts_dir: /opt/attune/artifacts`

**`docker-compose.yaml`**
- Add `artifacts_data` named volume
- Mount `artifacts_data:/opt/attune/artifacts` in: api (rw), all workers (rw), executor (ro)
- Add `ATTUNE__ARTIFACTS_DIR: /opt/attune/artifacts` to service environments where needed

### Phase 2: Database Schema Changes

**New migration: `migrations/20250101000011_artifact_file_storage.sql`**

```sql
-- Add file_path to artifact_version for disk-based storage
ALTER TABLE artifact_version ADD COLUMN IF NOT EXISTS file_path TEXT;

-- Index for finding versions by file_path (orphan cleanup)
CREATE INDEX IF NOT EXISTS idx_artifact_version_file_path
    ON artifact_version(file_path) WHERE file_path IS NOT NULL;

COMMENT ON COLUMN artifact_version.file_path IS
    'Relative path from artifacts_dir root for disk-stored content. '
    'When set, content BYTEA is NULL — file lives on shared volume.';
```

**`crates/common/src/models.rs`** — `artifact_version` module:
- Add `file_path: Option<String>` to `ArtifactVersion` struct
- Update `SELECT_COLUMNS` and `SELECT_COLUMNS_WITH_CONTENT` to include `file_path`

**`crates/common/src/repositories/artifact.rs`** — `ArtifactVersionRepository`:
- Add `file_path: Option<String>` to `CreateArtifactVersionInput`
- Wire `file_path` through the `create` query
- Add `update_size_bytes(executor, version_id, size_bytes)` method for worker finalization
- Add `find_file_versions_by_execution(executor, execution_id)` method — joins `artifact_version` → `artifact` on `artifact.execution` to find all file-based versions for an execution
### Phase 3: API Changes

#### Create Version Endpoint (modified)

`POST /api/v1/artifacts/{id}/versions` — currently `create_version_json`

Add a new endpoint or modify existing behavior:

**`POST /api/v1/artifacts/{id}/versions/file`** (new endpoint)
- Request body: `CreateFileVersionRequest { content_type: Option<String>, meta: Option<Value>, created_by: Option<String> }`
- **No file content in the request** — this is the key difference from `upload_version`
- API computes `file_path` from artifact ref + version number + content_type extension
- Creates `artifact_version` row with `file_path` set, `content` NULL
- Creates parent directory on disk: `{artifacts_dir}/{file_path_parent}/`
- Returns `ArtifactVersionResponse` **with `file_path` included**

**File path computation logic:**

```rust
fn compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
    // "mypack.build_log" → "mypack/build_log"
    let ref_path = artifact_ref.replace('.', "/");
    let ext = extension_from_content_type(content_type);
    format!("{}/v{}.{}", ref_path, version, ext)
}
```
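A runnable version of the path computation follows. The plan names `extension_from_content_type` but does not define it, so the mapping below is an assumed minimal table, not the real implementation:

```rust
/// Assumed content-type → extension table; the plan only names this helper,
/// so the entries here are illustrative.
fn extension_from_content_type(content_type: &str) -> &'static str {
    match content_type {
        "text/plain" => "txt",
        "text/csv" => "csv",
        "application/json" => "json",
        _ => "bin", // e.g. application/octet-stream
    }
}

/// As specified in the plan: ref dots become directory separators,
/// and the version file is named v{N}.{ext}.
fn compute_file_path(artifact_ref: &str, version: i32, content_type: &str) -> String {
    let ref_path = artifact_ref.replace('.', "/");
    let ext = extension_from_content_type(content_type);
    format!("{}/v{}.{}", ref_path, version, ext)
}

fn main() {
    assert_eq!(compute_file_path("mypack.build_log", 1, "text/plain"), "mypack/build_log/v1.txt");
    assert_eq!(compute_file_path("mypack.report", 3, "text/csv"), "mypack/report/v3.csv");
}
```

Because `ref` is globally unique, the resulting paths never collide across executions.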
#### Download Endpoints (modified)

`GET /api/v1/artifacts/{id}/download` and `GET /api/v1/artifacts/{id}/versions/{v}/download`:
- If `artifact_version.file_path` is set:
  - Resolve absolute path: `{artifacts_dir}/{file_path}`
  - Verify file exists, return 404 if not
  - `stat()` the file for Content-Length header
  - Stream file content as response body
- If `file_path` is NULL:
  - Fall back to existing BYTEA/JSON content from DB (backward compatible)
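The disk-first, DB-fallback decision above can be captured as one function. A std-only sketch, not the actual handler (the real endpoint streams rather than reading the whole file, and the name `load_version_content` is illustrative); `None` maps to a 404:

```rust
use std::fs;
use std::path::Path;

/// Disk when `file_path` is set (missing file → None, i.e. 404),
/// otherwise the legacy DB-stored content.
fn load_version_content(
    artifacts_dir: &Path,
    file_path: Option<&str>,
    db_content: Option<Vec<u8>>,
) -> Option<Vec<u8>> {
    match file_path {
        Some(rel) => fs::read(artifacts_dir.join(rel)).ok(),
        None => db_content,
    }
}

fn main() {
    let dir = std::env::temp_dir().join("attune-dl-demo");
    fs::create_dir_all(dir.join("mypack/build_log")).unwrap();
    fs::write(dir.join("mypack/build_log/v1.txt"), b"hello").unwrap();

    // Disk-backed version
    assert_eq!(load_version_content(&dir, Some("mypack/build_log/v1.txt"), None), Some(b"hello".to_vec()));
    // file_path set but file missing → 404
    assert_eq!(load_version_content(&dir, Some("mypack/build_log/v9.txt"), None), None);
    // Legacy DB-stored version
    assert_eq!(load_version_content(&dir, None, Some(b"db".to_vec())), Some(b"db".to_vec()));
}
```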
#### Upload Endpoint (unchanged for now)

`POST /api/v1/artifacts/{id}/versions/upload` (multipart) — continues to store in DB BYTEA. This remains available for non-execution uploads (external systems, small files, etc.).

#### Response DTO Changes

**`crates/api/src/dto/artifact.rs`**:
- Add `file_path: Option<String>` to `ArtifactVersionResponse`
- Add `file_path: Option<String>` to `ArtifactVersionSummary`
- Add `CreateFileVersionRequest` DTO

### Phase 4: Worker Changes

#### Environment Variable Injection

**`crates/worker/src/executor.rs`** — `prepare_execution_context()`:
- Add `ATTUNE_ARTIFACTS_DIR` to the standard env vars block:

```rust
env.insert("ATTUNE_ARTIFACTS_DIR".to_string(), self.artifacts_dir.clone());
```

- The `ActionExecutor` struct needs to hold the `artifacts_dir` value (sourced from config)

#### Post-Execution Finalization

**`crates/worker/src/executor.rs`** — after execution completes (success or failure):

```
async fn finalize_artifacts(&self, execution_id: i64) -> Result<()>
```

1. Query `artifact_version` rows joined through `artifact.execution = execution_id` where `file_path IS NOT NULL`
2. For each version with a `file_path`:
   - Resolve absolute path: `{artifacts_dir}/{file_path}`
   - `tokio::fs::metadata(path).await` to get file size
   - If file exists: update `artifact_version.size_bytes` via repository
   - If file doesn't exist: set `size_bytes = 0` (execution didn't produce the file)
3. For each parent artifact: update `artifact.size_bytes` to the latest version's `size_bytes`

This runs after every execution regardless of success/failure status, since even failed executions may have written partial artifacts.
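The sizing pass at the heart of `finalize_artifacts` can be sketched synchronously. This is a std-only approximation under stated assumptions: the real version is async (`tokio::fs`) and writes the results back via the repository, and the `FileVersion` struct merely stands in for rows from `find_file_versions_by_execution`:

```rust
use std::fs;
use std::path::Path;

/// Stand-in for one row from `find_file_versions_by_execution`.
struct FileVersion {
    version_id: i64,
    file_path: String,
}

/// Stat each file under `artifacts_dir`; a missing file records 0 bytes
/// (the execution never produced it). Returns (version_id, size_bytes) pairs
/// that the worker would persist via the repository.
fn finalize_sizes(artifacts_dir: &Path, versions: &[FileVersion]) -> Vec<(i64, u64)> {
    versions
        .iter()
        .map(|v| {
            let size = fs::metadata(artifacts_dir.join(&v.file_path))
                .map(|m| m.len())
                .unwrap_or(0); // file absent → 0 bytes
            (v.version_id, size)
        })
        .collect()
}

fn main() {
    let dir = std::env::temp_dir().join("attune-finalize-demo");
    fs::create_dir_all(dir.join("mypack/out")).unwrap();
    fs::write(dir.join("mypack/out/v1.txt"), b"12345").unwrap();
    let versions = vec![
        FileVersion { version_id: 1, file_path: "mypack/out/v1.txt".into() },
        FileVersion { version_id: 2, file_path: "mypack/out/v2.txt".into() }, // never written
    ];
    assert_eq!(finalize_sizes(&dir, &versions), vec![(1, 5), (2, 0)]);
}
```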
#### Simplify Old ArtifactManager

**`crates/worker/src/artifacts.rs`**:
- The existing `ArtifactManager` is a standalone prototype disconnected from the DB-backed system. It can be simplified to only handle the `artifacts_dir` path resolution and directory creation, or removed entirely since the API now manages paths.
- Keep the struct as a thin wrapper if it's useful for the finalization logic, but remove the `store_logs`, `store_result`, `store_file` methods that duplicate what the API does.

### Phase 5: Retention & Cleanup

#### DB Trigger (existing, minor update)

The `enforce_artifact_retention` trigger fires `AFTER INSERT ON artifact_version` and deletes old version rows when the count exceeds the limit. This continues to work for row deletion. However, it **cannot** delete files on disk (triggers can't do filesystem I/O).

#### Orphan File Cleanup (new)

Add an async cleanup mechanism — either a periodic task in the worker/executor or a dedicated CLI command:

**`attune artifact cleanup`** (CLI) or periodic task:
1. Scan all files under `{artifacts_dir}/`
2. For each file, check if a matching `artifact_version.file_path` row exists
3. If no row exists (orphaned file), delete the file
4. Also delete empty directories

This handles:
- Files left behind after the retention trigger deletes version rows
- Files from crashed executions that created directories but whose version rows were cleaned up
- Manual DB cleanup scenarios

**Frequency:** Daily or on-demand via CLI. Orphaned files are not harmful (just wasted disk space), so aggressive cleanup isn't critical.
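Steps 1–3 of the cleanup amount to a recursive scan diffed against the known `file_path` set. A std-only sketch (function names are illustrative; the `known` set stands in for the DB query, and empty-directory removal from step 4 is omitted):

```rust
use std::collections::HashSet;
use std::fs;
use std::path::{Path, PathBuf};

/// Recursively collect files under `root`, as paths relative to `base`.
fn scan_files(root: &Path, base: &Path, out: &mut Vec<PathBuf>) -> std::io::Result<()> {
    for entry in fs::read_dir(root)? {
        let path = entry?.path();
        if path.is_dir() {
            scan_files(&path, base, out)?;
        } else {
            out.push(path.strip_prefix(base).unwrap().to_path_buf());
        }
    }
    Ok(())
}

/// Delete files with no matching `artifact_version.file_path` row
/// (`known` stands in for that query's result). Returns the delete count.
fn delete_orphans(root: &Path, known: &HashSet<String>) -> std::io::Result<usize> {
    let mut files = Vec::new();
    scan_files(root, root, &mut files)?;
    let mut deleted = 0;
    for rel in files {
        let key = rel.to_string_lossy().replace('\\', "/"); // normalize separators
        if !known.contains(&key) {
            fs::remove_file(root.join(&rel))?;
            deleted += 1;
        }
    }
    Ok(deleted)
}

fn main() -> std::io::Result<()> {
    let root = std::env::temp_dir().join("attune-orphan-demo");
    let _ = fs::remove_dir_all(&root);
    fs::create_dir_all(root.join("mypack/build_log"))?;
    fs::write(root.join("mypack/build_log/v1.txt"), b"keep")?;
    fs::write(root.join("mypack/build_log/v9.txt"), b"orphan")?;
    let known: HashSet<String> = ["mypack/build_log/v1.txt".to_string()].into_iter().collect();
    assert_eq!(delete_orphans(&root, &known)?, 1);
    assert!(root.join("mypack/build_log/v1.txt").exists());
    Ok(())
}
```

Because the scan and the DB query are not atomic, a production version would also skip files younger than some grace period to avoid racing an in-flight version creation.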
#### Artifact Deletion Endpoint

The existing `DELETE /api/v1/artifacts/{id}` cascades to `artifact_version` rows via FK. Enhance it to also delete files on disk:
- Before deleting the DB row, query all versions with `file_path IS NOT NULL`
- Delete each file from disk
- Then delete the DB row (cascades to version rows)
- Clean up empty parent directories

Similarly for `DELETE /api/v1/artifacts/{id}/versions/{v}`.

## Schema Summary

### artifact table (unchanged)

Existing columns remain. `size_bytes` continues to reflect the latest version's size (updated by worker finalization for file-based artifacts, updated by DB trigger for DB-stored artifacts).

### artifact_version table (modified)

| Column | Type | Notes |
|--------|------|-------|
| `id` | BIGSERIAL | PK |
| `artifact` | BIGINT | FK → artifact(id) ON DELETE CASCADE |
| `version` | INTEGER | Auto-assigned by `next_artifact_version()` |
| `content_type` | TEXT | MIME type |
| `size_bytes` | BIGINT | Set by worker finalization for file-based; set at insert for DB-stored |
| `content` | BYTEA | NULL for file-based artifacts; populated for DB-stored uploads |
| `content_json` | JSONB | For JSON content versions (unchanged) |
| **`file_path`** | **TEXT** | **NEW — relative path from `artifacts_dir`. When set, `content` is NULL** |
| `meta` | JSONB | Free-form metadata |
| `created_by` | TEXT | Who created this version |
| `created` | TIMESTAMPTZ | Immutable |

**Invariant:** Exactly one of `content`, `content_json`, or `file_path` should be non-NULL for a given version row.
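On the read side, that one-of invariant maps naturally onto a Rust enum. A sketch only (the type and field names are assumptions based on the schema, not the crate's actual model; `content_json` is a `String` here purely for illustration):

```rust
/// Exactly one storage location per version row.
enum VersionContent {
    Bytes(Vec<u8>), // content BYTEA
    Json(String),   // content_json JSONB
    File(String),   // file_path relative to artifacts_dir
}

/// Turn the three nullable columns into the enum, rejecting rows that
/// violate the invariant instead of silently preferring one source.
fn classify(
    content: Option<Vec<u8>>,
    content_json: Option<String>,
    file_path: Option<String>,
) -> Result<VersionContent, &'static str> {
    match (content, content_json, file_path) {
        (Some(b), None, None) => Ok(VersionContent::Bytes(b)),
        (None, Some(j), None) => Ok(VersionContent::Json(j)),
        (None, None, Some(p)) => Ok(VersionContent::File(p)),
        _ => Err("artifact_version row violates the one-of invariant"),
    }
}

fn main() {
    assert!(matches!(classify(None, None, Some("a/v1.txt".into())), Ok(VersionContent::File(_))));
    assert!(classify(Some(vec![1]), None, Some("a/v1.txt".into())).is_err());
    assert!(classify(None, None, None).is_err());
}
```

The same rule could also be enforced in the database with a CHECK constraint if stricter guarantees are wanted.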
## Files Changed

| File | Changes |
|------|---------|
| `crates/common/src/config.rs` | Add `artifacts_dir` field with default |
| `crates/common/src/models.rs` | Add `file_path` to `ArtifactVersion` |
| `crates/common/src/repositories/artifact.rs` | Wire `file_path` through create; add `update_size_bytes`, `find_file_versions_by_execution` |
| `crates/api/src/dto/artifact.rs` | Add `file_path` to version response DTOs; add `CreateFileVersionRequest` |
| `crates/api/src/routes/artifacts.rs` | New `create_version_file` endpoint; modify download endpoints for disk reads |
| `crates/api/src/state.rs` | No change needed — `config` already accessible via `AppState.config` |
| `crates/worker/src/executor.rs` | Inject `ATTUNE_ARTIFACTS_DIR` env var; add `finalize_artifacts()` post-execution |
| `crates/worker/src/service.rs` | Pass `artifacts_dir` config to `ActionExecutor` |
| `crates/worker/src/artifacts.rs` | Simplify or remove old `ArtifactManager` |
| `migrations/20250101000011_artifact_file_storage.sql` | Add `file_path` column to `artifact_version` |
| `config.development.yaml` | Add `artifacts_dir: ./artifacts` |
| `config.docker.yaml` | Add `artifacts_dir: /opt/attune/artifacts` |
| `docker-compose.yaml` | Add `artifacts_data` volume; mount in api + worker services |

## Environment Variables

| Variable | Set By | Available To | Value |
|----------|--------|--------------|-------|
| `ATTUNE_ARTIFACTS_DIR` | Worker | Execution process | Absolute path to artifacts volume (e.g., `/opt/attune/artifacts`) |
| `ATTUNE__ARTIFACTS_DIR` | Docker Compose | API / Worker services | Config override for `artifacts_dir` |

## Backward Compatibility

- **Existing DB-stored artifacts continue to work.** Download endpoints check `file_path` first, fall back to BYTEA/JSON content.
- **Existing multipart upload endpoint unchanged.** External systems can still upload small files via `POST /artifacts/{id}/versions/upload` — those go to DB as before.
- **Progress and URL artifacts unchanged.** They don't use `artifact_version` content at all.
- **No data migration needed.** Existing artifacts have `file_path = NULL` and continue to serve from DB.

## Future Considerations

- **External object storage (S3/MinIO):** The `file_path` abstraction makes it straightforward to swap the local filesystem for S3 later — the path becomes an object key, and the download endpoint proxies or redirects.
- **Streaming writes:** With disk-based storage, a future enhancement could allow the API to stream large file uploads directly to disk instead of buffering in memory.
- **Artifact garbage collection:** The orphan cleanup could be integrated into the executor's periodic maintenance loop alongside execution timeout monitoring.
- **Cross-execution artifact access:** Since artifacts are keyed by ref (not execution ID), a future enhancement could let actions declare artifact dependencies, and the worker could resolve and mount those paths.
@@ -186,6 +186,18 @@ END $$;
 
 COMMENT ON TYPE artifact_retention_enum IS 'Type of retention policy';
+
+-- ArtifactVisibility enum
+DO $$ BEGIN
+    CREATE TYPE artifact_visibility_enum AS ENUM (
+        'public',
+        'private'
+    );
+EXCEPTION
+    WHEN duplicate_object THEN null;
+END $$;
+
+COMMENT ON TYPE artifact_visibility_enum IS 'Visibility of an artifact (public = viewable by all users, private = scoped by owner)';
+
 -- PackEnvironmentStatus enum
 DO $$ BEGIN
@@ -143,6 +143,7 @@ CREATE TABLE artifact (
     scope owner_type_enum NOT NULL DEFAULT 'system',
     owner TEXT NOT NULL DEFAULT '',
     type artifact_type_enum NOT NULL,
+    visibility artifact_visibility_enum NOT NULL DEFAULT 'private',
     retention_policy artifact_retention_enum NOT NULL DEFAULT 'versions',
     retention_limit INTEGER NOT NULL DEFAULT 1,
     created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
@@ -157,6 +158,8 @@ CREATE INDEX idx_artifact_type ON artifact(type);
 CREATE INDEX idx_artifact_created ON artifact(created DESC);
 CREATE INDEX idx_artifact_scope_owner ON artifact(scope, owner);
 CREATE INDEX idx_artifact_type_created ON artifact(type, created DESC);
+CREATE INDEX idx_artifact_visibility ON artifact(visibility);
+CREATE INDEX idx_artifact_visibility_scope ON artifact(visibility, scope, owner);
 
 -- Trigger
 CREATE TRIGGER update_artifact_updated
@@ -170,6 +173,7 @@ COMMENT ON COLUMN artifact.ref IS 'Artifact reference/path';
 COMMENT ON COLUMN artifact.scope IS 'Owner type (system, identity, pack, action, sensor)';
 COMMENT ON COLUMN artifact.owner IS 'Owner identifier';
 COMMENT ON COLUMN artifact.type IS 'Artifact type (file, url, progress, etc.)';
+COMMENT ON COLUMN artifact.visibility IS 'Visibility level: public (all users) or private (scoped by scope/owner)';
 COMMENT ON COLUMN artifact.retention_policy IS 'How to retain artifacts (versions, days, hours, minutes)';
 COMMENT ON COLUMN artifact.retention_limit IS 'Numeric limit for retention policy';
@@ -329,3 +329,98 @@ CREATE TRIGGER workflow_execution_status_changed_notify
     EXECUTE FUNCTION notify_workflow_execution_status_changed();
 
 COMMENT ON FUNCTION notify_workflow_execution_status_changed() IS 'Sends workflow execution status change notifications via PostgreSQL LISTEN/NOTIFY';
+
+-- ============================================================================
+-- ARTIFACT NOTIFICATIONS
+-- ============================================================================
+
+-- Function to notify on artifact creation
+CREATE OR REPLACE FUNCTION notify_artifact_created()
+RETURNS TRIGGER AS $$
+DECLARE
+    payload JSON;
+BEGIN
+    payload := json_build_object(
+        'entity_type', 'artifact',
+        'entity_id', NEW.id,
+        'id', NEW.id,
+        'ref', NEW.ref,
+        'type', NEW.type,
+        'visibility', NEW.visibility,
+        'name', NEW.name,
+        'execution', NEW.execution,
+        'scope', NEW.scope,
+        'owner', NEW.owner,
+        'content_type', NEW.content_type,
+        'size_bytes', NEW.size_bytes,
+        'created', NEW.created
+    );
+
+    PERFORM pg_notify('artifact_created', payload::text);
+
+    RETURN NEW;
+END;
+$$ LANGUAGE plpgsql;
+
+-- Trigger on artifact table for creation
+CREATE TRIGGER artifact_created_notify
+    AFTER INSERT ON artifact
+    FOR EACH ROW
+    EXECUTE FUNCTION notify_artifact_created();
+
+COMMENT ON FUNCTION notify_artifact_created() IS 'Sends artifact creation notifications via PostgreSQL LISTEN/NOTIFY';
+
+-- Function to notify on artifact updates (progress appends, data changes)
+CREATE OR REPLACE FUNCTION notify_artifact_updated()
+RETURNS TRIGGER AS $$
+DECLARE
+    payload JSON;
+    latest_percent DOUBLE PRECISION;
+    latest_message TEXT;
+    entry_count INTEGER;
+BEGIN
+    -- Only notify on actual changes
+    IF TG_OP = 'UPDATE' THEN
+        -- Extract progress summary from data array if this is a progress artifact
+        IF NEW.type = 'progress' AND NEW.data IS NOT NULL AND jsonb_typeof(NEW.data) = 'array' THEN
+            entry_count := jsonb_array_length(NEW.data);
+            IF entry_count > 0 THEN
+                latest_percent := (NEW.data -> (entry_count - 1) ->> 'percent')::DOUBLE PRECISION;
+                latest_message := NEW.data -> (entry_count - 1) ->> 'message';
+            END IF;
+        END IF;
+
+        payload := json_build_object(
+            'entity_type', 'artifact',
+            'entity_id', NEW.id,
+            'id', NEW.id,
+            'ref', NEW.ref,
+            'type', NEW.type,
+            'visibility', NEW.visibility,
+            'name', NEW.name,
+            'execution', NEW.execution,
+            'scope', NEW.scope,
+            'owner', NEW.owner,
+            'content_type', NEW.content_type,
+            'size_bytes', NEW.size_bytes,
+            'progress_percent', latest_percent,
+            'progress_message', latest_message,
+            'progress_entries', entry_count,
+            'created', NEW.created,
+            'updated', NEW.updated
+        );
+
+        PERFORM pg_notify('artifact_updated', payload::text);
+    END IF;
+
+    RETURN NEW;
+END;
+$$ LANGUAGE plpgsql;
+
+-- Trigger on artifact table for updates
+CREATE TRIGGER artifact_updated_notify
+    AFTER UPDATE ON artifact
+    FOR EACH ROW
+    EXECUTE FUNCTION notify_artifact_updated();
+
+COMMENT ON FUNCTION notify_artifact_updated() IS 'Sends artifact update notifications via PostgreSQL LISTEN/NOTIFY (includes progress summary for progress-type artifacts)';
@@ -1,7 +1,7 @@
 -- Migration: Artifact Content System
 -- Description: Enhances the artifact table with content fields (name, description,
---              content_type, size_bytes, execution link, structured data) and creates
---              the artifact_version table for versioned file/data storage.
+--              content_type, size_bytes, execution link, structured data, visibility)
+--              and creates the artifact_version table for versioned file/data storage.
 --
 -- The artifact table now serves as the "header" for a logical artifact,
 -- while artifact_version rows hold the actual immutable content snapshots.
@@ -33,10 +33,19 @@ ALTER TABLE artifact ADD COLUMN IF NOT EXISTS execution BIGINT;
 -- Progress artifacts append entries here; file artifacts may store parsed metadata.
 ALTER TABLE artifact ADD COLUMN IF NOT EXISTS data JSONB;
+
+-- Visibility: public artifacts are viewable by all authenticated users;
+-- private artifacts are restricted based on the artifact's scope/owner.
+-- The scope (identity, action, pack, etc.) + owner fields define who can access
+-- a private artifact. Full RBAC enforcement is deferred — for now the column
+-- enables filtering and is available for future permission checks.
+ALTER TABLE artifact ADD COLUMN IF NOT EXISTS visibility artifact_visibility_enum NOT NULL DEFAULT 'private';
+
 -- New indexes for the added columns
 CREATE INDEX IF NOT EXISTS idx_artifact_execution ON artifact(execution);
 CREATE INDEX IF NOT EXISTS idx_artifact_name ON artifact(name);
 CREATE INDEX IF NOT EXISTS idx_artifact_execution_type ON artifact(execution, type);
+CREATE INDEX IF NOT EXISTS idx_artifact_visibility ON artifact(visibility);
+CREATE INDEX IF NOT EXISTS idx_artifact_visibility_scope ON artifact(visibility, scope, owner);
+
 -- Comments for new columns
 COMMENT ON COLUMN artifact.name IS 'Human-readable artifact name';
@@ -45,6 +54,7 @@ COMMENT ON COLUMN artifact.content_type IS 'MIME content type (e.g. application/
|
|||||||
COMMENT ON COLUMN artifact.size_bytes IS 'Size of latest version content in bytes';
|
COMMENT ON COLUMN artifact.size_bytes IS 'Size of latest version content in bytes';
|
||||||
COMMENT ON COLUMN artifact.execution IS 'Execution that produced this artifact (no FK — execution is a hypertable)';
|
COMMENT ON COLUMN artifact.execution IS 'Execution that produced this artifact (no FK — execution is a hypertable)';
|
||||||
COMMENT ON COLUMN artifact.data IS 'Structured JSONB data for progress artifacts or metadata';
|
COMMENT ON COLUMN artifact.data IS 'Structured JSONB data for progress artifacts or metadata';
|
||||||
|
COMMENT ON COLUMN artifact.visibility IS 'Access visibility: public (all users) or private (scope/owner-restricted)';
|
||||||
|
|
||||||
|
|
||||||
-- ============================================================================
|
-- ============================================================================
|
||||||
@@ -69,13 +79,18 @@ CREATE TABLE artifact_version (
|
|||||||
-- Size of the content in bytes
|
-- Size of the content in bytes
|
||||||
size_bytes BIGINT,
|
size_bytes BIGINT,
|
||||||
|
|
||||||
-- Binary content (file uploads). Use BYTEA for simplicity; large files
|
-- Binary content (file uploads, DB-stored). NULL for file-backed versions.
|
||||||
-- should use external object storage in production (future enhancement).
|
|
||||||
content BYTEA,
|
content BYTEA,
|
||||||
|
|
||||||
-- Structured content (JSON payloads, parsed results, etc.)
|
-- Structured content (JSON payloads, parsed results, etc.)
|
||||||
content_json JSONB,
|
content_json JSONB,
|
||||||
|
|
||||||
|
-- Relative path from artifacts_dir root for disk-stored content.
|
||||||
|
-- When set, content BYTEA is NULL — file lives on shared volume.
|
||||||
|
-- Pattern: {ref_slug}/v{version}.{ext}
|
||||||
|
-- e.g., "mypack/build_log/v1.txt"
|
||||||
|
file_path TEXT,
|
||||||
|
|
||||||
-- Free-form metadata about this version (e.g. commit hash, build number)
|
-- Free-form metadata about this version (e.g. commit hash, build number)
|
||||||
meta JSONB,
|
meta JSONB,
|
||||||
|
|
||||||
@@ -94,6 +109,7 @@ ALTER TABLE artifact_version
|
|||||||
CREATE INDEX idx_artifact_version_artifact ON artifact_version(artifact);
|
CREATE INDEX idx_artifact_version_artifact ON artifact_version(artifact);
|
||||||
CREATE INDEX idx_artifact_version_artifact_version ON artifact_version(artifact, version DESC);
|
CREATE INDEX idx_artifact_version_artifact_version ON artifact_version(artifact, version DESC);
|
||||||
CREATE INDEX idx_artifact_version_created ON artifact_version(created DESC);
|
CREATE INDEX idx_artifact_version_created ON artifact_version(created DESC);
|
||||||
|
CREATE INDEX idx_artifact_version_file_path ON artifact_version(file_path) WHERE file_path IS NOT NULL;
|
||||||
|
|
||||||
-- Comments
|
-- Comments
|
||||||
COMMENT ON TABLE artifact_version IS 'Immutable content snapshots for artifacts (file uploads, structured data)';
|
COMMENT ON TABLE artifact_version IS 'Immutable content snapshots for artifacts (file uploads, structured data)';
|
||||||
@@ -105,6 +121,7 @@ COMMENT ON COLUMN artifact_version.content IS 'Binary content (file data)';
|
|||||||
COMMENT ON COLUMN artifact_version.content_json IS 'Structured JSON content';
|
COMMENT ON COLUMN artifact_version.content_json IS 'Structured JSON content';
|
||||||
COMMENT ON COLUMN artifact_version.meta IS 'Free-form metadata about this version';
|
COMMENT ON COLUMN artifact_version.meta IS 'Free-form metadata about this version';
|
||||||
COMMENT ON COLUMN artifact_version.created_by IS 'Who created this version (identity ref, action ref, system)';
|
COMMENT ON COLUMN artifact_version.created_by IS 'Who created this version (identity ref, action ref, system)';
|
||||||
|
COMMENT ON COLUMN artifact_version.file_path IS 'Relative path from artifacts_dir root for disk-stored content. When set, content BYTEA is NULL — file lives on shared volume.';
|
||||||
|
|
||||||
|
|
||||||
-- ============================================================================
|
-- ============================================================================
|
||||||
|
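The migration's comments describe progress artifacts appending entries to the `data` JSONB array, with the `artifact_updated` trigger deriving a summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry. A minimal TypeScript sketch of that extraction, assuming the `percent`/`message` entry fields used elsewhere in this commit; the function name is ours, not part of the migration:

```typescript
// Hypothetical mirror of the trigger's progress-summary derivation:
// take the last entry of the `data` JSONB array, if any.
interface ProgressEntry {
  percent?: number;
  message?: string;
  timestamp?: string;
}

interface ProgressSummary {
  progress_percent: number | null;
  progress_message: string | null;
  progress_entries: number;
}

function summarizeProgress(data: unknown): ProgressSummary {
  const entries = Array.isArray(data) ? (data as ProgressEntry[]) : [];
  const last = entries.length > 0 ? entries[entries.length - 1] : undefined;
  return {
    progress_percent: typeof last?.percent === "number" ? last.percent : null,
    progress_message: typeof last?.message === "string" ? last.message : null,
    progress_entries: entries.length,
  };
}
```

Keeping the summary in the NOTIFY payload is what lets the UI render a progress bar without a follow-up fetch of the full `data` array.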
@@ -26,6 +26,10 @@ const ExecutionsPage = lazy(() => import("@/pages/executions/ExecutionsPage"));
 const ExecutionDetailPage = lazy(
   () => import("@/pages/executions/ExecutionDetailPage"),
 );
+const ArtifactsPage = lazy(() => import("@/pages/artifacts/ArtifactsPage"));
+const ArtifactDetailPage = lazy(
+  () => import("@/pages/artifacts/ArtifactDetailPage"),
+);
 const EventsPage = lazy(() => import("@/pages/events/EventsPage"));
 const EventDetailPage = lazy(() => import("@/pages/events/EventDetailPage"));
 const EnforcementsPage = lazy(
@@ -99,6 +103,11 @@ function App() {
   path="executions/:id"
   element={<ExecutionDetailPage />}
 />
+<Route path="artifacts" element={<ArtifactsPage />} />
+<Route
+  path="artifacts/:id"
+  element={<ArtifactDetailPage />}
+/>
 <Route path="events" element={<EventsPage />} />
 <Route path="events/:id" element={<EventDetailPage />} />
 <Route path="enforcements" element={<EnforcementsPage />} />
@@ -21,6 +21,7 @@ import {
   type ArtifactSummary,
   type ArtifactType,
 } from "@/hooks/useArtifacts";
+import { useArtifactStream } from "@/hooks/useArtifactStream";
 import { OpenAPI } from "@/api/core/OpenAPI";
 
 interface ExecutionArtifactsPanelProps {
@@ -349,6 +350,11 @@ export default function ExecutionArtifactsPanel({
     null,
   );
 
+  // Subscribe to real-time artifact notifications for this execution.
+  // WebSocket-driven cache invalidation replaces most of the polling need,
+  // but we keep polling as a fallback (staleTime/refetchInterval in the hook).
+  useArtifactStream({ executionId, enabled: isRunning });
+
   const { data, isLoading, error } = useExecutionArtifacts(
     executionId,
     isRunning,
web/src/components/executions/ExecutionProgressBar.tsx (new file, +109)
@@ -0,0 +1,109 @@
+import { useMemo } from "react";
+import { BarChart3 } from "lucide-react";
+import {
+  useExecutionArtifacts,
+  type ArtifactSummary,
+} from "@/hooks/useArtifacts";
+import { useArtifactStream, useArtifactProgress } from "@/hooks/useArtifactStream";
+
+interface ExecutionProgressBarProps {
+  executionId: number;
+  /** Whether the execution is still running (enables real-time updates) */
+  isRunning: boolean;
+}
+
+/**
+ * Inline progress bar for executions that have progress-type artifacts.
+ *
+ * Combines two data sources for responsiveness:
+ * 1. **Polling**: `useExecutionArtifacts` fetches the artifact list periodically
+ *    so we can detect when a progress artifact first appears and read its initial state.
+ * 2. **WebSocket**: `useArtifactStream` subscribes to real-time `artifact_updated`
+ *    notifications, which include the latest `progress_percent` and `progress_message`
+ *    extracted by the database trigger — providing instant updates between polls.
+ *
+ * The WebSocket-pushed summary takes precedence when available (it's newer), with
+ * the polled data as a fallback for the initial render before any WS message arrives.
+ *
+ * Renders nothing if no progress artifact exists for this execution.
+ */
+export default function ExecutionProgressBar({
+  executionId,
+  isRunning,
+}: ExecutionProgressBarProps) {
+  // Subscribe to real-time artifact updates for this execution
+  useArtifactStream({ executionId, enabled: isRunning });
+
+  // Read the latest progress pushed via WebSocket (no API call)
+  const wsSummary = useArtifactProgress(executionId);
+
+  // Poll-based artifact list (fallback + initial detection)
+  const { data } = useExecutionArtifacts(
+    executionId,
+    isRunning,
+  );
+
+  // Find progress artifacts from the polled data
+  const progressArtifact = useMemo<ArtifactSummary | null>(() => {
+    const artifacts = data?.data ?? [];
+    return artifacts.find((a) => a.type === "progress") ?? null;
+  }, [data]);
+
+  // If there's no progress artifact at all, render nothing
+  if (!progressArtifact && !wsSummary) {
+    return null;
+  }
+
+  // Prefer the WS-pushed summary (more current), fall back to indicating
+  // that a progress artifact exists but we haven't received detail yet.
+  const percent = wsSummary?.percent ?? null;
+  const message = wsSummary?.message ?? null;
+  const name = wsSummary?.name ?? progressArtifact?.name ?? "Progress";
+
+  // If we have a progress artifact but no percent yet (first poll, no WS yet),
+  // show an indeterminate state
+  const hasPercent = percent != null;
+  const clampedPercent = hasPercent ? Math.min(Math.max(percent, 0), 100) : 0;
+  const isComplete = hasPercent && clampedPercent >= 100;
+
+  return (
+    <div className="mt-4 pt-4 border-t border-gray-100">
+      <div className="flex items-center gap-2 mb-1.5">
+        <BarChart3 className="h-4 w-4 text-amber-500 flex-shrink-0" />
+        <span className="text-sm font-medium text-gray-700 truncate">
+          {name}
+        </span>
+        {hasPercent && (
+          <span className="text-xs font-mono text-gray-500 ml-auto flex-shrink-0">
+            {Math.round(clampedPercent)}%
+          </span>
+        )}
+      </div>
+
+      {/* Progress bar */}
+      <div className="w-full bg-gray-200 rounded-full h-2">
+        {hasPercent ? (
+          <div
+            className={`h-2 rounded-full transition-all duration-500 ease-out ${
+              isComplete
+                ? "bg-green-500"
+                : "bg-amber-500"
+            }`}
+            style={{ width: `${clampedPercent}%` }}
+          />
+        ) : (
+          /* Indeterminate shimmer when we know a progress artifact exists
+             but haven't received a percent value yet */
+          <div className="h-2 rounded-full bg-amber-300 animate-pulse w-full opacity-40" />
+        )}
+      </div>
+
+      {/* Message */}
+      {message && (
+        <p className="text-xs text-gray-500 mt-1 truncate" title={message}>
+          {message}
+        </p>
+      )}
+    </div>
+  );
+}
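The clamp/indeterminate/complete rules in this component reduce to a small pure function. A sketch for clarity only — the helper name is ours, not part of the commit:

```typescript
// Mirrors ExecutionProgressBar's display logic: a missing percent means
// indeterminate; otherwise clamp to [0, 100] and treat >= 100 as complete.
function progressDisplayState(percent: number | null) {
  const hasPercent = percent != null;
  const clamped = percent == null ? 0 : Math.min(Math.max(percent, 0), 100);
  const isComplete = hasPercent && clamped >= 100;
  return { hasPercent, clamped, isComplete };
}
```

Clamping matters because WS-pushed percentages come straight from worker-written JSONB entries, which nothing upstream guarantees stay within 0..100.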
@@ -16,6 +16,9 @@ import {
   SquareAsterisk,
   KeyRound,
   Home,
+  Paperclip,
+  FolderOpenDot,
+  FolderArchive,
 } from "lucide-react";
 
 // Color mappings for navigation items — defined outside component for stable reference
@@ -113,6 +116,12 @@ const navSections = [
   {
     items: [
       { to: "/keys", label: "Keys & Secrets", icon: KeyRound, color: "gray" },
+      {
+        to: "/artifacts",
+        label: "Artifacts",
+        icon: FolderArchive,
+        color: "gray",
+      },
       {
         to: "/packs",
         label: "Pack Management",
web/src/hooks/useArtifactStream.ts (new file, +136)
@@ -0,0 +1,136 @@
+import { useCallback } from "react";
+import { useQueryClient } from "@tanstack/react-query";
+import { useEntityNotifications } from "@/contexts/WebSocketContext";
+
+interface UseArtifactStreamOptions {
+  /**
+   * Optional execution ID to filter artifact updates for a specific execution.
+   * If not provided, receives updates for all artifacts.
+   */
+  executionId?: number;
+
+  /**
+   * Whether the stream should be active.
+   * Defaults to true.
+   */
+  enabled?: boolean;
+}
+
+/**
+ * Hook to subscribe to real-time artifact updates via WebSocket.
+ *
+ * Listens to `artifact_created` and `artifact_updated` notifications from the
+ * PostgreSQL LISTEN/NOTIFY system, and invalidates relevant React Query caches
+ * so that artifact lists and detail views update in real time.
+ *
+ * For progress-type artifacts, the notification payload includes a progress
+ * summary (`progress_percent`, `progress_message`, `progress_entries`) extracted
+ * by the database trigger so that the UI can update inline progress indicators
+ * without a separate API call.
+ *
+ * @example
+ * ```tsx
+ * // Listen to all artifact updates
+ * useArtifactStream();
+ *
+ * // Listen to artifacts for a specific execution
+ * useArtifactStream({ executionId: 123 });
+ * ```
+ */
+export function useArtifactStream(options: UseArtifactStreamOptions = {}) {
+  const { executionId, enabled = true } = options;
+  const queryClient = useQueryClient();
+
+  const handleNotification = useCallback(
+    (notification: any) => {
+      const payload = notification.payload as any;
+
+      // If we're filtering by execution ID, only process matching artifacts
+      if (executionId && payload?.execution !== executionId) {
+        return;
+      }
+
+      const artifactId = notification.entity_id;
+      const artifactExecution = payload?.execution;
+
+      // Invalidate the specific artifact query (used by ProgressDetail, TextFileDetail)
+      queryClient.invalidateQueries({
+        queryKey: ["artifacts", artifactId],
+      });
+
+      // Invalidate the execution artifacts list query
+      if (artifactExecution) {
+        queryClient.invalidateQueries({
+          queryKey: ["artifacts", "execution", artifactExecution],
+        });
+      }
+
+      // For progress artifacts, also update cached data directly with the
+      // summary from the notification payload to provide instant feedback
+      // before the invalidation refetch completes.
+      if (payload?.type === "progress" && payload?.progress_percent != null) {
+        queryClient.setQueryData(
+          ["artifact_progress", artifactExecution],
+          (old: any) => ({
+            ...old,
+            artifactId,
+            name: payload.name,
+            percent: payload.progress_percent,
+            message: payload.progress_message ?? null,
+            entries: payload.progress_entries ?? 0,
+            timestamp: notification.timestamp,
+          }),
+        );
+      }
+    },
+    [executionId, queryClient],
+  );
+
+  const { connected } = useEntityNotifications(
+    "artifact",
+    handleNotification,
+    enabled,
+  );
+
+  return {
+    isConnected: connected,
+  };
+}
+
+/**
+ * Lightweight progress summary extracted from artifact WebSocket notifications.
+ * Available immediately via the `artifact_progress` query key without an API call.
+ */
+export interface ArtifactProgressSummary {
+  artifactId: number;
+  name: string | null;
+  percent: number;
+  message: string | null;
+  entries: number;
+  timestamp: string;
+}
+
+/**
+ * Hook to read the latest progress summary pushed by WebSocket notifications.
+ *
+ * This does NOT make any API calls — it only reads from the React Query cache
+ * which is populated by `useArtifactStream`. Returns `null` if no progress
+ * notification has been received yet for the given execution.
+ *
+ * For the initial load (before any WebSocket message arrives), the component
+ * should fall back to the polling-based `useExecutionArtifacts` data.
+ */
+export function useArtifactProgress(
+  executionId: number | undefined,
+): ArtifactProgressSummary | null {
+  const queryClient = useQueryClient();
+
+  if (!executionId) return null;
+
+  const data = queryClient.getQueryData<ArtifactProgressSummary>([
+    "artifact_progress",
+    executionId,
+  ]);
+
+  return data ?? null;
+}
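The payload-to-cache-entry mapping the hook performs when it calls `setQueryData` on `["artifact_progress", executionId]` can be isolated as a pure function, which makes the guard conditions easy to test. A sketch under the same payload shape the hook reads; the function name is ours:

```typescript
interface ArtifactProgressSummary {
  artifactId: number;
  name: string | null;
  percent: number;
  message: string | null;
  entries: number;
  timestamp: string;
}

// Build the cache entry from an artifact_updated notification. Returns null
// unless the payload is a progress artifact carrying a percent — the same
// guard useArtifactStream applies before writing to the cache.
function toProgressSummary(notification: {
  entity_id: number;
  timestamp: string;
  payload?: {
    type?: string;
    name?: string | null;
    progress_percent?: number | null;
    progress_message?: string | null;
    progress_entries?: number | null;
  };
}): ArtifactProgressSummary | null {
  const p = notification.payload;
  if (p?.type !== "progress" || p.progress_percent == null) return null;
  return {
    artifactId: notification.entity_id,
    name: p.name ?? null,
    percent: p.progress_percent,
    message: p.progress_message ?? null,
    entries: p.progress_entries ?? 0,
    timestamp: notification.timestamp,
  };
}
```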
@@ -1,4 +1,4 @@
-import { useQuery } from "@tanstack/react-query";
+import { useQuery, keepPreviousData } from "@tanstack/react-query";
 import { OpenAPI } from "@/api/core/OpenAPI";
 import { request as __request } from "@/api/core/request";
 
@@ -12,6 +12,8 @@ export type ArtifactType =
   | "progress"
   | "url";
 
+export type ArtifactVisibility = "public" | "private";
+
 export type OwnerType = "system" | "pack" | "action" | "sensor" | "rule";
 
 export type RetentionPolicyType = "versions" | "days" | "hours" | "minutes";
@@ -20,6 +22,7 @@ export interface ArtifactSummary {
   id: number;
   ref: string;
   type: ArtifactType;
+  visibility: ArtifactVisibility;
   name: string | null;
   content_type: string | null;
   size_bytes: number | null;
@@ -36,6 +39,7 @@ export interface ArtifactResponse {
   scope: OwnerType;
   owner: string;
   type: ArtifactType;
+  visibility: ArtifactVisibility;
   retention_policy: RetentionPolicyType;
   retention_limit: number;
   name: string | null;
@@ -57,6 +61,70 @@ export interface ArtifactVersionSummary {
   created: string;
 }
+
+// ============================================================================
+// Search / List params
+// ============================================================================
+
+export interface ArtifactsListParams {
+  page?: number;
+  perPage?: number;
+  scope?: OwnerType;
+  owner?: string;
+  type?: ArtifactType;
+  visibility?: ArtifactVisibility;
+  execution?: number;
+  name?: string;
+}
+
+// ============================================================================
+// Paginated list response shape
+// ============================================================================
+
+export interface PaginatedArtifacts {
+  data: ArtifactSummary[];
+  pagination: {
+    page: number;
+    page_size: number;
+    total_items: number;
+    total_pages: number;
+  };
+}
+
+// ============================================================================
+// Hooks
+// ============================================================================
+
+/**
+ * Fetch a paginated, filterable list of all artifacts.
+ *
+ * Uses GET /api/v1/artifacts with query params for server-side filtering.
+ */
+export function useArtifactsList(params: ArtifactsListParams = {}) {
+  return useQuery({
+    queryKey: ["artifacts", "list", params],
+    queryFn: async () => {
+      const query: Record<string, string> = {};
+      if (params.page) query.page = String(params.page);
+      if (params.perPage) query.per_page = String(params.perPage);
+      if (params.scope) query.scope = params.scope;
+      if (params.owner) query.owner = params.owner;
+      if (params.type) query.type = params.type;
+      if (params.visibility) query.visibility = params.visibility;
+      if (params.execution) query.execution = String(params.execution);
+      if (params.name) query.name = params.name;
+
+      const response = await __request<PaginatedArtifacts>(OpenAPI, {
+        method: "GET",
+        url: "/api/v1/artifacts",
+        query,
+      });
+      return response;
+    },
+    staleTime: 10000,
+    placeholderData: keepPreviousData,
+  });
+}
+
 /**
  * Fetch all artifacts for a given execution ID.
  *
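The query-string assembly inside `useArtifactsList` is the part worth unit-testing: only set params are included, numbers are stringified, and camelCase `perPage` maps to the API's `per_page`. A standalone sketch of that logic (the helper name is ours):

```typescript
interface ArtifactsListParams {
  page?: number;
  perPage?: number;
  scope?: string;
  owner?: string;
  type?: string;
  visibility?: string;
  execution?: number;
  name?: string;
}

// Build the query record the same way useArtifactsList's queryFn does:
// skip unset params, stringify numbers, rename perPage -> per_page.
function buildArtifactsQuery(params: ArtifactsListParams): Record<string, string> {
  const query: Record<string, string> = {};
  if (params.page) query.page = String(params.page);
  if (params.perPage) query.per_page = String(params.perPage);
  if (params.scope) query.scope = params.scope;
  if (params.owner) query.owner = params.owner;
  if (params.type) query.type = params.type;
  if (params.visibility) query.visibility = params.visibility;
  if (params.execution) query.execution = String(params.execution);
  if (params.name) query.name = params.name;
  return query;
}
```

Note that with truthiness checks like these, `page: 0` or an empty-string `name` would be silently dropped, which is harmless here since pages are 1-based.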
|||||||
705
web/src/pages/artifacts/ArtifactDetailPage.tsx
Normal file
705
web/src/pages/artifacts/ArtifactDetailPage.tsx
Normal file
@@ -0,0 +1,705 @@
|
|||||||
|
import { useState, useMemo, useCallback, useEffect } from "react";
|
||||||
|
import { useParams, Link } from "react-router-dom";
|
||||||
|
import {
|
||||||
|
ArrowLeft,
|
||||||
|
Download,
|
||||||
|
Eye,
|
||||||
|
EyeOff,
|
||||||
|
Loader2,
|
||||||
|
FileText,
|
||||||
|
Clock,
|
||||||
|
Hash,
|
||||||
|
X,
|
||||||
|
} from "lucide-react";
|
||||||
|
import {
|
||||||
|
useArtifact,
|
||||||
|
useArtifactVersions,
|
||||||
|
type ArtifactResponse,
|
||||||
|
type ArtifactVersionSummary,
|
||||||
|
} from "@/hooks/useArtifacts";
|
||||||
|
import { useArtifactStream } from "@/hooks/useArtifactStream";
|
||||||
|
import { OpenAPI } from "@/api/core/OpenAPI";
|
||||||
|
import {
|
||||||
|
getArtifactTypeIcon,
|
||||||
|
getArtifactTypeBadge,
|
||||||
|
getScopeBadge,
|
||||||
|
formatBytes,
|
||||||
|
formatDate,
|
||||||
|
downloadArtifact,
|
||||||
|
isDownloadable,
|
||||||
|
} from "./artifactHelpers";
|
||||||
|
|
||||||
|
// ============================================================================
|
||||||
|
// Text content viewer
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
function TextContentViewer({
|
||||||
|
artifactId,
|
||||||
|
versionId,
|
||||||
|
label,
|
||||||
|
}: {
|
||||||
|
artifactId: number;
|
||||||
|
versionId?: number;
|
||||||
|
label: string;
|
||||||
|
}) {
|
||||||
|
// Track a fetch key so that when deps change we re-derive initial state
|
||||||
|
// instead of calling setState synchronously inside useEffect.
|
||||||
|
const fetchKey = `${artifactId}:${versionId ?? "latest"}`;
|
||||||
|
const [settledKey, setSettledKey] = useState<string | null>(null);
|
||||||
|
const [content, setContent] = useState<string | null>(null);
|
||||||
|
const [loadError, setLoadError] = useState<string | null>(null);
|
||||||
|
|
||||||
|
const isLoading = settledKey !== fetchKey;
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
let cancelled = false;
|
||||||
|
|
||||||
|
const token = localStorage.getItem("access_token");
|
||||||
|
const url = versionId
|
||||||
|
? `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/versions/${versionId}/download`
|
||||||
|
: `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;
|
||||||
|
|
||||||
|
fetch(url, { headers: { Authorization: `Bearer ${token}` } })
|
||||||
|
.then(async (response) => {
|
||||||
|
if (cancelled) return;
|
||||||
|
if (!response.ok) {
|
||||||
|
setLoadError(`HTTP ${response.status}: ${response.statusText}`);
|
||||||
|
setContent(null);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
const text = await response.text();
|
||||||
|
setContent(text);
|
||||||
|
setLoadError(null);
|
||||||
|
})
|
||||||
|
.catch((e) => {
|
||||||
|
if (!cancelled) {
|
||||||
|
setLoadError(e instanceof Error ? e.message : "Unknown error");
|
||||||
|
setContent(null);
|
||||||
|
}
|
||||||
|
})
|
||||||
|
.finally(() => {
|
||||||
|
if (!cancelled) setSettledKey(fetchKey);
|
||||||
|
});
|
||||||
|
|
||||||
|
return () => {
|
||||||
|
cancelled = true;
|
||||||
|
};
|
||||||
|
}, [artifactId, versionId, fetchKey]);
|
||||||
|
|
||||||
|
if (isLoading) {
|
||||||
|
return (
|
||||||
|
<div className="flex items-center gap-2 py-4 text-sm text-gray-500">
|
||||||
|
<Loader2 className="h-4 w-4 animate-spin" />
|
||||||
|
Loading {label}...
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (loadError) {
|
||||||
|
return <div className="py-4 text-sm text-red-600">Error: {loadError}</div>;
|
||||||
|
}
|
||||||
|
|
||||||
|
return (
|
||||||
|
<pre className="max-h-96 overflow-y-auto bg-gray-900 text-gray-100 rounded-lg p-4 text-xs font-mono whitespace-pre-wrap break-all">
|
||||||
|
{content || <span className="text-gray-500 italic">(empty)</span>}
|
||||||
|
</pre>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ============================================================================
|
||||||
|
// Progress viewer
|
||||||
|
// ============================================================================
|
||||||
|
|
||||||
|
function ProgressViewer({ data }: { data: unknown }) {
|
||||||
|
const entries = useMemo(() => {
|
||||||
|
if (!data || !Array.isArray(data)) return [];
|
||||||
|
return data as Array<Record<string, unknown>>;
|
||||||
|
}, [data]);
|
||||||
|
|
||||||
|
const latestEntry = entries.length > 0 ? entries[entries.length - 1] : null;
|
||||||
|
const latestPercent =
|
||||||
|
latestEntry && typeof latestEntry.percent === "number"
|
||||||
|
? latestEntry.percent
|
||||||
|
: null;
|
||||||
|
|
||||||
|
if (entries.length === 0) {
|
||||||
|
return (
|
||||||
|
<p className="text-sm text-gray-500 italic">No progress entries yet.</p>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
{latestPercent != null && (
|
||||||
|
<div className="mb-4">
|
||||||
|
<div className="flex items-center justify-between text-sm text-gray-600 mb-1">
|
||||||
|
<span>
|
||||||
|
{latestEntry?.message
|
||||||
|
? String(latestEntry.message)
|
||||||
|
: `${latestPercent}%`}
|
||||||
|
</span>
|
||||||
|
<span className="font-mono">{latestPercent}%</span>
|
||||||
|
</div>
|
||||||
|
<div className="w-full bg-gray-200 rounded-full h-3">
|
||||||
|
<div
|
||||||
|
className="bg-amber-500 h-3 rounded-full transition-all duration-300"
|
||||||
|
style={{ width: `${Math.min(latestPercent, 100)}%` }}
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
<div className="max-h-64 overflow-y-auto">
|
||||||
|
<table className="w-full text-sm">
|
||||||
|
<thead>
|
||||||
|
<tr className="text-left text-gray-500 border-b border-gray-200">
|
||||||
|
<th className="pb-2 pr-3">#</th>
|
||||||
|
<th className="pb-2 pr-3">%</th>
|
||||||
|
<th className="pb-2 pr-3">Message</th>
|
||||||
|
<th className="pb-2">Time</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody>
|
||||||
|
{entries.map((entry, idx) => (
|
||||||
|
<tr key={idx} className="border-b border-gray-100 last:border-0">
|
||||||
|
<td className="py-1.5 pr-3 text-gray-400 font-mono">
|
||||||
|
{typeof entry.iteration === "number"
|
||||||
|
? entry.iteration
|
||||||
|
: idx + 1}
|
||||||
|
</td>
|
||||||
|
<td className="py-1.5 pr-3 font-mono">
|
||||||
|
{typeof entry.percent === "number"
|
||||||
|
? `${entry.percent}%`
|
||||||
|
: "\u2014"}
|
||||||
|
</td>
|
||||||
|
<td className="py-1.5 pr-3 text-gray-700 truncate max-w-[300px]">
|
||||||
|
{entry.message ? String(entry.message) : "\u2014"}
|
||||||
|
</td>
|
||||||
|
<td className="py-1.5 text-gray-400 whitespace-nowrap">
|
||||||
|
{entry.timestamp
|
||||||
|
? new Date(String(entry.timestamp)).toLocaleTimeString()
|
||||||
|
: "\u2014"}
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ============================================================================
// Version row
// ============================================================================

function VersionRow({
  version,
  artifactId,
  artifactRef,
  artifactType,
}: {
  version: ArtifactVersionSummary;
  artifactId: number;
  artifactRef: string;
  artifactType: string;
}) {
  const [showPreview, setShowPreview] = useState(false);
  const canPreview = artifactType === "file_text";
  const canDownload =
    artifactType === "file_text" ||
    artifactType === "file_image" ||
    artifactType === "file_binary" ||
    artifactType === "file_datatable";

  const handleDownload = useCallback(async () => {
    const token = localStorage.getItem("access_token");
    const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/versions/${version.id}/download`;

    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${token}` },
    });

    if (!response.ok) {
      console.error(
        `Download failed: ${response.status} ${response.statusText}`,
      );
      return;
    }

    const disposition = response.headers.get("Content-Disposition");
    let filename = `${artifactRef.replace(/\./g, "_")}_v${version.version}.bin`;
    if (disposition) {
      const match = disposition.match(/filename="?([^"]+)"?/);
      if (match) filename = match[1];
    }

    const blob = await response.blob();
    const blobUrl = URL.createObjectURL(blob);
    const a = document.createElement("a");
    a.href = blobUrl;
    a.download = filename;
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(blobUrl);
  }, [artifactId, artifactRef, version]);

  return (
    <>
      <tr className="hover:bg-gray-50">
        <td className="px-4 py-3 whitespace-nowrap text-sm font-mono text-gray-900">
          v{version.version}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {version.content_type || "\u2014"}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {formatBytes(version.size_bytes)}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {version.created_by || "\u2014"}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-sm text-gray-600">
          {formatDate(version.created)}
        </td>
        <td className="px-4 py-3 whitespace-nowrap text-right">
          <div className="flex items-center justify-end gap-2">
            {canPreview && (
              <button
                onClick={() => setShowPreview(!showPreview)}
                className="text-gray-500 hover:text-blue-600"
                title={showPreview ? "Hide preview" : "Preview content"}
              >
                {showPreview ? (
                  <X className="h-4 w-4" />
                ) : (
                  <FileText className="h-4 w-4" />
                )}
              </button>
            )}
            {canDownload && (
              <button
                onClick={handleDownload}
                className="text-gray-500 hover:text-blue-600"
                title="Download this version"
              >
                <Download className="h-4 w-4" />
              </button>
            )}
          </div>
        </td>
      </tr>
      {showPreview && (
        <tr>
          <td colSpan={6} className="px-4 py-3">
            <TextContentViewer
              artifactId={artifactId}
              versionId={version.id}
              label={`v${version.version}`}
            />
          </td>
        </tr>
      )}
    </>
  );
}

// ============================================================================
// Detail card
// ============================================================================

function MetadataField({
  label,
  children,
}: {
  label: string;
  children: React.ReactNode;
}) {
  return (
    <div>
      <dt className="text-sm font-medium text-gray-500">{label}</dt>
      <dd className="mt-1 text-sm text-gray-900">{children}</dd>
    </div>
  );
}

function ArtifactMetadata({ artifact }: { artifact: ArtifactResponse }) {
  const typeBadge = getArtifactTypeBadge(artifact.type);
  const scopeBadge = getScopeBadge(artifact.scope);

  return (
    <div className="bg-white shadow rounded-lg overflow-hidden">
      <div className="px-6 py-4 border-b border-gray-200">
        <div className="flex items-center justify-between">
          <div className="flex items-center gap-3">
            {getArtifactTypeIcon(artifact.type)}
            <div>
              <h2 className="text-xl font-bold text-gray-900">
                {artifact.name || artifact.ref}
              </h2>
              {artifact.name && (
                <p className="text-sm text-gray-500 font-mono">
                  {artifact.ref}
                </p>
              )}
            </div>
          </div>
          <div className="flex items-center gap-3">
            {isDownloadable(artifact.type) && (
              <button
                onClick={() => downloadArtifact(artifact.id, artifact.ref)}
                className="flex items-center gap-2 px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 transition-colors text-sm"
              >
                <Download className="h-4 w-4" />
                Download Latest
              </button>
            )}
          </div>
        </div>
      </div>

      <div className="px-6 py-5">
        <dl className="grid grid-cols-2 md:grid-cols-4 gap-x-6 gap-y-4">
          <MetadataField label="Type">
            <span
              className={`px-2 py-0.5 inline-flex text-xs leading-5 font-semibold rounded-full ${typeBadge.classes}`}
            >
              {typeBadge.label}
            </span>
          </MetadataField>

          <MetadataField label="Visibility">
            <div className="flex items-center gap-1.5">
              {artifact.visibility === "public" ? (
                <>
                  <Eye className="h-4 w-4 text-green-600" />
                  <span className="text-green-700">Public</span>
                </>
              ) : (
                <>
                  <EyeOff className="h-4 w-4 text-gray-400" />
                  <span className="text-gray-600">Private</span>
                </>
              )}
            </div>
          </MetadataField>

          <MetadataField label="Scope">
            <span
              className={`px-2 py-0.5 inline-flex text-xs leading-5 font-semibold rounded-full ${scopeBadge.classes}`}
            >
              {scopeBadge.label}
            </span>
          </MetadataField>

          <MetadataField label="Owner">
            <span className="font-mono text-sm">
              {artifact.owner || "\u2014"}
            </span>
          </MetadataField>

          <MetadataField label="Execution">
            {artifact.execution ? (
              <Link
                to={`/executions/${artifact.execution}`}
                className="text-blue-600 hover:text-blue-800 font-mono"
              >
                #{artifact.execution}
              </Link>
            ) : (
              <span className="text-gray-400">{"\u2014"}</span>
            )}
          </MetadataField>

          <MetadataField label="Content Type">
            <span className="font-mono text-xs">
              {artifact.content_type || "\u2014"}
            </span>
          </MetadataField>

          <MetadataField label="Size">
            {formatBytes(artifact.size_bytes)}
          </MetadataField>

          <MetadataField label="Retention">
            {artifact.retention_limit} {artifact.retention_policy}
          </MetadataField>

          <MetadataField label="Created">
            <div className="flex items-center gap-1.5">
              <Clock className="h-3.5 w-3.5 text-gray-400" />
              {formatDate(artifact.created)}
            </div>
          </MetadataField>

          <MetadataField label="Updated">
            <div className="flex items-center gap-1.5">
              <Clock className="h-3.5 w-3.5 text-gray-400" />
              {formatDate(artifact.updated)}
            </div>
          </MetadataField>

          {artifact.description && (
            <div className="col-span-2">
              <MetadataField label="Description">
                {artifact.description}
              </MetadataField>
            </div>
          )}
        </dl>
      </div>
    </div>
  );
}

// ============================================================================
// Versions list
// ============================================================================

function ArtifactVersionsList({ artifact }: { artifact: ArtifactResponse }) {
  const { data, isLoading, error } = useArtifactVersions(artifact.id);
  const versions = useMemo(() => data?.data || [], [data]);

  return (
    <div className="bg-white shadow rounded-lg overflow-hidden">
      <div className="px-6 py-4 border-b border-gray-200">
        <div className="flex items-center gap-2">
          <Hash className="h-5 w-5 text-gray-400" />
          <h3 className="text-lg font-semibold text-gray-900">
            Versions
            {versions.length > 0 && (
              <span className="ml-2 text-sm font-normal text-gray-500">
                ({versions.length})
              </span>
            )}
          </h3>
        </div>
      </div>

      {isLoading ? (
        <div className="p-8 text-center">
          <Loader2 className="h-6 w-6 animate-spin mx-auto text-blue-600" />
          <p className="mt-2 text-sm text-gray-600">Loading versions...</p>
        </div>
      ) : error ? (
        <div className="p-8 text-center">
          <p className="text-red-600">Failed to load versions</p>
          <p className="text-sm text-gray-600 mt-1">
            {error instanceof Error ? error.message : "Unknown error"}
          </p>
        </div>
      ) : versions.length === 0 ? (
        <div className="p-8 text-center">
          <p className="text-gray-500">No versions yet</p>
        </div>
      ) : (
        <div className="overflow-x-auto">
          <table className="min-w-full divide-y divide-gray-200">
            <thead className="bg-gray-50">
              <tr>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Version
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Content Type
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Size
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Created By
                </th>
                <th className="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Created
                </th>
                <th className="px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase tracking-wider">
                  Actions
                </th>
              </tr>
            </thead>
            <tbody className="bg-white divide-y divide-gray-200">
              {versions.map((version) => (
                <VersionRow
                  key={version.id}
                  version={version}
                  artifactId={artifact.id}
                  artifactRef={artifact.ref}
                  artifactType={artifact.type}
                />
              ))}
            </tbody>
          </table>
        </div>
      )}
    </div>
  );
}

// ============================================================================
// Inline content preview (progress / text for latest)
// ============================================================================

function InlineContentPreview({ artifact }: { artifact: ArtifactResponse }) {
  if (artifact.type === "progress") {
    return (
      <div className="bg-white shadow rounded-lg overflow-hidden">
        <div className="px-6 py-4 border-b border-gray-200">
          <h3 className="text-lg font-semibold text-gray-900">
            Progress Details
          </h3>
        </div>
        <div className="px-6 py-5">
          <ProgressViewer data={artifact.data} />
        </div>
      </div>
    );
  }

  if (artifact.type === "file_text") {
    return (
      <div className="bg-white shadow rounded-lg overflow-hidden">
        <div className="px-6 py-4 border-b border-gray-200">
          <h3 className="text-lg font-semibold text-gray-900">
            Content Preview (Latest)
          </h3>
        </div>
        <div className="px-6 py-5">
          <TextContentViewer artifactId={artifact.id} label="content" />
        </div>
      </div>
    );
  }

  if (artifact.type === "url" && artifact.data) {
    const urlValue =
      typeof artifact.data === "string"
        ? artifact.data
        : typeof artifact.data === "object" &&
            artifact.data !== null &&
            "url" in (artifact.data as Record<string, unknown>)
          ? String((artifact.data as Record<string, unknown>).url)
          : null;

    if (urlValue) {
      return (
        <div className="bg-white shadow rounded-lg overflow-hidden">
          <div className="px-6 py-4 border-b border-gray-200">
            <h3 className="text-lg font-semibold text-gray-900">URL</h3>
          </div>
          <div className="px-6 py-5">
            <a
              href={urlValue}
              target="_blank"
              rel="noopener noreferrer"
              className="text-blue-600 hover:text-blue-800 underline break-all"
            >
              {urlValue}
            </a>
          </div>
        </div>
      );
    }
  }

  // JSON data preview for other types that have data
  if (artifact.data != null) {
    return (
      <div className="bg-white shadow rounded-lg overflow-hidden">
        <div className="px-6 py-4 border-b border-gray-200">
          <h3 className="text-lg font-semibold text-gray-900">Data</h3>
        </div>
        <div className="px-6 py-5">
          <pre className="max-h-96 overflow-y-auto bg-gray-900 text-gray-100 rounded-lg p-4 text-xs font-mono whitespace-pre-wrap break-all">
            {JSON.stringify(artifact.data, null, 2)}
          </pre>
        </div>
      </div>
    );
  }

  return null;
}

// ============================================================================
// Main page
// ============================================================================

export default function ArtifactDetailPage() {
  const { id } = useParams<{ id: string }>();
  const artifactId = id ? Number(id) : undefined;

  const { data, isLoading, error } = useArtifact(artifactId);
  const artifact = data?.data;

  // Subscribe to real-time updates for this artifact
  useArtifactStream({
    executionId: artifact?.execution ?? undefined,
    enabled: true,
  });

  if (isLoading) {
    return (
      <div className="p-6">
        <div className="flex items-center justify-center h-64">
          <Loader2 className="h-8 w-8 animate-spin text-blue-600" />
          <p className="ml-3 text-gray-600">Loading artifact...</p>
        </div>
      </div>
    );
  }

  if (error || !artifact) {
    return (
      <div className="p-6">
        <div className="mb-6">
          <Link
            to="/artifacts"
            className="flex items-center gap-2 text-gray-600 hover:text-gray-900"
          >
            <ArrowLeft className="h-4 w-4" />
            Back to Artifacts
          </Link>
        </div>
        <div className="bg-white shadow rounded-lg p-12 text-center">
          <p className="text-red-600 text-lg">
            {error ? "Failed to load artifact" : "Artifact not found"}
          </p>
          {error && (
            <p className="text-sm text-gray-600 mt-2">
              {error instanceof Error ? error.message : "Unknown error"}
            </p>
          )}
        </div>
      </div>
    );
  }

  return (
    <div className="p-6">
      {/* Back link */}
      <div className="mb-6">
        <Link
          to="/artifacts"
          className="flex items-center gap-2 text-gray-600 hover:text-gray-900 text-sm"
        >
          <ArrowLeft className="h-4 w-4" />
          Back to Artifacts
        </Link>
      </div>

      {/* Metadata card */}
      <ArtifactMetadata artifact={artifact} />

      {/* Inline content preview */}
      <div className="mt-6">
        <InlineContentPreview artifact={artifact} />
      </div>

      {/* Versions list */}
      <div className="mt-6">
        <ArtifactVersionsList artifact={artifact} />
      </div>
    </div>
  );
}
583 web/src/pages/artifacts/ArtifactsPage.tsx Normal file
@@ -0,0 +1,583 @@
import { useState, useCallback, useMemo, useEffect, memo } from "react";
import { Link, useSearchParams } from "react-router-dom";
import { Search, X, Eye, EyeOff, Download, Package } from "lucide-react";
import {
  useArtifactsList,
  type ArtifactSummary,
  type ArtifactType,
  type ArtifactVisibility,
  type OwnerType,
} from "@/hooks/useArtifacts";
import { useArtifactStream } from "@/hooks/useArtifactStream";
import {
  TYPE_OPTIONS,
  VISIBILITY_OPTIONS,
  SCOPE_OPTIONS,
  getArtifactTypeIcon,
  getArtifactTypeBadge,
  getScopeBadge,
  formatBytes,
  formatDate,
  formatTime,
  downloadArtifact,
  isDownloadable,
} from "./artifactHelpers";

// ============================================================================
// Results Table (memoized so filter typing doesn't re-render rows)
// ============================================================================

const ArtifactsResultsTable = memo(
  ({
    artifacts,
    isLoading,
    isFetching,
    error,
    hasActiveFilters,
    clearFilters,
    page,
    setPage,
    pageSize,
    total,
  }: {
    artifacts: ArtifactSummary[];
    isLoading: boolean;
    isFetching: boolean;
    error: Error | null;
    hasActiveFilters: boolean;
    clearFilters: () => void;
    page: number;
    setPage: (page: number) => void;
    pageSize: number;
    total: number;
  }) => {
    const totalPages = total ? Math.ceil(total / pageSize) : 0;

    if (isLoading && artifacts.length === 0) {
      return (
        <div className="bg-white shadow rounded-lg">
          <div className="flex items-center justify-center h-64">
            <div className="animate-spin rounded-full h-12 w-12 border-b-2 border-blue-600" />
            <p className="ml-4 text-gray-600">Loading artifacts...</p>
          </div>
        </div>
      );
    }

    if (error && artifacts.length === 0) {
      return (
        <div className="bg-white shadow rounded-lg p-12 text-center">
          <p className="text-red-600">Failed to load artifacts</p>
          <p className="text-sm text-gray-600 mt-2">{error.message}</p>
        </div>
      );
    }

    if (artifacts.length === 0) {
      return (
        <div className="bg-white shadow rounded-lg p-12 text-center">
          <Package className="mx-auto h-12 w-12 text-gray-400" />
          <p className="mt-4 text-gray-600">No artifacts found</p>
          <p className="text-sm text-gray-500 mt-1">
            {hasActiveFilters
              ? "Try adjusting your filters"
              : "Artifacts will appear here when executions produce output"}
          </p>
          {hasActiveFilters && (
            <button
              onClick={clearFilters}
              className="mt-3 text-sm text-blue-600 hover:text-blue-800"
            >
              Clear filters
            </button>
          )}
        </div>
      );
    }

    return (
      <div className="relative">
        {isFetching && (
          <div className="absolute inset-0 bg-white/60 z-10 flex items-center justify-center rounded-lg">
            <div className="animate-spin rounded-full h-8 w-8 border-b-2 border-blue-600" />
          </div>
        )}

        {error && (
          <div className="mb-4 bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded">
            <p>Error refreshing: {error.message}</p>
          </div>
        )}

        <div className="bg-white shadow rounded-lg overflow-hidden">
          <div className="overflow-x-auto">
            <table className="min-w-full divide-y divide-gray-200">
              <thead className="bg-gray-50">
                <tr>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Artifact
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Type
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Visibility
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Scope / Owner
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Execution
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Size
                  </th>
                  <th className="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Created
                  </th>
                  <th className="px-6 py-3 text-right text-xs font-medium text-gray-500 uppercase tracking-wider">
                    Actions
                  </th>
                </tr>
              </thead>
              <tbody className="bg-white divide-y divide-gray-200">
                {artifacts.map((artifact) => {
                  const typeBadge = getArtifactTypeBadge(artifact.type);
                  const scopeBadge = getScopeBadge(artifact.scope);
                  return (
                    <tr key={artifact.id} className="hover:bg-gray-50">
                      <td className="px-6 py-4">
                        <div className="flex items-center gap-2">
                          {getArtifactTypeIcon(artifact.type)}
                          <div className="min-w-0">
                            <Link
                              to={`/artifacts/${artifact.id}`}
                              className="text-sm font-medium text-blue-600 hover:text-blue-800 truncate block"
                            >
                              {artifact.name || artifact.ref}
                            </Link>
                            {artifact.name && (
                              <div className="text-xs text-gray-500 font-mono truncate">
                                {artifact.ref}
                              </div>
                            )}
                          </div>
                        </div>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        <span
                          className={`px-2 py-1 inline-flex text-xs leading-5 font-semibold rounded-full ${typeBadge.classes}`}
                        >
                          {typeBadge.label}
                        </span>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        <div className="flex items-center gap-1.5 text-sm">
                          {artifact.visibility === "public" ? (
                            <>
                              <Eye className="h-3.5 w-3.5 text-green-600" />
                              <span className="text-green-700">Public</span>
                            </>
                          ) : (
                            <>
                              <EyeOff className="h-3.5 w-3.5 text-gray-400" />
                              <span className="text-gray-600">Private</span>
                            </>
                          )}
                        </div>
                      </td>
                      <td className="px-6 py-4">
                        <div>
                          <span
                            className={`px-2 py-0.5 inline-flex text-xs leading-5 font-semibold rounded-full ${scopeBadge.classes}`}
                          >
                            {scopeBadge.label}
                          </span>
                          {artifact.owner && (
                            <div className="text-xs text-gray-500 mt-0.5 font-mono truncate max-w-[160px]">
                              {artifact.owner}
                            </div>
                          )}
                        </div>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        {artifact.execution ? (
                          <Link
                            to={`/executions/${artifact.execution}`}
                            className="text-sm font-mono text-blue-600 hover:text-blue-800"
                          >
                            #{artifact.execution}
                          </Link>
                        ) : (
                          <span className="text-sm text-gray-400 italic">
                            {"\u2014"}
                          </span>
                        )}
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap text-sm text-gray-700">
                        {formatBytes(artifact.size_bytes)}
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap">
                        <div className="text-sm text-gray-900">
                          {formatTime(artifact.created)}
                        </div>
                        <div className="text-xs text-gray-500">
                          {formatDate(artifact.created)}
                        </div>
                      </td>
                      <td className="px-6 py-4 whitespace-nowrap text-right">
                        <div className="flex items-center justify-end gap-2">
                          <Link
                            to={`/artifacts/${artifact.id}`}
                            className="text-gray-500 hover:text-blue-600"
                            title="View details"
                          >
                            <Eye className="h-4 w-4" />
                          </Link>
                          {isDownloadable(artifact.type) && (
                            <button
                              onClick={() =>
                                downloadArtifact(artifact.id, artifact.ref)
                              }
                              className="text-gray-500 hover:text-blue-600"
                              title="Download latest version"
                            >
                              <Download className="h-4 w-4" />
                            </button>
                          )}
                        </div>
                      </td>
                    </tr>
                  );
                })}
              </tbody>
            </table>
          </div>
        </div>

        {/* Pagination */}
        {totalPages > 1 && (
          <div className="bg-gray-50 px-6 py-4 flex items-center justify-between border-t border-gray-200 rounded-b-lg">
            <div className="flex-1 flex justify-between sm:hidden">
              <button
                onClick={() => setPage(page - 1)}
                disabled={page === 1}
                className="relative inline-flex items-center px-4 py-2 border border-gray-300 text-sm font-medium rounded-md text-gray-700 bg-white hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
              >
                Previous
              </button>
              <button
                onClick={() => setPage(page + 1)}
                disabled={page === totalPages}
                className="ml-3 relative inline-flex items-center px-4 py-2 border border-gray-300 text-sm font-medium rounded-md text-gray-700 bg-white hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
              >
                Next
              </button>
            </div>
            <div className="hidden sm:flex-1 sm:flex sm:items-center sm:justify-between">
              <div>
                <p className="text-sm text-gray-700">
                  Page <span className="font-medium">{page}</span> of{" "}
                  <span className="font-medium">{totalPages}</span>
                </p>
              </div>
              <div>
                <nav className="relative z-0 inline-flex rounded-md shadow-sm -space-x-px">
                  <button
                    onClick={() => setPage(page - 1)}
                    disabled={page === 1}
                    className="relative inline-flex items-center px-2 py-2 rounded-l-md border border-gray-300 bg-white text-sm font-medium text-gray-500 hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
                  >
                    Previous
                  </button>
                  <button
                    onClick={() => setPage(page + 1)}
                    disabled={page === totalPages}
                    className="relative inline-flex items-center px-2 py-2 rounded-r-md border border-gray-300 bg-white text-sm font-medium text-gray-500 hover:bg-gray-50 disabled:opacity-50 disabled:cursor-not-allowed"
                  >
                    Next
                  </button>
                </nav>
              </div>
            </div>
          </div>
        )}
      </div>
    );
  },
);

ArtifactsResultsTable.displayName = "ArtifactsResultsTable";

// ============================================================================
// Main Page
// ============================================================================

export default function ArtifactsPage() {
  const [searchParams] = useSearchParams();

  const [page, setPage] = useState(1);
  const pageSize = 20;

  const [nameFilter, setNameFilter] = useState(searchParams.get("name") || "");
  const [typeFilter, setTypeFilter] = useState<ArtifactType | "">(
    (searchParams.get("type") as ArtifactType) || "",
  );
  const [visibilityFilter, setVisibilityFilter] = useState<
    ArtifactVisibility | ""
  >((searchParams.get("visibility") as ArtifactVisibility) || "");
  const [scopeFilter, setScopeFilter] = useState<OwnerType | "">(
    (searchParams.get("scope") as OwnerType) || "",
  );
  const [ownerFilter, setOwnerFilter] = useState(
    searchParams.get("owner") || "",
  );
  const [executionFilter, setExecutionFilter] = useState(
    searchParams.get("execution") || "",
  );

  // Debounce text inputs
  const [debouncedName, setDebouncedName] = useState(nameFilter);
  const [debouncedOwner, setDebouncedOwner] = useState(ownerFilter);
  const [debouncedExecution, setDebouncedExecution] = useState(executionFilter);

  useEffect(() => {
    const t = setTimeout(() => setDebouncedName(nameFilter), 400);
    return () => clearTimeout(t);
  }, [nameFilter]);

  useEffect(() => {
    const t = setTimeout(() => setDebouncedOwner(ownerFilter), 400);
    return () => clearTimeout(t);
  }, [ownerFilter]);

  useEffect(() => {
    const t = setTimeout(() => setDebouncedExecution(executionFilter), 400);
    return () => clearTimeout(t);
  }, [executionFilter]);

  // Build query params
  const queryParams = useMemo(() => {
    const params: Record<string, unknown> = { page, perPage: pageSize };
    if (debouncedName) params.name = debouncedName;
    if (typeFilter) params.type = typeFilter;
    if (visibilityFilter) params.visibility = visibilityFilter;
    if (scopeFilter) params.scope = scopeFilter;
    if (debouncedOwner) params.owner = debouncedOwner;
    if (debouncedExecution) {
      const n = Number(debouncedExecution);
      if (!isNaN(n)) params.execution = n;
    }
    return params;
  }, [
    page,
    pageSize,
    debouncedName,
    typeFilter,
    visibilityFilter,
    scopeFilter,
    debouncedOwner,
    debouncedExecution,
  ]);

const { data, isLoading, isFetching, error } = useArtifactsList(queryParams);
|
||||||
|
|
||||||
|
// Subscribe to real-time artifact updates
|
||||||
|
useArtifactStream({ enabled: true });
|
||||||
|
|
||||||
|
const artifacts = useMemo(() => data?.data || [], [data]);
|
||||||
|
const total = data?.pagination?.total_items || 0;
|
||||||
|
|
||||||
|
const hasActiveFilters =
|
||||||
|
!!nameFilter ||
|
||||||
|
!!typeFilter ||
|
||||||
|
!!visibilityFilter ||
|
||||||
|
!!scopeFilter ||
|
||||||
|
!!ownerFilter ||
|
||||||
|
!!executionFilter;
|
||||||
|
|
||||||
|
const clearFilters = useCallback(() => {
|
||||||
|
setNameFilter("");
|
||||||
|
setTypeFilter("");
|
||||||
|
setVisibilityFilter("");
|
||||||
|
setScopeFilter("");
|
||||||
|
setOwnerFilter("");
|
||||||
|
setExecutionFilter("");
|
||||||
|
setPage(1);
|
||||||
|
}, []);
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div className="p-6">
|
||||||
|
{/* Header */}
|
||||||
|
<div className="mb-6">
|
||||||
|
<div className="flex items-center justify-between">
|
||||||
|
<div>
|
||||||
|
<h1 className="text-3xl font-bold text-gray-900">Artifacts</h1>
|
||||||
|
<p className="mt-2 text-gray-600">
|
||||||
|
Files, progress indicators, and data produced by executions
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Filters */}
|
||||||
|
<div className="bg-white shadow rounded-lg p-4 mb-6">
|
||||||
|
<div className="flex items-center justify-between mb-4">
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
<Search className="h-5 w-5 text-gray-400" />
|
||||||
|
<h2 className="text-lg font-semibold">Filter Artifacts</h2>
|
||||||
|
</div>
|
||||||
|
{hasActiveFilters && (
|
||||||
|
<button
|
||||||
|
onClick={clearFilters}
|
||||||
|
className="flex items-center gap-1 text-sm text-gray-600 hover:text-gray-900"
|
||||||
|
>
|
||||||
|
<X className="h-4 w-4" />
|
||||||
|
Clear Filters
|
||||||
|
</button>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="grid grid-cols-1 md:grid-cols-3 lg:grid-cols-6 gap-4">
|
||||||
|
{/* Name search */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Name
|
||||||
|
</label>
|
||||||
|
<input
|
||||||
|
type="text"
|
||||||
|
value={nameFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setNameFilter(e.target.value);
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
placeholder="Search by name..."
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Type */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Type
|
||||||
|
</label>
|
||||||
|
<select
|
||||||
|
value={typeFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setTypeFilter(e.target.value as ArtifactType | "");
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
>
|
||||||
|
<option value="">All Types</option>
|
||||||
|
{TYPE_OPTIONS.map((o) => (
|
||||||
|
<option key={o.value} value={o.value}>
|
||||||
|
{o.label}
|
||||||
|
</option>
|
||||||
|
))}
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Visibility */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Visibility
|
||||||
|
</label>
|
||||||
|
<select
|
||||||
|
value={visibilityFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setVisibilityFilter(e.target.value as ArtifactVisibility | "");
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
>
|
||||||
|
<option value="">All</option>
|
||||||
|
{VISIBILITY_OPTIONS.map((o) => (
|
||||||
|
<option key={o.value} value={o.value}>
|
||||||
|
{o.label}
|
||||||
|
</option>
|
||||||
|
))}
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Scope */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Scope
|
||||||
|
</label>
|
||||||
|
<select
|
||||||
|
value={scopeFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setScopeFilter(e.target.value as OwnerType | "");
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
>
|
||||||
|
<option value="">All Scopes</option>
|
||||||
|
{SCOPE_OPTIONS.map((o) => (
|
||||||
|
<option key={o.value} value={o.value}>
|
||||||
|
{o.label}
|
||||||
|
</option>
|
||||||
|
))}
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Owner */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Owner
|
||||||
|
</label>
|
||||||
|
<input
|
||||||
|
type="text"
|
||||||
|
value={ownerFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setOwnerFilter(e.target.value);
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
placeholder="e.g. mypack.deploy"
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Execution ID */}
|
||||||
|
<div>
|
||||||
|
<label className="block text-sm font-medium text-gray-700 mb-1">
|
||||||
|
Execution
|
||||||
|
</label>
|
||||||
|
<input
|
||||||
|
type="text"
|
||||||
|
value={executionFilter}
|
||||||
|
onChange={(e) => {
|
||||||
|
setExecutionFilter(e.target.value);
|
||||||
|
setPage(1);
|
||||||
|
}}
|
||||||
|
placeholder="Execution ID"
|
||||||
|
className="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500 text-sm"
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{data && (
|
||||||
|
<div className="mt-3 text-sm text-gray-600">
|
||||||
|
Showing {artifacts.length} of {total} artifacts
|
||||||
|
{hasActiveFilters && " (filtered)"}
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Results */}
|
||||||
|
<ArtifactsResultsTable
|
||||||
|
artifacts={artifacts}
|
||||||
|
isLoading={isLoading}
|
||||||
|
isFetching={isFetching}
|
||||||
|
error={error as Error | null}
|
||||||
|
hasActiveFilters={hasActiveFilters}
|
||||||
|
clearFilters={clearFilters}
|
||||||
|
page={page}
|
||||||
|
setPage={setPage}
|
||||||
|
pageSize={pageSize}
|
||||||
|
total={total}
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
190 web/src/pages/artifacts/artifactHelpers.tsx Normal file
@@ -0,0 +1,190 @@
import {
  FileText,
  FileImage,
  File,
  BarChart3,
  Link as LinkIcon,
  Table2,
  Package,
} from "lucide-react";
import type { ArtifactType, OwnerType } from "@/hooks/useArtifacts";
import { OpenAPI } from "@/api/core/OpenAPI";

// ============================================================================
// Filter option constants
// ============================================================================

export const TYPE_OPTIONS: { value: ArtifactType; label: string }[] = [
  { value: "file_text", label: "Text File" },
  { value: "file_image", label: "Image" },
  { value: "file_binary", label: "Binary" },
  { value: "file_datatable", label: "Data Table" },
  { value: "progress", label: "Progress" },
  { value: "url", label: "URL" },
  { value: "other", label: "Other" },
];

export const VISIBILITY_OPTIONS: { value: string; label: string }[] = [
  { value: "public", label: "Public" },
  { value: "private", label: "Private" },
];

export const SCOPE_OPTIONS: { value: OwnerType; label: string }[] = [
  { value: "system", label: "System" },
  { value: "pack", label: "Pack" },
  { value: "action", label: "Action" },
  { value: "sensor", label: "Sensor" },
  { value: "rule", label: "Rule" },
];

// ============================================================================
// Icon / badge helpers
// ============================================================================

export function getArtifactTypeIcon(type: ArtifactType) {
  switch (type) {
    case "file_text":
      return <FileText className="h-4 w-4 text-blue-500" />;
    case "file_image":
      return <FileImage className="h-4 w-4 text-purple-500" />;
    case "file_binary":
      return <File className="h-4 w-4 text-gray-500" />;
    case "file_datatable":
      return <Table2 className="h-4 w-4 text-green-500" />;
    case "progress":
      return <BarChart3 className="h-4 w-4 text-amber-500" />;
    case "url":
      return <LinkIcon className="h-4 w-4 text-cyan-500" />;
    case "other":
    default:
      return <Package className="h-4 w-4 text-gray-400" />;
  }
}

export function getArtifactTypeBadge(type: ArtifactType): {
  label: string;
  classes: string;
} {
  switch (type) {
    case "file_text":
      return { label: "Text File", classes: "bg-blue-100 text-blue-800" };
    case "file_image":
      return { label: "Image", classes: "bg-purple-100 text-purple-800" };
    case "file_binary":
      return { label: "Binary", classes: "bg-gray-100 text-gray-800" };
    case "file_datatable":
      return { label: "Data Table", classes: "bg-green-100 text-green-800" };
    case "progress":
      return { label: "Progress", classes: "bg-amber-100 text-amber-800" };
    case "url":
      return { label: "URL", classes: "bg-cyan-100 text-cyan-800" };
    case "other":
    default:
      return { label: "Other", classes: "bg-gray-100 text-gray-700" };
  }
}

export function getScopeBadge(scope: OwnerType): {
  label: string;
  classes: string;
} {
  switch (scope) {
    case "system":
      return { label: "System", classes: "bg-purple-100 text-purple-800" };
    case "pack":
      return { label: "Pack", classes: "bg-green-100 text-green-800" };
    case "action":
      return { label: "Action", classes: "bg-yellow-100 text-yellow-800" };
    case "sensor":
      return { label: "Sensor", classes: "bg-indigo-100 text-indigo-800" };
    case "rule":
      return { label: "Rule", classes: "bg-blue-100 text-blue-800" };
    default:
      return { label: scope, classes: "bg-gray-100 text-gray-700" };
  }
}

export function getVisibilityBadge(visibility: string): {
  label: string;
  classes: string;
} {
  if (visibility === "public") {
    return { label: "Public", classes: "text-green-700" };
  }
  return { label: "Private", classes: "text-gray-600" };
}

// ============================================================================
// Formatting helpers
// ============================================================================

export function formatBytes(bytes: number | null): string {
  if (bytes == null || bytes === 0) return "\u2014";
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

export function formatDate(dateString: string) {
  return new Date(dateString).toLocaleString();
}

export function formatTime(timestamp: string) {
  const date = new Date(timestamp);
  const now = new Date();
  const diff = now.getTime() - date.getTime();

  if (diff < 60000) return "just now";
  if (diff < 3600000) return `${Math.floor(diff / 60000)}m ago`;
  if (diff < 86400000) return `${Math.floor(diff / 3600000)}h ago`;
  return date.toLocaleDateString();
}

// ============================================================================
// Download helper
// ============================================================================

export async function downloadArtifact(
  artifactId: number,
  artifactRef: string,
) {
  const token = localStorage.getItem("access_token");
  const url = `${OpenAPI.BASE}/api/v1/artifacts/${artifactId}/download`;

  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
    },
  });

  if (!response.ok) {
    console.error(`Download failed: ${response.status} ${response.statusText}`);
    return;
  }

  const disposition = response.headers.get("Content-Disposition");
  let filename = artifactRef.replace(/\./g, "_") + ".bin";
  if (disposition) {
    const match = disposition.match(/filename="?([^"]+)"?/);
    if (match) filename = match[1];
  }

  const blob = await response.blob();
  const blobUrl = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = blobUrl;
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(blobUrl);
}

export function isDownloadable(type: ArtifactType): boolean {
  return (
    type === "file_text" ||
    type === "file_image" ||
    type === "file_binary" ||
    type === "file_datatable"
  );
}
@@ -24,6 +24,7 @@ import ExecuteActionModal from "@/components/common/ExecuteActionModal";
 import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";
 import WorkflowTasksPanel from "@/components/common/WorkflowTasksPanel";
 import ExecutionArtifactsPanel from "@/components/executions/ExecutionArtifactsPanel";
+import ExecutionProgressBar from "@/components/executions/ExecutionProgressBar";

 const getStatusColor = (status: string) => {
   switch (status) {
@@ -360,6 +361,14 @@ export default function ExecutionDetailPage() {
             </div>
           )}
         </dl>
+
+        {/* Inline progress bar (visible when execution has progress artifacts) */}
+        {isRunning && (
+          <ExecutionProgressBar
+            executionId={execution.id}
+            isRunning={isRunning}
+          />
+        )}
       </div>

       {/* Config/Parameters */}
70 work-summary/2026-03-03-cli-pack-upload.md Normal file
@@ -0,0 +1,70 @@

# CLI Pack Upload Command

**Date**: 2026-03-03
**Scope**: `crates/cli`, `crates/api`

## Problem

The `attune pack register` command requires the API server to be able to reach the pack directory at the specified filesystem path. When the API runs inside Docker, this means the path must be inside a known container mount (e.g. `/opt/attune/packs.dev/...`). There was no way to install a pack from an arbitrary local path on the developer's machine into a Dockerized Attune system.

## Solution

Added a new `pack upload` CLI command and a corresponding `POST /api/v1/packs/upload` API endpoint. The CLI creates a `.tar.gz` archive of the local pack directory in memory and streams it to the API via `multipart/form-data`. The API extracts the archive and calls the existing `register_pack_internal` function, so all normal registration logic (component loading, workflow sync, MQ notifications) still applies.

## Changes

### New API endpoint: `POST /api/v1/packs/upload`

- **File**: `crates/api/src/routes/packs.rs`
- Accepts `multipart/form-data` with:
  - `pack` (required) — `.tar.gz` archive of the pack directory
  - `force` (optional) — `"true"` to overwrite an existing pack
  - `skip_tests` (optional) — `"true"` to skip test execution
- Extracts the archive to a temp directory using `flate2` + `tar`
- Locates `pack.yaml` at the archive root or one level deep (handles GitHub-style tarballs)
- Reads the pack `ref`, moves the directory to permanent storage, then calls `register_pack_internal`
- Added helper: `find_pack_root()` walks up to one level to find `pack.yaml`
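The root-or-one-level-deep lookup that `find_pack_root()` performs can be sketched as follows (a TypeScript analogue under the stated assumptions; the real helper is Rust in `crates/api/src/routes/packs.rs`):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Accept pack.yaml at the extraction root, or exactly one directory level
// down (the GitHub-tarball case where content sits under "repo-name-sha/").
function findPackRoot(extractDir: string): string | null {
  if (fs.existsSync(path.join(extractDir, "pack.yaml"))) return extractDir;
  for (const entry of fs.readdirSync(extractDir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const candidate = path.join(extractDir, entry.name);
    if (fs.existsSync(path.join(candidate, "pack.yaml"))) return candidate;
  }
  return null; // no pack.yaml found at depth 0 or 1
}
```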

### New CLI command: `attune pack upload <path>`

- **File**: `crates/cli/src/commands/pack.rs`
- Validates the local path exists and contains `pack.yaml`
- Reads `pack.yaml` to extract the pack ref for display messages
- Builds an in-memory `.tar.gz` using `tar::Builder` + `flate2::GzEncoder`
- Helper `append_dir_to_tar()` recursively archives directory contents with paths relative to the pack root (symlinks are skipped)
- Calls `ApiClient::multipart_post()` with the archive bytes
- Flags: `--force` / `--skip-tests`
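The traversal contract of `append_dir_to_tar()` (paths relative to the pack root, symlinks skipped) can be sketched in TypeScript, collecting the entry names that would be written rather than producing actual tar entries:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Walk a pack directory the way append_dir_to_tar() is described above:
// recurse into subdirectories, skip symlinks, and record each regular file
// by its path relative to the pack root.
function collectPackEntries(root: string, dir: string = root): string[] {
  const out: string[] = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    if (entry.isSymbolicLink()) continue; // symlinks are skipped
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      out.push(...collectPackEntries(root, full));
    } else if (entry.isFile()) {
      out.push(path.relative(root, full)); // relative to pack root
    }
  }
  return out;
}
```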

### New `ApiClient::multipart_post()` method

- **File**: `crates/cli/src/client.rs`
- Accepts a file field (name, bytes, filename, MIME type) plus a list of extra text fields
- Follows the same 401-refresh-then-error pattern as other methods
- HTTP client timeout increased from 30s to 300s for uploads

### `pack register` UX improvement

- **File**: `crates/cli/src/commands/pack.rs`
- Emits a warning when the supplied path looks like a local filesystem path (not under `/opt/attune/`, `/app/`, etc.), suggesting `pack upload` instead

### New workspace dependencies

- **Workspace** (`Cargo.toml`): `tar = "0.4"`, `flate2 = "1.0"`; `tempfile` moved from testing to runtime
- **API** (`crates/api/Cargo.toml`): added `tar`, `flate2`, `tempfile`
- **CLI** (`crates/cli/Cargo.toml`): added `tar`, `flate2`; `reqwest` gains the `multipart` + `stream` features

## Usage

```bash
# Log in to the Dockerized system
attune --api-url http://localhost:8080 auth login \
  --username test@attune.local --password 'TestPass123!'

# Upload and register a local pack (works from any machine)
attune --api-url http://localhost:8080 pack upload ./packs.external/python_example \
  --skip-tests --force
```

## Verification

Tested against a live Docker Compose stack:
- Pack archive created (~13 KB for `python_example`)
- API received, extracted, and stored the pack at `/opt/attune/packs/python_example`
- All 5 actions, 1 trigger, and 1 sensor were registered
- `pack.registered` MQ event published to trigger worker environment setup
- `attune action list` confirmed all components were visible
132 work-summary/2026-03-03-cli-wait-notifier-fixes.md Normal file
@@ -0,0 +1,132 @@

# CLI `--wait` and Notifier WebSocket Fixes

**Date**: 2026-03-03
**Session type**: Bug investigation and fix

## Summary

Investigated and fixed a long-standing hang in `attune action execute --wait` and the underlying root-cause bugs in the notifier service. The `--wait` flag now works reliably, returning within milliseconds of execution completion via WebSocket notifications.

## Problems Found and Fixed

### Bug 1: PostgreSQL `PgListener` broken after sequential `listen()` calls (Notifier)

**File**: `crates/notifier/src/postgres_listener.rs`

**Symptom**: The notifier service never received any PostgreSQL LISTEN/NOTIFY messages after startup. Direct `pg_notify()` calls from psql also went undelivered.

**Root cause**: The notifier called `listener.listen(channel)` in a loop — once per channel — totalling 9 separate calls. In sqlx 0.8, each `listen()` call sends a `LISTEN` command and reads a `ReadyForQuery` response. The repeated calls left the connection in an unexpected state where subsequent `recv()` calls would never fire, even though the PostgreSQL backend showed the connection as actively `LISTEN`-ing.

**Fix**: Replaced the loop with a single `listener.listen_all(NOTIFICATION_CHANNELS.iter().copied()).await` call, which issues all 9 LISTEN commands in one round-trip. Extracted a `create_listener()` helper so the same single-call pattern is used on reconnect.

```crates/notifier/src/postgres_listener.rs#L93-135
async fn create_listener(&self) -> Result<PgListener> {
    let mut listener = PgListener::connect(&self.database_url)
        .await
        .context("Failed to connect PostgreSQL listener")?;

    // Use listen_all for a single round-trip instead of N separate commands
    listener
        .listen_all(NOTIFICATION_CHANNELS.iter().copied())
        .await
        .context("Failed to LISTEN on notification channels")?;

    Ok(listener)
}
```

Also added:
- A 60-second heartbeat log (`INFO: PostgreSQL listener heartbeat`) so it's easy to confirm the task is alive during idle periods
- A `tokio::time::timeout` wrapper on `recv()` so the heartbeat fires even when no notifications arrive
- Improved reconnect logging

### Bug 2: Notifications serialized without the `"type"` field (Notifier → CLI)

**File**: `crates/notifier/src/websocket_server.rs`

**Symptom**: Even after fixing Bug 1, the CLI's WebSocket loop received messages, but `serde_json::from_str::<ServerMsg>(&txt)` always failed with `missing field 'type'`, silently falling through the `Err(_)` catch-all arm.

**Root cause**: The outgoing notification task serialized the raw `Notification` struct directly:

```rust
match serde_json::to_string(&notification) { ... }
```

The `Notification` struct has no `type` field. The CLI's `ServerMsg` enum uses `#[serde(tag = "type")]`, so it expects `{"type":"notification",...}`. The bare struct produces `{"notification_type":"...","entity_type":"...",...}` — no `"type"` key.

**Fix**: Wrap the notification in the `ClientMessage` tagged enum before serializing:

```rust
let envelope = ClientMessage::Notification(notification);
match serde_json::to_string(&envelope) { ... }
```

This produces the correct `{"type":"notification","notification_type":"...","entity_type":"...","entity_id":...,"payload":{...}}` format.
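The failure mode is easy to reproduce in any language with a tagged union. Here is a TypeScript sketch (the union shape mirrors the protocol notes; `parseServerMsg` is a hypothetical name standing in for the CLI's Rust `ServerMsg` deserialization): a bare notification object without the `"type"` discriminant cannot be classified, while the envelope parses cleanly.

```typescript
// Simplified analogue of the CLI's internally-tagged ServerMsg enum
type ServerMsg =
  | { type: "welcome"; client_id: string; message: string }
  | { type: "notification"; notification_type: string; entity_type: string; entity_id: number; payload: unknown }
  | { type: "error"; message: string };

function parseServerMsg(raw: string): ServerMsg | null {
  const obj = JSON.parse(raw);
  const t = obj?.type;
  if (t === "welcome" || t === "notification" || t === "error") {
    return obj as ServerMsg;
  }
  return null; // missing/unknown "type" — the Bug 2 failure mode
}
```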

### Bug 3: Polling fallback used exhausted deadline (CLI)

**File**: `crates/cli/src/wait.rs`

**Symptom**: When `--wait` fell back to polling (e.g. when WS notifications weren't delivered), the polling would immediately time out even though the execution had long since completed.

**Root cause**: Both the WebSocket path and the polling fallback shared a single `deadline = Instant::now() + timeout_secs`. The WS path ran until the deadline, leaving 0 time for polling.

**Fix**: Reserve a minimum polling budget (`MIN_POLL_BUDGET = 10s`) so the WS path exits early enough to leave polling headroom:

```rust
const MIN_POLL_BUDGET: Duration = Duration::from_secs(10);
let ws_deadline = if overall_deadline > Instant::now() + MIN_POLL_BUDGET {
    overall_deadline - MIN_POLL_BUDGET
} else {
    overall_deadline // very short timeout — skip WS, go straight to polling
};
```

Polling always uses `overall_deadline` directly (the full user-specified timeout), so at minimum `MIN_POLL_BUDGET` of polling time is guaranteed.

### Additional CLI improvement: poll-first in polling loop

The polling fallback now checks the execution status **immediately** on entry (before sleeping) rather than sleeping first. This catches the common case where the execution already completed while the WS path was running.

Also improved error handling in the poll loop: REST failures are logged and retried rather than propagating as fatal errors.

## End-to-End Verification

```
$ attune --profile docker action execute core.echo --param message="Hello!" --wait
ℹ Executing action: core.echo
ℹ Waiting for execution 51 to complete...
[notifier] connected to ws://localhost:8081/ws
[notifier] session id client_2
[notifier] subscribed to entity:execution:51
[notifier] execution_status_changed for execution 51 — status=Some("scheduled")
[notifier] execution_status_changed for execution 51 — status=Some("running")
[notifier] execution_status_changed for execution 51 — status=Some("completed")
✓ Execution 51 completed
```

Three consecutive runs all returned via WebSocket within milliseconds; no polling fallback triggered.

## Files Changed

| File | Change |
|------|--------|
| `crates/notifier/src/postgres_listener.rs` | Replace sequential `listen()` loop with `listen_all()`; add `create_listener()` helper; add heartbeat logging with timeout-wrapped `recv()` |
| `crates/notifier/src/websocket_server.rs` | Wrap `Notification` in `ClientMessage::Notification(...)` before serializing outgoing WS messages |
| `crates/notifier/src/service.rs` | Handle `RecvError::Lagged` and `RecvError::Closed` in broadcaster; add `debug` import |
| `crates/notifier/src/subscriber_manager.rs` | Scale broadcast result logging back to `debug` level |
| `crates/cli/src/wait.rs` | Fix shared-deadline bug with `MIN_POLL_BUDGET`; poll immediately on entry; improve error handling and verbose logging |
| `AGENTS.md` | Document notifier WebSocket protocol and the `listen_all` requirement |

## Key Protocol Facts (for future reference)

**Notifier WebSocket — server → client message format**:
```json
{"type":"notification","notification_type":"execution_status_changed","entity_type":"execution","entity_id":42,"user_id":null,"payload":{...execution row...},"timestamp":"..."}
```

**Notifier WebSocket — client → server subscribe format**:
```json
{"type":"subscribe","filter":"entity:execution:42"}
```

Filter formats supported: `all`, `entity_type:<type>`, `entity:<type>:<id>`, `user:<id>`, `notification_type:<type>`
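A small helper can make these filter formats harder to get wrong. The sketch below is a hypothetical TypeScript client utility (`makeFilter` is an invented name; only the filter string grammar comes from the protocol notes above):

```typescript
// Typed constructor for the five documented subscription filter formats
type Filter =
  | { kind: "all" }
  | { kind: "entity_type"; entityType: string }
  | { kind: "entity"; entityType: string; id: number }
  | { kind: "user"; id: number }
  | { kind: "notification_type"; notificationType: string };

function makeFilter(f: Filter): string {
  switch (f.kind) {
    case "all":
      return "all";
    case "entity_type":
      return `entity_type:${f.entityType}`;
    case "entity":
      return `entity:${f.entityType}:${f.id}`;
    case "user":
      return `user:${f.id}`;
    case "notification_type":
      return `notification_type:${f.notificationType}`;
  }
}
```

A subscribe frame for execution 42 would then be `JSON.stringify({ type: "subscribe", filter: makeFilter({ kind: "entity", entityType: "execution", id: 42 }) })`.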

**Critical rule**: Always use `PgListener::listen_all()` for subscribing to multiple PostgreSQL channels. Individual `listen()` calls in a loop leave the listener in a broken state in sqlx 0.8.