diff --git a/AGENTS.md b/AGENTS.md
index ee6cab6..531d04b 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -148,6 +148,8 @@ Completion listener advances workflow → Schedules successor tasks → Complete
- **Inquiry**: Human-in-the-loop async interaction (approvals, inputs)
- **Identity**: User/service account with RBAC permissions
- **Key**: Encrypted secrets storage
+- **Artifact**: Tracked output from executions (files, logs, progress indicators). Metadata + optional structured `data` (JSONB). Linked to execution via plain BIGINT (no FK). Supports retention policies (version-count or time-based).
+- **ArtifactVersion**: Immutable content snapshot for an artifact. Stores binary content (BYTEA) and/or structured JSON. Version number auto-assigned. Retention trigger auto-deletes oldest versions beyond limit.
## Key Tools & Libraries
@@ -222,8 +224,9 @@ Completion listener advances workflow → Schedules successor tasks → Complete
- **Entity History Tracking (TimescaleDB)**: Append-only `*_history` hypertables track field-level changes to `execution` and `worker` tables. Populated by PostgreSQL `AFTER INSERT OR UPDATE OR DELETE` triggers — no Rust code changes needed for recording. Uses JSONB diff format (`old_values`/`new_values`) with a `changed_fields TEXT[]` column for efficient filtering. Worker heartbeat-only updates are excluded. There are **no `event_history` or `enforcement_history` tables** — events are immutable and enforcements have a single deterministic status transition, so both tables are hypertables themselves. See `docs/plans/timescaledb-entity-history.md` for full design. The execution history trigger tracks: `status`, `result`, `executor`, `workflow_task`, `env_vars`, `started_at`.
- **History Large-Field Guardrails**: The `execution` history trigger stores a compact **digest summary** instead of the full value for the `result` column (which can be arbitrarily large). The digest is produced by the `_jsonb_digest_summary(JSONB)` helper function and has the shape `{"digest": "md5:<hex>", "size": <bytes>, "type": "<jsonb type>"}`. This preserves change-detection semantics while avoiding history table bloat. The full result is always available on the live `execution` row. When adding new large JSONB columns to history triggers, use `_jsonb_digest_summary()` instead of storing the raw value.
- **Nullable FK Fields**: `rule.action` and `rule.trigger` are nullable (`Option<i64>` in Rust) — a rule with NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, `execution.started_at`, and `event.source` are also nullable. `enforcement.event` is nullable but has no FK constraint (event is a hypertable). `execution.enforcement` is nullable but has no FK constraint (enforcement is a hypertable). All FK columns on the execution table (`action`, `parent`, `original_execution`, `enforcement`, `executor`, `workflow_def`) have no FK constraints (execution is a hypertable). `inquiry.execution` and `workflow_execution.execution` also have no FK constraints. `enforcement.resolved_at` is nullable — `None` while status is `created`, set when resolved. `execution.started_at` is nullable — `None` until the worker sets status to `running`.
-**Table Count**: 20 tables total in the schema (including `runtime_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, + `execution` hypertables)
-**Migration Count**: 9 migrations (`000001` through `000009`) — see `migrations/` directory
+**Table Count**: 21 tables total in the schema (including `runtime_version`, `artifact_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, + `execution` hypertables)
+**Migration Count**: 10 migrations (`000001` through `000010`) — see `migrations/` directory
+- **Artifact System**: The `artifact` table stores metadata + structured data (progress entries via JSONB `data` column). The `artifact_version` table stores immutable content snapshots (binary BYTEA or JSONB). Version numbering is auto-assigned via `next_artifact_version()` SQL function. A DB trigger (`enforce_artifact_retention`) auto-deletes oldest versions when count exceeds the artifact's `retention_limit`. `artifact.execution` is a plain BIGINT (no FK — execution is a hypertable). Progress-type artifacts use `artifact.data` (atomic JSON array append); file-type artifacts use `artifact_version` rows. Binary content is excluded from default queries for performance (`SELECT_COLUMNS` vs `SELECT_COLUMNS_WITH_CONTENT`).
- **Pack Component Loading Order**: Runtimes → Triggers → Actions → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order.
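The version-count retention described above can be modeled with a minimal std-only Rust sketch. This is illustrative only — the real enforcement happens in the `enforce_artifact_retention` Postgres trigger when a new `artifact_version` row is inserted; the function name and types here are assumptions for the sketch:

```rust
/// Illustrative model of version-count retention: keep only the newest
/// `retention_limit` versions, deleting the oldest first (mirroring what
/// the DB trigger does after each artifact_version insert).
fn enforce_retention(mut versions: Vec<u32>, retention_limit: usize) -> Vec<u32> {
    versions.sort_unstable(); // version numbers are monotonically assigned
    let excess = versions.len().saturating_sub(retention_limit);
    versions.drain(..excess); // drop the `excess` oldest versions
    versions
}

fn main() {
    // With retention_limit = 5, inserting a 6th version evicts version 1.
    let kept = enforce_retention(vec![1, 2, 3, 4, 5, 6], 5);
    println!("{:?}", kept); // [2, 3, 4, 5, 6]
}
```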
### Workflow Execution Orchestration
@@ -597,8 +600,8 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
- **Web UI**: Static files served separately or via API service
## Current Development Status
-- ✅ **Complete**: Database migrations (20 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, `system`)
-- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management
+- ✅ **Complete**: Database migrations (21 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, `system`), Artifact content system (versioned file/JSON storage, progress-append semantics, binary upload/download, retention enforcement, execution-linked artifacts, 17 API endpoints under `/api/v1/artifacts/`)
+- 🔄 **In Progress**: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management, Artifact UI (web UI for browsing/downloading artifacts)
- 📋 **Planned**: Notifier service, execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
## Quick Reference
diff --git a/Cargo.toml b/Cargo.toml
index 9c4742e..77735b1 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -73,6 +73,9 @@ jsonschema = "0.38"
# OpenAPI/Swagger
utoipa = { version = "5.4", features = ["chrono", "uuid"] }
+# JWT
+jsonwebtoken = { version = "10.2", features = ["rust_crypto"] }
+
# Encryption
argon2 = "0.5"
ring = "0.17"
diff --git a/crates/api/Cargo.toml b/crates/api/Cargo.toml
index 226e6d2..386c6c1 100644
--- a/crates/api/Cargo.toml
+++ b/crates/api/Cargo.toml
@@ -26,7 +26,7 @@ async-trait = { workspace = true }
futures = { workspace = true }
# Web framework
-axum = { workspace = true }
+axum = { workspace = true, features = ["multipart"] }
tower = { workspace = true }
tower-http = { workspace = true }
@@ -69,7 +69,6 @@ jsonschema = { workspace = true }
reqwest = { workspace = true }
# Authentication
-jsonwebtoken = { version = "10.2", features = ["rust_crypto"] }
argon2 = { workspace = true }
rand = "0.9"
diff --git a/crates/api/src/auth/jwt.rs b/crates/api/src/auth/jwt.rs
index 6624a7a..b679415 100644
--- a/crates/api/src/auth/jwt.rs
+++ b/crates/api/src/auth/jwt.rs
@@ -1,389 +1,11 @@
//! JWT token generation and validation
+//!
+//! This module re-exports all JWT functionality from `attune_common::auth::jwt`.
+//! The canonical implementation lives in the common crate so that all services
+//! (API, worker, sensor) share the same token types and signing logic.
-use chrono::{Duration, Utc};
-use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
-use serde::{Deserialize, Serialize};
-use thiserror::Error;
-
-#[derive(Debug, Error)]
-pub enum JwtError {
- #[error("Failed to encode JWT: {0}")]
- EncodeError(String),
- #[error("Failed to decode JWT: {0}")]
- DecodeError(String),
- #[error("Token has expired")]
- Expired,
- #[error("Invalid token")]
- Invalid,
-}
-
-/// JWT Claims structure
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Claims {
- /// Subject (identity ID)
- pub sub: String,
- /// Identity login
- pub login: String,
- /// Issued at (Unix timestamp)
- pub iat: i64,
- /// Expiration time (Unix timestamp)
- pub exp: i64,
- /// Token type (access or refresh)
- #[serde(default)]
- pub token_type: TokenType,
- /// Optional scope (e.g., "sensor", "service")
- #[serde(skip_serializing_if = "Option::is_none")]
- pub scope: Option<String>,
- /// Optional metadata (e.g., trigger_types for sensors)
- #[serde(skip_serializing_if = "Option::is_none")]
- pub metadata: Option<serde_json::Value>,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
-#[serde(rename_all = "lowercase")]
-pub enum TokenType {
- Access,
- Refresh,
- Sensor,
-}
-
-impl Default for TokenType {
- fn default() -> Self {
- Self::Access
- }
-}
-
-/// Configuration for JWT tokens
-#[derive(Debug, Clone)]
-pub struct JwtConfig {
- /// Secret key for signing tokens
- pub secret: String,
- /// Access token expiration duration (in seconds)
- pub access_token_expiration: i64,
- /// Refresh token expiration duration (in seconds)
- pub refresh_token_expiration: i64,
-}
-
-impl Default for JwtConfig {
- fn default() -> Self {
- Self {
- secret: "insecure_default_secret_change_in_production".to_string(),
- access_token_expiration: 3600, // 1 hour
- refresh_token_expiration: 604800, // 7 days
- }
- }
-}
-
-/// Generate a JWT access token
-///
-/// # Arguments
-/// * `identity_id` - The identity ID
-/// * `login` - The identity login
-/// * `config` - JWT configuration
-///
-/// # Returns
-/// * `Result<String, JwtError>` - The encoded JWT token
-pub fn generate_access_token(
- identity_id: i64,
- login: &str,
- config: &JwtConfig,
-) -> Result<String, JwtError> {
- generate_token(identity_id, login, config, TokenType::Access)
-}
-
-/// Generate a JWT refresh token
-///
-/// # Arguments
-/// * `identity_id` - The identity ID
-/// * `login` - The identity login
-/// * `config` - JWT configuration
-///
-/// # Returns
-/// * `Result<String, JwtError>` - The encoded JWT token
-pub fn generate_refresh_token(
- identity_id: i64,
- login: &str,
- config: &JwtConfig,
-) -> Result<String, JwtError> {
- generate_token(identity_id, login, config, TokenType::Refresh)
-}
-
-/// Generate a JWT token
-///
-/// # Arguments
-/// * `identity_id` - The identity ID
-/// * `login` - The identity login
-/// * `config` - JWT configuration
-/// * `token_type` - Type of token to generate
-///
-/// # Returns
-/// * `Result<String, JwtError>` - The encoded JWT token
-pub fn generate_token(
- identity_id: i64,
- login: &str,
- config: &JwtConfig,
- token_type: TokenType,
-) -> Result<String, JwtError> {
- let now = Utc::now();
- let expiration = match token_type {
- TokenType::Access => config.access_token_expiration,
- TokenType::Refresh => config.refresh_token_expiration,
- TokenType::Sensor => 86400, // Sensor tokens handled separately via generate_sensor_token()
- };
-
- let exp = (now + Duration::seconds(expiration)).timestamp();
-
- let claims = Claims {
- sub: identity_id.to_string(),
- login: login.to_string(),
- iat: now.timestamp(),
- exp,
- token_type,
- scope: None,
- metadata: None,
- };
-
- encode(
- &Header::default(),
- &claims,
- &EncodingKey::from_secret(config.secret.as_bytes()),
- )
- .map_err(|e| JwtError::EncodeError(e.to_string()))
-}
-
-/// Generate a sensor token with specific trigger types
-///
-/// # Arguments
-/// * `identity_id` - The identity ID for the sensor
-/// * `sensor_ref` - The sensor reference (e.g., "sensor:core.timer")
-/// * `trigger_types` - List of trigger types this sensor can create events for
-/// * `config` - JWT configuration
-/// * `ttl_seconds` - Time to live in seconds (default: 24 hours)
-///
-/// # Returns
-/// * `Result<String, JwtError>` - The encoded JWT token
-pub fn generate_sensor_token(
- identity_id: i64,
- sensor_ref: &str,
- trigger_types: Vec<String>,
- config: &JwtConfig,
- ttl_seconds: Option<i64>,
-) -> Result<String, JwtError> {
- let now = Utc::now();
- let expiration = ttl_seconds.unwrap_or(86400); // Default: 24 hours
- let exp = (now + Duration::seconds(expiration)).timestamp();
-
- let metadata = serde_json::json!({
- "trigger_types": trigger_types,
- });
-
- let claims = Claims {
- sub: identity_id.to_string(),
- login: sensor_ref.to_string(),
- iat: now.timestamp(),
- exp,
- token_type: TokenType::Sensor,
- scope: Some("sensor".to_string()),
- metadata: Some(metadata),
- };
-
- encode(
- &Header::default(),
- &claims,
- &EncodingKey::from_secret(config.secret.as_bytes()),
- )
- .map_err(|e| JwtError::EncodeError(e.to_string()))
-}
-
-/// Validate and decode a JWT token
-///
-/// # Arguments
-/// * `token` - The JWT token string
-/// * `config` - JWT configuration
-///
-/// # Returns
-/// * `Result<Claims, JwtError>` - The decoded claims if valid
-pub fn validate_token(token: &str, config: &JwtConfig) -> Result<Claims, JwtError> {
- let validation = Validation::default();
-
- decode::<Claims>(
- token,
- &DecodingKey::from_secret(config.secret.as_bytes()),
- &validation,
- )
- .map(|data| data.claims)
- .map_err(|e| {
- if e.to_string().contains("ExpiredSignature") {
- JwtError::Expired
- } else {
- JwtError::DecodeError(e.to_string())
- }
- })
-}
-
-/// Extract token from Authorization header
-///
-/// # Arguments
-/// * `auth_header` - The Authorization header value
-///
-/// # Returns
-/// * `Option<&str>` - The token if present and valid format
-pub fn extract_token_from_header(auth_header: &str) -> Option<&str> {
- if auth_header.starts_with("Bearer ") {
- Some(&auth_header[7..])
- } else {
- None
- }
-}
-
-#[cfg(test)]
-mod tests {
- use super::*;
-
- fn test_config() -> JwtConfig {
- JwtConfig {
- secret: "test_secret_key_for_testing".to_string(),
- access_token_expiration: 3600,
- refresh_token_expiration: 604800,
- }
- }
-
- #[test]
- fn test_generate_and_validate_access_token() {
- let config = test_config();
- let token =
- generate_access_token(123, "testuser", &config).expect("Failed to generate token");
-
- let claims = validate_token(&token, &config).expect("Failed to validate token");
-
- assert_eq!(claims.sub, "123");
- assert_eq!(claims.login, "testuser");
- assert_eq!(claims.token_type, TokenType::Access);
- }
-
- #[test]
- fn test_generate_and_validate_refresh_token() {
- let config = test_config();
- let token =
- generate_refresh_token(456, "anotheruser", &config).expect("Failed to generate token");
-
- let claims = validate_token(&token, &config).expect("Failed to validate token");
-
- assert_eq!(claims.sub, "456");
- assert_eq!(claims.login, "anotheruser");
- assert_eq!(claims.token_type, TokenType::Refresh);
- }
-
- #[test]
- fn test_invalid_token() {
- let config = test_config();
- let result = validate_token("invalid.token.here", &config);
- assert!(result.is_err());
- }
-
- #[test]
- fn test_token_with_wrong_secret() {
- let config = test_config();
- let token = generate_access_token(789, "user", &config).expect("Failed to generate token");
-
- let wrong_config = JwtConfig {
- secret: "different_secret".to_string(),
- ..config
- };
-
- let result = validate_token(&token, &wrong_config);
- assert!(result.is_err());
- }
-
- #[test]
- fn test_expired_token() {
- // Create a token that's already expired by setting exp in the past
- let now = Utc::now().timestamp();
- let expired_claims = Claims {
- sub: "999".to_string(),
- login: "expireduser".to_string(),
- iat: now - 3600,
- exp: now - 1800, // Expired 30 minutes ago
- token_type: TokenType::Access,
- scope: None,
- metadata: None,
- };
-
- let config = test_config();
-
- let expired_token = encode(
- &Header::default(),
- &expired_claims,
- &EncodingKey::from_secret(config.secret.as_bytes()),
- )
- .expect("Failed to encode token");
-
- // Validate the expired token
- let result = validate_token(&expired_token, &config);
- assert!(matches!(result, Err(JwtError::Expired)));
- }
-
- #[test]
- fn test_extract_token_from_header() {
- let header = "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9";
- let token = extract_token_from_header(header);
- assert_eq!(token, Some("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"));
-
- let invalid_header = "Token abc123";
- let token = extract_token_from_header(invalid_header);
- assert_eq!(token, None);
-
- let no_token = "Bearer ";
- let token = extract_token_from_header(no_token);
- assert_eq!(token, Some(""));
- }
-
- #[test]
- fn test_claims_serialization() {
- let claims = Claims {
- sub: "123".to_string(),
- login: "testuser".to_string(),
- iat: 1234567890,
- exp: 1234571490,
- token_type: TokenType::Access,
- scope: None,
- metadata: None,
- };
-
- let json = serde_json::to_string(&claims).expect("Failed to serialize");
- let deserialized: Claims = serde_json::from_str(&json).expect("Failed to deserialize");
-
- assert_eq!(claims.sub, deserialized.sub);
- assert_eq!(claims.login, deserialized.login);
- assert_eq!(claims.token_type, deserialized.token_type);
- }
-
- #[test]
- fn test_generate_sensor_token() {
- let config = test_config();
- let trigger_types = vec!["core.timer".to_string(), "core.webhook".to_string()];
-
- let token = generate_sensor_token(
- 999,
- "sensor:core.timer",
- trigger_types.clone(),
- &config,
- Some(86400),
- )
- .expect("Failed to generate sensor token");
-
- let claims = validate_token(&token, &config).expect("Failed to validate token");
-
- assert_eq!(claims.sub, "999");
- assert_eq!(claims.login, "sensor:core.timer");
- assert_eq!(claims.token_type, TokenType::Sensor);
- assert_eq!(claims.scope, Some("sensor".to_string()));
-
- let metadata = claims.metadata.expect("Metadata should be present");
- let trigger_types_from_token = metadata["trigger_types"]
- .as_array()
- .expect("trigger_types should be an array");
-
- assert_eq!(trigger_types_from_token.len(), 2);
- }
-}
+pub use attune_common::auth::jwt::{
+ extract_token_from_header, generate_access_token, generate_execution_token,
+ generate_refresh_token, generate_sensor_token, generate_token, validate_token, Claims,
+ JwtConfig, JwtError, TokenType,
+};
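For reference, the header-extraction helper now re-exported from `attune_common` can be sketched as a std-only standalone copy — behaviorally equivalent to the deleted implementation above (which checked a `"Bearer "` prefix and sliced past it), not the canonical code:

```rust
/// Standalone sketch: pull the token out of an `Authorization` header value.
/// `strip_prefix` returns `Some(rest)` only when the "Bearer " prefix matches.
fn extract_token_from_header(auth_header: &str) -> Option<&str> {
    auth_header.strip_prefix("Bearer ")
}

fn main() {
    println!("{:?}", extract_token_from_header("Bearer abc.def.ghi")); // Some("abc.def.ghi")
    println!("{:?}", extract_token_from_header("Token abc123"));       // None
}
```

Note the same edge case the original tests pin down: a bare `"Bearer "` header yields `Some("")`, which downstream validation then rejects as an invalid token.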
diff --git a/crates/api/src/auth/middleware.rs b/crates/api/src/auth/middleware.rs
index 7f2126a..a1d7251 100644
--- a/crates/api/src/auth/middleware.rs
+++ b/crates/api/src/auth/middleware.rs
@@ -10,7 +10,9 @@ use axum::{
use serde_json::json;
use std::sync::Arc;
-use super::jwt::{extract_token_from_header, validate_token, Claims, JwtConfig, TokenType};
+use attune_common::auth::jwt::{
+ extract_token_from_header, validate_token, Claims, JwtConfig, TokenType,
+};
/// Authentication middleware state
#[derive(Clone)]
@@ -105,8 +107,11 @@ impl<S> axum::extract::FromRequestParts<S> for RequireAuth
_ => AuthError::InvalidToken,
})?;
- // Allow both access tokens and sensor tokens
- if claims.token_type != TokenType::Access && claims.token_type != TokenType::Sensor {
+ // Allow access, sensor, and execution-scoped tokens
+ if claims.token_type != TokenType::Access
+ && claims.token_type != TokenType::Sensor
+ && claims.token_type != TokenType::Execution
+ {
return Err(AuthError::InvalidToken);
}
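The widened middleware check reads as an allowlist over token types: refresh tokens must never authorize API requests, while access, sensor, and the new execution-scoped tokens all pass. A minimal sketch (the `TokenType` enum here is reproduced for illustration, not imported from `attune_common`):

```rust
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum TokenType { Access, Refresh, Sensor, Execution }

/// Mirrors the middleware guard: everything except Refresh is accepted.
fn token_type_allowed(t: TokenType) -> bool {
    matches!(t, TokenType::Access | TokenType::Sensor | TokenType::Execution)
}

fn main() {
    println!("{}", token_type_allowed(TokenType::Execution)); // true
    println!("{}", token_type_allowed(TokenType::Refresh));   // false
}
```

Expressing the guard as a positive allowlist (rather than the previous chain of `!=` comparisons) keeps the next token type addition a one-variant change.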
@@ -154,7 +159,7 @@ mod tests {
login: "testuser".to_string(),
iat: 1234567890,
exp: 1234571490,
- token_type: super::super::jwt::TokenType::Access,
+ token_type: TokenType::Access,
scope: None,
metadata: None,
};
diff --git a/crates/api/src/dto/artifact.rs b/crates/api/src/dto/artifact.rs
new file mode 100644
index 0000000..2739cd2
--- /dev/null
+++ b/crates/api/src/dto/artifact.rs
@@ -0,0 +1,471 @@
+//! Artifact DTOs for API requests and responses
+
+use chrono::{DateTime, Utc};
+use serde::{Deserialize, Serialize};
+use serde_json::Value as JsonValue;
+use utoipa::{IntoParams, ToSchema};
+
+use attune_common::models::enums::{ArtifactType, OwnerType, RetentionPolicyType};
+
+// ============================================================================
+// Artifact DTOs
+// ============================================================================
+
+/// Request DTO for creating a new artifact
+#[derive(Debug, Clone, Deserialize, ToSchema)]
+pub struct CreateArtifactRequest {
+ /// Artifact reference (unique identifier, e.g. "build.log", "test.results")
+ #[schema(example = "mypack.build_log")]
+ pub r#ref: String,
+
+ /// Owner scope type
+ #[schema(example = "action")]
+ pub scope: OwnerType,
+
+ /// Owner identifier (ref string of the owning entity)
+ #[schema(example = "mypack.deploy")]
+ pub owner: String,
+
+ /// Artifact type
+ #[schema(example = "file_text")]
+ pub r#type: ArtifactType,
+
+ /// Retention policy type
+ #[serde(default = "default_retention_policy")]
+ #[schema(example = "versions")]
+ pub retention_policy: RetentionPolicyType,
+
+ /// Retention limit (number of versions, days, hours, or minutes depending on policy)
+ #[serde(default = "default_retention_limit")]
+ #[schema(example = 5)]
+ pub retention_limit: i32,
+
+ /// Human-readable name
+ #[schema(example = "Build Log")]
+ pub name: Option<String>,
+
+ /// Optional description
+ #[schema(example = "Output log from the build action")]
+ pub description: Option<String>,
+
+ /// MIME content type (e.g. "text/plain", "application/json")
+ #[schema(example = "text/plain")]
+ pub content_type: Option<String>,
+
+ /// Execution ID that produced this artifact
+ #[schema(example = 42)]
+ pub execution: Option<i64>,
+
+ /// Initial structured data (for progress-type artifacts or metadata)
+ #[schema(value_type = Option