Attune Project Rules
Project Overview
Attune is an event-driven automation and orchestration platform built in Rust, similar to StackStorm. It enables building complex workflows triggered by events with multi-tenancy, RBAC, and human-in-the-loop capabilities.
Development Status: Pre-Production
This project is under active development with no users, deployments, or stable releases.
Breaking Changes Policy
- Breaking changes are explicitly allowed and encouraged when they improve the architecture, API design, or developer experience
- No backward compatibility required - there are no existing versions to support
- Database migrations can be modified or consolidated - no production data exists
- API contracts can change freely - no external integrations depend on them; only internal interfaces with other services and the web UI must be maintained
- Configuration formats can be redesigned - no existing config files need migration
- Service interfaces can be refactored - no live deployments to worry about
When this project reaches v1.0 or gets its first production deployment, this section should be removed and replaced with appropriate stability guarantees and versioning policies.
Languages & Core Technologies
- Primary Language: Rust 2021 edition
- Database: PostgreSQL 16+ with TimescaleDB 2.17+ (primary data store + LISTEN/NOTIFY pub/sub + time-series history)
- Message Queue: RabbitMQ 3.12+ (via lapin)
- Cache: Redis 7.0+ (optional)
- Web UI: TypeScript + React 19 + Vite
- Async Runtime: Tokio
- Web Framework: Axum 0.8
- ORM: SQLx (compile-time query checking)
Project Structure (Cargo Workspace)
attune/
├── Cargo.toml # Workspace root
├── config.{development,test}.yaml # Environment configs
├── Makefile # Common dev tasks
├── crates/ # Rust services
│ ├── common/ # Shared library (models, db, repos, mq, config, error, template_resolver)
│ ├── api/ # REST API service (8080)
│ ├── executor/ # Execution orchestration service
│ ├── worker/ # Action execution service (multi-runtime)
│ ├── sensor/ # Event monitoring service
│ ├── notifier/ # Real-time notification service
│ └── cli/ # Command-line interface
├── migrations/ # SQLx database migrations (19 tables)
├── web/ # React web UI (Vite + TypeScript)
├── packs/ # Pack bundles
│ └── core/ # Core pack (timers, HTTP, etc.)
├── docs/ # Technical documentation
├── scripts/ # Helper scripts (DB setup, testing)
└── tests/ # Integration tests
Service Architecture (Distributed Microservices)
- attune-api: REST API gateway, JWT auth, all client interactions
- attune-executor: Manages execution lifecycle, scheduling, policy enforcement, workflow orchestration
- attune-worker: Executes actions in multiple runtimes (Python/Node.js/containers)
- attune-sensor: Monitors triggers, generates events
- attune-notifier: Real-time notifications via PostgreSQL LISTEN/NOTIFY + WebSocket (port 8081)
  - PostgreSQL listener: Uses `PgListener::listen_all()` (single batch command) to subscribe to all 11 channels. Do NOT use individual `listen()` calls in a loop — this leaves the listener in a broken state where it stops receiving after the last call.
  - Artifact notifications: `artifact_created` and `artifact_updated` channels. The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry in the `data` JSONB array for progress-type artifacts, enabling inline progress bars without extra API calls. The Web UI uses the `useArtifactStream` hook to subscribe to `entity_type:artifact` notifications and invalidate React Query caches + push progress summaries to an `artifact_progress` cache key.
  - WebSocket protocol (client → server): `{"type":"subscribe","filter":"entity:execution:<id>"}` — filter formats: `all`, `entity_type:<type>`, `entity:<type>:<id>`, `user:<id>`, `notification_type:<type>`
  - WebSocket protocol (server → client): All messages use `#[serde(tag="type")]` — `{"type":"welcome","client_id":"...","message":"..."}` on connect; `{"type":"notification","notification_type":"...","entity_type":"...","entity_id":...,"payload":{...},"user_id":null,"timestamp":"..."}` for notifications; `{"type":"error","message":"..."}` for errors
  - Key invariant: The outgoing task in `websocket_server.rs` MUST wrap `Notification` in `ClientMessage::Notification(notification)` before serializing — bare `Notification` serialization omits the `"type"` field and breaks clients
Communication: Services communicate via RabbitMQ for async operations
Docker Compose Orchestration
All Attune services run via Docker Compose.
- Compose file: `docker-compose.yaml` (root directory)
- Configuration: `config.docker.yaml` (Docker-specific settings, including `artifacts_dir: /opt/attune/artifacts`)
- Default user: `test@attune.local` / `TestPass123!` (auto-created)
Services:
- Infrastructure: postgres (TimescaleDB), rabbitmq, redis
- Init (run-once): migrations, init-user, init-packs
- Application: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000)
Volumes (named):
- `postgres_data`, `rabbitmq_data`, `redis_data` — infrastructure state
- `packs_data` — pack files (shared across all services)
- `runtime_envs` — isolated runtime environments (virtualenvs, node_modules)
- `artifacts_data` — file-backed artifact storage (shared between API rw, workers rw, executor ro)
- `*_logs` — per-service log volumes
Commands:
docker compose up -d # Start all services
docker compose down # Stop all services
docker compose logs -f <svc> # View logs
Key environment overrides: JWT_SECRET, ENCRYPTION_KEY (required for production)
Docker Build Optimization
- Optimized Dockerfiles: `docker/Dockerfile.optimized`, `docker/Dockerfile.worker.optimized`, and `docker/Dockerfile.sensor.optimized`
- Strategy: Selective crate copying — only copy the crates needed for each service (not the entire workspace)
- Performance: 90% faster incremental builds (~30 sec vs ~5 min for code changes)
- BuildKit cache mounts: Persist cargo registry and compilation artifacts between builds
- Cache strategy: `sharing=shared` for registry/git (concurrent-safe), service-specific IDs for target caches
- Parallel builds: 4x faster than the old `sharing=locked` strategy — no serialization overhead
- Rustc stack size: All Rust Dockerfiles set `ENV RUST_MIN_STACK=67108864` (64 MiB) in the build stage to prevent `rustc` SIGSEGV crashes during release compilation. The `Makefile` also exports this variable for local builds.
- Documentation: See `docs/docker-layer-optimization.md`, `docs/QUICKREF-docker-optimization.md`, `docs/QUICKREF-buildkit-cache-strategy.md`
Docker Runtime Standardization
- Base image: All worker and sensor runtime stages use
debian:bookworm-slim(ordebian:bookwormfor worker-full) - Python: Always installed via
apt-get install python3 python3-pip python3-venv→ binary at/usr/bin/python3 - Node.js: Always installed via NodeSource apt repo (
setup_${NODE_VERSION}.x) → binary at/usr/bin/node - NEVER use
python:ornode:Docker images as base — they install binaries at/usr/local/bin/which causes broken venv symlinks when multiple containers share theruntime_envsvolume - UID: All containers use UID 1000 for the
attuneuser - Venv creation: Uses
--copiesflag (python3 -m venv --copies) to avoid cross-container broken symlinks - Worker targets:
worker-base(shell),worker-python(shell+python),worker-node(shell+node),worker-full(all) - Sensor targets:
sensor-base(native only),sensor-full(native+python+node)
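Under these rules, environment creation reduces to one canonical command line. A sketch (the helper name and structure are illustrative, not the worker's actual code):

```rust
// Illustrative: builds the argv for standardized venv creation. Key points:
// the fixed /usr/bin/python3 interpreter path (apt-installed, identical in
// every container) and --copies, which avoids interpreter symlinks that
// break when the runtime_envs volume is shared across containers.
fn venv_create_args(env_dir: &str) -> Vec<String> {
    vec![
        "/usr/bin/python3".to_string(),
        "-m".to_string(),
        "venv".to_string(),
        "--copies".to_string(),
        env_dir.to_string(),
    ]
}

fn main() {
    let args = venv_create_args("/opt/attune/runtime_envs/python_example/python");
    assert_eq!(args[0], "/usr/bin/python3");
    assert!(args.contains(&"--copies".to_string()));
    println!("{}", args.join(" "));
}
```

With `--copies`, the venv contains a real interpreter binary rather than a symlink back into a container-specific path, so any container sharing the volume can execute it.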
Packs Volume Architecture
- Key Principle: Packs are NOT copied into Docker images - they are mounted as volumes
- Volume Flow: Host `./packs/` → `init-packs` service → `packs_data` volume → mounted in all services
- Benefits: Update packs with a restart (~5 sec) instead of a rebuild (~5 min)
- Pack Binaries: Built separately with `./scripts/build-pack-binaries.sh` (GLIBC compatibility)
- Development: Use `./packs.dev/` for instant testing (direct bind mount, no restart needed)
- Documentation: See `docs/QUICKREF-packs-volumes.md`
Runtime Environments Volume
- Key Principle: Runtime environments (virtualenvs, node_modules) are stored OUTSIDE pack directories
- Volume: `runtime_envs` named volume mounted at `/opt/attune/runtime_envs` in worker, sensor, and API containers
- Path Pattern: `{runtime_envs_dir}/{pack_ref}/{runtime_name}` (e.g., `/opt/attune/runtime_envs/python_example/python`)
- Creation: Worker creates environments proactively at startup and via `pack.registered` MQ events; lightweight existence check at execution time
- Broken venv auto-repair: Worker detects broken interpreter symlinks (e.g., from mismatched container python paths) and automatically recreates the environment
- API best-effort: API attempts environment setup during pack registration but logs and defers to the worker on failure (Docker API containers lack interpreters)
- Pack directories remain read-only: Packs mounted `:ro` in workers; all generated env files go to the `runtime_envs` volume
- Config: `runtime_envs_dir` setting in config YAML (default: `/opt/attune/runtime_envs`)
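The path pattern above is mechanical enough to sketch directly (the helper name is illustrative, not the repository's actual code):

```rust
use std::path::PathBuf;

// Sketch of the documented layout {runtime_envs_dir}/{pack_ref}/{runtime_name}.
// Environments live outside pack directories, keyed by pack ref and runtime
// name, so packs can stay mounted read-only.
fn runtime_env_path(runtime_envs_dir: &str, pack_ref: &str, runtime_name: &str) -> PathBuf {
    [runtime_envs_dir, pack_ref, runtime_name].iter().collect()
}

fn main() {
    let p = runtime_env_path("/opt/attune/runtime_envs", "python_example", "python");
    assert_eq!(
        p.to_str().unwrap(),
        "/opt/attune/runtime_envs/python_example/python"
    );
    println!("{}", p.display());
}
```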
Domain Model & Event Flow
Critical Event Flow:
Sensor → Trigger fires → Event created → Rule evaluates →
Enforcement created → Execution scheduled → Worker executes Action
For workflows:
Execution requested → Scheduler detects workflow_def → Loads definition →
Creates workflow_execution record → Dispatches entry-point tasks as child executions →
Completion listener advances workflow → Schedules successor tasks → Completes workflow
Key Entities (all in public schema, IDs are i64):
- Pack: Bundle of automation components (actions, sensors, rules, triggers, runtimes)
- Runtime: Unified execution environment definition (Python, Shell, Node.js, etc.) — used by both actions and sensors. Configured via `execution_config` JSONB (interpreter, environment setup, dependency management, env_vars). No type distinction; whether a runtime is executable is determined by its `execution_config` content.
- RuntimeVersion: A specific version of a runtime (e.g., Python 3.12.1, Node.js 20.11.0). Each version has its own `execution_config` and `distributions` for version-specific interpreter paths, verification commands, and environment setup. Actions and sensors can declare an optional `runtime_version_constraint` (semver range) to select a compatible version at execution time.
- Trigger: Event type definition (e.g., "webhook_received")
- Sensor: Monitors for trigger conditions, creates events
- Event: Instance of a trigger firing with payload
- Action: Executable task with parameters
- Rule: Links triggers to actions with conditional logic
- Enforcement: Represents a rule activation
- Execution: Single action run; supports parent-child relationships for workflows
- Workflow Tasks: Workflow-specific metadata stored in the `execution.workflow_task` JSONB field
- Inquiry: Human-in-the-loop async interaction (approvals, inputs)
- Identity: User/service account with RBAC permissions
- Key: Secrets/config storage. The `value` column is JSONB — keys can store strings, objects, arrays, numbers, or booleans. Keys are unencrypted by default; use `--encrypt`/`-e` (CLI) or `"encrypted": true` (API) to encrypt. When encrypted, the JSON value is serialised to a compact string, encrypted with AES-256-GCM, and stored as a JSON string; decryption reverses this. The `encrypt_json`/`decrypt_json` helpers in `attune_common::crypto` handle this — all services use this single shared implementation (the worker's `SecretManager` delegates directly to `attune_common::crypto::decrypt_json`; it no longer has its own bespoke encryption code). The ciphertext format is `BASE64(nonce_bytes ++ ciphertext_bytes)` everywhere. The worker's `SecretManager` returns `HashMap<String, JsonValue>` and secrets are merged directly into action parameters (no `Value::String` wrapping). The workflow `keystore` namespace already uses `JsonValue`, so structured secrets are natively accessible (e.g., `{{ keystore.db_credentials.password }}`). The CLI `key show` command displays a SHA-256 hash of the value by default; pass `--decrypt`/`-d` to reveal the actual value.
- Artifact: Tracked output from executions (files, logs, progress indicators). Metadata + optional structured `data` (JSONB). Linked to execution via plain BIGINT (no FK). Supports retention policies (version-count or time-based). File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use disk-based storage on a shared volume; Progress and Url artifacts use DB storage. Each artifact has a `visibility` field (`ArtifactVisibility` enum: `public` or `private`, DB default `private`). Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. Type-aware API default: when `visibility` is omitted from `POST /api/v1/artifacts`, the API defaults to `public` for Progress artifacts (informational status indicators anyone watching an execution should see) and `private` for all other types. Callers can always override by explicitly setting `visibility`. Full RBAC enforcement is deferred — the column and basic filtering are in place for future permission checks.
- ArtifactVersion: Immutable content snapshot for an artifact. File-type versions store a `file_path` (relative path on the shared volume) with `content` BYTEA left NULL. DB-stored versions use `content` BYTEA and/or `content_json` JSONB. Version number auto-assigned via `next_artifact_version()`. Retention trigger auto-deletes the oldest versions beyond the limit. Invariant: exactly one of `content`, `content_json`, or `file_path` should be non-NULL per row.
Key Tools & Libraries
Shared Dependencies (workspace-level)
- Async: tokio, async-trait, futures
- Web: axum, tower, tower-http
- Database: sqlx (with postgres, json, chrono, uuid features)
- Serialization: serde, serde_json, serde_yaml_ng
- Version Matching: semver (with serde feature)
- Logging: tracing, tracing-subscriber
- Error Handling: anyhow, thiserror
- Config: config crate (YAML + env vars)
- Validation: validator
- Auth: jsonwebtoken, argon2
- CLI: clap
- OpenAPI: utoipa, utoipa-swagger-ui
- Message Queue: lapin (RabbitMQ)
- HTTP Client: reqwest
- Archive/Compression: tar, flate2 (used for pack upload/extraction)
- Testing: mockall, tempfile, serial_test
Web UI Dependencies
- Framework: React 19 + react-router-dom
- State: Zustand, @tanstack/react-query
- HTTP: axios (with generated OpenAPI client)
- Styling: Tailwind CSS
- Icons: lucide-react
- Build: Vite, TypeScript
Configuration System
- Primary: YAML config files (`config.yaml`, `config.{env}.yaml`)
- Overrides: Environment variables with prefix `ATTUNE__` and separator `__`
  - Example: `ATTUNE__DATABASE__URL`, `ATTUNE__SERVER__PORT`, `ATTUNE__RUNTIME_ENVS_DIR`
- Loading Priority: Base config → env-specific config → env vars
- Required for Production: `JWT_SECRET`, `ENCRYPTION_KEY` (32+ chars)
- Location: Root directory or `ATTUNE_CONFIG` env var path
- Key Settings:
  - `packs_base_dir` — Where pack files are stored (default: `/opt/attune/packs`)
  - `runtime_envs_dir` — Where isolated runtime environments are created (default: `/opt/attune/runtime_envs`)
  - `artifacts_dir` — Where file-backed artifacts are stored (default: `/opt/attune/artifacts`). Shared volume between API and workers.
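The prefix/separator convention maps each environment variable to a nested config path. A std-only sketch of that mapping (it mirrors how the `config` crate's prefix and separator settings behave; the helper itself is illustrative):

```rust
// ATTUNE__DATABASE__URL -> "database.url"
// ATTUNE__RUNTIME_ENVS_DIR -> "runtime_envs_dir" (single underscores are
// preserved inside a key, which is why the separator is doubled).
fn env_var_to_config_path(var: &str) -> Option<String> {
    let rest = var.strip_prefix("ATTUNE__")?;
    Some(
        rest.split("__")
            .map(|segment| segment.to_lowercase())
            .collect::<Vec<_>>()
            .join("."),
    )
}

fn main() {
    assert_eq!(
        env_var_to_config_path("ATTUNE__DATABASE__URL").as_deref(),
        Some("database.url")
    );
    assert_eq!(
        env_var_to_config_path("ATTUNE__RUNTIME_ENVS_DIR").as_deref(),
        Some("runtime_envs_dir")
    );
    // Non-prefixed variables are ignored.
    assert_eq!(env_var_to_config_path("PATH"), None);
}
```

The double underscore is what lets top-level keys like `runtime_envs_dir` contain single underscores without being split into nested paths.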
Authentication & Security
- Auth Type: JWT (access tokens: 1h, refresh tokens: 7d)
- Password Hashing: Argon2id
- Protected Routes: Use
RequireAuth(user)extractor in Axum - External Identity Providers: OIDC and LDAP are supported as optional login methods alongside local username/password. Both upsert an
identityrow on first login and store provider-specific claims underattributes.oidcorattributes.ldaprespectively. The web UI login page adapts dynamically based on theGET /auth/settingsresponse, showing/hiding each method. The?auth=<provider_name>query parameter overrides which method is displayed (e.g.,?auth=direct,?auth=sso,?auth=ldap).- OIDC (
crates/api/src/auth/oidc.rs): Browser-redirect flow using theopenidconnectcrate. Config:security.oidcin YAML. Routes:GET /auth/oidc/login(redirect to provider),GET /auth/callback(authorization code exchange). Identity matched byattributes->'oidc'->>'issuer'+attributes->'oidc'->>'sub'. Supports PKCE, ID token verification via JWKS, userinfo endpoint enrichment, and provider-initiated logout viaend_session_endpoint. - LDAP (
crates/api/src/auth/ldap.rs): Server-side bind flow using theldap3crate. Config:security.ldapin YAML. Route:POST /auth/ldap/login(accepts{login, password}, returnsTokenResponse). Two authentication modes: direct bind (construct DN frombind_dn_templatewith{login}placeholder) or search-and-bind (bind as service account → searchuser_search_basewithuser_filter→ re-bind as discovered DN). Identity matched byattributes->'ldap'->>'server_url'+attributes->'ldap'->>'dn'. Supports STARTTLS, TLS cert skip (danger_skip_tls_verify), and configurable attribute mapping (login_attr,email_attr,display_name_attr,group_attr). - Login Page Config (
security.login_page):show_local_login,show_oidc_login,show_ldap_login— all default totrue. Controls which methods are visible by default on the web UI login page.
- OIDC (
- Secrets Storage: AES-GCM encrypted in
keytable (JSONBvaluecolumn) with scoped ownership. Supports structured values (objects, arrays) in addition to plain strings. All encryption/decryption goes throughattune_common::crypto(encrypt_json/decrypt_json) — the worker'sSecretManagerno longer has its own crypto implementation, eliminating a prior ciphertext format incompatibility between the API (BASE64(nonce++ciphertext)) and the old worker code (BASE64(nonce):BASE64(ciphertext)). The worker stores the raw encryption key string and passes it to the shared crypto module, which derives the AES-256 key internally via SHA-256. - User Info: Stored in
identitytable
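The ciphertext layout (nonce concatenated with ciphertext, then base64-encoded) can be sketched on the byte level. This is an illustration only — the 12-byte nonce length is an assumption based on the AES-GCM standard default, not something confirmed by this document, and real code would base64-decode before splitting and then decrypt with the SHA-256-derived key:

```rust
// Illustrative sketch of the documented BASE64(nonce_bytes ++ ciphertext_bytes)
// layout. NONCE_LEN = 12 is the conventional AES-GCM nonce size (assumption).
const NONCE_LEN: usize = 12;

// Splits an already base64-decoded blob into (nonce, ciphertext).
fn split_nonce_and_ciphertext(decoded: &[u8]) -> Option<(&[u8], &[u8])> {
    if decoded.len() < NONCE_LEN {
        return None; // too short to contain a nonce at all
    }
    Some(decoded.split_at(NONCE_LEN))
}

fn main() {
    let mut blob = vec![0u8; NONCE_LEN]; // nonce placeholder
    blob.extend_from_slice(b"ciphertext-bytes");
    let (nonce, ct) = split_nonce_and_ciphertext(&blob).unwrap();
    assert_eq!(nonce.len(), NONCE_LEN);
    assert_eq!(ct, b"ciphertext-bytes");
    // A blob shorter than the nonce is malformed.
    assert!(split_nonce_and_ciphertext(&[0u8; 4]).is_none());
}
```

Keeping nonce and ciphertext in one concatenated blob (rather than the old `BASE64(nonce):BASE64(ciphertext)` worker format) is what makes the single shared `attune_common::crypto` implementation possible.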
Code Conventions & Patterns
General
- Error Handling: Use `attune_common::error::Error` and the `Result<T>` type alias
- Async Everywhere: All I/O operations use async/await with Tokio
- Module Structure: Public API exposed via `mod.rs` with `pub use` re-exports
Database Layer
- Schema: All tables use unqualified names; schema determined by PostgreSQL `search_path`
- Production: Always uses the `public` schema (configured explicitly in `config.production.yaml`)
- Tests: Each test uses an isolated schema (e.g., `test_a1b2c3d4`) for true parallel execution
- Schema Resolution: PostgreSQL `search_path` mechanism, NO hardcoded schema prefixes in queries
- Models: Defined in `common/src/models.rs` with `#[derive(FromRow)]` for SQLx
- Repositories: One per entity in `common/src/repositories/`, provides CRUD + specialized queries
- Pattern: Services MUST interact with the DB only through the repository layer (no direct queries)
- Transactions: Use SQLx transactions for multi-table operations
- IDs: All IDs are `i64` (BIGSERIAL in PostgreSQL)
- Timestamps: `created`/`updated` columns auto-managed by DB triggers
- JSON Fields: Use `serde_json::Value` for flexible attributes/parameters, including the `execution.workflow_task` JSONB
- Enums: PostgreSQL enum types mapped with `#[sqlx(type_name = "...")]`
- Workflow Tasks: Stored as JSONB in `execution.workflow_task` (consolidated from a separate table 2026-01-27)
- FK ON DELETE Policy: Historical records (executions) use `ON DELETE SET NULL` so they survive entity deletion while preserving text ref fields (`action_ref`, `trigger_ref`, etc.) for auditing. The `event`, `enforcement`, and `execution` tables are TimescaleDB hypertables, so they cannot be the target of FK constraints — `enforcement.event`, `execution.enforcement`, `inquiry.execution`, `workflow_execution.execution`, `execution.parent`, and `execution.original_execution` are plain BIGINT columns (no FK) and may become dangling references if the referenced row is deleted. Pack-owned entities (actions, triggers, sensors, rules, runtimes) use `ON DELETE CASCADE` from pack. Workflow executions cascade-delete with their workflow definition.
- Event Table (TimescaleDB Hypertable): The `event` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Events are immutable after insert — there is no `updated` column, no update trigger, and no `Update` repository impl. The `Event` model has no `updated` field. Compression is segmented by `trigger_ref` (after 7 days) and retention is 90 days. The `event_volume_hourly` continuous aggregate queries the `event` table directly.
- Enforcement Table (TimescaleDB Hypertable): The `enforcement` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Enforcements are updated exactly once — the executor sets `status` from `created` to `processed` or `disabled` within ~1 second of creation, well before the 7-day compression window. The `resolved_at` column (nullable `TIMESTAMPTZ`) records when this transition occurred; it is `NULL` while status is `created`. There is no `updated` column. Compression is segmented by `rule_ref` (after 7 days) and retention is 90 days. The `enforcement_volume_hourly` continuous aggregate queries the `enforcement` table directly.
- Execution Table (TimescaleDB Hypertable): The `execution` table is a TimescaleDB hypertable partitioned on `created` (1-day chunks). Executions are updated ~4 times during their lifecycle (requested → scheduled → running → completed/failed), completing within at most ~1 day — well before the 7-day compression window. The `updated` column and its BEFORE UPDATE trigger are preserved (used by the timeout monitor and UI). The `started_at` column (nullable `TIMESTAMPTZ`) records when the worker picked up the execution (status → `running`); it is `NULL` until then. Duration in the UI is computed as `updated - started_at` (not `updated - created`) so that queue/scheduling wait time is excluded. Compression is segmented by `action_ref` (after 7 days) and retention is 90 days. The `execution_volume_hourly` continuous aggregate queries the execution hypertable directly. The `execution_history` hypertable (field-level diffs) and its continuous aggregates (`execution_status_hourly`, `execution_throughput_hourly`) are preserved alongside — they serve complementary purposes (change tracking vs. volume monitoring).
- Entity History Tracking (TimescaleDB): Append-only `<table>_history` hypertables track field-level changes to the `execution` and `worker` tables. Populated by PostgreSQL `AFTER INSERT OR UPDATE OR DELETE` triggers — no Rust code changes needed for recording. Uses JSONB diff format (`old_values`/`new_values`) with a `changed_fields TEXT[]` column for efficient filtering. Worker heartbeat-only updates are excluded. There are no `event_history` or `enforcement_history` tables — events are immutable and enforcements have a single deterministic status transition, so both tables are hypertables themselves. See `docs/plans/timescaledb-entity-history.md` for the full design. The execution history trigger tracks: `status`, `result`, `executor`, `workflow_task`, `env_vars`, `started_at`.
- History Large-Field Guardrails: The `execution` history trigger stores a compact digest summary instead of the full value for the `result` column (which can be arbitrarily large). The digest is produced by the `_jsonb_digest_summary(JSONB)` helper function and has the shape `{"digest": "md5:<hex>", "size": <bytes>, "type": "<jsonb_typeof>"}`. This preserves change-detection semantics while avoiding history-table bloat. The full result is always available on the live `execution` row. When adding new large JSONB columns to history triggers, use `_jsonb_digest_summary()` instead of storing the raw value.
- Nullable FK Fields: `rule.action` and `rule.trigger` are nullable (`Option<Id>` in Rust) — a rule with a NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, `execution.started_at`, and `event.source` are also nullable. `enforcement.event` is nullable but has no FK constraint (event is a hypertable). `execution.enforcement` is nullable but has no FK constraint (enforcement is a hypertable). All FK columns on the execution table (`action`, `parent`, `original_execution`, `enforcement`, `executor`, `workflow_def`) have no FK constraints (execution is a hypertable). `inquiry.execution` and `workflow_execution.execution` also have no FK constraints. `enforcement.resolved_at` is nullable — `None` while status is `created`, set when resolved. `execution.started_at` is nullable — `None` until the worker sets status to `running`.
- Table Count: 21 tables total in the schema (including `runtime_version`, `artifact_version`, 2 `*_history` hypertables, and the `event`, `enforcement`, and `execution` hypertables)
- Migration Count: 11 migrations (`000001` through `000011`) — see the `migrations/` directory
- Artifact System: The `artifact` table stores metadata + structured data (progress entries via the JSONB `data` column). The `artifact_version` table stores immutable content snapshots — either on disk (via the `file_path` column) or in the DB (via `content` BYTEA / `content_json` JSONB). Version numbering is auto-assigned via the `next_artifact_version()` SQL function. A DB trigger (`enforce_artifact_retention`) auto-deletes the oldest versions when the count exceeds the artifact's `retention_limit`. `artifact.execution` is a plain BIGINT (no FK — execution is a hypertable). Progress-type artifacts use `artifact.data` (atomic JSON array append); file-type artifacts use `artifact_version` rows with `file_path` set. Binary content is excluded from default queries for performance (`SELECT_COLUMNS` vs `SELECT_COLUMNS_WITH_CONTENT`). Visibility: Each artifact has a `visibility` column (`artifact_visibility_enum`: `public` or `private`, DB default `private`). The `CreateArtifactRequest` DTO accepts `visibility` as `Option<ArtifactVisibility>` — when omitted the API route handler applies a type-aware default: `public` for Progress artifacts (informational status indicators), `private` for all other types. Callers can always override explicitly. Public artifacts are viewable by all authenticated users; private artifacts are restricted based on the artifact's `scope` (Identity, Pack, Action, Sensor) and `owner` fields. The visibility field is filterable via the search/list API (`?visibility=public`). Full RBAC enforcement is deferred — the column and basic query filtering are in place for future permission checks. Notifications: `artifact_created` and `artifact_updated` DB triggers (in migration `000008`) fire PostgreSQL NOTIFY with entity_type `artifact` and include `visibility` in the payload. The `artifact_updated` trigger extracts a progress summary (`progress_percent`, `progress_message`, `progress_entries`) from the last entry of the `data` JSONB array for progress-type artifacts. The Web UI `ExecutionProgressBar` component (`web/src/components/executions/ExecutionProgressBar.tsx`) renders an inline progress bar in the Execution Details card using the `useArtifactStream` hook (`web/src/hooks/useArtifactStream.ts`) for real-time WebSocket updates, with polling fallback via `useExecutionArtifacts`.
- File-Based Artifact Storage: File-type artifacts (FileBinary, FileDataTable, FileImage, FileText) use a shared filesystem volume instead of PostgreSQL BYTEA. The `artifact_version.file_path` column stores the relative path from the `artifacts_dir` root (e.g., `mypack/build_log/v1.txt`). Pattern: `{ref_with_dots_as_dirs}/v{version}.{ext}`. The artifact ref (globally unique) is used as the directory key — no execution ID in the path, so artifacts can outlive executions and be shared across them. Endpoint: `POST /api/v1/artifacts/{id}/versions/file` allocates a version number and file path without any file content; the execution process writes the file to `$ATTUNE_ARTIFACTS_DIR/{file_path}`. Download: `GET /api/v1/artifacts/{id}/download` and version-specific downloads check `file_path` first (read from disk), falling back to DB BYTEA/JSON. Finalization: After an execution exits, the worker stats all file-backed versions for that execution and updates `size_bytes` on both `artifact_version` and parent `artifact` rows via direct DB access. Cleanup: Delete endpoints remove disk files before deleting DB rows; empty parent directories are cleaned up. Backward compatible: Existing DB-stored artifacts (`file_path = NULL`) continue to work unchanged.
- Pack Component Loading Order: Runtimes → Triggers → Actions (+ workflow definitions) → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order. When an action YAML contains a `workflow_file` field, the loader creates/updates the referenced `workflow_definition` record and links it to the action during the Actions phase.
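The type-aware visibility default described in the Artifact System rules can be sketched as follows — the enums and helper are illustrative stand-ins for the real `ArtifactVisibility` and artifact-type definitions:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum ArtifactVisibility {
    Public,
    Private,
}

#[derive(Debug, PartialEq, Clone, Copy)]
enum ArtifactType {
    Progress,
    Url,
    FileBinary,
    FileDataTable,
    FileImage,
    FileText,
}

// When the request omits `visibility`, Progress artifacts default to public
// (status indicators anyone watching an execution should see); everything
// else defaults to private. An explicit value always wins.
fn effective_visibility(
    requested: Option<ArtifactVisibility>,
    artifact_type: ArtifactType,
) -> ArtifactVisibility {
    requested.unwrap_or(match artifact_type {
        ArtifactType::Progress => ArtifactVisibility::Public,
        _ => ArtifactVisibility::Private,
    })
}

fn main() {
    assert_eq!(
        effective_visibility(None, ArtifactType::Progress),
        ArtifactVisibility::Public
    );
    assert_eq!(
        effective_visibility(None, ArtifactType::FileText),
        ArtifactVisibility::Private
    );
    // Explicit override beats the type-aware default.
    assert_eq!(
        effective_visibility(Some(ArtifactVisibility::Private), ArtifactType::Progress),
        ArtifactVisibility::Private
    );
}
```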
Workflow Execution Orchestration
- Detection: The
ExecutionSchedulerchecksaction.workflow_def.is_some()before dispatching to a worker. Workflow actions are orchestrated by the executor, not sent to workers. - Orchestration Flow: Scheduler loads the
WorkflowDefinition, builds aTaskGraph, creates aworkflow_executionrecord, marks the parent execution as Running, builds an initialWorkflowContextfrom execution parameters and workflow vars, then dispatches entry-point tasks as child executions via MQ with rendered inputs. - Template Resolution: Task inputs are rendered through
WorkflowContext.render_json()before dispatching. Uses the expression engine for full operator/function support inside{{ }}. Canonical namespaces:parameters,workflow(mutable vars),task(results),config(pack config),keystore(secrets),item,index,system. Backward-compat aliases:vars/variables→workflow,tasks→task, bare names →workflowfallback. Type-preserving: pure template expressions like"{{ item }}"preserve the JSON type (integer5stays as5, not string"5"). Mixed expressions like"Sleeping for {{ item }} seconds"remain strings. - Function Expressions:
{{ result() }}returns the last completed task's result.{{ result().field.subfield }}navigates into it.{{ succeeded() }},{{ failed() }},{{ timed_out() }}return booleans. These are evaluated byWorkflowContext.try_evaluate_function_call(). - Publish Directives: Transition
publishdirectives are evaluated when a transition fires. Published variables are persisted to theworkflow_execution.variablescolumn and available to subsequent tasks via theworkflownamespace (e.g.,{{ workflow.number_list }}). Values can be any JSON-compatible type: string templates (e.g.,number_list: "{{ result().data.items }}"), booleans (validation_passed: true), numbers (count: 42), arrays, objects, or null. ThePublishDirective::Simplevariant storesHashMap<String, serde_json::Value>. String values are template-rendered with type preservation (pure{{ }}expressions preserve the underlying JSON type); non-string values (booleans, numbers, null) pass throughrender_jsonunchanged —truestays as booleantrue, not string"true". ThePublishVarstruct ingraph.rsuses avalue: JsonValuefield (with#[serde(alias = "expression")]for backward compat with stored task graphs). - Child Task Dispatch: Each workflow task becomes a child execution with the task's actual action ref (e.g.,
`core.echo`), `workflow_task` metadata linking it to the `workflow_execution` record, and a parent reference to the workflow execution. Child executions re-enter the normal scheduling pipeline, so nested workflows work recursively.
- with_items Expansion: Tasks declaring `with_items: "{{ expr }}"` are expanded into child executions. The expression is resolved via the `WorkflowContext` to produce a JSON array, then each item gets its own child execution with `item`/`index` set on the context and `task_index` in `WorkflowTaskMetadata`. Completion tracking waits for ALL sibling items to finish before marking the task as completed/failed and advancing the workflow.
- with_items Concurrency Limiting: ALL child execution records are created in the database up front (with fully-rendered inputs), but only the first `N` are published to the message queue, where `N` is the task's `concurrency` value (default: 1, i.e. serial execution). The remaining children stay at `Requested` status in the DB. As each item completes, `advance_workflow` counts in-flight siblings (scheduling/scheduled/running), calculates free slots (concurrency - in_flight), and calls `publish_pending_with_items_children()`, which queries for `Requested`-status siblings ordered by `task_index` and publishes them. The DB `status = 'requested'` query is the authoritative source of undispatched items — no auxiliary state in workflow variables needed. The task is only marked complete when all siblings reach a terminal state. To run all items in parallel, explicitly set `concurrency` to the list length or a suitably large number.
- Advancement: The `CompletionListener` detects when a completed execution has `workflow_task` metadata and calls `ExecutionScheduler::advance_workflow()`. The scheduler rebuilds the `WorkflowContext` from the persisted `workflow_execution.variables` plus all completed child execution results, sets `last_task_outcome`, evaluates transitions (succeeded/failed/always/timed_out/custom with context-based condition evaluation), processes publish directives, schedules successor tasks with rendered inputs, and completes the workflow when all tasks are done.
- Transition Evaluation: `succeeded()`, `failed()`, `timed_out()`, and `always` (no condition) are supported. Custom conditions are evaluated via `WorkflowContext.evaluate_condition()` with a fallback to fire-on-success if evaluation fails.
- Legacy Coordinator: The prototype `WorkflowCoordinator` in `crates/executor/src/workflow/coordinator.rs` is bypassed — it has hardcoded schema prefixes and is not integrated with the MQ pipeline.
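The concurrency-slot bookkeeping described above can be sketched as a pair of small pure functions. This is an illustrative model only: `free_slots` and `next_to_publish` are hypothetical helper names, and the real logic lives in `advance_workflow` and `publish_pending_with_items_children()` against the database.

```rust
/// Statuses counted as "in flight" when computing free dispatch slots.
const IN_FLIGHT: &[&str] = &["scheduling", "scheduled", "running"];

/// Number of additional with_items children that may be published now.
fn free_slots(concurrency: usize, sibling_statuses: &[&str]) -> usize {
    let in_flight = sibling_statuses
        .iter()
        .filter(|s| IN_FLIGHT.contains(s))
        .count();
    concurrency.saturating_sub(in_flight)
}

/// Pick the next `slots` children still at `requested`, ordered by task_index.
/// Each sibling is (task_index, status).
fn next_to_publish(siblings: &[(u32, &str)], slots: usize) -> Vec<u32> {
    let mut requested: Vec<u32> = siblings
        .iter()
        .filter(|(_, s)| *s == "requested")
        .map(|(i, _)| *i)
        .collect();
    requested.sort_unstable(); // publish in task_index order
    requested.truncate(slots);
    requested
}

fn main() {
    // concurrency = 2, one item running -> one free slot
    let siblings = [(0, "completed"), (1, "running"), (2, "requested"), (3, "requested")];
    let statuses: Vec<&str> = siblings.iter().map(|(_, s)| *s).collect();
    let slots = free_slots(2, &statuses);
    println!("publish next: {:?}", next_to_publish(&siblings, slots));
}
```

In the real system the `status = 'requested'` query plays the role of the `siblings` slice here, which is why no extra dispatch state needs to be kept in workflow variables.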
Pack File Loading & Action Execution
- Pack Base Directory: Configured via `packs_base_dir` in config (defaults to `/opt/attune/packs`, development uses `./packs`)
- Pack Volume Strategy: Packs are mounted as volumes (NOT copied into Docker images)
  - Host `./packs/` → `packs_data` volume via the `init-packs` service → mounted at `/opt/attune/packs` in all services
  - Development packs in `./packs.dev/` are bind-mounted directly for instant updates
- Pack Binaries: Native binaries (sensors) built separately with `./scripts/build-pack-binaries.sh`
- Action Script Resolution: Worker constructs file paths as `{packs_base_dir}/{pack_ref}/actions/{entrypoint}`
- Workflow Action YAML (`workflow_file` field): An action YAML may include a `workflow_file` field (e.g., `workflow_file: workflows/timeline_demo.yaml`) pointing to a workflow definition file relative to the `actions/` directory. When present, the `PackComponentLoader` reads and parses the referenced workflow YAML, creates/updates a `workflow_definition` record, and links the action to it via `action.workflow_def`. This separates action-level metadata (ref, label, parameters, policies) from the workflow graph (tasks, transitions, variables), and allows multiple actions to reference the same workflow file with different parameter schemas or policy configurations. Workflow actions have no `runner_type` (runtime is `None`) — the executor orchestrates child task executions rather than sending to a worker.
- Action-linked workflow files omit action-level metadata: Workflow files referenced via `workflow_file` should contain only the execution graph: `version`, `vars`, `tasks`, `output_map`. The `ref`, `label`, `description`, `parameters`, `output`, and `tags` fields are omitted — the action YAML is the single authoritative source for those values. The `WorkflowDefinition` parser accepts empty `ref`/`label` (defaults to `""`), and the loader/registrar fall back to the action YAML (or filename-derived values) when they are missing. Standalone workflow files (in `workflows/`) still carry their own `ref`/`label` since they have no companion action YAML.
- Workflow File Storage: The visual workflow builder save endpoints (`POST /api/v1/packs/{pack_ref}/workflow-files` and `PUT /api/v1/workflows/{ref}/file`) write two files per workflow:
  - Action YAML at `{packs_base_dir}/{pack_ref}/actions/{name}.yaml` — action-level metadata (`ref`, `label`, `description`, `parameters`, `output`, `tags`, `workflow_file` reference, `enabled`). Built by `build_action_yaml()` in `crates/api/src/routes/workflows.rs`.
  - Workflow YAML at `{packs_base_dir}/{pack_ref}/actions/workflows/{name}.workflow.yaml` — graph-only (`version`, `vars`, `tasks`, `output_map`). The `strip_action_level_fields()` function removes `ref`, `label`, `description`, `parameters`, `output`, and `tags` from the definition before writing. Pack-bundled workflows use the same directory layout and are discovered during pack registration when their companion action YAML contains `workflow_file`.
- Workflow File Discovery (dual-directory scanning): The `WorkflowLoader` scans two directories when loading workflows for a pack: (1) `{pack_dir}/workflows/` (legacy standalone workflow files), and (2) `{pack_dir}/actions/workflows/` (visual-builder and action-linked workflow files). Files with a `.workflow.yaml` suffix have the `.workflow` portion stripped when deriving the workflow name/ref (e.g., `deploy.workflow.yaml` → name `deploy`, ref `pack.deploy`). If the same ref appears in both directories, `actions/workflows/` wins. The `reload_workflow` method searches `actions/workflows/` first, trying the `.workflow.yaml`, `.yaml`, `.workflow.yml`, and `.yml` extensions.
- Task Model (Orquesta-aligned): Tasks are purely action invocations — there is no task `type` field or task-level `when` condition in the UI model. Parallelism is implicit (multiple `do` targets in a transition fan out into parallel branches). Conditions belong exclusively on transitions (`next[].when`). Each task has: `name`, `action`, `input`, `next` (transitions), `delay`, `retry`, `timeout`, `with_items`, `batch_size`, `concurrency`, `join`.
  - The backend `Task` struct (`crates/common/src/workflow/parser.rs`) still supports `type` and task-level `when` for backward compatibility, but the UI never sets them.
- Task Transition Model (Orquesta-style): Tasks use an ordered `next` array of transitions instead of flat `on_success`/`on_failure`/`on_complete`/`on_timeout` fields. Each transition has:
  - `when` — condition expression (e.g., `{{ succeeded() }}`, `{{ failed() }}`, `{{ timed_out() }}`, or custom). Omit for unconditional.
  - `publish` — key-value pairs to publish into the workflow context (e.g., `- result: "{{ result() }}"`)
  - `do` — list of next task names to invoke when the condition is met
  - `label` — optional custom display label (overrides the auto-derived label from the `when` expression)
  - `color` — optional custom CSS color for the transition edge (e.g., `"#ff6600"`)
  - `edge_waypoints` — optional `Record<string, NodePosition[]>` of intermediate routing points per target task name (chart-only, stored in `__chart_meta__`)
  - `label_positions` — optional `Record<string, NodePosition>` of custom label positions per target task name (chart-only, stored in `__chart_meta__`)
- Example YAML:
  ```yaml
  next:
    - when: "{{ succeeded() }}"
      label: "main path"
      color: "#22c55e"
      publish:
        - msg: "task done"
      do:
        - log
        - next_task
    - when: "{{ failed() }}"
      do:
        - error_handler
  ```
- Legacy format support: The parser (`crates/common/src/workflow/parser.rs`) auto-converts legacy `on_success`/`on_failure`/`on_complete`/`on_timeout`/`decision` fields into `next` transitions during parsing. The canonical internal representation always uses `next`.
- Frontend types: `TaskTransition` in `web/src/types/workflow.ts` (includes `edge_waypoints`, `label_positions` for visual routing); `TransitionPreset` (`"succeeded" | "failed" | "always"`) for quick-access drag handles; `WorkflowEdge` includes per-edge `waypoints` and `labelPosition` derived from the transition; `SelectedEdgeInfo` and `EdgeHoverInfo` (includes `targetTaskId`) in `WorkflowEdges.tsx`
- Backend types: `TaskTransition` in `crates/common/src/workflow/parser.rs`; `GraphTransition` in `crates/executor/src/workflow/graph.rs`
- NOT this (legacy format): `on_success: task2` / `on_failure: error_handler` — still parsed for backward compat but normalized to `next`
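The legacy-to-`next` normalization can be illustrated with a simplified sketch. The `Transition` struct and `normalize_legacy` helper below are hypothetical stand-ins for the parser types (the field is named `do_` because `do` is a Rust keyword), and the real conversion also handles `on_complete`, `on_timeout`, and `decision`:

```rust
#[derive(Debug, PartialEq)]
struct Transition {
    when: Option<String>,
    do_: Vec<String>,
}

/// Convert flat legacy fields into an ordered `next` array.
/// Simplified: only on_success / on_failure are modeled here.
fn normalize_legacy(on_success: Option<&str>, on_failure: Option<&str>) -> Vec<Transition> {
    let mut next = Vec::new();
    if let Some(t) = on_success {
        next.push(Transition {
            when: Some("{{ succeeded() }}".to_string()),
            do_: vec![t.to_string()],
        });
    }
    if let Some(t) = on_failure {
        next.push(Transition {
            when: Some("{{ failed() }}".to_string()),
            do_: vec![t.to_string()],
        });
    }
    next
}

fn main() {
    let next = normalize_legacy(Some("task2"), Some("error_handler"));
    println!("{next:?}");
}
```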
- Runtime YAML Loading: Pack registration reads `runtimes/*.yaml` files and inserts them into the `runtime` table. Runtime refs use the format `{pack_ref}.{name}` (e.g., `core.python`, `core.shell`). If the YAML includes a `versions` array, each entry is inserted into the `runtime_version` table with its own `execution_config`, `distributions`, and optional `is_default` flag.
- Runtime Version Constraints: Actions and sensors can declare `runtime_version: ">=3.12"` (or any semver constraint like `~3.12`, `^3.12`, `>=3.12,<4.0`) in their YAML. This is stored in the `runtime_version_constraint` column. At execution time the worker can select the highest available version satisfying the constraint. A bare version like `"3.12"` is treated as tilde (`~3.12` → >=3.12.0, <3.13.0).
- Version Matching Module: `crates/common/src/version_matching.rs` provides `parse_version()` (lenient semver parsing), `parse_constraint()`, `matches_constraint()`, `select_best_version()`, and `extract_version_components()`. Uses the `semver` crate internally.
- Runtime Version Table: `runtime_version` stores version-specific execution configs per runtime. Each row has: `runtime` (FK), `version` (string), `version_major`/`minor`/`patch` (ints for range queries), `execution_config` (complete, not a diff), `distributions` (verification metadata), `is_default`, `available`, `verified_at`, `meta`. Unique on `(runtime, version)`.
- Runtime Selection: Determined by the action's runtime field (e.g., "Shell", "Python"), compared case-insensitively; when an explicit `runtime_name` is set in the execution context, it is authoritative (no fallback to extension matching). When the action also declares a `runtime_version_constraint`, the executor queries `runtime_version` rows, calls `select_best_version()`, and passes the selected version's `execution_config` as an override through `ExecutionContext.runtime_config_override`. The `ProcessRuntime` uses this override instead of its built-in config.
- Worker Runtime Loading: The worker loads all runtimes from the DB that have a non-empty `execution_config` (i.e., runtimes with an interpreter configured). Native runtimes (e.g., `core.native` with empty config) are automatically skipped since they execute binaries directly.
- Worker Startup Sequence: (1) Connect to DB and MQ, (2) Load runtimes from DB → create `ProcessRuntime` instances, (3) Register the worker and set up MQ infrastructure, (4) Verify runtime versions — run verification commands from the `distributions` JSONB for each `RuntimeVersion` row and update the `available` flag (`crates/worker/src/version_verify.rs`), (5) Set up runtime environments — create per-version environments for packs, (6) Start the heartbeat, execution consumer, and pack registration consumer.
- Runtime Name Normalization: The `ATTUNE_WORKER_RUNTIMES` filter (e.g., `shell,node`) uses alias-aware matching via `normalize_runtime_name()` in `crates/common/src/runtime_detection.rs`. This ensures that the filter value `"node"` matches the DB runtime name `"Node.js"` (lowercased to `"node.js"`). Alias groups: `node`/`nodejs`/`node.js` → `node`, `python`/`python3` → `python`, `shell`/`bash`/`sh` → `shell`, `native`/`builtin`/`standalone` → `native`. Used in worker service runtime loading and environment setup.
- Runtime Execution Environment Variables: `RuntimeExecutionConfig.env_vars` (`HashMap<String, String>`) specifies template-based environment variables injected during action execution. Example: `{"NODE_PATH": "{env_dir}/node_modules"}` ensures Node.js finds packages in the isolated environment. Template variables (`{env_dir}`, `{pack_dir}`, `{interpreter}`, `{manifest_path}`) are resolved at execution time by `ProcessRuntime::execute`.
- Native Runtime Detection: Runtime detection is purely data-driven via `execution_config` in the runtime table. A runtime with an empty `execution_config` (or empty `interpreter.binary`) is native — the entrypoint is executed directly without an interpreter. There is no special "builtin" runtime concept.
- Sensor Runtime Assignment: Sensors declare their `runner_type` in YAML (e.g., `python`, `native`). The pack loader resolves this to the correct runtime from the database. The default is `native` (compiled binary, no interpreter). Legacy values `standalone` and `builtin` map to `core.native`.
- Runtime Environment Setup: The worker creates isolated environments (virtualenvs, node_modules) proactively at startup and via `pack.registered` MQ events at `{runtime_envs_dir}/{pack_ref}/{runtime_name}`; setup is idempotent. Environment `create_command` and dependency `install_command` templates MUST use `{env_dir}` (not `{pack_dir}`) since pack directories are mounted read-only in Docker. For Node.js, `create_command` copies `package.json` to `{env_dir}` and `install_command` uses `npm install --prefix {env_dir}`.
- Per-Version Environment Isolation: When runtime versions are registered, the worker creates per-version environments at `{runtime_envs_dir}/{pack_ref}/{runtime_name}-{version}` (e.g., `python-3.12`). This ensures different versions maintain isolated environments with their own interpreter binaries and installed dependencies. A base (unversioned) environment is also created for backward compatibility. The `ExecutionContext.runtime_env_dir_suffix` field controls which env dir the `ProcessRuntime` uses at execution time.
- Runtime Version Verification: At worker startup, `version_verify::verify_all_runtime_versions()` runs each version's verification commands (from the `distributions.verification.commands` JSONB) and updates the `available` and `verified_at` columns in the database. Only versions marked `available = true` are considered by `select_best_version()`. Verification respects the `ATTUNE_WORKER_RUNTIMES` filter.
- Schema Format (Unified): ALL schemas (`param_schema`, `out_schema`, `conf_schema`) use the same flat format with `required` and `secret` inlined per-parameter (NOT standard JSON Schema). Stored as JSONB columns.
  - Example YAML:
    ```yaml
    parameters:
      url:
        type: string
        required: true
      token:
        type: string
        secret: true
    ```
  - Stored JSON: `{"url": {"type": "string", "required": true}, "token": {"type": "string", "secret": true}}`
  - NOT this (legacy JSON Schema): `{"type": "object", "properties": {"url": {"type": "string"}}, "required": ["url"]}`
  - Web UI: `extractProperties()` in `ParamSchemaForm.tsx` is the single extraction function for all schema types. Only handles the flat format.
  - SchemaBuilder: The visual schema editor reads and writes the flat format with `required` and `secret` checkboxes per parameter.
  - Backend Validation: `flat_to_json_schema()` in `crates/api/src/validation/params.rs` converts the flat format to JSON Schema internally for `jsonschema` crate validation. This conversion is an implementation detail — external interfaces always use the flat format.
- Execution Config Format (Flat): The `execution.config` JSONB column always stores parameters in flat format — the object itself IS the parameters map (e.g., `{"url": "https://...", "method": "GET"}`). This is consistent across all execution sources: manual API calls, rule-triggered enforcements, and workflow task children. There is no `{"parameters": {...}}` wrapper — never nest parameters under a `"parameters"` key. The worker reads `config` as a flat object and passes each key-value pair as an action parameter. The scheduler's `extract_workflow_params()` helper treats the config object directly as the parameters map.
- Parameter Delivery: Actions receive parameters via stdin as JSON (never environment variables)
- Output Format: Actions declare an output format (text/json/yaml) — json/yaml output is parsed into the `execution.result` JSONB
- Standard Environment Variables: The worker provides execution context via `ATTUNE_*` environment variables:
  - `ATTUNE_ACTION` - Action ref (always present)
  - `ATTUNE_EXEC_ID` - Execution database ID (always present)
  - `ATTUNE_API_TOKEN` - Execution-scoped API token (always present)
  - `ATTUNE_API_URL` - API base URL (always present)
  - `ATTUNE_ARTIFACTS_DIR` - Absolute path to the shared artifact volume (always present, e.g., `/opt/attune/artifacts`)
  - `ATTUNE_RULE` - Rule ref (if triggered by rule)
  - `ATTUNE_TRIGGER` - Trigger ref (if triggered by event/trigger)
- Custom Environment Variables: Optional, set via the `execution.env_vars` JSONB field (for debug flags, runtime config only)
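A minimal sketch of checking a flat execution config against a flat schema's `required` flags. This is illustrative only: `missing_required` is a hypothetical helper with simplified types, and real validation goes through `flat_to_json_schema()` and the `jsonschema` crate.

```rust
use std::collections::{HashMap, HashSet};

/// Flat schema: parameter name -> required flag (types elided for brevity).
/// Flat config: the object itself IS the parameters map.
fn missing_required(
    schema: &HashMap<&str, bool>,
    config: &HashMap<&str, &str>,
) -> Vec<String> {
    let present: HashSet<&str> = config.keys().copied().collect();
    let mut missing: Vec<String> = schema
        .iter()
        .filter(|(name, required)| **required && !present.contains(*name))
        .map(|(name, _)| name.to_string())
        .collect();
    missing.sort(); // deterministic order for reporting
    missing
}

fn main() {
    let schema = HashMap::from([("url", true), ("token", false)]);
    let config = HashMap::from([("token", "abc")]);
    println!("missing: {:?}", missing_required(&schema, &config)); // ["url"]
}
```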
API Service (crates/api)
- Structure: `routes/` (endpoints) + `dto/` (request/response) + `auth/` + `middleware/`
- Responses: Standardized `ApiResponse<T>` wrapper with a `data` field
- Protected Routes: Apply the `RequireAuth` middleware
- OpenAPI: Documented with `utoipa` attributes (`#[utoipa::path]`)
- Error Handling: Custom `ApiError` type with proper HTTP status codes
- Available at: `http://localhost:8080` (dev), `/api-spec/openapi.json` for the spec
Common Library (crates/common)
- Modules: `models`, `repositories`, `db`, `config`, `error`, `mq`, `crypto`, `utils`, `workflow` (includes the `expression` sub-module), `pack_registry`, `template_resolver`, `version_matching`, `runtime_detection`
- Exports: Commonly used types re-exported from `lib.rs`
- Repository Layer: All DB access goes through repositories in `repositories/`
- Message Queue: Abstractions in `mq/` for RabbitMQ communication
- Template Resolver: Resolves `{{ }}` template variables in rule `action_params` during enforcement creation. Re-exported from `attune_common::{TemplateContext, resolve_templates}`.
Template Variable Syntax
Rule `action_params` support Jinja2-style `{{ source.path }}` templates resolved at enforcement creation time:

| Namespace | Example | Description |
|---|---|---|
| `event.payload.*` | `{{ event.payload.service }}` | Event payload fields |
| `event.id` | `{{ event.id }}` | Event database ID |
| `event.trigger` | `{{ event.trigger }}` | Trigger ref that generated the event |
| `event.created` | `{{ event.created }}` | Event creation timestamp (RFC 3339) |
| `pack.config.*` | `{{ pack.config.api_token }}` | Pack configuration values |
| `system.*` | `{{ system.timestamp }}` | System variables (timestamp, rule info) |

- Implementation: `crates/common/src/template_resolver.rs` (also re-exported from `attune_sensor::template_resolver`)
- Integration: `crates/executor/src/event_processor.rs` calls `resolve_templates()` in `create_enforcement()`
- IMPORTANT: The old `trigger.payload.*` syntax was renamed to `event.payload.*` — the payload data comes from the Event, not the Trigger
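As an illustrative model of how a dotted template path such as `event.payload.service` resolves, here is a toy value tree and lookup. This is not the actual `template_resolver.rs` implementation; `Val`, `lookup`, and `sample_root` are stand-ins invented for the sketch.

```rust
use std::collections::HashMap;

/// Toy value tree: either a leaf string or a nested map.
enum Val {
    Str(String),
    Map(HashMap<String, Val>),
}

/// Walk a dotted path like "event.payload.service" through the tree.
fn lookup<'a>(root: &'a Val, path: &str) -> Option<&'a str> {
    let mut cur = root;
    for seg in path.split('.') {
        match cur {
            Val::Map(m) => cur = m.get(seg)?,
            Val::Str(_) => return None, // tried to descend into a leaf
        }
    }
    match cur {
        Val::Str(s) => Some(s),
        Val::Map(_) => None, // a template must resolve to a leaf value
    }
}

/// Build a sample context: { event: { payload: { service: "billing" } } }
fn sample_root() -> Val {
    let payload = Val::Map(HashMap::from([(
        "service".to_string(),
        Val::Str("billing".to_string()),
    )]));
    let event = Val::Map(HashMap::from([("payload".to_string(), payload)]));
    Val::Map(HashMap::from([("event".to_string(), event)]))
}

fn main() {
    let root = sample_root();
    // {{ event.payload.service }} -> Some("billing")
    println!("{:?}", lookup(&root, "event.payload.service"));
}
```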
Workflow Expression Engine
Workflow templates (`{{ expr }}`) support a full expression language for evaluating conditions, computing values, and transforming data. The engine is in `crates/common/src/workflow/expression/` (tokenizer → parser → evaluator) and is integrated into `WorkflowContext` via the `EvalContext` trait.

Canonical Namespaces — all data inside `{{ }}` expressions is organised into well-defined, non-overlapping namespaces:

| Namespace | Example | Description |
|---|---|---|
| `parameters` | `{{ parameters.url }}` | Immutable workflow input parameters |
| `workflow` | `{{ workflow.counter }}` | Mutable workflow-scoped variables (set via `publish`) |
| `task` | `{{ task.fetch.result.data }}` | Completed task results keyed by task name |
| `config` | `{{ config.api_token }}` | Pack configuration values (read-only) |
| `keystore` | `{{ keystore.secret_key }}` | Encrypted secrets from the key store (read-only). Values are `JsonValue` — strings, objects, arrays, etc. Access nested fields with dot notation: `{{ keystore.db_credentials.password }}` |
| `item` | `{{ item }}` / `{{ item.name }}` | Current element in a `with_items` loop |
| `index` | `{{ index }}` | Zero-based iteration index in a `with_items` loop |
| `system` | `{{ system.workflow_start }}` | System-provided variables |

Backward-compatible aliases (kept for existing workflow definitions):
- `vars`/`variables` → same as `workflow`
- `tasks` → same as `task`
- Bare variable names (e.g. `{{ my_var }}`) resolve against the `workflow` variable store as a last-resort fallback.

IMPORTANT: New workflow definitions should always use the canonical namespace names. The `config` and `keystore` namespaces are populated by the scheduler from the pack's `config` JSONB column and decrypted `key` table entries (JSONB values) respectively. If not populated, they resolve to `null`. Keystore values preserve their JSON type — a key storing `{"host":"db.example.com","port":5432}` is accessible as `{{ keystore.db_config.host }}` and `{{ keystore.db_config.port }}` (the latter resolves to the integer `5432`, not the string `"5432"`).
Operators (lowest to highest precedence):
- `or` — logical OR (short-circuit)
- `and` — logical AND (short-circuit)
- `not` — logical NOT (unary)
- `==`, `!=`, `<`, `>`, `<=`, `>=`, `in` — comparison & membership
- `+`, `-` — addition/subtraction (also string/array concatenation for `+`)
- `*`, `/`, `%` — multiplication, division, modulo
- Unary `-` — negation
- `.field`, `[index]`, `(args)` — postfix access & function calls

Type Rules:
- No implicit type coercion: `"3" == 3` → `false`, `"hello" + 5` → error
- Int/float cross-comparison allowed: `3 == 3.0` → `true`
- Integer preservation: `2 + 3` → `5` (int), `2 + 1.5` → `3.5` (float), `10 / 4` → `2.5` (float), `10 / 5` → `2` (int)
- Python-like truthiness: `null`, `false`, `0`, `""`, `[]`, `{}` are falsy
- Deep equality: `==`/`!=` recursively compare objects and arrays
- Negative indexing: `arr[-1]` returns the last element
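The truthiness rules can be sketched over a minimal stand-in value type. This is illustrative only; the engine's real value representation lives in `evaluator.rs`.

```rust
use std::collections::HashMap;

/// Minimal stand-in for the expression engine's runtime value type.
enum Value {
    Null,
    Bool(bool),
    Int(i64),
    Float(f64),
    Str(String),
    Array(Vec<Value>),
    Object(HashMap<String, Value>),
}

/// Python-like truthiness: null, false, 0, "", [], {} are falsy.
fn is_truthy(v: &Value) -> bool {
    match v {
        Value::Null => false,
        Value::Bool(b) => *b,
        Value::Int(i) => *i != 0,
        Value::Float(f) => *f != 0.0,
        Value::Str(s) => !s.is_empty(),
        Value::Array(a) => !a.is_empty(),
        Value::Object(o) => !o.is_empty(),
    }
}

fn main() {
    println!("{}", is_truthy(&Value::Str(String::new()))); // false
    println!("{}", is_truthy(&Value::Int(42))); // true
}
```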
Built-in Functions:
- Type conversion: `string(v)`, `number(v)`, `int(v)`, `bool(v)`
- Introspection: `type_of(v)`, `length(v)`, `keys(obj)`, `values(obj)`
- Math: `abs(n)`, `floor(n)`, `ceil(n)`, `round(n)`, `min(a,b)`, `max(a,b)`, `sum(arr)`
- String: `lower(s)`, `upper(s)`, `trim(s)`, `split(s, sep)`, `join(arr, sep)`, `replace(s, old, new)`, `starts_with(s, prefix)`, `ends_with(s, suffix)`, `match(pattern, s)` (regex)
- Collection: `contains(haystack, needle)`, `reversed(v)`, `sort(arr)`, `unique(arr)`, `flat(arr)`, `zip(a, b)`, `range(n)`/`range(start, end)`, `slice(v, start, end)`, `index_of(haystack, needle)`, `count(haystack, needle)`, `merge(obj_a, obj_b)`, `chunks(arr, size)`
- Workflow: `result()`, `succeeded()`, `failed()`, `timed_out()` (resolved via the `EvalContext` trait)
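A few of the collection built-ins re-implemented as plain Rust sketches, to pin down their semantics over integer arrays. This is illustrative only; the real built-ins operate on JSON values inside the evaluator, and the clamping of a zero chunk size is an assumption of this sketch.

```rust
/// `unique(arr)`: first occurrence wins, order preserved.
fn unique(arr: &[i64]) -> Vec<i64> {
    let mut seen = std::collections::HashSet::new();
    arr.iter().copied().filter(|x| seen.insert(*x)).collect()
}

/// `chunks(arr, size)`: split into consecutive groups of at most `size`.
fn chunks(arr: &[i64], size: usize) -> Vec<Vec<i64>> {
    arr.chunks(size.max(1)).map(|c| c.to_vec()).collect()
}

/// `range(n)`: the single-argument form, 0..n.
fn range(n: i64) -> Vec<i64> {
    (0..n).collect()
}

fn main() {
    println!("{:?}", unique(&[3, 1, 3, 2, 1])); // [3, 1, 2]
    println!("{:?}", chunks(&[1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
    println!("{:?}", range(4)); // [0, 1, 2, 3]
}
```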
Usage in Conditions (`when:` on transitions):

```yaml
when: "succeeded() and result().code == 200"
when: "length(workflow.items) > 3 and \"admin\" in workflow.roles"
when: "not failed()"
when: "result().status == \"ok\" or result().status == \"accepted\""
when: "config.retries > 0"
```

Usage in Templates (`{{ expr }}`):

```yaml
input:
  count: "{{ length(workflow.items) }}"
  greeting: "{{ parameters.first + \" \" + parameters.last }}"
  doubled: "{{ parameters.x * 2 }}"
  names: "{{ join(sort(keys(workflow.data)), \", \") }}"
  auth: "Bearer {{ keystore.api_key }}"
  endpoint: "{{ config.base_url + \"/api/v1\" }}"
  prev_output: "{{ task.fetch.result.data.id }}"
```
Implementation Files:
- `crates/common/src/workflow/expression/mod.rs` — module entry point, `eval_expression()`, `parse_expression()`
- `crates/common/src/workflow/expression/tokenizer.rs` — lexer
- `crates/common/src/workflow/expression/parser.rs` — recursive-descent parser
- `crates/common/src/workflow/expression/evaluator.rs` — AST evaluator, `EvalContext` trait, built-in functions
- `crates/common/src/workflow/expression/ast.rs` — AST node types (`Expr`, `BinaryOp`, `UnaryOp`)
- `crates/executor/src/workflow/context.rs` — `WorkflowContext` implements `EvalContext`
Web UI (web/)
- Generated Client: OpenAPI client auto-generated from the API spec
  - Run: `npm run generate:api` (requires the API running on :8080)
  - Location: `src/api/`
- State Management: Zustand for global state, TanStack Query for server state
- Styling: Tailwind utility classes
- Dev Server: `npm run dev` (typically :3000 or :5173)
- Build: `npm run build`
- Workflow Timeline DAG: Prefect-style workflow run timeline visualization on the execution detail page for workflow executions
  - Components in `web/src/components/executions/workflow-timeline/` (WorkflowTimelineDAG, TimelineRenderer, types, data, layout)
  - Pure SVG renderer — no D3, no React Flow, no additional npm dependencies
  - Renders child task executions as horizontal duration bars on a time axis with curved Bezier dependency edges
  - Data flow: `WorkflowTimelineDAG` (orchestrator) fetches child executions via `useChildExecutions` + the workflow definition via `useWorkflow(actionRef)` → `data.ts` transforms into `TimelineTask[]`/`TimelineEdge[]`/`TimelineMilestone[]` → `layout.ts` computes lane assignments + positions → `TimelineRenderer` renders SVG
  - Edge coloring from workflow metadata: Fetches the workflow definition's `next` transition array, classifies `when` expressions (`{{ succeeded() }}` → green, `{{ failed() }}` → red dashed, `{{ timed_out() }}` → orange dash-dot, unconditional → gray), and reads `__chart_meta__` custom labels/colors
  - Task bars: Colored by state (green=completed, blue=running with pulse animation, red=failed, gray=pending, orange=timeout). Left accent bar, text label with ellipsis clipping, timeout indicator badge.
  - Milestones: Synthetic start/end diamond nodes + merge/fork junctions when fan-in/fan-out exceeds 3 tasks
  - Lane packing: Greedy algorithm assigns tasks to non-overlapping y-lanes sorted by start time, with optional reordering to cluster tasks sharing upstream dependencies
  - Interactions: Hover tooltip (name, state, times, duration, retries, upstream/downstream counts), click-to-select with BFS path highlighting, double-click to navigate to the child execution, horizontal zoom (mouse wheel anchored to cursor), alt+drag pan, expand/compact toggle
  - Fallback: When no workflow definition is available, infers dependency edges from task timing heuristics
  - Integration: Rendered in `ExecutionDetailPage.tsx` above `WorkflowTasksPanel`, conditioned on `isWorkflow`. Shares the TanStack Query cache with WorkflowTasksPanel. Accepts a `ParentExecutionInfo` interface (satisfied by both `ExecutionResponse` and `ExecutionSummary`).
- Workflow Builder: Visual node-based workflow editor at `/actions/workflows/new` and `/actions/workflows/:ref/edit`
  - Components in `web/src/components/workflows/` (ActionPalette, WorkflowCanvas, TaskNode, WorkflowEdges, TaskInspector)
  - Types and conversion utilities in `web/src/types/workflow.ts`
  - Hooks in `web/src/hooks/useWorkflows.ts`
  - Saves workflow files to `{packs_base_dir}/{pack_ref}/actions/workflows/{name}.workflow.yaml` via dedicated API endpoints
  - Visual / Raw YAML toggle: The toolbar has a segmented toggle to switch between the visual node-based builder and a two-panel read-only YAML preview (generated via `js-yaml`). Raw YAML mode replaces the canvas, palette, and inspector with side-by-side panels: Action YAML (left, blue — `actions/{name}.yaml`: ref, label, parameters, output, tags, `workflow_file` reference) and Workflow YAML (right, green — `actions/workflows/{name}.workflow.yaml`: version, vars, tasks, output_map — graph only). Each panel has its own copy button and a description bar explaining the file's role. The `builderStateToGraph()` function extracts the graph-only definition, and `builderStateToActionYaml()` extracts the action metadata.
  - Drag-handle connections: TaskNode has output handles (green=succeeded, red=failed, gray=always) and an input handle (top). Drag from an output handle to another node's input handle to create a transition.
  - Transition customization: Users can rename transitions (custom `label`) and assign custom colors (a CSS color string or preset swatches) via the TaskInspector. Custom colors/labels are persisted in the workflow YAML and rendered on the canvas edges.
  - Edge waypoints & label dragging: Transition edges support intermediate waypoints for custom routing. Click an edge to select it, then:
    - Drag existing waypoint handles (colored circles) to reposition the edge path
    - Hover near the midpoint of any edge segment to reveal a "+" handle; click or drag it to insert a new waypoint
    - Drag the transition label to reposition it independently of the edge path
    - Double-click a waypoint to remove it; double-click a label to reset its position
    - Waypoints and label positions are stored per-edge (keyed by target task name) in `TaskTransition.edge_waypoints` and `TaskTransition.label_positions`, serialized via `__chart_meta__` in the workflow YAML
    - Edge selection state (`SelectedEdgeInfo`) is managed in `WorkflowCanvas`; only the selected edge shows interactive handles
    - Multi-segment paths use Catmull-Rom → cubic Bezier conversion for smooth curves through waypoints (`buildSmoothPath` in `WorkflowEdges.tsx`)
  - Orquesta-style `next` transitions: Tasks use a `next: TaskTransition[]` array instead of flat `on_success`/`on_failure` fields. Each transition has `when` (condition), `publish` (variables), `do` (target tasks), plus optional `label`, `color`, `edge_waypoints`, and `label_positions`. See "Task Transition Model" above.
  - No task type or task-level condition: The UI does not expose task `type` or task-level `when` — all tasks are actions (workflows are also actions), and conditions belong on transitions. Parallelism is implicit via multiple `do` targets.
  - Ref immutability: When editing an existing workflow, the pack selector and workflow name fields are disabled — the ref cannot be changed after creation.
Development Workflow
Common Commands (Makefile)
```shell
make build            # Build all services
make build-release    # Release build
make test             # Run all tests
make test-integration # Run integration tests
make fmt              # Format code
make clippy           # Run linter
make lint             # fmt + clippy
make run-api          # Run API service
make run-executor     # Run executor service
make run-worker       # Run worker service
make run-sensor       # Run sensor service
make run-notifier     # Run notifier service
make db-create        # Create database
make db-migrate       # Run migrations
make db-reset         # Drop & recreate DB
```
Database Operations
- Migrations: Located in `migrations/`, applied via `sqlx migrate run`
- Test DB: Separate `attune_test` database, set up with `make db-test-setup`
- Schema: All tables in the `public` schema with auto-updating timestamps
- Core Pack: Load with `./scripts/load-core-pack.sh` after DB setup
Testing
- Architecture: Schema-per-test isolation (each test gets a unique `test_<uuid>` schema)
- Parallel Execution: Tests run concurrently without `#[serial]` constraints (4-8x faster)
- Unit Tests: In module files alongside code
- Integration Tests: In the `tests/` directory
- Test DB Required: Run `make db-test-setup` before integration tests
- Run: `cargo test` or `make test` (parallel by default)
- Verbose: `cargo test -- --nocapture --test-threads=1`
- Cleanup: Schemas auto-dropped on test completion; orphaned schemas cleaned via `./scripts/cleanup-test-schemas.sh`
- SQLx Offline Mode: Enabled for compile-time query checking without a live DB; regenerate with `cargo sqlx prepare`
CLI Tool
```shell
cargo install --path crates/cli              # Install CLI
attune auth login                            # Login
attune pack list                             # List packs
attune pack create --ref my_pack             # Create empty pack (non-interactive)
attune pack create -i                        # Create empty pack (interactive prompts)
attune pack upload ./path/to/pack            # Upload local pack to API (works with Docker)
attune pack register /opt/attune/packs/mypak # Register from API-visible path
attune action execute <ref> --param key=value
attune execution list                        # Monitor executions
attune key list                              # List all keys (values redacted)
attune key list --owner-type pack            # Filter keys by owner type
attune key show my_token                     # Show key details (value shown as SHA-256 hash)
attune key show my_token -d                  # Show key details with decrypted/actual value
attune key create --ref my_token --name "My Token" --value "secret123"                           # Create unencrypted string key (default)
attune key create --ref my_token --name "My Token" --value '{"user":"admin","pass":"s3cret"}'    # Create unencrypted structured key
attune key create --ref my_token --name "My Token" --value "secret123" -e                        # Create encrypted string key
attune key create --ref my_token --name "My Token" --value "secret123" --encrypt --owner-type pack --owner-pack-ref core # Create encrypted pack-scoped key
attune key update my_token --value "new_secret"                       # Update key value (string)
attune key update my_token --value '{"host":"db.example.com","port":5432}' # Update key value (structured)
attune key update my_token --name "Renamed Token"    # Update key name
attune key delete my_token                           # Delete a key (with confirmation)
attune key delete my_token --yes                     # Delete without confirmation
attune workflow upload actions/deploy.yaml           # Upload workflow action to existing pack
attune workflow upload actions/deploy.yaml --force   # Update existing workflow
attune workflow list                                 # List all workflows
attune workflow list --pack core                     # List workflows in a pack
attune workflow show core.install_packs              # Show workflow details + task summary
attune workflow delete core.my_workflow --yes        # Delete a workflow
attune artifact list                                 # List all artifacts
attune artifact list --type file_text --visibility public # Filter artifacts
attune artifact list --execution 42                  # List artifacts for an execution
attune artifact show 1                               # Show artifact by ID
attune artifact show mypack.build_log                # Show artifact by ref
attune artifact create --ref mypack.build_log --scope action --owner mypack.deploy --type file_text --name "Build Log"
attune artifact upload 1 ./output.log                # Upload file as new version
attune artifact upload 1 ./data.json --content-type application/json --created-by "cli"
attune artifact download 1                           # Download latest version to auto-named file
attune artifact download 1 -V 3                      # Download specific version
attune artifact download 1 -o ./local.txt            # Download to specific path
attune artifact download 1 -o -                      # Download to stdout
attune artifact delete 1                             # Delete artifact (with confirmation)
attune artifact delete 1 --yes                       # Delete without confirmation
attune artifact version list 1                       # List all versions of artifact 1
attune artifact version show 1 3                     # Show details of version 3
attune artifact version upload 1 ./new-file.txt      # Upload file as new version
attune artifact version create-json 1 '{"key":"value"}' # Create JSON version
attune artifact version download 1 2 -o ./v2.txt     # Download version 2
attune artifact version delete 1 2 --yes             # Delete version 2
```
Pack Upload vs Register:
attune pack upload <local-path>— Tarballs the local directory and POSTs it toPOST /api/v1/packs/upload. Works regardless of whether the API is local or in Docker. This is the primary way to install packs from your local machine into a Dockerized system.attune pack register <server-path>— Sends a filesystem path string to the API (POST /api/v1/packs/register). Only works if the path is accessible from inside the API container (e.g./opt/attune/packs/...or/opt/attune/packs.dev/...).
Workflow Upload (`attune workflow upload <action-yaml-path>`):
- Reads the local action YAML file and extracts the `workflow_file` field to find the companion workflow YAML
- Determines the pack from the action ref (e.g., `mypack.deploy` → pack `mypack`, name `deploy`)
- The `workflow_file` path is resolved relative to the action YAML's parent directory (the same way pack loaders resolve it relative to the `actions/` directory)
- Constructs a `SaveWorkflowFileRequest` JSON payload combining action metadata (label, parameters, output, tags) with the workflow definition (version, vars, tasks, output_map) and POSTs it to `POST /api/v1/packs/{pack_ref}/workflow-files`
- On 409 Conflict (workflow already exists), fails unless `--force` is passed, in which case it PUTs to `PUT /api/v1/workflows/{ref}/file` to update
- Does NOT require a full pack upload — individual workflow actions can be added to existing packs independently
- Important: The action YAML MUST contain a `workflow_file` field; regular (non-workflow) actions should be uploaded as part of a pack
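The ref-splitting and path-resolution steps above can be sketched in a few lines. This is an illustrative sketch, not the actual CLI code; the function names are invented for the example.

```rust
use std::path::{Path, PathBuf};

/// Split an action ref like "mypack.deploy" into (pack, name),
/// mirroring how the upload command derives the pack from the ref.
fn split_action_ref(action_ref: &str) -> Option<(&str, &str)> {
    action_ref.split_once('.')
}

/// Resolve a `workflow_file` value relative to the action YAML's
/// parent directory, as described above.
fn resolve_workflow_file(action_yaml: &Path, workflow_file: &str) -> PathBuf {
    action_yaml
        .parent()
        .unwrap_or_else(|| Path::new("."))
        .join(workflow_file)
}

fn main() {
    let (pack, name) = split_action_ref("mypack.deploy").unwrap();
    println!("pack={pack} name={name}");
    let resolved = resolve_workflow_file(
        Path::new("packs/mypack/actions/deploy.yaml"),
        "workflows/deploy.workflow.yaml",
    );
    println!("{}", resolved.display());
}
```

Resolving against the action YAML's parent (rather than the current working directory) is what keeps CLI behavior consistent with the pack loaders.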
Pack Upload API endpoint: `POST /api/v1/packs/upload` — accepts `multipart/form-data` with:
- `pack` (required): a `.tar.gz` archive of the pack directory
- `force` (optional, text): `"true"` to overwrite an existing pack with the same ref
- `skip_tests` (optional, text): `"true"` to skip test execution after registration

The server extracts the archive to a temp directory, finds the `pack.yaml` (at root or one level deep), then moves it to `{packs_base_dir}/{pack_ref}/` and calls `register_pack_internal`.
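The "root or one level deep" lookup can be sketched with `std::fs` alone. This is a simplified illustration of only the search step, assuming the archive has already been extracted; it is not the server's actual implementation.

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Look for pack.yaml in `dir` itself, or in any immediate
/// subdirectory (archives often wrap the pack in one top-level folder).
fn find_pack_yaml(dir: &Path) -> Option<PathBuf> {
    let direct = dir.join("pack.yaml");
    if direct.is_file() {
        return Some(direct);
    }
    for entry in fs::read_dir(dir).ok()? {
        let path = entry.ok()?.path();
        if path.is_dir() {
            let nested = path.join("pack.yaml");
            if nested.is_file() {
                return Some(nested);
            }
        }
    }
    None
}

fn main() {
    // Simulate an extracted archive laid out as <tmp>/mypack/pack.yaml.
    let root = std::env::temp_dir().join("attune_pack_demo");
    let nested = root.join("mypack");
    fs::create_dir_all(&nested).unwrap();
    fs::write(nested.join("pack.yaml"), "ref: mypack\n").unwrap();
    let found = find_pack_yaml(&root).expect("pack.yaml not found");
    println!("found: {}", found.display());
    fs::remove_dir_all(&root).unwrap();
}
```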
Test Failure Protocol
Proactively investigate and fix test failures when discovered, even if unrelated to the current task.
Guidelines:
- ALWAYS report test failures to the user with relevant error output
- ALWAYS run tests after making changes: `make test` or `cargo test`
- DO fix immediately if the cause is obvious and fixable in 1-2 attempts
- DO ask the user if the failure is complex, requires architectural changes, or you're unsure of the cause
- NEVER silently ignore test failures or skip tests without approval
- Gather context: Run with `cargo test -- --nocapture --test-threads=1` for details
Priority:
- Critical (build/compile failures): Fix immediately
- Related (affects current work): Fix before proceeding
- Unrelated: Report and ask if you should fix now or defer
When reporting, ask: "Should I fix this first or continue with [original task]?"
Code Quality: Zero Warnings Policy
Maintain zero compiler warnings across the workspace. Clean builds ensure new issues are immediately visible.
Workflow
- Check after changes: `cargo check --all-targets --workspace`
- Before completing work: Fix or document any warnings introduced
- End of session: Verify zero warnings before finishing
Handling Warnings
- Fix first: Remove dead code, unused imports, unnecessary variables
- Prefix `_`: For intentionally unused variables that document intent
- Use `#[allow(dead_code)]`: For API methods intended for future use (add a doc comment explaining why)
- Never ignore blindly: Every suppression needs a clear rationale
Conservative Approach
- Preserve methods that complete a logical API surface
- Keep test helpers that are part of shared infrastructure
- When uncertain about removal, ask the user
Red Flags
- ❌ Introducing new warnings
- ❌ Blanket `#[allow(warnings)]` without specific justification
- ❌ Accumulating warnings over time
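As a hedged illustration of the suppression style described above (the helper and its rationale are invented for this example):

```rust
/// Retained ahead of a planned pack-search feature; not yet called
/// from any route. Remove this helper if pack search is dropped.
#[allow(dead_code)]
fn normalize_pack_ref(raw: &str) -> String {
    raw.trim().to_ascii_lowercase()
}

fn main() {
    // The suppressed helper still compiles and behaves normally.
    println!("{}", normalize_pack_ref("  MyPack "));
}
```

The point is the narrow, documented `#[allow(dead_code)]` on one item, rather than a blanket suppression at module or crate level.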
File Naming & Location Conventions
When Adding Features:
- New API Endpoint:
  - Route handler in `crates/api/src/routes/<domain>.rs`
  - DTO in `crates/api/src/dto/<domain>.rs`
  - Update `routes/mod.rs` and the main router
- New Domain Model:
  - Add to `crates/common/src/models.rs`
  - Create migration in `migrations/YYYYMMDDHHMMSS_description.sql`
  - Add repository in `crates/common/src/repositories/<entity>.rs`
- New Service: Add to `crates/` and update workspace `Cargo.toml` members
- Configuration: Update `crates/common/src/config.rs` with serde defaults
- Documentation: Add to the `docs/` directory
Important Files
- `crates/common/src/models.rs` - All domain models
- `crates/common/src/error.rs` - Error types
- `crates/common/src/config.rs` - Configuration structure
- `crates/api/src/routes/mod.rs` - API routing
- `config.development.yaml` - Dev configuration
- `Cargo.toml` - Workspace dependencies
- `Makefile` - Development commands
- `docker/Dockerfile.optimized` - Optimized service builds (api, executor, notifier)
- `docker/Dockerfile.worker.optimized` - Optimized worker builds (shell, python, node, full)
- `docker/Dockerfile.sensor.optimized` - Optimized sensor builds (base, full)
- `docker/Dockerfile.pack-binaries` - Separate pack binary builder
- `scripts/build-pack-binaries.sh` - Build pack binaries script
Common Pitfalls to Avoid
- NEVER bypass repositories - always use the repository layer for DB access
- NEVER forget `RequireAuth` middleware on protected endpoints
- NEVER hardcode service URLs - use configuration
- NEVER commit secrets in config files (use env vars in production)
- NEVER hardcode schema prefixes in SQL queries - rely on the PostgreSQL `search_path` mechanism
- NEVER copy packs into Dockerfiles - they are mounted as volumes
- NEVER put workflow definition content directly in action YAML — use a separate `.workflow.yaml` file in `actions/workflows/` and reference it via `workflow_file` in the action YAML
- ALWAYS use PostgreSQL enum type mappings for custom enums
- ALWAYS use transactions for multi-table operations
- ALWAYS start with `attune/` or the correct crate name when specifying file paths
- ALWAYS convert runtime names to lowercase for comparison (the database may store capitalized names)
- ALWAYS use optimized Dockerfiles for new services (selective crate copying)
- REMEMBER IDs are `i64`, not `i32` or `uuid`
- REMEMBER schema is determined by `search_path`, not hardcoded in queries (production uses `attune`, development uses `public`)
- REMEMBER to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
- REMEMBER packs are volumes - update with restart, not rebuild
- REMEMBER to build pack binaries separately: `./scripts/build-pack-binaries.sh`
- REMEMBER when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable, with ~4 updates per row).
- REMEMBER for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history`
- NEVER use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow` and `execution.workflow_def` exist in SQL but not in the `Execution` model) — doing so causes runtime deserialization failures. Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`).
- REMEMBER `execution`, `event`, and `enforcement` are all TimescaleDB hypertables — they cannot be the target of FK constraints. Any column referencing them (e.g., `inquiry.execution`, `workflow_execution.execution`, `execution.parent`) is a plain BIGINT with no FK and may become a dangling reference.
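The `SELECT_COLUMNS` pattern can be sketched as follows. The column list and query here are illustrative, not the real contents of `execution.rs`:

```rust
/// Explicit column list shared by every query that maps to the
/// `Execution` struct; DB-only columns (e.g. workflow_def) are
/// deliberately absent so FromRow never sees them.
pub const SELECT_COLUMNS: &str = "id, status, action, started_at";

/// Build a list query from the shared column constant.
fn list_query() -> String {
    format!("SELECT {SELECT_COLUMNS} FROM execution ORDER BY id DESC")
}

fn main() {
    let q = list_query();
    println!("{q}");
    // No `SELECT *` anywhere, so deserialization stays in sync with the struct.
    assert!(!q.contains('*'));
}
```

Because every query formats in the same constant, adding a struct field means updating one string, and a forgotten DB-only column can never leak into a `FromRow` mapping.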
Deployment
- Target: Distributed deployment with separate service instances
- Docker: Dockerfiles for each service (planned in the `docker/` dir)
- Config: Use environment variables for secrets in production
- Database: PostgreSQL 16+ with connection pooling
- Message Queue: RabbitMQ required for service communication
- Web UI: Static files served separately or via API service
Current Development Status
- ✅ Complete: Database migrations (21 tables, 10 migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder + workflow timeline DAG), Executor service (core functionality + workflow orchestration), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker), Event, enforcement, and execution tables as TimescaleDB hypertables (time-series with retention/compression), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution), TimescaleDB continuous aggregates (6 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector), Workflow execution orchestration (scheduler detects workflow actions, creates child task executions, completion listener advances workflow via transitions), Workflow template resolution (type-preserving `{{ }}` rendering in task inputs), Workflow `with_items` expansion (parallel child executions per item), Workflow `with_items` concurrency limiting (sliding-window dispatch with pending items stored in workflow variables), Workflow `publish` directive processing (variable propagation between tasks), Workflow function expressions (`result()`, `succeeded()`, `failed()`, `timed_out()`), Workflow expression engine (full arithmetic/comparison/boolean/membership operators, 30+ built-in functions, recursive-descent parser), Canonical workflow namespaces (`parameters`, `workflow`, `task`, `config`, `keystore`, `item`, `index`, `system`), Artifact content system (versioned file/JSON storage, progress-append semantics, binary upload/download, retention enforcement, execution-linked artifacts, 18 API endpoints under `/api/v1/artifacts/`, file-backed disk storage via shared volume for file-type artifacts), CLI artifact management (`attune artifact list/show/create/upload/download/delete` + `attune artifact version list/show/upload/create-json/download/delete` — full CRUD for artifacts and their versions with multipart file upload, binary download, JSON version creation, auto-detected MIME types, human-readable size formatting, and pagination), CLI `--wait` flag (WebSocket-first with polling fallback — connects to notifier on port 8081, subscribes to the execution, returns immediately on terminal status; falls back to exponential-backoff REST polling if WS is unavailable; polling always gets at least a 10s budget regardless of how long the WS path ran), Workflow Timeline DAG visualization (Prefect-style time-aligned Gantt+DAG on the execution detail page, pure SVG, transition-aware edge coloring from workflow definition metadata, hover tooltips, click-to-highlight path, zoom/pan)
- 🔄 In Progress: Sensor service, advanced workflow features (nested workflow context propagation), Python runtime dependency management, API/UI endpoints for runtime version management, Artifact UI (web UI for browsing/downloading artifacts), Notifier service WebSocket (functional but lacks auth — the WS connection is unauthenticated; the subscribe filter controls visibility)
- 📋 Planned: Execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage
Quick Reference
Start Development Environment
# Start PostgreSQL and RabbitMQ
# Load core pack: ./scripts/load-core-pack.sh
# Start API: make run-api
# Start Web UI: cd web && npm run dev
File Path Examples
- Models: `attune/crates/common/src/models.rs`
- API routes: `attune/crates/api/src/routes/actions.rs`
- Repositories: `attune/crates/common/src/repositories/execution.rs`
- Migrations: `attune/migrations/*.sql`
- Web UI: `attune/web/src/`
- Config: `attune/config.development.yaml`
Documentation Locations
- API docs: `attune/docs/api-*.md`
- Configuration: `attune/docs/configuration.md`
- Architecture: `attune/docs/*-architecture.md`, `attune/docs/*-service.md`
- Testing: `attune/docs/testing-*.md`, `attune/docs/running-tests.md`, `attune/docs/schema-per-test.md`
- Docker optimization: `attune/docs/docker-layer-optimization.md`, `attune/docs/QUICKREF-docker-optimization.md`, `attune/docs/QUICKREF-buildkit-cache-strategy.md`
- Packs architecture: `attune/docs/QUICKREF-packs-volumes.md`, `attune/docs/DOCKER-OPTIMIZATION-SUMMARY.md`
- AI Agent Work Summaries: `attune/work-summary/*.md`
- Deployment: `attune/docs/production-deployment.md`
- DO NOT create additional documentation files in the root of the project. All new documentation describing how to use the system should be placed in the `attune/docs` directory, and documentation describing the work performed should be placed in the `attune/work-summary` directory.
Work Summary & Reporting
Avoid redundant summarization - summarize changes once at completion, not continuously.
Guidelines:
- Report progress during work: brief status updates, blockers, questions
- Summarize once at completion: consolidated overview of all changes made
- Work summaries: Write to `attune/work-summary/*.md` only at task completion, not incrementally
- Avoid duplication: Don't re-explain the same changes multiple times in different formats
- What changed, not how: Focus on outcomes and impacts, not play-by-play narration
Good Pattern:
[Making changes with tool calls and brief progress notes]
...
[At completion]
"I've completed the task. Here's a summary of changes: [single consolidated overview]"
Bad Pattern:
[Makes changes]
"So I changed X, Y, and Z..."
[More changes]
"To summarize, I modified X, Y, and Z..."
[Writes work summary]
"In this session I updated X, Y, and Z..."
Maintaining the AGENTS.md file
IMPORTANT: Keep this file up-to-date as the project evolves.
After making changes to the project, you MUST update this AGENTS.md file if any of the following occur:
- New dependencies added or major dependencies removed (check package.json, Cargo.toml, requirements.txt, etc.)
- Project structure changes: new directories/modules created, existing ones renamed or removed
- Architecture changes: new layers, patterns, or major refactoring that affects how components interact
- New frameworks or tools adopted (e.g., switching from REST to GraphQL, adding a new testing framework)
- Deployment or infrastructure changes (new CI/CD pipelines, different hosting, containerization added)
- New major features that introduce new subsystems or significantly change existing ones
- Style guide or coding convention updates
AGENTS.md Content inclusion policy
- DO NOT simply summarize changes in the `AGENTS.md` file. If there are existing sections that need updating due to changes in the application architecture or project structure, update them accordingly.
- When relevant, work summaries should instead be written to `attune/work-summary/*.md`
Update procedure:
- After completing your changes, review if they affect any section of `AGENTS.md`
- If yes, immediately update the relevant sections
- Add a brief comment at the top of `AGENTS.md` with the date and what was updated (optional but helpful)
Update format:
When updating, be surgical - modify only the affected sections rather than rewriting the entire file. Maintain the existing structure and tone.
Treat AGENTS.md as living documentation. An outdated AGENTS.md file is worse than no AGENTS.md file, as it will mislead future AI agents and waste time.
Project Documentation Index
[Attune Project Documentation Index] |root: ./ |IMPORTANT: Prefer retrieval-led reasoning over pre-training-led reasoning |IMPORTANT: This index provides a quick overview - use grep/read_file for details | | Format: path/to/dir:{file1,file2,...} | '...' indicates truncated file list - use grep/list_directory for full contents | | To regenerate this index: make generate-agents-index | |docs:{MIGRATION-queue-separation-2026-02-03.md,QUICKREF-containerized-workers.md,QUICKREF-rabbitmq-queues.md,QUICKREF-sensor-worker-registration.md,QUICKREF-unified-runtime-detection.md,README.md,docker-deployment.md,pack-runtime-environments.md,worker-containerization.md,worker-containers-quickstart.md} |docs/api:{api-actions.md,api-completion-plan.md,api-events-enforcements.md,api-executions.md,api-inquiries.md,api-pack-testing.md,api-pack-workflows.md,api-packs.md,api-rules.md,api-secrets.md,api-triggers-sensors.md,api-workflows.md,openapi-client-generation.md,openapi-spec-completion.md} |docs/architecture:{executor-service.md,notifier-service.md,pack-management-architecture.md,queue-architecture.md,sensor-service.md,trigger-sensor-architecture.md,web-ui-architecture.md,webhook-system-architecture.md,worker-service.md} |docs/authentication:{auth-quick-reference.md,authentication.md,secrets-management.md,security-review-2024-01-02.md,service-accounts.md,token-refresh-quickref.md,token-rotation.md} |docs/cli:{cli-profiles.md,cli.md} |docs/configuration:{CONFIG_README.md,config-troubleshooting.md,configuration.md,env-to-yaml-migration.md} |docs/dependencies:{dependency-deduplication-results.md,dependency-deduplication.md,dependency-isolation.md,dependency-management.md,http-client-consolidation-complete.md,http-client-consolidation-plan.md,sea-query-removal.md,serde-yaml-migration.md,workspace-dependency-compliance-audit.md} |docs/deployment:{ops-runbook-queues.md,production-deployment.md} 
|docs/development:{QUICKSTART-vite.md,WORKSPACE_SETUP.md,agents-md-index.md,compilation-notes.md,dead-code-cleanup.md,documentation-organization.md,vite-dev-setup.md} |docs/examples:{complete-workflow.yaml,pack-test-demo.sh,registry-index.json,rule-parameter-examples.md,simple-workflow.yaml} |docs/guides:{QUICKREF-timer-happy-path.md,quick-start.md,quickstart-example.md,quickstart-timer-demo.md,timer-sensor-quickstart.md,workflow-quickstart.md} |docs/migrations:{workflow-task-execution-consolidation.md} |docs/packs:{PACK_TESTING.md,QUICKREF-git-installation.md,core-pack-integration.md,pack-install-testing.md,pack-installation-git.md,pack-registry-cicd.md,pack-registry-spec.md,pack-structure.md,pack-testing-framework.md} |docs/performance:{QUICKREF-performance-optimization.md,log-size-limits.md,performance-analysis-workflow-lists.md,performance-before-after-results.md,performance-context-cloning-diagram.md} |docs/plans:{schema-per-test-refactor.md,timescaledb-entity-history.md} |docs/sensors:{CHECKLIST-sensor-worker-registration.md,COMPLETION-sensor-worker-registration.md,SUMMARY-database-driven-detection.md,database-driven-runtime-detection.md,native-runtime.md,sensor-authentication-overview.md,sensor-interface.md,sensor-lifecycle-management.md,sensor-runtime.md,sensor-service-setup.md,sensor-worker-registration.md} |docs/testing:{e2e-test-plan.md,running-tests.md,schema-per-test.md,test-user-setup.md,testing-authentication.md,testing-dashboard-rules.md,testing-status.md} |docs/web-ui:{web-ui-pack-testing.md,websocket-usage.md} |docs/webhooks:{webhook-manual-testing.md,webhook-testing.md} |docs/workflows:{dynamic-parameter-forms.md,execution-hierarchy.md,inquiry-handling.md,parameter-mapping-status.md,rule-parameter-mapping.md,rule-trigger-params.md,workflow-execution-engine.md,workflow-implementation-plan.md,workflow-orchestration.md,workflow-summary.md} 
|scripts:{check-workspace-deps.sh,cleanup-test-schemas.sh,create-test-user.sh,create_test_user.sh,generate-python-client.sh,generate_agents_md_index.py,load-core-pack.sh,load_core_pack.py,quick-test-happy-path.sh,seed_core_pack.sql,seed_runtimes.sql,setup-db.sh,setup-e2e-db.sh,setup_timer_echo_rule.sh,start-all-services.sh,start-e2e-services.sh,start_services_test.sh,status-all-services.sh,stop-all-services.sh,stop-e2e-services.sh,...} |work-summary:{2025-01-console-logging-cleanup.md,2025-01-token-refresh-improvements.md,2025-01-websocket-duplicate-connection-fix.md,2026-02-02-unified-runtime-verification.md,2026-02-03-canonical-message-types.md,2026-02-03-inquiry-queue-separation.md,2026-02-04-event-generation-fix.md,README.md,auto-populate-ref-from-label.md,buildkit-cache-implementation.md,collapsible-navigation-implementation.md,containerized-workers-implementation.md,docker-build-race-fix.md,docker-containerization-complete.md,docker-migrations-startup-fix.md,empty-pack-creation-ui.md,git-pack-installation.md,pack-runtime-environments.md,sensor-service-cleanup-standalone-only.md,sensor-worker-registration.md,...} |work-summary/changelogs:{API-COMPLETION-SUMMARY.md,CHANGELOG.md,CLEANUP_SUMMARY_2026-01-27.md,FIFO-ORDERING-COMPLETE.md,MIGRATION_CONSOLIDATION_SUMMARY.md,cli-integration-tests-summary.md,core-pack-setup-summary.md,web-ui-session-summary.md,webhook-phase3-summary.md,webhook-testing-summary.md,workflow-loader-summary.md} |work-summary/features:{AUTOMATIC-SCHEMA-CLEANUP-ENHANCEMENT.md,TESTING-TIMER-DEMO.md,e2e-test-schema-issues.md,openapi-spec-verification.md,sensor-runtime-implementation.md,sensor-service-implementation.md} |work-summary/migrations:{2026-01-17-orquesta-refactoring.md,2026-01-24-generated-client-migration.md,2026-01-27-workflow-migration.md,DEPLOYMENT-READY-performance-optimization.md,MIGRATION_NEXT_STEPS.md,migration_comparison.txt,migration_consolidation_status.md} 
|work-summary/phases:{2025-01-policy-ordering-plan.md,2025-01-secret-passing-fix-plan.md,2025-01-workflow-performance-analysis.md,PHASE-5-COMPLETE.md,PHASE_1_1_SUMMARY.txt,PROBLEM.md,Pitfall-Resolution-Plan.md,SENSOR_SERVICE_README.md,StackStorm-Lessons-Learned.md,StackStorm-Pitfalls-Analysis.md,orquesta-refactor-plan.md,phase-1-1-complete.md,phase-1.2-models-repositories-complete.md,phase-1.2-repositories-summary.md,phase-1.3-test-infrastructure-summary.md,phase-1.3-yaml-validation-complete.md,phase-1.4-COMPLETE.md,phase-1.4-loader-registration-progress.md,phase-1.5-COMPLETE.md,phase-1.6-pack-integration-complete.md,...} |work-summary/sessions:{2024-01-13-event-enforcement-endpoints.md,2024-01-13-inquiry-endpoints.md,2024-01-13-integration-testing-setup.md,2024-01-13-route-conflict-fix.md,2024-01-13-secret-management-api.md,2024-01-17-sensor-runtime.md,2024-01-17-sensor-service-session.md,2024-01-20-core-pack-unit-tests.md,2024-01-20-pack-testing-framework-phase1.md,2024-01-21-pack-registry-phase1.md,2024-01-21-pack-registry-phase2.md,2024-01-22-pack-registry-phase3.md,2024-01-22-pack-registry-phase4.md,2024-01-22-pack-registry-phase5.md,2024-01-22-pack-registry-phase6.md,2025-01-13-phase-1.4-session.md,2025-01-13-yaml-configuration.md,2025-01-16_migration_consolidation.md,2025-01-17-performance-optimization-complete.md,2025-01-18-timer-triggers.md,...} |work-summary/status:{ACCOMPLISHMENTS.md,COMPILATION_STATUS.md,FIFO-ORDERING-STATUS.md,FINAL_STATUS.md,PROGRESS.md,SENSOR_STATUS.md,TEST-STATUS.md,TODO.OLD.md,TODO.md}