4 Commits

Author SHA1 Message Date
David Culbreth
7ef2b59b23 working on arm64 native
Some checks failed
CI / Rustfmt (push) Successful in 24s
CI / Cargo Audit & Deny (push) Successful in 36s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 48s
CI / Web Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
CI / Clippy (push) Failing after 1m53s
Publish Images / Publish Docker Dist Bundle (push) Failing after 8s
Publish Images / Publish web (amd64) (push) Successful in 56s
CI / Security Advisory Checks (push) Successful in 38s
Publish Images / Publish web (arm64) (push) Successful in 3m29s
CI / Tests (push) Successful in 9m21s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 12m28s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m20s
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/agent (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
2026-03-27 16:37:46 -05:00
3a13bf754a fixing docker compose distribution
Some checks failed
CI / Rustfmt (push) Successful in 20s
CI / Clippy (push) Successful in 2m3s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 1m21s
CI / Security Blocking Checks (push) Successful in 10s
CI / Web Advisory Checks (push) Successful in 1m3s
CI / Security Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Tests (push) Successful in 8m46s
Publish Images / Publish web (arm64) (push) Successful in 3m20s
Publish Images / Publish Docker Dist Bundle (push) Failing after 9s
Publish Images / Publish web (amd64) (push) Successful in 52s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m20s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m30s
Publish Images / Publish agent (amd64) (push) Successful in 29s
Publish Images / Publish executor (amd64) (push) Successful in 35s
Publish Images / Publish api (amd64) (push) Successful in 42s
Publish Images / Publish notifier (amd64) (push) Successful in 35s
Publish Images / Publish agent (arm64) (push) Successful in 1m3s
Publish Images / Publish api (arm64) (push) Successful in 1m55s
Publish Images / Publish executor (arm64) (push) Successful in 2m1s
Publish Images / Publish notifier (arm64) (push) Successful in 1m54s
Publish Images / Publish manifest attune/agent (push) Successful in 10s
Publish Images / Publish manifest attune/api (push) Successful in 12s
Publish Images / Publish manifest attune/executor (push) Successful in 10s
Publish Images / Publish manifest attune/notifier (push) Successful in 9s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 15:39:07 -05:00
f4ef823f43 fixing audit finding
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 53s
CI / Web Advisory Checks (push) Successful in 34s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 36s
CI / Clippy (push) Successful in 2m8s
Publish Images / Publish Docker Dist Bundle (push) Failing after 8s
Publish Images / Publish web (amd64) (push) Successful in 53s
Publish Images / Publish web (arm64) (push) Successful in 3m28s
CI / Tests (push) Successful in 9m20s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m21s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m23s
Publish Images / Publish agent (amd64) (push) Successful in 23s
Publish Images / Publish api (amd64) (push) Successful in 33s
Publish Images / Publish notifier (amd64) (push) Successful in 38s
Publish Images / Publish executor (amd64) (push) Successful in 54s
Publish Images / Publish agent (arm64) (push) Successful in 59s
Publish Images / Publish executor (arm64) (push) Successful in 1m55s
Publish Images / Publish api (arm64) (push) Successful in 2m2s
Publish Images / Publish notifier (arm64) (push) Successful in 2m3s
Publish Images / Publish manifest attune/agent (push) Successful in 10s
Publish Images / Publish manifest attune/executor (push) Successful in 19s
Publish Images / Publish manifest attune/api (push) Successful in 21s
Publish Images / Publish manifest attune/notifier (push) Successful in 12s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 14:05:53 -05:00
ab7d31de2f fixing docker compose distribution 2026-03-26 14:04:57 -05:00
18 changed files with 658 additions and 100 deletions

.gitignore (vendored)

@@ -11,6 +11,7 @@ target/
 # Configuration files (keep *.example.yaml)
 config.yaml
 config.*.yaml
+!docker/distributable/config.docker.yaml
 !config.example.yaml
 !config.development.yaml
 !config.test.yaml

@@ -35,6 +36,7 @@ logs/
 # Build artifacts
 dist/
 build/
+artifacts/

 # Testing
 coverage/

@@ -80,3 +82,6 @@ docker-compose.override.yml
 packs.examples/
 packs.external/
 codex/
+
+# Compiled pack binaries (built via Docker or build-pack-binaries.sh)
+packs/core/sensors/attune-core-timer-sensor

@@ -77,7 +77,7 @@ attune/
 **Services**:
 - **Infrastructure**: postgres (TimescaleDB), rabbitmq, redis
-- **Init** (run-once): migrations, init-user, init-packs, init-agent
+- **Init** (run-once): migrations, init-user, init-pack-binaries, init-packs, init-agent
 - **Application**: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000)

 **Volumes** (named):

@@ -100,7 +100,8 @@ docker compose -f docker-compose.yaml -f docker-compose.agent.yaml up -d # Star
 ### Docker Build Optimization
 - **Active Dockerfiles**: `docker/Dockerfile.optimized`, `docker/Dockerfile.agent`, `docker/Dockerfile.web`, and `docker/Dockerfile.pack-binaries`
-- **Agent Dockerfile** (`docker/Dockerfile.agent`): Builds a statically-linked `attune-agent` binary using musl (`x86_64-unknown-linux-musl`). Three stages: `builder` (cross-compile), `agent-binary` (scratch — just the binary), `agent-init` (busybox — for volume population via `cp`). The binary has zero runtime dependencies (no glibc, no libssl). Build with `make docker-build-agent`.
+- **Agent Dockerfile** (`docker/Dockerfile.agent`): Builds statically-linked `attune-agent` and `attune-sensor-agent` binaries using musl. Uses `cargo-zigbuild` (zig as the cross-compilation backend) so that any target architecture can be built from any host — e.g., building `aarch64-unknown-linux-musl` on an x86_64 host or vice versa. The `RUST_TARGET` build arg controls the output architecture (`x86_64-unknown-linux-musl` default, or `aarch64-unknown-linux-musl` for arm64). Three stages: `builder` (cross-compile with cargo-zigbuild), `agent-binary` (scratch — just the binaries), `agent-init` (busybox — for volume population via `cp`). The binaries have zero runtime dependencies (no glibc, no libssl). Build with `make docker-build-agent` (amd64), `make docker-build-agent-arm64` (arm64), or `make docker-build-agent-all` (both). In `docker-compose.yaml`, set the `AGENT_RUST_TARGET=aarch64-unknown-linux-musl` env var to build arm64 agent binaries (defaults to x86_64).
+- **Pack Binaries Dockerfile** (`docker/Dockerfile.pack-binaries`): Builds statically-linked pack binaries (sensors, etc.) using musl + cargo-zigbuild for cross-compilation. The `RUST_TARGET` build arg controls the output architecture (`x86_64-unknown-linux-musl` default, or `aarch64-unknown-linux-musl` for arm64). Three stages: `builder` (cross-compile with cargo-zigbuild), `output` (scratch — just the binaries for `docker cp` extraction), `pack-binaries-init` (busybox — for Docker Compose volume population via `cp`). Build with `make docker-build-pack-binaries` (amd64), `make docker-build-pack-binaries-arm64` (arm64), or `make docker-build-pack-binaries-all` (both). In `docker-compose.yaml`, set the `PACK_BINARIES_RUST_TARGET=aarch64-unknown-linux-musl` env var to build arm64 pack binaries (defaults to x86_64). The `init-pack-binaries` Docker Compose service automatically builds and copies pack binaries into the `packs_data` volume before `init-packs` runs.
 - **Strategy**: Selective crate copying - only copy crates needed for each service (not entire workspace)
 - **Performance**: 90% faster incremental builds (~30 sec vs ~5 min for code changes)
 - **BuildKit cache mounts**: Persist cargo registry and compilation artifacts between builds

@@ -123,7 +124,7 @@ docker compose -f docker-compose.yaml -f docker-compose.agent.yaml up -d # Star
 - **Key Principle**: Packs are NOT copied into Docker images - they are mounted as volumes
 - **Volume Flow**: Host `./packs/` → `init-packs` service → `packs_data` volume → mounted in all services
 - **Benefits**: Update packs with restart (~5 sec) instead of rebuild (~5 min)
-- **Pack Binaries**: Built separately with `./scripts/build-pack-binaries.sh` (GLIBC compatibility)
+- **Pack Binaries**: Automatically built and deployed via the `init-pack-binaries` Docker Compose service (statically-linked musl binaries via cargo-zigbuild; supports cross-compilation via the `PACK_BINARIES_RUST_TARGET` env var). Can also be built manually with `./scripts/build-pack-binaries.sh` or `make docker-build-pack-binaries`. The `init-packs` service depends on `init-pack-binaries` and preserves any ELF binaries already present in the target `sensors/` directory (detected via ELF magic bytes with `od`) — it backs them up before copying host pack files and restores them afterward, preventing the host's stale dynamically-linked binary from overwriting the freshly-built static one.
 - **Development**: Use `./packs.dev/` for instant testing (direct bind mount, no restart needed)
 - **Documentation**: See `docs/QUICKREF-packs-volumes.md`
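The ELF-preservation step above keys off the ELF magic number: the first four bytes of an ELF binary are `0x7f 'E' 'L' 'F'`, which the shell-side check inspects with `od`. A minimal, self-contained Rust sketch of the equivalent check (function names are illustrative, not the project's actual code):

```rust
// True if `bytes` begins with the ELF magic number 0x7f 'E' 'L' 'F' —
// the same property the init-packs step tests with `od` on the shell side.
fn is_elf_magic(bytes: &[u8]) -> bool {
    bytes.len() >= 4 && bytes[..4] == [0x7f, b'E', b'L', b'F']
}

// Read the first four bytes of a file and apply the check.
fn is_elf_file(path: &str) -> std::io::Result<bool> {
    use std::io::Read;
    let mut magic = [0u8; 4];
    let n = std::fs::File::open(path)?.read(&mut magic)?;
    Ok(n == 4 && is_elf_magic(&magic))
}

fn main() {
    // A shell script starts with "#!", not the ELF magic.
    assert!(!is_elf_magic(b"#!/bin/sh\n"));
    assert!(is_elf_magic(&[0x7f, b'E', b'L', b'F', 2, 1]));
    let _ = is_elf_file; // used when scanning a sensors/ directory
    println!("ok");
}
```

Checking only the magic bytes deliberately matches dynamically- and statically-linked binaries alike; the goal is just to tell "compiled binary" apart from scripts and YAML when backing up the `sensors/` directory.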
@@ -273,7 +274,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete
 - **Pack Volume Strategy**: Packs are mounted as volumes (NOT copied into Docker images)
   - Host `./packs/` → `packs_data` volume via `init-packs` service → mounted at `/opt/attune/packs` in all services
   - Development packs in `./packs.dev/` are bind-mounted directly for instant updates
-- **Pack Binaries**: Native binaries (sensors) built separately with `./scripts/build-pack-binaries.sh`
+- **Pack Binaries**: Native binaries (sensors) automatically built by the `init-pack-binaries` Docker Compose service (statically-linked musl, cross-arch via `PACK_BINARIES_RUST_TARGET`). Can also be built manually with `./scripts/build-pack-binaries.sh` or `make docker-build-pack-binaries`.
 - **Action Script Resolution**: Worker constructs file paths as `{packs_base_dir}/{pack_ref}/actions/{entrypoint}`
 - **Workflow Action YAML (`workflow_file` field)**: An action YAML may include a `workflow_file` field (e.g., `workflow_file: workflows/timeline_demo.yaml`) pointing to a workflow definition file relative to the `actions/` directory. When present, the `PackComponentLoader` reads and parses the referenced workflow YAML, creates/updates a `workflow_definition` record, and links the action to it via `action.workflow_def`. This separates action-level metadata (ref, label, parameters, policies) from the workflow graph (tasks, transitions, variables), and allows **multiple actions to reference the same workflow file** with different parameter schemas or policy configurations. Workflow actions have no `runner_type` (runtime is `None`) — the executor orchestrates child task executions rather than sending to a worker.
 - **Action-linked workflow files omit action-level metadata**: Workflow files referenced via `workflow_file` should contain **only the execution graph**: `version`, `vars`, `tasks`, `output_map`. The `ref`, `label`, `description`, `parameters`, `output`, and `tags` fields are omitted — the action YAML is the single authoritative source for those values. The `WorkflowDefinition` parser accepts empty `ref`/`label` (defaults to `""`), and the loader / registrar fall back to the action YAML (or filename-derived values) when they are missing. Standalone workflow files (in `workflows/`) still carry their own `ref`/`label` since they have no companion action YAML.

@@ -683,7 +684,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
 - `docker/Dockerfile.optimized` - Optimized service builds (api, executor, notifier)
 - `docker/Dockerfile.agent` - Statically-linked agent binary (musl, for injection into any container)
 - `docker/Dockerfile.web` - Web UI build
-- `docker/Dockerfile.pack-binaries` - Separate pack binary builder
+- `docker/Dockerfile.pack-binaries` - Separate pack binary builder (cargo-zigbuild + musl static linking, 3 stages: builder, output, pack-binaries-init)
 - `scripts/build-pack-binaries.sh` - Build pack binaries script

 ## Common Pitfalls to Avoid

@@ -703,7 +704,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
 14. **REMEMBER** schema is determined by `search_path`, not hardcoded in queries (production uses `attune`, development uses `public`)
 15. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
 16. **REMEMBER** packs are volumes - update with restart, not rebuild
-17. **REMEMBER** to build pack binaries separately: `./scripts/build-pack-binaries.sh`
+17. **REMEMBER** pack binaries are automatically built by `init-pack-binaries` in Docker Compose. For manual builds use `make docker-build-pack-binaries` or `./scripts/build-pack-binaries.sh`.
 18. **REMEMBER** when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable with ~4 updates per row).
 19. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history`
 20. **NEVER** use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow`, `execution.workflow_def` exist in SQL but not in the `Execution` model). Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`). Mismatched columns cause runtime deserialization failures.
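Pitfall 20's `SELECT_COLUMNS` convention can be sketched in isolation. This is a hedged illustration of the pattern, not the actual `execution.rs` (the column names and helper below are hypothetical):

```rust
// Columns that map 1:1 onto the Rust model. DB-only columns (flags used
// solely in SQL) are deliberately excluded, so row decoding never meets
// a column the `FromRow` struct doesn't know about.
pub const SELECT_COLUMNS: &str = "id, status, created_at";

// Every query — inside or outside the repository module — is built from
// the same constant instead of `SELECT *`.
pub fn select_by_status_sql() -> String {
    format!("SELECT {SELECT_COLUMNS} FROM execution WHERE status = $1")
}

fn main() {
    assert_eq!(
        select_by_status_sql(),
        "SELECT id, status, created_at FROM execution WHERE status = $1"
    );
    println!("ok");
}
```

The design point is that adding a DB-only column to the table then requires no code change anywhere, while adding a model column forces exactly one edit (the constant) rather than an audit of every query.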

@@ -5,8 +5,10 @@
 docker-build-worker-node docker-build-worker-full deny ci-rust ci-web-blocking ci-web-advisory \
 ci-security-blocking ci-security-advisory ci-blocking ci-advisory \
 fmt-check pre-commit install-git-hooks \
-build-agent docker-build-agent run-agent run-agent-release \
-docker-up-agent docker-down-agent
+build-agent docker-build-agent docker-build-agent-arm64 docker-build-agent-all \
+run-agent run-agent-release \
+docker-up-agent docker-down-agent \
+docker-build-pack-binaries docker-build-pack-binaries-arm64 docker-build-pack-binaries-all

 # Default target
 help:

@@ -63,13 +65,20 @@ help:
 @echo " make docker-down - Stop services"
 @echo ""
 @echo "Agent (Universal Worker):"
 @echo " make build-agent - Build statically-linked agent binary (musl)"
-@echo " make docker-build-agent - Build agent Docker image"
-@echo " make run-agent - Run agent in development mode"
-@echo " make run-agent-release - Run agent in release mode"
+@echo " make docker-build-agent - Build agent Docker image (amd64, default)"
+@echo " make docker-build-agent-arm64 - Build agent Docker image (arm64)"
+@echo " make docker-build-agent-all - Build agent Docker images (amd64 + arm64)"
+@echo " make run-agent - Run agent in development mode"
+@echo " make run-agent-release - Run agent in release mode"
 @echo " make docker-up-agent - Start all services + agent workers (ruby, etc.)"
 @echo " make docker-down-agent - Stop agent stack"
 @echo ""
+@echo "Pack Binaries:"
+@echo " make docker-build-pack-binaries - Build pack binaries Docker image (amd64, default)"
+@echo " make docker-build-pack-binaries-arm64 - Build pack binaries Docker image (arm64)"
+@echo " make docker-build-pack-binaries-all - Build pack binaries Docker images (amd64 + arm64)"
+@echo ""
 @echo "Development:"
 @echo " make watch - Watch and rebuild on changes"
 @echo " make install-tools - Install development tools"

@@ -240,6 +249,9 @@ docker-build-web:
 # Agent binary (statically-linked for injection into any container)
 AGENT_RUST_TARGET ?= x86_64-unknown-linux-musl
+
+# Pack binaries (statically-linked for packs volume)
+PACK_BINARIES_RUST_TARGET ?= x86_64-unknown-linux-musl

 build-agent:
 @echo "Installing musl target (if not already installed)..."
 rustup target add $(AGENT_RUST_TARGET) 2>/dev/null || true

@@ -254,9 +266,20 @@ build-agent:
 @ls -lh target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent

 docker-build-agent:
-@echo "Building agent Docker image (statically-linked binary)..."
+@echo "Building agent Docker image ($(AGENT_RUST_TARGET))..."
 DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(AGENT_RUST_TARGET) --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
-@echo "✅ Agent image built: attune-agent:latest"
+@echo "✅ Agent image built: attune-agent:latest ($(AGENT_RUST_TARGET))"
+
+docker-build-agent-arm64:
+@echo "Building arm64 agent Docker image..."
+DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target agent-init -f docker/Dockerfile.agent -t attune-agent:arm64 .
+@echo "✅ Agent image built: attune-agent:arm64"
+
+docker-build-agent-all:
+@echo "Building agent Docker images for all architectures..."
+$(MAKE) docker-build-agent
+$(MAKE) docker-build-agent-arm64
+@echo "✅ All agent images built: attune-agent:latest (amd64), attune-agent:arm64"

 run-agent:
 cargo run --bin attune-agent

@@ -264,6 +287,23 @@ run-agent:
 run-agent-release:
 cargo run --bin attune-agent --release

+# Pack binaries (statically-linked for packs volume)
+docker-build-pack-binaries:
+@echo "Building pack binaries Docker image ($(PACK_BINARIES_RUST_TARGET))..."
+DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(PACK_BINARIES_RUST_TARGET) --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:latest .
+@echo "✅ Pack binaries image built: attune-pack-builder:latest ($(PACK_BINARIES_RUST_TARGET))"
+
+docker-build-pack-binaries-arm64:
+@echo "Building arm64 pack binaries Docker image..."
+DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:arm64 .
+@echo "✅ Pack binaries image built: attune-pack-builder:arm64"
+
+docker-build-pack-binaries-all:
+@echo "Building pack binaries Docker images for all architectures..."
+$(MAKE) docker-build-pack-binaries
+$(MAKE) docker-build-pack-binaries-arm64
+@echo "✅ All pack binary images built: attune-pack-builder:latest (amd64), attune-pack-builder:arm64"

 run-sensor-agent:
 cargo run --bin attune-sensor-agent

@@ -11,7 +11,7 @@ stringData:
 ATTUNE__SECURITY__ENCRYPTION_KEY: {{ .Values.security.encryptionKey | quote }}
 ATTUNE__DATABASE__URL: {{ include "attune.databaseUrl" . | quote }}
 ATTUNE__MESSAGE_QUEUE__URL: {{ include "attune.rabbitmqUrl" . | quote }}
-ATTUNE__CACHE__URL: {{ include "attune.redisUrl" . | quote }}
+ATTUNE__REDIS__URL: {{ include "attune.redisUrl" . | quote }}
 DB_HOST: {{ include "attune.postgresqlServiceName" . | quote }}
 DB_PORT: {{ .Values.database.port | quote }}
 DB_USER: {{ .Values.database.username | quote }}
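Names like `ATTUNE__REDIS__URL` follow the common double-underscore convention for nested configuration keys: prefix, then section, then field — presumably mapping onto the config's `redis.url`. A small sketch of that key translation, assuming that convention (the function is illustrative, not the project's loader):

```rust
// Translate an `ATTUNE__SECTION__FIELD` env var name into a dotted
// config path, e.g. "ATTUNE__REDIS__URL" -> "redis.url".
// Returns None for variables outside the ATTUNE__ namespace.
fn env_key_to_path(key: &str) -> Option<String> {
    let rest = key.strip_prefix("ATTUNE__")?;
    Some(
        rest.split("__")
            .map(|part| part.to_ascii_lowercase())
            .collect::<Vec<_>>()
            .join("."),
    )
}

fn main() {
    assert_eq!(
        env_key_to_path("ATTUNE__REDIS__URL").as_deref(),
        Some("redis.url")
    );
    assert_eq!(env_key_to_path("DB_HOST"), None);
    println!("ok");
}
```

Under this reading, renaming `ATTUNE__CACHE__URL` to `ATTUNE__REDIS__URL` moves the value from a `cache.url` key to `redis.url`, which is why the chart change must ship together with whatever config struct consumes it.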

@@ -139,7 +139,8 @@ fn conn_settings(config: &LdapConfig) -> LdapConnSettings {
 /// Open a new LDAP connection.
 async fn connect(config: &LdapConfig) -> Result<Ldap, ApiError> {
     let settings = conn_settings(config);
-    let (conn, ldap) = LdapConnAsync::with_settings(settings, &config.url)
+    let url = config.url.as_deref().unwrap_or_default();
+    let (conn, ldap) = LdapConnAsync::with_settings(settings, url)
         .await
         .map_err(|err| {
             ApiError::InternalServerError(format!("Failed to connect to LDAP server: {err}"))

@@ -333,7 +334,7 @@ fn extract_claims(config: &LdapConfig, entry: &SearchEntry) -> LdapUserClaims {
         .unwrap_or_default();

     LdapUserClaims {
-        server_url: config.url.clone(),
+        server_url: config.url.clone().unwrap_or_default(),
         dn: entry.dn.clone(),
         login: first_attr(&config.login_attr),
         email: first_attr(&config.email_attr),

View File

@@ -126,15 +126,17 @@ pub async fn build_login_redirect(
.map_err(|err| { .map_err(|err| {
ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}")) ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}"))
})?; })?;
let redirect_uri = RedirectUrl::new(oidc.redirect_uri.clone()).map_err(|err| { let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default();
let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| {
ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}")) ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}"))
})?; })?;
let client_secret = oidc.client_secret.clone().ok_or_else(|| { let client_secret = oidc.client_secret.clone().ok_or_else(|| {
ApiError::InternalServerError("OIDC client secret is missing".to_string()) ApiError::InternalServerError("OIDC client secret is missing".to_string())
})?; })?;
let client_id = oidc.client_id.clone().unwrap_or_default();
let client = CoreClient::from_provider_metadata( let client = CoreClient::from_provider_metadata(
discovery.metadata.clone(), discovery.metadata.clone(),
ClientId::new(oidc.client_id.clone()), ClientId::new(client_id),
Some(ClientSecret::new(client_secret)), Some(ClientSecret::new(client_secret)),
) )
.set_redirect_uri(redirect_uri); .set_redirect_uri(redirect_uri);
@@ -238,15 +240,17 @@ pub async fn handle_callback(
.map_err(|err| { .map_err(|err| {
ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}")) ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}"))
})?; })?;
let redirect_uri = RedirectUrl::new(oidc.redirect_uri.clone()).map_err(|err| { let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default();
let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| {
ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}")) ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}"))
})?; })?;
let client_secret = oidc.client_secret.clone().ok_or_else(|| { let client_secret = oidc.client_secret.clone().ok_or_else(|| {
ApiError::InternalServerError("OIDC client secret is missing".to_string()) ApiError::InternalServerError("OIDC client secret is missing".to_string())
})?; })?;
let client_id = oidc.client_id.clone().unwrap_or_default();
let client = CoreClient::from_provider_metadata( let client = CoreClient::from_provider_metadata(
discovery.metadata.clone(), discovery.metadata.clone(),
ClientId::new(oidc.client_id.clone()), ClientId::new(client_id),
Some(ClientSecret::new(client_secret)), Some(ClientSecret::new(client_secret)),
) )
.set_redirect_uri(redirect_uri); .set_redirect_uri(redirect_uri);
@@ -336,7 +340,7 @@ pub async fn build_logout_redirect(
pairs.append_pair("id_token_hint", &id_token_hint); pairs.append_pair("id_token_hint", &id_token_hint);
} }
pairs.append_pair("post_logout_redirect_uri", &post_logout_redirect_uri); pairs.append_pair("post_logout_redirect_uri", &post_logout_redirect_uri);
pairs.append_pair("client_id", &oidc.client_id); pairs.append_pair("client_id", oidc.client_id.as_deref().unwrap_or_default());
} }
String::from(url) String::from(url)
} else { } else {
@@ -481,7 +485,8 @@ fn oidc_config(state: &SharedState) -> Result<OidcConfig, ApiError> {
 }

 async fn fetch_discovery_document(oidc: &OidcConfig) -> Result<OidcDiscoveryDocument, ApiError> {
-    let discovery = reqwest::get(&oidc.discovery_url).await.map_err(|err| {
+    let discovery_url = oidc.discovery_url.as_deref().unwrap_or_default();
+    let discovery = reqwest::get(discovery_url).await.map_err(|err| {
         ApiError::InternalServerError(format!("Failed to fetch OIDC discovery document: {err}"))
     })?;
@@ -621,7 +626,7 @@ async fn verify_id_token(
     let issuer = discovery.metadata.issuer().to_string();
     let mut validation = Validation::new(algorithm);
     validation.set_issuer(&[issuer.as_str()]);
-    validation.set_audience(&[oidc.client_id.as_str()]);
+    validation.set_audience(&[oidc.client_id.as_deref().unwrap_or_default()]);
     validation.set_required_spec_claims(&["exp", "iat", "iss", "sub", "aud"]);
     validation.validate_nbf = false;
@@ -740,7 +745,8 @@ fn should_use_secure_cookies(state: &SharedState) -> bool {
         .security
         .oidc
         .as_ref()
-        .map(|oidc| oidc.redirect_uri.starts_with("https://"))
+        .and_then(|oidc| oidc.redirect_uri.as_deref())
+        .map(|uri| uri.starts_with("https://"))
         .unwrap_or(false)
 }
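The `should_use_secure_cookies` change above is a small example of the Option-chaining pattern this commit applies throughout: `and_then` collapses "no OIDC section" and "no `redirect_uri`" into a single `None`, and `unwrap_or(false)` picks the safe default. A standalone sketch (the struct names here are simplified stand-ins, not the crate's real types):

```rust
// Simplified stand-ins for the config types; only the fields used here.
struct OidcConfig {
    redirect_uri: Option<String>,
}

struct Security {
    oidc: Option<OidcConfig>,
}

fn should_use_secure_cookies(security: &Security) -> bool {
    security
        .oidc
        .as_ref()
        // Missing OIDC section and missing redirect_uri both become None.
        .and_then(|oidc| oidc.redirect_uri.as_deref())
        .map(|uri| uri.starts_with("https://"))
        .unwrap_or(false)
}

fn main() {
    assert!(!should_use_secure_cookies(&Security { oidc: None }));
    assert!(!should_use_secure_cookies(&Security {
        oidc: Some(OidcConfig { redirect_uri: Some("http://example.com/cb".into()) }),
    }));
    assert!(should_use_secure_cookies(&Security {
        oidc: Some(OidcConfig { redirect_uri: Some("https://example.com/cb".into()) }),
    }));
    println!("ok");
}
```

The key design point: an absent OIDC config now degrades to "not secure" instead of failing to compile against a mandatory `String` field.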

View File

@@ -355,10 +355,14 @@ pub struct OidcConfig {
     pub enabled: bool,
     /// OpenID Provider discovery document URL.
-    pub discovery_url: String,
+    /// Required when `enabled` is true; ignored otherwise.
+    #[serde(default)]
+    pub discovery_url: Option<String>,
     /// Confidential client ID.
-    pub client_id: String,
+    /// Required when `enabled` is true; ignored otherwise.
+    #[serde(default)]
+    pub client_id: Option<String>,
     /// Provider name used in login-page overrides such as `?auth=<provider_name>`.
     #[serde(default = "default_oidc_provider_name")]
@@ -374,7 +378,9 @@ pub struct OidcConfig {
     pub client_secret: Option<String>,
     /// Redirect URI registered with the provider.
-    pub redirect_uri: String,
+    /// Required when `enabled` is true; ignored otherwise.
+    #[serde(default)]
+    pub redirect_uri: Option<String>,
     /// Optional post-logout redirect URI.
     pub post_logout_redirect_uri: Option<String>,
@@ -396,7 +402,9 @@ pub struct LdapConfig {
     pub enabled: bool,
     /// LDAP server URL (e.g., "ldap://ldap.example.com:389" or "ldaps://ldap.example.com:636").
-    pub url: String,
+    /// Required when `enabled` is true; ignored otherwise.
+    #[serde(default)]
+    pub url: Option<String>,
     /// Bind DN template. Use `{login}` as placeholder for the user-supplied login.
     /// Example: "uid={login},ou=users,dc=example,dc=com"
@@ -985,14 +993,20 @@ impl Config {
         if let Some(oidc) = &self.security.oidc {
             if oidc.enabled {
-                if oidc.discovery_url.trim().is_empty() {
+                if oidc
+                    .discovery_url
+                    .as_deref()
+                    .unwrap_or("")
+                    .trim()
+                    .is_empty()
+                {
                     return Err(crate::Error::validation(
-                        "OIDC discovery URL cannot be empty when OIDC is enabled",
+                        "OIDC discovery URL is required when OIDC is enabled",
                     ));
                 }
-                if oidc.client_id.trim().is_empty() {
+                if oidc.client_id.as_deref().unwrap_or("").trim().is_empty() {
                     return Err(crate::Error::validation(
-                        "OIDC client ID cannot be empty when OIDC is enabled",
+                        "OIDC client ID is required when OIDC is enabled",
                     ));
                 }
                 if oidc
@@ -1006,9 +1020,19 @@ impl Config {
                         "OIDC client secret is required when OIDC is enabled",
                     ));
                 }
-                if oidc.redirect_uri.trim().is_empty() {
+                if oidc.redirect_uri.as_deref().unwrap_or("").trim().is_empty() {
                     return Err(crate::Error::validation(
-                        "OIDC redirect URI cannot be empty when OIDC is enabled",
+                        "OIDC redirect URI is required when OIDC is enabled",
+                    ));
+                }
+            }
+        }
+        if let Some(ldap) = &self.security.ldap {
+            if ldap.enabled {
+                if ldap.url.as_deref().unwrap_or("").trim().is_empty() {
+                    return Err(crate::Error::validation(
+                        "LDAP server URL is required when LDAP is enabled",
                     ));
                 }
             }
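All of the validation branches above follow one shape: a field is required only when its provider is enabled, and a missing value and a whitespace-only value are rejected identically via `as_deref().unwrap_or("").trim().is_empty()`. A minimal sketch of that pattern (the `require_when_enabled` helper and `ValidationError` type are illustrative, not names from this codebase):

```rust
// Hypothetical error type standing in for crate::Error::validation(...).
#[derive(Debug, PartialEq)]
struct ValidationError(String);

// "Required when enabled" check: None and blank strings fail the same way.
fn require_when_enabled(
    enabled: bool,
    value: Option<&str>,
    message: &str,
) -> Result<(), ValidationError> {
    if enabled && value.unwrap_or("").trim().is_empty() {
        return Err(ValidationError(message.to_string()));
    }
    Ok(())
}

fn main() {
    // Disabled providers never require their fields.
    assert!(require_when_enabled(false, None, "LDAP server URL is required").is_ok());
    // Enabled providers reject both missing and blank values.
    assert!(require_when_enabled(true, None, "LDAP server URL is required").is_err());
    assert!(require_when_enabled(true, Some("   "), "LDAP server URL is required").is_err());
    assert!(require_when_enabled(true, Some("ldap://localhost:389"), "...").is_ok());
    println!("ok");
}
```

This is why the struct fields can safely become `Option<String>`: the disabled case no longer needs dummy values just to satisfy deserialization.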
@@ -1172,6 +1196,31 @@ mod tests {
         assert!(config.validate().is_err());
     }

+    #[test]
+    fn test_oidc_config_disabled_no_urls_required() {
+        let yaml = r#"
+enabled: false
+"#;
+        let cfg: OidcConfig = serde_yaml_ng::from_str(yaml).unwrap();
+        assert!(!cfg.enabled);
+        assert!(cfg.discovery_url.is_none());
+        assert!(cfg.client_id.is_none());
+        assert!(cfg.redirect_uri.is_none());
+        assert!(cfg.client_secret.is_none());
+        assert_eq!(cfg.provider_name, "oidc");
+    }
+
+    #[test]
+    fn test_ldap_config_disabled_no_url_required() {
+        let yaml = r#"
+enabled: false
+"#;
+        let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap();
+        assert!(!cfg.enabled);
+        assert!(cfg.url.is_none());
+        assert_eq!(cfg.provider_name, "ldap");
+    }
+
     #[test]
     fn test_ldap_config_defaults() {
         let yaml = r#"
@@ -1182,7 +1231,7 @@ client_id: "test"
         let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap();
         assert!(cfg.enabled);
-        assert_eq!(cfg.url, "ldap://localhost:389");
+        assert_eq!(cfg.url.as_deref(), Some("ldap://localhost:389"));
         assert_eq!(cfg.user_filter, "(uid={login})");
         assert_eq!(cfg.login_attr, "uid");
         assert_eq!(cfg.email_attr, "mail");
@@ -1222,7 +1271,7 @@ provider_icon_url: "https://corp.com/icon.svg"
         let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap();
         assert!(cfg.enabled);
-        assert_eq!(cfg.url, "ldaps://ldap.corp.com:636");
+        assert_eq!(cfg.url.as_deref(), Some("ldaps://ldap.corp.com:636"));
         assert_eq!(
             cfg.bind_dn_template.as_deref(),
             Some("uid={login},ou=people,dc=corp,dc=com")

View File

@@ -91,6 +91,30 @@ services:
         - attune-network
     restart: on-failure

+  # Build and extract statically-linked pack binaries (sensors, etc.)
+  # These binaries are built with musl for cross-architecture compatibility
+  # and placed directly into the packs volume for sensor containers to use.
+  init-pack-binaries:
+    build:
+      context: .
+      dockerfile: docker/Dockerfile.pack-binaries
+      target: pack-binaries-init
+      args:
+        BUILDKIT_INLINE_CACHE: 1
+        RUST_TARGET: ${PACK_BINARIES_RUST_TARGET:-x86_64-unknown-linux-musl}
+    container_name: attune-init-pack-binaries
+    volumes:
+      - packs_data:/opt/attune/packs
+    entrypoint:
+      [
+        "/bin/sh",
+        "-c",
+        "mkdir -p /opt/attune/packs/core/sensors && cp /pack-binaries/attune-core-timer-sensor /opt/attune/packs/core/sensors/attune-core-timer-sensor && chmod +x /opt/attune/packs/core/sensors/attune-core-timer-sensor && echo 'Pack binaries copied successfully'",
+      ]
+    restart: "no"
+    networks:
+      - attune-network
+
   # Initialize builtin packs
   # Copies pack files to shared volume and loads them into database
   init-packs:
@@ -117,6 +141,8 @@ services:
       DEFAULT_ADMIN_PERMISSION_SET_REF: core.admin
     command: ["/bin/sh", "/init-packs.sh"]
     depends_on:
+      init-pack-binaries:
+        condition: service_completed_successfully
       migrations:
         condition: service_completed_successfully
       postgres:
@@ -136,6 +162,7 @@ services:
         target: agent-init
         args:
           BUILDKIT_INLINE_CACHE: 1
+          RUST_TARGET: ${AGENT_RUST_TARGET:-x86_64-unknown-linux-musl}
     container_name: attune-init-agent
     volumes:
       - agent_bin:/opt/attune/agent
@@ -209,8 +236,8 @@ services:
       ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
       # Message Queue
       ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
-      # Cache
-      ATTUNE__CACHE__URL: redis://redis:6379
+      # Redis
+      ATTUNE__REDIS__URL: redis://redis:6379
       # Worker config override
       ATTUNE__WORKER__WORKER_TYPE: container
     ports:
@@ -263,7 +290,7 @@ services:
       ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
       ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
       ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
-      ATTUNE__CACHE__URL: redis://redis:6379
+      ATTUNE__REDIS__URL: redis://redis:6379
       ATTUNE__WORKER__WORKER_TYPE: container
     volumes:
       - ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro

View File

@@ -4,18 +4,31 @@
 # using musl, suitable for injection into arbitrary runtime containers.
 #
 # Stages:
-#   builder      - Cross-compile with musl for a fully static binary
+#   builder      - Cross-compile with cargo-zigbuild + musl for a fully static binary
 #   agent-binary - Minimal scratch image containing just the binary
 #   agent-init   - BusyBox-based image for use as a Kubernetes init container
 #                  or Docker Compose volume-populating service (has `cp`)
 #
+# Architecture handling:
+#   Uses cargo-zigbuild for cross-compilation, which bundles all necessary
+#   cross-compilation toolchains internally. This allows building for any
+#   target architecture from any host — e.g., building aarch64 musl binaries
+#   on an x86_64 host, or vice versa. This matches the CI/CD pipeline approach.
+#
+#   The RUST_TARGET build arg controls the output architecture:
+#     x86_64-unknown-linux-musl  -> amd64 static binary (default)
+#     aarch64-unknown-linux-musl -> arm64 static binary
+#
 # Usage:
+#   # Build for the default architecture (x86_64):
+#   DOCKER_BUILDKIT=1 docker buildx build --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
+#
+#   # Build for arm64:
+#   DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
+#
 #   # Build the minimal binary-only image:
 #   DOCKER_BUILDKIT=1 docker buildx build --target agent-binary -f docker/Dockerfile.agent -t attune-agent:binary .
 #
+#   # Build the init container image (for volume population via `cp`):
+#   DOCKER_BUILDKIT=1 docker buildx build --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
+#
 #   # Use in docker-compose.yaml to populate a shared volume:
 #   #   agent-init:
 #   #     image: attune-agent:latest
@@ -37,14 +50,30 @@ FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder
 ARG RUST_TARGET

-# Install musl toolchain for static linking
+# Install build dependencies.
+# - musl-tools: provides the musl libc headers needed for musl target builds
+# - python3 + pip: needed to install ziglang (zig is the cross-compilation backend)
+# - pkg-config, libssl-dev: needed for native dependency detection during build
+# - file, binutils: for verifying the resulting binaries (file, strip)
 RUN apt-get update && apt-get install -y \
     musl-tools \
     pkg-config \
     libssl-dev \
     ca-certificates \
+    file \
+    binutils \
+    python3 \
+    python3-pip \
     && rm -rf /var/lib/apt/lists/*

+# Install zig (provides cross-compilation toolchains for all architectures)
+# and cargo-zigbuild (cargo subcommand that uses zig as the linker/compiler).
+# This replaces native musl-gcc and avoids the -m64 flag mismatch that occurs
+# when the host arch doesn't match the target arch (e.g., building x86_64 musl
+# binaries on an arm64 host).
+RUN pip3 install --break-system-packages --no-cache-dir ziglang && \
+    cargo install --locked cargo-zigbuild
+
 # Add the requested musl target for fully static binaries
 RUN rustup target add ${RUST_TARGET}
@@ -96,25 +125,30 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
 # ---------------------------------------------------------------------------
 # Build layer
-# Copy real source code and compile only the agent binary with musl
+# Copy real source code and compile only the agent binaries with musl
 # ---------------------------------------------------------------------------
 COPY migrations/ ./migrations/
 COPY crates/ ./crates/

 # Build the injected agent binaries, statically linked with musl.
+# Uses cargo-zigbuild so that cross-compilation works regardless of host arch.
 # Uses a dedicated cache ID (agent-target) so the musl target directory
 # doesn't collide with the glibc target cache used by other Dockerfiles.
 RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
     --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \
     --mount=type=cache,id=agent-target,target=/build/target,sharing=locked \
-    cargo build --release --target ${RUST_TARGET} --bin attune-agent --bin attune-sensor-agent && \
+    cargo zigbuild --release --target ${RUST_TARGET} --bin attune-agent --bin attune-sensor-agent && \
     cp /build/target/${RUST_TARGET}/release/attune-agent /build/attune-agent && \
     cp /build/target/${RUST_TARGET}/release/attune-sensor-agent /build/attune-sensor-agent

-# Strip the binaries to minimize size
-RUN strip /build/attune-agent && strip /build/attune-sensor-agent
+# Strip the binaries to minimize size.
+# When cross-compiling for a different architecture, the host strip may not
+# understand the foreign binary format. In that case we skip stripping — the
+# binary is still functional, just slightly larger.
+RUN (strip /build/attune-agent 2>/dev/null && echo "stripped attune-agent" || echo "strip skipped for attune-agent (cross-arch binary)") && \
+    (strip /build/attune-sensor-agent 2>/dev/null && echo "stripped attune-sensor-agent" || echo "strip skipped for attune-sensor-agent (cross-arch binary)")

-# Verify the binaries are statically linked and functional
+# Verify the binaries exist and show their details
 RUN ls -lh /build/attune-agent /build/attune-sensor-agent && \
     file /build/attune-agent && \
     file /build/attune-sensor-agent && \

View File

@@ -1,12 +1,26 @@
-# Dockerfile for building pack binaries independently
+# Dockerfile for building statically-linked pack binaries independently
 #
-# This Dockerfile builds native pack binaries (sensors, etc.) with GLIBC compatibility
-# The binaries are built separately from service containers and placed in ./packs/
+# This Dockerfile builds native pack binaries (sensors, etc.) as fully static
+# musl binaries with zero runtime dependencies. Uses cargo-zigbuild for
+# cross-compilation, allowing builds for any target architecture from any host
+# (e.g., building x86_64 musl binaries on an arm64 Mac, or vice versa).
+#
+# Architecture handling:
+#   The RUST_TARGET build arg controls the output architecture:
+#     x86_64-unknown-linux-musl  -> amd64 static binary (default)
+#     aarch64-unknown-linux-musl -> arm64 static binary
 #
 # Usage:
-#   docker build -f docker/Dockerfile.pack-binaries -t attune-pack-builder .
+#   # Build for the default architecture (x86_64):
+#   DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.pack-binaries -t attune-pack-builder .
+#
+#   # Build for arm64:
+#   DOCKER_BUILDKIT=1 docker build --build-arg RUST_TARGET=aarch64-unknown-linux-musl \
+#     -f docker/Dockerfile.pack-binaries -t attune-pack-builder .
+#
+#   # Extract binaries:
 #   docker create --name pack-binaries attune-pack-builder
-#   docker cp pack-binaries:/build/pack-binaries/. ./packs/
+#   docker cp pack-binaries:/pack-binaries/. ./packs/
 #   docker rm pack-binaries
 #
 # Or use the provided script:
@@ -14,25 +28,56 @@
 ARG RUST_VERSION=1.92
 ARG DEBIAN_VERSION=bookworm
+ARG RUST_TARGET=x86_64-unknown-linux-musl

 # ============================================================================
-# Stage 1: Builder - Build pack binaries with GLIBC 2.36
+# Stage 1: Builder - Cross-compile statically-linked pack binaries with musl
 # ============================================================================
 FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder

-# Install build dependencies
+ARG RUST_TARGET
+
+# Install build dependencies.
+# - musl-tools: provides the musl libc headers needed for musl target builds
+# - python3 + pip: needed to install ziglang (zig is the cross-compilation backend)
+# - pkg-config, libssl-dev: needed for native dependency detection during build
+# - file, binutils: for verifying and stripping the resulting binaries
 RUN apt-get update && apt-get install -y \
+    musl-tools \
     pkg-config \
     libssl-dev \
     ca-certificates \
+    file \
+    binutils \
+    python3 \
+    python3-pip \
     && rm -rf /var/lib/apt/lists/*

+# Install zig (provides cross-compilation toolchains for all architectures)
+# and cargo-zigbuild (cargo subcommand that uses zig as the linker/compiler).
+# This replaces native musl-gcc and avoids the -m64 flag mismatch that occurs
+# when the host arch doesn't match the target arch (e.g., building x86_64 musl
+# binaries on an arm64 host).
+RUN pip3 install --break-system-packages --no-cache-dir ziglang && \
+    cargo install --locked cargo-zigbuild
+
+# Add the requested musl target for fully static binaries
+RUN rustup target add ${RUST_TARGET}
+
 WORKDIR /build

 # Increase rustc stack size to prevent SIGSEGV during release builds
 ENV RUST_MIN_STACK=67108864

-# Copy workspace configuration
+# Enable SQLx offline mode — compile-time query checking without a live database
+ENV SQLX_OFFLINE=true
+
+# ---------------------------------------------------------------------------
+# Dependency caching layer
+# Copy only Cargo metadata first so `cargo fetch` is cached when only source
+# code changes. This follows the same selective-copy optimization pattern as
+# the other active Dockerfiles in this directory.
+# ---------------------------------------------------------------------------
 COPY Cargo.toml Cargo.lock ./

 # Copy all workspace member manifests (required for workspace resolution)
@@ -45,35 +90,63 @@ COPY crates/worker/Cargo.toml ./crates/worker/Cargo.toml
 COPY crates/notifier/Cargo.toml ./crates/notifier/Cargo.toml
 COPY crates/cli/Cargo.toml ./crates/cli/Cargo.toml

-# Create dummy source files for workspace members (not being built)
-RUN mkdir -p crates/api/src && echo "fn main() {}" > crates/api/src/main.rs
-RUN mkdir -p crates/executor/src && echo "fn main() {}" > crates/executor/src/main.rs
-RUN mkdir -p crates/executor/benches && echo "fn main() {}" > crates/executor/benches/context_clone.rs
-RUN mkdir -p crates/sensor/src && echo "fn main() {}" > crates/sensor/src/main.rs
-RUN mkdir -p crates/worker/src && echo "fn main() {}" > crates/worker/src/main.rs
-RUN echo "fn main() {}" > crates/worker/src/agent_main.rs
-RUN mkdir -p crates/notifier/src && echo "fn main() {}" > crates/notifier/src/main.rs
-RUN mkdir -p crates/cli/src && echo "fn main() {}" > crates/cli/src/main.rs
+# Create minimal stub sources so cargo can resolve the workspace and fetch deps.
+# These are ONLY used for `cargo fetch` — never compiled.
+# NOTE: The worker crate has TWO binary targets (main + agent_main) and the
+# sensor crate also has two binary targets (main + agent_main), so we create
+# stubs for all of them.
+RUN mkdir -p crates/common/src && echo "" > crates/common/src/lib.rs && \
+    mkdir -p crates/api/src && echo "fn main(){}" > crates/api/src/main.rs && \
+    mkdir -p crates/executor/src && echo "fn main(){}" > crates/executor/src/main.rs && \
+    mkdir -p crates/executor/benches && echo "fn main(){}" > crates/executor/benches/context_clone.rs && \
+    mkdir -p crates/sensor/src && echo "fn main(){}" > crates/sensor/src/main.rs && \
+    echo "fn main(){}" > crates/sensor/src/agent_main.rs && \
+    mkdir -p crates/core-timer-sensor/src && echo "fn main(){}" > crates/core-timer-sensor/src/main.rs && \
+    mkdir -p crates/worker/src && echo "fn main(){}" > crates/worker/src/main.rs && \
+    echo "fn main(){}" > crates/worker/src/agent_main.rs && \
+    mkdir -p crates/notifier/src && echo "fn main(){}" > crates/notifier/src/main.rs && \
+    mkdir -p crates/cli/src && echo "fn main(){}" > crates/cli/src/main.rs

-# Copy only the source code needed for pack binaries
+# Download all dependencies (cached unless Cargo.toml/Cargo.lock change)
+# registry/git use sharing=shared — cargo handles concurrent reads safely
+RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
+    --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \
+    cargo fetch
+
+# ---------------------------------------------------------------------------
+# Build layer
+# Copy real source code and compile only the pack binaries with musl
+# ---------------------------------------------------------------------------
+COPY migrations/ ./migrations/
 COPY crates/common/ ./crates/common/
 COPY crates/core-timer-sensor/ ./crates/core-timer-sensor/

-# Build pack binaries with BuildKit cache mounts
-# These binaries will have GLIBC 2.36 compatibility (Debian Bookworm)
+# Build pack binaries with BuildKit cache mounts, statically linked with musl.
+# Uses cargo-zigbuild so that cross-compilation works regardless of host arch.
 # - registry/git use sharing=shared (cargo handles concurrent access safely)
-# - target uses dedicated cache for pack binaries (separate from service builds)
+# - target uses sharing=locked because zigbuild cross-compilation needs
+#   exclusive access to the target directory
+# - dedicated cache ID (target-pack-binaries-static) to avoid collisions with
+#   other Dockerfiles' target caches
 RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
     --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \
-    --mount=type=cache,target=/build/target,id=target-pack-binaries \
+    --mount=type=cache,id=target-pack-binaries-static,target=/build/target,sharing=locked \
     mkdir -p /build/pack-binaries && \
-    cargo build --release --bin attune-core-timer-sensor && \
-    cp /build/target/release/attune-core-timer-sensor /build/pack-binaries/attune-core-timer-sensor
+    cargo zigbuild --release --target ${RUST_TARGET} --bin attune-core-timer-sensor && \
+    cp /build/target/${RUST_TARGET}/release/attune-core-timer-sensor /build/pack-binaries/attune-core-timer-sensor

-# Verify binaries were built successfully
-RUN ls -lah /build/pack-binaries/ && \
+# Strip the binary to minimize size.
+# When cross-compiling for a different architecture, the host strip may not
+# understand the foreign binary format. In that case we skip stripping — the
+# binary is still functional, just slightly larger.
+RUN (strip /build/pack-binaries/attune-core-timer-sensor 2>/dev/null && \
+     echo "stripped attune-core-timer-sensor" || \
+     echo "strip skipped for attune-core-timer-sensor (cross-arch binary)")
+
+# Verify binaries were built successfully and are statically linked
+RUN ls -lh /build/pack-binaries/attune-core-timer-sensor && \
     file /build/pack-binaries/attune-core-timer-sensor && \
-    ldd /build/pack-binaries/attune-core-timer-sensor && \
+    (ldd /build/pack-binaries/attune-core-timer-sensor 2>&1 || echo "statically linked (no dynamic dependencies)") && \
     /build/pack-binaries/attune-core-timer-sensor --version || echo "Built successfully"

 # ============================================================================
@@ -87,3 +160,15 @@ COPY --from=builder /build/pack-binaries/ /pack-binaries/
 # Default command (not used in FROM scratch)
 CMD ["/bin/sh"]

+# ============================================================================
+# Stage 3: pack-binaries-init - Init container for volume population
+# ============================================================================
+# Uses busybox so we have `cp`, `sh`, etc. for use as a Docker Compose
+# init service that copies pack binaries into the shared packs volume.
+FROM busybox:1.36 AS pack-binaries-init
+
+COPY --from=builder /build/pack-binaries/ /pack-binaries/
+
+# No default entrypoint — docker-compose provides the command
+ENTRYPOINT ["/bin/sh"]

View File

@@ -0,0 +1,139 @@
# Attune Docker Environment Configuration
#
# This file is mounted into containers at /opt/attune/config/config.yaml.
# It provides base values for Docker deployments.
#
# Sensitive values (jwt_secret, encryption_key) are overridden by environment
# variables set in docker-compose.yaml using the ATTUNE__ prefix convention:
# ATTUNE__SECURITY__JWT_SECRET=...
# ATTUNE__SECURITY__ENCRYPTION_KEY=...
#
# The `config` crate does NOT support ${VAR} shell interpolation in YAML.
# All overrides must use ATTUNE__<SECTION>__<KEY> environment variables.
environment: docker
# Docker database (PostgreSQL container)
database:
url: postgresql://attune:attune@postgres:5432/attune
max_connections: 20
min_connections: 5
connect_timeout: 30
idle_timeout: 600
log_statements: false
schema: "public"
# Docker message queue (RabbitMQ container)
message_queue:
url: amqp://attune:attune@rabbitmq:5672
exchange: attune
enable_dlq: true
message_ttl: 3600 # seconds
# Docker cache (Redis container)
redis:
url: redis://redis:6379
pool_size: 10
# API server configuration
server:
host: 0.0.0.0
port: 8080
request_timeout: 60
enable_cors: true
cors_origins:
- http://localhost
- http://localhost:3000
- http://localhost:3001
- http://localhost:3002
- http://localhost:5173
- http://127.0.0.1:3000
- http://127.0.0.1:3001
- http://127.0.0.1:3002
- http://127.0.0.1:5173
- http://web
- http://web:3000
max_body_size: 10485760 # 10MB
# Logging configuration
log:
level: info
format: json # Structured logs for container environments
console: true
# Security settings
# jwt_secret and encryption_key are intentional placeholders — they MUST be
# overridden via ATTUNE__SECURITY__JWT_SECRET and ATTUNE__SECURITY__ENCRYPTION_KEY
# environment variables in docker-compose.yaml (or a .env file).
security:
jwt_secret: override-via-ATTUNE__SECURITY__JWT_SECRET-env-var
jwt_access_expiration: 3600 # 1 hour
jwt_refresh_expiration: 604800 # 7 days
encryption_key: override-via-ATTUNE__SECURITY__ENCRYPTION_KEY-env-var
enable_auth: true
allow_self_registration: false
login_page:
show_local_login: true
show_oidc_login: true
show_ldap_login: true
oidc:
enabled: false
# Uncomment and configure for your OIDC provider:
# discovery_url: https://auth.example.com/.well-known/openid-configuration
# client_id: your-client-id
# client_secret: your-client-secret
# provider_name: sso
# provider_label: SSO Login
# provider_icon_url: https://auth.example.com/favicon.ico
# redirect_uri: http://localhost:3000/auth/callback
# post_logout_redirect_uri: http://localhost:3000/login
# scopes:
# - groups
# Packs directory (mounted volume in containers)
packs_base_dir: /opt/attune/packs
# Runtime environments directory (isolated envs like virtualenvs, node_modules).
# Kept separate from packs so pack directories remain clean and read-only.
# Pattern: {runtime_envs_dir}/{pack_ref}/{runtime_name}
runtime_envs_dir: /opt/attune/runtime_envs
# Artifacts directory (shared volume for file-based artifact storage).
# File-type artifacts are written here by execution processes and served by the API.
# Pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}
artifacts_dir: /opt/attune/artifacts
# Executor service configuration
executor:
scheduled_timeout: 300 # 5 minutes - fail executions stuck in SCHEDULED
timeout_check_interval: 60 # Check every minute for stale executions
enable_timeout_monitor: true
# Worker service configuration
worker:
worker_type: container
max_concurrent_tasks: 20
heartbeat_interval: 10 # Reduced from 30s for faster stale detection (staleness = 30s)
task_timeout: 300
max_stdout_bytes: 10485760 # 10MB
max_stderr_bytes: 10485760 # 10MB
shutdown_timeout: 30
stream_logs: true
# Sensor service configuration
sensor:
max_concurrent_sensors: 50
heartbeat_interval: 10 # Reduced from 30s for faster stale detection
poll_interval: 10
sensor_timeout: 300
shutdown_timeout: 30
# Notifier service configuration
notifier:
host: 0.0.0.0
port: 8081
max_connections: 1000
# Agent binary distribution (serves the agent binary via API for remote downloads)
agent:
binary_dir: /opt/attune/agent
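The header of this config file notes that overrides use `ATTUNE__<SECTION>__<KEY>` environment variables rather than `${VAR}` interpolation. The naming convention can be sketched as follows; `env_var_to_key_path` is a hypothetical helper written purely to illustrate the mapping, not code from the `config` crate or this repository:

```rust
// Illustrative only: map an ATTUNE__ env var name to a nested config key path.
// Double underscores separate nesting levels; single underscores stay inside a
// key (so JWT_SECRET remains one key, not two levels).
fn env_var_to_key_path(var: &str) -> Option<Vec<String>> {
    let rest = var.strip_prefix("ATTUNE__")?;
    Some(
        rest.split("__")
            .map(|segment| segment.to_ascii_lowercase())
            .collect(),
    )
}

fn main() {
    assert_eq!(
        env_var_to_key_path("ATTUNE__SECURITY__JWT_SECRET"),
        Some(vec!["security".to_string(), "jwt_secret".to_string()])
    );
    assert_eq!(
        env_var_to_key_path("ATTUNE__REDIS__URL"),
        Some(vec!["redis".to_string(), "url".to_string()])
    );
    // Variables without the ATTUNE__ prefix are ignored.
    assert_eq!(env_var_to_key_path("PATH"), None);
    println!("ok");
}
```

This is also why the compose files above rename `ATTUNE__CACHE__URL` to `ATTUNE__REDIS__URL`: the env var name must mirror the YAML section name (`redis:`) exactly for the override to land on the right key.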

View File

@@ -69,6 +69,24 @@ services:
         - attune-network
     restart: on-failure

+  # Build and extract statically-linked pack binaries (sensors, etc.)
+  # These binaries are built with musl for cross-architecture compatibility
+  # and placed directly into the packs volume for sensor containers to use.
+  init-pack-binaries:
+    image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/pack-builder:${ATTUNE_IMAGE_TAG:-latest}
+    container_name: attune-init-pack-binaries
+    volumes:
+      - packs_data:/opt/attune/packs
+    entrypoint:
+      [
+        "/bin/sh",
+        "-c",
+        "mkdir -p /opt/attune/packs/core/sensors && cp /pack-binaries/attune-core-timer-sensor /opt/attune/packs/core/sensors/attune-core-timer-sensor && chmod +x /opt/attune/packs/core/sensors/attune-core-timer-sensor && echo 'Pack binaries copied successfully'",
+      ]
+    restart: "no"
+    networks:
+      - attune-network
+
   init-packs:
     image: python:3.11-slim
     container_name: attune-init-packs
@@ -93,6 +111,8 @@ services:
DEFAULT_ADMIN_PERMISSION_SET_REF: core.admin DEFAULT_ADMIN_PERMISSION_SET_REF: core.admin
command: ["/bin/sh", "/init-packs.sh"] command: ["/bin/sh", "/init-packs.sh"]
depends_on: depends_on:
init-pack-binaries:
condition: service_completed_successfully
migrations: migrations:
condition: service_completed_successfully condition: service_completed_successfully
postgres: postgres:
@@ -166,7 +186,7 @@ services:
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672 ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE__CACHE__URL: redis://redis:6379 ATTUNE__REDIS__URL: redis://redis:6379
ATTUNE__WORKER__WORKER_TYPE: container ATTUNE__WORKER__WORKER_TYPE: container
ports: ports:
- "8080:8080" - "8080:8080"
@@ -213,7 +233,7 @@ services:
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672 ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE__CACHE__URL: redis://redis:6379 ATTUNE__REDIS__URL: redis://redis:6379
ATTUNE__WORKER__WORKER_TYPE: container ATTUNE__WORKER__WORKER_TYPE: container
volumes: volumes:
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro - ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro


@@ -119,14 +119,54 @@ for pack_dir in "$SOURCE_PACKS_DIR"/*; do
     target_pack_dir="$TARGET_PACKS_DIR/$pack_name"
     if [ -d "$target_pack_dir" ]; then
-        # Pack exists, update files to ensure we have latest (especially binaries)
+        # Pack exists, update non-binary files to ensure we have latest.
+        # Compiled sensor binaries in sensors/ are provided by init-pack-binaries
+        # (statically-linked musl builds) and must NOT be overwritten by
+        # the host's copy, which may be dynamically linked or the wrong arch.
         echo -e "${YELLOW}${NC} Pack exists at: $target_pack_dir, updating files..."
+
+        # Detect ELF binaries already in the target sensors/ dir by
+        # checking for the 4-byte ELF magic number (\x7fELF) at the
+        # start of the file. The `file` command is unavailable on
+        # python:3.11-slim, so we read the magic bytes with `od`.
+        _skip_bins=""
+        if [ -d "$target_pack_dir/sensors" ]; then
+            for _bin in "$target_pack_dir/sensors"/*; do
+                [ -f "$_bin" ] || continue
+                _magic=$(od -A n -t x1 -N 4 "$_bin" 2>/dev/null | tr -d ' ')
+                if [ "$_magic" = "7f454c46" ]; then
+                    _skip_bins="$_skip_bins $(basename "$_bin")"
+                fi
+            done
+        fi
+
+        # Copy everything from source, then restore any skipped binaries
+        # that were overwritten. We do it this way (copy-then-restore)
+        # rather than exclude-during-copy because busybox cp and POSIX cp
+        # have no --exclude flag.
+        if [ -n "$_skip_bins" ]; then
+            # Back up existing static binaries
+            _tmpdir=$(mktemp -d)
+            for _b in $_skip_bins; do
+                cp "$target_pack_dir/sensors/$_b" "$_tmpdir/$_b"
+            done
+        fi
+
         if cp -rf "$pack_dir"/* "$target_pack_dir"/; then
             echo -e "${GREEN}${NC} Updated pack files at: $target_pack_dir"
         else
             echo -e "${RED}${NC} Failed to update pack"
             exit 1
         fi
+
+        # Restore static binaries that were overwritten
+        if [ -n "$_skip_bins" ]; then
+            for _b in $_skip_bins; do
+                cp "$_tmpdir/$_b" "$target_pack_dir/sensors/$_b"
+                echo -e "${GREEN}${NC} Preserved static binary: sensors/$_b"
+            done
+            rm -rf "$_tmpdir"
+        fi
     else
         # Copy pack to target directory
         echo -e "${YELLOW}${NC} Copying pack files..."


@@ -1,14 +1,16 @@
 #!/usr/bin/env bash
 # Build pack binaries using Docker and extract them to ./packs/
 #
-# This script builds native pack binaries (sensors, etc.) in a Docker container
-# with GLIBC compatibility and extracts them to the appropriate pack directories.
+# This script builds statically-linked pack binaries (sensors, etc.) in a Docker
+# container using cargo-zigbuild + musl, producing binaries with zero runtime
+# dependencies. Supports cross-compilation for any target architecture.
 #
 # Usage:
-#   ./scripts/build-pack-binaries.sh
+#   ./scripts/build-pack-binaries.sh                                          # Build for x86_64 (default)
+#   RUST_TARGET=aarch64-unknown-linux-musl ./scripts/build-pack-binaries.sh   # Build for arm64
 #
 # The script will:
-#   1. Build pack binaries in a Docker container with GLIBC 2.36 (Debian Bookworm)
+#   1. Build statically-linked pack binaries via cargo-zigbuild + musl
 #   2. Extract binaries to ./packs/core/sensors/
 #   3. Make binaries executable
 #   4. Clean up temporary container

@@ -29,10 +31,12 @@ PROJECT_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
 IMAGE_NAME="attune-pack-builder"
 CONTAINER_NAME="attune-pack-binaries-tmp"
 DOCKERFILE="docker/Dockerfile.pack-binaries"
+RUST_TARGET="${RUST_TARGET:-x86_64-unknown-linux-musl}"

-echo -e "${GREEN}Building pack binaries...${NC}"
+echo -e "${GREEN}Building statically-linked pack binaries...${NC}"
 echo "Project root: ${PROJECT_ROOT}"
 echo "Dockerfile: ${DOCKERFILE}"
+echo "Target: ${RUST_TARGET}"
 echo ""

 # Navigate to project root

@@ -45,8 +49,9 @@ if [[ ! -f "${DOCKERFILE}" ]]; then
 fi

 # Build the Docker image
-echo -e "${YELLOW}Step 1/4: Building Docker image...${NC}"
+echo -e "${YELLOW}Step 1/4: Building Docker image (target: ${RUST_TARGET})...${NC}"
 if DOCKER_BUILDKIT=1 docker build \
+    --build-arg RUST_TARGET="${RUST_TARGET}" \
     -f "${DOCKERFILE}" \
     -t "${IMAGE_NAME}" \
     . ; then

@@ -87,7 +92,7 @@ chmod +x packs/core/sensors/attune-core-timer-sensor
 echo ""
 echo -e "${YELLOW}Verifying binaries:${NC}"
 file packs/core/sensors/attune-core-timer-sensor
-ldd packs/core/sensors/attune-core-timer-sensor || echo "(ldd failed - binary may be static or require different environment)"
+(ldd packs/core/sensors/attune-core-timer-sensor 2>&1 || echo "statically linked (no dynamic dependencies)")
 ls -lh packs/core/sensors/attune-core-timer-sensor

 # Clean up temporary container

@@ -105,11 +110,13 @@ echo -e "${GREEN}═════════════════════
 echo -e "${GREEN}Pack binaries built successfully!${NC}"
 echo -e "${GREEN}════════════════════════════════════════${NC}"
 echo ""
+echo "Target architecture: ${RUST_TARGET}"
 echo "Binaries location:"
 echo "  • packs/core/sensors/attune-core-timer-sensor"
 echo ""
-echo "These binaries are now ready to be used by the init-packs service"
-echo "when starting docker-compose."
+echo "These are statically-linked musl binaries with zero runtime dependencies."
+echo "They are now ready to be used by the init-packs service when starting"
+echo "docker-compose."
 echo ""
 echo "To use them:"
 echo "  docker compose up -d"


@@ -7,6 +7,9 @@ bundle_dir="${1:-${repo_root}/docker/distributable}"
 archive_path="${2:-${repo_root}/artifacts/attune-docker-dist.tar.gz}"
 template_dir="${repo_root}/docker/distributable"

+bundle_dir="$(realpath -m "${bundle_dir}")"
+archive_path="$(realpath -m "${archive_path}")"
+template_dir="$(realpath -m "${template_dir}")"
+
 mkdir -p "${bundle_dir}/docker" "${bundle_dir}/migrations" "${bundle_dir}/packs" "${bundle_dir}/scripts"
 mkdir -p "$(dirname "${archive_path}")"

@@ -18,13 +21,13 @@ copy_file() {
     cp "${src}" "${dst}"
 }

-# Keep the distributable compose file and README as the maintained templates.
+# Keep the distributable compose file, README, and config as the maintained templates.
 if [ "${bundle_dir}" != "${template_dir}" ]; then
     copy_file "${template_dir}/docker-compose.yaml" "${bundle_dir}/docker-compose.yaml"
     copy_file "${template_dir}/README.md" "${bundle_dir}/README.md"
+    copy_file "${template_dir}/config.docker.yaml" "${bundle_dir}/config.docker.yaml"
 fi

-copy_file "${repo_root}/config.docker.yaml" "${bundle_dir}/config.docker.yaml"
 copy_file "${repo_root}/docker/run-migrations.sh" "${bundle_dir}/docker/run-migrations.sh"
 copy_file "${repo_root}/docker/init-user.sh" "${bundle_dir}/docker/init-user.sh"
 copy_file "${repo_root}/docker/init-packs.sh" "${bundle_dir}/docker/init-packs.sh"

web/package-lock.json (generated; 18 lines changed)

@@ -3655,9 +3655,9 @@
       "license": "ISC"
     },
     "node_modules/picomatch": {
-      "version": "2.3.1",
-      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
-      "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
+      "version": "2.3.2",
+      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.2.tgz",
+      "integrity": "sha512-V7+vQEJ06Z+c5tSye8S+nHUfI51xoXIXjHQ99cQtKUkQqqO1kO/KCJUfZXuB47h/YBlDhah2H3hdUGXn8ie0oA==",
       "dev": true,
       "license": "MIT",
       "engines": {

@@ -4337,9 +4337,9 @@
       }
     },
     "node_modules/tinyglobby/node_modules/picomatch": {
-      "version": "4.0.3",
-      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
-      "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
+      "version": "4.0.4",
+      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz",
+      "integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
       "dev": true,
       "license": "MIT",
       "peer": true,

@@ -4609,9 +4609,9 @@
       }
     },
     "node_modules/vite/node_modules/picomatch": {
-      "version": "4.0.3",
-      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
-      "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
+      "version": "4.0.4",
+      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz",
+      "integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
       "dev": true,
       "license": "MIT",
       "peer": true,


@@ -0,0 +1,101 @@
# Pack Binaries: Cross-Architecture Static Build
**Date**: 2026-03-27
## Problem
The `attune-core-timer-sensor` native sensor binary failed to execute in Docker containers on Apple Silicon (arm64) Macs with the error:
```
rosetta error: failed to open elf at /lib64/ld-linux-x86-64.so.2
```
**Two root causes**:
1. **Wrong build toolchain**: `docker/Dockerfile.pack-binaries` used a plain `cargo build`, which produced a **dynamically-linked, host-architecture** binary. The tracked x86_64 build was linked against glibc, so when the sensor container on an arm64 Mac ran it under Rosetta, the dynamic linker it required (`/lib64/ld-linux-x86-64.so.2`) was absent from the container image. This contrasted with `docker/Dockerfile.agent`, which already used `cargo-zigbuild` + musl for fully static binaries.
2. **init-packs overwrote the static binary**: Even after fixing the Dockerfile, the `init-packs.sh` script did `cp -rf` from `./packs/` (host bind mount) into the packs volume, **overwriting** the freshly-placed static binary from `init-pack-binaries` with the old dynamically-linked binary from the host's `packs/core/sensors/` directory.
## Changes
### `docker/Dockerfile.pack-binaries` — Rewritten for static cross-compilation
- Added `RUST_TARGET` build arg (default: `x86_64-unknown-linux-musl`)
- Installed `musl-tools`, `ziglang`, and `cargo-zigbuild` (matching agent Dockerfile pattern)
- Replaced `cargo build --release` with `cargo zigbuild --release --target ${RUST_TARGET}`
- Added `cargo fetch` dependency caching layer with proper workspace stubs (including sensor `agent_main.rs`)
- Added `SQLX_OFFLINE=true` for compile-time query checking without a live database
- Added strip-with-fallback for cross-arch scenarios
- Added **Stage 3: `pack-binaries-init`** — busybox-based image for Docker Compose volume population (analogous to `agent-init`)
- Updated cache ID to `target-pack-binaries-static` with `sharing=locked` for zigbuild exclusivity
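The shape of the rewritten Dockerfile can be sketched roughly as follows. This is illustrative only: stage names, base images, install commands, and the final binary path are assumptions, not the repo's actual `Dockerfile.pack-binaries`.

```dockerfile
# Sketch of the cargo-zigbuild + musl pattern described above (not the real file).
ARG RUST_TARGET=x86_64-unknown-linux-musl

FROM rust:1-bookworm AS builder
ARG RUST_TARGET
# zig provides the cross linker; cargo-zigbuild drives it from cargo
RUN cargo install cargo-zigbuild && rustup target add "${RUST_TARGET}"
WORKDIR /build
COPY . .
# SQLX_OFFLINE lets sqlx verify queries from cached metadata, no live DB needed
ENV SQLX_OFFLINE=true
RUN cargo zigbuild --release --target "${RUST_TARGET}"

# Analogous to the "pack-binaries-init" stage: just the binary on busybox,
# ready for a Compose service to copy into the packs volume.
FROM busybox AS pack-binaries-init
ARG RUST_TARGET
COPY --from=builder /build/target/${RUST_TARGET}/release/attune-core-timer-sensor /pack-binaries/
```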
### `docker/init-packs.sh` — Preserve static binaries during pack copy
- Before copying host pack files, detects ELF binaries already present in the target `sensors/` directory using the 4-byte ELF magic number (`\x7fELF` = `7f454c46`) via `od` (available in python:3.11-slim, unlike `file`)
- Backs up detected ELF binaries to a temp directory before the `cp -rf` overwrites them
- Restores the backed-up static binaries after the copy completes
- Logs each preserved binary for visibility
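The detection step can be exercised in isolation. This sketch reproduces the `od`-based magic check from the script; the fixture file names are throwaway examples.

```shell
#!/bin/sh
# Succeed iff the file starts with the 4-byte ELF magic: 0x7f 'E' 'L' 'F'.
is_elf() {
    _magic=$(od -A n -t x1 -N 4 "$1" 2>/dev/null | tr -d ' ')
    [ "$_magic" = "7f454c46" ]
}

# Throwaway fixtures: a fake ELF header prefix vs. a plain shell script.
printf '\177ELF\002\001\001' > /tmp/fake-elf
printf '#!/bin/sh\necho hi\n' > /tmp/plain.sh

is_elf /tmp/fake-elf && echo "fake-elf: ELF binary"
is_elf /tmp/plain.sh || echo "plain.sh: not ELF"
```

Reading raw magic bytes with `od` keeps the check dependency-free, which matters because the copy runs inside `python:3.11-slim` where `file` is not installed.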
### `docker-compose.yaml` — Added `init-pack-binaries` service
- New `init-pack-binaries` service builds from `Dockerfile.pack-binaries` (target: `pack-binaries-init`) and copies the static binary into the `packs_data` volume
- Accepts `PACK_BINARIES_RUST_TARGET` env var (default: `x86_64-unknown-linux-musl`)
- `init-packs` now depends on `init-pack-binaries` to ensure binaries are in the volume before pack files are copied
- `docker compose up` now automatically builds and deploys pack binaries — no manual script run required
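The ordering guarantee hinges on the `service_completed_successfully` condition; a trimmed sketch of the relevant wiring (service and volume names taken from the diff, all other fields elided):

```yaml
services:
  init-pack-binaries:
    image: attune/pack-builder:latest   # or built from Dockerfile.pack-binaries
    volumes:
      - packs_data:/opt/attune/packs    # drops static binaries into the shared volume
    restart: "no"

  init-packs:
    depends_on:
      init-pack-binaries:
        condition: service_completed_successfully   # binaries land before pack copy
```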
### `docker/distributable/docker-compose.yaml` — Same pattern for distributable
- Added `init-pack-binaries` service using pre-built registry image
- Updated `init-packs` dependencies
### `scripts/build-pack-binaries.sh` — Updated for static builds
- Passes `RUST_TARGET` build arg to Docker build
- Accepts `RUST_TARGET` env var (default: `x86_64-unknown-linux-musl`)
- Updated verification output to expect statically-linked binary
### `Makefile` — New targets
- `PACK_BINARIES_RUST_TARGET` variable (default: `x86_64-unknown-linux-musl`)
- `docker-build-pack-binaries` — build for default architecture
- `docker-build-pack-binaries-arm64` — build for aarch64
- `docker-build-pack-binaries-all` — build both architectures
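These targets presumably reduce to thin wrappers over the build script; a sketch of what that Makefile section might look like (recipe bodies are assumptions):

```makefile
PACK_BINARIES_RUST_TARGET ?= x86_64-unknown-linux-musl

docker-build-pack-binaries:
	RUST_TARGET=$(PACK_BINARIES_RUST_TARGET) ./scripts/build-pack-binaries.sh

docker-build-pack-binaries-arm64:
	RUST_TARGET=aarch64-unknown-linux-musl ./scripts/build-pack-binaries.sh

docker-build-pack-binaries-all: docker-build-pack-binaries docker-build-pack-binaries-arm64
```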
### `.gitignore` — Exclude compiled pack binary
- Added `packs/core/sensors/attune-core-timer-sensor` to `.gitignore`
- Removed the stale dynamically-linked binary from git tracking
## Architecture
The fix follows the same proven pattern as `Dockerfile.agent`:
```
cargo-zigbuild + musl → statically-linked binary → zero runtime dependencies
```
Since the binary has no dynamic library dependencies (no glibc, no libssl, no dynamic linker), it runs on **any** Linux container of the matching CPU architecture, regardless of the base image (Debian, Alpine, scratch, etc.).
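CPU architecture still matters even with zero dynamic dependencies. The ELF header encodes the target CPU in the `e_machine` field (a little-endian u16 at offset 18), which can be read with the same `od` technique init-packs uses for the magic check. Function name and fixtures here are illustrative, not part of the repo:

```shell
#!/bin/sh
# Print the CPU architecture an ELF binary targets, from its e_machine field.
elf_arch() {
    _m=$(od -A n -t x1 -j 18 -N 2 "$1" 2>/dev/null | tr -d ' ')
    case "$_m" in
        3e00) echo "x86_64"  ;;  # EM_X86_64 = 0x3e, little-endian
        b700) echo "aarch64" ;;  # EM_AARCH64 = 0xb7
        *)    echo "unknown" ;;
    esac
}

# Fixture: a minimal 20-byte ELF header prefix claiming x86_64.
printf '\177ELF\002\001\001\000\000\000\000\000\000\000\000\000\002\000\076\000' > /tmp/hdr-x86_64
elf_arch /tmp/hdr-x86_64   # prints: x86_64
```

A check like this would catch the wrong-arch scenario from the Problem section before the sensor container ever tries to execute the binary.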
### Init sequence
1. **`init-pack-binaries`**: Builds static musl binary → copies to `packs_data` volume at `core/sensors/`
2. **`init-packs`** (depends on `init-pack-binaries`): Copies host `./packs/core/` to volume → detects existing ELF binary → backs it up → copies host files → restores static binary
3. **`sensor`**: Spawns the static `attune-core-timer-sensor` → works on any architecture
## Usage
```bash
# Default (x86_64) — works on amd64 containers and arm64 via Rosetta
docker compose up -d
# For native arm64 containers
PACK_BINARIES_RUST_TARGET=aarch64-unknown-linux-musl docker compose up -d
# Standalone build
make docker-build-pack-binaries # amd64
make docker-build-pack-binaries-arm64 # arm64
make docker-build-pack-binaries-all # both
# Manual script
RUST_TARGET=aarch64-unknown-linux-musl ./scripts/build-pack-binaries.sh
```