diff --git a/.gitignore b/.gitignore index 3262d5f..15cea06 100644 --- a/.gitignore +++ b/.gitignore @@ -82,3 +82,6 @@ docker-compose.override.yml packs.examples/ packs.external/ codex/ + +# Compiled pack binaries (built via Docker or build-pack-binaries.sh) +packs/core/sensors/attune-core-timer-sensor diff --git a/AGENTS.md b/AGENTS.md index da33903..2d1cae7 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -77,7 +77,7 @@ attune/ **Services**: - **Infrastructure**: postgres (TimescaleDB), rabbitmq, redis -- **Init** (run-once): migrations, init-user, init-packs, init-agent +- **Init** (run-once): migrations, init-user, init-pack-binaries, init-packs, init-agent - **Application**: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000) **Volumes** (named): @@ -100,7 +100,8 @@ docker compose -f docker-compose.yaml -f docker-compose.agent.yaml up -d # Star ### Docker Build Optimization - **Active Dockerfiles**: `docker/Dockerfile.optimized`, `docker/Dockerfile.agent`, `docker/Dockerfile.web`, and `docker/Dockerfile.pack-binaries` -- **Agent Dockerfile** (`docker/Dockerfile.agent`): Builds a statically-linked `attune-agent` binary using musl (`x86_64-unknown-linux-musl`). Three stages: `builder` (cross-compile), `agent-binary` (scratch — just the binary), `agent-init` (busybox — for volume population via `cp`). The binary has zero runtime dependencies (no glibc, no libssl). Build with `make docker-build-agent`. +- **Agent Dockerfile** (`docker/Dockerfile.agent`): Builds statically-linked `attune-agent` and `attune-sensor-agent` binaries using musl. Uses `cargo-zigbuild` (zig as the cross-compilation backend) so that any target architecture can be built from any host — e.g., building `aarch64-unknown-linux-musl` on an x86_64 host or vice versa. The `RUST_TARGET` build arg controls the output architecture (`x86_64-unknown-linux-musl` default, or `aarch64-unknown-linux-musl` for arm64). 
Three stages: `builder` (cross-compile with cargo-zigbuild), `agent-binary` (scratch — just the binaries), `agent-init` (busybox — for volume population via `cp`). The binaries have zero runtime dependencies (no glibc, no libssl). Build with `make docker-build-agent` (amd64), `make docker-build-agent-arm64` (arm64), or `make docker-build-agent-all` (both). In `docker-compose.yaml`, set `AGENT_RUST_TARGET=aarch64-unknown-linux-musl` env var to build arm64 agent binaries (defaults to x86_64). +- **Pack Binaries Dockerfile** (`docker/Dockerfile.pack-binaries`): Builds statically-linked pack binaries (sensors, etc.) using musl + cargo-zigbuild for cross-compilation. The `RUST_TARGET` build arg controls the output architecture (`x86_64-unknown-linux-musl` default, or `aarch64-unknown-linux-musl` for arm64). Three stages: `builder` (cross-compile with cargo-zigbuild), `output` (scratch — just the binaries for `docker cp` extraction), `pack-binaries-init` (busybox — for Docker Compose volume population via `cp`). Build with `make docker-build-pack-binaries` (amd64), `make docker-build-pack-binaries-arm64` (arm64), or `make docker-build-pack-binaries-all` (both). In `docker-compose.yaml`, set `PACK_BINARIES_RUST_TARGET=aarch64-unknown-linux-musl` env var to build arm64 pack binaries (defaults to x86_64). The `init-pack-binaries` Docker Compose service automatically builds and copies pack binaries into the `packs_data` volume before `init-packs` runs. 
- **Strategy**: Selective crate copying - only copy crates needed for each service (not entire workspace) - **Performance**: 90% faster incremental builds (~30 sec vs ~5 min for code changes) - **BuildKit cache mounts**: Persist cargo registry and compilation artifacts between builds @@ -123,7 +124,7 @@ docker compose -f docker-compose.yaml -f docker-compose.agent.yaml up -d # Star - **Key Principle**: Packs are NOT copied into Docker images - they are mounted as volumes - **Volume Flow**: Host `./packs/` → `init-packs` service → `packs_data` volume → mounted in all services - **Benefits**: Update packs with restart (~5 sec) instead of rebuild (~5 min) -- **Pack Binaries**: Built separately with `./scripts/build-pack-binaries.sh` (GLIBC compatibility) +- **Pack Binaries**: Automatically built and deployed via the `init-pack-binaries` Docker Compose service (statically-linked musl binaries via cargo-zigbuild, supports cross-compilation via `PACK_BINARIES_RUST_TARGET` env var). Can also be built manually with `./scripts/build-pack-binaries.sh` or `make docker-build-pack-binaries`. The `init-packs` service depends on `init-pack-binaries` and preserves any ELF binaries already present in the target `sensors/` directory (detected via ELF magic bytes with `od`) — it backs them up before copying host pack files and restores them afterward, preventing the host's stale dynamically-linked binary from overwriting the freshly-built static one. 
- **Development**: Use `./packs.dev/` for instant testing (direct bind mount, no restart needed) - **Documentation**: See `docs/QUICKREF-packs-volumes.md` @@ -273,7 +274,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete - **Pack Volume Strategy**: Packs are mounted as volumes (NOT copied into Docker images) - Host `./packs/` → `packs_data` volume via `init-packs` service → mounted at `/opt/attune/packs` in all services - Development packs in `./packs.dev/` are bind-mounted directly for instant updates -- **Pack Binaries**: Native binaries (sensors) built separately with `./scripts/build-pack-binaries.sh` +- **Pack Binaries**: Native binaries (sensors) automatically built by the `init-pack-binaries` Docker Compose service (statically-linked musl, cross-arch via `PACK_BINARIES_RUST_TARGET`). Can also be built manually with `./scripts/build-pack-binaries.sh` or `make docker-build-pack-binaries`. - **Action Script Resolution**: Worker constructs file paths as `{packs_base_dir}/{pack_ref}/actions/{entrypoint}` - **Workflow Action YAML (`workflow_file` field)**: An action YAML may include a `workflow_file` field (e.g., `workflow_file: workflows/timeline_demo.yaml`) pointing to a workflow definition file relative to the `actions/` directory. When present, the `PackComponentLoader` reads and parses the referenced workflow YAML, creates/updates a `workflow_definition` record, and links the action to it via `action.workflow_def`. This separates action-level metadata (ref, label, parameters, policies) from the workflow graph (tasks, transitions, variables), and allows **multiple actions to reference the same workflow file** with different parameter schemas or policy configurations. Workflow actions have no `runner_type` (runtime is `None`) — the executor orchestrates child task executions rather than sending to a worker. 
- **Action-linked workflow files omit action-level metadata**: Workflow files referenced via `workflow_file` should contain **only the execution graph**: `version`, `vars`, `tasks`, `output_map`. The `ref`, `label`, `description`, `parameters`, `output`, and `tags` fields are omitted — the action YAML is the single authoritative source for those values. The `WorkflowDefinition` parser accepts empty `ref`/`label` (defaults to `""`), and the loader / registrar fall back to the action YAML (or filename-derived values) when they are missing. Standalone workflow files (in `workflows/`) still carry their own `ref`/`label` since they have no companion action YAML. @@ -683,7 +684,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?" - `docker/Dockerfile.optimized` - Optimized service builds (api, executor, notifier) - `docker/Dockerfile.agent` - Statically-linked agent binary (musl, for injection into any container) - `docker/Dockerfile.web` - Web UI build -- `docker/Dockerfile.pack-binaries` - Separate pack binary builder +- `docker/Dockerfile.pack-binaries` - Separate pack binary builder (cargo-zigbuild + musl static linking, 3 stages: builder, output, pack-binaries-init) - `scripts/build-pack-binaries.sh` - Build pack binaries script ## Common Pitfalls to Avoid @@ -703,7 +704,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?" 14. **REMEMBER** schema is determined by `search_path`, not hardcoded in queries (production uses `attune`, development uses `public`) 15. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare` 16. **REMEMBER** packs are volumes - update with restart, not rebuild -17. **REMEMBER** to build pack binaries separately: `./scripts/build-pack-binaries.sh` +17. **REMEMBER** pack binaries are automatically built by `init-pack-binaries` in Docker Compose. For manual builds use `make docker-build-pack-binaries` or `./scripts/build-pack-binaries.sh`. 18. 
**REMEMBER** when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable with ~4 updates per row). 19. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history` 20. **NEVER** use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow`, `execution.workflow_def` exist in SQL but not in the `Execution` model). Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`). Using `SELECT *` on such tables can cause runtime deserialization failures.
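The AGENTS.md change above says `init-packs` preserves compiled sensor binaries by detecting them via ELF magic bytes (the project's script uses `od` in shell). A minimal, hypothetical Rust sketch of the same detection idea — file names and the `is_elf` helper are illustrative, not the project's actual code:

```rust
// Hypothetical sketch (not the project's init-packs.sh, which does this with
// `od` in shell): detect ELF files by their 4-byte magic number so a host
// copy does not overwrite freshly built static binaries.
use std::fs;
use std::io::Read;
use std::path::Path;

/// ELF files begin with the bytes 0x7f 'E' 'L' 'F'.
fn is_elf(path: &Path) -> std::io::Result<bool> {
    let mut magic = [0u8; 4];
    let mut f = fs::File::open(path)?;
    let n = f.read(&mut magic)?;
    Ok(n == 4 && magic == [0x7f, b'E', b'L', b'F'])
}

fn main() -> std::io::Result<()> {
    // Illustrative sample directory with one fake ELF binary and one script.
    let dir = std::env::temp_dir().join("sensors-demo");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("sensor-bin"), b"\x7fELF-rest-of-binary")?;
    fs::write(dir.join("sensor-script"), b"#!/bin/sh\n")?;

    for entry in fs::read_dir(&dir)? {
        let path = entry?.path();
        if is_elf(&path)? {
            // In init-packs this file would be backed up before the host
            // copy and restored afterward.
            println!("ELF binary (preserve): {}", path.display());
        } else {
            println!("non-ELF file (safe to overwrite): {}", path.display());
        }
    }
    Ok(())
}
```

The shell equivalent in the script compares the first four bytes printed by `od -An -tx1 -N4` against `7f454c46`.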
diff --git a/Makefile b/Makefile index 2764cde..6a47d74 100644 --- a/Makefile +++ b/Makefile @@ -5,8 +5,10 @@ docker-build-worker-node docker-build-worker-full deny ci-rust ci-web-blocking ci-web-advisory \ ci-security-blocking ci-security-advisory ci-blocking ci-advisory \ fmt-check pre-commit install-git-hooks \ - build-agent docker-build-agent run-agent run-agent-release \ - docker-up-agent docker-down-agent + build-agent docker-build-agent docker-build-agent-arm64 docker-build-agent-all \ + run-agent run-agent-release \ + docker-up-agent docker-down-agent \ + docker-build-pack-binaries docker-build-pack-binaries-arm64 docker-build-pack-binaries-all # Default target help: @@ -63,13 +65,20 @@ help: @echo " make docker-down - Stop services" @echo "" @echo "Agent (Universal Worker):" - @echo " make build-agent - Build statically-linked agent binary (musl)" - @echo " make docker-build-agent - Build agent Docker image" - @echo " make run-agent - Run agent in development mode" - @echo " make run-agent-release - Run agent in release mode" + @echo " make build-agent - Build statically-linked agent binary (musl)" + @echo " make docker-build-agent - Build agent Docker image (amd64, default)" + @echo " make docker-build-agent-arm64 - Build agent Docker image (arm64)" + @echo " make docker-build-agent-all - Build agent Docker images (amd64 + arm64)" + @echo " make run-agent - Run agent in development mode" + @echo " make run-agent-release - Run agent in release mode" @echo " make docker-up-agent - Start all services + agent workers (ruby, etc.)" @echo " make docker-down-agent - Stop agent stack" @echo "" + @echo "Pack Binaries:" + @echo " make docker-build-pack-binaries - Build pack binaries Docker image (amd64, default)" + @echo " make docker-build-pack-binaries-arm64 - Build pack binaries Docker image (arm64)" + @echo " make docker-build-pack-binaries-all - Build pack binaries Docker images (amd64 + arm64)" + @echo "" @echo "Development:" @echo " make watch - Watch and 
rebuild on changes" @echo " make install-tools - Install development tools" @@ -240,6 +249,9 @@ docker-build-web: # Agent binary (statically-linked for injection into any container) AGENT_RUST_TARGET ?= x86_64-unknown-linux-musl +# Pack binaries (statically-linked for packs volume) +PACK_BINARIES_RUST_TARGET ?= x86_64-unknown-linux-musl + build-agent: @echo "Installing musl target (if not already installed)..." rustup target add $(AGENT_RUST_TARGET) 2>/dev/null || true @@ -254,9 +266,20 @@ build-agent: @ls -lh target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent docker-build-agent: - @echo "Building agent Docker image (statically-linked binary)..." + @echo "Building agent Docker image ($(AGENT_RUST_TARGET))..." DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(AGENT_RUST_TARGET) --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest . - @echo "✅ Agent image built: attune-agent:latest" + @echo "✅ Agent image built: attune-agent:latest ($(AGENT_RUST_TARGET))" + +docker-build-agent-arm64: + @echo "Building arm64 agent Docker image..." + DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target agent-init -f docker/Dockerfile.agent -t attune-agent:arm64 . + @echo "✅ Agent image built: attune-agent:arm64" + +docker-build-agent-all: + @echo "Building agent Docker images for all architectures..." + $(MAKE) docker-build-agent + $(MAKE) docker-build-agent-arm64 + @echo "✅ All agent images built: attune-agent:latest (amd64), attune-agent:arm64" run-agent: cargo run --bin attune-agent @@ -264,6 +287,23 @@ run-agent: run-agent-release: cargo run --bin attune-agent --release +# Pack binaries (statically-linked for packs volume) +docker-build-pack-binaries: + @echo "Building pack binaries Docker image ($(PACK_BINARIES_RUST_TARGET))..." 
+ DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(PACK_BINARIES_RUST_TARGET) --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:latest . + @echo "✅ Pack binaries image built: attune-pack-builder:latest ($(PACK_BINARIES_RUST_TARGET))" + +docker-build-pack-binaries-arm64: + @echo "Building arm64 pack binaries Docker image..." + DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:arm64 . + @echo "✅ Pack binaries image built: attune-pack-builder:arm64" + +docker-build-pack-binaries-all: + @echo "Building pack binaries Docker images for all architectures..." + $(MAKE) docker-build-pack-binaries + $(MAKE) docker-build-pack-binaries-arm64 + @echo "✅ All pack binary images built: attune-pack-builder:latest (amd64), attune-pack-builder:arm64" + run-sensor-agent: cargo run --bin attune-sensor-agent diff --git a/charts/attune/templates/secret.yaml b/charts/attune/templates/secret.yaml index 40dda71..5a28aac 100644 --- a/charts/attune/templates/secret.yaml +++ b/charts/attune/templates/secret.yaml @@ -11,7 +11,7 @@ stringData: ATTUNE__SECURITY__ENCRYPTION_KEY: {{ .Values.security.encryptionKey | quote }} ATTUNE__DATABASE__URL: {{ include "attune.databaseUrl" . | quote }} ATTUNE__MESSAGE_QUEUE__URL: {{ include "attune.rabbitmqUrl" . | quote }} - ATTUNE__CACHE__URL: {{ include "attune.redisUrl" . | quote }} + ATTUNE__REDIS__URL: {{ include "attune.redisUrl" . | quote }} DB_HOST: {{ include "attune.postgresqlServiceName" . | quote }} DB_PORT: {{ .Values.database.port | quote }} DB_USER: {{ .Values.database.username | quote }} diff --git a/crates/api/src/auth/ldap.rs b/crates/api/src/auth/ldap.rs index 63cfde3..fe81ddb 100644 --- a/crates/api/src/auth/ldap.rs +++ b/crates/api/src/auth/ldap.rs @@ -139,7 +139,8 @@ fn conn_settings(config: &LdapConfig) -> LdapConnSettings { /// Open a new LDAP connection. 
async fn connect(config: &LdapConfig) -> Result { let settings = conn_settings(config); - let (conn, ldap) = LdapConnAsync::with_settings(settings, &config.url) + let url = config.url.as_deref().unwrap_or_default(); + let (conn, ldap) = LdapConnAsync::with_settings(settings, url) .await .map_err(|err| { ApiError::InternalServerError(format!("Failed to connect to LDAP server: {err}")) @@ -333,7 +334,7 @@ fn extract_claims(config: &LdapConfig, entry: &SearchEntry) -> LdapUserClaims { .unwrap_or_default(); LdapUserClaims { - server_url: config.url.clone(), + server_url: config.url.clone().unwrap_or_default(), dn: entry.dn.clone(), login: first_attr(&config.login_attr), email: first_attr(&config.email_attr), diff --git a/crates/api/src/auth/oidc.rs b/crates/api/src/auth/oidc.rs index c4a9da3..b0c1e12 100644 --- a/crates/api/src/auth/oidc.rs +++ b/crates/api/src/auth/oidc.rs @@ -126,15 +126,17 @@ pub async fn build_login_redirect( .map_err(|err| { ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}")) })?; - let redirect_uri = RedirectUrl::new(oidc.redirect_uri.clone()).map_err(|err| { + let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default(); + let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| { ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}")) })?; let client_secret = oidc.client_secret.clone().ok_or_else(|| { ApiError::InternalServerError("OIDC client secret is missing".to_string()) })?; + let client_id = oidc.client_id.clone().unwrap_or_default(); let client = CoreClient::from_provider_metadata( discovery.metadata.clone(), - ClientId::new(oidc.client_id.clone()), + ClientId::new(client_id), Some(ClientSecret::new(client_secret)), ) .set_redirect_uri(redirect_uri); @@ -238,15 +240,17 @@ pub async fn handle_callback( .map_err(|err| { ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}")) })?; - let redirect_uri = 
RedirectUrl::new(oidc.redirect_uri.clone()).map_err(|err| { + let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default(); + let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| { ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}")) })?; let client_secret = oidc.client_secret.clone().ok_or_else(|| { ApiError::InternalServerError("OIDC client secret is missing".to_string()) })?; + let client_id = oidc.client_id.clone().unwrap_or_default(); let client = CoreClient::from_provider_metadata( discovery.metadata.clone(), - ClientId::new(oidc.client_id.clone()), + ClientId::new(client_id), Some(ClientSecret::new(client_secret)), ) .set_redirect_uri(redirect_uri); @@ -336,7 +340,7 @@ pub async fn build_logout_redirect( pairs.append_pair("id_token_hint", &id_token_hint); } pairs.append_pair("post_logout_redirect_uri", &post_logout_redirect_uri); - pairs.append_pair("client_id", &oidc.client_id); + pairs.append_pair("client_id", oidc.client_id.as_deref().unwrap_or_default()); } String::from(url) } else { @@ -481,7 +485,8 @@ fn oidc_config(state: &SharedState) -> Result { } async fn fetch_discovery_document(oidc: &OidcConfig) -> Result { - let discovery = reqwest::get(&oidc.discovery_url).await.map_err(|err| { + let discovery_url = oidc.discovery_url.as_deref().unwrap_or_default(); + let discovery = reqwest::get(discovery_url).await.map_err(|err| { ApiError::InternalServerError(format!("Failed to fetch OIDC discovery document: {err}")) })?; @@ -621,7 +626,7 @@ async fn verify_id_token( let issuer = discovery.metadata.issuer().to_string(); let mut validation = Validation::new(algorithm); validation.set_issuer(&[issuer.as_str()]); - validation.set_audience(&[oidc.client_id.as_str()]); + validation.set_audience(&[oidc.client_id.as_deref().unwrap_or_default()]); validation.set_required_spec_claims(&["exp", "iat", "iss", "sub", "aud"]); validation.validate_nbf = false; @@ -740,7 +745,8 @@ fn should_use_secure_cookies(state: 
&SharedState) -> bool { .security .oidc .as_ref() - .map(|oidc| oidc.redirect_uri.starts_with("https://")) + .and_then(|oidc| oidc.redirect_uri.as_deref()) + .map(|uri| uri.starts_with("https://")) .unwrap_or(false) } diff --git a/crates/common/src/config.rs b/crates/common/src/config.rs index 869a35c..3953883 100644 --- a/crates/common/src/config.rs +++ b/crates/common/src/config.rs @@ -355,10 +355,14 @@ pub struct OidcConfig { pub enabled: bool, /// OpenID Provider discovery document URL. - pub discovery_url: String, + /// Required when `enabled` is true; ignored otherwise. + #[serde(default)] + pub discovery_url: Option<String>, /// Confidential client ID. - pub client_id: String, + /// Required when `enabled` is true; ignored otherwise. + #[serde(default)] + pub client_id: Option<String>, /// Provider name used in login-page overrides such as `?auth=`. #[serde(default = "default_oidc_provider_name")] @@ -374,7 +378,9 @@ pub struct OidcConfig { pub client_secret: Option<String>, /// Redirect URI registered with the provider. - pub redirect_uri: String, + /// Required when `enabled` is true; ignored otherwise. + #[serde(default)] + pub redirect_uri: Option<String>, /// Optional post-logout redirect URI. pub post_logout_redirect_uri: Option<String>, @@ -396,7 +402,9 @@ pub struct LdapConfig { pub enabled: bool, /// LDAP server URL (e.g., "ldap://ldap.example.com:389" or "ldaps://ldap.example.com:636"). - pub url: String, + /// Required when `enabled` is true; ignored otherwise. + #[serde(default)] + pub url: Option<String>, /// Bind DN template. Use `{login}` as placeholder for the user-supplied login.
/// Example: "uid={login},ou=users,dc=example,dc=com" @@ -985,14 +993,20 @@ impl Config { if let Some(oidc) = &self.security.oidc { if oidc.enabled { - if oidc.discovery_url.trim().is_empty() { + if oidc + .discovery_url + .as_deref() + .unwrap_or("") + .trim() + .is_empty() + { return Err(crate::Error::validation( - "OIDC discovery URL cannot be empty when OIDC is enabled", + "OIDC discovery URL is required when OIDC is enabled", )); } - if oidc.client_id.trim().is_empty() { + if oidc.client_id.as_deref().unwrap_or("").trim().is_empty() { return Err(crate::Error::validation( - "OIDC client ID cannot be empty when OIDC is enabled", + "OIDC client ID is required when OIDC is enabled", )); } if oidc @@ -1006,9 +1020,19 @@ impl Config { "OIDC client secret is required when OIDC is enabled", )); } - if oidc.redirect_uri.trim().is_empty() { + if oidc.redirect_uri.as_deref().unwrap_or("").trim().is_empty() { return Err(crate::Error::validation( - "OIDC redirect URI cannot be empty when OIDC is enabled", + "OIDC redirect URI is required when OIDC is enabled", + )); + } + } + } + + if let Some(ldap) = &self.security.ldap { + if ldap.enabled { + if ldap.url.as_deref().unwrap_or("").trim().is_empty() { + return Err(crate::Error::validation( + "LDAP server URL is required when LDAP is enabled", )); } } @@ -1172,6 +1196,31 @@ mod tests { assert!(config.validate().is_err()); } + #[test] + fn test_oidc_config_disabled_no_urls_required() { + let yaml = r#" +enabled: false +"#; + let cfg: OidcConfig = serde_yaml_ng::from_str(yaml).unwrap(); + assert!(!cfg.enabled); + assert!(cfg.discovery_url.is_none()); + assert!(cfg.client_id.is_none()); + assert!(cfg.redirect_uri.is_none()); + assert!(cfg.client_secret.is_none()); + assert_eq!(cfg.provider_name, "oidc"); + } + + #[test] + fn test_ldap_config_disabled_no_url_required() { + let yaml = r#" +enabled: false +"#; + let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap(); + assert!(!cfg.enabled); + assert!(cfg.url.is_none()); + 
assert_eq!(cfg.provider_name, "ldap"); + } + #[test] fn test_ldap_config_defaults() { let yaml = r#" @@ -1182,7 +1231,7 @@ client_id: "test" let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap(); assert!(cfg.enabled); - assert_eq!(cfg.url, "ldap://localhost:389"); + assert_eq!(cfg.url.as_deref(), Some("ldap://localhost:389")); assert_eq!(cfg.user_filter, "(uid={login})"); assert_eq!(cfg.login_attr, "uid"); assert_eq!(cfg.email_attr, "mail"); @@ -1222,7 +1271,7 @@ provider_icon_url: "https://corp.com/icon.svg" let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap(); assert!(cfg.enabled); - assert_eq!(cfg.url, "ldaps://ldap.corp.com:636"); + assert_eq!(cfg.url.as_deref(), Some("ldaps://ldap.corp.com:636")); assert_eq!( cfg.bind_dn_template.as_deref(), Some("uid={login},ou=people,dc=corp,dc=com") diff --git a/docker-compose.yaml b/docker-compose.yaml index 9f236d6..a834007 100644 --- a/docker-compose.yaml +++ b/docker-compose.yaml @@ -91,6 +91,30 @@ services: - attune-network restart: on-failure + # Build and extract statically-linked pack binaries (sensors, etc.) + # These binaries are built with musl for cross-architecture compatibility + # and placed directly into the packs volume for sensor containers to use. + init-pack-binaries: + build: + context: . 
+ dockerfile: docker/Dockerfile.pack-binaries + target: pack-binaries-init + args: + BUILDKIT_INLINE_CACHE: 1 + RUST_TARGET: ${PACK_BINARIES_RUST_TARGET:-x86_64-unknown-linux-musl} + container_name: attune-init-pack-binaries + volumes: + - packs_data:/opt/attune/packs + entrypoint: + [ + "/bin/sh", + "-c", + "mkdir -p /opt/attune/packs/core/sensors && cp /pack-binaries/attune-core-timer-sensor /opt/attune/packs/core/sensors/attune-core-timer-sensor && chmod +x /opt/attune/packs/core/sensors/attune-core-timer-sensor && echo 'Pack binaries copied successfully'", + ] + restart: "no" + networks: + - attune-network + # Initialize builtin packs # Copies pack files to shared volume and loads them into database init-packs: @@ -117,6 +141,8 @@ services: DEFAULT_ADMIN_PERMISSION_SET_REF: core.admin command: ["/bin/sh", "/init-packs.sh"] depends_on: + init-pack-binaries: + condition: service_completed_successfully migrations: condition: service_completed_successfully postgres: @@ -136,6 +162,7 @@ services: target: agent-init args: BUILDKIT_INLINE_CACHE: 1 + RUST_TARGET: ${AGENT_RUST_TARGET:-x86_64-unknown-linux-musl} container_name: attune-init-agent volumes: - agent_bin:/opt/attune/agent @@ -209,8 +236,8 @@ services: ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune # Message Queue ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672 - # Cache - ATTUNE__CACHE__URL: redis://redis:6379 + # Redis + ATTUNE__REDIS__URL: redis://redis:6379 # Worker config override ATTUNE__WORKER__WORKER_TYPE: container ports: @@ -263,7 +290,7 @@ services: ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus} ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672 - ATTUNE__CACHE__URL: redis://redis:6379 + ATTUNE__REDIS__URL: redis://redis:6379 ATTUNE__WORKER__WORKER_TYPE: container volumes: - 
${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro diff --git a/docker/Dockerfile.agent b/docker/Dockerfile.agent index 4f57b22..afabf38 100644 --- a/docker/Dockerfile.agent +++ b/docker/Dockerfile.agent @@ -4,18 +4,31 @@ # using musl, suitable for injection into arbitrary runtime containers. # # Stages: -# builder - Cross-compile with musl for a fully static binary +# builder - Cross-compile with cargo-zigbuild + musl for a fully static binary # agent-binary - Minimal scratch image containing just the binary # agent-init - BusyBox-based image for use as a Kubernetes init container # or Docker Compose volume-populating service (has `cp`) # +# Architecture handling: +# Uses cargo-zigbuild for cross-compilation, which bundles all necessary +# cross-compilation toolchains internally. This allows building for any +# target architecture from any host — e.g., building aarch64 musl binaries +# on an x86_64 host, or vice versa. This matches the CI/CD pipeline approach. +# +# The RUST_TARGET build arg controls the output architecture: +# x86_64-unknown-linux-musl -> amd64 static binary (default) +# aarch64-unknown-linux-musl -> arm64 static binary +# # Usage: +# # Build for the default architecture (x86_64): +# DOCKER_BUILDKIT=1 docker buildx build --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest . +# +# # Build for arm64: +# DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest . +# # # Build the minimal binary-only image: # DOCKER_BUILDKIT=1 docker buildx build --target agent-binary -f docker/Dockerfile.agent -t attune-agent:binary . # -# # Build the init container image (for volume population via `cp`): -# DOCKER_BUILDKIT=1 docker buildx build --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest . 
-# # # Use in docker-compose.yaml to populate a shared volume: # # agent-init: # # image: attune-agent:latest @@ -37,14 +50,30 @@ FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder ARG RUST_TARGET -# Install musl toolchain for static linking +# Install build dependencies. +# - musl-tools: provides the musl libc headers needed for musl target builds +# - python3 + pip: needed to install ziglang (zig is the cross-compilation backend) +# - pkg-config, libssl-dev: needed for native dependency detection during build +# - file, binutils: for verifying the resulting binaries (file, strip) RUN apt-get update && apt-get install -y \ musl-tools \ pkg-config \ libssl-dev \ ca-certificates \ + file \ + binutils \ + python3 \ + python3-pip \ && rm -rf /var/lib/apt/lists/* +# Install zig (provides cross-compilation toolchains for all architectures) +# and cargo-zigbuild (cargo subcommand that uses zig as the linker/compiler). +# This replaces native musl-gcc and avoids the -m64 flag mismatch that occurs +# when the host arch doesn't match the target arch (e.g., building x86_64 musl +# binaries on an arm64 host). +RUN pip3 install --break-system-packages --no-cache-dir ziglang && \ + cargo install --locked cargo-zigbuild + # Add the requested musl target for fully static binaries RUN rustup target add ${RUST_TARGET} @@ -96,25 +125,30 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \ # --------------------------------------------------------------------------- # Build layer -# Copy real source code and compile only the agent binary with musl +# Copy real source code and compile only the agent binaries with musl # --------------------------------------------------------------------------- COPY migrations/ ./migrations/ COPY crates/ ./crates/ # Build the injected agent binaries, statically linked with musl. +# Uses cargo-zigbuild so that cross-compilation works regardless of host arch. 
# Uses a dedicated cache ID (agent-target) so the musl target directory # doesn't collide with the glibc target cache used by other Dockerfiles. RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \ --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \ --mount=type=cache,id=agent-target,target=/build/target,sharing=locked \ - cargo build --release --target ${RUST_TARGET} --bin attune-agent --bin attune-sensor-agent && \ + cargo zigbuild --release --target ${RUST_TARGET} --bin attune-agent --bin attune-sensor-agent && \ cp /build/target/${RUST_TARGET}/release/attune-agent /build/attune-agent && \ cp /build/target/${RUST_TARGET}/release/attune-sensor-agent /build/attune-sensor-agent -# Strip the binaries to minimize size -RUN strip /build/attune-agent && strip /build/attune-sensor-agent +# Strip the binaries to minimize size. +# When cross-compiling for a different architecture, the host strip may not +# understand the foreign binary format. In that case we skip stripping — the +# binary is still functional, just slightly larger. 
+RUN (strip /build/attune-agent 2>/dev/null && echo "stripped attune-agent" || echo "strip skipped for attune-agent (cross-arch binary)") && \ + (strip /build/attune-sensor-agent 2>/dev/null && echo "stripped attune-sensor-agent" || echo "strip skipped for attune-sensor-agent (cross-arch binary)") -# Verify the binaries are statically linked and functional +# Verify the binaries exist and show their details RUN ls -lh /build/attune-agent /build/attune-sensor-agent && \ file /build/attune-agent && \ file /build/attune-sensor-agent && \ diff --git a/docker/Dockerfile.pack-binaries b/docker/Dockerfile.pack-binaries index d537bee..8950442 100644 --- a/docker/Dockerfile.pack-binaries +++ b/docker/Dockerfile.pack-binaries @@ -1,12 +1,26 @@ -# Dockerfile for building pack binaries independently +# Dockerfile for building statically-linked pack binaries independently # -# This Dockerfile builds native pack binaries (sensors, etc.) with GLIBC compatibility -# The binaries are built separately from service containers and placed in ./packs/ +# This Dockerfile builds native pack binaries (sensors, etc.) as fully static +# musl binaries with zero runtime dependencies. Uses cargo-zigbuild for +# cross-compilation, allowing builds for any target architecture from any host +# (e.g., building x86_64 musl binaries on an arm64 Mac, or vice versa). +# +# Architecture handling: +# The RUST_TARGET build arg controls the output architecture: +# x86_64-unknown-linux-musl -> amd64 static binary (default) +# aarch64-unknown-linux-musl -> arm64 static binary # # Usage: -# docker build -f docker/Dockerfile.pack-binaries -t attune-pack-builder . +# # Build for the default architecture (x86_64): +# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.pack-binaries -t attune-pack-builder . +# +# # Build for arm64: +# DOCKER_BUILDKIT=1 docker build --build-arg RUST_TARGET=aarch64-unknown-linux-musl \ +# -f docker/Dockerfile.pack-binaries -t attune-pack-builder . 
+# +# # Extract binaries: # docker create --name pack-binaries attune-pack-builder -# docker cp pack-binaries:/build/pack-binaries/. ./packs/ +# docker cp pack-binaries:/pack-binaries/. ./packs/ # docker rm pack-binaries # # Or use the provided script: @@ -14,25 +28,56 @@ ARG RUST_VERSION=1.92 ARG DEBIAN_VERSION=bookworm +ARG RUST_TARGET=x86_64-unknown-linux-musl # ============================================================================ -# Stage 1: Builder - Build pack binaries with GLIBC 2.36 +# Stage 1: Builder - Cross-compile statically-linked pack binaries with musl # ============================================================================ FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder -# Install build dependencies +ARG RUST_TARGET + +# Install build dependencies. +# - musl-tools: provides the musl libc headers needed for musl target builds +# - python3 + pip: needed to install ziglang (zig is the cross-compilation backend) +# - pkg-config, libssl-dev: needed for native dependency detection during build +# - file, binutils: for verifying and stripping the resulting binaries RUN apt-get update && apt-get install -y \ + musl-tools \ pkg-config \ libssl-dev \ ca-certificates \ + file \ + binutils \ + python3 \ + python3-pip \ && rm -rf /var/lib/apt/lists/* +# Install zig (provides cross-compilation toolchains for all architectures) +# and cargo-zigbuild (cargo subcommand that uses zig as the linker/compiler). +# This replaces native musl-gcc and avoids the -m64 flag mismatch that occurs +# when the host arch doesn't match the target arch (e.g., building x86_64 musl +# binaries on an arm64 host). 
+RUN pip3 install --break-system-packages --no-cache-dir ziglang && \ + cargo install --locked cargo-zigbuild + +# Add the requested musl target for fully static binaries +RUN rustup target add ${RUST_TARGET} + WORKDIR /build # Increase rustc stack size to prevent SIGSEGV during release builds ENV RUST_MIN_STACK=67108864 -# Copy workspace configuration +# Enable SQLx offline mode — compile-time query checking without a live database +ENV SQLX_OFFLINE=true + +# --------------------------------------------------------------------------- +# Dependency caching layer +# Copy only Cargo metadata first so `cargo fetch` is cached when only source +# code changes. This follows the same selective-copy optimization pattern as +# the other active Dockerfiles in this directory. +# --------------------------------------------------------------------------- COPY Cargo.toml Cargo.lock ./ # Copy all workspace member manifests (required for workspace resolution) @@ -45,35 +90,63 @@ COPY crates/worker/Cargo.toml ./crates/worker/Cargo.toml COPY crates/notifier/Cargo.toml ./crates/notifier/Cargo.toml COPY crates/cli/Cargo.toml ./crates/cli/Cargo.toml -# Create dummy source files for workspace members (not being built) -RUN mkdir -p crates/api/src && echo "fn main() {}" > crates/api/src/main.rs -RUN mkdir -p crates/executor/src && echo "fn main() {}" > crates/executor/src/main.rs -RUN mkdir -p crates/executor/benches && echo "fn main() {}" > crates/executor/benches/context_clone.rs -RUN mkdir -p crates/sensor/src && echo "fn main() {}" > crates/sensor/src/main.rs -RUN mkdir -p crates/worker/src && echo "fn main() {}" > crates/worker/src/main.rs -RUN echo "fn main() {}" > crates/worker/src/agent_main.rs -RUN mkdir -p crates/notifier/src && echo "fn main() {}" > crates/notifier/src/main.rs -RUN mkdir -p crates/cli/src && echo "fn main() {}" > crates/cli/src/main.rs +# Create minimal stub sources so cargo can resolve the workspace and fetch deps. 
+# These are ONLY used for `cargo fetch` — never compiled. +# NOTE: The worker crate has TWO binary targets (main + agent_main) and the +# sensor crate also has two binary targets (main + agent_main), so we create +# stubs for all of them. +RUN mkdir -p crates/common/src && echo "" > crates/common/src/lib.rs && \ + mkdir -p crates/api/src && echo "fn main(){}" > crates/api/src/main.rs && \ + mkdir -p crates/executor/src && echo "fn main(){}" > crates/executor/src/main.rs && \ + mkdir -p crates/executor/benches && echo "fn main(){}" > crates/executor/benches/context_clone.rs && \ + mkdir -p crates/sensor/src && echo "fn main(){}" > crates/sensor/src/main.rs && \ + echo "fn main(){}" > crates/sensor/src/agent_main.rs && \ + mkdir -p crates/core-timer-sensor/src && echo "fn main(){}" > crates/core-timer-sensor/src/main.rs && \ + mkdir -p crates/worker/src && echo "fn main(){}" > crates/worker/src/main.rs && \ + echo "fn main(){}" > crates/worker/src/agent_main.rs && \ + mkdir -p crates/notifier/src && echo "fn main(){}" > crates/notifier/src/main.rs && \ + mkdir -p crates/cli/src && echo "fn main(){}" > crates/cli/src/main.rs -# Copy only the source code needed for pack binaries +# Download all dependencies (cached unless Cargo.toml/Cargo.lock change) +# registry/git use sharing=shared — cargo handles concurrent reads safely +RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \ + --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \ + cargo fetch + +# --------------------------------------------------------------------------- +# Build layer +# Copy real source code and compile only the pack binaries with musl +# --------------------------------------------------------------------------- +COPY migrations/ ./migrations/ COPY crates/common/ ./crates/common/ COPY crates/core-timer-sensor/ ./crates/core-timer-sensor/ -# Build pack binaries with BuildKit cache mounts -# These binaries will have GLIBC 2.36 compatibility (Debian Bookworm) +# 
Build pack binaries with BuildKit cache mounts, statically linked with musl. +# Uses cargo-zigbuild so that cross-compilation works regardless of host arch. # - registry/git use sharing=shared (cargo handles concurrent access safely) -# - target uses dedicated cache for pack binaries (separate from service builds) +# - target uses sharing=locked because zigbuild cross-compilation needs +# exclusive access to the target directory +# - dedicated cache ID (target-pack-binaries-static) to avoid collisions with +# other Dockerfiles' target caches RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \ --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \ - --mount=type=cache,target=/build/target,id=target-pack-binaries \ + --mount=type=cache,id=target-pack-binaries-static,target=/build/target,sharing=locked \ mkdir -p /build/pack-binaries && \ - cargo build --release --bin attune-core-timer-sensor && \ - cp /build/target/release/attune-core-timer-sensor /build/pack-binaries/attune-core-timer-sensor + cargo zigbuild --release --target ${RUST_TARGET} --bin attune-core-timer-sensor && \ + cp /build/target/${RUST_TARGET}/release/attune-core-timer-sensor /build/pack-binaries/attune-core-timer-sensor -# Verify binaries were built successfully -RUN ls -lah /build/pack-binaries/ && \ +# Strip the binary to minimize size. +# When cross-compiling for a different architecture, the host strip may not +# understand the foreign binary format. In that case we skip stripping — the +# binary is still functional, just slightly larger. 
+RUN (strip /build/pack-binaries/attune-core-timer-sensor 2>/dev/null && \ + echo "stripped attune-core-timer-sensor" || \ + echo "strip skipped for attune-core-timer-sensor (cross-arch binary)") + +# Verify binaries were built successfully and are statically linked +RUN ls -lh /build/pack-binaries/attune-core-timer-sensor && \ file /build/pack-binaries/attune-core-timer-sensor && \ - ldd /build/pack-binaries/attune-core-timer-sensor && \ + (ldd /build/pack-binaries/attune-core-timer-sensor 2>&1 || echo "statically linked (no dynamic dependencies)") && \ /build/pack-binaries/attune-core-timer-sensor --version || echo "Built successfully" # ============================================================================ @@ -87,3 +160,15 @@ COPY --from=builder /build/pack-binaries/ /pack-binaries/ # Default command (not used in FROM scratch) CMD ["/bin/sh"] + +# ============================================================================ +# Stage 3: pack-binaries-init - Init container for volume population +# ============================================================================ +# Uses busybox so we have `cp`, `sh`, etc. for use as a Docker Compose +# init service that copies pack binaries into the shared packs volume. +FROM busybox:1.36 AS pack-binaries-init + +COPY --from=builder /build/pack-binaries/ /pack-binaries/ + +# No default entrypoint — docker-compose provides the command +ENTRYPOINT ["/bin/sh"] diff --git a/docker/distributable/config.docker.yaml b/docker/distributable/config.docker.yaml index 7aac10e..4278d0c 100644 --- a/docker/distributable/config.docker.yaml +++ b/docker/distributable/config.docker.yaml @@ -1,5 +1,15 @@ # Attune Docker Environment Configuration -# This file overrides base config.yaml settings for Docker deployments +# +# This file is mounted into containers at /opt/attune/config/config.yaml. +# It provides base values for Docker deployments. 
+# +# Sensitive values (jwt_secret, encryption_key) are overridden by environment +# variables set in docker-compose.yaml using the ATTUNE__ prefix convention: +# ATTUNE__SECURITY__JWT_SECRET=... +# ATTUNE__SECURITY__ENCRYPTION_KEY=... +# +# The `config` crate does NOT support ${VAR} shell interpolation in YAML. +# All overrides must use ATTUNE__
__ environment variables. environment: docker @@ -8,36 +18,29 @@ database: url: postgresql://attune:attune@postgres:5432/attune max_connections: 20 min_connections: 5 - acquire_timeout: 30 + connect_timeout: 30 idle_timeout: 600 - max_lifetime: 1800 log_statements: false - schema: "attune" + schema: "public" # Docker message queue (RabbitMQ container) message_queue: url: amqp://attune:attune@rabbitmq:5672 - connection_timeout: 30 - heartbeat: 60 - prefetch_count: 10 - rabbitmq: - worker_queue_ttl_ms: 300000 # 5 minutes - expire unprocessed executions - dead_letter: - enabled: true - exchange: attune.dlx - ttl_ms: 86400000 # 24 hours - retain DLQ messages for debugging + exchange: attune + enable_dlq: true + message_ttl: 3600 # seconds -# Docker cache (Redis container - optional) -cache: - enabled: true +# Docker cache (Redis container) +redis: url: redis://redis:6379 - connection_timeout: 5 - default_ttl: 3600 + pool_size: 10 # API server configuration server: host: 0.0.0.0 port: 8080 + request_timeout: 60 + enable_cors: true cors_origins: - http://localhost - http://localhost:3000 @@ -49,8 +52,8 @@ server: - http://127.0.0.1:3002 - http://127.0.0.1:5173 - http://web - request_timeout: 60 - max_request_size: 10485760 # 10MB + - http://web:3000 + max_body_size: 10485760 # 10MB # Logging configuration log: @@ -58,30 +61,34 @@ log: format: json # Structured logs for container environments console: true -# Security settings (MUST override via environment variables in production) +# Security settings +# jwt_secret and encryption_key are intentional placeholders — they MUST be +# overridden via ATTUNE__SECURITY__JWT_SECRET and ATTUNE__SECURITY__ENCRYPTION_KEY +# environment variables in docker-compose.yaml (or a .env file). 
security: - jwt_secret: ${JWT_SECRET} + jwt_secret: override-via-ATTUNE__SECURITY__JWT_SECRET-env-var jwt_access_expiration: 3600 # 1 hour jwt_refresh_expiration: 604800 # 7 days - encryption_key: ${ENCRYPTION_KEY} + encryption_key: override-via-ATTUNE__SECURITY__ENCRYPTION_KEY-env-var enable_auth: true allow_self_registration: false login_page: show_local_login: true show_oidc_login: true + show_ldap_login: true oidc: - # example local dev enabled: false - discovery_url: https://my.sso.provider.com/.well-known/openid-configuration - client_id: 31d194737840d32bd3afe6474826976bae346d77247a158c4dc43887278eb605 - client_secret: xL2C9WOC8shZ2QrZs9VFa10JK1Ob95xcMtZU3N86H1Pz0my5 - provider_name: my-sso-provider - provider_label: My SSO Provider - provider_icon_url: https://my.sso.provider.com/favicon.ico - redirect_uri: http://localhost:3000/auth/callback - post_logout_redirect_uri: http://localhost:3000/login - scopes: - - groups + # Uncomment and configure for your OIDC provider: + # discovery_url: https://auth.example.com/.well-known/openid-configuration + # client_id: your-client-id + # client_secret: your-client-secret + # provider_name: sso + # provider_label: SSO Login + # provider_icon_url: https://auth.example.com/favicon.ico + # redirect_uri: http://localhost:3000/auth/callback + # post_logout_redirect_uri: http://localhost:3000/login + # scopes: + # - groups # Packs directory (mounted volume in containers) packs_base_dir: /opt/attune/packs @@ -98,61 +105,34 @@ artifacts_dir: /opt/attune/artifacts # Executor service configuration executor: - service_name: attune-executor - max_concurrent_executions: 50 - heartbeat_interval: 30 - task_timeout: 300 - cleanup_interval: 120 - scheduling_interval: 5 - retry_max_attempts: 3 - retry_backoff_multiplier: 2.0 - retry_backoff_max: 300 scheduled_timeout: 300 # 5 minutes - fail executions stuck in SCHEDULED timeout_check_interval: 60 # Check every minute for stale executions enable_timeout_monitor: true # Worker service 
configuration worker: - service_name: attune-worker worker_type: container max_concurrent_tasks: 20 heartbeat_interval: 10 # Reduced from 30s for faster stale detection (staleness = 30s) task_timeout: 300 - cleanup_interval: 120 - work_dir: /tmp/attune-worker - python: - executable: python3 - venv_dir: /tmp/attune-worker/venvs - requirements_timeout: 300 - nodejs: - executable: node - npm_executable: npm - modules_dir: /tmp/attune-worker/node_modules - install_timeout: 300 - shell: - executable: /bin/bash - allowed_shells: - - /bin/bash - - /bin/sh + max_stdout_bytes: 10485760 # 10MB + max_stderr_bytes: 10485760 # 10MB + shutdown_timeout: 30 + stream_logs: true # Sensor service configuration sensor: - service_name: attune-sensor - heartbeat_interval: 10 # Reduced from 30s for faster stale detection max_concurrent_sensors: 50 + heartbeat_interval: 10 # Reduced from 30s for faster stale detection + poll_interval: 10 sensor_timeout: 300 - polling_interval: 10 - cleanup_interval: 120 + shutdown_timeout: 30 # Notifier service configuration notifier: - service_name: attune-notifier - websocket_host: 0.0.0.0 - websocket_port: 8081 - heartbeat_interval: 30 - connection_timeout: 60 + host: 0.0.0.0 + port: 8081 max_connections: 1000 - message_buffer_size: 10000 # Agent binary distribution (serves the agent binary via API for remote downloads) agent: diff --git a/docker/distributable/docker-compose.yaml b/docker/distributable/docker-compose.yaml index da6b89a..44f4006 100644 --- a/docker/distributable/docker-compose.yaml +++ b/docker/distributable/docker-compose.yaml @@ -69,6 +69,24 @@ services: - attune-network restart: on-failure + # Build and extract statically-linked pack binaries (sensors, etc.) + # These binaries are built with musl for cross-architecture compatibility + # and placed directly into the packs volume for sensor containers to use. 
+ init-pack-binaries: + image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/pack-builder:${ATTUNE_IMAGE_TAG:-latest} + container_name: attune-init-pack-binaries + volumes: + - packs_data:/opt/attune/packs + entrypoint: + [ + "/bin/sh", + "-c", + "mkdir -p /opt/attune/packs/core/sensors && cp /pack-binaries/attune-core-timer-sensor /opt/attune/packs/core/sensors/attune-core-timer-sensor && chmod +x /opt/attune/packs/core/sensors/attune-core-timer-sensor && echo 'Pack binaries copied successfully'", + ] + restart: "no" + networks: + - attune-network + init-packs: image: python:3.11-slim container_name: attune-init-packs @@ -93,6 +111,8 @@ services: DEFAULT_ADMIN_PERMISSION_SET_REF: core.admin command: ["/bin/sh", "/init-packs.sh"] depends_on: + init-pack-binaries: + condition: service_completed_successfully migrations: condition: service_completed_successfully postgres: @@ -166,7 +186,7 @@ services: ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune ATTUNE__DATABASE__SCHEMA: public ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672 - ATTUNE__CACHE__URL: redis://redis:6379 + ATTUNE__REDIS__URL: redis://redis:6379 ATTUNE__WORKER__WORKER_TYPE: container ports: - "8080:8080" @@ -213,7 +233,7 @@ services: ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune ATTUNE__DATABASE__SCHEMA: public ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672 - ATTUNE__CACHE__URL: redis://redis:6379 + ATTUNE__REDIS__URL: redis://redis:6379 ATTUNE__WORKER__WORKER_TYPE: container volumes: - ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro diff --git a/docker/init-packs.sh b/docker/init-packs.sh index 1fceb28..285e645 100755 --- a/docker/init-packs.sh +++ b/docker/init-packs.sh @@ -119,14 +119,54 @@ for pack_dir in "$SOURCE_PACKS_DIR"/*; do target_pack_dir="$TARGET_PACKS_DIR/$pack_name" if [ -d "$target_pack_dir" ]; then - # Pack exists, update files to ensure we have latest 
(especially binaries) + # Pack exists, update non-binary files to ensure we have latest. + # Compiled sensor binaries in sensors/ are provided by init-pack-binaries + # (statically-linked musl builds) and must NOT be overwritten by + # the host's copy, which may be dynamically linked or the wrong arch. echo -e "${YELLOW} ⟳${NC} Pack exists at: $target_pack_dir, updating files..." + + # Detect ELF binaries already in the target sensors/ dir by + # checking for the 4-byte ELF magic number (\x7fELF) at the + # start of the file. The `file` command is unavailable on + # python:3.11-slim, so we read the magic bytes with `od`. + _skip_bins="" + if [ -d "$target_pack_dir/sensors" ]; then + for _bin in "$target_pack_dir/sensors"/*; do + [ -f "$_bin" ] || continue + _magic=$(od -A n -t x1 -N 4 "$_bin" 2>/dev/null | tr -d ' ') + if [ "$_magic" = "7f454c46" ]; then + _skip_bins="$_skip_bins $(basename "$_bin")" + fi + done + fi + + # Copy everything from source, then restore any skipped binaries + # that were overwritten. We do it this way (copy-then-restore) + # rather than exclude-during-copy because busybox cp and POSIX cp + # have no --exclude flag. + if [ -n "$_skip_bins" ]; then + # Back up existing static binaries + _tmpdir=$(mktemp -d) + for _b in $_skip_bins; do + cp "$target_pack_dir/sensors/$_b" "$_tmpdir/$_b" + done + fi + if cp -rf "$pack_dir"/* "$target_pack_dir"/; then echo -e "${GREEN} ✓${NC} Updated pack files at: $target_pack_dir" else echo -e "${RED} ✗${NC} Failed to update pack" exit 1 fi + + # Restore static binaries that were overwritten + if [ -n "$_skip_bins" ]; then + for _b in $_skip_bins; do + cp "$_tmpdir/$_b" "$target_pack_dir/sensors/$_b" + echo -e "${GREEN} ✓${NC} Preserved static binary: sensors/$_b" + done + rm -rf "$_tmpdir" + fi else # Copy pack to target directory echo -e "${YELLOW} →${NC} Copying pack files..." 
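The `od`-based ELF detection added to `init-packs.sh` above can be exercised standalone. A minimal sketch (the `is_elf` helper name is illustrative, not part of the script):

```shell
#!/bin/sh
# Succeeds iff the file begins with the 4-byte ELF magic (\x7f 'E' 'L' 'F'),
# using the same od invocation as init-packs.sh (no `file` command needed).
is_elf() {
    _magic=$(od -A n -t x1 -N 4 "$1" 2>/dev/null | tr -d ' ')
    [ "$_magic" = "7f454c46" ]
}

is_elf /bin/sh && echo "/bin/sh: ELF binary"
printf 'just text\n' > /tmp/not-elf.txt
is_elf /tmp/not-elf.txt || echo "/tmp/not-elf.txt: not ELF"
```

Command substitution strips the trailing newline from `od`'s single output line, so `tr -d ' '` is enough to normalize the hex bytes for comparison.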
diff --git a/packs/core/sensors/attune-core-timer-sensor b/packs/core/sensors/attune-core-timer-sensor deleted file mode 100755 index 2a4b47c..0000000 Binary files a/packs/core/sensors/attune-core-timer-sensor and /dev/null differ diff --git a/scripts/build-pack-binaries.sh b/scripts/build-pack-binaries.sh index fbc90b5..adce374 100755 --- a/scripts/build-pack-binaries.sh +++ b/scripts/build-pack-binaries.sh @@ -1,14 +1,16 @@ #!/usr/bin/env bash # Build pack binaries using Docker and extract them to ./packs/ # -# This script builds native pack binaries (sensors, etc.) in a Docker container -# with GLIBC compatibility and extracts them to the appropriate pack directories. +# This script builds statically-linked pack binaries (sensors, etc.) in a Docker +# container using cargo-zigbuild + musl, producing binaries with zero runtime +# dependencies. Supports cross-compilation for any target architecture. # # Usage: -# ./scripts/build-pack-binaries.sh +# ./scripts/build-pack-binaries.sh # Build for x86_64 (default) +# RUST_TARGET=aarch64-unknown-linux-musl ./scripts/build-pack-binaries.sh # Build for arm64 # # The script will: -# 1. Build pack binaries in a Docker container with GLIBC 2.36 (Debian Bookworm) +# 1. Build statically-linked pack binaries via cargo-zigbuild + musl # 2. Extract binaries to ./packs/core/sensors/ # 3. Make binaries executable # 4. Clean up temporary container @@ -29,10 +31,12 @@ PROJECT_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)" IMAGE_NAME="attune-pack-builder" CONTAINER_NAME="attune-pack-binaries-tmp" DOCKERFILE="docker/Dockerfile.pack-binaries" +RUST_TARGET="${RUST_TARGET:-x86_64-unknown-linux-musl}" -echo -e "${GREEN}Building pack binaries...${NC}" +echo -e "${GREEN}Building statically-linked pack binaries...${NC}" echo "Project root: ${PROJECT_ROOT}" echo "Dockerfile: ${DOCKERFILE}" +echo "Target: ${RUST_TARGET}" echo "" # Navigate to project root @@ -45,8 +49,9 @@ if [[ ! 
-f "${DOCKERFILE}" ]]; then fi # Build the Docker image -echo -e "${YELLOW}Step 1/4: Building Docker image...${NC}" +echo -e "${YELLOW}Step 1/4: Building Docker image (target: ${RUST_TARGET})...${NC}" if DOCKER_BUILDKIT=1 docker build \ + --build-arg RUST_TARGET="${RUST_TARGET}" \ -f "${DOCKERFILE}" \ -t "${IMAGE_NAME}" \ . ; then @@ -87,7 +92,7 @@ chmod +x packs/core/sensors/attune-core-timer-sensor echo "" echo -e "${YELLOW}Verifying binaries:${NC}" file packs/core/sensors/attune-core-timer-sensor -ldd packs/core/sensors/attune-core-timer-sensor || echo "(ldd failed - binary may be static or require different environment)" +(ldd packs/core/sensors/attune-core-timer-sensor 2>&1 || echo "statically linked (no dynamic dependencies)") ls -lh packs/core/sensors/attune-core-timer-sensor # Clean up temporary container @@ -105,11 +110,13 @@ echo -e "${GREEN}═════════════════════ echo -e "${GREEN}Pack binaries built successfully!${NC}" echo -e "${GREEN}════════════════════════════════════════${NC}" echo "" +echo "Target architecture: ${RUST_TARGET}" echo "Binaries location:" echo " • packs/core/sensors/attune-core-timer-sensor" echo "" -echo "These binaries are now ready to be used by the init-packs service" -echo "when starting docker-compose." +echo "These are statically-linked musl binaries with zero runtime dependencies." +echo "They are now ready to be used by the init-packs service when starting" +echo "docker-compose." 
echo "" echo "To use them:" echo " docker compose up -d" diff --git a/work-summary/2026-03-27-pack-binaries-static-build.md b/work-summary/2026-03-27-pack-binaries-static-build.md new file mode 100644 index 0000000..956a191 --- /dev/null +++ b/work-summary/2026-03-27-pack-binaries-static-build.md @@ -0,0 +1,101 @@ +# Pack Binaries: Cross-Architecture Static Build + +**Date**: 2026-03-27 + +## Problem + +The `attune-core-timer-sensor` native sensor binary failed to execute in Docker containers on Apple Silicon (arm64) Macs with the error: + +``` +rosetta error: failed to open elf at /lib64/ld-linux-x86-64.so.2 +``` + +**Two root causes**: + +1. **Wrong build toolchain**: `docker/Dockerfile.pack-binaries` used a plain `cargo build` which produced a **dynamically-linked, host-architecture** binary. On arm64 Docker hosts, this created an aarch64 binary linked against glibc. When the sensor container tried to execute it, the required dynamic linker (`ld-linux-x86-64.so.2`) was absent. This contrasted with `docker/Dockerfile.agent`, which already used `cargo-zigbuild` + musl for fully static binaries. + +2. **init-packs overwrote the static binary**: Even after fixing the Dockerfile, the `init-packs.sh` script did `cp -rf` from `./packs/` (host bind mount) into the packs volume, **overwriting** the freshly-placed static binary from `init-pack-binaries` with the old dynamically-linked binary from the host's `packs/core/sensors/` directory. 
+ +## Changes + +### `docker/Dockerfile.pack-binaries` — Rewritten for static cross-compilation + +- Added `RUST_TARGET` build arg (default: `x86_64-unknown-linux-musl`) +- Installed `musl-tools`, `ziglang`, and `cargo-zigbuild` (matching agent Dockerfile pattern) +- Replaced `cargo build --release` with `cargo zigbuild --release --target ${RUST_TARGET}` +- Added `cargo fetch` dependency caching layer with proper workspace stubs (including sensor `agent_main.rs`) +- Added `SQLX_OFFLINE=true` for compile-time query checking without a live database +- Added strip-with-fallback for cross-arch scenarios +- Added **Stage 3: `pack-binaries-init`** — busybox-based image for Docker Compose volume population (analogous to `agent-init`) +- Updated cache ID to `target-pack-binaries-static` with `sharing=locked` for zigbuild exclusivity + +### `docker/init-packs.sh` — Preserve static binaries during pack copy + +- Before copying host pack files, detects ELF binaries already present in the target `sensors/` directory using the 4-byte ELF magic number (`\x7fELF` = `7f454c46`) via `od` (available in python:3.11-slim, unlike `file`) +- Backs up detected ELF binaries to a temp directory before the `cp -rf` overwrites them +- Restores the backed-up static binaries after the copy completes +- Logs each preserved binary for visibility + +### `docker-compose.yaml` — Added `init-pack-binaries` service + +- New `init-pack-binaries` service builds from `Dockerfile.pack-binaries` (target: `pack-binaries-init`) and copies the static binary into the `packs_data` volume +- Accepts `PACK_BINARIES_RUST_TARGET` env var (default: `x86_64-unknown-linux-musl`) +- `init-packs` now depends on `init-pack-binaries` to ensure binaries are in the volume before pack files are copied +- `docker compose up` now automatically builds and deploys pack binaries — no manual script run required + +### `docker/distributable/docker-compose.yaml` — Same pattern for distributable + +- Added `init-pack-binaries` 
service using pre-built registry image +- Updated `init-packs` dependencies + +### `scripts/build-pack-binaries.sh` — Updated for static builds + +- Passes `RUST_TARGET` build arg to Docker build +- Accepts `RUST_TARGET` env var (default: `x86_64-unknown-linux-musl`) +- Updated verification output to expect statically-linked binary + +### `Makefile` — New targets + +- `PACK_BINARIES_RUST_TARGET` variable (default: `x86_64-unknown-linux-musl`) +- `docker-build-pack-binaries` — build for default architecture +- `docker-build-pack-binaries-arm64` — build for aarch64 +- `docker-build-pack-binaries-all` — build both architectures + +### `.gitignore` — Exclude compiled pack binary + +- Added `packs/core/sensors/attune-core-timer-sensor` to `.gitignore` +- Removed the stale dynamically-linked binary from git tracking + +## Architecture + +The fix follows the same proven pattern as `Dockerfile.agent`: + +``` +cargo-zigbuild + musl → statically-linked binary → zero runtime dependencies +``` + +Since the binary has no dynamic library dependencies (no glibc, no libssl, no dynamic linker), it runs on **any** Linux container of the matching CPU architecture, regardless of the base image (Debian, Alpine, scratch, etc.). + +### Init sequence + +1. **`init-pack-binaries`**: Builds static musl binary → copies to `packs_data` volume at `core/sensors/` +2. **`init-packs`** (depends on `init-pack-binaries`): Copies host `./packs/core/` to volume → detects existing ELF binary → backs it up → copies host files → restores static binary +3. 
**`sensor`**: Spawns the static `attune-core-timer-sensor` → works on any architecture + +## Usage + +```bash +# Default (x86_64) — works on amd64 containers and arm64 via Rosetta +docker compose up -d + +# For native arm64 containers +PACK_BINARIES_RUST_TARGET=aarch64-unknown-linux-musl docker compose up -d + +# Standalone build +make docker-build-pack-binaries # amd64 +make docker-build-pack-binaries-arm64 # arm64 +make docker-build-pack-binaries-all # both + +# Manual script +RUST_TARGET=aarch64-unknown-linux-musl ./scripts/build-pack-binaries.sh +```
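A quick post-build sanity check before `docker compose up` can reuse the same ELF-magic test that `init-packs.sh` performs. A sketch; `verify_pack_binary` is illustrative and not part of the repo's scripts:

```shell
#!/bin/sh
# Confirm an extracted pack binary exists, is executable, and is ELF.
verify_pack_binary() {
    b="$1"
    [ -x "$b" ] || { echo "missing or not executable: $b" >&2; return 1; }
    # Same 4-byte ELF magic check that init-packs.sh uses
    magic=$(od -A n -t x1 -N 4 "$b" | tr -d ' ')
    [ "$magic" = "7f454c46" ] || { echo "not an ELF binary: $b" >&2; return 1; }
    echo "ok: $b"
}

# Example (path assumes the repo's default layout):
# verify_pack_binary packs/core/sensors/attune-core-timer-sensor
```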