28 Commits

Author SHA1 Message Date
f93e9229d2 ha executor
Some checks failed
CI / Rustfmt (pull_request) Successful in 19s
CI / Cargo Audit & Deny (pull_request) Successful in 33s
CI / Security Blocking Checks (pull_request) Successful in 5s
CI / Web Blocking Checks (pull_request) Successful in 49s
CI / Web Advisory Checks (pull_request) Successful in 33s
CI / Clippy (pull_request) Has been cancelled
CI / Security Advisory Checks (pull_request) Has been cancelled
CI / Tests (pull_request) Has been cancelled
2026-04-02 17:15:59 -05:00
8e91440f23 [WIP] making executor ha 2026-04-02 11:33:26 -05:00
8278030699 fixing tests, making clippy happy
Some checks failed
CI / Rustfmt (push) Successful in 19s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Security Blocking Checks (push) Successful in 5s
CI / Web Advisory Checks (push) Successful in 28s
CI / Web Blocking Checks (push) Successful in 52s
Publish Images / Resolve Publish Metadata (push) Successful in 0s
CI / Security Advisory Checks (push) Successful in 23s
CI / Clippy (push) Successful in 2m4s
Publish Images / Publish Docker Dist Bundle (push) Successful in 4s
Publish Images / Publish web (amd64) (push) Successful in 45s
Publish Images / Publish web (arm64) (push) Successful in 3m32s
CI / Tests (push) Failing after 8m25s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m12s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m39s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish executor (amd64) (push) Successful in 40s
Publish Images / Publish api (amd64) (push) Successful in 30s
Publish Images / Publish notifier (amd64) (push) Successful in 41s
Publish Images / Publish agent (arm64) (push) Successful in 52s
Publish Images / Publish api (arm64) (push) Successful in 1m56s
Publish Images / Publish executor (arm64) (push) Successful in 1m57s
Publish Images / Publish notifier (arm64) (push) Successful in 1m50s
Publish Images / Publish manifest attune/agent (push) Successful in 15s
Publish Images / Publish manifest attune/api (push) Failing after 30s
Publish Images / Publish manifest attune/executor (push) Successful in 42s
Publish Images / Publish manifest attune/web (push) Failing after 17s
Publish Images / Publish manifest attune/notifier (push) Failing after 14m44s
2026-04-02 09:17:21 -05:00
b34617ded1 npm audit fix
Some checks failed
CI / Rustfmt (push) Successful in 19s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Security Blocking Checks (push) Successful in 5s
CI / Web Blocking Checks (push) Successful in 53s
CI / Web Advisory Checks (push) Successful in 34s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 25s
CI / Clippy (push) Failing after 1m46s
Publish Images / Publish Docker Dist Bundle (push) Successful in 4s
Publish Images / Publish web (amd64) (push) Successful in 44s
Publish Images / Publish web (arm64) (push) Successful in 3m21s
CI / Tests (push) Failing after 6m7s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m13s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m39s
Publish Images / Publish agent (amd64) (push) Successful in 21s
Publish Images / Publish executor (amd64) (push) Failing after 45s
Publish Images / Publish api (amd64) (push) Failing after 45s
Publish Images / Publish notifier (amd64) (push) Failing after 43s
Publish Images / Publish agent (arm64) (push) Successful in 59s
Publish Images / Publish executor (arm64) (push) Successful in 1m52s
Publish Images / Publish api (arm64) (push) Successful in 1m58s
Publish Images / Publish notifier (arm64) (push) Successful in 1m52s
Publish Images / Publish manifest attune/agent (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
2026-04-02 08:06:55 -05:00
b6446cc574 queueing fixes 2026-04-02 08:06:02 -05:00
cf82de87ea removing useless root-level package.json
Some checks failed
CI / Rustfmt (push) Successful in 19s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Security Blocking Checks (push) Successful in 5s
CI / Web Blocking Checks (push) Successful in 50s
CI / Web Advisory Checks (push) Successful in 34s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 25s
CI / Clippy (push) Failing after 1m41s
Publish Images / Publish Docker Dist Bundle (push) Successful in 4s
Publish Images / Publish web (amd64) (push) Successful in 43s
Publish Images / Publish web (arm64) (push) Successful in 3m17s
CI / Tests (push) Failing after 6m0s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m17s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m40s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish notifier (amd64) (push) Failing after 11s
Publish Images / Publish executor (amd64) (push) Successful in 46s
Publish Images / Publish api (amd64) (push) Successful in 46s
Publish Images / Publish agent (arm64) (push) Successful in 56s
Publish Images / Publish api (arm64) (push) Successful in 2m4s
Publish Images / Publish executor (arm64) (push) Successful in 2m3s
Publish Images / Publish notifier (arm64) (push) Successful in 1m56s
Publish Images / Publish manifest attune/agent (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
2026-04-01 20:40:13 -05:00
a4c303ec84 merging semgrep-scan 2026-04-01 20:38:18 -05:00
a0f59114a3 Merge branch 'semgrep-scan' 2026-04-01 20:37:39 -05:00
104dcbb1b1 [WIP] client action streaming 2026-04-01 20:23:56 -05:00
b342005e17 addressing some semgrep issues 2026-04-01 19:27:37 -05:00
4b525f4641 attempting to fix build pipeline failures
All checks were successful
CI / Rustfmt (push) Successful in 23s
CI / Cargo Audit & Deny (push) Successful in 35s
CI / Security Blocking Checks (push) Successful in 10s
CI / Web Blocking Checks (push) Successful in 50s
CI / Web Advisory Checks (push) Successful in 35s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 37s
CI / Clippy (push) Successful in 2m3s
Publish Images / Publish web (amd64) (push) Successful in 42s
Publish Images / Publish web (arm64) (push) Successful in 3m25s
CI / Tests (push) Successful in 8m51s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m32s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m22s
Publish Images / Publish agent (amd64) (push) Successful in 21s
Publish Images / Publish notifier (amd64) (push) Successful in 37s
Publish Images / Publish executor (amd64) (push) Successful in 41s
Publish Images / Publish api (amd64) (push) Successful in 41s
Publish Images / Publish agent (arm64) (push) Successful in 55s
Publish Images / Publish api (arm64) (push) Successful in 1m58s
Publish Images / Publish executor (arm64) (push) Successful in 1m53s
Publish Images / Publish notifier (arm64) (push) Successful in 1m53s
Publish Images / Publish manifest attune/agent (push) Successful in 7s
Publish Images / Publish manifest attune/api (push) Successful in 16s
Publish Images / Publish manifest attune/executor (push) Successful in 10s
Publish Images / Publish manifest attune/notifier (push) Successful in 8s
Publish Images / Publish manifest attune/web (push) Successful in 7s
Publish Images / Publish Docker Dist Bundle (push) Successful in 4s
2026-03-28 14:21:09 -05:00
David Culbreth
7ef2b59b23 working on arm64 native
Some checks failed
CI / Rustfmt (push) Successful in 24s
CI / Cargo Audit & Deny (push) Successful in 36s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 48s
CI / Web Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
CI / Clippy (push) Failing after 1m53s
Publish Images / Publish Docker Dist Bundle (push) Failing after 8s
Publish Images / Publish web (amd64) (push) Successful in 56s
CI / Security Advisory Checks (push) Successful in 38s
Publish Images / Publish web (arm64) (push) Successful in 3m29s
CI / Tests (push) Successful in 9m21s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 12m28s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m20s
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/agent (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
2026-03-27 16:37:46 -05:00
3a13bf754a fixing docker compose distribution
Some checks failed
CI / Rustfmt (push) Successful in 20s
CI / Clippy (push) Successful in 2m3s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 1m21s
CI / Security Blocking Checks (push) Successful in 10s
CI / Web Advisory Checks (push) Successful in 1m3s
CI / Security Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Tests (push) Successful in 8m46s
Publish Images / Publish web (arm64) (push) Successful in 3m20s
Publish Images / Publish Docker Dist Bundle (push) Failing after 9s
Publish Images / Publish web (amd64) (push) Successful in 52s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m20s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m30s
Publish Images / Publish agent (amd64) (push) Successful in 29s
Publish Images / Publish executor (amd64) (push) Successful in 35s
Publish Images / Publish api (amd64) (push) Successful in 42s
Publish Images / Publish notifier (amd64) (push) Successful in 35s
Publish Images / Publish agent (arm64) (push) Successful in 1m3s
Publish Images / Publish api (arm64) (push) Successful in 1m55s
Publish Images / Publish executor (arm64) (push) Successful in 2m1s
Publish Images / Publish notifier (arm64) (push) Successful in 1m54s
Publish Images / Publish manifest attune/agent (push) Successful in 10s
Publish Images / Publish manifest attune/api (push) Successful in 12s
Publish Images / Publish manifest attune/executor (push) Successful in 10s
Publish Images / Publish manifest attune/notifier (push) Successful in 9s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 15:39:07 -05:00
f4ef823f43 fixing audit finding
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 53s
CI / Web Advisory Checks (push) Successful in 34s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 36s
CI / Clippy (push) Successful in 2m8s
Publish Images / Publish Docker Dist Bundle (push) Failing after 8s
Publish Images / Publish web (amd64) (push) Successful in 53s
Publish Images / Publish web (arm64) (push) Successful in 3m28s
CI / Tests (push) Successful in 9m20s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m21s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m23s
Publish Images / Publish agent (amd64) (push) Successful in 23s
Publish Images / Publish api (amd64) (push) Successful in 33s
Publish Images / Publish notifier (amd64) (push) Successful in 38s
Publish Images / Publish executor (amd64) (push) Successful in 54s
Publish Images / Publish agent (arm64) (push) Successful in 59s
Publish Images / Publish executor (arm64) (push) Successful in 1m55s
Publish Images / Publish api (arm64) (push) Successful in 2m2s
Publish Images / Publish notifier (arm64) (push) Successful in 2m3s
Publish Images / Publish manifest attune/agent (push) Successful in 10s
Publish Images / Publish manifest attune/executor (push) Successful in 19s
Publish Images / Publish manifest attune/api (push) Successful in 21s
Publish Images / Publish manifest attune/notifier (push) Successful in 12s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 14:05:53 -05:00
ab7d31de2f fixing docker compose distribution 2026-03-26 14:04:57 -05:00
938c271ff5 distributable, please
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 36s
CI / Security Blocking Checks (push) Successful in 6s
CI / Web Blocking Checks (push) Successful in 53s
CI / Web Advisory Checks (push) Successful in 34s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 38s
CI / Clippy (push) Successful in 2m7s
Publish Images / Publish Docker Dist Bundle (push) Failing after 19s
Publish Images / Publish web (amd64) (push) Successful in 49s
Publish Images / Publish web (arm64) (push) Successful in 3m31s
CI / Tests (push) Successful in 8m48s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m42s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m19s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 38s
Publish Images / Publish notifier (amd64) (push) Successful in 42s
Publish Images / Publish executor (amd64) (push) Successful in 46s
Publish Images / Publish agent (arm64) (push) Successful in 56s
Publish Images / Publish api (arm64) (push) Successful in 1m52s
Publish Images / Publish executor (arm64) (push) Successful in 2m2s
Publish Images / Publish notifier (arm64) (push) Successful in 2m3s
Publish Images / Publish manifest attune/agent (push) Successful in 6s
Publish Images / Publish manifest attune/api (push) Successful in 11s
Publish Images / Publish manifest attune/executor (push) Successful in 10s
Publish Images / Publish manifest attune/notifier (push) Successful in 8s
Publish Images / Publish manifest attune/web (push) Successful in 8s
2026-03-26 12:26:23 -05:00
da8055cb79 publishable docker compose?
Some checks failed
CI / Cargo Audit & Deny (push) Successful in 31s
CI / Rustfmt (push) Successful in 18s
CI / Security Blocking Checks (push) Successful in 6s
CI / Web Blocking Checks (push) Successful in 52s
CI / Web Advisory Checks (push) Successful in 31s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
CI / Security Advisory Checks (push) Successful in 38s
CI / Clippy (push) Successful in 1m58s
Publish Images / Publish Docker Dist Bundle (push) Failing after 21s
Publish Images / Publish web (amd64) (push) Successful in 50s
Publish Images / Publish web (arm64) (push) Successful in 3m26s
CI / Tests (push) Successful in 9m1s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m25s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m42s
Publish Images / Publish agent (amd64) (push) Successful in 28s
Publish Images / Publish api (amd64) (push) Successful in 45s
Publish Images / Publish executor (amd64) (push) Successful in 46s
Publish Images / Publish notifier (amd64) (push) Successful in 49s
Publish Images / Publish agent (arm64) (push) Successful in 1m0s
Publish Images / Publish api (arm64) (push) Successful in 1m51s
Publish Images / Publish executor (arm64) (push) Successful in 2m1s
Publish Images / Publish notifier (arm64) (push) Successful in 2m1s
Publish Images / Publish manifest attune/agent (push) Successful in 6s
Publish Images / Publish manifest attune/api (push) Successful in 10s
Publish Images / Publish manifest attune/executor (push) Successful in 7s
Publish Images / Publish manifest attune/notifier (push) Successful in 9s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 08:46:18 -05:00
03a239d22b manifest publish retries and more descriptive logs.
All checks were successful
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 34s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Blocking Checks (push) Successful in 52s
CI / Web Advisory Checks (push) Successful in 38s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
CI / Clippy (push) Successful in 2m1s
CI / Security Advisory Checks (push) Successful in 1m24s
Publish Images / Publish web (amd64) (push) Successful in 46s
Publish Images / Publish web (arm64) (push) Successful in 3m23s
CI / Tests (push) Successful in 8m54s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m27s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m19s
Publish Images / Publish agent (amd64) (push) Successful in 23s
Publish Images / Publish api (amd64) (push) Successful in 37s
Publish Images / Publish executor (amd64) (push) Successful in 47s
Publish Images / Publish agent (arm64) (push) Successful in 1m1s
Publish Images / Publish notifier (amd64) (push) Successful in 40s
Publish Images / Publish api (arm64) (push) Successful in 1m51s
Publish Images / Publish executor (arm64) (push) Successful in 2m1s
Publish Images / Publish notifier (arm64) (push) Successful in 1m49s
Publish Images / Publish manifest attune/agent (push) Successful in 7s
Publish Images / Publish manifest attune/executor (push) Successful in 8s
Publish Images / Publish manifest attune/notifier (push) Successful in 9s
Publish Images / Publish manifest attune/api (push) Successful in 18s
Publish Images / Publish manifest attune/web (push) Successful in 8s
2026-03-26 07:40:07 -05:00
ba83958337 trying to fix manifest push
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 35s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 51s
CI / Web Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Clippy (push) Successful in 2m9s
CI / Security Advisory Checks (push) Successful in 1m25s
Publish Images / Publish web (amd64) (push) Successful in 52s
Publish Images / Publish web (arm64) (push) Successful in 3m27s
CI / Tests (push) Successful in 8m48s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m50s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m29s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 37s
Publish Images / Publish executor (amd64) (push) Successful in 40s
Publish Images / Publish agent (arm64) (push) Successful in 1m2s
Publish Images / Publish notifier (amd64) (push) Successful in 38s
Publish Images / Publish executor (arm64) (push) Successful in 1m57s
Publish Images / Publish api (arm64) (push) Successful in 1m58s
Publish Images / Publish notifier (arm64) (push) Successful in 2m6s
Publish Images / Publish manifest attune/agent (push) Successful in 12s
Publish Images / Publish manifest attune/api (push) Successful in 11s
Publish Images / Publish manifest attune/notifier (push) Successful in 13s
Publish Images / Publish manifest attune/executor (push) Successful in 16s
Publish Images / Publish manifest attune/web (push) Failing after 37s
2026-03-25 17:29:27 -05:00
c11bc1a2bf trying to fix manifest push
Some checks failed
CI / Rustfmt (push) Successful in 23s
CI / Clippy (push) Successful in 2m6s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Web Blocking Checks (push) Successful in 52s
CI / Security Blocking Checks (push) Successful in 6s
CI / Web Advisory Checks (push) Successful in 36s
CI / Security Advisory Checks (push) Successful in 38s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
Publish Images / Publish web (arm64) (push) Successful in 3m26s
CI / Tests (push) Successful in 8m52s
Publish Images / Publish web (amd64) (push) Successful in 1m8s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m29s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m46s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 40s
Publish Images / Publish executor (amd64) (push) Successful in 39s
Publish Images / Publish agent (arm64) (push) Successful in 57s
Publish Images / Publish notifier (amd64) (push) Successful in 41s
Publish Images / Publish api (arm64) (push) Successful in 2m3s
Publish Images / Publish executor (arm64) (push) Successful in 2m2s
Publish Images / Publish notifier (arm64) (push) Successful in 1m57s
Publish Images / Publish manifest attune/api (push) Failing after 10s
Publish Images / Publish manifest attune/agent (push) Successful in 12s
Publish Images / Publish manifest attune/executor (push) Successful in 11s
Publish Images / Publish manifest attune/notifier (push) Successful in 11s
Publish Images / Publish manifest attune/web (push) Failing after 8s
2026-03-25 17:10:36 -05:00
eb82755137 trying different urls? not sure why publishing is only working for the arm64 builds
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Security Blocking Checks (push) Has been cancelled
CI / Tests (push) Has been cancelled
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Web Blocking Checks (push) Has been cancelled
Publish Images / Resolve Publish Metadata (push) Successful in 1s
Publish Images / Publish web (amd64) (push) Successful in 45s
Publish Images / Publish web (arm64) (push) Successful in 3m19s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m24s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m43s
Publish Images / Publish agent (amd64) (push) Successful in 27s
Publish Images / Publish api (amd64) (push) Successful in 41s
Publish Images / Publish agent (arm64) (push) Successful in 1m0s
Publish Images / Publish notifier (amd64) (push) Successful in 40s
Publish Images / Publish executor (arm64) (push) Successful in 1m58s
Publish Images / Publish notifier (arm64) (push) Successful in 1m53s
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Successful in 45s
Publish Images / Publish api (arm64) (push) Successful in 2m2s
Publish Images / Publish manifest attune/agent (push) Failing after 1s
2026-03-25 14:29:15 -05:00
058f392616 updating the publisher, again
Some checks failed
CI / Cargo Audit & Deny (push) Successful in 1m11s
CI / Rustfmt (push) Successful in 1m20s
CI / Security Blocking Checks (push) Successful in 9s
CI / Clippy (push) Successful in 2m1s
CI / Web Advisory Checks (push) Successful in 1m9s
CI / Web Blocking Checks (push) Successful in 1m26s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 39s
Publish Images / Publish web (arm64) (push) Successful in 3m50s
CI / Tests (push) Successful in 9m4s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m17s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 12m21s
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
Publish Images / Publish web (amd64) (push) Failing after 47s
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/agent (push) Has been skipped
2026-03-25 13:10:44 -05:00
0264a66b5a renaming container artifacts and adding project linking stage
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Clippy (push) Successful in 2m3s
CI / Cargo Audit & Deny (push) Successful in 34s
CI / Web Blocking Checks (push) Successful in 1m27s
CI / Security Blocking Checks (push) Successful in 15s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 1m25s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Tests (push) Successful in 8m56s
Publish Images / Publish web (arm64) (push) Failing after 3m49s
Publish Images / Publish web (amd64) (push) Failing after 1m28s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 12m21s
Publish Images / Build Rust Bundles (arm64) (push) Failing after 12m28s
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/agent (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
2026-03-25 12:39:47 -05:00
542e72a454 fixing glibc version check
Some checks failed
CI / Clippy (push) Successful in 2m1s
CI / Rustfmt (push) Successful in 21s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 53s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Advisory Checks (push) Successful in 37s
CI / Security Advisory Checks (push) Successful in 36s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
Publish Images / Publish web (arm64) (push) Successful in 3m39s
CI / Tests (push) Successful in 8m37s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m21s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m15s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 39s
Publish Images / Publish executor (amd64) (push) Successful in 37s
Publish Images / Publish notifier (amd64) (push) Successful in 37s
Publish Images / Publish agent (arm64) (push) Successful in 1m34s
Publish Images / Publish executor (arm64) (push) Successful in 2m12s
Publish Images / Publish api (arm64) (push) Successful in 2m22s
Publish Images / Publish manifest attune-executor (push) Has been skipped
Publish Images / Publish manifest attune-notifier (push) Has been skipped
Publish Images / Publish manifest attune-web (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Successful in 2m10s
Publish Images / Publish web (amd64) (push) Successful in 47s
Publish Images / Publish manifest attune-agent (push) Failing after 2s
Publish Images / Publish manifest attune-api (push) Failing after 1s
2026-03-25 11:17:50 -05:00
a118563366 building? hopefully?
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Clippy (push) Successful in 2m3s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 52s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Advisory Checks (push) Successful in 36s
CI / Security Advisory Checks (push) Successful in 43s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
Publish Images / Publish web (arm64) (push) Failing after 3m53s
CI / Tests (push) Successful in 8m45s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 8m57s
Publish Images / Publish web (amd64) (push) Successful in 48s
Publish Images / Publish agent (amd64) (push) Has been cancelled
Publish Images / Publish api (amd64) (push) Has been cancelled
Publish Images / Publish executor (amd64) (push) Has been cancelled
Publish Images / Publish notifier (amd64) (push) Has been cancelled
Publish Images / Publish agent (arm64) (push) Has been cancelled
Publish Images / Publish api (arm64) (push) Has been cancelled
Publish Images / Publish executor (arm64) (push) Has been cancelled
Publish Images / Build Rust Bundles (arm64) (push) Has been cancelled
Publish Images / Publish notifier (arm64) (push) Has been cancelled
Publish Images / Publish manifest attune-agent (push) Has been cancelled
Publish Images / Publish manifest attune-api (push) Has been cancelled
Publish Images / Publish manifest attune-executor (push) Has been cancelled
Publish Images / Publish manifest attune-notifier (push) Has been cancelled
Publish Images / Publish manifest attune-web (push) Has been cancelled
2026-03-25 10:52:07 -05:00
a057ad5db5 adjusting publish pipeline to cross-compile because rpis are slow
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Clippy (push) Failing after 2m3s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Web Blocking Checks (push) Successful in 51s
CI / Security Blocking Checks (push) Successful in 5s
CI / Web Advisory Checks (push) Successful in 38s
CI / Security Advisory Checks (push) Successful in 36s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
Publish Images / Publish web (arm64) (push) Successful in 3m34s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 4m1s
CI / Tests (push) Successful in 8m47s
Publish Images / Publish web (amd64) (push) Failing after 46s
Publish Images / Build Rust Bundles (arm64) (push) Failing after 4m3s
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish manifest attune-agent (push) Has been skipped
Publish Images / Publish manifest attune-api (push) Has been skipped
Publish Images / Publish manifest attune-executor (push) Has been skipped
Publish Images / Publish manifest attune-notifier (push) Has been skipped
Publish Images / Publish manifest attune-web (push) Has been skipped
2026-03-25 10:07:48 -05:00
8e273ec683 more adjustments to publisher 2026-03-25 08:14:06 -05:00
16f1c2f079 matching runner tags after changing runner tags
Some checks failed
CI / Rustfmt (push) Successful in 1m4s
CI / Clippy (push) Failing after 1m46s
CI / Cargo Audit & Deny (push) Successful in 34s
CI / Web Blocking Checks (push) Successful in 1m24s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 1m26s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Tests (push) Successful in 8m51s
Publish Images / Publish web (amd64) (push) Successful in 1m4s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 10m59s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 1h19m31s
Publish Images / Publish agent (amd64) (push) Failing after 14s
Publish Images / Publish executor (amd64) (push) Failing after 12s
Publish Images / Publish api (amd64) (push) Failing after 32s
Publish Images / Publish notifier (amd64) (push) Failing after 14s
Publish Images / Publish api (arm64) (push) Failing after 1m58s
Publish Images / Publish executor (arm64) (push) Failing after 49s
Publish Images / Publish notifier (arm64) (push) Failing after 48s
Publish Images / Publish web (arm64) (push) Successful in 3m47s
Publish Images / Publish agent (arm64) (push) Failing after 4m13s
Publish Images / Publish manifest attune-agent (push) Has been skipped
Publish Images / Publish manifest attune-api (push) Has been skipped
Publish Images / Publish manifest attune-executor (push) Has been skipped
Publish Images / Publish manifest attune-notifier (push) Has been skipped
Publish Images / Publish manifest attune-web (push) Has been skipped
2026-03-25 01:22:50 -05:00
163 changed files with 24227 additions and 2104 deletions

.codex (new empty file)

View File

@@ -19,7 +19,7 @@ env:
 jobs:
   rust-fmt:
     name: Rustfmt
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -45,7 +45,7 @@ jobs:
   rust-clippy:
     name: Clippy
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -91,7 +91,7 @@ jobs:
   rust-test:
     name: Tests
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -135,7 +135,7 @@ jobs:
   rust-audit:
     name: Cargo Audit & Deny
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -188,7 +188,7 @@ jobs:
   web-blocking:
     name: Web Blocking Checks
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     defaults:
       run:
         working-directory: web
@@ -217,7 +217,7 @@ jobs:
   security-blocking:
     name: Security Blocking Checks
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -250,7 +250,7 @@ jobs:
   web-advisory:
     name: Web Advisory Checks
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     continue-on-error: true
     defaults:
       run:
@@ -279,7 +279,7 @@ jobs:
   security-advisory:
     name: Security Advisory Checks
-    runs-on: ubuntu-latest
+    runs-on: build-amd64
     continue-on-error: true
     steps:
       - name: Checkout

View File

@@ -20,6 +20,7 @@ on:
           - executor
           - notifier
           - agent
+          - docker-dist
           - web
         default: all
   push:
@@ -33,7 +34,9 @@ env:
   REGISTRY_HOST: ${{ vars.CLUSTER_GITEA_HOST }}
   REGISTRY_NAMESPACE: ${{ vars.CONTAINER_REGISTRY_NAMESPACE }}
   REGISTRY_PLAIN_HTTP: ${{ vars.CONTAINER_REGISTRY_INSECURE }}
-  ARTIFACT_REPOSITORY: attune-build-artifacts
+  REPOSITORY_NAME: attune
+  ARTIFACT_REPOSITORY: attune/build-artifacts
+  GNU_GLIBC_VERSION: "2.28"
   CARGO_TERM_COLOR: always
   CARGO_INCREMENTAL: 0
   CARGO_NET_RETRY: 10
@@ -50,6 +53,7 @@ jobs:
       registry: ${{ steps.meta.outputs.registry }}
      namespace: ${{ steps.meta.outputs.namespace }}
       registry_plain_http: ${{ steps.meta.outputs.registry_plain_http }}
+      gitea_base_url: ${{ steps.meta.outputs.gitea_base_url }}
       image_tag: ${{ steps.meta.outputs.image_tag }}
       image_tags: ${{ steps.meta.outputs.image_tags }}
       artifact_ref_base: ${{ steps.meta.outputs.artifact_ref_base }}
@@ -96,6 +100,12 @@ jobs:
             registry_plain_http="$registry_plain_http_default"
           fi
+          if [ "$registry_plain_http" = "true" ]; then
+            gitea_base_url="http://${registry}"
+          else
+            gitea_base_url="https://${registry}"
+          fi
           short_sha="$(printf '%s' "${{ github.sha }}" | cut -c1-12)"
           ref_type="${{ github.ref_type }}"
           ref_name="${{ github.ref_name }}"
@@ -114,6 +124,7 @@ jobs:
           echo "registry=$registry"
           echo "namespace=$namespace"
           echo "registry_plain_http=$registry_plain_http"
+          echo "gitea_base_url=$gitea_base_url"
           echo "image_tag=$version"
           echo "image_tags=$image_tags"
           echo "artifact_ref_base=$artifact_ref_base"
@@ -133,9 +144,13 @@ jobs:
         include:
           - arch: amd64
             runner_label: build-amd64
+            service_rust_target: x86_64-unknown-linux-gnu
+            service_target: x86_64-unknown-linux-gnu.2.28
             musl_target: x86_64-unknown-linux-musl
           - arch: arm64
-            runner_label: build-arm64
+            runner_label: build-amd64
+            service_rust_target: aarch64-unknown-linux-gnu
+            service_target: aarch64-unknown-linux-gnu.2.28
             musl_target: aarch64-unknown-linux-musl
     steps:
       - name: Checkout
@@ -156,7 +171,9 @@ jobs:
       - name: Setup Rust
         uses: dtolnay/rust-toolchain@stable
         with:
-          targets: ${{ matrix.musl_target }}
+          targets: |
+            ${{ matrix.service_rust_target }}
+            ${{ matrix.musl_target }}
       - name: Cache Cargo registry + index
         uses: actions/cache@v4
@@ -184,22 +201,69 @@ jobs:
         run: |
           set -euo pipefail
           apt-get update
-          apt-get install -y pkg-config libssl-dev musl-tools file
+          apt-get install -y pkg-config libssl-dev file binutils python3 python3-pip
+      - name: Install Zig
+        shell: bash
+        run: |
+          set -euo pipefail
+          pip3 install --break-system-packages --no-cache-dir ziglang
+      - name: Install cargo-zigbuild
+        shell: bash
+        run: |
+          set -euo pipefail
+          if ! command -v cargo-zigbuild >/dev/null 2>&1; then
+            cargo install --locked cargo-zigbuild
+          fi
       - name: Build release binaries
         shell: bash
         run: |
           set -euo pipefail
-          cargo build --release \
+          cargo zigbuild --release \
+            --target "${{ matrix.service_target }}" \
            --bin attune-api \
             --bin attune-executor \
             --bin attune-notifier
+      - name: Verify minimum glibc requirement
+        shell: bash
+        run: |
+          set -euo pipefail
+          output_dir="target/${{ matrix.service_rust_target }}/release"
+          get_min_glibc() {
+            local file_path="$1"
+            readelf -W --version-info --dyn-syms "$file_path" \
+              | grep 'Name: GLIBC_' \
+              | sed -E 's/.*GLIBC_([0-9.]+).*/\1/' \
+              | sort -t . -k1,1n -k2,2n \
+              | tail -n 1
+          }
+          version_gt() {
+            [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ] && [ "$1" != "$2" ]
+          }
+          for binary in attune-api attune-executor attune-notifier; do
+            min_glibc="$(get_min_glibc "${output_dir}/${binary}")"
+            if [ -z "${min_glibc}" ]; then
+              echo "Failed to determine glibc requirement for ${binary}"
+              exit 1
+            fi
+            echo "${binary} requires glibc ${min_glibc}"
+            if version_gt "${min_glibc}" "${GNU_GLIBC_VERSION}"; then
+              echo "Expected ${binary} to require glibc <= ${GNU_GLIBC_VERSION}, got ${min_glibc}"
+              exit 1
+            fi
+          done
       - name: Build static agent binaries
         shell: bash
         run: |
           set -euo pipefail
-          cargo build --release \
+          cargo zigbuild --release \
             --target "${{ matrix.musl_target }}" \
             --bin attune-agent \
             --bin attune-sensor-agent
@@ -210,11 +274,12 @@ jobs:
           set -euo pipefail
           bundle_root="dist/bundle/${{ matrix.arch }}"
+          service_output_dir="target/${{ matrix.service_rust_target }}/release"
           mkdir -p "$bundle_root/bin" "$bundle_root/agent"
-          cp target/release/attune-api "$bundle_root/bin/"
-          cp target/release/attune-executor "$bundle_root/bin/"
-          cp target/release/attune-notifier "$bundle_root/bin/"
+          cp "${service_output_dir}/attune-api" "$bundle_root/bin/"
+          cp "${service_output_dir}/attune-executor" "$bundle_root/bin/"
+          cp "${service_output_dir}/attune-notifier" "$bundle_root/bin/"
           cp target/${{ matrix.musl_target }}/release/attune-agent "$bundle_root/agent/"
           cp target/${{ matrix.musl_target }}/release/attune-sensor-agent "$bundle_root/agent/"
@@ -263,16 +328,245 @@ jobs:
         run: |
           set -euo pipefail
           push_args=()
+          artifact_file="attune-binaries-${{ matrix.arch }}.tar.gz"
+          artifact_ref="${{ needs.metadata.outputs.registry }}/${{ needs.metadata.outputs.namespace }}/${ARTIFACT_REPOSITORY}-${{ matrix.arch }}:rust-binaries-${{ needs.metadata.outputs.image_tag }}"
           if [ "${{ needs.metadata.outputs.registry_plain_http }}" = "true" ]; then
             push_args+=(--plain-http)
           fi
+          cp "dist/${artifact_file}" "${artifact_file}"
+          echo "Pushing binary bundle artifact"
+          echo " artifact_ref: ${artifact_ref}"
+          echo " registry_url: ${{ needs.metadata.outputs.gitea_base_url }}/v2/"
+          echo " manifest_url: ${{ needs.metadata.outputs.gitea_base_url }}/v2/${{ needs.metadata.outputs.namespace }}/${ARTIFACT_REPOSITORY}-${{ matrix.arch }}/manifests/rust-binaries-${{ needs.metadata.outputs.image_tag }}"
+          echo " artifact_file: ${artifact_file}"
           oras push \
             "${push_args[@]}" \
-            "${{ needs.metadata.outputs.artifact_ref_base }}:rust-binaries-${{ needs.metadata.outputs.image_tag }}-${{ matrix.arch }}" \
+            "${artifact_ref}" \
             --artifact-type application/vnd.attune.rust-binaries.v1 \
-            "dist/attune-binaries-${{ matrix.arch }}.tar.gz:application/vnd.attune.rust-binaries.layer.v1.tar+gzip"
+            "${artifact_file}:application/vnd.attune.rust-binaries.layer.v1.tar+gzip"
+      - name: Link binary bundle package to repository
+        shell: bash
+        env:
+          REGISTRY_USERNAME: ${{ secrets.CONTAINER_REGISTRY_USERNAME }}
+          REGISTRY_PASSWORD: ${{ secrets.CONTAINER_REGISTRY_PASSWORD }}
+        run: |
+          set -euo pipefail
+          api_base="${{ needs.metadata.outputs.gitea_base_url }}/api/v1"
+          package_name="${ARTIFACT_REPOSITORY}-${{ matrix.arch }}"
+          encoded_package_name="$(PACKAGE_NAME="${package_name}" python3 -c 'import os, urllib.parse; print(urllib.parse.quote(os.environ["PACKAGE_NAME"], safe=""))')"
+          link_url="${api_base}/packages/${{ needs.metadata.outputs.namespace }}/container/${encoded_package_name}/-/link/${REPOSITORY_NAME}"
+          echo "Linking binary bundle package"
+          echo " api_base: ${api_base}"
+          echo " package_name: ${package_name}"
+          echo " link_url: ${link_url}"
+          status_code="$(curl -sS -o /tmp/package-link-response.txt -w '%{http_code}' -X POST \
+            -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+            "${link_url}")"
+          case "${status_code}" in
+            200|201|204|409)
+              ;;
+            400|404)
+              echo "Package link unsupported for package '${package_name}' on this Gitea endpoint; continuing"
+              cat /tmp/package-link-response.txt
+              ;;
+            *)
+              cat /tmp/package-link-response.txt
+              exit 1
+              ;;
+          esac
+  publish-docker-dist:
+    name: Publish Docker Dist Bundle
+    runs-on: build-amd64
+    needs: metadata
+    if: |
+      github.event_name != 'workflow_dispatch' ||
+      inputs.target_image == 'all' ||
+      inputs.target_image == 'docker-dist'
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Build docker dist bundle
+        shell: bash
+        run: |
+          set -euo pipefail
+          bash scripts/package-docker-dist.sh docker/distributable artifacts/attune-docker-dist.tar.gz
+      - name: Publish docker dist generic package
+        shell: bash
+        env:
+          REGISTRY_USERNAME: ${{ secrets.CONTAINER_REGISTRY_USERNAME }}
+          REGISTRY_PASSWORD: ${{ secrets.CONTAINER_REGISTRY_PASSWORD }}
+        run: |
+          set -euo pipefail
+          if [ -z "${REGISTRY_USERNAME:-}" ] || [ -z "${REGISTRY_PASSWORD:-}" ]; then
+            echo "CONTAINER_REGISTRY_USERNAME and CONTAINER_REGISTRY_PASSWORD are required to publish the docker dist package"
+            exit 1
+          fi
+          owner="${{ needs.metadata.outputs.namespace }}"
+          package_name="attune-docker-dist"
+          package_version="${{ needs.metadata.outputs.image_tag }}"
+          file_name="attune-docker-dist.tar.gz"
+          api_base="${{ needs.metadata.outputs.gitea_base_url }}/api/packages"
+          package_url="${api_base}/${owner}/generic/${package_name}/${package_version}/${file_name}"
+          # Generic packages reject overwriting the same file name. Delete it first on reruns.
+          delete_status="$(curl -sS -o /tmp/docker-dist-delete-response.txt -w '%{http_code}' \
+            -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+            -X DELETE \
+            "${package_url}")"
+          case "${delete_status}" in
+            204|404)
+              ;;
+            *)
+              echo "Failed to prepare generic package upload target"
+              cat /tmp/docker-dist-delete-response.txt
+              exit 1
+              ;;
+          esac
+          upload_status="$(curl -sS -o /tmp/docker-dist-upload-response.txt -w '%{http_code}' \
+            -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+            --upload-file artifacts/attune-docker-dist.tar.gz \
+            -X PUT \
+            "${package_url}")"
+          case "${upload_status}" in
+            201)
+              ;;
+            *)
+              echo "Failed to publish docker dist generic package"
+              cat /tmp/docker-dist-upload-response.txt
+              exit 1
+              ;;
+          esac
+      - name: Attach docker dist archive to release
+        if: github.ref_type == 'tag'
+        shell: bash
+        env:
+          REGISTRY_USERNAME: ${{ secrets.CONTAINER_REGISTRY_USERNAME }}
+          REGISTRY_PASSWORD: ${{ secrets.CONTAINER_REGISTRY_PASSWORD }}
+        run: |
+          set -euo pipefail
+          if [ -z "${REGISTRY_USERNAME:-}" ] || [ -z "${REGISTRY_PASSWORD:-}" ]; then
+            echo "CONTAINER_REGISTRY_USERNAME and CONTAINER_REGISTRY_PASSWORD are required to attach the docker dist archive to a release"
+            exit 1
+          fi
+          api_base="${{ needs.metadata.outputs.gitea_base_url }}/api/v1"
+          owner_repo="${{ github.repository }}"
+          tag_name="${{ github.ref_name }}"
+          archive_path="artifacts/attune-docker-dist.tar.gz"
+          asset_name="attune-docker-dist-${tag_name}.tar.gz"
+          release_response_file="$(mktemp)"
+          status_code="$(curl -sS -o "${release_response_file}" -w '%{http_code}' \
+            -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+            "${api_base}/repos/${owner_repo}/releases/tags/${tag_name}")"
+          if [ "${status_code}" = "404" ]; then
+            create_payload="$(TAG_NAME="${tag_name}" python3 - <<'PY'
+          import json
+          import os
+          tag = os.environ["TAG_NAME"]
+          print(json.dumps({
+              "tag_name": tag,
+              "name": tag,
+              "draft": False,
+              "prerelease": "-" in tag,
+          }))
+          PY
+            )"
+            status_code="$(curl -sS -o "${release_response_file}" -w '%{http_code}' \
+              -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+              -H "Content-Type: application/json" \
+              -X POST \
+              -d "${create_payload}" \
+              "${api_base}/repos/${owner_repo}/releases")"
+          fi
+          case "${status_code}" in
+            200|201)
+              ;;
+            *)
+              echo "Failed to fetch or create release for tag ${tag_name}"
+              cat "${release_response_file}"
+              exit 1
+              ;;
+          esac
+          release_id="$(python3 - "${release_response_file}" <<'PY'
+          import json
+          import sys
+          with open(sys.argv[1], "r", encoding="utf-8") as fh:
+              data = json.load(fh)
+          print(data["id"])
+          PY
+          )"
+          existing_asset_id="$(python3 - "${release_response_file}" "${asset_name}" <<'PY'
+          import json
+          import sys
+          with open(sys.argv[1], "r", encoding="utf-8") as fh:
+              data = json.load(fh)
+          name = sys.argv[2]
+          for asset in data.get("assets", []):
+              if asset.get("name") == name:
+                  print(asset["id"])
+                  break
+          PY
+          )"
+          if [ -n "${existing_asset_id}" ]; then
+            curl -sS \
+              -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+              -X DELETE \
+              "${api_base}/repos/${owner_repo}/releases/${release_id}/assets/${existing_asset_id}" \
+              >/dev/null
+          fi
+          encoded_asset_name="$(ASSET_NAME="${asset_name}" python3 - <<'PY'
+          import os
+          import urllib.parse
+          print(urllib.parse.quote(os.environ["ASSET_NAME"], safe=""))
+          PY
+          )"
+          upload_response_file="$(mktemp)"
+          status_code="$(curl -sS -o "${upload_response_file}" -w '%{http_code}' \
+            -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+            -H "Content-Type: application/gzip" \
+            --data-binary "@${archive_path}" \
+            "${api_base}/repos/${owner_repo}/releases/${release_id}/assets?name=${encoded_asset_name}")"
+          case "${status_code}" in
+            201)
+              ;;
+            *)
+              echo "Failed to upload release asset ${asset_name}"
+              cat "${upload_response_file}"
+              exit 1
+              ;;
+          esac
   publish-rust-images:
     name: Publish ${{ matrix.image.name }} (${{ matrix.arch }})
@@ -296,7 +590,7 @@ jobs:
             platform: linux/amd64
             image:
               name: api
-              repository: attune-api
+              repository: attune/api
               source_path: bin/attune-api
               dockerfile: docker/Dockerfile.runtime
           - arch: amd64
@@ -304,7 +598,7 @@ jobs:
             platform: linux/amd64
             image:
               name: executor
-              repository: attune-executor
+              repository: attune/executor
               source_path: bin/attune-executor
               dockerfile: docker/Dockerfile.runtime
           - arch: amd64
@@ -312,7 +606,7 @@ jobs:
             platform: linux/amd64
             image:
               name: notifier
-              repository: attune-notifier
+              repository: attune/notifier
               source_path: bin/attune-notifier
               dockerfile: docker/Dockerfile.runtime
           - arch: amd64
@@ -320,7 +614,7 @@ jobs:
             platform: linux/amd64
             image:
               name: agent
-              repository: attune-agent
+              repository: attune/agent
               source_path: agent/attune-agent
               dockerfile: docker/Dockerfile.agent-package
           - arch: arm64
@@ -328,7 +622,7 @@ jobs:
             platform: linux/arm64
             image:
               name: api
-              repository: attune-api
+              repository: attune/api
               source_path: bin/attune-api
               dockerfile: docker/Dockerfile.runtime
           - arch: arm64
@@ -336,7 +630,7 @@ jobs:
             platform: linux/arm64
             image:
               name: executor
-              repository: attune-executor
+              repository: attune/executor
               source_path: bin/attune-executor
               dockerfile: docker/Dockerfile.runtime
           - arch: arm64
@@ -344,7 +638,7 @@ jobs:
             platform: linux/arm64
             image:
               name: notifier
-              repository: attune-notifier
+              repository: attune/notifier
               source_path: bin/attune-notifier
               dockerfile: docker/Dockerfile.runtime
           - arch: arm64
@@ -352,7 +646,7 @@ jobs:
             platform: linux/arm64
             image:
               name: agent
-              repository: attune-agent
+              repository: attune/agent
               source_path: agent/attune-agent
               dockerfile: docker/Dockerfile.agent-package
     steps:
@@ -419,17 +713,25 @@ jobs:
         run: |
           set -euo pipefail
           pull_args=()
+          artifact_ref="${{ needs.metadata.outputs.registry }}/${{ needs.metadata.outputs.namespace }}/${ARTIFACT_REPOSITORY}-${{ matrix.arch }}:rust-binaries-${{ needs.metadata.outputs.image_tag }}"
           if [ "${{ needs.metadata.outputs.registry_plain_http }}" = "true" ]; then
             pull_args+=(--plain-http)
           fi
+          echo "Pulling binary bundle artifact"
+          echo " ref: ${artifact_ref}"
+          echo " registry_url: ${{ needs.metadata.outputs.gitea_base_url }}/v2/"
+          echo " manifest_url: ${{ needs.metadata.outputs.gitea_base_url }}/v2/${{ needs.metadata.outputs.namespace }}/${ARTIFACT_REPOSITORY}-${{ matrix.arch }}/manifests/rust-binaries-${{ needs.metadata.outputs.image_tag }}"
+          echo " arch: ${{ matrix.arch }}"
+          echo " plain_http: ${{ needs.metadata.outputs.registry_plain_http }}"
           mkdir -p dist/artifact
           cd dist/artifact
           oras pull \
             "${pull_args[@]}" \
-            "${{ needs.metadata.outputs.artifact_ref_base }}:rust-binaries-${{ needs.metadata.outputs.image_tag }}-${{ matrix.arch }}"
+            "${artifact_ref}"
           tar -xzf "attune-binaries-${{ matrix.arch }}.tar.gz"
@@ -440,6 +742,12 @@ jobs:
           rm -rf dist/image
           mkdir -p dist/image
+          echo "Preparing packaging context"
+          echo " image: ${{ matrix.image.name }}"
+          echo " repository: ${{ matrix.image.repository }}"
+          echo " source_path: ${{ matrix.image.source_path }}"
+          echo " dockerfile: ${{ matrix.image.dockerfile }}"
           case "${{ matrix.image.name }}" in
             api|executor|notifier)
               cp "dist/artifact/${{ matrix.image.source_path }}" dist/attune-service-binary
@@ -459,6 +767,29 @@ jobs:
         run: |
           set -euo pipefail
+          run_with_retries() {
+            local max_attempts="$1"
+            local delay_seconds="$2"
+            shift 2
+            local attempt=1
+            while true; do
+              if "$@"; then
+                return 0
+              fi
+              if [ "$attempt" -ge "$max_attempts" ]; then
+                echo "Command failed after ${attempt} attempts: $*"
+                return 1
+              fi
+              echo "Command failed on attempt ${attempt}/${max_attempts}: $*"
+              echo "Retrying in ${delay_seconds}s..."
+              sleep "$delay_seconds"
+              attempt=$((attempt + 1))
+            done
+          }
           image_ref="${{ needs.metadata.outputs.registry }}/${{ needs.metadata.outputs.namespace }}/${{ matrix.image.repository }}:${{ needs.metadata.outputs.image_tag }}-${{ matrix.arch }}"
           build_cmd=(
@@ -474,7 +805,43 @@ jobs:
             build_cmd+=(--tag "$image_ref" --push)
           fi
-          "${build_cmd[@]}"
+          echo "Publishing architecture image"
+          echo " image: ${{ matrix.image.name }}"
+          echo " repository: ${{ matrix.image.repository }}"
+          echo " platform: ${{ matrix.platform }}"
+          echo " dockerfile: ${{ matrix.image.dockerfile }}"
+          echo " destination: ${image_ref}"
+          echo " plain_http: ${{ needs.metadata.outputs.registry_plain_http }}"
+          run_with_retries 3 5 "${build_cmd[@]}"
+      - name: Link container package to repository
+        shell: bash
+        env:
+          REGISTRY_USERNAME: ${{ secrets.CONTAINER_REGISTRY_USERNAME }}
+          REGISTRY_PASSWORD: ${{ secrets.CONTAINER_REGISTRY_PASSWORD }}
+        run: |
+          set -euo pipefail
+          api_base="${{ needs.metadata.outputs.gitea_base_url }}/api/v1"
+          package_name="${{ matrix.image.repository }}"
+          encoded_package_name="$(PACKAGE_NAME="${package_name}" python3 -c 'import os, urllib.parse; print(urllib.parse.quote(os.environ["PACKAGE_NAME"], safe=""))')"
+          status_code="$(curl -sS -o /tmp/package-link-response.txt -w '%{http_code}' -X POST \
+            -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
+            "${api_base}/packages/${{ needs.metadata.outputs.namespace }}/container/${encoded_package_name}/-/link/${REPOSITORY_NAME}")"
+          case "${status_code}" in
+            200|201|204|409)
+              ;;
+            400|404)
+              echo "Package link unsupported for package '${package_name}' on this Gitea endpoint; continuing"
+              cat /tmp/package-link-response.txt
+              ;;
+            *)
+              cat /tmp/package-link-response.txt
+              exit 1
+              ;;
+          esac
   publish-web-images:
     name: Publish web (${{ matrix.arch }})
@@ -548,13 +915,38 @@ jobs:
run: |
set -euo pipefail
run_with_retries() {
local max_attempts="$1"
local delay_seconds="$2"
shift 2
local attempt=1
while true; do
if "$@"; then
return 0
fi
if [ "$attempt" -ge "$max_attempts" ]; then
echo "Command failed after ${attempt} attempts: $*"
return 1
fi
echo "Command failed on attempt ${attempt}/${max_attempts}: $*"
echo "Retrying in ${delay_seconds}s..."
sleep "$delay_seconds"
attempt=$((attempt + 1))
done
}
image_ref="${{ needs.metadata.outputs.registry }}/${{ needs.metadata.outputs.namespace }}/attune/web:${{ needs.metadata.outputs.image_tag }}-${{ matrix.arch }}"
build_cmd=(
docker buildx build
.
--platform "${{ matrix.platform }}"
--file docker/Dockerfile.web
--provenance=false
--sbom=false
)
if [ "${{ needs.metadata.outputs.registry_plain_http }}" = "true" ]; then
@@ -563,7 +955,43 @@ jobs:
build_cmd+=(--tag "$image_ref" --push)
fi
echo "Publishing architecture image"
echo " image: web"
echo " repository: attune/web"
echo " platform: ${{ matrix.platform }}"
echo " dockerfile: docker/Dockerfile.web"
echo " destination: ${image_ref}"
echo " plain_http: ${{ needs.metadata.outputs.registry_plain_http }}"
run_with_retries 3 5 "${build_cmd[@]}"
- name: Link web container package to repository
shell: bash
env:
REGISTRY_USERNAME: ${{ secrets.CONTAINER_REGISTRY_USERNAME }}
REGISTRY_PASSWORD: ${{ secrets.CONTAINER_REGISTRY_PASSWORD }}
run: |
set -euo pipefail
api_base="${{ needs.metadata.outputs.gitea_base_url }}/api/v1"
package_name="attune/web"
encoded_package_name="$(PACKAGE_NAME="${package_name}" python3 -c 'import os, urllib.parse; print(urllib.parse.quote(os.environ["PACKAGE_NAME"], safe=""))')"
status_code="$(curl -sS -o /tmp/package-link-response.txt -w '%{http_code}' -X POST \
-u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" \
"${api_base}/packages/${{ needs.metadata.outputs.namespace }}/container/${encoded_package_name}/-/link/${REPOSITORY_NAME}")"
case "${status_code}" in
200|201|204|409)
;;
400|404)
echo "Package link unsupported for package '${package_name}' on this Gitea endpoint; continuing"
cat /tmp/package-link-response.txt
;;
*)
cat /tmp/package-link-response.txt
exit 1
;;
esac
publish-manifests:
name: Publish manifest ${{ matrix.repository }}
@@ -579,12 +1007,25 @@ jobs:
fail-fast: false
matrix:
repository:
- attune/api
- attune/executor
- attune/notifier
- attune/agent
- attune/web
steps:
- name: Setup Docker Buildx
if: needs.metadata.outputs.registry_plain_http != 'true'
uses: docker/setup-buildx-action@v3
- name: Setup Docker Buildx For Plain HTTP Registry
if: needs.metadata.outputs.registry_plain_http == 'true'
uses: docker/setup-buildx-action@v3
with:
buildkitd-config-inline: |
[registry."${{ needs.metadata.outputs.registry }}"]
http = true
insecure = true
- name: Configure OCI registry auth
shell: bash
env:
@@ -619,10 +1060,35 @@ jobs:
run: |
set -euo pipefail
run_with_retries() {
local max_attempts="$1"
local delay_seconds="$2"
shift 2
local attempt=1
while true; do
if "$@"; then
return 0
fi
if [ "$attempt" -ge "$max_attempts" ]; then
echo "Command failed after ${attempt} attempts: $*"
return 1
fi
echo "Command failed on attempt ${attempt}/${max_attempts}: $*"
echo "Retrying in ${delay_seconds}s..."
sleep "$delay_seconds"
attempt=$((attempt + 1))
done
}
image_base="${{ needs.metadata.outputs.registry }}/${{ needs.metadata.outputs.namespace }}/${{ matrix.repository }}" image_base="${{ needs.metadata.outputs.registry }}/${{ needs.metadata.outputs.namespace }}/${{ matrix.repository }}"
create_args=()
push_args=() push_args=()
if [ "${{ needs.metadata.outputs.registry_plain_http }}" = "true" ]; then if [ "${{ needs.metadata.outputs.registry_plain_http }}" = "true" ]; then
create_args+=(--insecure)
push_args+=(--insecure) push_args+=(--insecure)
fi fi
@@ -632,9 +1098,33 @@ jobs:
amd64_ref="${image_base}:${{ needs.metadata.outputs.image_tag }}-amd64" amd64_ref="${image_base}:${{ needs.metadata.outputs.image_tag }}-amd64"
arm64_ref="${image_base}:${{ needs.metadata.outputs.image_tag }}-arm64" arm64_ref="${image_base}:${{ needs.metadata.outputs.image_tag }}-arm64"
if [ "${{ matrix.repository }}" = "attune/web" ]; then
echo "Publishing multi-arch manifest with docker manifest"
echo " repository: ${{ matrix.repository }}"
echo " manifest_tag: ${tag}"
echo " manifest_ref: ${manifest_ref}"
echo " source_amd64: ${amd64_ref}"
echo " source_arm64: ${arm64_ref}"
echo " plain_http: ${{ needs.metadata.outputs.registry_plain_http }}"
docker manifest rm "$manifest_ref" >/dev/null 2>&1 || true
run_with_retries 3 5 \
docker manifest create "${create_args[@]}" "$manifest_ref" "$amd64_ref" "$arm64_ref"
docker manifest annotate "$manifest_ref" "$amd64_ref" --os linux --arch amd64
docker manifest annotate "$manifest_ref" "$arm64_ref" --os linux --arch arm64
run_with_retries 3 5 \
docker manifest push "${push_args[@]}" "$manifest_ref"
else
echo "Publishing multi-arch manifest with buildx imagetools"
echo " repository: ${{ matrix.repository }}"
echo " manifest_tag: ${tag}"
echo " manifest_ref: ${manifest_ref}"
echo " source_amd64: ${amd64_ref}"
echo " source_arm64: ${arm64_ref}"
echo " plain_http: ${{ needs.metadata.outputs.registry_plain_http }}"
run_with_retries 3 5 \
docker buildx imagetools create \
--tag "$manifest_ref" \
"$amd64_ref" \
"$arm64_ref"
fi
done

.gitignore

@@ -11,6 +11,7 @@ target/
# Configuration files (keep *.example.yaml)
config.yaml
config.*.yaml
!docker/distributable/config.docker.yaml
!config.example.yaml
!config.development.yaml
!config.test.yaml
@@ -35,6 +36,7 @@ logs/
# Build artifacts
dist/
build/
artifacts/
# Testing
coverage/
@@ -80,3 +82,6 @@ docker-compose.override.yml
packs.examples/
packs.external/
codex/
# Compiled pack binaries (built via Docker or build-pack-binaries.sh)
packs/core/sensors/attune-core-timer-sensor


@@ -4,3 +4,6 @@ web/node_modules/
web/src/api/
packs.dev/
packs.external/
tests/
docs/
*.md


@@ -77,7 +77,7 @@ attune/
**Services**:
- **Infrastructure**: postgres (TimescaleDB), rabbitmq, redis
- **Init** (run-once): migrations, init-user, init-pack-binaries, init-packs, init-agent
- **Application**: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000)
**Volumes** (named):
@@ -100,7 +100,8 @@ docker compose -f docker-compose.yaml -f docker-compose.agent.yaml up -d # Star
### Docker Build Optimization
- **Active Dockerfiles**: `docker/Dockerfile.optimized`, `docker/Dockerfile.agent`, `docker/Dockerfile.web`, and `docker/Dockerfile.pack-binaries`
- **Agent Dockerfile** (`docker/Dockerfile.agent`): Builds statically-linked `attune-agent` and `attune-sensor-agent` binaries using musl. Uses `cargo-zigbuild` (zig as the cross-compilation backend) so that any target architecture can be built from any host — e.g., building `aarch64-unknown-linux-musl` on an x86_64 host or vice versa. The `RUST_TARGET` build arg controls the output architecture (`x86_64-unknown-linux-musl` default, or `aarch64-unknown-linux-musl` for arm64). Three stages: `builder` (cross-compile with cargo-zigbuild), `agent-binary` (scratch — just the binaries), `agent-init` (busybox — for volume population via `cp`). The binaries have zero runtime dependencies (no glibc, no libssl). Build with `make docker-build-agent` (amd64), `make docker-build-agent-arm64` (arm64), or `make docker-build-agent-all` (both). In `docker-compose.yaml`, set `AGENT_RUST_TARGET=aarch64-unknown-linux-musl` env var to build arm64 agent binaries (defaults to x86_64). See the buildx sketch after this list.
- **Pack Binaries Dockerfile** (`docker/Dockerfile.pack-binaries`): Builds statically-linked pack binaries (sensors, etc.) using musl + cargo-zigbuild for cross-compilation. The `RUST_TARGET` build arg controls the output architecture (`x86_64-unknown-linux-musl` default, or `aarch64-unknown-linux-musl` for arm64). Three stages: `builder` (cross-compile with cargo-zigbuild), `output` (scratch — just the binaries for `docker cp` extraction), `pack-binaries-init` (busybox — for Docker Compose volume population via `cp`). Build with `make docker-build-pack-binaries` (amd64), `make docker-build-pack-binaries-arm64` (arm64), or `make docker-build-pack-binaries-all` (both). In `docker-compose.yaml`, set `PACK_BINARIES_RUST_TARGET=aarch64-unknown-linux-musl` env var to build arm64 pack binaries (defaults to x86_64). The `init-pack-binaries` Docker Compose service automatically builds and copies pack binaries into the `packs_data` volume before `init-packs` runs.
- **Strategy**: Selective crate copying - only copy crates needed for each service (not entire workspace)
- **Performance**: 90% faster incremental builds (~30 sec vs ~5 min for code changes)
- **BuildKit cache mounts**: Persist cargo registry and compilation artifacts between builds
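As a concrete illustration of the `RUST_TARGET` build arg described above, this is the direct buildx form of the `make docker-build-agent-arm64` target added in this change (a sketch, not a new command):

```bash
# Build the arm64 agent image directly with buildx; RUST_TARGET selects the
# musl target that cargo-zigbuild cross-compiles for.
DOCKER_BUILDKIT=1 docker buildx build \
  --build-arg RUST_TARGET=aarch64-unknown-linux-musl \
  --target agent-init \
  -f docker/Dockerfile.agent \
  -t attune-agent:arm64 .
```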
@@ -123,7 +124,7 @@ docker compose -f docker-compose.yaml -f docker-compose.agent.yaml up -d # Star
- **Key Principle**: Packs are NOT copied into Docker images - they are mounted as volumes
- **Volume Flow**: Host `./packs/` → `init-packs` service → `packs_data` volume → mounted in all services
- **Benefits**: Update packs with restart (~5 sec) instead of rebuild (~5 min)
- **Pack Binaries**: Automatically built and deployed via the `init-pack-binaries` Docker Compose service (statically-linked musl binaries via cargo-zigbuild, supports cross-compilation via `PACK_BINARIES_RUST_TARGET` env var). Can also be built manually with `./scripts/build-pack-binaries.sh` or `make docker-build-pack-binaries`. The `init-packs` service depends on `init-pack-binaries` and preserves any ELF binaries already present in the target `sensors/` directory (detected via ELF magic bytes with `od`) — it backs them up before copying host pack files and restores them afterward, preventing the host's stale dynamically-linked binary from overwriting the freshly-built static one. A sketch of the ELF check follows this list.
- **Development**: Use `./packs.dev/` for instant testing (direct bind mount, no restart needed)
- **Documentation**: See `docs/QUICKREF-packs-volumes.md`
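The `od`-based ELF detection mentioned above can be sketched roughly as follows; `is_elf_binary` is a hypothetical helper for illustration, and the actual `init-packs` script may be structured differently:

```bash
# A file is treated as a compiled binary when its first four bytes are the
# ELF magic 0x7f 'E' 'L' 'F' (hex 7f 45 4c 46).
is_elf_binary() {
  [ "$(od -An -tx1 -N4 -- "$1" 2>/dev/null | tr -d ' \n')" = "7f454c46" ]
}

# Example: keep an already-deployed static sensor instead of overwriting it.
if is_elf_binary "sensors/attune-core-timer-sensor"; then
  echo "preserving existing compiled sensor binary"
fi
```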
@@ -273,7 +274,7 @@ Completion listener advances workflow → Schedules successor tasks → Complete
- **Pack Volume Strategy**: Packs are mounted as volumes (NOT copied into Docker images)
- Host `./packs/` → `packs_data` volume via `init-packs` service → mounted at `/opt/attune/packs` in all services
- Development packs in `./packs.dev/` are bind-mounted directly for instant updates
- **Pack Binaries**: Native binaries (sensors) automatically built by the `init-pack-binaries` Docker Compose service (statically-linked musl, cross-arch via `PACK_BINARIES_RUST_TARGET`). Can also be built manually with `./scripts/build-pack-binaries.sh` or `make docker-build-pack-binaries` (see the example after this list).
- **Action Script Resolution**: Worker constructs file paths as `{packs_base_dir}/{pack_ref}/actions/{entrypoint}`
- **Workflow Action YAML (`workflow_file` field)**: An action YAML may include a `workflow_file` field (e.g., `workflow_file: workflows/timeline_demo.yaml`) pointing to a workflow definition file relative to the `actions/` directory. When present, the `PackComponentLoader` reads and parses the referenced workflow YAML, creates/updates a `workflow_definition` record, and links the action to it via `action.workflow_def`. This separates action-level metadata (ref, label, parameters, policies) from the workflow graph (tasks, transitions, variables), and allows **multiple actions to reference the same workflow file** with different parameter schemas or policy configurations. Workflow actions have no `runner_type` (runtime is `None`) — the executor orchestrates child task executions rather than sending to a worker.
- **Action-linked workflow files omit action-level metadata**: Workflow files referenced via `workflow_file` should contain **only the execution graph**: `version`, `vars`, `tasks`, `output_map`. The `ref`, `label`, `description`, `parameters`, `output`, and `tags` fields are omitted — the action YAML is the single authoritative source for those values. The `WorkflowDefinition` parser accepts empty `ref`/`label` (defaults to `""`), and the loader / registrar fall back to the action YAML (or filename-derived values) when they are missing. Standalone workflow files (in `workflows/`) still carry their own `ref`/`label` since they have no companion action YAML.
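For example, a manual cross-arch pack-binaries build using the Makefile variable introduced in this change (a sketch; the default target stays x86_64):

```bash
# Override the default x86_64 musl target to build arm64 pack binaries.
make docker-build-pack-binaries PACK_BINARIES_RUST_TARGET=aarch64-unknown-linux-musl
```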
@@ -683,7 +684,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
- `docker/Dockerfile.optimized` - Optimized service builds (api, executor, notifier)
- `docker/Dockerfile.agent` - Statically-linked agent binary (musl, for injection into any container)
- `docker/Dockerfile.web` - Web UI build
- `docker/Dockerfile.pack-binaries` - Separate pack binary builder (cargo-zigbuild + musl static linking, 3 stages: builder, output, pack-binaries-init)
- `scripts/build-pack-binaries.sh` - Build pack binaries script
## Common Pitfalls to Avoid
@@ -703,7 +704,7 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
14. **REMEMBER** schema is determined by `search_path`, not hardcoded in queries (production uses `attune`, development uses `public`)
15. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
16. **REMEMBER** packs are volumes - update with restart, not rebuild
17. **REMEMBER** pack binaries are automatically built by `init-pack-binaries` in Docker Compose. For manual builds use `make docker-build-pack-binaries` or `./scripts/build-pack-binaries.sh`.
18. **REMEMBER** when adding mutable columns to `execution` or `worker`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration. Events and enforcements are hypertables without history tables — do NOT add frequently-mutated columns to them. Execution is both a hypertable AND has an `execution_history` table (because it is mutable with ~4 updates per row).
19. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history`
20. **NEVER** use `SELECT *` on tables that have DB-only columns not in the Rust `FromRow` struct (e.g., `execution.is_workflow`, `execution.workflow_def` exist in SQL but not in the `Execution` model); doing so causes runtime deserialization failures. Define a `SELECT_COLUMNS` constant in the repository (see `execution.rs`, `pack.rs`, `runtime_version.rs` for examples) and reference it from all queries — including queries outside the repository (e.g., `timeout_monitor.rs` imports `execution::SELECT_COLUMNS`).

Cargo.lock

@@ -528,6 +528,7 @@ dependencies = [
"mockito", "mockito",
"predicates", "predicates",
"reqwest 0.13.2", "reqwest 0.13.2",
"reqwest-eventsource",
"serde", "serde",
"serde_json", "serde_json",
"serde_yaml_ng", "serde_yaml_ng",
@@ -579,6 +580,7 @@ dependencies = [
"tokio", "tokio",
"tracing", "tracing",
"tracing-subscriber", "tracing-subscriber",
"url",
"utoipa", "utoipa",
"uuid", "uuid",
"validator", "validator",
@@ -2150,21 +2152,6 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb" checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb"
[[package]]
name = "foreign-types"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1"
dependencies = [
"foreign-types-shared",
]
[[package]]
name = "foreign-types-shared"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b"
[[package]]
name = "form_urlencoded"
version = "1.2.2"
@@ -3065,15 +3052,17 @@ dependencies = [
"futures-util", "futures-util",
"lber", "lber",
"log", "log",
"native-tls",
"nom 7.1.3", "nom 7.1.3",
"percent-encoding", "percent-encoding",
"rustls",
"rustls-native-certs",
"thiserror 2.0.18", "thiserror 2.0.18",
"tokio", "tokio",
"tokio-native-tls", "tokio-rustls",
"tokio-stream", "tokio-stream",
"tokio-util", "tokio-util",
"url", "url",
"x509-parser",
] ]
[[package]] [[package]]
@@ -3314,23 +3303,6 @@ dependencies = [
"version_check", "version_check",
] ]
[[package]]
name = "native-tls"
version = "0.2.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "465500e14ea162429d264d44189adc38b199b62b1c21eea9f69e4b73cb03bbf2"
dependencies = [
"libc",
"log",
"openssl",
"openssl-probe",
"openssl-sys",
"schannel",
"security-framework",
"security-framework-sys",
"tempfile",
]
[[package]]
name = "nom"
version = "7.1.3"
@@ -3576,50 +3548,12 @@ dependencies = [
"url", "url",
] ]
[[package]]
name = "openssl"
version = "0.10.76"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "951c002c75e16ea2c65b8c7e4d3d51d5530d8dfa7d060b4776828c88cfb18ecf"
dependencies = [
"bitflags",
"cfg-if",
"foreign-types",
"libc",
"once_cell",
"openssl-macros",
"openssl-sys",
]
[[package]]
name = "openssl-macros"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "openssl-probe"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c87def4c32ab89d880effc9e097653c8da5d6ef28e6b539d313baaacfbafcbe"
[[package]]
name = "openssl-sys"
version = "0.9.112"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57d55af3b3e226502be1526dfdba67ab0e9c96fc293004e79576b2b9edb0dbdb"
dependencies = [
"cc",
"libc",
"pkg-config",
"vcpkg",
]
[[package]]
name = "option-ext"
version = "0.2.0"
@@ -4642,6 +4576,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "758025cb5fccfd3bc2fd74708fd4682be41d99e5dff73c377c0646c6012c73a4" checksum = "758025cb5fccfd3bc2fd74708fd4682be41d99e5dff73c377c0646c6012c73a4"
dependencies = [ dependencies = [
"aws-lc-rs", "aws-lc-rs",
"log",
"once_cell", "once_cell",
"ring", "ring",
"rustls-pki-types", "rustls-pki-types",
@@ -5698,16 +5633,6 @@ dependencies = [
"syn", "syn",
] ]
[[package]]
name = "tokio-native-tls"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbae76ab933c85776efabc971569dd6119c580d8f5d448769dec1764bf796ef2"
dependencies = [
"native-tls",
"tokio",
]
[[package]]
name = "tokio-rustls"
version = "0.26.4"
@@ -5749,9 +5674,11 @@ checksum = "d25a406cddcc431a75d3d9afc6a7c0f7428d4891dd973e4d54c56b46127bf857"
dependencies = [
"futures-util",
"log",
"rustls",
"rustls-native-certs",
"rustls-pki-types",
"tokio",
"tokio-rustls",
"tungstenite",
]
@@ -5938,8 +5865,9 @@ dependencies = [
"http", "http",
"httparse", "httparse",
"log", "log",
"native-tls",
"rand 0.9.2", "rand 0.9.2",
"rustls",
"rustls-pki-types",
"sha1", "sha1",
"thiserror 2.0.18", "thiserror 2.0.18",
"utf-8", "utf-8",


@@ -101,7 +101,7 @@ tar = "0.4"
flate2 = "1.1" flate2 = "1.1"
# WebSocket client # WebSocket client
tokio-tungstenite = { version = "0.28", features = ["native-tls"] } tokio-tungstenite = { version = "0.28", features = ["rustls-tls-native-roots"] }
# URL parsing # URL parsing
url = "2.5" url = "2.5"


@@ -5,8 +5,10 @@
docker-build-worker-node docker-build-worker-full deny ci-rust ci-web-blocking ci-web-advisory \
ci-security-blocking ci-security-advisory ci-blocking ci-advisory \
fmt-check pre-commit install-git-hooks \
build-agent docker-build-agent docker-build-agent-arm64 docker-build-agent-all \
run-agent run-agent-release \
docker-up-agent docker-down-agent \
docker-build-pack-binaries docker-build-pack-binaries-arm64 docker-build-pack-binaries-all
# Default target
help:
@@ -64,12 +66,19 @@ help:
@echo "" @echo ""
@echo "Agent (Universal Worker):" @echo "Agent (Universal Worker):"
@echo " make build-agent - Build statically-linked agent binary (musl)" @echo " make build-agent - Build statically-linked agent binary (musl)"
@echo " make docker-build-agent - Build agent Docker image" @echo " make docker-build-agent - Build agent Docker image (amd64, default)"
@echo " make docker-build-agent-arm64 - Build agent Docker image (arm64)"
@echo " make docker-build-agent-all - Build agent Docker images (amd64 + arm64)"
@echo " make run-agent - Run agent in development mode" @echo " make run-agent - Run agent in development mode"
@echo " make run-agent-release - Run agent in release mode" @echo " make run-agent-release - Run agent in release mode"
@echo " make docker-up-agent - Start all services + agent workers (ruby, etc.)" @echo " make docker-up-agent - Start all services + agent workers (ruby, etc.)"
@echo " make docker-down-agent - Stop agent stack" @echo " make docker-down-agent - Stop agent stack"
@echo "" @echo ""
@echo "Pack Binaries:"
@echo " make docker-build-pack-binaries - Build pack binaries Docker image (amd64, default)"
@echo " make docker-build-pack-binaries-arm64 - Build pack binaries Docker image (arm64)"
@echo " make docker-build-pack-binaries-all - Build pack binaries Docker images (amd64 + arm64)"
@echo ""
@echo "Development:" @echo "Development:"
@echo " make watch - Watch and rebuild on changes" @echo " make watch - Watch and rebuild on changes"
@echo " make install-tools - Install development tools" @echo " make install-tools - Install development tools"
@@ -238,23 +247,39 @@ docker-build-web:
docker compose build web
# Agent binary (statically-linked for injection into any container)
AGENT_RUST_TARGET ?= x86_64-unknown-linux-musl
# Pack binaries (statically-linked for packs volume)
PACK_BINARIES_RUST_TARGET ?= x86_64-unknown-linux-musl
build-agent:
@echo "Installing musl target (if not already installed)..."
rustup target add $(AGENT_RUST_TARGET) 2>/dev/null || true
@echo "Building statically-linked worker and sensor agent binaries..."
SQLX_OFFLINE=true cargo build --release --target $(AGENT_RUST_TARGET) --bin attune-agent --bin attune-sensor-agent
strip target/$(AGENT_RUST_TARGET)/release/attune-agent
strip target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent
@echo "✅ Agent binaries built:"
@echo " - target/$(AGENT_RUST_TARGET)/release/attune-agent"
@echo " - target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent"
@ls -lh target/$(AGENT_RUST_TARGET)/release/attune-agent
@ls -lh target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent
docker-build-agent:
@echo "Building agent Docker image ($(AGENT_RUST_TARGET))..."
DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(AGENT_RUST_TARGET) --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
@echo "✅ Agent image built: attune-agent:latest ($(AGENT_RUST_TARGET))"
docker-build-agent-arm64:
@echo "Building arm64 agent Docker image..."
DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target agent-init -f docker/Dockerfile.agent -t attune-agent:arm64 .
@echo "✅ Agent image built: attune-agent:arm64"
docker-build-agent-all:
@echo "Building agent Docker images for all architectures..."
$(MAKE) docker-build-agent
$(MAKE) docker-build-agent-arm64
@echo "✅ All agent images built: attune-agent:latest (amd64), attune-agent:arm64"
run-agent:
cargo run --bin attune-agent
@@ -262,6 +287,23 @@ run-agent:
run-agent-release:
cargo run --bin attune-agent --release
# Pack binaries (statically-linked for packs volume)
docker-build-pack-binaries:
@echo "Building pack binaries Docker image ($(PACK_BINARIES_RUST_TARGET))..."
DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(PACK_BINARIES_RUST_TARGET) --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:latest .
@echo "✅ Pack binaries image built: attune-pack-builder:latest ($(PACK_BINARIES_RUST_TARGET))"
docker-build-pack-binaries-arm64:
@echo "Building arm64 pack binaries Docker image..."
DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:arm64 .
@echo "✅ Pack binaries image built: attune-pack-builder:arm64"
docker-build-pack-binaries-all:
@echo "Building pack binaries Docker images for all architectures..."
$(MAKE) docker-build-pack-binaries
$(MAKE) docker-build-pack-binaries-arm64
@echo "✅ All pack binary images built: attune-pack-builder:latest (amd64), attune-pack-builder:arm64"
run-sensor-agent:
cargo run --bin attune-sensor-agent


@@ -11,7 +11,7 @@ stringData:
ATTUNE__SECURITY__ENCRYPTION_KEY: {{ .Values.security.encryptionKey | quote }}
ATTUNE__DATABASE__URL: {{ include "attune.databaseUrl" . | quote }}
ATTUNE__MESSAGE_QUEUE__URL: {{ include "attune.rabbitmqUrl" . | quote }}
ATTUNE__REDIS__URL: {{ include "attune.redisUrl" . | quote }}
DB_HOST: {{ include "attune.postgresqlServiceName" . | quote }}
DB_PORT: {{ .Values.database.port | quote }}
DB_USER: {{ .Values.database.username | quote }}


@@ -62,6 +62,8 @@ pack_registry:
enabled: true
default_registry: https://registry.attune.example.com
cache_ttl: 300
allowed_source_hosts:
- registry.attune.example.com
# Test worker configuration
# worker:


@@ -70,7 +70,7 @@ jsonschema = { workspace = true }
# HTTP client
reqwest = { workspace = true }
openidconnect = "4.0"
ldap3 = { version = "0.12", default-features = false, features = ["sync", "tls-rustls-ring"] }
url = { workspace = true }
# Archive/compression


@@ -139,7 +139,8 @@ fn conn_settings(config: &LdapConfig) -> LdapConnSettings {
/// Open a new LDAP connection.
async fn connect(config: &LdapConfig) -> Result<Ldap, ApiError> {
let settings = conn_settings(config);
let url = config.url.as_deref().unwrap_or_default();
let (conn, ldap) = LdapConnAsync::with_settings(settings, url)
.await
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to connect to LDAP server: {err}"))
@@ -333,7 +334,7 @@ fn extract_claims(config: &LdapConfig, entry: &SearchEntry) -> LdapUserClaims {
.unwrap_or_default();
LdapUserClaims {
server_url: config.url.clone().unwrap_or_default(),
dn: entry.dn.clone(),
login: first_attr(&config.login_attr),
email: first_attr(&config.email_attr),


@@ -126,15 +126,17 @@ pub async fn build_login_redirect(
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}"))
})?;
let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default();
let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| {
ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}"))
})?;
let client_secret = oidc.client_secret.clone().ok_or_else(|| {
ApiError::InternalServerError("OIDC client secret is missing".to_string())
})?;
let client_id = oidc.client_id.clone().unwrap_or_default();
let client = CoreClient::from_provider_metadata(
discovery.metadata.clone(),
ClientId::new(client_id),
Some(ClientSecret::new(client_secret)),
)
.set_redirect_uri(redirect_uri);
@@ -238,15 +240,17 @@ pub async fn handle_callback(
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}"))
})?;
let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default();
let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| {
ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}"))
})?;
let client_secret = oidc.client_secret.clone().ok_or_else(|| {
ApiError::InternalServerError("OIDC client secret is missing".to_string())
})?;
let client_id = oidc.client_id.clone().unwrap_or_default();
let client = CoreClient::from_provider_metadata(
discovery.metadata.clone(),
ClientId::new(client_id),
Some(ClientSecret::new(client_secret)),
)
.set_redirect_uri(redirect_uri);
@@ -336,7 +340,7 @@ pub async fn build_logout_redirect(
pairs.append_pair("id_token_hint", &id_token_hint); pairs.append_pair("id_token_hint", &id_token_hint);
} }
pairs.append_pair("post_logout_redirect_uri", &post_logout_redirect_uri); pairs.append_pair("post_logout_redirect_uri", &post_logout_redirect_uri);
pairs.append_pair("client_id", &oidc.client_id); pairs.append_pair("client_id", oidc.client_id.as_deref().unwrap_or_default());
} }
String::from(url) String::from(url)
} else { } else {
@@ -481,7 +485,8 @@ fn oidc_config(state: &SharedState) -> Result<OidcConfig, ApiError> {
}
async fn fetch_discovery_document(oidc: &OidcConfig) -> Result<OidcDiscoveryDocument, ApiError> {
let discovery_url = oidc.discovery_url.as_deref().unwrap_or_default();
let discovery = reqwest::get(discovery_url).await.map_err(|err| {
ApiError::InternalServerError(format!("Failed to fetch OIDC discovery document: {err}"))
})?;
@@ -621,7 +626,7 @@ async fn verify_id_token(
let issuer = discovery.metadata.issuer().to_string();
let mut validation = Validation::new(algorithm);
validation.set_issuer(&[issuer.as_str()]);
validation.set_audience(&[oidc.client_id.as_deref().unwrap_or_default()]);
validation.set_required_spec_claims(&["exp", "iat", "iss", "sub", "aud"]);
validation.validate_nbf = false;
@@ -740,7 +745,8 @@ fn should_use_secure_cookies(state: &SharedState) -> bool {
.security
.oidc
.as_ref()
.and_then(|oidc| oidc.redirect_uri.as_deref())
.map(|uri| uri.starts_with("https://"))
.unwrap_or(false)
}


@@ -2,6 +2,7 @@
use axum::{
extract::{Path, Query, State},
http::HeaderMap,
http::StatusCode,
response::{
sse::{Event, KeepAlive, Sse},
@@ -13,6 +14,7 @@ use axum::{
use chrono::Utc;
use futures::stream::{Stream, StreamExt};
use std::sync::Arc;
use std::time::Duration;
use tokio_stream::wrappers::BroadcastStream;
use attune_common::models::enums::ExecutionStatus;
@@ -32,7 +34,10 @@ use attune_common::workflow::{CancellationPolicy, WorkflowDefinition};
use sqlx::Row;
use crate::{
auth::{
jwt::{validate_token, Claims, JwtConfig, TokenType},
middleware::{AuthenticatedUser, RequireAuth},
},
authz::{AuthorizationCheck, AuthorizationService},
dto::{
common::{PaginatedResponse, PaginationParams},
@@ -46,6 +51,9 @@ use crate::{
};
use attune_common::rbac::{Action, AuthorizationContext, Resource};
const LOG_STREAM_POLL_INTERVAL: Duration = Duration::from_millis(250);
const LOG_STREAM_READ_CHUNK_SIZE: usize = 64 * 1024;
/// Create a new execution (manual execution)
///
/// This endpoint allows directly executing an action without a trigger or rule.
@@ -925,6 +933,398 @@ pub async fn stream_execution_updates(
Ok(Sse::new(filtered_stream).keep_alive(KeepAlive::default()))
}
#[derive(serde::Deserialize)]
pub struct StreamExecutionLogParams {
pub token: Option<String>,
pub offset: Option<u64>,
}
#[derive(Clone, Copy)]
enum ExecutionLogStream {
Stdout,
Stderr,
}
impl ExecutionLogStream {
fn parse(name: &str) -> Result<Self, ApiError> {
match name {
"stdout" => Ok(Self::Stdout),
"stderr" => Ok(Self::Stderr),
_ => Err(ApiError::BadRequest(format!(
"Unsupported log stream '{}'. Expected 'stdout' or 'stderr'.",
name
))),
}
}
fn file_name(self) -> &'static str {
match self {
Self::Stdout => "stdout.log",
Self::Stderr => "stderr.log",
}
}
}
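/// State machine for tailing an execution log file over SSE: wait for the
/// file to appear, stream the initial catch-up chunks, then tail appended
/// content until the execution reaches a terminal status.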
enum ExecutionLogTailState {
WaitingForFile {
full_path: std::path::PathBuf,
execution_id: i64,
},
SendInitial {
full_path: std::path::PathBuf,
execution_id: i64,
offset: u64,
pending_utf8: Vec<u8>,
},
Tail {
full_path: std::path::PathBuf,
execution_id: i64,
offset: u64,
idle_polls: u32,
pending_utf8: Vec<u8>,
},
Finished,
}
/// Stream stdout/stderr for an execution as SSE.
///
/// This tails the worker's live log files directly from the shared artifacts
/// volume. The file may not exist yet when the worker has not emitted any
/// output, so the stream waits briefly for it to appear.
#[utoipa::path(
get,
path = "/api/v1/executions/{id}/logs/{stream}/stream",
tag = "executions",
params(
("id" = i64, Path, description = "Execution ID"),
("stream" = String, Path, description = "Log stream name: stdout or stderr"),
("token" = String, Query, description = "JWT access token for authentication"),
),
responses(
(status = 200, description = "SSE stream of execution log content", content_type = "text/event-stream"),
(status = 401, description = "Unauthorized"),
(status = 404, description = "Execution not found"),
),
)]
pub async fn stream_execution_log(
State(state): State<Arc<AppState>>,
headers: HeaderMap,
Path((id, stream_name)): Path<(i64, String)>,
Query(params): Query<StreamExecutionLogParams>,
user: Result<RequireAuth, crate::auth::middleware::AuthError>,
) -> Result<Sse<impl Stream<Item = Result<Event, std::convert::Infallible>>>, ApiError> {
let authenticated_user =
authenticate_execution_log_stream_user(&state, &headers, user, params.token.as_deref())?;
validate_execution_log_stream_user(&authenticated_user, id)?;
let execution = ExecutionRepository::find_by_id(&state.db, id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Execution with ID {} not found", id)))?;
authorize_execution_log_stream(&state, &authenticated_user, &execution).await?;
let stream_name = ExecutionLogStream::parse(&stream_name)?;
let full_path = std::path::PathBuf::from(&state.config.artifacts_dir)
.join(format!("execution_{}", id))
.join(stream_name.file_name());
let db = state.db.clone();
let initial_state = ExecutionLogTailState::WaitingForFile {
full_path,
execution_id: id,
};
let start_offset = params.offset.unwrap_or(0);
let stream = futures::stream::unfold(initial_state, move |state| {
let db = db.clone();
async move {
match state {
ExecutionLogTailState::Finished => None,
ExecutionLogTailState::WaitingForFile {
full_path,
execution_id,
} => {
if full_path.exists() {
Some((
Ok(Event::default().event("waiting").data("Log file found")),
ExecutionLogTailState::SendInitial {
full_path,
execution_id,
offset: start_offset,
pending_utf8: Vec::new(),
},
))
} else if execution_log_execution_terminal(&db, execution_id).await {
Some((
Ok(Event::default().event("done").data("")),
ExecutionLogTailState::Finished,
))
} else {
tokio::time::sleep(LOG_STREAM_POLL_INTERVAL).await;
Some((
Ok(Event::default()
.event("waiting")
.data("Waiting for log output")),
ExecutionLogTailState::WaitingForFile {
full_path,
execution_id,
},
))
}
}
ExecutionLogTailState::SendInitial {
full_path,
execution_id,
offset,
pending_utf8,
} => {
let pending_utf8_on_empty = pending_utf8.clone();
match read_log_chunk(
&full_path,
offset,
LOG_STREAM_READ_CHUNK_SIZE,
pending_utf8,
)
.await
{
Some((content, new_offset, pending_utf8)) => Some((
Ok(Event::default()
.id(new_offset.to_string())
.event("content")
.data(content)),
ExecutionLogTailState::SendInitial {
full_path,
execution_id,
offset: new_offset,
pending_utf8,
},
)),
None => Some((
Ok(Event::default().comment("initial-catchup-complete")),
ExecutionLogTailState::Tail {
full_path,
execution_id,
offset,
idle_polls: 0,
pending_utf8: pending_utf8_on_empty,
},
)),
}
}
ExecutionLogTailState::Tail {
full_path,
execution_id,
offset,
idle_polls,
pending_utf8,
} => {
let pending_utf8_on_empty = pending_utf8.clone();
match read_log_chunk(
&full_path,
offset,
LOG_STREAM_READ_CHUNK_SIZE,
pending_utf8,
)
.await
{
Some((append, new_offset, pending_utf8)) => Some((
Ok(Event::default()
.id(new_offset.to_string())
.event("append")
.data(append)),
ExecutionLogTailState::Tail {
full_path,
execution_id,
offset: new_offset,
idle_polls: 0,
pending_utf8,
},
)),
None => {
let terminal =
execution_log_execution_terminal(&db, execution_id).await;
if terminal && idle_polls >= 2 {
Some((
Ok(Event::default().event("done").data("Execution complete")),
ExecutionLogTailState::Finished,
))
} else {
tokio::time::sleep(LOG_STREAM_POLL_INTERVAL).await;
Some((
Ok(Event::default()
.event("waiting")
.data("Waiting for log output")),
ExecutionLogTailState::Tail {
full_path,
execution_id,
offset,
idle_polls: idle_polls + 1,
pending_utf8: pending_utf8_on_empty,
},
))
}
}
}
}
}
}
});
Ok(Sse::new(stream).keep_alive(KeepAlive::default()))
}
async fn read_log_chunk(
path: &std::path::Path,
offset: u64,
max_bytes: usize,
mut pending_utf8: Vec<u8>,
) -> Option<(String, u64, Vec<u8>)> {
use tokio::io::{AsyncReadExt, AsyncSeekExt};
let mut file = tokio::fs::File::open(path).await.ok()?;
let metadata = file.metadata().await.ok()?;
if metadata.len() <= offset {
return None;
}
file.seek(std::io::SeekFrom::Start(offset)).await.ok()?;
let bytes_to_read = ((metadata.len() - offset) as usize).min(max_bytes);
let mut buf = vec![0u8; bytes_to_read];
let read = file.read(&mut buf).await.ok()?;
buf.truncate(read);
if buf.is_empty() {
return None;
}
pending_utf8.extend_from_slice(&buf);
let (content, pending_utf8) = decode_utf8_chunk(pending_utf8);
Some((content, offset + read as u64, pending_utf8))
}
async fn execution_log_execution_terminal(db: &sqlx::PgPool, execution_id: i64) -> bool {
match ExecutionRepository::find_by_id(db, execution_id).await {
Ok(Some(execution)) => matches!(
execution.status,
ExecutionStatus::Completed
| ExecutionStatus::Failed
| ExecutionStatus::Cancelled
| ExecutionStatus::Timeout
| ExecutionStatus::Abandoned
),
_ => true,
}
}
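/// Decode a chunk of bytes as UTF-8, returning the decoded text plus any
/// trailing bytes of an incomplete multi-byte sequence to carry over into
/// the next read.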
fn decode_utf8_chunk(mut bytes: Vec<u8>) -> (String, Vec<u8>) {
match std::str::from_utf8(&bytes) {
Ok(valid) => (valid.to_string(), Vec::new()),
Err(err) if err.error_len().is_none() => {
let pending = bytes.split_off(err.valid_up_to());
(String::from_utf8_lossy(&bytes).into_owned(), pending)
}
Err(_) => (String::from_utf8_lossy(&bytes).into_owned(), Vec::new()),
}
}
async fn authorize_execution_log_stream(
state: &Arc<AppState>,
user: &AuthenticatedUser,
execution: &attune_common::models::Execution,
) -> Result<(), ApiError> {
if user.claims.token_type != TokenType::Access {
return Ok(());
}
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(execution.id);
ctx.target_ref = Some(execution.action_ref.clone());
authz
.authorize(
user,
AuthorizationCheck {
resource: Resource::Executions,
action: Action::Read,
context: ctx,
},
)
.await
}
fn authenticate_execution_log_stream_user(
state: &Arc<AppState>,
headers: &HeaderMap,
user: Result<RequireAuth, crate::auth::middleware::AuthError>,
query_token: Option<&str>,
) -> Result<AuthenticatedUser, ApiError> {
match user {
Ok(RequireAuth(user)) => Ok(user),
Err(_) => {
if let Some(user) = crate::auth::oidc::cookie_authenticated_user(headers, state)? {
return Ok(user);
}
let token = query_token.ok_or(ApiError::Unauthorized(
"Missing authentication token".to_string(),
))?;
authenticate_execution_log_stream_query_token(token, &state.jwt_config)
}
}
}
fn authenticate_execution_log_stream_query_token(
token: &str,
jwt_config: &JwtConfig,
) -> Result<AuthenticatedUser, ApiError> {
let claims = validate_token(token, jwt_config)
.map_err(|_| ApiError::Unauthorized("Invalid authentication token".to_string()))?;
Ok(AuthenticatedUser { claims })
}
fn validate_execution_log_stream_user(
user: &AuthenticatedUser,
execution_id: i64,
) -> Result<(), ApiError> {
let claims = &user.claims;
match claims.token_type {
TokenType::Access => Ok(()),
TokenType::Execution => validate_execution_token_scope(claims, execution_id),
TokenType::Sensor | TokenType::Refresh => Err(ApiError::Unauthorized(
"Invalid authentication token".to_string(),
)),
}
}
fn validate_execution_token_scope(claims: &Claims, execution_id: i64) -> Result<(), ApiError> {
if claims.scope.as_deref() != Some("execution") {
return Err(ApiError::Unauthorized(
"Invalid authentication token".to_string(),
));
}
let token_execution_id = claims
.metadata
.as_ref()
.and_then(|metadata| metadata.get("execution_id"))
.and_then(|value| value.as_i64())
.ok_or_else(|| ApiError::Unauthorized("Invalid authentication token".to_string()))?;
if token_execution_id != execution_id {
return Err(ApiError::Forbidden(format!(
"Execution token is not valid for execution {}",
execution_id
)));
}
Ok(())
}
#[derive(serde::Deserialize)]
pub struct StreamExecutionParams {
pub execution_id: Option<i64>,
@@ -937,6 +1337,10 @@ pub fn routes() -> Router<Arc<AppState>> {
.route("/executions/execute", axum::routing::post(create_execution)) .route("/executions/execute", axum::routing::post(create_execution))
.route("/executions/stats", get(get_execution_stats)) .route("/executions/stats", get(get_execution_stats))
.route("/executions/stream", get(stream_execution_updates)) .route("/executions/stream", get(stream_execution_updates))
.route(
"/executions/{id}/logs/{stream}/stream",
get(stream_execution_log),
)
.route("/executions/{id}", get(get_execution)) .route("/executions/{id}", get(get_execution))
.route( .route(
"/executions/{id}/cancel", "/executions/{id}/cancel",
@@ -955,10 +1359,26 @@ pub fn routes() -> Router<Arc<AppState>> {
#[cfg(test)]
mod tests {
use super::*;
use attune_common::auth::jwt::generate_execution_token;
#[test]
fn test_execution_routes_structure() {
// Just verify the router can be constructed
let _router = routes();
}
#[test]
fn execution_token_scope_must_match_requested_execution() {
let jwt_config = JwtConfig {
secret: "test_secret_key_for_testing".to_string(),
access_token_expiration: 3600,
refresh_token_expiration: 604800,
};
let token = generate_execution_token(42, 123, "core.echo", &jwt_config, None).unwrap();
let user = authenticate_execution_log_stream_query_token(&token, &jwt_config).unwrap();
let err = validate_execution_log_stream_user(&user, 456).unwrap_err();
assert!(matches!(err, ApiError::Forbidden(_)));
}
}
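For reference, a hedged sketch of consuming the new log-stream endpoint from a shell; the route and the `token`/`offset` query parameters come from the handler above, while the host, execution ID, and token value are placeholders:

```bash
# Tail stdout for execution 42 as Server-Sent Events; -N disables buffering.
curl -N \
  "http://localhost:8080/api/v1/executions/42/logs/stdout/stream?offset=0&token=${TOKEN}"
```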


@@ -23,6 +23,7 @@ clap = { workspace = true, features = ["derive", "env", "string"] }
# HTTP client
reqwest = { workspace = true, features = ["multipart", "stream"] }
reqwest-eventsource = { workspace = true }
# Serialization
serde = { workspace = true }


@@ -21,6 +21,11 @@ pub struct ApiResponse<T> {
pub data: T,
}
#[derive(Debug, serde::Deserialize)]
struct PaginatedResponse<T> {
data: Vec<T>,
}
/// API error response
#[derive(Debug, serde::Deserialize)]
pub struct ApiError {
@@ -55,6 +60,10 @@ impl ApiClient {
&self.base_url
}
pub fn auth_token(&self) -> Option<&str> {
self.auth_token.as_deref()
}
#[cfg(test)]
pub fn new(base_url: String, auth_token: Option<String>) -> Self {
let client = HttpClient::builder()
@@ -255,6 +264,31 @@ impl ApiClient {
}
}
async fn handle_paginated_response<T: DeserializeOwned>(
&self,
response: reqwest::Response,
) -> Result<Vec<T>> {
let status = response.status();
if status.is_success() {
let paginated: PaginatedResponse<T> = response
.json()
.await
.context("Failed to parse paginated API response")?;
Ok(paginated.data)
} else {
let error_text = response
.text()
.await
.unwrap_or_else(|_| "Unknown error".to_string());
if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
anyhow::bail!("API error ({}): {}", status, api_error.error);
} else {
anyhow::bail!("API error ({}): {}", status, error_text);
}
}
}
/// Handle a response where we only care about success/failure, not a body.
async fn handle_empty_response(&self, response: reqwest::Response) -> Result<()> {
let status = response.status();
@@ -281,6 +315,25 @@ impl ApiClient {
self.execute_json::<T, ()>(Method::GET, path, None).await
}
pub async fn get_paginated<T: DeserializeOwned>(&mut self, path: &str) -> Result<Vec<T>> {
let req = self.build_request(Method::GET, path);
let response = req.send().await.context("Failed to send request to API")?;
if response.status() == StatusCode::UNAUTHORIZED
&& self.refresh_token.is_some()
&& self.refresh_auth_token().await?
{
let req = self.build_request(Method::GET, path);
let response = req
.send()
.await
.context("Failed to send request to API (retry)")?;
return self.handle_paginated_response(response).await;
}
self.handle_paginated_response(response).await
}
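A minimal usage sketch of `get_paginated`, assuming a caller with a mutable `ApiClient` and the `ExecutionListItem` shape introduced later in this change; the endpoint, `parent_id`, and page size are illustrative, and the loop mirrors `list_child_executions` below.

```rust
// Drain a paginated listing page by page; a short page signals the last one.
let mut page = 1u32;
let mut all: Vec<ExecutionListItem> = Vec::new();
loop {
    let path = format!("/executions?parent={parent_id}&page={page}&per_page=100");
    let mut items: Vec<ExecutionListItem> = client.get_paginated(&path).await?;
    let fetched = items.len();
    all.append(&mut items);
    if fetched < 100 {
        break;
    }
    page += 1;
}
```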
/// GET request with query parameters (query string must be in path) /// GET request with query parameters (query string must be in path)
/// ///
/// Part of REST client API - reserved for future advanced filtering/search features. /// Part of REST client API - reserved for future advanced filtering/search features.


@@ -6,7 +6,7 @@ use std::collections::HashMap;
use crate::client::ApiClient; use crate::client::ApiClient;
use crate::config::CliConfig; use crate::config::CliConfig;
use crate::output::{self, OutputFormat}; use crate::output::{self, OutputFormat};
use crate::wait::{wait_for_execution, WaitOptions}; use crate::wait::{extract_stdout, spawn_execution_output_watch, wait_for_execution, WaitOptions};
#[derive(Subcommand)] #[derive(Subcommand)]
pub enum ActionCommands { pub enum ActionCommands {
@@ -493,6 +493,15 @@ async fn handle_execute(
} }
let verbose = matches!(output_format, OutputFormat::Table); let verbose = matches!(output_format, OutputFormat::Table);
let watch_task = if verbose {
Some(spawn_execution_output_watch(
ApiClient::from_config(&config, api_url),
execution.id,
verbose,
))
} else {
None
};
let summary = wait_for_execution(WaitOptions { let summary = wait_for_execution(WaitOptions {
execution_id: execution.id, execution_id: execution.id,
timeout_secs: timeout, timeout_secs: timeout,
@@ -501,6 +510,13 @@ async fn handle_execute(
verbose, verbose,
}) })
.await?; .await?;
let suppress_final_stdout = watch_task
.as_ref()
.is_some_and(|task| task.delivered_output() && task.root_stdout_completed());
if let Some(task) = watch_task {
let _ = tokio::time::timeout(tokio::time::Duration::from_secs(2), task.handle).await;
}
match output_format { match output_format {
OutputFormat::Json | OutputFormat::Yaml => { OutputFormat::Json | OutputFormat::Yaml => {
@@ -517,7 +533,20 @@ async fn handle_execute(
("Updated", output::format_timestamp(&summary.updated)), ("Updated", output::format_timestamp(&summary.updated)),
]); ]);
if let Some(result) = summary.result { let stdout = extract_stdout(&summary.result);
if !suppress_final_stdout {
if let Some(stdout) = &stdout {
output::print_section("Stdout");
println!("{}", stdout);
}
}
if let Some(mut result) = summary.result {
if stdout.is_some() {
if let Some(obj) = result.as_object_mut() {
obj.remove("stdout");
}
}
if !result.is_null() { if !result.is_null() {
output::print_section("Result"); output::print_section("Result");
println!("{}", serde_json::to_string_pretty(&result)?); println!("{}", serde_json::to_string_pretty(&result)?);


@@ -803,6 +803,7 @@ async fn handle_upload(
api_url: &Option<String>, api_url: &Option<String>,
output_format: OutputFormat, output_format: OutputFormat,
) -> Result<()> { ) -> Result<()> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- CLI users explicitly choose a local file to upload; this is not a server-side path sink.
let file_path = Path::new(&file); let file_path = Path::new(&file);
if !file_path.exists() { if !file_path.exists() {
anyhow::bail!("File not found: {}", file); anyhow::bail!("File not found: {}", file);
@@ -811,6 +812,7 @@ async fn handle_upload(
anyhow::bail!("Not a file: {}", file); anyhow::bail!("Not a file: {}", file);
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The validated CLI-selected upload path is intentionally read and sent to the API.
let file_bytes = tokio::fs::read(file_path).await?; let file_bytes = tokio::fs::read(file_path).await?;
let file_name = file_path let file_name = file_path
.file_name() .file_name()


@@ -840,6 +840,7 @@ async fn handle_upload(
api_url: &Option<String>, api_url: &Option<String>,
output_format: OutputFormat, output_format: OutputFormat,
) -> Result<()> { ) -> Result<()> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- CLI pack commands intentionally operate on operator-supplied local paths.
let pack_dir = Path::new(&path); let pack_dir = Path::new(&path);
// Validate the directory exists and contains pack.yaml // Validate the directory exists and contains pack.yaml
@@ -855,6 +856,7 @@ async fn handle_upload(
} }
// Read pack ref from pack.yaml so we can display it // Read pack ref from pack.yaml so we can display it
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Reading local pack metadata from the user-selected pack directory is expected CLI behavior.
let pack_yaml_content = let pack_yaml_content =
std::fs::read_to_string(&pack_yaml_path).context("Failed to read pack.yaml")?; std::fs::read_to_string(&pack_yaml_path).context("Failed to read pack.yaml")?;
let pack_yaml: serde_yaml_ng::Value = let pack_yaml: serde_yaml_ng::Value =
@@ -957,6 +959,7 @@ fn append_dir_to_tar<W: std::io::Write>(
base: &Path, base: &Path,
dir: &Path, dir: &Path,
) -> Result<()> { ) -> Result<()> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The archiver walks a validated local directory selected by the CLI operator.
for entry in std::fs::read_dir(dir).context("Failed to read directory")? { for entry in std::fs::read_dir(dir).context("Failed to read directory")? {
let entry = entry.context("Failed to read directory entry")?; let entry = entry.context("Failed to read directory entry")?;
let entry_path = entry.path(); let entry_path = entry.path();
@@ -1061,6 +1064,7 @@ async fn handle_test(
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
// Determine if pack is a path or a pack name // Determine if pack is a path or a pack name
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Pack test targets are local CLI inputs, not remote request paths.
let pack_path = Path::new(&pack); let pack_path = Path::new(&pack);
let (pack_dir, pack_ref, pack_version) = if pack_path.exists() && pack_path.is_dir() { let (pack_dir, pack_ref, pack_version) = if pack_path.exists() && pack_path.is_dir() {
// Local pack directory // Local pack directory
@@ -1072,6 +1076,7 @@ async fn handle_test(
anyhow::bail!("pack.yaml not found in directory: {}", pack); anyhow::bail!("pack.yaml not found in directory: {}", pack);
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- This reads pack.yaml from a local directory explicitly selected by the CLI operator.
let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?; let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?;
let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?; let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?;
@@ -1107,6 +1112,7 @@ async fn handle_test(
anyhow::bail!("pack.yaml not found for pack: {}", pack); anyhow::bail!("pack.yaml not found for pack: {}", pack);
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Installed pack tests intentionally read local metadata from the workspace packs directory.
let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?; let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?;
let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?; let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?;
@@ -1120,6 +1126,7 @@ async fn handle_test(
// Load pack.yaml and extract test configuration // Load pack.yaml and extract test configuration
let pack_yaml_path = pack_dir.join("pack.yaml"); let pack_yaml_path = pack_dir.join("pack.yaml");
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Test configuration is loaded from the validated local pack directory.
let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?; let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?;
let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?; let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?;
@@ -1484,6 +1491,7 @@ fn detect_source_type(source: &str, ref_spec: Option<&str>, no_registry: bool) -
async fn handle_checksum(path: String, json: bool, output_format: OutputFormat) -> Result<()> { async fn handle_checksum(path: String, json: bool, output_format: OutputFormat) -> Result<()> {
use attune_common::pack_registry::{calculate_directory_checksum, calculate_file_checksum}; use attune_common::pack_registry::{calculate_directory_checksum, calculate_file_checksum};
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Checksum generation intentionally accepts arbitrary local paths from the CLI operator.
let path_obj = Path::new(&path); let path_obj = Path::new(&path);
if !path_obj.exists() { if !path_obj.exists() {
@@ -1581,6 +1589,7 @@ async fn handle_index_entry(
) -> Result<()> { ) -> Result<()> {
use attune_common::pack_registry::calculate_directory_checksum; use attune_common::pack_registry::calculate_directory_checksum;
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Index-entry generation intentionally inspects a local pack directory chosen by the CLI operator.
let path_obj = Path::new(&path); let path_obj = Path::new(&path);
if !path_obj.exists() { if !path_obj.exists() {
@@ -1606,6 +1615,7 @@ async fn handle_index_entry(
} }
// Read and parse pack.yaml // Read and parse pack.yaml
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Reading local pack metadata for index generation is expected CLI behavior.
let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?; let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)?;
let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?; let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?;


@@ -19,11 +19,13 @@ pub async fn handle_index_update(
output_format: OutputFormat, output_format: OutputFormat,
) -> Result<()> { ) -> Result<()> {
// Load existing index // Load existing index
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Registry index maintenance is a local CLI/admin operation over operator-supplied files.
let index_file_path = Path::new(&index_path); let index_file_path = Path::new(&index_path);
if !index_file_path.exists() { if !index_file_path.exists() {
return Err(anyhow::anyhow!("Index file not found: {}", index_path)); return Err(anyhow::anyhow!("Index file not found: {}", index_path));
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The CLI intentionally reads the local index file selected by the operator.
let index_content = fs::read_to_string(index_file_path)?; let index_content = fs::read_to_string(index_file_path)?;
let mut index: JsonValue = serde_json::from_str(&index_content)?; let mut index: JsonValue = serde_json::from_str(&index_content)?;
@@ -34,6 +36,7 @@ pub async fn handle_index_update(
.ok_or_else(|| anyhow::anyhow!("Invalid index format: missing 'packs' array"))?; .ok_or_else(|| anyhow::anyhow!("Invalid index format: missing 'packs' array"))?;
// Load pack.yaml from the pack directory // Load pack.yaml from the pack directory
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Local pack directories are explicit CLI inputs, not remote taint.
let pack_dir = Path::new(&pack_path); let pack_dir = Path::new(&pack_path);
if !pack_dir.exists() || !pack_dir.is_dir() { if !pack_dir.exists() || !pack_dir.is_dir() {
return Err(anyhow::anyhow!("Pack directory not found: {}", pack_path)); return Err(anyhow::anyhow!("Pack directory not found: {}", pack_path));
@@ -47,6 +50,7 @@ pub async fn handle_index_update(
)); ));
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Reading pack.yaml from a local operator-selected pack directory is expected CLI behavior.
let pack_yaml_content = fs::read_to_string(&pack_yaml_path)?; let pack_yaml_content = fs::read_to_string(&pack_yaml_path)?;
let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?; let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)?;
@@ -250,6 +254,7 @@ pub async fn handle_index_merge(
output_format: OutputFormat, output_format: OutputFormat,
) -> Result<()> { ) -> Result<()> {
// Check if output file exists // Check if output file exists
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Index merge output is a local CLI path controlled by the operator.
let output_file_path = Path::new(&output_path); let output_file_path = Path::new(&output_path);
if output_file_path.exists() && !force { if output_file_path.exists() && !force {
return Err(anyhow::anyhow!( return Err(anyhow::anyhow!(
@@ -265,6 +270,7 @@ pub async fn handle_index_merge(
// Load and merge all input files // Load and merge all input files
for input_path in &input_paths { for input_path in &input_paths {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Index merge inputs are local operator-selected files.
let input_file_path = Path::new(input_path); let input_file_path = Path::new(input_path);
if !input_file_path.exists() { if !input_file_path.exists() {
if output_format == OutputFormat::Table { if output_format == OutputFormat::Table {
@@ -277,6 +283,7 @@ pub async fn handle_index_merge(
output::print_info(&format!("Loading: {}", input_path)); output::print_info(&format!("Loading: {}", input_path));
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The CLI intentionally reads each local input index file during merge.
let index_content = fs::read_to_string(input_file_path)?; let index_content = fs::read_to_string(input_file_path)?;
let index: JsonValue = serde_json::from_str(&index_content)?; let index: JsonValue = serde_json::from_str(&index_content)?;


@@ -172,6 +172,7 @@ async fn handle_upload(
api_url: &Option<String>, api_url: &Option<String>,
output_format: OutputFormat, output_format: OutputFormat,
) -> Result<()> { ) -> Result<()> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Workflow upload reads local files chosen by the CLI operator; it is not a server-side path sink.
let action_path = Path::new(&action_file); let action_path = Path::new(&action_file);
// ── 1. Validate & read the action YAML ────────────────────────────── // ── 1. Validate & read the action YAML ──────────────────────────────
@@ -182,6 +183,7 @@ async fn handle_upload(
anyhow::bail!("Path is not a file: {}", action_file); anyhow::bail!("Path is not a file: {}", action_file);
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The action YAML is intentionally read from the validated local CLI path.
let action_yaml_content = let action_yaml_content =
std::fs::read_to_string(action_path).context("Failed to read action YAML file")?; std::fs::read_to_string(action_path).context("Failed to read action YAML file")?;
@@ -216,6 +218,7 @@ async fn handle_upload(
} }
// ── 4. Read and parse the workflow YAML ───────────────────────────── // ── 4. Read and parse the workflow YAML ─────────────────────────────
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The workflow file path is confined to the pack directory before this local read occurs.
let workflow_yaml_content = let workflow_yaml_content =
std::fs::read_to_string(&workflow_path).context("Failed to read workflow YAML file")?; std::fs::read_to_string(&workflow_path).context("Failed to read workflow YAML file")?;
@@ -616,12 +619,41 @@ fn split_action_ref(action_ref: &str) -> Result<(String, String)> {
/// resolved relative to the action YAML's parent directory. /// resolved relative to the action YAML's parent directory.
fn resolve_workflow_path(action_yaml_path: &Path, workflow_file: &str) -> Result<PathBuf> { fn resolve_workflow_path(action_yaml_path: &Path, workflow_file: &str) -> Result<PathBuf> {
let action_dir = action_yaml_path.parent().unwrap_or(Path::new(".")); let action_dir = action_yaml_path.parent().unwrap_or(Path::new("."));
let pack_root = action_dir
.parent()
.ok_or_else(|| anyhow::anyhow!("Action YAML must live inside a pack actions/ directory"))?;
let canonical_pack_root = pack_root
.canonicalize()
.context("Failed to resolve pack root for workflow file")?;
let canonical_action_dir = action_dir
.canonicalize()
.context("Failed to resolve action directory for workflow file")?;
let canonical_workflow_path = normalize_path_from_base(&canonical_action_dir, workflow_file);
let resolved = action_dir.join(workflow_file); if !canonical_workflow_path.starts_with(&canonical_pack_root) {
anyhow::bail!(
"Workflow file resolves outside the pack directory: {}",
workflow_file
);
}
// Canonicalize if possible (for better error messages), but don't fail Ok(canonical_workflow_path)
// if the file doesn't exist yet — we'll check existence later. }
Ok(resolved)
fn normalize_path_from_base(base: &Path, relative_path: &str) -> PathBuf {
let mut normalized = PathBuf::new();
for component in base.join(relative_path).components() {
match component {
std::path::Component::Prefix(prefix) => normalized.push(prefix.as_os_str()),
std::path::Component::RootDir => normalized.push(std::path::MAIN_SEPARATOR.to_string()),
std::path::Component::CurDir => {}
std::path::Component::ParentDir => {
normalized.pop();
}
std::path::Component::Normal(part) => normalized.push(part),
}
}
normalized
} }
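To make the confinement behavior concrete, here is a small sketch of what the lexical normalization above produces; the paths are hypothetical and nothing is read from disk.

```rust
let base = Path::new("/packs/mypack/actions");
// A plain relative workflow path stays under the actions directory.
assert_eq!(
    normalize_path_from_base(base, "workflows/deploy.workflow.yaml"),
    PathBuf::from("/packs/mypack/actions/workflows/deploy.workflow.yaml")
);
// ParentDir components are collapsed lexically, so "../" steps up one level
// before resolve_workflow_path checks the result against the pack root.
assert_eq!(
    normalize_path_from_base(base, "../workflows/deploy.workflow.yaml"),
    PathBuf::from("/packs/mypack/workflows/deploy.workflow.yaml")
);
```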
#[cfg(test)] #[cfg(test)]
@@ -655,23 +687,62 @@ mod tests {
#[test] #[test]
fn test_resolve_workflow_path() { fn test_resolve_workflow_path() {
let action_path = Path::new("/packs/mypack/actions/deploy.yaml"); let temp = tempfile::tempdir().unwrap();
let pack_dir = temp.path().join("mypack");
let actions_dir = pack_dir.join("actions");
let workflow_dir = actions_dir.join("workflows");
std::fs::create_dir_all(&workflow_dir).unwrap();
let action_path = actions_dir.join("deploy.yaml");
let workflow_path = workflow_dir.join("deploy.workflow.yaml");
std::fs::write(
&action_path,
"ref: mypack.deploy\nworkflow_file: workflows/deploy.workflow.yaml\n",
)
.unwrap();
std::fs::write(&workflow_path, "version: 1.0.0\n").unwrap();
let resolved = let resolved =
resolve_workflow_path(action_path, "workflows/deploy.workflow.yaml").unwrap(); resolve_workflow_path(&action_path, "workflows/deploy.workflow.yaml").unwrap();
assert_eq!( assert_eq!(resolved, workflow_path.canonicalize().unwrap());
resolved,
PathBuf::from("/packs/mypack/actions/workflows/deploy.workflow.yaml")
);
} }
#[test] #[test]
fn test_resolve_workflow_path_relative() { fn test_resolve_workflow_path_relative() {
let action_path = Path::new("actions/deploy.yaml"); let temp = tempfile::tempdir().unwrap();
let pack_dir = temp.path().join("mypack");
let actions_dir = pack_dir.join("actions");
let workflows_dir = pack_dir.join("workflows");
std::fs::create_dir_all(&actions_dir).unwrap();
std::fs::create_dir_all(&workflows_dir).unwrap();
let action_path = actions_dir.join("deploy.yaml");
let workflow_path = workflows_dir.join("deploy.workflow.yaml");
std::fs::write(
&action_path,
"ref: mypack.deploy\nworkflow_file: ../workflows/deploy.workflow.yaml\n",
)
.unwrap();
std::fs::write(&workflow_path, "version: 1.0.0\n").unwrap();
let resolved = let resolved =
resolve_workflow_path(action_path, "workflows/deploy.workflow.yaml").unwrap(); resolve_workflow_path(&action_path, "../workflows/deploy.workflow.yaml").unwrap();
assert_eq!( assert_eq!(resolved, workflow_path.canonicalize().unwrap());
resolved, }
PathBuf::from("actions/workflows/deploy.workflow.yaml")
); #[test]
fn test_resolve_workflow_path_rejects_traversal_outside_pack() {
let temp = tempfile::tempdir().unwrap();
let pack_dir = temp.path().join("mypack");
let actions_dir = pack_dir.join("actions");
std::fs::create_dir_all(&actions_dir).unwrap();
let action_path = actions_dir.join("deploy.yaml");
let outside = temp.path().join("outside.yaml");
std::fs::write(&action_path, "ref: mypack.deploy\n").unwrap();
std::fs::write(&outside, "version: 1.0.0\n").unwrap();
let err = resolve_workflow_path(&action_path, "../../outside.yaml").unwrap_err();
assert!(err.to_string().contains("outside the pack directory"));
} }
} }


@@ -11,7 +11,13 @@
use anyhow::Result; use anyhow::Result;
use futures::{SinkExt, StreamExt}; use futures::{SinkExt, StreamExt};
use reqwest_eventsource::{Event as SseEvent, EventSource};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::{
atomic::{AtomicBool, AtomicU64, Ordering},
Arc,
};
use std::time::{Duration, Instant}; use std::time::{Duration, Instant};
use tokio_tungstenite::{connect_async, tungstenite::Message}; use tokio_tungstenite::{connect_async, tungstenite::Message};
@@ -54,6 +60,22 @@ pub struct WaitOptions<'a> {
pub verbose: bool, pub verbose: bool,
} }
pub struct OutputWatchTask {
pub handle: tokio::task::JoinHandle<()>,
delivered_output: Arc<AtomicBool>,
root_stdout_completed: Arc<AtomicBool>,
}
impl OutputWatchTask {
pub fn delivered_output(&self) -> bool {
self.delivered_output.load(Ordering::Relaxed)
}
pub fn root_stdout_completed(&self) -> bool {
self.root_stdout_completed.load(Ordering::Relaxed)
}
}
// ── notifier WebSocket messages (mirrors websocket_server.rs) ──────────────── // ── notifier WebSocket messages (mirrors websocket_server.rs) ────────────────
#[derive(Debug, Serialize)] #[derive(Debug, Serialize)]
@@ -102,6 +124,58 @@ struct RestExecution {
updated: String, updated: String,
} }
#[derive(Debug, Clone, Deserialize)]
struct WorkflowTaskMetadata {
task_name: String,
#[serde(default)]
task_index: Option<i32>,
}
#[derive(Debug, Clone, Deserialize)]
struct ExecutionListItem {
id: i64,
action_ref: String,
status: String,
#[serde(default)]
workflow_task: Option<WorkflowTaskMetadata>,
}
#[derive(Debug)]
struct ChildWatchState {
label: String,
status: String,
announced_terminal: bool,
stream_handles: Vec<StreamWatchHandle>,
}
struct RootWatchState {
stream_handles: Vec<StreamWatchHandle>,
}
#[derive(Debug)]
struct StreamWatchHandle {
stream_name: &'static str,
offset: Arc<AtomicU64>,
handle: tokio::task::JoinHandle<()>,
}
#[derive(Clone)]
struct StreamWatchConfig {
base_url: String,
token: String,
execution_id: i64,
prefix: Option<String>,
verbose: bool,
delivered_output: Arc<AtomicBool>,
root_stdout_completed: Option<Arc<AtomicBool>>,
}
struct StreamLogTask {
stream_name: &'static str,
offset: Arc<AtomicU64>,
config: StreamWatchConfig,
}
impl From<RestExecution> for ExecutionSummary { impl From<RestExecution> for ExecutionSummary {
fn from(e: RestExecution) -> Self { fn from(e: RestExecution) -> Self {
Self { Self {
@@ -177,6 +251,260 @@ pub async fn wait_for_execution(opts: WaitOptions<'_>) -> Result<ExecutionSummar
.await .await
} }
pub fn spawn_execution_output_watch(
mut client: ApiClient,
execution_id: i64,
verbose: bool,
) -> OutputWatchTask {
let delivered_output = Arc::new(AtomicBool::new(false));
let root_stdout_completed = Arc::new(AtomicBool::new(false));
let delivered_output_for_task = delivered_output.clone();
let root_stdout_completed_for_task = root_stdout_completed.clone();
let handle = tokio::spawn(async move {
if let Err(err) = watch_execution_output(
&mut client,
execution_id,
verbose,
delivered_output_for_task,
root_stdout_completed_for_task,
)
.await
{
if verbose {
eprintln!(" [watch] {}", err);
}
}
});
OutputWatchTask {
handle,
delivered_output,
root_stdout_completed,
}
}
async fn watch_execution_output(
client: &mut ApiClient,
execution_id: i64,
verbose: bool,
delivered_output: Arc<AtomicBool>,
root_stdout_completed: Arc<AtomicBool>,
) -> Result<()> {
let base_url = client.base_url().to_string();
let mut root_watch: Option<RootWatchState> = None;
let mut children: HashMap<i64, ChildWatchState> = HashMap::new();
loop {
let execution: RestExecution = client.get(&format!("/executions/{}", execution_id)).await?;
if root_watch
.as_ref()
.is_none_or(|state| streams_need_restart(&state.stream_handles))
{
if let Some(token) = client.auth_token().map(str::to_string) {
match root_watch.as_mut() {
Some(state) => restart_finished_streams(
&mut state.stream_handles,
&StreamWatchConfig {
base_url: base_url.clone(),
token,
execution_id,
prefix: None,
verbose,
delivered_output: delivered_output.clone(),
root_stdout_completed: Some(root_stdout_completed.clone()),
},
),
None => {
root_watch = Some(RootWatchState {
stream_handles: spawn_execution_log_streams(StreamWatchConfig {
base_url: base_url.clone(),
token,
execution_id,
verbose,
prefix: None,
delivered_output: delivered_output.clone(),
root_stdout_completed: Some(root_stdout_completed.clone()),
}),
});
}
}
}
}
let child_items = list_child_executions(client, execution_id)
.await
.unwrap_or_default();
for child in child_items {
let label = format_task_label(&child.workflow_task, &child.action_ref, child.id);
let entry = children.entry(child.id).or_insert_with(|| {
if verbose {
eprintln!(" [{}] started ({})", label, child.action_ref);
}
let stream_handles = client
.auth_token()
.map(str::to_string)
.map(|token| {
spawn_execution_log_streams(StreamWatchConfig {
base_url: base_url.clone(),
token,
execution_id: child.id,
prefix: Some(label.clone()),
verbose,
delivered_output: delivered_output.clone(),
root_stdout_completed: None,
})
})
.unwrap_or_default();
ChildWatchState {
label,
status: child.status.clone(),
announced_terminal: false,
stream_handles,
}
});
if entry.status != child.status {
entry.status = child.status.clone();
}
let child_is_terminal = is_terminal(&entry.status);
if !child_is_terminal && streams_need_restart(&entry.stream_handles) {
if let Some(token) = client.auth_token().map(str::to_string) {
restart_finished_streams(
&mut entry.stream_handles,
&StreamWatchConfig {
base_url: base_url.clone(),
token,
execution_id: child.id,
prefix: Some(entry.label.clone()),
verbose,
delivered_output: delivered_output.clone(),
root_stdout_completed: None,
},
);
}
}
if !entry.announced_terminal && is_terminal(&child.status) {
entry.announced_terminal = true;
if verbose {
eprintln!(" [{}] {}", entry.label, child.status);
}
}
}
if is_terminal(&execution.status) {
break;
}
tokio::time::sleep(Duration::from_millis(500)).await;
}
if let Some(root_watch) = root_watch {
wait_for_stream_handles(root_watch.stream_handles).await;
}
for child in children.into_values() {
wait_for_stream_handles(child.stream_handles).await;
}
Ok(())
}
fn spawn_execution_log_streams(config: StreamWatchConfig) -> Vec<StreamWatchHandle> {
["stdout", "stderr"]
.into_iter()
.map(|stream_name| {
let offset = Arc::new(AtomicU64::new(0));
let completion_flag = if stream_name == "stdout" {
config.root_stdout_completed.clone()
} else {
None
};
StreamWatchHandle {
stream_name,
handle: tokio::spawn(stream_execution_log(StreamLogTask {
stream_name,
offset: offset.clone(),
config: StreamWatchConfig {
base_url: config.base_url.clone(),
token: config.token.clone(),
execution_id: config.execution_id,
prefix: config.prefix.clone(),
verbose: config.verbose,
delivered_output: config.delivered_output.clone(),
root_stdout_completed: completion_flag,
},
})),
offset,
}
})
.collect()
}
fn streams_need_restart(handles: &[StreamWatchHandle]) -> bool {
handles.is_empty() || handles.iter().any(|handle| handle.handle.is_finished())
}
fn restart_finished_streams(handles: &mut [StreamWatchHandle], config: &StreamWatchConfig) {
for stream in handles.iter_mut() {
if stream.handle.is_finished() {
let offset = stream.offset.clone();
let completion_flag = if stream.stream_name == "stdout" {
config.root_stdout_completed.clone()
} else {
None
};
stream.handle = tokio::spawn(stream_execution_log(StreamLogTask {
stream_name: stream.stream_name,
offset,
config: StreamWatchConfig {
base_url: config.base_url.clone(),
token: config.token.clone(),
execution_id: config.execution_id,
prefix: config.prefix.clone(),
verbose: config.verbose,
delivered_output: config.delivered_output.clone(),
root_stdout_completed: completion_flag,
},
}));
}
}
}
async fn wait_for_stream_handles(handles: Vec<StreamWatchHandle>) {
for handle in handles {
let _ = handle.handle.await;
}
}
async fn list_child_executions(
client: &mut ApiClient,
execution_id: i64,
) -> Result<Vec<ExecutionListItem>> {
const PER_PAGE: u32 = 100;
let mut page = 1;
let mut all_children = Vec::new();
loop {
let path = format!("/executions?parent={execution_id}&page={page}&per_page={PER_PAGE}");
let mut page_items: Vec<ExecutionListItem> = client.get_paginated(&path).await?;
let page_len = page_items.len();
all_children.append(&mut page_items);
if page_len < PER_PAGE as usize {
break;
}
page += 1;
}
Ok(all_children)
}
// ── WebSocket path ──────────────────────────────────────────────────────────── // ── WebSocket path ────────────────────────────────────────────────────────────
async fn wait_via_websocket( async fn wait_via_websocket(
@@ -482,6 +810,7 @@ fn resolve_ws_url(opts: &WaitOptions<'_>) -> Option<String> {
/// - `https://api.example.com` → `wss://api.example.com:8081` /// - `https://api.example.com` → `wss://api.example.com:8081`
/// - `http://api.example.com:9000` → `ws://api.example.com:8081` /// - `http://api.example.com:9000` → `ws://api.example.com:8081`
fn derive_notifier_url(api_url: &str) -> Option<String> { fn derive_notifier_url(api_url: &str) -> Option<String> {
// nosemgrep: javascript.lang.security.detect-insecure-websocket.detect-insecure-websocket -- The function upgrades https->wss and only returns ws for explicit http base URLs or test examples.
let url = url::Url::parse(api_url).ok()?; let url = url::Url::parse(api_url).ok()?;
let ws_scheme = match url.scheme() { let ws_scheme = match url.scheme() {
"https" => "wss", "https" => "wss",
@@ -491,6 +820,148 @@ fn derive_notifier_url(api_url: &str) -> Option<String> {
Some(format!("{}://{}:8081", ws_scheme, host)) Some(format!("{}://{}:8081", ws_scheme, host))
} }
pub fn extract_stdout(result: &Option<serde_json::Value>) -> Option<String> {
result
.as_ref()
.and_then(|value| value.get("stdout"))
.and_then(|stdout| stdout.as_str())
.filter(|stdout| !stdout.is_empty())
.map(ToOwned::to_owned)
}
fn format_task_label(
workflow_task: &Option<WorkflowTaskMetadata>,
action_ref: &str,
execution_id: i64,
) -> String {
if let Some(workflow_task) = workflow_task {
if let Some(index) = workflow_task.task_index {
format!("{}[{}]", workflow_task.task_name, index)
} else {
workflow_task.task_name.clone()
}
} else {
format!("{}#{}", action_ref, execution_id)
}
}
async fn stream_execution_log(task: StreamLogTask) {
let StreamLogTask {
stream_name,
offset,
config:
StreamWatchConfig {
base_url,
token,
execution_id,
prefix,
verbose,
delivered_output,
root_stdout_completed,
},
} = task;
let mut stream_url = match url::Url::parse(&format!(
"{}/api/v1/executions/{}/logs/{}/stream",
base_url.trim_end_matches('/'),
execution_id,
stream_name
)) {
Ok(url) => url,
Err(err) => {
if verbose {
eprintln!(" [watch] failed to build stream URL: {}", err);
}
return;
}
};
let current_offset = offset.load(Ordering::Relaxed).to_string();
stream_url
.query_pairs_mut()
.append_pair("token", &token)
.append_pair("offset", &current_offset);
let mut event_source = EventSource::get(stream_url);
let mut carry = String::new();
while let Some(event) = event_source.next().await {
match event {
Ok(SseEvent::Open) => {}
Ok(SseEvent::Message(message)) => match message.event.as_str() {
"content" | "append" => {
if let Ok(server_offset) = message.id.parse::<u64>() {
offset.store(server_offset, Ordering::Relaxed);
}
if !message.data.is_empty() {
delivered_output.store(true, Ordering::Relaxed);
}
print_stream_chunk(prefix.as_deref(), &message.data, &mut carry);
}
"done" => {
if let Some(flag) = &root_stdout_completed {
flag.store(true, Ordering::Relaxed);
}
flush_stream_chunk(prefix.as_deref(), &mut carry);
break;
}
"error" => {
if verbose && !message.data.is_empty() {
eprintln!(" [watch] {}", message.data);
}
break;
}
_ => {}
},
Err(err) => {
flush_stream_chunk(prefix.as_deref(), &mut carry);
if verbose {
eprintln!(
" [watch] stream error for execution {}: {}",
execution_id, err
);
}
break;
}
}
}
flush_stream_chunk(prefix.as_deref(), &mut carry);
event_source.close();
}
fn print_stream_chunk(prefix: Option<&str>, chunk: &str, carry: &mut String) {
carry.push_str(chunk);
while let Some(idx) = carry.find('\n') {
let mut line = carry.drain(..=idx).collect::<String>();
if line.ends_with('\n') {
line.pop();
}
if line.ends_with('\r') {
line.pop();
}
if let Some(prefix) = prefix {
eprintln!("[{}] {}", prefix, line);
} else {
eprintln!("{}", line);
}
}
}
fn flush_stream_chunk(prefix: Option<&str>, carry: &mut String) {
if carry.is_empty() {
return;
}
if let Some(prefix) = prefix {
eprintln!("[{}] {}", prefix, carry);
} else {
eprintln!("{}", carry);
}
carry.clear();
}
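A short sketch of how the two helpers above buffer partial lines; the prefix and chunk contents are made up.

```rust
let mut carry = String::new();
// No newline yet, so nothing is printed; the text is held in `carry`.
print_stream_chunk(Some("build[2]"), "hello ", &mut carry);
// The newline completes a line: "[build[2]] hello world" is printed and "tail" is kept.
print_stream_chunk(Some("build[2]"), "world\ntail", &mut carry);
// Flushing emits whatever remains without waiting for a newline: "[build[2]] tail".
flush_stream_chunk(Some("build[2]"), &mut carry);
```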
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
@@ -553,4 +1024,26 @@ mod tests {
assert_eq!(summary.status, "failed"); assert_eq!(summary.status, "failed");
assert_eq!(summary.action_ref, ""); assert_eq!(summary.action_ref, "");
} }
#[test]
fn test_extract_stdout() {
let result = Some(serde_json::json!({
"stdout": "hello world",
"stderr_log": "/tmp/stderr.log"
}));
assert_eq!(extract_stdout(&result).as_deref(), Some("hello world"));
}
#[test]
fn test_format_task_label() {
let workflow_task = Some(WorkflowTaskMetadata {
task_name: "build".to_string(),
task_index: Some(2),
});
assert_eq!(
format_task_label(&workflow_task, "core.echo", 42),
"build[2]"
);
assert_eq!(format_task_label(&None, "core.echo", 42), "core.echo#42");
}
} }


@@ -73,6 +73,7 @@ regex = { workspace = true }
# Version matching # Version matching
semver = { workspace = true } semver = { workspace = true }
url = { workspace = true }
[dev-dependencies] [dev-dependencies]
mockall = { workspace = true } mockall = { workspace = true }


@@ -355,10 +355,14 @@ pub struct OidcConfig {
pub enabled: bool, pub enabled: bool,
/// OpenID Provider discovery document URL. /// OpenID Provider discovery document URL.
pub discovery_url: String, /// Required when `enabled` is true; ignored otherwise.
#[serde(default)]
pub discovery_url: Option<String>,
/// Confidential client ID. /// Confidential client ID.
pub client_id: String, /// Required when `enabled` is true; ignored otherwise.
#[serde(default)]
pub client_id: Option<String>,
/// Provider name used in login-page overrides such as `?auth=<provider_name>`. /// Provider name used in login-page overrides such as `?auth=<provider_name>`.
#[serde(default = "default_oidc_provider_name")] #[serde(default = "default_oidc_provider_name")]
@@ -374,7 +378,9 @@ pub struct OidcConfig {
pub client_secret: Option<String>, pub client_secret: Option<String>,
/// Redirect URI registered with the provider. /// Redirect URI registered with the provider.
pub redirect_uri: String, /// Required when `enabled` is true; ignored otherwise.
#[serde(default)]
pub redirect_uri: Option<String>,
/// Optional post-logout redirect URI. /// Optional post-logout redirect URI.
pub post_logout_redirect_uri: Option<String>, pub post_logout_redirect_uri: Option<String>,
@@ -396,7 +402,9 @@ pub struct LdapConfig {
pub enabled: bool, pub enabled: bool,
/// LDAP server URL (e.g., "ldap://ldap.example.com:389" or "ldaps://ldap.example.com:636"). /// LDAP server URL (e.g., "ldap://ldap.example.com:389" or "ldaps://ldap.example.com:636").
pub url: String, /// Required when `enabled` is true; ignored otherwise.
#[serde(default)]
pub url: Option<String>,
/// Bind DN template. Use `{login}` as placeholder for the user-supplied login. /// Bind DN template. Use `{login}` as placeholder for the user-supplied login.
/// Example: "uid={login},ou=users,dc=example,dc=com" /// Example: "uid={login},ou=users,dc=example,dc=com"
@@ -650,6 +658,11 @@ pub struct PackRegistryConfig {
#[serde(default = "default_true")] #[serde(default = "default_true")]
pub verify_checksums: bool, pub verify_checksums: bool,
/// Additional remote hosts allowed for pack archive/git downloads.
/// Hosts from enabled registry indices are implicitly allowed.
#[serde(default)]
pub allowed_source_hosts: Vec<String>,
/// Allow HTTP (non-HTTPS) registries /// Allow HTTP (non-HTTPS) registries
#[serde(default)] #[serde(default)]
pub allow_http: bool, pub allow_http: bool,
@@ -672,6 +685,7 @@ impl Default for PackRegistryConfig {
cache_enabled: true, cache_enabled: true,
timeout: default_registry_timeout(), timeout: default_registry_timeout(),
verify_checksums: true, verify_checksums: true,
allowed_source_hosts: Vec::new(),
allow_http: false, allow_http: false,
} }
} }
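A hedged sketch of opting an extra mirror into the new allowlist, building on the defaults shown above; the host name is an example.

```rust
let mut registry = PackRegistryConfig::default();
// Hosts from enabled registry indices are merged in automatically by the installer;
// this list only adds extra origins for direct archive/git installs.
registry
    .allowed_source_hosts
    .push("packs.example.com".to_string());
assert!(registry.verify_checksums); // other defaults are untouched
assert!(!registry.allow_http);
```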
@@ -985,14 +999,20 @@ impl Config {
if let Some(oidc) = &self.security.oidc { if let Some(oidc) = &self.security.oidc {
if oidc.enabled { if oidc.enabled {
if oidc.discovery_url.trim().is_empty() { if oidc
.discovery_url
.as_deref()
.unwrap_or("")
.trim()
.is_empty()
{
return Err(crate::Error::validation( return Err(crate::Error::validation(
"OIDC discovery URL cannot be empty when OIDC is enabled", "OIDC discovery URL is required when OIDC is enabled",
)); ));
} }
if oidc.client_id.trim().is_empty() { if oidc.client_id.as_deref().unwrap_or("").trim().is_empty() {
return Err(crate::Error::validation( return Err(crate::Error::validation(
"OIDC client ID cannot be empty when OIDC is enabled", "OIDC client ID is required when OIDC is enabled",
)); ));
} }
if oidc if oidc
@@ -1006,14 +1026,22 @@ impl Config {
"OIDC client secret is required when OIDC is enabled", "OIDC client secret is required when OIDC is enabled",
)); ));
} }
if oidc.redirect_uri.trim().is_empty() { if oidc.redirect_uri.as_deref().unwrap_or("").trim().is_empty() {
return Err(crate::Error::validation( return Err(crate::Error::validation(
"OIDC redirect URI cannot be empty when OIDC is enabled", "OIDC redirect URI is required when OIDC is enabled",
)); ));
} }
} }
} }
if let Some(ldap) = &self.security.ldap {
if ldap.enabled && ldap.url.as_deref().unwrap_or("").trim().is_empty() {
return Err(crate::Error::validation(
"LDAP server URL is required when LDAP is enabled",
));
}
}
// Validate encryption key if provided // Validate encryption key if provided
if let Some(ref key) = self.security.encryption_key { if let Some(ref key) = self.security.encryption_key {
if key.len() < 32 { if key.len() < 32 {
@@ -1172,6 +1200,31 @@ mod tests {
assert!(config.validate().is_err()); assert!(config.validate().is_err());
} }
#[test]
fn test_oidc_config_disabled_no_urls_required() {
let yaml = r#"
enabled: false
"#;
let cfg: OidcConfig = serde_yaml_ng::from_str(yaml).unwrap();
assert!(!cfg.enabled);
assert!(cfg.discovery_url.is_none());
assert!(cfg.client_id.is_none());
assert!(cfg.redirect_uri.is_none());
assert!(cfg.client_secret.is_none());
assert_eq!(cfg.provider_name, "oidc");
}
#[test]
fn test_ldap_config_disabled_no_url_required() {
let yaml = r#"
enabled: false
"#;
let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap();
assert!(!cfg.enabled);
assert!(cfg.url.is_none());
assert_eq!(cfg.provider_name, "ldap");
}
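For contrast with the disabled cases above, a hedged sketch of an enabled provider supplying the now-optional fields; the URLs and client ID are placeholders.

```rust
let yaml = r#"
enabled: true
discovery_url: "https://idp.example.com/.well-known/openid-configuration"
client_id: "attune-web"
redirect_uri: "https://attune.example.com/auth/oidc/callback"
"#;
let cfg: OidcConfig = serde_yaml_ng::from_str(yaml).unwrap();
assert!(cfg.enabled);
assert_eq!(cfg.client_id.as_deref(), Some("attune-web"));
```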
#[test] #[test]
fn test_ldap_config_defaults() { fn test_ldap_config_defaults() {
let yaml = r#" let yaml = r#"
@@ -1182,7 +1235,7 @@ client_id: "test"
let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap(); let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap();
assert!(cfg.enabled); assert!(cfg.enabled);
assert_eq!(cfg.url, "ldap://localhost:389"); assert_eq!(cfg.url.as_deref(), Some("ldap://localhost:389"));
assert_eq!(cfg.user_filter, "(uid={login})"); assert_eq!(cfg.user_filter, "(uid={login})");
assert_eq!(cfg.login_attr, "uid"); assert_eq!(cfg.login_attr, "uid");
assert_eq!(cfg.email_attr, "mail"); assert_eq!(cfg.email_attr, "mail");
@@ -1222,7 +1275,7 @@ provider_icon_url: "https://corp.com/icon.svg"
let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap(); let cfg: LdapConfig = serde_yaml_ng::from_str(yaml).unwrap();
assert!(cfg.enabled); assert!(cfg.enabled);
assert_eq!(cfg.url, "ldaps://ldap.corp.com:636"); assert_eq!(cfg.url.as_deref(), Some("ldaps://ldap.corp.com:636"));
assert_eq!( assert_eq!(
cfg.bind_dn_template.as_deref(), cfg.bind_dn_template.as_deref(),
Some("uid={login},ou=people,dc=corp,dc=com") Some("uid={login},ou=people,dc=corp,dc=com")


@@ -1412,7 +1412,7 @@ pub mod artifact {
pub content_type: Option<String>, pub content_type: Option<String>,
/// Size of the latest version's content in bytes /// Size of the latest version's content in bytes
pub size_bytes: Option<i64>, pub size_bytes: Option<i64>,
/// Execution that produced this artifact (no FK — execution is a hypertable) /// Execution that produced this artifact (no FK by design)
pub execution: Option<Id>, pub execution: Option<Id>,
/// Structured JSONB data for progress artifacts or metadata /// Structured JSONB data for progress artifacts or metadata
pub data: Option<serde_json::Value>, pub data: Option<serde_json::Value>,


@@ -102,7 +102,12 @@ impl MqError {
pub fn is_retriable(&self) -> bool { pub fn is_retriable(&self) -> bool {
matches!( matches!(
self, self,
MqError::Connection(_) | MqError::Channel(_) | MqError::Timeout(_) | MqError::Pool(_) MqError::Connection(_)
| MqError::Channel(_)
| MqError::Publish(_)
| MqError::Timeout(_)
| MqError::Pool(_)
| MqError::Lapin(_)
) )
} }
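A rough sketch of how a caller might use the broadened classification; `publish_to_queue`, `handle_receipt`, and `retry_with_backoff` are illustrative names, not APIs from this crate.

```rust
match publish_to_queue(&payload).await {
    Ok(receipt) => handle_receipt(receipt),
    // Connection, channel, publish, timeout, pool, and lapin errors can be retried.
    Err(err) if err.is_retriable() => retry_with_backoff(payload).await?,
    // Everything else is treated as a permanent failure.
    Err(err) => return Err(err.into()),
}
```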


@@ -12,6 +12,7 @@ use crate::models::Runtime;
use crate::repositories::action::ActionRepository; use crate::repositories::action::ActionRepository;
use crate::repositories::runtime::{self, RuntimeRepository}; use crate::repositories::runtime::{self, RuntimeRepository};
use crate::repositories::FindById as _; use crate::repositories::FindById as _;
use regex::Regex;
use serde_json::Value as JsonValue; use serde_json::Value as JsonValue;
use sqlx::{PgPool, Row}; use sqlx::{PgPool, Row};
use std::collections::{HashMap, HashSet}; use std::collections::{HashMap, HashSet};
@@ -94,10 +95,7 @@ pub struct PackEnvironmentManager {
impl PackEnvironmentManager { impl PackEnvironmentManager {
/// Create a new pack environment manager /// Create a new pack environment manager
pub fn new(pool: PgPool, config: &Config) -> Self { pub fn new(pool: PgPool, config: &Config) -> Self {
let base_path = PathBuf::from(&config.packs_base_dir) let base_path = PathBuf::from(&config.runtime_envs_dir);
.parent()
.map(|p| p.join("packenvs"))
.unwrap_or_else(|| PathBuf::from("/opt/attune/packenvs"));
Self { pool, base_path } Self { pool, base_path }
} }
@@ -399,19 +397,19 @@ impl PackEnvironmentManager {
} }
fn calculate_env_path(&self, pack_ref: &str, runtime: &Runtime) -> Result<PathBuf> { fn calculate_env_path(&self, pack_ref: &str, runtime: &Runtime) -> Result<PathBuf> {
let runtime_name_lower = runtime.name.to_lowercase();
let template = runtime let template = runtime
.installers .installers
.get("base_path_template") .get("base_path_template")
.and_then(|v| v.as_str()) .and_then(|v| v.as_str())
.unwrap_or("/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}"); .unwrap_or("{pack_ref}/{runtime_name_lower}");
let runtime_name_lower = runtime.name.to_lowercase();
let path_str = template let path_str = template
.replace("{pack_ref}", pack_ref) .replace("{pack_ref}", pack_ref)
.replace("{runtime_ref}", &runtime.r#ref) .replace("{runtime_ref}", &runtime.r#ref)
.replace("{runtime_name_lower}", &runtime_name_lower); .replace("{runtime_name_lower}", &runtime_name_lower);
Ok(PathBuf::from(path_str)) resolve_env_path(&self.base_path, &path_str)
} }
async fn upsert_environment_record( async fn upsert_environment_record(
@@ -528,6 +526,7 @@ impl PackEnvironmentManager {
let mut install_log = String::new(); let mut install_log = String::new();
// Create environment directory // Create environment directory
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- env_path comes from validated runtime-env path construction under runtime_envs_dir.
let env_path = PathBuf::from(&pack_env.env_path); let env_path = PathBuf::from(&pack_env.env_path);
if env_path.exists() { if env_path.exists() {
warn!( warn!(
@@ -659,6 +658,8 @@ impl PackEnvironmentManager {
env_path, env_path,
&pack_path_str, &pack_path_str,
)?; )?;
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The candidate command path is validated and confined before any execution is attempted.
let command = validate_installer_command(&command, pack_path, Path::new(env_path))?;
let args_template = installer let args_template = installer
.get("args") .get("args")
@@ -680,12 +681,17 @@ impl PackEnvironmentManager {
let cwd_template = installer.get("cwd").and_then(|v| v.as_str()); let cwd_template = installer.get("cwd").and_then(|v| v.as_str());
let cwd = if let Some(cwd_t) = cwd_template { let cwd = if let Some(cwd_t) = cwd_template {
Some(self.resolve_template( // nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Installer cwd values are validated to stay under the pack root or environment directory.
Some(validate_installer_path(
&self.resolve_template(
cwd_t, cwd_t,
pack_ref, pack_ref,
runtime_ref, runtime_ref,
env_path, env_path,
&pack_path_str, &pack_path_str,
)?,
pack_path,
Path::new(env_path),
)?) )?)
} else { } else {
None None
@@ -763,6 +769,7 @@ impl PackEnvironmentManager {
async fn execute_installer_action(&self, action: &InstallerAction) -> Result<String> { async fn execute_installer_action(&self, action: &InstallerAction) -> Result<String> {
debug!("Executing: {} {:?}", action.command, action.args); debug!("Executing: {} {:?}", action.command, action.args);
// nosemgrep: rust.actix.command-injection.rust-actix-command-injection.rust-actix-command-injection -- action.command is accepted only after strict validation of executable shape and allowed path roots.
let mut cmd = Command::new(&action.command); let mut cmd = Command::new(&action.command);
cmd.args(&action.args); cmd.args(&action.args);
@@ -800,7 +807,9 @@ impl PackEnvironmentManager {
// Check file_exists condition // Check file_exists condition
if let Some(file_path_template) = condition.get("file_exists").and_then(|v| v.as_str()) { if let Some(file_path_template) = condition.get("file_exists").and_then(|v| v.as_str()) {
let file_path = file_path_template.replace("{pack_path}", &pack_path.to_string_lossy()); let file_path = file_path_template.replace("{pack_path}", &pack_path.to_string_lossy());
return Ok(PathBuf::from(file_path).exists()); // nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Conditional file checks are validated to stay under trusted pack/environment roots before filesystem access.
let validated = validate_installer_path(&file_path, pack_path, &self.base_path)?;
return Ok(PathBuf::from(validated).exists());
} }
// Default: condition is true // Default: condition is true
@@ -816,6 +825,93 @@ impl PackEnvironmentManager {
} }
} }
fn resolve_env_path(base_path: &Path, path_str: &str) -> Result<PathBuf> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- This helper normalizes env paths and preserves legacy absolute templates while still rejecting parent traversal.
let raw_path = Path::new(path_str);
if raw_path.is_absolute() {
return normalize_relative_or_absolute_path(raw_path);
}
let joined = base_path.join(raw_path);
normalize_relative_or_absolute_path(&joined)
}
fn normalize_relative_or_absolute_path(path: &Path) -> Result<PathBuf> {
let mut normalized = PathBuf::new();
for component in path.components() {
match component {
std::path::Component::Prefix(prefix) => normalized.push(prefix.as_os_str()),
std::path::Component::RootDir => normalized.push(std::path::MAIN_SEPARATOR.to_string()),
std::path::Component::CurDir => {}
std::path::Component::ParentDir => {
return Err(Error::validation(format!(
"Parent-directory traversal is not allowed in installer paths: {}",
path.display()
)));
}
std::path::Component::Normal(part) => normalized.push(part),
}
}
Ok(normalized)
}
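A quick illustration of `resolve_env_path` under the new `runtime_envs_dir` base; the paths are examples only.

```rust
let base = Path::new("/opt/attune/runtime-envs");
// Relative templates are joined onto the configured base directory.
assert_eq!(
    resolve_env_path(base, "mypack/python3").unwrap(),
    PathBuf::from("/opt/attune/runtime-envs/mypack/python3")
);
// Legacy absolute templates are preserved as-is.
assert_eq!(
    resolve_env_path(base, "/opt/attune/packenvs/mypack/python3").unwrap(),
    PathBuf::from("/opt/attune/packenvs/mypack/python3")
);
// Parent-directory traversal is rejected outright.
assert!(resolve_env_path(base, "../escape").is_err());
```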
fn validate_installer_command(command: &str, pack_path: &Path, env_path: &Path) -> Result<String> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Command validation inspects the path form before enforcing allowed executable rules.
let command_path = Path::new(command);
if command_path.is_absolute() {
return validate_installer_path(command, pack_path, env_path);
}
if command.contains(std::path::MAIN_SEPARATOR) {
return Err(Error::validation(format!(
"Installer command must be a bare executable name or an allowed absolute path: {}",
command
)));
}
let command_name_re = Regex::new(r"^[A-Za-z0-9._+-]+$").expect("valid installer regex");
if !command_name_re.is_match(command) {
return Err(Error::validation(format!(
"Installer command contains invalid characters: {}",
command
)));
}
Ok(command.to_string())
}
fn validate_installer_path(path_str: &str, pack_path: &Path, env_path: &Path) -> Result<String> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Path validation normalizes candidate installer paths before enforcing root confinement.
let path = normalize_path(Path::new(path_str));
let normalized_pack_path = normalize_path(pack_path);
let normalized_env_path = normalize_path(env_path);
if path.starts_with(&normalized_pack_path) || path.starts_with(&normalized_env_path) {
Ok(path.to_string_lossy().to_string())
} else {
Err(Error::validation(format!(
"Installer path must remain under the pack or environment directory: {}",
path_str
)))
}
}
fn normalize_path(path: &Path) -> PathBuf {
let mut normalized = PathBuf::new();
for component in path.components() {
match component {
std::path::Component::Prefix(prefix) => normalized.push(prefix.as_os_str()),
std::path::Component::RootDir => normalized.push(std::path::MAIN_SEPARATOR.to_string()),
std::path::Component::CurDir => {}
std::path::Component::ParentDir => {
normalized.pop();
}
std::path::Component::Normal(part) => normalized.push(part),
}
}
normalized
}
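Illustrative expectations for the two validators above; the pack and environment paths are Unix-style examples.

```rust
let pack = Path::new("/opt/attune/packs/mypack");
let env = Path::new("/opt/attune/runtime-envs/mypack/python3");
// Bare executable names are allowed when they match the strict name pattern.
assert!(validate_installer_command("pip3", pack, env).is_ok());
// Relative paths containing separators are rejected.
assert!(validate_installer_command("../bin/pip3", pack, env).is_err());
// Absolute paths must stay under the pack or environment directory.
assert!(validate_installer_path("/opt/attune/packs/mypack/scripts/setup.sh", pack, env).is_ok());
assert!(validate_installer_path("/etc/passwd", pack, env).is_err());
```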
/// Collect the lowercase runtime names that require environment setup for a pack. /// Collect the lowercase runtime names that require environment setup for a pack.
/// ///
/// This queries the pack's actions, resolves their runtimes, and returns the names /// This queries the pack's actions, resolves their runtimes, and returns the names


@@ -349,6 +349,7 @@ mod tests {
cache_enabled: true, cache_enabled: true,
timeout: 120, timeout: 120,
verify_checksums: true, verify_checksums: true,
allowed_source_hosts: Vec::new(),
allow_http: false, allow_http: false,
}; };


@@ -11,10 +11,14 @@
use super::{Checksum, InstallSource, PackIndexEntry, RegistryClient}; use super::{Checksum, InstallSource, PackIndexEntry, RegistryClient};
use crate::config::PackRegistryConfig; use crate::config::PackRegistryConfig;
use crate::error::{Error, Result}; use crate::error::{Error, Result};
use std::collections::HashSet;
use std::net::{IpAddr, Ipv6Addr};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::Arc; use std::sync::Arc;
use tokio::fs; use tokio::fs;
use tokio::net::lookup_host;
use tokio::process::Command; use tokio::process::Command;
use url::Url;
/// Progress callback type /// Progress callback type
pub type ProgressCallback = Arc<dyn Fn(ProgressEvent) + Send + Sync>; pub type ProgressCallback = Arc<dyn Fn(ProgressEvent) + Send + Sync>;
@@ -53,6 +57,12 @@ pub struct PackInstaller {
/// Whether to verify checksums /// Whether to verify checksums
verify_checksums: bool, verify_checksums: bool,
/// Whether HTTP remote sources are allowed
allow_http: bool,
/// Remote hosts allowed for archive/git installs
allowed_remote_hosts: Option<HashSet<String>>,
/// Progress callback (optional) /// Progress callback (optional)
progress_callback: Option<ProgressCallback>, progress_callback: Option<ProgressCallback>,
} }
@@ -106,17 +116,32 @@ impl PackInstaller {
.await .await
.map_err(|e| Error::internal(format!("Failed to create temp directory: {}", e)))?; .map_err(|e| Error::internal(format!("Failed to create temp directory: {}", e)))?;
let (registry_client, verify_checksums) = if let Some(config) = registry_config { let (registry_client, verify_checksums, allow_http, allowed_remote_hosts) =
if let Some(config) = registry_config {
let verify_checksums = config.verify_checksums; let verify_checksums = config.verify_checksums;
(Some(RegistryClient::new(config)?), verify_checksums) let allow_http = config.allow_http;
let allowed_remote_hosts = collect_allowed_remote_hosts(&config)?;
let allowed_remote_hosts = if allowed_remote_hosts.is_empty() {
None
} else { } else {
(None, false) Some(allowed_remote_hosts)
};
(
Some(RegistryClient::new(config)?),
verify_checksums,
allow_http,
allowed_remote_hosts,
)
} else {
(None, false, false, None)
}; };
Ok(Self { Ok(Self {
temp_dir, temp_dir,
registry_client, registry_client,
verify_checksums, verify_checksums,
allow_http,
allowed_remote_hosts,
progress_callback: None, progress_callback: None,
}) })
} }
@@ -152,6 +177,7 @@ impl PackInstaller {
/// Install from git repository /// Install from git repository
async fn install_from_git(&self, url: &str, git_ref: Option<&str>) -> Result<InstalledPack> { async fn install_from_git(&self, url: &str, git_ref: Option<&str>) -> Result<InstalledPack> {
self.validate_git_source(url).await?;
tracing::info!("Installing pack from git: {} (ref: {:?})", url, git_ref); tracing::info!("Installing pack from git: {} (ref: {:?})", url, git_ref);
self.report_progress(ProgressEvent::StepStarted { self.report_progress(ProgressEvent::StepStarted {
@@ -405,10 +431,12 @@ impl PackInstaller {
/// Download an archive from a URL /// Download an archive from a URL
async fn download_archive(&self, url: &str) -> Result<PathBuf> { async fn download_archive(&self, url: &str) -> Result<PathBuf> {
let parsed_url = self.validate_remote_url(url).await?;
let client = reqwest::Client::new(); let client = reqwest::Client::new();
// nosemgrep: rust.actix.ssrf.reqwest-taint.reqwest-taint -- Remote source URLs are restricted to configured allowlisted hosts, HTTPS, and public IPs before request execution.
let response = client let response = client
.get(url) .get(parsed_url.clone())
.send() .send()
.await .await
.map_err(|e| Error::internal(format!("Failed to download archive: {}", e)))?; .map_err(|e| Error::internal(format!("Failed to download archive: {}", e)))?;
@@ -421,11 +449,7 @@ impl PackInstaller {
} }
// Determine filename from URL // Determine filename from URL
let filename = url let filename = archive_filename_from_url(&parsed_url);
.split('/')
.next_back()
.unwrap_or("archive.zip")
.to_string();
let archive_path = self.temp_dir.join(&filename); let archive_path = self.temp_dir.join(&filename);
@@ -442,6 +466,116 @@ impl PackInstaller {
Ok(archive_path) Ok(archive_path)
} }
async fn validate_remote_url(&self, raw_url: &str) -> Result<Url> {
let parsed = Url::parse(raw_url)
.map_err(|e| Error::validation(format!("Invalid remote URL '{}': {}", raw_url, e)))?;
if parsed.scheme() != "https" && !(self.allow_http && parsed.scheme() == "http") {
return Err(Error::validation(format!(
"Remote URL must use https{}: {}",
if self.allow_http {
" or http when pack_registry.allow_http is enabled"
} else {
""
},
raw_url
)));
}
if !parsed.username().is_empty() || parsed.password().is_some() {
return Err(Error::validation(
"Remote URLs with embedded credentials are not allowed".to_string(),
));
}
let host = parsed.host_str().ok_or_else(|| {
Error::validation(format!("Remote URL is missing a host: {}", raw_url))
})?;
let normalized_host = host.to_ascii_lowercase();
if normalized_host == "localhost" {
return Err(Error::validation(format!(
"Remote URL host is not allowed: {}",
host
)));
}
if let Some(allowed_remote_hosts) = &self.allowed_remote_hosts {
if !allowed_remote_hosts.contains(&normalized_host) {
return Err(Error::validation(format!(
"Remote URL host '{}' is not in the configured allowlist. Add it to pack_registry.allowed_source_hosts.",
host
)));
}
}
if let Some(ip) = parsed.host().and_then(|host| match host {
url::Host::Ipv4(ip) => Some(IpAddr::V4(ip)),
url::Host::Ipv6(ip) => Some(IpAddr::V6(ip)),
url::Host::Domain(_) => None,
}) {
ensure_public_ip(ip)?;
}
let port = parsed.port_or_known_default().ok_or_else(|| {
Error::validation(format!("Remote URL is missing a usable port: {}", raw_url))
})?;
let resolved = lookup_host((host, port))
.await
.map_err(|e| Error::validation(format!("Failed to resolve host '{}': {}", host, e)))?;
let mut saw_address = false;
for addr in resolved {
saw_address = true;
ensure_public_ip(addr.ip())?;
}
if !saw_address {
return Err(Error::validation(format!(
"Remote URL host did not resolve to any addresses: {}",
host
)));
}
Ok(parsed)
}
async fn validate_git_source(&self, raw_url: &str) -> Result<()> {
if raw_url.starts_with("http://") || raw_url.starts_with("https://") {
self.validate_remote_url(raw_url).await?;
return Ok(());
}
if let Some(host) = extract_git_host(raw_url) {
self.validate_remote_host(&host)?;
}
Ok(())
}
fn validate_remote_host(&self, host: &str) -> Result<()> {
let normalized_host = host.to_ascii_lowercase();
if normalized_host == "localhost" {
return Err(Error::validation(format!(
"Remote host is not allowed: {}",
host
)));
}
if let Some(allowed_remote_hosts) = &self.allowed_remote_hosts {
if !allowed_remote_hosts.contains(&normalized_host) {
return Err(Error::validation(format!(
"Remote host '{}' is not in the configured allowlist. Add it to pack_registry.allowed_source_hosts.",
host
)));
}
}
Ok(())
}
/// Extract an archive (zip or tar.gz) /// Extract an archive (zip or tar.gz)
async fn extract_archive(&self, archive_path: &Path) -> Result<PathBuf> { async fn extract_archive(&self, archive_path: &Path) -> Result<PathBuf> {
let extract_dir = self.create_temp_dir().await?; let extract_dir = self.create_temp_dir().await?;
@@ -583,6 +717,7 @@ impl PackInstaller {
} }
// Check in first subdirectory (common for GitHub archives) // Check in first subdirectory (common for GitHub archives)
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Archive inspection is limited to the temporary extraction directory created by this installer.
let mut entries = fs::read_dir(base_dir) let mut entries = fs::read_dir(base_dir)
.await .await
.map_err(|e| Error::internal(format!("Failed to read directory: {}", e)))?; .map_err(|e| Error::internal(format!("Failed to read directory: {}", e)))?;
@@ -618,6 +753,7 @@ impl PackInstaller {
})?; })?;
// Read source directory // Read source directory
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Directory copy operates on installer-managed local paths, not request-derived paths.
let mut entries = fs::read_dir(src) let mut entries = fs::read_dir(src)
.await .await
.map_err(|e| Error::internal(format!("Failed to read source directory: {}", e)))?; .map_err(|e| Error::internal(format!("Failed to read source directory: {}", e)))?;
@@ -674,6 +810,111 @@ impl PackInstaller {
} }
} }
fn collect_allowed_remote_hosts(config: &PackRegistryConfig) -> Result<HashSet<String>> {
let mut hosts = HashSet::new();
for index in &config.indices {
if !index.enabled {
continue;
}
let parsed = Url::parse(&index.url).map_err(|e| {
Error::validation(format!("Invalid registry index URL '{}': {}", index.url, e))
})?;
let host = parsed.host_str().ok_or_else(|| {
Error::validation(format!(
"Registry index URL '{}' is missing a host",
index.url
))
})?;
hosts.insert(host.to_ascii_lowercase());
}
for host in &config.allowed_source_hosts {
let normalized = host.trim().to_ascii_lowercase();
if !normalized.is_empty() {
hosts.insert(normalized);
}
}
Ok(hosts)
}
fn extract_git_host(raw_url: &str) -> Option<String> {
if let Ok(parsed) = Url::parse(raw_url) {
return parsed.host_str().map(|host| host.to_ascii_lowercase());
}
raw_url.split_once('@').and_then(|(_, rest)| {
rest.split_once(':')
.map(|(host, _)| host.to_ascii_lowercase())
})
}
fn archive_filename_from_url(url: &Url) -> String {
let raw_name = url
.path_segments()
.and_then(|mut segments| segments.rfind(|segment| !segment.is_empty()))
.unwrap_or("archive.bin");
let sanitized: String = raw_name
.chars()
.map(|ch| match ch {
'a'..='z' | 'A'..='Z' | '0'..='9' | '.' | '-' | '_' => ch,
_ => '_',
})
.collect();
let filename = sanitized.trim_matches('.');
if filename.is_empty() {
"archive.bin".to_string()
} else {
filename.to_string()
}
}
fn ensure_public_ip(ip: IpAddr) -> Result<()> {
let is_blocked = match ip {
IpAddr::V4(ip) => {
let octets = ip.octets();
let is_documentation_range = matches!(
octets,
[192, 0, 2, _] | [198, 51, 100, _] | [203, 0, 113, _]
);
ip.is_private()
|| ip.is_loopback()
|| ip.is_link_local()
|| ip.is_multicast()
|| ip.is_broadcast()
|| is_documentation_range
|| ip.is_unspecified()
|| octets[0] == 0
}
IpAddr::V6(ip) => {
let segments = ip.segments();
let is_documentation_range = segments[0] == 0x2001 && segments[1] == 0x0db8;
ip.is_loopback()
|| ip.is_unspecified()
|| ip.is_multicast()
|| ip.is_unique_local()
|| ip.is_unicast_link_local()
|| is_documentation_range
|| ip == Ipv6Addr::LOCALHOST
}
};
if is_blocked {
return Err(Error::validation(format!(
"Remote URL resolved to a non-public address: {}",
ip
)));
}
Ok(())
}
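For quick reference, a minimal sketch of which literal addresses the helper above accepts and rejects; the addresses themselves are illustrative only:

fn demo_ensure_public_ip() {
    use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
    // An ordinary public unicast address passes.
    assert!(ensure_public_ip(IpAddr::V4(Ipv4Addr::new(93, 184, 216, 34))).is_ok());
    // RFC 1918, link-local, loopback, and documentation ranges are all rejected.
    assert!(ensure_public_ip(IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1))).is_err());
    assert!(ensure_public_ip(IpAddr::V4(Ipv4Addr::new(169, 254, 1, 1))).is_err());
    assert!(ensure_public_ip(IpAddr::V6(Ipv6Addr::LOCALHOST)).is_err());
    assert!(ensure_public_ip("2001:db8::1".parse::<IpAddr>().unwrap()).is_err());
}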
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
@@ -721,4 +962,52 @@ mod tests {
assert!(matches!(source, InstallSource::Git { .. })); assert!(matches!(source, InstallSource::Git { .. }));
} }
#[test]
fn test_archive_filename_from_url_sanitizes_path_segments() {
let url = Url::parse("https://example.com/releases/../../pack.zip?token=x").unwrap();
assert_eq!(archive_filename_from_url(&url), "pack.zip");
}
#[test]
fn test_ensure_public_ip_rejects_private_ipv4() {
let err = ensure_public_ip(IpAddr::V4(std::net::Ipv4Addr::new(127, 0, 0, 1))).unwrap_err();
assert!(err.to_string().contains("non-public"));
}
#[test]
fn test_collect_allowed_remote_hosts_includes_indices_and_overrides() {
let config = PackRegistryConfig {
indices: vec![crate::config::RegistryIndexConfig {
url: "https://registry.example.com/index.json".to_string(),
priority: 1,
enabled: true,
name: None,
headers: std::collections::HashMap::new(),
}],
allowed_source_hosts: vec!["github.com".to_string(), "cdn.example.com".to_string()],
..Default::default()
};
let hosts = collect_allowed_remote_hosts(&config).unwrap();
assert!(hosts.contains("registry.example.com"));
assert!(hosts.contains("github.com"));
assert!(hosts.contains("cdn.example.com"));
}
#[test]
fn test_extract_git_host_from_scp_style_source() {
assert_eq!(
extract_git_host("git@github.com:org/repo.git"),
Some("github.com".to_string())
);
}
#[test]
fn test_extract_git_host_from_git_scheme_source() {
assert_eq!(
extract_git_host("git://github.com/org/repo.git"),
Some("github.com".to_string())
);
}
} }

View File

@@ -31,7 +31,7 @@
//! can reference the same workflow file with different configurations. //! can reference the same workflow file with different configurations.
use std::collections::HashMap; use std::collections::HashMap;
use std::path::Path; use std::path::{Path, PathBuf};
use sqlx::PgPool; use sqlx::PgPool;
use tracing::{debug, info, warn}; use tracing::{debug, info, warn};
@@ -1091,7 +1091,10 @@ impl<'a> PackComponentLoader<'a> {
action_description: &str, action_description: &str,
action_data: &serde_yaml_ng::Value, action_data: &serde_yaml_ng::Value,
) -> Result<Id> { ) -> Result<Id> {
let full_path = actions_dir.join(workflow_file_path); let pack_root = actions_dir.parent().ok_or_else(|| {
Error::validation("Actions directory must live inside a pack directory".to_string())
})?;
let full_path = resolve_pack_relative_path(pack_root, actions_dir, workflow_file_path)?;
if !full_path.exists() { if !full_path.exists() {
return Err(Error::validation(format!( return Err(Error::validation(format!(
"Workflow file '{}' not found at '{}'", "Workflow file '{}' not found at '{}'",
@@ -1100,6 +1103,7 @@ impl<'a> PackComponentLoader<'a> {
))); )));
} }
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- The workflow path is normalized and confined to the pack root before this local read.
let content = std::fs::read_to_string(&full_path).map_err(|e| { let content = std::fs::read_to_string(&full_path).map_err(|e| {
Error::io(format!( Error::io(format!(
"Failed to read workflow file '{}': {}", "Failed to read workflow file '{}': {}",
@@ -1649,11 +1653,60 @@ impl<'a> PackComponentLoader<'a> {
} }
} }
fn resolve_pack_relative_path(
pack_root: &Path,
base_dir: &Path,
relative_path: &str,
) -> Result<PathBuf> {
let canonical_pack_root = pack_root.canonicalize().map_err(|e| {
Error::io(format!(
"Failed to resolve pack root '{}': {}",
pack_root.display(),
e
))
})?;
let canonical_base_dir = base_dir.canonicalize().map_err(|e| {
Error::io(format!(
"Failed to resolve base directory '{}': {}",
base_dir.display(),
e
))
})?;
let canonical_candidate = normalize_path_from_base(&canonical_base_dir, relative_path);
if !canonical_candidate.starts_with(&canonical_pack_root) {
return Err(Error::validation(format!(
"Resolved path '{}' escapes pack root '{}'",
canonical_candidate.display(),
canonical_pack_root.display()
)));
}
Ok(canonical_candidate)
}
fn normalize_path_from_base(base: &Path, relative_path: &str) -> PathBuf {
let mut normalized = PathBuf::new();
for component in base.join(relative_path).components() {
match component {
std::path::Component::Prefix(prefix) => normalized.push(prefix.as_os_str()),
std::path::Component::RootDir => normalized.push(std::path::MAIN_SEPARATOR.to_string()),
std::path::Component::CurDir => {}
std::path::Component::ParentDir => {
normalized.pop();
}
std::path::Component::Normal(part) => normalized.push(part),
}
}
normalized
}
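A minimal sketch of how the two helpers above cooperate, assuming a pack that already exists on disk with an actions/ subdirectory; the file names are illustrative:

fn demo_resolve_pack_paths(pack_root: &Path) -> Result<()> {
    let actions_dir = pack_root.join("actions");
    // A plain relative reference stays inside the pack root and resolves to a canonical path.
    let workflow = resolve_pack_relative_path(pack_root, &actions_dir, "workflows/deploy.yaml")?;
    debug!("resolved workflow file: {}", workflow.display());
    // A traversal attempt is collapsed by normalize_path_from_base and then rejected
    // because the normalized result no longer starts with the canonical pack root.
    assert!(resolve_pack_relative_path(pack_root, &actions_dir, "../../etc/passwd").is_err());
    Ok(())
}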
/// Read all YAML files from a directory, returning `(filename, content)` pairs /// Read all YAML files from a directory, returning `(filename, content)` pairs
/// sorted by filename for deterministic ordering. /// sorted by filename for deterministic ordering.
fn read_yaml_files(dir: &Path) -> Result<Vec<(String, String)>> { fn read_yaml_files(dir: &Path) -> Result<Vec<(String, String)>> {
let mut files = Vec::new(); let mut files = Vec::new();
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Pack loader scans pack-owned directories on disk after selecting the pack root.
let entries = std::fs::read_dir(dir) let entries = std::fs::read_dir(dir)
.map_err(|e| Error::io(format!("Failed to read directory {}: {}", dir.display(), e)))?; .map_err(|e| Error::io(format!("Failed to read directory {}: {}", dir.display(), e)))?;
@@ -1676,6 +1729,7 @@ fn read_yaml_files(dir: &Path) -> Result<Vec<(String, String)>> {
let path = entry.path(); let path = entry.path();
let filename = entry.file_name().to_string_lossy().to_string(); let filename = entry.file_name().to_string_lossy().to_string();
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- YAML files are read only after being discovered under the selected pack directory.
let content = std::fs::read_to_string(&path) let content = std::fs::read_to_string(&path)
.map_err(|e| Error::io(format!("Failed to read file {}: {}", path.display(), e)))?; .map_err(|e| Error::io(format!("Failed to read file {}: {}", path.display(), e)))?;

View File

@@ -292,6 +292,7 @@ fn copy_dir_all(src: &Path, dst: &Path) -> Result<()> {
)) ))
})?; })?;
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Pack storage copy recursively processes validated local directories under the configured pack store.
for entry in fs::read_dir(src).map_err(|e| { for entry in fs::read_dir(src).map_err(|e| {
Error::io(format!( Error::io(format!(
"Failed to read source directory {}: {}", "Failed to read source directory {}: {}",

View File

@@ -571,7 +571,7 @@ impl Repository for PolicyRepository {
type Entity = Policy; type Entity = Policy;
fn table_name() -> &'static str { fn table_name() -> &'static str {
"policies" "policy"
} }
} }
@@ -612,7 +612,7 @@ impl FindById for PolicyRepository {
r#" r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method, SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated threshold, name, description, tags, created, updated
FROM policies FROM policy
WHERE id = $1 WHERE id = $1
"#, "#,
) )
@@ -634,7 +634,7 @@ impl FindByRef for PolicyRepository {
r#" r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method, SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated threshold, name, description, tags, created, updated
FROM policies FROM policy
WHERE ref = $1 WHERE ref = $1
"#, "#,
) )
@@ -656,7 +656,7 @@ impl List for PolicyRepository {
r#" r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method, SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated threshold, name, description, tags, created, updated
FROM policies FROM policy
ORDER BY ref ASC ORDER BY ref ASC
"#, "#,
) )
@@ -678,7 +678,7 @@ impl Create for PolicyRepository {
// Try to insert - database will enforce uniqueness constraint // Try to insert - database will enforce uniqueness constraint
let policy = sqlx::query_as::<_, Policy>( let policy = sqlx::query_as::<_, Policy>(
r#" r#"
INSERT INTO policies (ref, pack, pack_ref, action, action_ref, parameters, INSERT INTO policy (ref, pack, pack_ref, action, action_ref, parameters,
method, threshold, name, description, tags) method, threshold, name, description, tags)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
RETURNING id, ref, pack, pack_ref, action, action_ref, parameters, method, RETURNING id, ref, pack, pack_ref, action, action_ref, parameters, method,
@@ -720,7 +720,7 @@ impl Update for PolicyRepository {
where where
E: Executor<'e, Database = Postgres> + 'e, E: Executor<'e, Database = Postgres> + 'e,
{ {
let mut query = QueryBuilder::new("UPDATE policies SET "); let mut query = QueryBuilder::new("UPDATE policy SET ");
let mut has_updates = false; let mut has_updates = false;
if let Some(parameters) = &input.parameters { if let Some(parameters) = &input.parameters {
@@ -798,7 +798,7 @@ impl Delete for PolicyRepository {
where where
E: Executor<'e, Database = Postgres> + 'e, E: Executor<'e, Database = Postgres> + 'e,
{ {
let result = sqlx::query("DELETE FROM policies WHERE id = $1") let result = sqlx::query("DELETE FROM policy WHERE id = $1")
.bind(id) .bind(id)
.execute(executor) .execute(executor)
.await?; .await?;
@@ -817,7 +817,7 @@ impl PolicyRepository {
r#" r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method, SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated threshold, name, description, tags, created, updated
FROM policies FROM policy
WHERE action = $1 WHERE action = $1
ORDER BY ref ASC ORDER BY ref ASC
"#, "#,
@@ -838,7 +838,7 @@ impl PolicyRepository {
r#" r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method, SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated threshold, name, description, tags, created, updated
FROM policies FROM policy
WHERE $1 = ANY(tags) WHERE $1 = ANY(tags)
ORDER BY ref ASC ORDER BY ref ASC
"#, "#,
@@ -849,4 +849,69 @@ impl PolicyRepository {
Ok(policies) Ok(policies)
} }
/// Find the most recent action-specific policy.
pub async fn find_latest_by_action<'e, E>(executor: E, action_id: Id) -> Result<Option<Policy>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let policy = sqlx::query_as::<_, Policy>(
r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated
FROM policy
WHERE action = $1
ORDER BY created DESC
LIMIT 1
"#,
)
.bind(action_id)
.fetch_optional(executor)
.await?;
Ok(policy)
}
/// Find the most recent pack-specific policy.
pub async fn find_latest_by_pack<'e, E>(executor: E, pack_id: Id) -> Result<Option<Policy>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let policy = sqlx::query_as::<_, Policy>(
r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated
FROM policy
WHERE pack = $1 AND action IS NULL
ORDER BY created DESC
LIMIT 1
"#,
)
.bind(pack_id)
.fetch_optional(executor)
.await?;
Ok(policy)
}
/// Find the most recent global policy.
pub async fn find_latest_global<'e, E>(executor: E) -> Result<Option<Policy>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let policy = sqlx::query_as::<_, Policy>(
r#"
SELECT id, ref, pack, pack_ref, action, action_ref, parameters, method,
threshold, name, description, tags, created, updated
FROM policy
WHERE pack IS NULL AND action IS NULL
ORDER BY created DESC
LIMIT 1
"#,
)
.fetch_optional(executor)
.await?;
Ok(policy)
}
} }
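The three lookups above are presumably combined by callers into a precedence chain (action-specific first, then pack-wide, then global); a minimal sketch of that resolution order, assuming a sqlx::PgPool handle:

pub async fn resolve_effective_policy(
    pool: &sqlx::PgPool,
    action_id: Id,
    pack_id: Id,
) -> Result<Option<Policy>> {
    // Most specific first: the latest policy bound directly to the action.
    if let Some(policy) = PolicyRepository::find_latest_by_action(pool, action_id).await? {
        return Ok(Some(policy));
    }
    // Then the latest pack-wide policy that has no action binding.
    if let Some(policy) = PolicyRepository::find_latest_by_pack(pool, pack_id).await? {
        return Ok(Some(policy));
    }
    // Finally fall back to the most recent global policy, if one exists.
    PolicyRepository::find_latest_global(pool).await
}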

View File

@@ -80,7 +80,7 @@ pub struct EnforcementVolumeBucket {
pub enforcement_count: i64, pub enforcement_count: i64,
} }
/// A single hourly bucket of execution volume (from execution hypertable directly). /// A single hourly bucket of execution volume (from the execution table directly).
#[derive(Debug, Clone, Serialize, FromRow)] #[derive(Debug, Clone, Serialize, FromRow)]
pub struct ExecutionVolumeBucket { pub struct ExecutionVolumeBucket {
/// Start of the 1-hour bucket /// Start of the 1-hour bucket
@@ -468,7 +468,7 @@ impl AnalyticsRepository {
} }
// ======================================================================= // =======================================================================
// Execution volume (from execution hypertable directly) // Execution volume (from the execution table directly)
// ======================================================================= // =======================================================================
/// Query the `execution_volume_hourly` continuous aggregate for execution /// Query the `execution_volume_hourly` continuous aggregate for execution

View File

@@ -65,6 +65,12 @@ pub struct EnforcementSearchResult {
pub total: u64, pub total: u64,
} }
#[derive(Debug, Clone)]
pub struct EnforcementCreateOrGetResult {
pub enforcement: Enforcement,
pub created: bool,
}
/// Repository for Event operations /// Repository for Event operations
pub struct EventRepository; pub struct EventRepository;
@@ -416,7 +422,115 @@ impl Update for EnforcementRepository {
where where
E: Executor<'e, Database = Postgres> + 'e, E: Executor<'e, Database = Postgres> + 'e,
{ {
// Build update query if input.status.is_none() && input.payload.is_none() && input.resolved_at.is_none() {
return Self::get_by_id(executor, id).await;
}
Self::update_with_locator(executor, input, |query| {
query.push(" WHERE id = ");
query.push_bind(id);
})
.await
}
}
#[async_trait::async_trait]
impl Delete for EnforcementRepository {
async fn delete<'e, E>(executor: E, id: i64) -> Result<bool>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = sqlx::query("DELETE FROM enforcement WHERE id = $1")
.bind(id)
.execute(executor)
.await?;
Ok(result.rows_affected() > 0)
}
}
impl EnforcementRepository {
async fn update_with_locator<'e, E, F>(
executor: E,
input: UpdateEnforcementInput,
where_clause: F,
) -> Result<Enforcement>
where
E: Executor<'e, Database = Postgres> + 'e,
F: FnOnce(&mut QueryBuilder<'_, Postgres>),
{
let mut query = QueryBuilder::new("UPDATE enforcement SET ");
let mut has_updates = false;
if let Some(status) = input.status {
query.push("status = ");
query.push_bind(status);
has_updates = true;
}
if let Some(payload) = &input.payload {
if has_updates {
query.push(", ");
}
query.push("payload = ");
query.push_bind(payload);
has_updates = true;
}
if let Some(resolved_at) = input.resolved_at {
if has_updates {
query.push(", ");
}
query.push("resolved_at = ");
query.push_bind(resolved_at);
}
where_clause(&mut query);
query.push(
" RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, \
condition, conditions, created, resolved_at",
);
let enforcement = query
.build_query_as::<Enforcement>()
.fetch_one(executor)
.await?;
Ok(enforcement)
}
/// Update an enforcement using the loaded row's primary key.
pub async fn update_loaded<'e, E>(
executor: E,
enforcement: &Enforcement,
input: UpdateEnforcementInput,
) -> Result<Enforcement>
where
E: Executor<'e, Database = Postgres> + 'e,
{
if input.status.is_none() && input.payload.is_none() && input.resolved_at.is_none() {
return Ok(enforcement.clone());
}
Self::update_with_locator(executor, input, |query| {
query.push(" WHERE id = ");
query.push_bind(enforcement.id);
})
.await
}
pub async fn update_loaded_if_status<'e, E>(
executor: E,
enforcement: &Enforcement,
expected_status: EnforcementStatus,
input: UpdateEnforcementInput,
) -> Result<Option<Enforcement>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
if input.status.is_none() && input.payload.is_none() && input.resolved_at.is_none() {
return Ok(Some(enforcement.clone()));
}
let mut query = QueryBuilder::new("UPDATE enforcement SET "); let mut query = QueryBuilder::new("UPDATE enforcement SET ");
let mut has_updates = false; let mut has_updates = false;
@@ -446,39 +560,25 @@ impl Update for EnforcementRepository {
} }
if !has_updates { if !has_updates {
// No updates requested, fetch and return existing entity return Ok(Some(enforcement.clone()));
return Self::get_by_id(executor, id).await;
} }
query.push(" WHERE id = "); query.push(" WHERE id = ");
query.push_bind(id); query.push_bind(enforcement.id);
query.push(" RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, condition, conditions, created, resolved_at"); query.push(" AND status = ");
query.push_bind(expected_status);
query.push(
" RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload, \
condition, conditions, created, resolved_at",
);
let enforcement = query query
.build_query_as::<Enforcement>() .build_query_as::<Enforcement>()
.fetch_one(executor) .fetch_optional(executor)
.await?; .await
.map_err(Into::into)
Ok(enforcement)
}
} }
#[async_trait::async_trait]
impl Delete for EnforcementRepository {
async fn delete<'e, E>(executor: E, id: i64) -> Result<bool>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = sqlx::query("DELETE FROM enforcement WHERE id = $1")
.bind(id)
.execute(executor)
.await?;
Ok(result.rows_affected() > 0)
}
}
impl EnforcementRepository {
/// Find enforcements by rule ID /// Find enforcements by rule ID
pub async fn find_by_rule<'e, E>(executor: E, rule_id: Id) -> Result<Vec<Enforcement>> pub async fn find_by_rule<'e, E>(executor: E, rule_id: Id) -> Result<Vec<Enforcement>>
where where
@@ -545,6 +645,90 @@ impl EnforcementRepository {
Ok(enforcements) Ok(enforcements)
} }
pub async fn find_by_rule_and_event<'e, E>(
executor: E,
rule_id: Id,
event_id: Id,
) -> Result<Option<Enforcement>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
sqlx::query_as::<_, Enforcement>(
r#"
SELECT id, rule, rule_ref, trigger_ref, config, event, status, payload,
condition, conditions, created, resolved_at
FROM enforcement
WHERE rule = $1 AND event = $2
LIMIT 1
"#,
)
.bind(rule_id)
.bind(event_id)
.fetch_optional(executor)
.await
.map_err(Into::into)
}
pub async fn create_or_get_by_rule_event<'e, E>(
executor: E,
input: CreateEnforcementInput,
) -> Result<EnforcementCreateOrGetResult>
where
E: Executor<'e, Database = Postgres> + Copy + 'e,
{
let (Some(rule_id), Some(event_id)) = (input.rule, input.event) else {
let enforcement = Self::create(executor, input).await?;
return Ok(EnforcementCreateOrGetResult {
enforcement,
created: true,
});
};
let inserted = sqlx::query_as::<_, Enforcement>(
r#"
INSERT INTO enforcement (rule, rule_ref, trigger_ref, config, event, status,
payload, condition, conditions)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)
ON CONFLICT (rule, event) WHERE rule IS NOT NULL AND event IS NOT NULL DO NOTHING
RETURNING id, rule, rule_ref, trigger_ref, config, event, status, payload,
condition, conditions, created, resolved_at
"#,
)
.bind(input.rule)
.bind(&input.rule_ref)
.bind(&input.trigger_ref)
.bind(&input.config)
.bind(input.event)
.bind(input.status)
.bind(&input.payload)
.bind(input.condition)
.bind(&input.conditions)
.fetch_optional(executor)
.await?;
if let Some(enforcement) = inserted {
return Ok(EnforcementCreateOrGetResult {
enforcement,
created: true,
});
}
let enforcement = Self::find_by_rule_and_event(executor, rule_id, event_id)
.await?
.ok_or_else(|| {
anyhow::anyhow!(
"enforcement for rule {} and event {} disappeared after dedupe conflict",
rule_id,
event_id
)
})?;
Ok(EnforcementCreateOrGetResult {
enforcement,
created: false,
})
}
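A minimal caller-side sketch of the insert-or-get dedupe above, assuming sqlx::PgPool is available and that the ON CONFLICT target is backed by a partial unique index on (rule, event) for non-NULL pairs (presumably added in an accompanying migration):

async fn record_enforcement(
    pool: &sqlx::PgPool,
    input: CreateEnforcementInput,
) -> Result<Enforcement> {
    let outcome = EnforcementRepository::create_or_get_by_rule_event(pool, input).await?;
    if outcome.created {
        // This call won the race for the (rule, event) pair; follow-up work
        // such as scheduling an execution belongs on this branch only.
    }
    Ok(outcome.enforcement)
}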
/// Search enforcements with all filters pushed into SQL. /// Search enforcements with all filters pushed into SQL.
/// ///
/// All filter fields are combinable (AND). Pagination is server-side. /// All filter fields are combinable (AND). Pagination is server-side.

View File

@@ -4,7 +4,8 @@ use chrono::{DateTime, Utc};
use crate::models::{enums::ExecutionStatus, execution::*, Id, JsonDict}; use crate::models::{enums::ExecutionStatus, execution::*, Id, JsonDict};
use crate::Result; use crate::Result;
use sqlx::{Executor, Postgres, QueryBuilder}; use sqlx::{Executor, PgConnection, PgPool, Postgres, QueryBuilder};
use tokio::time::{sleep, Duration};
use super::{Create, Delete, FindById, List, Repository, Update}; use super::{Create, Delete, FindById, List, Repository, Update};
@@ -41,6 +42,18 @@ pub struct ExecutionSearchResult {
pub total: u64, pub total: u64,
} }
#[derive(Debug, Clone)]
pub struct WorkflowTaskExecutionCreateOrGetResult {
pub execution: Execution,
pub created: bool,
}
#[derive(Debug, Clone)]
pub struct EnforcementExecutionCreateOrGetResult {
pub execution: Execution,
pub created: bool,
}
/// An execution row with optional `rule_ref` / `trigger_ref` populated from /// An execution row with optional `rule_ref` / `trigger_ref` populated from
/// the joined `enforcement` table. This avoids a separate in-memory lookup. /// the joined `enforcement` table. This avoids a separate in-memory lookup.
#[derive(Debug, Clone, sqlx::FromRow)] #[derive(Debug, Clone, sqlx::FromRow)]
@@ -191,7 +204,577 @@ impl Update for ExecutionRepository {
where where
E: Executor<'e, Database = Postgres> + 'e, E: Executor<'e, Database = Postgres> + 'e,
{ {
// Build update query if input.status.is_none()
&& input.result.is_none()
&& input.executor.is_none()
&& input.worker.is_none()
&& input.started_at.is_none()
&& input.workflow_task.is_none()
{
return Self::get_by_id(executor, id).await;
}
Self::update_with_locator(executor, input, |query| {
query.push(" WHERE id = ").push_bind(id);
})
.await
}
}
impl ExecutionRepository {
pub async fn find_top_level_by_enforcement<'e, E>(
executor: E,
enforcement_id: Id,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let sql = format!(
"SELECT {SELECT_COLUMNS} \
FROM execution \
WHERE enforcement = $1 \
AND parent IS NULL \
AND (config IS NULL OR NOT (config ? 'retry_of')) \
ORDER BY created ASC \
LIMIT 1"
);
sqlx::query_as::<_, Execution>(&sql)
.bind(enforcement_id)
.fetch_optional(executor)
.await
.map_err(Into::into)
}
pub async fn create_top_level_for_enforcement_if_absent<'e, E>(
executor: E,
input: CreateExecutionInput,
enforcement_id: Id,
) -> Result<EnforcementExecutionCreateOrGetResult>
where
E: Executor<'e, Database = Postgres> + Copy + 'e,
{
let inserted = sqlx::query_as::<_, Execution>(&format!(
"INSERT INTO execution \
(action, action_ref, config, env_vars, parent, enforcement, executor, worker, status, result, workflow_task) \
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11) \
ON CONFLICT (enforcement) \
WHERE enforcement IS NOT NULL \
AND parent IS NULL \
AND (config IS NULL OR NOT (config ? 'retry_of')) \
DO NOTHING \
RETURNING {SELECT_COLUMNS}"
))
.bind(input.action)
.bind(&input.action_ref)
.bind(&input.config)
.bind(&input.env_vars)
.bind(input.parent)
.bind(input.enforcement)
.bind(input.executor)
.bind(input.worker)
.bind(input.status)
.bind(&input.result)
.bind(sqlx::types::Json(&input.workflow_task))
.fetch_optional(executor)
.await?;
if let Some(execution) = inserted {
return Ok(EnforcementExecutionCreateOrGetResult {
execution,
created: true,
});
}
let execution = Self::find_top_level_by_enforcement(executor, enforcement_id)
.await?
.ok_or_else(|| {
anyhow::anyhow!(
"top-level execution for enforcement {} disappeared after dedupe conflict",
enforcement_id
)
})?;
Ok(EnforcementExecutionCreateOrGetResult {
execution,
created: false,
})
}
async fn claim_workflow_task_dispatch<'e, E>(
executor: E,
workflow_execution_id: Id,
task_name: &str,
task_index: Option<i32>,
) -> Result<bool>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let inserted: Option<(i64,)> = sqlx::query_as(
"INSERT INTO workflow_task_dispatch (workflow_execution, task_name, task_index)
VALUES ($1, $2, $3)
ON CONFLICT (workflow_execution, task_name, COALESCE(task_index, -1)) DO NOTHING
RETURNING id",
)
.bind(workflow_execution_id)
.bind(task_name)
.bind(task_index)
.fetch_optional(executor)
.await?;
Ok(inserted.is_some())
}
async fn assign_workflow_task_dispatch_execution<'e, E>(
executor: E,
workflow_execution_id: Id,
task_name: &str,
task_index: Option<i32>,
execution_id: Id,
) -> Result<()>
where
E: Executor<'e, Database = Postgres> + 'e,
{
sqlx::query(
"UPDATE workflow_task_dispatch
SET execution_id = COALESCE(execution_id, $4)
WHERE workflow_execution = $1
AND task_name = $2
AND task_index IS NOT DISTINCT FROM $3",
)
.bind(workflow_execution_id)
.bind(task_name)
.bind(task_index)
.bind(execution_id)
.execute(executor)
.await?;
Ok(())
}
async fn lock_workflow_task_dispatch<'e, E>(
executor: E,
workflow_execution_id: Id,
task_name: &str,
task_index: Option<i32>,
) -> Result<Option<Option<Id>>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let row: Option<(Option<i64>,)> = sqlx::query_as(
"SELECT execution_id
FROM workflow_task_dispatch
WHERE workflow_execution = $1
AND task_name = $2
AND task_index IS NOT DISTINCT FROM $3
FOR UPDATE",
)
.bind(workflow_execution_id)
.bind(task_name)
.bind(task_index)
.fetch_optional(executor)
.await?;
// Map the outer Option to distinguish three cases:
// - None → no row exists
// - Some(None) → row exists but execution_id is still NULL (mid-creation)
// - Some(Some(id)) → row exists with a completed execution_id
Ok(row.map(|(execution_id,)| execution_id))
}
async fn create_workflow_task_if_absent_in_conn(
conn: &mut PgConnection,
input: CreateExecutionInput,
workflow_execution_id: Id,
task_name: &str,
task_index: Option<i32>,
) -> Result<WorkflowTaskExecutionCreateOrGetResult> {
let claimed = Self::claim_workflow_task_dispatch(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
)
.await?;
if claimed {
let execution = Self::create(&mut *conn, input).await?;
Self::assign_workflow_task_dispatch_execution(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
execution.id,
)
.await?;
return Ok(WorkflowTaskExecutionCreateOrGetResult {
execution,
created: true,
});
}
let dispatch_state = Self::lock_workflow_task_dispatch(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
)
.await?;
match dispatch_state {
Some(Some(existing_execution_id)) => {
// Row exists with execution_id — return the existing execution.
let execution = Self::find_by_id(&mut *conn, existing_execution_id)
.await?
.ok_or_else(|| {
anyhow::anyhow!(
"workflow child execution {} missing for workflow_execution {} task '{}' index {:?}",
existing_execution_id,
workflow_execution_id,
task_name,
task_index
)
})?;
Ok(WorkflowTaskExecutionCreateOrGetResult {
execution,
created: false,
})
}
Some(None) => {
// Row exists but execution_id is still NULL: another transaction is
// mid-creation (between claim and assign). Retry until it's filled in.
// If the original creator's transaction rolled back, the row also
// disappears — handled by the `None` branch inside the loop.
'wait: {
for _ in 0..20_u32 {
sleep(Duration::from_millis(50)).await;
match Self::lock_workflow_task_dispatch(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
)
.await?
{
Some(Some(execution_id)) => {
let execution =
Self::find_by_id(&mut *conn, execution_id).await?.ok_or_else(
|| {
anyhow::anyhow!(
"workflow child execution {} missing for workflow_execution {} task '{}' index {:?}",
execution_id,
workflow_execution_id,
task_name,
task_index
)
},
)?;
return Ok(WorkflowTaskExecutionCreateOrGetResult {
execution,
created: false,
});
}
Some(None) => {} // still NULL, keep waiting
None => break 'wait, // row rolled back; fall through to re-claim
}
}
// Exhausted all retries without the execution_id being set.
return Err(anyhow::anyhow!(
"Timed out waiting for workflow task dispatch execution_id to be set \
for workflow_execution {} task '{}' index {:?}",
workflow_execution_id,
task_name,
task_index
)
.into());
}
// Row disappeared (original creator rolled back) — re-claim and create.
let re_claimed = Self::claim_workflow_task_dispatch(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
)
.await?;
if !re_claimed {
return Err(anyhow::anyhow!(
"Workflow task dispatch for workflow_execution {} task '{}' index {:?} \
was reclaimed by another executor after rollback",
workflow_execution_id,
task_name,
task_index
)
.into());
}
let execution = Self::create(&mut *conn, input).await?;
Self::assign_workflow_task_dispatch_execution(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
execution.id,
)
.await?;
Ok(WorkflowTaskExecutionCreateOrGetResult {
execution,
created: true,
})
}
None => {
// No row at all — the original INSERT was rolled back before we arrived.
// Attempt to re-claim and create as if this were a fresh dispatch.
let re_claimed = Self::claim_workflow_task_dispatch(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
)
.await?;
if !re_claimed {
return Err(anyhow::anyhow!(
"Workflow task dispatch for workflow_execution {} task '{}' index {:?} \
was claimed by another executor",
workflow_execution_id,
task_name,
task_index
)
.into());
}
let execution = Self::create(&mut *conn, input).await?;
Self::assign_workflow_task_dispatch_execution(
&mut *conn,
workflow_execution_id,
task_name,
task_index,
execution.id,
)
.await?;
Ok(WorkflowTaskExecutionCreateOrGetResult {
execution,
created: true,
})
}
}
}
pub async fn create_workflow_task_if_absent(
pool: &PgPool,
input: CreateExecutionInput,
workflow_execution_id: Id,
task_name: &str,
task_index: Option<i32>,
) -> Result<WorkflowTaskExecutionCreateOrGetResult> {
let mut conn = pool.acquire().await?;
sqlx::query("BEGIN").execute(&mut *conn).await?;
let result = Self::create_workflow_task_if_absent_in_conn(
&mut conn,
input,
workflow_execution_id,
task_name,
task_index,
)
.await;
match result {
Ok(result) => {
sqlx::query("COMMIT").execute(&mut *conn).await?;
Ok(result)
}
Err(err) => {
sqlx::query("ROLLBACK").execute(&mut *conn).await?;
Err(err)
}
}
}
pub async fn create_workflow_task_if_absent_with_conn(
conn: &mut PgConnection,
input: CreateExecutionInput,
workflow_execution_id: Id,
task_name: &str,
task_index: Option<i32>,
) -> Result<WorkflowTaskExecutionCreateOrGetResult> {
Self::create_workflow_task_if_absent_in_conn(
conn,
input,
workflow_execution_id,
task_name,
task_index,
)
.await
}
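A minimal caller-side sketch of the dispatch dedupe, assuming a PgPool, the parent workflow execution id, and a CreateExecutionInput prepared for the child task; the task name and index are illustrative:

async fn dispatch_task(
    pool: &PgPool,
    workflow_execution_id: Id,
    input: CreateExecutionInput,
) -> Result<Execution> {
    let result = ExecutionRepository::create_workflow_task_if_absent(
        pool,
        input,
        workflow_execution_id,
        "build",  // task_name is illustrative
        Some(0),  // task_index for fan-out tasks; None for singleton tasks
    )
    .await?;
    if !result.created {
        // Another executor already dispatched this task; do not start it twice.
    }
    Ok(result.execution)
}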
pub async fn claim_for_scheduling<'e, E>(
executor: E,
id: Id,
claiming_executor: Option<Id>,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let sql = format!(
"UPDATE execution \
SET status = $2, executor = COALESCE($3, executor), updated = NOW() \
WHERE id = $1 AND status = $4 \
RETURNING {SELECT_COLUMNS}"
);
sqlx::query_as::<_, Execution>(&sql)
.bind(id)
.bind(ExecutionStatus::Scheduling)
.bind(claiming_executor)
.bind(ExecutionStatus::Requested)
.fetch_optional(executor)
.await
.map_err(Into::into)
}
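A minimal sketch of how an executor might use the conditional claim above; the status filter makes it a compare-and-set, so at most one of several competing executors observes Some(..):

async fn try_claim(pool: &PgPool, execution_id: Id, executor_id: Id) -> Result<bool> {
    // The UPDATE only matches while the row is still in Requested, so whoever
    // gets Some(..) back owns the transition to Scheduling.
    let claimed =
        ExecutionRepository::claim_for_scheduling(pool, execution_id, Some(executor_id)).await?;
    Ok(claimed.is_some())
}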
pub async fn reclaim_stale_scheduling<'e, E>(
executor: E,
id: Id,
claiming_executor: Option<Id>,
stale_before: DateTime<Utc>,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let sql = format!(
"UPDATE execution \
SET executor = COALESCE($2, executor), updated = NOW() \
WHERE id = $1 AND status = $3 AND updated <= $4 \
RETURNING {SELECT_COLUMNS}"
);
sqlx::query_as::<_, Execution>(&sql)
.bind(id)
.bind(claiming_executor)
.bind(ExecutionStatus::Scheduling)
.bind(stale_before)
.fetch_optional(executor)
.await
.map_err(Into::into)
}
pub async fn update_if_status<'e, E>(
executor: E,
id: Id,
expected_status: ExecutionStatus,
input: UpdateExecutionInput,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
if input.status.is_none()
&& input.result.is_none()
&& input.executor.is_none()
&& input.worker.is_none()
&& input.started_at.is_none()
&& input.workflow_task.is_none()
{
return Self::find_by_id(executor, id).await;
}
Self::update_with_locator_optional(executor, input, |query| {
query.push(" WHERE id = ").push_bind(id);
query.push(" AND status = ").push_bind(expected_status);
})
.await
}
pub async fn update_if_status_and_updated_before<'e, E>(
executor: E,
id: Id,
expected_status: ExecutionStatus,
stale_before: DateTime<Utc>,
input: UpdateExecutionInput,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
if input.status.is_none()
&& input.result.is_none()
&& input.executor.is_none()
&& input.worker.is_none()
&& input.started_at.is_none()
&& input.workflow_task.is_none()
{
return Self::find_by_id(executor, id).await;
}
Self::update_with_locator_optional(executor, input, |query| {
query.push(" WHERE id = ").push_bind(id);
query.push(" AND status = ").push_bind(expected_status);
query.push(" AND updated < ").push_bind(stale_before);
})
.await
}
pub async fn update_if_status_and_updated_at<'e, E>(
executor: E,
id: Id,
expected_status: ExecutionStatus,
expected_updated: DateTime<Utc>,
input: UpdateExecutionInput,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
if input.status.is_none()
&& input.result.is_none()
&& input.executor.is_none()
&& input.worker.is_none()
&& input.started_at.is_none()
&& input.workflow_task.is_none()
{
return Self::find_by_id(executor, id).await;
}
Self::update_with_locator_optional(executor, input, |query| {
query.push(" WHERE id = ").push_bind(id);
query.push(" AND status = ").push_bind(expected_status);
query.push(" AND updated = ").push_bind(expected_updated);
})
.await
}
pub async fn revert_scheduled_to_requested<'e, E>(
executor: E,
id: Id,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let sql = format!(
"UPDATE execution \
SET status = $2, worker = NULL, executor = NULL, updated = NOW() \
WHERE id = $1 AND status = $3 \
RETURNING {SELECT_COLUMNS}"
);
sqlx::query_as::<_, Execution>(&sql)
.bind(id)
.bind(ExecutionStatus::Requested)
.bind(ExecutionStatus::Scheduled)
.fetch_optional(executor)
.await
.map_err(Into::into)
}
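A minimal sketch of the give-back path, assuming an executor that claimed an execution but could not hand it to a worker; the revert only matches while the row is still in Scheduled:

async fn release_claim(pool: &PgPool, execution_id: Id) -> Result<bool> {
    let reverted = ExecutionRepository::revert_scheduled_to_requested(pool, execution_id).await?;
    // Some(..): the execution is back in Requested with worker and executor cleared.
    // None: it already moved on (for example, it started running), so there is nothing to undo.
    Ok(reverted.is_some())
}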
async fn update_with_locator<'e, E, F>(
executor: E,
input: UpdateExecutionInput,
where_clause: F,
) -> Result<Execution>
where
E: Executor<'e, Database = Postgres> + 'e,
F: FnOnce(&mut QueryBuilder<'_, Postgres>),
{
let mut query = QueryBuilder::new("UPDATE execution SET "); let mut query = QueryBuilder::new("UPDATE execution SET ");
let mut has_updates = false; let mut has_updates = false;
@@ -234,15 +817,10 @@ impl Update for ExecutionRepository {
query query
.push("workflow_task = ") .push("workflow_task = ")
.push_bind(sqlx::types::Json(workflow_task)); .push_bind(sqlx::types::Json(workflow_task));
has_updates = true;
} }
if !has_updates { query.push(", updated = NOW()");
// No updates requested, fetch and return existing entity where_clause(&mut query);
return Self::get_by_id(executor, id).await;
}
query.push(", updated = NOW() WHERE id = ").push_bind(id);
query.push(" RETURNING "); query.push(" RETURNING ");
query.push(SELECT_COLUMNS); query.push(SELECT_COLUMNS);
@@ -252,6 +830,96 @@ impl Update for ExecutionRepository {
.await .await
.map_err(Into::into) .map_err(Into::into)
} }
async fn update_with_locator_optional<'e, E, F>(
executor: E,
input: UpdateExecutionInput,
where_clause: F,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
F: FnOnce(&mut QueryBuilder<'_, Postgres>),
{
let mut query = QueryBuilder::new("UPDATE execution SET ");
let mut has_updates = false;
if let Some(status) = input.status {
query.push("status = ").push_bind(status);
has_updates = true;
}
if let Some(result) = &input.result {
if has_updates {
query.push(", ");
}
query.push("result = ").push_bind(result);
has_updates = true;
}
if let Some(executor_id) = input.executor {
if has_updates {
query.push(", ");
}
query.push("executor = ").push_bind(executor_id);
has_updates = true;
}
if let Some(worker_id) = input.worker {
if has_updates {
query.push(", ");
}
query.push("worker = ").push_bind(worker_id);
has_updates = true;
}
if let Some(started_at) = input.started_at {
if has_updates {
query.push(", ");
}
query.push("started_at = ").push_bind(started_at);
has_updates = true;
}
if let Some(workflow_task) = &input.workflow_task {
if has_updates {
query.push(", ");
}
query
.push("workflow_task = ")
.push_bind(sqlx::types::Json(workflow_task));
}
query.push(", updated = NOW()");
where_clause(&mut query);
query.push(" RETURNING ");
query.push(SELECT_COLUMNS);
query
.build_query_as::<Execution>()
.fetch_optional(executor)
.await
.map_err(Into::into)
}
/// Update an execution using the loaded row's primary key.
pub async fn update_loaded<'e, E>(
executor: E,
execution: &Execution,
input: UpdateExecutionInput,
) -> Result<Execution>
where
E: Executor<'e, Database = Postgres> + 'e,
{
if input.status.is_none()
&& input.result.is_none()
&& input.executor.is_none()
&& input.worker.is_none()
&& input.started_at.is_none()
&& input.workflow_task.is_none()
{
return Ok(execution.clone());
}
Self::update_with_locator(executor, input, |query| {
query.push(" WHERE id = ").push_bind(execution.id);
})
.await
}
} }
#[async_trait::async_trait] #[async_trait::async_trait]
@@ -303,6 +971,34 @@ impl ExecutionRepository {
.map_err(Into::into) .map_err(Into::into)
} }
pub async fn find_by_workflow_task<'e, E>(
executor: E,
workflow_execution_id: Id,
task_name: &str,
task_index: Option<i32>,
) -> Result<Option<Execution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let sql = format!(
"SELECT {SELECT_COLUMNS} \
FROM execution \
WHERE workflow_task->>'workflow_execution' = $1::text \
AND workflow_task->>'task_name' = $2 \
AND (workflow_task->>'task_index')::int IS NOT DISTINCT FROM $3 \
ORDER BY created ASC \
LIMIT 1"
);
sqlx::query_as::<_, Execution>(&sql)
.bind(workflow_execution_id.to_string())
.bind(task_name)
.bind(task_index)
.fetch_optional(executor)
.await
.map_err(Into::into)
}
/// Find all child executions for a given parent execution ID. /// Find all child executions for a given parent execution ID.
/// ///
/// Returns child executions ordered by creation time (ascending), /// Returns child executions ordered by creation time (ascending),

View File

@@ -0,0 +1,909 @@
use chrono::{DateTime, Utc};
use sqlx::{PgPool, Postgres, Row, Transaction};
use crate::error::Result;
use crate::models::Id;
use crate::repositories::queue_stats::{QueueStatsRepository, UpsertQueueStatsInput};
#[derive(Debug, Clone)]
pub struct AdmissionSlotAcquireOutcome {
pub acquired: bool,
pub current_count: u32,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum AdmissionEnqueueOutcome {
Acquired,
Enqueued,
}
#[derive(Debug, Clone)]
pub struct AdmissionSlotReleaseOutcome {
pub action_id: Id,
pub group_key: Option<String>,
pub next_execution_id: Option<Id>,
}
#[derive(Debug, Clone)]
pub struct AdmissionQueuedRemovalOutcome {
pub action_id: Id,
pub group_key: Option<String>,
pub next_execution_id: Option<Id>,
pub execution_id: Id,
pub queue_order: i64,
pub enqueued_at: DateTime<Utc>,
pub removed_index: usize,
}
#[derive(Debug, Clone)]
pub struct AdmissionQueueStats {
pub action_id: Id,
pub queue_length: usize,
pub active_count: u32,
pub max_concurrent: u32,
pub oldest_enqueued_at: Option<DateTime<Utc>>,
pub total_enqueued: u64,
pub total_completed: u64,
}
#[derive(Debug, Clone)]
struct AdmissionState {
id: Id,
action_id: Id,
group_key: Option<String>,
max_concurrent: i32,
}
#[derive(Debug, Clone)]
struct ExecutionEntry {
state_id: Id,
action_id: Id,
group_key: Option<String>,
status: String,
queue_order: i64,
enqueued_at: DateTime<Utc>,
}
pub struct ExecutionAdmissionRepository;
impl ExecutionAdmissionRepository {
pub async fn enqueue(
pool: &PgPool,
max_queue_length: usize,
action_id: Id,
execution_id: Id,
max_concurrent: u32,
group_key: Option<String>,
) -> Result<AdmissionEnqueueOutcome> {
let mut tx = pool.begin().await?;
let state = Self::lock_state(&mut tx, action_id, group_key, max_concurrent).await?;
let outcome =
Self::enqueue_in_state(&mut tx, &state, max_queue_length, execution_id, true).await?;
Self::refresh_queue_stats(&mut tx, action_id).await?;
tx.commit().await?;
Ok(outcome)
}
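A minimal caller-side sketch of enqueue, assuming the caller parks the execution when it is queued rather than admitted; the queue length, concurrency limit, and group key are illustrative values:

async fn admit(pool: &PgPool, action_id: Id, execution_id: Id) -> Result<bool> {
    let outcome = ExecutionAdmissionRepository::enqueue(
        pool,
        100,  // max_queue_length: enqueue errors once this many entries are already queued
        action_id,
        execution_id,
        4,    // max_concurrent: active slots allowed for this action/group
        None, // group_key: None shares a single admission state per action
    )
    .await?;
    // Acquired means the execution may start now; Enqueued means it must wait
    // until release_active_slot promotes it.
    Ok(outcome == AdmissionEnqueueOutcome::Acquired)
}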
pub async fn wait_status(pool: &PgPool, execution_id: Id) -> Result<Option<bool>> {
let row = sqlx::query_scalar::<Postgres, bool>(
r#"
SELECT status = 'active'
FROM execution_admission_entry
WHERE execution_id = $1
"#,
)
.bind(execution_id)
.fetch_optional(pool)
.await?;
Ok(row)
}
pub async fn try_acquire(
pool: &PgPool,
action_id: Id,
execution_id: Id,
max_concurrent: u32,
group_key: Option<String>,
) -> Result<AdmissionSlotAcquireOutcome> {
let mut tx = pool.begin().await?;
let state = Self::lock_state(&mut tx, action_id, group_key, max_concurrent).await?;
let active_count = Self::active_count(&mut tx, state.id).await? as u32;
let outcome = match Self::find_execution_entry(&mut tx, execution_id).await? {
Some(entry) if entry.status == "active" => AdmissionSlotAcquireOutcome {
acquired: true,
current_count: active_count,
},
Some(entry) if entry.status == "queued" && entry.state_id == state.id => {
let promoted =
Self::maybe_promote_existing_queued(&mut tx, &state, execution_id).await?;
AdmissionSlotAcquireOutcome {
acquired: promoted,
current_count: active_count,
}
}
Some(_) => AdmissionSlotAcquireOutcome {
acquired: false,
current_count: active_count,
},
None => {
if active_count < max_concurrent
&& Self::queued_count(&mut tx, state.id).await? == 0
{
let queue_order = Self::allocate_queue_order(&mut tx, state.id).await?;
Self::insert_entry(
&mut tx,
state.id,
execution_id,
"active",
queue_order,
Utc::now(),
)
.await?;
Self::increment_total_enqueued(&mut tx, state.id).await?;
Self::refresh_queue_stats(&mut tx, action_id).await?;
AdmissionSlotAcquireOutcome {
acquired: true,
current_count: active_count,
}
} else {
AdmissionSlotAcquireOutcome {
acquired: false,
current_count: active_count,
}
}
}
};
tx.commit().await?;
Ok(outcome)
}
pub async fn release_active_slot(
pool: &PgPool,
execution_id: Id,
) -> Result<Option<AdmissionSlotReleaseOutcome>> {
let mut tx = pool.begin().await?;
let Some(entry) = Self::find_execution_entry_for_update(&mut tx, execution_id).await?
else {
tx.commit().await?;
return Ok(None);
};
if entry.status != "active" {
tx.commit().await?;
return Ok(None);
}
let state = Self::lock_existing_state(&mut tx, entry.action_id, entry.group_key.clone())
.await?
.ok_or_else(|| {
crate::Error::internal("missing execution_admission_state for active execution")
})?;
sqlx::query("DELETE FROM execution_admission_entry WHERE execution_id = $1")
.bind(execution_id)
.execute(&mut *tx)
.await?;
Self::increment_total_completed(&mut tx, state.id).await?;
let next_execution_id = Self::promote_next_queued(&mut tx, &state).await?;
Self::refresh_queue_stats(&mut tx, state.action_id).await?;
tx.commit().await?;
Ok(Some(AdmissionSlotReleaseOutcome {
action_id: state.action_id,
group_key: state.group_key,
next_execution_id,
}))
}
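A minimal sketch of the release path, assuming the caller wakes whichever queued execution was promoted; next_execution_id is only populated when a queued entry moved to active:

async fn finish(pool: &PgPool, execution_id: Id) -> Result<Option<Id>> {
    let released = ExecutionAdmissionRepository::release_active_slot(pool, execution_id).await?;
    // None: the execution held no active slot (already released, or it was only queued).
    // Some(outcome): the slot was freed; outcome.next_execution_id names the queued entry,
    // if any, that was promoted and should now be notified by the caller.
    Ok(released.and_then(|outcome| outcome.next_execution_id))
}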
pub async fn restore_active_slot(
pool: &PgPool,
execution_id: Id,
outcome: &AdmissionSlotReleaseOutcome,
) -> Result<()> {
let mut tx = pool.begin().await?;
let state =
Self::lock_existing_state(&mut tx, outcome.action_id, outcome.group_key.clone())
.await?
.ok_or_else(|| {
crate::Error::internal("missing execution_admission_state on restore")
})?;
if let Some(next_execution_id) = outcome.next_execution_id {
sqlx::query(
r#"
UPDATE execution_admission_entry
SET status = 'queued', activated_at = NULL
WHERE execution_id = $1
AND state_id = $2
AND status = 'active'
"#,
)
.bind(next_execution_id)
.bind(state.id)
.execute(&mut *tx)
.await?;
}
sqlx::query(
r#"
INSERT INTO execution_admission_entry (
state_id, execution_id, status, queue_order, enqueued_at, activated_at
) VALUES ($1, $2, 'active', $3, NOW(), NOW())
ON CONFLICT (execution_id) DO UPDATE
SET state_id = EXCLUDED.state_id,
status = 'active',
activated_at = EXCLUDED.activated_at
"#,
)
.bind(state.id)
.bind(execution_id)
.bind(Self::allocate_queue_order(&mut tx, state.id).await?)
.execute(&mut *tx)
.await?;
sqlx::query(
r#"
UPDATE execution_admission_state
SET total_completed = GREATEST(total_completed - 1, 0)
WHERE id = $1
"#,
)
.bind(state.id)
.execute(&mut *tx)
.await?;
Self::refresh_queue_stats(&mut tx, state.action_id).await?;
tx.commit().await?;
Ok(())
}
pub async fn remove_queued_execution(
pool: &PgPool,
execution_id: Id,
) -> Result<Option<AdmissionQueuedRemovalOutcome>> {
let mut tx = pool.begin().await?;
let Some(entry) = Self::find_execution_entry_for_update(&mut tx, execution_id).await?
else {
tx.commit().await?;
return Ok(None);
};
if entry.status != "queued" {
tx.commit().await?;
return Ok(None);
}
let state = Self::lock_existing_state(&mut tx, entry.action_id, entry.group_key.clone())
.await?
.ok_or_else(|| {
crate::Error::internal("missing execution_admission_state for queued execution")
})?;
let removed_index = sqlx::query_scalar::<Postgres, i64>(
r#"
SELECT COUNT(*)
FROM execution_admission_entry
WHERE state_id = $1
AND status = 'queued'
AND (enqueued_at, id) < (
SELECT enqueued_at, id
FROM execution_admission_entry
WHERE execution_id = $2
)
"#,
)
.bind(state.id)
.bind(execution_id)
.fetch_one(&mut *tx)
.await? as usize;
sqlx::query("DELETE FROM execution_admission_entry WHERE execution_id = $1")
.bind(execution_id)
.execute(&mut *tx)
.await?;
let next_execution_id =
if Self::active_count(&mut tx, state.id).await? < state.max_concurrent as i64 {
Self::promote_next_queued(&mut tx, &state).await?
} else {
None
};
Self::refresh_queue_stats(&mut tx, state.action_id).await?;
tx.commit().await?;
Ok(Some(AdmissionQueuedRemovalOutcome {
action_id: state.action_id,
group_key: state.group_key,
next_execution_id,
execution_id,
queue_order: entry.queue_order,
enqueued_at: entry.enqueued_at,
removed_index,
}))
}
pub async fn restore_queued_execution(
pool: &PgPool,
outcome: &AdmissionQueuedRemovalOutcome,
) -> Result<()> {
let mut tx = pool.begin().await?;
let state =
Self::lock_existing_state(&mut tx, outcome.action_id, outcome.group_key.clone())
.await?
.ok_or_else(|| {
crate::Error::internal("missing execution_admission_state on queued restore")
})?;
if let Some(next_execution_id) = outcome.next_execution_id {
sqlx::query(
r#"
UPDATE execution_admission_entry
SET status = 'queued', activated_at = NULL
WHERE execution_id = $1
AND state_id = $2
AND status = 'active'
"#,
)
.bind(next_execution_id)
.bind(state.id)
.execute(&mut *tx)
.await?;
}
sqlx::query(
r#"
INSERT INTO execution_admission_entry (
state_id, execution_id, status, queue_order, enqueued_at, activated_at
) VALUES ($1, $2, 'queued', $3, $4, NULL)
ON CONFLICT (execution_id) DO NOTHING
"#,
)
.bind(state.id)
.bind(outcome.execution_id)
.bind(outcome.queue_order)
.bind(outcome.enqueued_at)
.execute(&mut *tx)
.await?;
Self::refresh_queue_stats(&mut tx, state.action_id).await?;
tx.commit().await?;
Ok(())
}
pub async fn get_queue_stats(
pool: &PgPool,
action_id: Id,
) -> Result<Option<AdmissionQueueStats>> {
let row = sqlx::query(
r#"
WITH state_rows AS (
SELECT
COUNT(*) AS state_count,
COALESCE(SUM(max_concurrent), 0) AS max_concurrent,
COALESCE(SUM(total_enqueued), 0) AS total_enqueued,
COALESCE(SUM(total_completed), 0) AS total_completed
FROM execution_admission_state
WHERE action_id = $1
),
entry_rows AS (
SELECT
COUNT(*) FILTER (WHERE e.status = 'queued') AS queue_length,
COUNT(*) FILTER (WHERE e.status = 'active') AS active_count,
MIN(e.enqueued_at) FILTER (WHERE e.status = 'queued') AS oldest_enqueued_at
FROM execution_admission_state s
LEFT JOIN execution_admission_entry e ON e.state_id = s.id
WHERE s.action_id = $1
)
SELECT
sr.state_count,
er.queue_length,
er.active_count,
sr.max_concurrent,
er.oldest_enqueued_at,
sr.total_enqueued,
sr.total_completed
FROM state_rows sr
CROSS JOIN entry_rows er
"#,
)
.bind(action_id)
.fetch_one(pool)
.await?;
let state_count: i64 = row.try_get("state_count")?;
if state_count == 0 {
return Ok(None);
}
Ok(Some(AdmissionQueueStats {
action_id,
queue_length: row.try_get::<i64, _>("queue_length")? as usize,
active_count: row.try_get::<i64, _>("active_count")? as u32,
max_concurrent: row.try_get::<i64, _>("max_concurrent")? as u32,
oldest_enqueued_at: row.try_get("oldest_enqueued_at")?,
total_enqueued: row.try_get::<i64, _>("total_enqueued")? as u64,
total_completed: row.try_get::<i64, _>("total_completed")? as u64,
}))
}
async fn enqueue_in_state(
tx: &mut Transaction<'_, Postgres>,
state: &AdmissionState,
max_queue_length: usize,
execution_id: Id,
allow_queue: bool,
) -> Result<AdmissionEnqueueOutcome> {
if let Some(entry) = Self::find_execution_entry(tx, execution_id).await? {
if entry.status == "active" {
return Ok(AdmissionEnqueueOutcome::Acquired);
}
if entry.status == "queued" && entry.state_id == state.id {
if Self::maybe_promote_existing_queued(tx, state, execution_id).await? {
return Ok(AdmissionEnqueueOutcome::Acquired);
}
return Ok(AdmissionEnqueueOutcome::Enqueued);
}
return Ok(AdmissionEnqueueOutcome::Enqueued);
}
let active_count = Self::active_count(tx, state.id).await?;
let queued_count = Self::queued_count(tx, state.id).await?;
if active_count < state.max_concurrent as i64 && queued_count == 0 {
let queue_order = Self::allocate_queue_order(tx, state.id).await?;
Self::insert_entry(
tx,
state.id,
execution_id,
"active",
queue_order,
Utc::now(),
)
.await?;
Self::increment_total_enqueued(tx, state.id).await?;
return Ok(AdmissionEnqueueOutcome::Acquired);
}
if !allow_queue {
return Ok(AdmissionEnqueueOutcome::Enqueued);
}
if queued_count >= max_queue_length as i64 {
return Err(anyhow::anyhow!(
"Queue full for action {}: maximum {} entries",
state.action_id,
max_queue_length
)
.into());
}
let queue_order = Self::allocate_queue_order(tx, state.id).await?;
Self::insert_entry(
tx,
state.id,
execution_id,
"queued",
queue_order,
Utc::now(),
)
.await?;
Self::increment_total_enqueued(tx, state.id).await?;
Ok(AdmissionEnqueueOutcome::Enqueued)
}
async fn maybe_promote_existing_queued(
tx: &mut Transaction<'_, Postgres>,
state: &AdmissionState,
execution_id: Id,
) -> Result<bool> {
let active_count = Self::active_count(tx, state.id).await?;
if active_count >= state.max_concurrent as i64 {
return Ok(false);
}
let front_execution_id = sqlx::query_scalar::<Postgres, Id>(
r#"
SELECT execution_id
FROM execution_admission_entry
WHERE state_id = $1
AND status = 'queued'
ORDER BY queue_order ASC
LIMIT 1
"#,
)
.bind(state.id)
.fetch_optional(&mut **tx)
.await?;
if front_execution_id != Some(execution_id) {
return Ok(false);
}
sqlx::query(
r#"
UPDATE execution_admission_entry
SET status = 'active',
activated_at = NOW()
WHERE execution_id = $1
AND state_id = $2
AND status = 'queued'
"#,
)
.bind(execution_id)
.bind(state.id)
.execute(&mut **tx)
.await?;
Ok(true)
}
async fn promote_next_queued(
tx: &mut Transaction<'_, Postgres>,
state: &AdmissionState,
) -> Result<Option<Id>> {
let next_execution_id = sqlx::query_scalar::<Postgres, Id>(
r#"
SELECT execution_id
FROM execution_admission_entry
WHERE state_id = $1
AND status = 'queued'
ORDER BY queue_order ASC
LIMIT 1
"#,
)
.bind(state.id)
.fetch_optional(&mut **tx)
.await?;
if let Some(next_execution_id) = next_execution_id {
sqlx::query(
r#"
UPDATE execution_admission_entry
SET status = 'active',
activated_at = NOW()
WHERE execution_id = $1
AND state_id = $2
AND status = 'queued'
"#,
)
.bind(next_execution_id)
.bind(state.id)
.execute(&mut **tx)
.await?;
}
Ok(next_execution_id)
}
async fn lock_state(
tx: &mut Transaction<'_, Postgres>,
action_id: Id,
group_key: Option<String>,
max_concurrent: u32,
) -> Result<AdmissionState> {
sqlx::query(
r#"
INSERT INTO execution_admission_state (action_id, group_key, max_concurrent)
VALUES ($1, $2, $3)
ON CONFLICT (action_id, group_key_normalized)
DO UPDATE SET max_concurrent = EXCLUDED.max_concurrent
"#,
)
.bind(action_id)
.bind(group_key.clone())
.bind(max_concurrent as i32)
.execute(&mut **tx)
.await?;
let state = sqlx::query(
r#"
SELECT id, action_id, group_key, max_concurrent
FROM execution_admission_state
WHERE action_id = $1
AND group_key_normalized = COALESCE($2, '')
FOR UPDATE
"#,
)
.bind(action_id)
.bind(group_key)
.fetch_one(&mut **tx)
.await?;
Ok(AdmissionState {
id: state.try_get("id")?,
action_id: state.try_get("action_id")?,
group_key: state.try_get("group_key")?,
max_concurrent: state.try_get("max_concurrent")?,
})
}
async fn lock_existing_state(
tx: &mut Transaction<'_, Postgres>,
action_id: Id,
group_key: Option<String>,
) -> Result<Option<AdmissionState>> {
let row = sqlx::query(
r#"
SELECT id, action_id, group_key, max_concurrent
FROM execution_admission_state
WHERE action_id = $1
AND group_key_normalized = COALESCE($2, '')
FOR UPDATE
"#,
)
.bind(action_id)
.bind(group_key)
.fetch_optional(&mut **tx)
.await?;
Ok(row.map(|state| AdmissionState {
id: state.try_get("id").expect("state.id"),
action_id: state.try_get("action_id").expect("state.action_id"),
group_key: state.try_get("group_key").expect("state.group_key"),
max_concurrent: state
.try_get("max_concurrent")
.expect("state.max_concurrent"),
}))
}
async fn find_execution_entry(
tx: &mut Transaction<'_, Postgres>,
execution_id: Id,
) -> Result<Option<ExecutionEntry>> {
let row = sqlx::query(
r#"
SELECT
e.state_id,
s.action_id,
s.group_key,
e.execution_id,
e.status,
e.queue_order,
e.enqueued_at
FROM execution_admission_entry e
JOIN execution_admission_state s ON s.id = e.state_id
WHERE e.execution_id = $1
"#,
)
.bind(execution_id)
.fetch_optional(&mut **tx)
.await?;
Ok(row.map(|entry| ExecutionEntry {
state_id: entry.try_get("state_id").expect("entry.state_id"),
action_id: entry.try_get("action_id").expect("entry.action_id"),
group_key: entry.try_get("group_key").expect("entry.group_key"),
status: entry.try_get("status").expect("entry.status"),
queue_order: entry.try_get("queue_order").expect("entry.queue_order"),
enqueued_at: entry.try_get("enqueued_at").expect("entry.enqueued_at"),
}))
}
async fn find_execution_entry_for_update(
tx: &mut Transaction<'_, Postgres>,
execution_id: Id,
) -> Result<Option<ExecutionEntry>> {
let row = sqlx::query(
r#"
SELECT
e.state_id,
s.action_id,
s.group_key,
e.execution_id,
e.status,
e.queue_order,
e.enqueued_at
FROM execution_admission_entry e
JOIN execution_admission_state s ON s.id = e.state_id
WHERE e.execution_id = $1
FOR UPDATE OF e, s
"#,
)
.bind(execution_id)
.fetch_optional(&mut **tx)
.await?;
Ok(row.map(|entry| ExecutionEntry {
state_id: entry.try_get("state_id").expect("entry.state_id"),
action_id: entry.try_get("action_id").expect("entry.action_id"),
group_key: entry.try_get("group_key").expect("entry.group_key"),
status: entry.try_get("status").expect("entry.status"),
queue_order: entry.try_get("queue_order").expect("entry.queue_order"),
enqueued_at: entry.try_get("enqueued_at").expect("entry.enqueued_at"),
}))
}
async fn active_count(tx: &mut Transaction<'_, Postgres>, state_id: Id) -> Result<i64> {
Ok(sqlx::query_scalar::<Postgres, i64>(
r#"
SELECT COUNT(*)
FROM execution_admission_entry
WHERE state_id = $1
AND status = 'active'
"#,
)
.bind(state_id)
.fetch_one(&mut **tx)
.await?)
}
async fn queued_count(tx: &mut Transaction<'_, Postgres>, state_id: Id) -> Result<i64> {
Ok(sqlx::query_scalar::<Postgres, i64>(
r#"
SELECT COUNT(*)
FROM execution_admission_entry
WHERE state_id = $1
AND status = 'queued'
"#,
)
.bind(state_id)
.fetch_one(&mut **tx)
.await?)
}
async fn insert_entry(
tx: &mut Transaction<'_, Postgres>,
state_id: Id,
execution_id: Id,
status: &str,
queue_order: i64,
enqueued_at: DateTime<Utc>,
) -> Result<()> {
sqlx::query(
r#"
INSERT INTO execution_admission_entry (
state_id, execution_id, status, queue_order, enqueued_at, activated_at
) VALUES (
$1, $2, $3, $4, $5,
CASE WHEN $3 = 'active' THEN NOW() ELSE NULL END
)
"#,
)
.bind(state_id)
.bind(execution_id)
.bind(status)
.bind(queue_order)
.bind(enqueued_at)
.execute(&mut **tx)
.await?;
Ok(())
}
/// Reserve the next monotonically increasing queue position for this state.
async fn allocate_queue_order(tx: &mut Transaction<'_, Postgres>, state_id: Id) -> Result<i64> {
let queue_order = sqlx::query_scalar::<Postgres, i64>(
r#"
UPDATE execution_admission_state
SET next_queue_order = next_queue_order + 1
WHERE id = $1
RETURNING next_queue_order - 1
"#,
)
.bind(state_id)
.fetch_one(&mut **tx)
.await?;
Ok(queue_order)
}
async fn increment_total_enqueued(
tx: &mut Transaction<'_, Postgres>,
state_id: Id,
) -> Result<()> {
sqlx::query(
r#"
UPDATE execution_admission_state
SET total_enqueued = total_enqueued + 1
WHERE id = $1
"#,
)
.bind(state_id)
.execute(&mut **tx)
.await?;
Ok(())
}
async fn increment_total_completed(
tx: &mut Transaction<'_, Postgres>,
state_id: Id,
) -> Result<()> {
sqlx::query(
r#"
UPDATE execution_admission_state
SET total_completed = total_completed + 1
WHERE id = $1
"#,
)
.bind(state_id)
.execute(&mut **tx)
.await?;
Ok(())
}
/// Recompute aggregate queue statistics for the action and mirror them into
/// the queue_stats table (or delete the row when no admission state remains).
async fn refresh_queue_stats(tx: &mut Transaction<'_, Postgres>, action_id: Id) -> Result<()> {
let Some(stats) = Self::get_queue_stats_from_tx(tx, action_id).await? else {
QueueStatsRepository::delete(&mut **tx, action_id).await?;
return Ok(());
};
QueueStatsRepository::upsert(
&mut **tx,
UpsertQueueStatsInput {
action_id,
queue_length: stats.queue_length as i32,
active_count: stats.active_count as i32,
max_concurrent: stats.max_concurrent as i32,
oldest_enqueued_at: stats.oldest_enqueued_at,
total_enqueued: stats.total_enqueued as i64,
total_completed: stats.total_completed as i64,
},
)
.await?;
Ok(())
}
async fn get_queue_stats_from_tx(
tx: &mut Transaction<'_, Postgres>,
action_id: Id,
) -> Result<Option<AdmissionQueueStats>> {
let row = sqlx::query(
r#"
WITH state_rows AS (
SELECT
COUNT(*) AS state_count,
COALESCE(SUM(max_concurrent), 0) AS max_concurrent,
COALESCE(SUM(total_enqueued), 0) AS total_enqueued,
COALESCE(SUM(total_completed), 0) AS total_completed
FROM execution_admission_state
WHERE action_id = $1
),
entry_rows AS (
SELECT
COUNT(*) FILTER (WHERE e.status = 'queued') AS queue_length,
COUNT(*) FILTER (WHERE e.status = 'active') AS active_count,
MIN(e.enqueued_at) FILTER (WHERE e.status = 'queued') AS oldest_enqueued_at
FROM execution_admission_state s
LEFT JOIN execution_admission_entry e ON e.state_id = s.id
WHERE s.action_id = $1
)
SELECT
sr.state_count,
er.queue_length,
er.active_count,
sr.max_concurrent,
er.oldest_enqueued_at,
sr.total_enqueued,
sr.total_completed
FROM state_rows sr
CROSS JOIN entry_rows er
"#,
)
.bind(action_id)
.fetch_one(&mut **tx)
.await?;
let state_count: i64 = row.try_get("state_count")?;
if state_count == 0 {
return Ok(None);
}
Ok(Some(AdmissionQueueStats {
action_id,
queue_length: row.try_get::<i64, _>("queue_length")? as usize,
active_count: row.try_get::<i64, _>("active_count")? as u32,
max_concurrent: row.try_get::<i64, _>("max_concurrent")? as u32,
oldest_enqueued_at: row.try_get("oldest_enqueued_at")?,
total_enqueued: row.try_get::<i64, _>("total_enqueued")? as u64,
total_completed: row.try_get::<i64, _>("total_completed")? as u64,
}))
}
}
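For orientation, the private helpers above are intended to be composed inside a single transaction by the repository's admission entry point, which sits earlier in this file and is not shown in this excerpt. The sketch below only illustrates that call order: the name admit_execution_sketch and its return convention are made up for the example, it is written as if it lived inside the same impl block, and it omits the max_queue_length cap enforced at the top of this hunk.

/// Illustrative only: a rough enqueue path built from the transactional
/// helpers defined above (not the crate's actual public API).
async fn admit_execution_sketch(
    tx: &mut Transaction<'_, Postgres>,
    action_id: Id,
    group_key: Option<String>,
    max_concurrent: u32,
    execution_id: Id,
) -> Result<bool> {
    // Serialize admission decisions for this (action, group) pair.
    let state = Self::lock_state(tx, action_id, group_key, max_concurrent).await?;
    // Start immediately while capacity remains; otherwise join the FIFO queue.
    let starts_now = Self::active_count(tx, state.id).await? < state.max_concurrent as i64;
    let status = if starts_now { "active" } else { "queued" };
    let queue_order = Self::allocate_queue_order(tx, state.id).await?;
    Self::insert_entry(tx, state.id, execution_id, status, queue_order, Utc::now()).await?;
    Self::increment_total_enqueued(tx, state.id).await?;
    // Mirror the new counts into queue_stats for observability.
    Self::refresh_queue_stats(tx, state.action_id).await?;
    Ok(starts_now)
}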

View File

@@ -33,6 +33,7 @@ pub mod artifact;
pub mod entity_history;
pub mod event;
pub mod execution;
pub mod execution_admission;
pub mod identity;
pub mod inquiry;
pub mod key;
@@ -53,6 +54,7 @@ pub use artifact::{ArtifactRepository, ArtifactVersionRepository};
pub use entity_history::EntityHistoryRepository;
pub use event::{EnforcementRepository, EventRepository};
pub use execution::ExecutionRepository;
pub use execution_admission::ExecutionAdmissionRepository;
pub use identity::{IdentityRepository, PermissionAssignmentRepository, PermissionSetRepository};
pub use inquiry::InquiryRepository;
pub use key::KeyRepository;

View File

@@ -3,7 +3,7 @@
//! Provides database operations for queue statistics persistence.
use chrono::{DateTime, Utc};
- use sqlx::{PgPool, Postgres, QueryBuilder};
+ use sqlx::{Executor, PgPool, Postgres, QueryBuilder};
use crate::error::Result;
use crate::models::Id;
@@ -38,7 +38,10 @@ pub struct QueueStatsRepository;
impl QueueStatsRepository {
/// Upsert queue statistics (insert or update)
- pub async fn upsert(pool: &PgPool, input: UpsertQueueStatsInput) -> Result<QueueStats> {
+ pub async fn upsert<'e, E>(executor: E, input: UpsertQueueStatsInput) -> Result<QueueStats>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let stats = sqlx::query_as::<Postgres, QueueStats>(
r#"
INSERT INTO queue_stats (
@@ -69,14 +72,17 @@ impl QueueStatsRepository {
.bind(input.oldest_enqueued_at)
.bind(input.total_enqueued)
.bind(input.total_completed)
- .fetch_one(pool)
+ .fetch_one(executor)
.await?;
Ok(stats)
}
/// Get queue statistics for a specific action
- pub async fn find_by_action(pool: &PgPool, action_id: Id) -> Result<Option<QueueStats>> {
+ pub async fn find_by_action<'e, E>(executor: E, action_id: Id) -> Result<Option<QueueStats>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let stats = sqlx::query_as::<Postgres, QueueStats>(
r#"
SELECT
@@ -93,14 +99,17 @@ impl QueueStatsRepository {
"#, "#,
) )
.bind(action_id) .bind(action_id)
.fetch_optional(pool) .fetch_optional(executor)
.await?; .await?;
Ok(stats) Ok(stats)
} }
/// List all queue statistics with active queues (queue_length > 0 or active_count > 0) /// List all queue statistics with active queues (queue_length > 0 or active_count > 0)
pub async fn list_active(pool: &PgPool) -> Result<Vec<QueueStats>> { pub async fn list_active<'e, E>(executor: E) -> Result<Vec<QueueStats>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let stats = sqlx::query_as::<Postgres, QueueStats>( let stats = sqlx::query_as::<Postgres, QueueStats>(
r#" r#"
SELECT SELECT
@@ -117,14 +126,17 @@ impl QueueStatsRepository {
ORDER BY last_updated DESC
"#,
)
- .fetch_all(pool)
+ .fetch_all(executor)
.await?;
Ok(stats)
}
/// List all queue statistics
- pub async fn list_all(pool: &PgPool) -> Result<Vec<QueueStats>> {
+ pub async fn list_all<'e, E>(executor: E) -> Result<Vec<QueueStats>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let stats = sqlx::query_as::<Postgres, QueueStats>(
r#"
SELECT
@@ -140,14 +152,17 @@ impl QueueStatsRepository {
ORDER BY last_updated DESC
"#,
)
- .fetch_all(pool)
+ .fetch_all(executor)
.await?;
Ok(stats)
}
/// Delete queue statistics for a specific action
- pub async fn delete(pool: &PgPool, action_id: Id) -> Result<bool> {
+ pub async fn delete<'e, E>(executor: E, action_id: Id) -> Result<bool>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = sqlx::query(
r#"
DELETE FROM queue_stats
@@ -155,7 +170,7 @@ impl QueueStatsRepository {
"#, "#,
) )
.bind(action_id) .bind(action_id)
.execute(pool) .execute(executor)
.await?; .await?;
Ok(result.rows_affected() > 0) Ok(result.rows_affected() > 0)
@@ -163,7 +178,7 @@ impl QueueStatsRepository {
/// Batch upsert multiple queue statistics
pub async fn batch_upsert(
- pool: &PgPool,
+ executor: &PgPool,
inputs: Vec<UpsertQueueStatsInput>,
) -> Result<Vec<QueueStats>> {
if inputs.is_empty() {
@@ -213,14 +228,17 @@ impl QueueStatsRepository {
let stats = query_builder
.build_query_as::<QueueStats>()
- .fetch_all(pool)
+ .fetch_all(executor)
.await?;
Ok(stats)
}
/// Clear stale statistics (older than specified duration)
- pub async fn clear_stale(pool: &PgPool, older_than_seconds: i64) -> Result<u64> {
+ pub async fn clear_stale<'e, E>(executor: E, older_than_seconds: i64) -> Result<u64>
where
E: Executor<'e, Database = Postgres> + 'e,
{
let result = sqlx::query(
r#"
DELETE FROM queue_stats
@@ -230,7 +248,7 @@ impl QueueStatsRepository {
"#, "#,
) )
.bind(older_than_seconds) .bind(older_than_seconds)
.execute(pool) .execute(executor)
.await?; .await?;
Ok(result.rows_affected()) Ok(result.rows_affected())
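Loosening these signatures from a concrete &PgPool to a generic sqlx::Executor bound is what lets the admission repository above call QueueStatsRepository::upsert and delete with &mut **tx from inside its own transaction. A minimal usage sketch of the two call shapes (the pool handle and action_id bindings are assumed to be in scope; this snippet is not part of the PR):

use sqlx::{PgPool, Postgres, Transaction};

async fn stats_call_shapes(pool: &PgPool, action_id: Id) -> Result<()> {
    // Auto-commit query issued straight against the connection pool.
    let _latest = QueueStatsRepository::find_by_action(pool, action_id).await?;

    // The same method participating in an explicit transaction.
    let mut tx: Transaction<'_, Postgres> = pool.begin().await?;
    let _in_tx = QueueStatsRepository::find_by_action(&mut *tx, action_id).await?;
    tx.commit().await?;
    Ok(())
}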

View File

@@ -237,7 +237,7 @@ impl Update for RuntimeRepository {
query.push(", updated = NOW() WHERE id = "); query.push(", updated = NOW() WHERE id = ");
query.push_bind(id); query.push_bind(id);
query.push(&format!(" RETURNING {}", SELECT_COLUMNS)); query.push(format!(" RETURNING {}", SELECT_COLUMNS));
let runtime = query let runtime = query
.build_query_as::<Runtime>() .build_query_as::<Runtime>()

View File

@@ -411,6 +411,12 @@ impl WorkflowDefinitionRepository {
pub struct WorkflowExecutionRepository;
#[derive(Debug, Clone)]
pub struct WorkflowExecutionCreateOrGetResult {
pub workflow_execution: WorkflowExecution,
pub created: bool,
}
impl Repository for WorkflowExecutionRepository {
type Entity = WorkflowExecution;
fn table_name() -> &'static str {
@@ -606,6 +612,71 @@ impl Delete for WorkflowExecutionRepository {
}
impl WorkflowExecutionRepository {
pub async fn find_by_id_for_update<'e, E>(
executor: E,
id: Id,
) -> Result<Option<WorkflowExecution>>
where
E: Executor<'e, Database = Postgres> + 'e,
{
sqlx::query_as::<_, WorkflowExecution>(
"SELECT id, execution, workflow_def, current_tasks, completed_tasks, failed_tasks, skipped_tasks,
variables, task_graph, status, error_message, paused, pause_reason, created, updated
FROM workflow_execution
WHERE id = $1
FOR UPDATE"
)
.bind(id)
.fetch_optional(executor)
.await
.map_err(Into::into)
}
pub async fn create_or_get_by_execution<'e, E>(
executor: E,
input: CreateWorkflowExecutionInput,
) -> Result<WorkflowExecutionCreateOrGetResult>
where
E: Executor<'e, Database = Postgres> + Copy + 'e,
{
let inserted = sqlx::query_as::<_, WorkflowExecution>(
"INSERT INTO workflow_execution
(execution, workflow_def, task_graph, variables, status)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (execution) DO NOTHING
RETURNING id, execution, workflow_def, current_tasks, completed_tasks, failed_tasks, skipped_tasks,
variables, task_graph, status, error_message, paused, pause_reason, created, updated"
)
.bind(input.execution)
.bind(input.workflow_def)
.bind(&input.task_graph)
.bind(&input.variables)
.bind(input.status)
.fetch_optional(executor)
.await?;
if let Some(workflow_execution) = inserted {
return Ok(WorkflowExecutionCreateOrGetResult {
workflow_execution,
created: true,
});
}
let workflow_execution = Self::find_by_execution(executor, input.execution)
.await?
.ok_or_else(|| {
anyhow::anyhow!(
"workflow_execution for parent execution {} disappeared after conflict",
input.execution
)
})?;
Ok(WorkflowExecutionCreateOrGetResult {
workflow_execution,
created: false,
})
}
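A brief usage sketch of the new method: callers can treat it as an idempotent ensure-exists operation and branch on created to decide whether first-time setup still needs to run. The surrounding pool and input bindings are assumed; this snippet is not part of the diff.

// Insert-or-fetch keyed on the parent execution, so replicas racing on the
// same execution converge on a single workflow_execution row.
let result = WorkflowExecutionRepository::create_or_get_by_execution(&pool, input).await?;
if result.created {
    // First writer wins the ON CONFLICT race; do one-time setup here.
    tracing::info!("created workflow execution {}", result.workflow_execution.id);
} else {
    // Another replica already inserted the row; reuse it.
    tracing::info!("reusing workflow execution {}", result.workflow_execution.id);
}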
/// Find workflow execution by the parent execution ID
pub async fn find_by_execution<'e, E>(
executor: E,

View File

@@ -172,6 +172,7 @@ impl WorkflowLoader {
}
// Read and parse YAML
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Workflow files come from previously discovered pack directories under packs_base_dir.
let content = fs::read_to_string(&file.path)
.await
.map_err(|e| Error::validation(format!("Failed to read workflow file: {}", e)))?;
@@ -292,6 +293,7 @@ impl WorkflowLoader {
pack_name: &str,
) -> Result<Vec<WorkflowFile>> {
let mut workflow_files = Vec::new();
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Workflow scanning only traverses pack workflow directories derived from packs_base_dir.
let mut entries = fs::read_dir(workflows_dir)
.await
.map_err(|e| Error::validation(format!("Failed to read workflows directory: {}", e)))?;

View File

@@ -1430,3 +1430,70 @@ async fn test_enforcement_resolved_at_lifecycle() {
assert!(updated.resolved_at.is_some());
assert!(updated.resolved_at.unwrap() >= enforcement.created);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_update_loaded_enforcement_uses_loaded_locator() {
let pool = create_test_pool().await.unwrap();
let pack = PackFixture::new_unique("targeted_update_pack")
.create(&pool)
.await
.unwrap();
let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
.create(&pool)
.await
.unwrap();
let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "action")
.create(&pool)
.await
.unwrap();
use attune_common::repositories::rule::{CreateRuleInput, RuleRepository};
let rule = RuleRepository::create(
&pool,
CreateRuleInput {
r#ref: format!("{}.test_rule", pack.r#ref),
pack: pack.id,
pack_ref: pack.r#ref.clone(),
label: "Test Rule".to_string(),
description: Some("Test".to_string()),
action: action.id,
action_ref: action.r#ref.clone(),
trigger: trigger.id,
trigger_ref: trigger.r#ref.clone(),
conditions: json!({}),
action_params: json!({}),
trigger_params: json!({}),
enabled: true,
is_adhoc: false,
},
)
.await
.unwrap();
let enforcement = EnforcementFixture::new_unique(Some(rule.id), &rule.r#ref, &trigger.r#ref)
.create(&pool)
.await
.unwrap();
let updated = EnforcementRepository::update_loaded(
&pool,
&enforcement,
UpdateEnforcementInput {
status: Some(EnforcementStatus::Processed),
payload: None,
resolved_at: Some(chrono::Utc::now()),
},
)
.await
.unwrap();
assert_eq!(updated.id, enforcement.id);
assert_eq!(updated.created, enforcement.created);
assert_eq!(updated.rule_ref, enforcement.rule_ref);
assert_eq!(updated.status, EnforcementStatus::Processed);
assert!(updated.resolved_at.is_some());
}

View File

@@ -1153,3 +1153,108 @@ async fn test_execution_result_json() {
assert_eq!(updated.result, Some(complex_result));
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_claim_for_scheduling_succeeds_once() {
let pool = create_test_pool().await.unwrap();
let pack = PackFixture::new_unique("claim_pack")
.create(&pool)
.await
.unwrap();
let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "claim_action")
.create(&pool)
.await
.unwrap();
let created = ExecutionRepository::create(
&pool,
CreateExecutionInput {
action: Some(action.id),
action_ref: action.r#ref.clone(),
config: None,
env_vars: None,
parent: None,
enforcement: None,
executor: None,
worker: None,
status: ExecutionStatus::Requested,
result: None,
workflow_task: None,
},
)
.await
.unwrap();
let first = ExecutionRepository::claim_for_scheduling(&pool, created.id, None)
.await
.unwrap();
let second = ExecutionRepository::claim_for_scheduling(&pool, created.id, None)
.await
.unwrap();
assert_eq!(first.unwrap().status, ExecutionStatus::Scheduling);
assert!(second.is_none());
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_update_if_status_only_updates_matching_row() {
let pool = create_test_pool().await.unwrap();
let pack = PackFixture::new_unique("conditional_pack")
.create(&pool)
.await
.unwrap();
let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "conditional_action")
.create(&pool)
.await
.unwrap();
let created = ExecutionRepository::create(
&pool,
CreateExecutionInput {
action: Some(action.id),
action_ref: action.r#ref.clone(),
config: None,
env_vars: None,
parent: None,
enforcement: None,
executor: None,
worker: None,
status: ExecutionStatus::Scheduling,
result: None,
workflow_task: None,
},
)
.await
.unwrap();
let updated = ExecutionRepository::update_if_status(
&pool,
created.id,
ExecutionStatus::Scheduling,
UpdateExecutionInput {
status: Some(ExecutionStatus::Scheduled),
worker: Some(77),
..Default::default()
},
)
.await
.unwrap();
let skipped = ExecutionRepository::update_if_status(
&pool,
created.id,
ExecutionStatus::Scheduling,
UpdateExecutionInput {
status: Some(ExecutionStatus::Failed),
..Default::default()
},
)
.await
.unwrap();
assert_eq!(updated.unwrap().status, ExecutionStatus::Scheduled);
assert!(skipped.is_none());
}

View File

@@ -182,6 +182,7 @@ mod tests {
#[test]
fn test_decode_valid_token() {
// Valid JWT with exp and iat claims
// nosemgrep: generic.secrets.security.detected-jwt-token.detected-jwt-token -- This is a non-secret test fixture with a dummy signature used only for JWT parsing tests.
let token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJzZW5zb3I6Y29yZS50aW1lciIsImlhdCI6MTcwNjM1NjQ5NiwiZXhwIjoxNzE0MTMyNDk2fQ.signature";
let manager = TokenRefreshManager::new(

View File

@@ -11,7 +11,10 @@
use anyhow::Result;
use attune_common::{
- mq::{Consumer, ExecutionCompletedPayload, MessageEnvelope, Publisher},
+ mq::{
Consumer, ExecutionCompletedPayload, ExecutionRequestedPayload, MessageEnvelope,
MessageType, MqError, Publisher,
},
repositories::{execution::ExecutionRepository, FindById},
};
use sqlx::PgPool;
@@ -36,6 +39,19 @@ pub struct CompletionListener {
}
impl CompletionListener {
fn retryable_mq_error(error: &anyhow::Error) -> Option<MqError> {
let mq_error = error.downcast_ref::<MqError>()?;
Some(match mq_error {
MqError::Connection(msg) => MqError::Connection(msg.clone()),
MqError::Channel(msg) => MqError::Channel(msg.clone()),
MqError::Publish(msg) => MqError::Publish(msg.clone()),
MqError::Timeout(msg) => MqError::Timeout(msg.clone()),
MqError::Pool(msg) => MqError::Pool(msg.clone()),
MqError::Lapin(err) => MqError::Connection(err.to_string()),
_ => return None,
})
}
/// Create a new completion listener
pub fn new(
pool: PgPool,
@@ -82,6 +98,9 @@ impl CompletionListener {
{
error!("Error processing execution completion: {}", e);
// Return error to trigger nack with requeue
if let Some(mq_err) = Self::retryable_mq_error(&e) {
return Err(mq_err);
}
return Err(
format!("Failed to process execution completion: {}", e).into()
);
@@ -138,7 +157,11 @@ impl CompletionListener {
"Failed to advance workflow for execution {}: {}", "Failed to advance workflow for execution {}: {}",
execution_id, e execution_id, e
); );
// Continue processing — don't fail the entire completion if let Some(mq_err) = Self::retryable_mq_error(&e) {
return Err(mq_err.into());
}
// Non-retryable workflow advancement errors are logged but
// do not fail the entire completion processing path.
}
}
@@ -187,19 +210,39 @@ impl CompletionListener {
action_id, execution_id
);
- match queue_manager.notify_completion(action_id).await {
- Ok(notified) => {
- if notified {
+ match queue_manager.release_active_slot(execution_id).await {
+ Ok(release) => {
+ if let Some(release) = release {
if let Some(next_execution_id) = release.next_execution_id {
info!(
- "Queue slot released for action {}, next execution notified",
- action_id
+ "Queue slot released for action {}, next execution {} can proceed",
+ action_id, next_execution_id
);
if let Err(republish_err) = Self::publish_execution_requested(
pool,
publisher,
action_id,
next_execution_id,
)
.await
{
queue_manager
.restore_active_slot(execution_id, &release)
.await?;
return Err(republish_err);
}
} else {
debug!(
"Queue slot released for action {}, no executions waiting",
action_id
);
}
} else {
debug!(
"Execution {} had no active queue slot to release",
execution_id
);
}
}
Err(e) => {
error!(
@@ -225,6 +268,38 @@ impl CompletionListener {
Ok(())
}
async fn publish_execution_requested(
pool: &PgPool,
publisher: &Publisher,
action_id: i64,
execution_id: i64,
) -> Result<()> {
let execution = ExecutionRepository::find_by_id(pool, execution_id)
.await?
.ok_or_else(|| anyhow::anyhow!("Execution {} not found", execution_id))?;
let payload = ExecutionRequestedPayload {
execution_id,
action_id: Some(action_id),
action_ref: execution.action_ref.clone(),
parent_id: execution.parent,
enforcement_id: execution.enforcement,
config: execution.config.clone(),
};
let envelope = MessageEnvelope::new(MessageType::ExecutionRequested, payload)
.with_source("executor-completion-listener");
publisher.publish_envelope(&envelope).await?;
debug!(
"Republished deferred ExecutionRequested for execution {}",
execution_id
);
Ok(())
}
}
#[cfg(test)]
@@ -233,13 +308,13 @@ mod tests {
use crate::queue_manager::ExecutionQueueManager;
#[tokio::test]
- async fn test_notify_completion_releases_slot() {
+ async fn test_release_active_slot_releases_slot() {
let queue_manager = Arc::new(ExecutionQueueManager::with_defaults());
let action_id = 1;
// Simulate acquiring a slot
queue_manager
- .enqueue_and_wait(action_id, 100, 1)
+ .enqueue_and_wait(action_id, 100, 1, None)
.await
.unwrap();
@@ -249,8 +324,9 @@ mod tests {
assert_eq!(stats.queue_length, 0);
// Simulate completion notification
- let notified = queue_manager.notify_completion(action_id).await.unwrap();
- assert!(!notified); // No one waiting
+ let release = queue_manager.release_active_slot(100).await.unwrap();
+ assert!(release.is_some());
assert_eq!(release.unwrap().next_execution_id, None);
// Verify slot is released
let stats = queue_manager.get_queue_stats(action_id).await.unwrap();
@@ -258,13 +334,13 @@ mod tests {
}
#[tokio::test]
- async fn test_notify_completion_wakes_waiting() {
+ async fn test_release_active_slot_wakes_waiting() {
let queue_manager = Arc::new(ExecutionQueueManager::with_defaults());
let action_id = 1;
// Fill capacity
queue_manager
- .enqueue_and_wait(action_id, 100, 1)
+ .enqueue_and_wait(action_id, 100, 1, None)
.await
.unwrap();
@@ -272,7 +348,7 @@ mod tests {
let queue_manager_clone = queue_manager.clone();
let handle = tokio::spawn(async move {
queue_manager_clone
- .enqueue_and_wait(action_id, 101, 1)
+ .enqueue_and_wait(action_id, 101, 1, None)
.await
.unwrap();
});
@@ -286,8 +362,8 @@ mod tests {
assert_eq!(stats.queue_length, 1);
// Notify completion
- let notified = queue_manager.notify_completion(action_id).await.unwrap();
- assert!(notified); // Should wake the waiting execution
+ let release = queue_manager.release_active_slot(100).await.unwrap();
+ assert_eq!(release.unwrap().next_execution_id, Some(101));
// Wait for queued execution to proceed
handle.await.unwrap();
@@ -306,7 +382,7 @@ mod tests {
// Fill capacity
queue_manager
- .enqueue_and_wait(action_id, 100, 1)
+ .enqueue_and_wait(action_id, 100, 1, None)
.await
.unwrap();
@@ -320,7 +396,7 @@ mod tests {
let handle = tokio::spawn(async move {
queue_manager
- .enqueue_and_wait(action_id, exec_id, 1)
+ .enqueue_and_wait(action_id, exec_id, 1, None)
.await
.unwrap();
order.lock().await.push(exec_id);
@@ -333,9 +409,13 @@ mod tests {
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
// Release them one by one
- for _ in 0..3 {
+ for execution_id in 100..103 {
tokio::time::sleep(tokio::time::Duration::from_millis(50)).await;
- queue_manager.notify_completion(action_id).await.unwrap();
+ let release = queue_manager
.release_active_slot(execution_id)
.await
.unwrap();
assert!(release.is_some());
}
// Wait for all to complete
@@ -351,11 +431,11 @@ mod tests {
#[tokio::test]
async fn test_completion_with_no_queue() {
let queue_manager = Arc::new(ExecutionQueueManager::with_defaults());
- let action_id = 999; // Non-existent action
+ let execution_id = 999; // Non-existent execution
// Should succeed but not notify anyone
- let result = queue_manager.notify_completion(action_id).await;
+ let result = queue_manager.release_active_slot(execution_id).await;
assert!(result.is_ok());
- assert!(!result.unwrap());
+ assert!(result.unwrap().is_none());
}
}

View File

@@ -14,7 +14,7 @@ use attune_common::{
error::Error,
models::ExecutionStatus,
mq::{Consumer, ConsumerConfig, MessageEnvelope, MessageType, MqResult},
- repositories::{execution::UpdateExecutionInput, ExecutionRepository, FindById, Update},
+ repositories::{execution::UpdateExecutionInput, ExecutionRepository, FindById},
};
use chrono::Utc;
use serde_json::json;
@@ -179,13 +179,12 @@ async fn handle_execution_requested(
}
};
- // Only fail if still in a non-terminal state
- if !matches!(
- execution.status,
- ExecutionStatus::Scheduled | ExecutionStatus::Running
- ) {
+ // Only scheduled executions are still legitimately owned by the scheduler.
+ // If the execution already moved to running or a terminal state, this DLQ
+ // delivery is stale and must not overwrite newer state.
+ if execution.status != ExecutionStatus::Scheduled {
info!(
- "Execution {} already in terminal state {:?}, skipping",
+ "Execution {} already left Scheduled state ({:?}), skipping stale DLQ handling",
execution_id, execution.status
);
return Ok(()); // Acknowledge to remove from queue
@@ -193,6 +192,12 @@ async fn handle_execution_requested(
// Get worker info from payload for better error message
let worker_id = envelope.payload.get("worker_id").and_then(|v| v.as_i64());
let scheduled_attempt_updated_at = envelope
.payload
.get("scheduled_attempt_updated_at")
.and_then(|v| v.as_str())
.and_then(|s| chrono::DateTime::parse_from_rfc3339(s).ok())
.map(|dt| dt.with_timezone(&Utc));
let error_message = if let Some(wid) = worker_id {
format!(
@@ -214,26 +219,87 @@ async fn handle_execution_requested(
..Default::default()
};
- match ExecutionRepository::update(pool, execution_id, update_input).await {
- Ok(_) => {
+ if let Some(timestamp) = scheduled_attempt_updated_at {
+ // Guard on both status and the exact updated_at from when the execution was
// scheduled — prevents overwriting state that changed after this DLQ message
// was enqueued.
match ExecutionRepository::update_if_status_and_updated_at(
pool,
execution_id,
ExecutionStatus::Scheduled,
timestamp,
update_input,
)
.await
{
Ok(Some(_)) => {
info!(
"Successfully failed execution {} due to worker queue expiration",
execution_id
);
Ok(())
}
Ok(None) => {
info!(
"Skipping DLQ failure for execution {} because it already left Scheduled state",
execution_id
);
Ok(())
}
Err(e) => {
error!(
"Failed to update execution {} to failed state: {}",
execution_id, e
);
- // Return error to nack and potentially retry
Err(attune_common::mq::MqError::Consume(format!(
"Failed to update execution: {}",
e
)))
}
}
} else {
// Fallback for DLQ messages that predate the scheduled_attempt_updated_at
// field. Use a status-only guard — same safety guarantee as the original code
// (never overwrites terminal or running state).
warn!(
"DLQ message for execution {} lacks scheduled_attempt_updated_at; \
falling back to status-only guard",
execution_id
);
match ExecutionRepository::update_if_status(
pool,
execution_id,
ExecutionStatus::Scheduled,
update_input,
)
.await
{
Ok(Some(_)) => {
info!(
"Successfully failed execution {} due to worker queue expiration (status-only guard)",
execution_id
);
Ok(())
}
Ok(None) => {
info!(
"Skipping DLQ failure for execution {} because it already left Scheduled state",
execution_id
);
Ok(())
}
Err(e) => {
error!(
"Failed to update execution {} to failed state: {}",
execution_id, e
);
Err(attune_common::mq::MqError::Consume(format!(
"Failed to update execution: {}",
e
)))
}
}
}
}
/// Create a dead letter consumer configuration
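The guarded repository calls used above (update_if_status and update_if_status_and_updated_at) are not shown in this diff. Purely as an illustration of the intended compare-and-swap semantics, and with table, column, and status spellings assumed rather than taken from the project's schema, the timestamp-guarded variant amounts to an UPDATE whose WHERE clause re-checks the state the DLQ message was based on:

// Hedged sketch, not the repository code from this PR: fail the execution only
// if it is still Scheduled and untouched since the original scheduling attempt.
async fn fail_if_unchanged_sketch(
    pool: &sqlx::PgPool,
    execution_id: i64,
    scheduled_attempt_updated_at: chrono::DateTime<chrono::Utc>,
    error_message: &str,
) -> anyhow::Result<bool> {
    let result = sqlx::query(
        r#"
        UPDATE execution
        SET status = 'failed', error_message = $3, updated = NOW()
        WHERE id = $1
          AND status = 'scheduled'
          AND updated = $2
        "#,
    )
    .bind(execution_id)
    .bind(scheduled_attempt_updated_at)
    .bind(error_message)
    .execute(pool)
    .await?;
    // rows_affected() == 0 means the guard failed and the DLQ message is stale.
    Ok(result.rows_affected() > 0)
}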

View File

@@ -19,7 +19,7 @@ use attune_common::{
event::{EnforcementRepository, EventRepository, UpdateEnforcementInput},
execution::{CreateExecutionInput, ExecutionRepository},
rule::RuleRepository,
- Create, FindById, Update,
+ FindById,
},
};
@@ -116,6 +116,14 @@ impl EnforcementProcessor {
.await?
.ok_or_else(|| anyhow::anyhow!("Enforcement not found: {}", enforcement_id))?;
if enforcement.status != EnforcementStatus::Created {
debug!(
"Enforcement {} already left Created state ({:?}), skipping duplicate processing",
enforcement_id, enforcement.status
);
return Ok(());
}
// Fetch associated rule
let rule = RuleRepository::find_by_id(
pool,
@@ -135,7 +143,7 @@ impl EnforcementProcessor {
// Evaluate whether to create execution
if Self::should_create_execution(&enforcement, &rule, event.as_ref())? {
- Self::create_execution(
+ let execution_created = Self::create_execution(
pool,
publisher,
policy_enforcer,
@@ -145,10 +153,10 @@ impl EnforcementProcessor {
)
.await?;
- // Update enforcement status to Processed after successful execution creation
- EnforcementRepository::update(
+ let updated = EnforcementRepository::update_loaded_if_status(
pool,
- enforcement_id,
+ &enforcement,
EnforcementStatus::Created,
UpdateEnforcementInput {
status: Some(EnforcementStatus::Processed),
payload: None,
@@ -157,17 +165,27 @@ impl EnforcementProcessor {
)
.await?;
debug!("Updated enforcement {} status to Processed", enforcement_id); if updated.is_some() {
debug!(
"Updated enforcement {} status to Processed after {} execution path",
enforcement_id,
if execution_created {
"new"
} else {
"idempotent"
}
);
}
} else {
info!(
"Skipping execution creation for enforcement: {}",
enforcement_id
);
- // Update enforcement status to Disabled since it was not actionable
- EnforcementRepository::update(
+ let updated = EnforcementRepository::update_loaded_if_status(
pool,
- enforcement_id,
+ &enforcement,
EnforcementStatus::Created,
UpdateEnforcementInput {
status: Some(EnforcementStatus::Disabled),
payload: None,
@@ -176,11 +194,13 @@ impl EnforcementProcessor {
)
.await?;
if updated.is_some() {
debug!(
"Updated enforcement {} status to Disabled (skipped)",
enforcement_id
);
}
}
Ok(())
}
@@ -230,11 +250,11 @@ impl EnforcementProcessor {
async fn create_execution(
pool: &PgPool,
publisher: &Publisher,
- policy_enforcer: &PolicyEnforcer,
+ _policy_enforcer: &PolicyEnforcer,
_queue_manager: &ExecutionQueueManager,
enforcement: &Enforcement,
rule: &Rule,
- ) -> Result<()> {
+ ) -> Result<bool> {
// Extract action ID — should_create_execution already verified it's Some,
// but guard defensively here as well.
let action_id = match rule.action {
@@ -257,33 +277,10 @@ impl EnforcementProcessor {
enforcement.id, rule.id, action_id
);
- let pack_id = rule.pack;
let action_ref = &rule.action_ref;
- // Enforce policies and wait for queue slot if needed
- info!(
- "Enforcing policies for action {} (enforcement: {})",
- action_id, enforcement.id
- );
- // Use enforcement ID for queue tracking (execution doesn't exist yet)
- if let Err(e) = policy_enforcer
- .enforce_and_wait(action_id, Some(pack_id), enforcement.id)
- .await
- {
- error!(
- "Policy enforcement failed for enforcement {}: {}",
- enforcement.id, e
- );
- return Err(e);
- }
- info!(
- "Policy check passed and queue slot obtained for enforcement: {}",
- enforcement.id
- );
- // Now create execution in database (we have a queue slot)
+ // Create the execution row first; scheduler-side policy enforcement
+ // now handles both rule-triggered and manual executions uniformly.
let execution_input = CreateExecutionInput {
action: Some(action_id),
action_ref: action_ref.clone(),
@@ -298,21 +295,36 @@ impl EnforcementProcessor {
workflow_task: None, // Non-workflow execution
};
- let execution = ExecutionRepository::create(pool, execution_input).await?;
+ let execution_result = ExecutionRepository::create_top_level_for_enforcement_if_absent(
pool,
execution_input,
enforcement.id,
)
.await?;
let execution = execution_result.execution;
if execution_result.created {
info!(
"Created execution: {} for enforcement: {}",
execution.id, enforcement.id
);
} else {
info!(
"Reusing execution: {} for enforcement: {}",
execution.id, enforcement.id
);
}
- // Publish ExecutionRequested message
+ if execution_result.created
|| execution.status == attune_common::models::enums::ExecutionStatus::Requested
{
let payload = ExecutionRequestedPayload {
execution_id: execution.id,
action_id: Some(action_id),
action_ref: action_ref.clone(),
parent_id: None,
enforcement_id: Some(enforcement.id),
- config: enforcement.config.clone(),
+ config: execution.config.clone(),
};
let envelope =
@@ -331,11 +343,12 @@ impl EnforcementProcessor {
"Published execution.requested message for execution: {} (enforcement: {}, action: {})", "Published execution.requested message for execution: {} (enforcement: {}, action: {})",
execution.id, enforcement.id, action_id execution.id, enforcement.id, action_id
); );
}
// NOTE: Queue slot will be released when worker publishes execution.completed // NOTE: Queue slot will be released when worker publishes execution.completed
// and CompletionListener calls queue_manager.notify_completion(action_id) // and CompletionListener calls queue_manager.notify_completion(action_id)
Ok(()) Ok(execution_result.created)
} }
} }
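create_top_level_for_enforcement_if_absent is referenced here but defined elsewhere; presumably it follows the same insert-or-fetch shape as create_or_get_by_execution earlier in this PR. The sketch below is a hypothetical illustration only: the unique index, column names, and status literal are assumptions, not the project's actual schema.

// Hedged sketch of an insert-or-fetch keyed on the enforcement ID. Assumes a
// unique (partial) index such as:
//   CREATE UNIQUE INDEX execution_enforcement_top_level_uniq
//     ON execution (enforcement) WHERE parent IS NULL;
async fn create_for_enforcement_if_absent_sketch(
    pool: &sqlx::PgPool,
    enforcement_id: i64,
    action_id: i64,
    action_ref: &str,
) -> anyhow::Result<(i64, bool)> {
    let inserted: Option<i64> = sqlx::query_scalar(
        r#"
        INSERT INTO execution (action, action_ref, enforcement, status)
        VALUES ($1, $2, $3, 'requested')
        ON CONFLICT DO NOTHING
        RETURNING id
        "#,
    )
    .bind(action_id)
    .bind(action_ref)
    .bind(enforcement_id)
    .fetch_optional(pool)
    .await?;
    if let Some(id) = inserted {
        return Ok((id, true)); // newly created
    }
    // Conflict: another replica won the race; fetch the existing row instead.
    let existing: i64 =
        sqlx::query_scalar("SELECT id FROM execution WHERE enforcement = $1 AND parent IS NULL")
            .bind(enforcement_id)
            .fetch_one(pool)
            .await?;
    Ok((existing, false))
}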

View File

@@ -19,7 +19,7 @@ use attune_common::{
event::{CreateEnforcementInput, EnforcementRepository, EventRepository},
pack::PackRepository,
rule::RuleRepository,
- Create, FindById, List,
+ FindById, List,
},
template_resolver::{resolve_templates, TemplateContext},
};
@@ -206,14 +206,23 @@ impl EventProcessor {
conditions: rule.conditions.clone(),
};
- let enforcement = EnforcementRepository::create(pool, create_input).await?;
+ let enforcement_result =
EnforcementRepository::create_or_get_by_rule_event(pool, create_input).await?;
let enforcement = enforcement_result.enforcement;
if enforcement_result.created {
info!(
"Enforcement {} created for rule {} (event: {})",
enforcement.id, rule.r#ref, event.id
);
} else {
info!(
"Reusing enforcement {} for rule {} (event: {})",
enforcement.id, rule.r#ref, event.id
);
}
- // Publish EnforcementCreated message
+ if enforcement_result.created || enforcement.status == EnforcementStatus::Created {
let enforcement_payload = EnforcementCreatedPayload {
enforcement_id: enforcement.id,
rule_id: Some(rule.id),
@@ -223,7 +232,8 @@ impl EventProcessor {
payload: payload.clone(),
};
- let envelope = MessageEnvelope::new(MessageType::EnforcementCreated, enforcement_payload)
+ let envelope =
MessageEnvelope::new(MessageType::EnforcementCreated, enforcement_payload)
.with_source("event-processor");
publisher.publish_envelope(&envelope).await?;
@@ -232,6 +242,7 @@ impl EventProcessor {
"Published EnforcementCreated message for enforcement {}", "Published EnforcementCreated message for enforcement {}",
enforcement.id enforcement.id
); );
}
Ok(())
}

View File

@@ -9,13 +9,14 @@
use anyhow::Result;
use attune_common::{
error::Error as AttuneError,
models::{enums::InquiryStatus, inquiry::Inquiry, Execution, Id},
mq::{
Consumer, InquiryCreatedPayload, InquiryRespondedPayload, MessageEnvelope, MessageType,
Publisher,
},
repositories::{
- execution::{ExecutionRepository, UpdateExecutionInput},
+ execution::{ExecutionRepository, UpdateExecutionInput, SELECT_COLUMNS},
inquiry::{CreateInquiryInput, InquiryRepository},
Create, FindById, Update,
},
@@ -28,6 +29,8 @@ use tracing::{debug, error, info, warn};
/// Special key in action result to indicate an inquiry should be created
pub const INQUIRY_RESULT_KEY: &str = "__inquiry";
const INQUIRY_ID_RESULT_KEY: &str = "__inquiry_id";
const INQUIRY_CREATED_PUBLISHED_RESULT_KEY: &str = "__inquiry_created_published";
/// Structure for inquiry data in action results
#[derive(Debug, Clone, serde::Deserialize)]
@@ -104,26 +107,71 @@ impl InquiryHandler {
let inquiry_request: InquiryRequest = serde_json::from_value(inquiry_value.clone())?;
Ok(inquiry_request)
}
}
/// Returns true when `e` represents a PostgreSQL unique constraint violation (code 23505).
fn is_db_unique_violation(e: &AttuneError) -> bool {
if let AttuneError::Database(sqlx_err) = e {
return sqlx_err
.as_database_error()
.and_then(|db| db.code())
.as_deref()
== Some("23505");
}
false
}
impl InquiryHandler {
/// Create an inquiry for an execution and pause it
pub async fn create_inquiry_from_result(
pool: &PgPool,
publisher: &Publisher,
execution_id: Id,
- result: &JsonValue,
+ _result: &JsonValue,
) -> Result<Inquiry> {
info!("Creating inquiry for execution {}", execution_id);
- // Extract inquiry request
- let inquiry_request = Self::extract_inquiry_request(result)?;
+ let mut tx = pool.begin().await?;
+ let execution = sqlx::query_as::<_, Execution>(&format!(
"SELECT {SELECT_COLUMNS} FROM execution WHERE id = $1 FOR UPDATE"
))
.bind(execution_id)
.fetch_one(&mut *tx)
.await?;
- // Calculate timeout if specified
+ let mut result = execution
.result
.clone()
.ok_or_else(|| anyhow::anyhow!("Execution {} has no result", execution_id))?;
let inquiry_request = Self::extract_inquiry_request(&result)?;
let timeout_at = inquiry_request
.timeout_seconds
.map(|seconds| Utc::now() + chrono::Duration::seconds(seconds));
- // Create inquiry in database
- let inquiry_input = CreateInquiryInput {
+ let existing_inquiry_id = result
+ .get(INQUIRY_ID_RESULT_KEY)
.and_then(|value| value.as_i64());
let published = result
.get(INQUIRY_CREATED_PUBLISHED_RESULT_KEY)
.and_then(|value| value.as_bool())
.unwrap_or(false);
let (inquiry, should_publish) = if let Some(inquiry_id) = existing_inquiry_id {
let inquiry = InquiryRepository::find_by_id(&mut *tx, inquiry_id)
.await?
.ok_or_else(|| {
anyhow::anyhow!(
"Inquiry {} referenced by execution {} result not found",
inquiry_id,
execution_id
)
})?;
let should_publish = !published && inquiry.status == InquiryStatus::Pending;
(inquiry, should_publish)
} else {
let create_result = InquiryRepository::create(
&mut *tx,
CreateInquiryInput {
execution: execution_id,
prompt: inquiry_request.prompt.clone(),
response_schema: inquiry_request.response_schema.clone(),
@@ -131,20 +179,55 @@ impl InquiryHandler {
status: InquiryStatus::Pending,
response: None,
timeout_at,
},
)
.await;
let inquiry = match create_result {
Ok(inq) => inq,
Err(e) => {
// Unique constraint violation (23505): another replica already
// created the inquiry for this execution. Treat as idempotent
// success — drop the aborted transaction and return the existing row.
if is_db_unique_violation(&e) {
info!(
"Inquiry for execution {} already created by another replica \
(unique constraint 23505); treating as idempotent",
execution_id
);
// tx is in an aborted state; dropping it issues ROLLBACK.
drop(tx);
let inquiries =
InquiryRepository::find_by_execution(pool, execution_id).await?;
let existing = inquiries.into_iter().next().ok_or_else(|| {
anyhow::anyhow!(
"Inquiry for execution {} not found after unique constraint violation",
execution_id
)
})?;
return Ok(existing);
}
return Err(e.into());
}
}; };
- let inquiry = InquiryRepository::create(pool, inquiry_input).await?;
+ Self::set_inquiry_result_metadata(&mut result, inquiry.id, false)?;
ExecutionRepository::update(
&mut *tx,
execution_id,
UpdateExecutionInput {
result: Some(result),
..Default::default()
},
)
.await?;
- info!(
- "Created inquiry {} for execution {}",
- inquiry.id, execution_id
- );
- // Update execution status to paused/waiting
- // Note: We use a special status or keep it as "running" with inquiry tracking
- // For now, we'll keep status as-is and track via inquiry relationship
- // Publish InquiryCreated message
+ (inquiry, true)
+ };
+ tx.commit().await?;
+ if should_publish {
let payload = InquiryCreatedPayload {
inquiry_id: inquiry.id,
execution_id,
@@ -158,15 +241,64 @@ impl InquiryHandler {
MessageEnvelope::new(MessageType::InquiryCreated, payload).with_source("executor");
publisher.publish_envelope(&envelope).await?;
Self::mark_inquiry_created_published(pool, execution_id).await?;
debug!(
"Published InquiryCreated message for inquiry {}",
inquiry.id
);
}
Ok(inquiry)
}
fn set_inquiry_result_metadata(
result: &mut JsonValue,
inquiry_id: Id,
published: bool,
) -> Result<()> {
let obj = result
.as_object_mut()
.ok_or_else(|| anyhow::anyhow!("execution result is not a JSON object"))?;
obj.insert(
INQUIRY_ID_RESULT_KEY.to_string(),
JsonValue::Number(inquiry_id.into()),
);
obj.insert(
INQUIRY_CREATED_PUBLISHED_RESULT_KEY.to_string(),
JsonValue::Bool(published),
);
Ok(())
}
async fn mark_inquiry_created_published(pool: &PgPool, execution_id: Id) -> Result<()> {
let execution = ExecutionRepository::find_by_id(pool, execution_id)
.await?
.ok_or_else(|| anyhow::anyhow!("Execution {} not found", execution_id))?;
let mut result = execution
.result
.clone()
.ok_or_else(|| anyhow::anyhow!("Execution {} has no result", execution_id))?;
let inquiry_id = result
.get(INQUIRY_ID_RESULT_KEY)
.and_then(|value| value.as_i64())
.ok_or_else(|| anyhow::anyhow!("Execution {} missing __inquiry_id", execution_id))?;
Self::set_inquiry_result_metadata(&mut result, inquiry_id, true)?;
ExecutionRepository::update(
pool,
execution_id,
UpdateExecutionInput {
result: Some(result),
..Default::default()
},
)
.await?;
Ok(())
}
/// Handle an inquiry response message
async fn handle_inquiry_response(
pool: &PgPool,
@@ -235,9 +367,13 @@ impl InquiryHandler {
if let Some(obj) = updated_result.as_object_mut() {
obj.insert("__inquiry_response".to_string(), response.clone());
obj.insert(
- "__inquiry_id".to_string(),
+ INQUIRY_ID_RESULT_KEY.to_string(),
JsonValue::Number(inquiry.id.into()),
);
obj.insert(
INQUIRY_CREATED_PUBLISHED_RESULT_KEY.to_string(),
JsonValue::Bool(true),
);
}
// Update execution with new result

View File

@@ -10,14 +10,23 @@
use anyhow::Result;
use chrono::{DateTime, Duration, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use sqlx::PgPool;
- use std::collections::HashMap;
+ use std::collections::{BTreeMap, HashMap};
use std::sync::Arc;
use tracing::{debug, info, warn};
- use attune_common::models::{enums::ExecutionStatus, Id};
+ use attune_common::{
models::{
enums::{ExecutionStatus, PolicyMethod},
Id, Policy,
},
repositories::action::PolicyRepository,
};
- use crate::queue_manager::ExecutionQueueManager;
+ use crate::queue_manager::{
ExecutionQueueManager, QueuedRemovalOutcome, SlotEnqueueOutcome, SlotReleaseOutcome,
};
/// Policy violation type
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
@@ -79,16 +88,38 @@ impl std::fmt::Display for PolicyViolation {
}
/// Execution policy configuration
- #[derive(Debug, Clone, Serialize, Deserialize, Default)]
+ #[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExecutionPolicy {
/// Rate limit: maximum executions per time window
pub rate_limit: Option<RateLimit>,
/// Concurrency limit: maximum concurrent executions
pub concurrency_limit: Option<u32>,
/// How a concurrency violation should be handled.
pub concurrency_method: PolicyMethod,
/// Parameter paths used to scope concurrency grouping.
pub concurrency_parameters: Vec<String>,
/// Resource quotas
pub quotas: Option<HashMap<String, u64>>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SchedulingPolicyOutcome {
Ready,
Queued,
}
impl Default for ExecutionPolicy {
fn default() -> Self {
Self {
rate_limit: None,
concurrency_limit: None,
concurrency_method: PolicyMethod::Enqueue,
concurrency_parameters: Vec::new(),
quotas: None,
}
}
}
/// Rate limit configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RateLimit {
@@ -98,6 +129,25 @@ pub struct RateLimit {
pub window_seconds: u32,
}
#[derive(Debug, Clone)]
struct ResolvedConcurrencyPolicy {
limit: u32,
method: PolicyMethod,
parameters: Vec<String>,
}
impl From<Policy> for ExecutionPolicy {
fn from(policy: Policy) -> Self {
Self {
rate_limit: None,
concurrency_limit: Some(policy.threshold as u32),
concurrency_method: policy.method,
concurrency_parameters: policy.parameters,
quotas: None,
}
}
}
/// Policy enforcement scope
#[derive(Debug, Clone, PartialEq, Eq)]
#[allow(dead_code)] // Used in tests
@@ -185,6 +235,174 @@ impl PolicyEnforcer {
self.action_policies.insert(action_id, policy);
}
/// Best-effort release for a slot acquired during scheduling when the
/// execution never reaches the worker/completion path.
pub async fn release_execution_slot(
&self,
execution_id: Id,
) -> Result<Option<SlotReleaseOutcome>> {
match &self.queue_manager {
Some(queue_manager) => queue_manager.release_active_slot(execution_id).await,
None => Ok(None),
}
}
pub async fn restore_execution_slot(
&self,
execution_id: Id,
outcome: &SlotReleaseOutcome,
) -> Result<()> {
match &self.queue_manager {
Some(queue_manager) => {
queue_manager
.restore_active_slot(execution_id, outcome)
.await
}
None => Ok(()),
}
}
pub async fn remove_queued_execution(
&self,
execution_id: Id,
) -> Result<Option<QueuedRemovalOutcome>> {
match &self.queue_manager {
Some(queue_manager) => queue_manager.remove_queued_execution(execution_id).await,
None => Ok(None),
}
}
pub async fn restore_queued_execution(&self, outcome: &QueuedRemovalOutcome) -> Result<()> {
match &self.queue_manager {
Some(queue_manager) => queue_manager.restore_queued_execution(outcome).await,
None => Ok(()),
}
}
pub async fn enforce_for_scheduling(
&self,
action_id: Id,
pack_id: Option<Id>,
execution_id: Id,
config: Option<&JsonValue>,
) -> Result<SchedulingPolicyOutcome> {
if let Some(violation) = self
.check_policies_except_concurrency(action_id, pack_id)
.await?
{
warn!("Policy violation for action {}: {}", action_id, violation);
return Err(anyhow::anyhow!("Policy violation: {}", violation));
}
if let Some(concurrency) = self.resolve_concurrency_policy(action_id, pack_id).await? {
let group_key = self.build_parameter_group_key(&concurrency.parameters, config);
if let Some(queue_manager) = &self.queue_manager {
match concurrency.method {
PolicyMethod::Enqueue => {
return match queue_manager
.enqueue(action_id, execution_id, concurrency.limit, group_key)
.await?
{
SlotEnqueueOutcome::Acquired => Ok(SchedulingPolicyOutcome::Ready),
SlotEnqueueOutcome::Enqueued => Ok(SchedulingPolicyOutcome::Queued),
};
}
PolicyMethod::Cancel => {
let outcome = queue_manager
.try_acquire(
action_id,
execution_id,
concurrency.limit,
group_key.clone(),
)
.await?;
if !outcome.acquired {
let violation = PolicyViolation::ConcurrencyLimitExceeded {
limit: concurrency.limit,
current_count: outcome.current_count,
};
warn!("Policy violation for action {}: {}", action_id, violation);
return Err(anyhow::anyhow!("Policy violation: {}", violation));
}
}
}
} else {
let scope = PolicyScope::Action(action_id);
if let Some(violation) = self
.check_concurrency_limit(concurrency.limit, &scope)
.await?
{
return Err(anyhow::anyhow!("Policy violation: {}", violation));
}
}
}
Ok(SchedulingPolicyOutcome::Ready)
}
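// Illustrative sketch, not part of this change: how a scheduler might react to
// the two outcomes. `enforcer`, `mark_running`, and `leave_scheduled` are
// assumed names, not functions from this codebase.
//
// match enforcer
//     .enforce_for_scheduling(action_id, pack_id, execution_id, config)
//     .await?
// {
//     // A slot was acquired (or no concurrency policy applies): dispatch now.
//     SchedulingPolicyOutcome::Ready => mark_running(execution_id).await?,
//     // Capacity is exhausted and the execution was parked FIFO behind the limit.
//     SchedulingPolicyOutcome::Queued => leave_scheduled(execution_id).await?,
// }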
async fn resolve_policy(&self, action_id: Id, pack_id: Option<Id>) -> Result<ExecutionPolicy> {
if let Some(policy) = self.action_policies.get(&action_id) {
return Ok(policy.clone());
}
if let Some(policy) = PolicyRepository::find_latest_by_action(&self.pool, action_id).await?
{
return Ok(policy.into());
}
if let Some(pack_id) = pack_id {
if let Some(policy) = self.pack_policies.get(&pack_id) {
return Ok(policy.clone());
}
if let Some(policy) = PolicyRepository::find_latest_by_pack(&self.pool, pack_id).await?
{
return Ok(policy.into());
}
}
if let Some(policy) = PolicyRepository::find_latest_global(&self.pool).await? {
return Ok(policy.into());
}
Ok(self.global_policy.clone())
}
async fn resolve_concurrency_policy(
&self,
action_id: Id,
pack_id: Option<Id>,
) -> Result<Option<ResolvedConcurrencyPolicy>> {
let policy = self.resolve_policy(action_id, pack_id).await?;
Ok(policy
.concurrency_limit
.map(|limit| ResolvedConcurrencyPolicy {
limit,
method: policy.concurrency_method,
parameters: policy.concurrency_parameters,
}))
}
fn build_parameter_group_key(
&self,
parameter_paths: &[String],
config: Option<&JsonValue>,
) -> Option<String> {
if parameter_paths.is_empty() {
return None;
}
let values: BTreeMap<String, JsonValue> = parameter_paths
.iter()
.map(|path| (path.clone(), extract_parameter_value(config, path)))
.collect();
serde_json::to_string(&values).ok()
}
/// Get the concurrency limit for a specific action
///
/// Returns the most specific concurrency limit found:
@@ -192,6 +410,7 @@ impl PolicyEnforcer {
/// 2. Pack policy
/// 3. Global policy
/// 4. None (unlimited)
#[allow(dead_code)]
pub fn get_concurrency_limit(&self, action_id: Id, pack_id: Option<Id>) -> Option<u32> {
// Check action-specific policy first
if let Some(policy) = self.action_policies.get(&action_id) {
@@ -213,79 +432,6 @@ impl PolicyEnforcer {
self.global_policy.concurrency_limit
}
/// Enforce policies and wait in queue if necessary
///
/// This method combines policy checking with queue management to ensure:
/// 1. Policy violations are detected early
/// 2. FIFO ordering is maintained when capacity is limited
/// 3. Executions wait efficiently for available slots
///
/// # Arguments
/// * `action_id` - The action to execute
/// * `pack_id` - The pack containing the action
/// * `execution_id` - The execution/enforcement ID for queue tracking
///
/// # Returns
/// * `Ok(())` - Policy allows execution and queue slot obtained
/// * `Err(PolicyViolation)` - Policy prevents execution
/// * `Err(QueueError)` - Queue timeout or other queue error
pub async fn enforce_and_wait(
&self,
action_id: Id,
pack_id: Option<Id>,
execution_id: Id,
) -> Result<()> {
// First, check for policy violations (rate limit, quotas, etc.)
// Note: We skip concurrency check here since queue manages that
if let Some(violation) = self
.check_policies_except_concurrency(action_id, pack_id)
.await?
{
warn!("Policy violation for action {}: {}", action_id, violation);
return Err(anyhow::anyhow!("Policy violation: {}", violation));
}
// If queue manager is available, use it for concurrency control
if let Some(queue_manager) = &self.queue_manager {
let concurrency_limit = self
.get_concurrency_limit(action_id, pack_id)
.unwrap_or(u32::MAX); // Default to unlimited if no policy
debug!(
"Enqueuing execution {} for action {} with concurrency limit {}",
execution_id, action_id, concurrency_limit
);
queue_manager
.enqueue_and_wait(action_id, execution_id, concurrency_limit)
.await?;
info!(
"Execution {} obtained queue slot for action {}",
execution_id, action_id
);
} else {
// No queue manager - use legacy polling behavior
debug!(
"No queue manager configured, using legacy policy wait for action {}",
action_id
);
if let Some(concurrency_limit) = self.get_concurrency_limit(action_id, pack_id) {
// Check concurrency with old method
let scope = PolicyScope::Action(action_id);
if let Some(violation) = self
.check_concurrency_limit(concurrency_limit, &scope)
.await?
{
return Err(anyhow::anyhow!("Policy violation: {}", violation));
}
}
}
Ok(())
}
/// Check policies except concurrency (which is handled by queue)
async fn check_policies_except_concurrency(
&self,
@@ -631,11 +777,28 @@ impl PolicyEnforcer {
}
}
fn extract_parameter_value(config: Option<&JsonValue>, path: &str) -> JsonValue {
let mut current = match config {
Some(value) => value,
None => return JsonValue::Null,
};
for segment in path.split('.') {
match current {
JsonValue::Object(map) => match map.get(segment) {
Some(next) => current = next,
None => return JsonValue::Null,
},
_ => return JsonValue::Null,
}
}
current.clone()
}
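// Illustrative sketch, not part of this change: dotted paths walk nested JSON
// objects, and any missing segment or non-object value resolves to Null, so the
// BTreeMap-based group key stays deterministic for the same parameter values.
//
// let config = serde_json::json!({ "target": { "region": "us-east-1" } });
// assert_eq!(
//     extract_parameter_value(Some(&config), "target.region"),
//     serde_json::json!("us-east-1")
// );
// assert_eq!(extract_parameter_value(Some(&config), "target.zone"), JsonValue::Null);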
#[cfg(test)]
mod tests {
use super::*;
use crate::queue_manager::QueueConfig;
use tokio::time::{sleep, Duration};
#[test]
fn test_policy_violation_display() {
@@ -665,6 +828,8 @@ mod tests {
let policy = ExecutionPolicy::default();
assert!(policy.rate_limit.is_none());
assert!(policy.concurrency_limit.is_none());
assert_eq!(policy.concurrency_method, PolicyMethod::Enqueue);
assert!(policy.concurrency_parameters.is_empty());
assert!(policy.quotas.is_none());
}
@@ -769,132 +934,25 @@ mod tests {
}
// Tests removed by this change:
#[tokio::test]
async fn test_enforce_and_wait_with_queue_manager() {
let pool = sqlx::PgPool::connect_lazy("postgresql://localhost/test").unwrap();
let queue_manager = Arc::new(ExecutionQueueManager::with_defaults());
let mut enforcer = PolicyEnforcer::with_queue_manager(pool, queue_manager.clone());
// Set concurrency limit
enforcer.set_action_policy(
1,
ExecutionPolicy {
concurrency_limit: Some(1),
..Default::default()
},
);
// First execution should proceed immediately
let result = enforcer.enforce_and_wait(1, None, 100).await;
assert!(result.is_ok());
// Check queue stats
let stats = queue_manager.get_queue_stats(1).await.unwrap();
assert_eq!(stats.active_count, 1);
assert_eq!(stats.queue_length, 0);
}
#[tokio::test]
async fn test_enforce_and_wait_fifo_ordering() {
let pool = sqlx::PgPool::connect_lazy("postgresql://localhost/test").unwrap();
let queue_manager = Arc::new(ExecutionQueueManager::with_defaults());
let mut enforcer = PolicyEnforcer::with_queue_manager(pool, queue_manager.clone());
enforcer.set_action_policy(
1,
ExecutionPolicy {
concurrency_limit: Some(1),
..Default::default()
},
);
let enforcer = Arc::new(enforcer);
// First execution
let result = enforcer.enforce_and_wait(1, None, 100).await;
assert!(result.is_ok());
// Queue multiple executions
let execution_order = Arc::new(tokio::sync::Mutex::new(Vec::new()));
let mut handles = vec![];
for exec_id in 101..=103 {
let enforcer = enforcer.clone();
let queue_manager = queue_manager.clone();
let order = execution_order.clone();
let handle = tokio::spawn(async move {
enforcer.enforce_and_wait(1, None, exec_id).await.unwrap();
order.lock().await.push(exec_id);
// Simulate work
sleep(Duration::from_millis(10)).await;
queue_manager.notify_completion(1).await.unwrap();
});
handles.push(handle);
}
// Give tasks time to queue
sleep(Duration::from_millis(100)).await;
// Release first execution
queue_manager.notify_completion(1).await.unwrap();
// Wait for all
for handle in handles {
handle.await.unwrap();
}
// Verify FIFO order
let order = execution_order.lock().await;
assert_eq!(*order, vec![101, 102, 103]);
}
#[tokio::test]
async fn test_enforce_and_wait_without_queue_manager() {
let pool = sqlx::PgPool::connect_lazy("postgresql://localhost/test").unwrap();
let mut enforcer = PolicyEnforcer::new(pool);
// Set unlimited concurrency
enforcer.set_action_policy(
1,
ExecutionPolicy {
concurrency_limit: None,
..Default::default()
},
);
// Should work without queue manager (legacy behavior)
let result = enforcer.enforce_and_wait(1, None, 100).await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_enforce_and_wait_queue_timeout() {
let config = QueueConfig {
max_queue_length: 100,
queue_timeout_seconds: 1, // Short timeout for test
enable_metrics: true,
};
let pool = sqlx::PgPool::connect_lazy("postgresql://localhost/test").unwrap();
let queue_manager = Arc::new(ExecutionQueueManager::new(config));
let mut enforcer = PolicyEnforcer::with_queue_manager(pool, queue_manager.clone());
// Set concurrency limit
enforcer.set_action_policy(
1,
ExecutionPolicy {
concurrency_limit: Some(1),
..Default::default()
},
);
// First execution proceeds
enforcer.enforce_and_wait(1, None, 100).await.unwrap();
// Second execution should timeout
let result = enforcer.enforce_and_wait(1, None, 101).await;
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("timeout"));
}
// Test added by this change:
#[tokio::test]
async fn test_build_parameter_group_key_uses_exact_values() {
let pool = sqlx::PgPool::connect_lazy("postgresql://localhost/test").unwrap();
let enforcer = PolicyEnforcer::new(pool);
let config = serde_json::json!({
"environment": "prod",
"target": {
"region": "us-east-1"
}
});
let group_key = enforcer.build_parameter_group_key(
&["target.region".to_string(), "environment".to_string()],
Some(&config),
);
assert_eq!(
group_key.as_deref(),
Some("{\"environment\":\"prod\",\"target.region\":\"us-east-1\"}")
);
}
// Integration tests would require database setup

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -297,6 +297,7 @@ impl ExecutorService {
self.inner.pool.clone(),
self.inner.publisher.clone(),
Arc::new(scheduler_consumer),
self.inner.policy_enforcer.clone(),
);
handles.push(tokio::spawn(async move { scheduler.start().await }));

View File

@@ -12,7 +12,10 @@ use anyhow::Result;
use attune_common::{
models::{enums::ExecutionStatus, Execution},
mq::{MessageEnvelope, MessageType, Publisher},
repositories::{ // was: repositories::execution::SELECT_COLUMNS as EXECUTION_COLUMNS,
execution::{UpdateExecutionInput, SELECT_COLUMNS as EXECUTION_COLUMNS},
ExecutionRepository,
},
};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
@@ -178,20 +181,27 @@ impl ExecutionTimeoutMonitor {
"original_status": "scheduled" "original_status": "scheduled"
}); });
// Update execution status in database let updated = ExecutionRepository::update_if_status_and_updated_before(
sqlx::query( &self.pool,
"UPDATE execution execution_id,
SET status = $1, ExecutionStatus::Scheduled,
result = $2, self.calculate_cutoff_time(),
updated = NOW() UpdateExecutionInput {
WHERE id = $3", status: Some(ExecutionStatus::Failed),
result: Some(result.clone()),
..Default::default()
},
) )
.bind(ExecutionStatus::Failed)
.bind(&result)
.bind(execution_id)
.execute(&self.pool)
.await?; .await?;
if updated.is_none() {
debug!(
"Skipping timeout failure for execution {} because it already left Scheduled or is no longer stale",
execution_id
);
return Ok(());
}
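// Illustrative sketch, not part of this change: the repository helper above is
// assumed to behave like a guarded UPDATE, so a stale timeout monitor cannot
// clobber an execution that already left Scheduled or was updated after the
// cutoff. `cutoff` stands in for self.calculate_cutoff_time(). Roughly:
//
// let rows_affected = sqlx::query(
//     "UPDATE execution
//      SET status = $1, result = $2, updated = NOW()
//      WHERE id = $3 AND status = $4 AND updated < $5",
// )
// .bind(ExecutionStatus::Failed)
// .bind(&result)
// .bind(execution_id)
// .bind(ExecutionStatus::Scheduled)
// .bind(cutoff)
// .execute(&self.pool)
// .await?
// .rows_affected();
// // rows_affected == 0 means another node already advanced this execution.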
info!("Execution {} marked as failed in database", execution_id); info!("Execution {} marked as failed in database", execution_id);
// Publish completion notification // Publish completion notification

View File

@@ -155,6 +155,7 @@ impl WorkflowLoader {
}
// Read and parse YAML
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Workflow files come from pack directories already discovered under packs_base_dir.
let content = fs::read_to_string(&file.path)
.await
.map_err(|e| Error::validation(format!("Failed to read workflow file: {}", e)))?;
@@ -265,6 +266,7 @@ impl WorkflowLoader {
pack_name: &str,
) -> Result<Vec<WorkflowFile>> {
let mut workflow_files = Vec::new();
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Executor workflow scanning only traverses pack-owned workflow directories.
let mut entries = fs::read_dir(workflows_dir)
.await
.map_err(|e| Error::validation(format!("Failed to read workflows directory: {}", e)))?;

View File

@@ -26,6 +26,7 @@ use attune_executor::queue_manager::{ExecutionQueueManager, QueueConfig};
use chrono::Utc;
use serde_json::json;
use sqlx::PgPool;
use std::collections::VecDeque;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;
@@ -172,6 +173,26 @@ async fn cleanup_test_data(pool: &PgPool, pack_id: i64) {
.ok();
}
async fn release_next_active(
manager: &ExecutionQueueManager,
active_execution_ids: &mut VecDeque<i64>,
) -> Option<i64> {
let execution_id = active_execution_ids
.pop_front()
.expect("Expected an active execution to release");
let release = manager
.release_active_slot(execution_id)
.await
.expect("Release should succeed")
.expect("Active execution should have a tracked slot");
if let Some(next_execution_id) = release.next_execution_id {
active_execution_ids.push_back(next_execution_id);
}
release.next_execution_id
}
#[tokio::test] #[tokio::test]
#[ignore] // Requires database #[ignore] // Requires database
async fn test_fifo_ordering_with_database() { async fn test_fifo_ordering_with_database() {
@@ -198,8 +219,9 @@ async fn test_fifo_ordering_with_database() {
// Create first execution in database and enqueue // Create first execution in database and enqueue
let first_exec_id = let first_exec_id =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await; create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
let mut active_execution_ids = VecDeque::from([first_exec_id]);
manager manager
.enqueue_and_wait(action_id, first_exec_id, max_concurrent) .enqueue_and_wait(action_id, first_exec_id, max_concurrent, None)
.await .await
.expect("First execution should enqueue"); .expect("First execution should enqueue");
@@ -222,7 +244,7 @@ async fn test_fifo_ordering_with_database() {
// Enqueue and wait // Enqueue and wait
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.expect("Enqueue should succeed"); .expect("Enqueue should succeed");
@@ -250,10 +272,7 @@ async fn test_fifo_ordering_with_database() {
// Release them one by one // Release them one by one
for _ in 0..num_executions { for _ in 0..num_executions {
sleep(Duration::from_millis(50)).await; sleep(Duration::from_millis(50)).await;
manager release_next_active(&manager, &mut active_execution_ids).await;
.notify_completion(action_id)
.await
.expect("Notify should succeed");
} }
// Wait for all to complete // Wait for all to complete
@@ -295,6 +314,7 @@ async fn test_high_concurrency_stress() {
let num_executions: i64 = 1000; let num_executions: i64 = 1000;
let execution_order = Arc::new(Mutex::new(Vec::new())); let execution_order = Arc::new(Mutex::new(Vec::new()));
let mut handles = vec![]; let mut handles = vec![];
let execution_ids = Arc::new(Mutex::new(vec![None; num_executions as usize]));
println!("Starting stress test with {} executions...", num_executions); println!("Starting stress test with {} executions...", num_executions);
let start_time = std::time::Instant::now(); let start_time = std::time::Instant::now();
@@ -305,6 +325,7 @@ async fn test_high_concurrency_stress() {
let manager_clone = manager.clone(); let manager_clone = manager.clone();
let action_ref_clone = action_ref.clone(); let action_ref_clone = action_ref.clone();
let order = execution_order.clone(); let order = execution_order.clone();
let ids = execution_ids.clone();
let handle = tokio::spawn(async move { let handle = tokio::spawn(async move {
let exec_id = create_test_execution( let exec_id = create_test_execution(
@@ -314,9 +335,10 @@ async fn test_high_concurrency_stress() {
ExecutionStatus::Requested, ExecutionStatus::Requested,
) )
.await; .await;
ids.lock().await[i as usize] = Some(exec_id);
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.expect("Enqueue should succeed"); .expect("Enqueue should succeed");
@@ -332,6 +354,7 @@ async fn test_high_concurrency_stress() {
let manager_clone = manager.clone(); let manager_clone = manager.clone();
let action_ref_clone = action_ref.clone(); let action_ref_clone = action_ref.clone();
let order = execution_order.clone(); let order = execution_order.clone();
let ids = execution_ids.clone();
let handle = tokio::spawn(async move { let handle = tokio::spawn(async move {
let exec_id = create_test_execution( let exec_id = create_test_execution(
@@ -341,9 +364,10 @@ async fn test_high_concurrency_stress() {
ExecutionStatus::Requested, ExecutionStatus::Requested,
) )
.await; .await;
ids.lock().await[i as usize] = Some(exec_id);
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.expect("Enqueue should succeed"); .expect("Enqueue should succeed");
@@ -376,15 +400,21 @@ async fn test_high_concurrency_stress() {
); );
// Release all executions // Release all executions
let ids = execution_ids.lock().await;
let mut active_execution_ids = VecDeque::from(
ids.iter()
.take(max_concurrent as usize)
.map(|id| id.expect("Initial execution id should be recorded"))
.collect::<Vec<_>>(),
);
drop(ids);
println!("Releasing executions..."); println!("Releasing executions...");
for i in 0..num_executions { for i in 0..num_executions {
if i % 100 == 0 { if i % 100 == 0 {
println!("Released {} executions", i); println!("Released {} executions", i);
} }
manager release_next_active(&manager, &mut active_execution_ids).await;
.notify_completion(action_id)
.await
.expect("Notify should succeed");
// Small delay to allow queue processing // Small delay to allow queue processing
if i % 50 == 0 { if i % 50 == 0 {
@@ -416,7 +446,7 @@ async fn test_high_concurrency_stress() {
"All executions should complete" "All executions should complete"
); );
let expected: Vec<i64> = (0..num_executions).collect(); let expected: Vec<_> = (0..num_executions).collect();
assert_eq!( assert_eq!(
*order, expected, *order, expected,
"Executions should complete in strict FIFO order" "Executions should complete in strict FIFO order"
@@ -461,9 +491,31 @@ async fn test_multiple_workers_simulation() {
let num_executions = 30; let num_executions = 30;
let execution_order = Arc::new(Mutex::new(Vec::new())); let execution_order = Arc::new(Mutex::new(Vec::new()));
let mut handles = vec![]; let mut handles = vec![];
let mut active_execution_ids = VecDeque::new();
// Spawn all executions // Fill the initial worker slots deterministically.
for i in 0..num_executions { for i in 0..max_concurrent {
let exec_id =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
active_execution_ids.push_back(exec_id);
let manager_clone = manager.clone();
let order = execution_order.clone();
let handle = tokio::spawn(async move {
manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await
.expect("Enqueue should succeed");
order.lock().await.push(i);
});
handles.push(handle);
}
// Queue the remaining executions.
for i in max_concurrent..num_executions {
let pool_clone = pool.clone(); let pool_clone = pool.clone();
let manager_clone = manager.clone(); let manager_clone = manager.clone();
let action_ref_clone = action_ref.clone(); let action_ref_clone = action_ref.clone();
@@ -479,7 +531,7 @@ async fn test_multiple_workers_simulation() {
.await; .await;
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.expect("Enqueue should succeed"); .expect("Enqueue should succeed");
@@ -499,6 +551,8 @@ async fn test_multiple_workers_simulation() {
let worker_completions = Arc::new(Mutex::new(vec![0, 0, 0])); let worker_completions = Arc::new(Mutex::new(vec![0, 0, 0]));
let worker_completions_clone = worker_completions.clone(); let worker_completions_clone = worker_completions.clone();
let manager_clone = manager.clone(); let manager_clone = manager.clone();
let active_execution_ids = Arc::new(Mutex::new(active_execution_ids));
let active_execution_ids_clone = active_execution_ids.clone();
// Spawn worker simulators // Spawn worker simulators
let worker_handle = tokio::spawn(async move { let worker_handle = tokio::spawn(async move {
@@ -514,10 +568,8 @@ async fn test_multiple_workers_simulation() {
sleep(Duration::from_millis(delay)).await; sleep(Duration::from_millis(delay)).await;
// Worker completes and notifies // Worker completes and notifies
manager_clone let mut active_execution_ids = active_execution_ids_clone.lock().await;
.notify_completion(action_id) release_next_active(&manager_clone, &mut active_execution_ids).await;
.await
.expect("Notify should succeed");
worker_completions_clone.lock().await[next_worker] += 1; worker_completions_clone.lock().await[next_worker] += 1;
@@ -536,7 +588,7 @@ async fn test_multiple_workers_simulation() {
// Verify FIFO order maintained despite different worker speeds // Verify FIFO order maintained despite different worker speeds
let order = execution_order.lock().await; let order = execution_order.lock().await;
let expected: Vec<i64> = (0..num_executions).collect(); let expected: Vec<_> = (0..num_executions).collect();
assert_eq!( assert_eq!(
*order, expected, *order, expected,
"FIFO order should be maintained regardless of worker speed" "FIFO order should be maintained regardless of worker speed"
@@ -576,27 +628,30 @@ async fn test_cross_action_independence() {
let executions_per_action = 50; let executions_per_action = 50;
let mut handles = vec![]; let mut handles = vec![];
let mut action1_active = VecDeque::new();
let mut action2_active = VecDeque::new();
let mut action3_active = VecDeque::new();
// Spawn executions for all three actions simultaneously // Spawn executions for all three actions simultaneously
for action_id in [action1_id, action2_id, action3_id] { for action_id in [action1_id, action2_id, action3_id] {
let action_ref = format!("fifo_test_action_{}_{}", suffix, action_id); let action_ref = format!("fifo_test_action_{}_{}", suffix, action_id);
for i in 0..executions_per_action { for i in 0..executions_per_action {
let pool_clone = pool.clone(); let exec_id =
let manager_clone = manager.clone(); create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested)
let action_ref_clone = action_ref.clone();
let handle = tokio::spawn(async move {
let exec_id = create_test_execution(
&pool_clone,
action_id,
&action_ref_clone,
ExecutionStatus::Requested,
)
.await; .await;
match action_id {
id if id == action1_id && i == 0 => action1_active.push_back(exec_id),
id if id == action2_id && i == 0 => action2_active.push_back(exec_id),
id if id == action3_id && i == 0 => action3_active.push_back(exec_id),
_ => {}
}
let manager_clone = manager.clone();
let handle = tokio::spawn(async move {
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, 1) .enqueue_and_wait(action_id, exec_id, 1, None)
.await .await
.expect("Enqueue should succeed"); .expect("Enqueue should succeed");
@@ -634,18 +689,9 @@ async fn test_cross_action_independence() {
// Release all actions in an interleaved pattern // Release all actions in an interleaved pattern
for i in 0..executions_per_action { for i in 0..executions_per_action {
// Release one from each action // Release one from each action
manager release_next_active(&manager, &mut action1_active).await;
.notify_completion(action1_id) release_next_active(&manager, &mut action2_active).await;
.await release_next_active(&manager, &mut action3_active).await;
.expect("Notify should succeed");
manager
.notify_completion(action2_id)
.await
.expect("Notify should succeed");
manager
.notify_completion(action3_id)
.await
.expect("Notify should succeed");
if i % 10 == 0 { if i % 10 == 0 {
sleep(Duration::from_millis(10)).await; sleep(Duration::from_millis(10)).await;
@@ -698,8 +744,9 @@ async fn test_cancellation_during_queue() {
// Fill capacity // Fill capacity
let exec_id = let exec_id =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await; create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
let mut active_execution_ids = VecDeque::from([exec_id]);
manager manager
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.unwrap(); .unwrap();
@@ -722,7 +769,7 @@ async fn test_cancellation_during_queue() {
ids.lock().await.push(exec_id); ids.lock().await.push(exec_id);
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
}); });
@@ -757,7 +804,7 @@ async fn test_cancellation_during_queue() {
// Release remaining // Release remaining
for _ in 0..8 { for _ in 0..8 {
manager.notify_completion(action_id).await.unwrap(); release_next_active(&manager, &mut active_execution_ids).await;
sleep(Duration::from_millis(20)).await; sleep(Duration::from_millis(20)).await;
} }
@@ -798,17 +845,21 @@ async fn test_queue_stats_persistence() {
let max_concurrent = 5; let max_concurrent = 5;
let num_executions = 50; let num_executions = 50;
let mut active_execution_ids = VecDeque::new();
// Enqueue executions // Enqueue executions
for i in 0..num_executions { for i in 0..num_executions {
let exec_id = let exec_id =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await; create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
if i < max_concurrent {
active_execution_ids.push_back(exec_id);
}
// Start the enqueue in background // Start the enqueue in background
let manager_clone = manager.clone(); let manager_clone = manager.clone();
tokio::spawn(async move { tokio::spawn(async move {
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.ok(); .ok();
}); });
@@ -838,7 +889,7 @@ async fn test_queue_stats_persistence() {
// Release all // Release all
for _ in 0..num_executions { for _ in 0..num_executions {
manager.notify_completion(action_id).await.unwrap(); release_next_active(&manager, &mut active_execution_ids).await;
sleep(Duration::from_millis(10)).await; sleep(Duration::from_millis(10)).await;
} }
@@ -854,13 +905,122 @@ async fn test_queue_stats_persistence() {
assert_eq!(final_db_stats.queue_length, 0); assert_eq!(final_db_stats.queue_length, 0);
assert_eq!(final_mem_stats.queue_length, 0); assert_eq!(final_mem_stats.queue_length, 0);
assert_eq!(final_db_stats.total_enqueued, num_executions); assert_eq!(final_db_stats.total_enqueued, num_executions as i64);
assert_eq!(final_db_stats.total_completed, num_executions); assert_eq!(final_db_stats.total_completed, num_executions as i64);
// Cleanup // Cleanup
cleanup_test_data(&pool, pack_id).await; cleanup_test_data(&pool, pack_id).await;
} }
#[tokio::test]
#[ignore] // Requires database
async fn test_release_restore_recovers_active_slot_and_next_queue_head() {
let pool = setup_db().await;
let timestamp = Utc::now().timestamp();
let suffix = format!("restore_release_{}", timestamp);
let pack_id = create_test_pack(&pool, &suffix).await;
let pack_ref = format!("fifo_test_pack_{}", suffix);
let action_id = create_test_action(&pool, pack_id, &pack_ref, &suffix).await;
let action_ref = format!("fifo_test_action_{}", suffix);
let manager = ExecutionQueueManager::with_db_pool(QueueConfig::default(), pool.clone());
let first =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
let second =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
let third =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
manager.enqueue(action_id, first, 1, None).await.unwrap();
manager.enqueue(action_id, second, 1, None).await.unwrap();
manager.enqueue(action_id, third, 1, None).await.unwrap();
let stats = manager.get_queue_stats(action_id).await.unwrap();
assert_eq!(stats.active_count, 1);
assert_eq!(stats.queue_length, 2);
let release = manager
.release_active_slot(first)
.await
.unwrap()
.expect("first execution should own an active slot");
assert_eq!(release.next_execution_id, Some(second));
let stats = manager.get_queue_stats(action_id).await.unwrap();
assert_eq!(stats.active_count, 1);
assert_eq!(stats.queue_length, 1);
manager.restore_active_slot(first, &release).await.unwrap();
let stats = manager.get_queue_stats(action_id).await.unwrap();
assert_eq!(stats.active_count, 1);
assert_eq!(stats.queue_length, 2);
assert_eq!(stats.total_completed, 0);
let next = manager
.release_active_slot(first)
.await
.unwrap()
.expect("restored execution should still own the active slot");
assert_eq!(next.next_execution_id, Some(second));
cleanup_test_data(&pool, pack_id).await;
}
#[tokio::test]
#[ignore] // Requires database
async fn test_remove_restore_recovers_queued_execution_position() {
let pool = setup_db().await;
let timestamp = Utc::now().timestamp();
let suffix = format!("restore_queue_{}", timestamp);
let pack_id = create_test_pack(&pool, &suffix).await;
let pack_ref = format!("fifo_test_pack_{}", suffix);
let action_id = create_test_action(&pool, pack_id, &pack_ref, &suffix).await;
let action_ref = format!("fifo_test_action_{}", suffix);
let manager = ExecutionQueueManager::with_db_pool(QueueConfig::default(), pool.clone());
let first =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
let second =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
let third =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
manager.enqueue(action_id, first, 1, None).await.unwrap();
manager.enqueue(action_id, second, 1, None).await.unwrap();
manager.enqueue(action_id, third, 1, None).await.unwrap();
let removal = manager
.remove_queued_execution(second)
.await
.unwrap()
.expect("second execution should be queued");
assert_eq!(removal.next_execution_id, None);
let stats = manager.get_queue_stats(action_id).await.unwrap();
assert_eq!(stats.active_count, 1);
assert_eq!(stats.queue_length, 1);
manager.restore_queued_execution(&removal).await.unwrap();
let stats = manager.get_queue_stats(action_id).await.unwrap();
assert_eq!(stats.active_count, 1);
assert_eq!(stats.queue_length, 2);
let release = manager
.release_active_slot(first)
.await
.unwrap()
.expect("first execution should own the active slot");
assert_eq!(release.next_execution_id, Some(second));
cleanup_test_data(&pool, pack_id).await;
}
#[tokio::test] #[tokio::test]
#[ignore] // Requires database #[ignore] // Requires database
async fn test_queue_full_rejection() { async fn test_queue_full_rejection() {
@@ -888,7 +1048,7 @@ async fn test_queue_full_rejection() {
let exec_id = let exec_id =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await; create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
manager manager
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.unwrap(); .unwrap();
@@ -900,7 +1060,7 @@ async fn test_queue_full_rejection() {
tokio::spawn(async move { tokio::spawn(async move {
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.ok(); .ok();
}); });
@@ -917,7 +1077,7 @@ async fn test_queue_full_rejection() {
let exec_id = let exec_id =
create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await; create_test_execution(&pool, action_id, &action_ref, ExecutionStatus::Requested).await;
let result = manager let result = manager
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await; .await;
assert!(result.is_err(), "Should reject when queue is full"); assert!(result.is_err(), "Should reject when queue is full");
@@ -951,6 +1111,7 @@ async fn test_extreme_stress_10k_executions() {
let max_concurrent = 10; let max_concurrent = 10;
let num_executions: i64 = 10000; let num_executions: i64 = 10000;
let completed = Arc::new(Mutex::new(0u64)); let completed = Arc::new(Mutex::new(0u64));
let execution_ids = Arc::new(Mutex::new(vec![None; num_executions as usize]));
println!( println!(
"Starting extreme stress test with {} executions...", "Starting extreme stress test with {} executions...",
@@ -965,6 +1126,7 @@ async fn test_extreme_stress_10k_executions() {
let manager_clone = manager.clone(); let manager_clone = manager.clone();
let action_ref_clone = action_ref.clone(); let action_ref_clone = action_ref.clone();
let completed_clone = completed.clone(); let completed_clone = completed.clone();
let ids = execution_ids.clone();
let handle = tokio::spawn(async move { let handle = tokio::spawn(async move {
let exec_id = create_test_execution( let exec_id = create_test_execution(
@@ -974,9 +1136,10 @@ async fn test_extreme_stress_10k_executions() {
ExecutionStatus::Requested, ExecutionStatus::Requested,
) )
.await; .await;
ids.lock().await[i as usize] = Some(exec_id);
manager_clone manager_clone
.enqueue_and_wait(action_id, exec_id, max_concurrent) .enqueue_and_wait(action_id, exec_id, max_concurrent, None)
.await .await
.expect("Enqueue should succeed"); .expect("Enqueue should succeed");
@@ -999,12 +1162,18 @@ async fn test_extreme_stress_10k_executions() {
println!("All executions spawned"); println!("All executions spawned");
// Release all // Release all
let ids = execution_ids.lock().await;
let mut active_execution_ids = VecDeque::from(
ids.iter()
.take(max_concurrent as usize)
.map(|id| id.expect("Initial execution id should be recorded"))
.collect::<Vec<_>>(),
);
drop(ids);
let release_start = std::time::Instant::now(); let release_start = std::time::Instant::now();
for i in 0i64..num_executions { for i in 0i64..num_executions {
manager release_next_active(&manager, &mut active_execution_ids).await;
.notify_completion(action_id)
.await
.expect("Notify should succeed");
if i % 1000 == 0 { if i % 1000 == 0 {
println!("Released: {}", i); println!("Released: {}", i);

View File

@@ -9,7 +9,7 @@
use attune_common::{
config::Config,
db::Database,
models::enums::{ExecutionStatus, PolicyMethod}, // was: models::enums::ExecutionStatus,
repositories::{
action::{ActionRepository, CreateActionInput},
execution::{CreateExecutionInput, ExecutionRepository},
@@ -190,6 +190,8 @@ async fn test_global_rate_limit() {
window_seconds: 60, window_seconds: 60,
}), }),
concurrency_limit: None, concurrency_limit: None,
concurrency_method: PolicyMethod::Enqueue,
concurrency_parameters: Vec::new(),
quotas: None, quotas: None,
}; };
@@ -242,6 +244,8 @@ async fn test_concurrency_limit() {
let policy = ExecutionPolicy { let policy = ExecutionPolicy {
rate_limit: None, rate_limit: None,
concurrency_limit: Some(2), concurrency_limit: Some(2),
concurrency_method: PolicyMethod::Enqueue,
concurrency_parameters: Vec::new(),
quotas: None, quotas: None,
}; };
@@ -300,6 +304,8 @@ async fn test_action_specific_policy() {
window_seconds: 60, window_seconds: 60,
}), }),
concurrency_limit: None, concurrency_limit: None,
concurrency_method: PolicyMethod::Enqueue,
concurrency_parameters: Vec::new(),
quotas: None, quotas: None,
}; };
enforcer.set_action_policy(action_id, action_policy); enforcer.set_action_policy(action_id, action_policy);
@@ -345,6 +351,8 @@ async fn test_pack_specific_policy() {
let pack_policy = ExecutionPolicy { let pack_policy = ExecutionPolicy {
rate_limit: None, rate_limit: None,
concurrency_limit: Some(1), concurrency_limit: Some(1),
concurrency_method: PolicyMethod::Enqueue,
concurrency_parameters: Vec::new(),
quotas: None, quotas: None,
}; };
enforcer.set_pack_policy(pack_id, pack_policy); enforcer.set_pack_policy(pack_id, pack_policy);
@@ -388,6 +396,8 @@ async fn test_policy_priority() {
window_seconds: 60, window_seconds: 60,
}), }),
concurrency_limit: None, concurrency_limit: None,
concurrency_method: PolicyMethod::Enqueue,
concurrency_parameters: Vec::new(),
quotas: None, quotas: None,
}; };
let mut enforcer = PolicyEnforcer::with_global_policy(pool.clone(), global_policy); let mut enforcer = PolicyEnforcer::with_global_policy(pool.clone(), global_policy);
@@ -399,6 +409,8 @@ async fn test_policy_priority() {
window_seconds: 60, window_seconds: 60,
}), }),
concurrency_limit: None, concurrency_limit: None,
concurrency_method: PolicyMethod::Enqueue,
concurrency_parameters: Vec::new(),
quotas: None, quotas: None,
}; };
enforcer.set_action_policy(action_id, action_policy); enforcer.set_action_policy(action_id, action_policy);

View File

@@ -84,6 +84,7 @@ impl ArtifactManager {
// Store stdout // Store stdout
if !stdout.is_empty() { if !stdout.is_empty() {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Artifact filenames are fixed constants under an execution-scoped directory derived from the execution ID.
let stdout_path = exec_dir.join("stdout.log"); let stdout_path = exec_dir.join("stdout.log");
let mut file = fs::File::create(&stdout_path) let mut file = fs::File::create(&stdout_path)
.await .await
@@ -117,6 +118,7 @@ impl ArtifactManager {
// Store stderr // Store stderr
if !stderr.is_empty() { if !stderr.is_empty() {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Artifact filenames are fixed constants under an execution-scoped directory derived from the execution ID.
let stderr_path = exec_dir.join("stderr.log"); let stderr_path = exec_dir.join("stderr.log");
let mut file = fs::File::create(&stderr_path) let mut file = fs::File::create(&stderr_path)
.await .await
@@ -162,6 +164,7 @@ impl ArtifactManager {
.await .await
.map_err(|e| Error::Internal(format!("Failed to create execution directory: {}", e)))?; .map_err(|e| Error::Internal(format!("Failed to create execution directory: {}", e)))?;
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Result artifacts are written to a fixed filename inside the execution-scoped directory.
let result_path = exec_dir.join("result.json"); let result_path = exec_dir.join("result.json");
let result_json = serde_json::to_string_pretty(result)?; let result_json = serde_json::to_string_pretty(result)?;
@@ -209,6 +212,7 @@ impl ArtifactManager {
.await .await
.map_err(|e| Error::Internal(format!("Failed to create execution directory: {}", e)))?; .map_err(|e| Error::Internal(format!("Failed to create execution directory: {}", e)))?;
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Custom artifact paths are always rooted under the execution-scoped artifact directory.
let file_path = exec_dir.join(filename); let file_path = exec_dir.join(filename);
let mut file = fs::File::create(&file_path) let mut file = fs::File::create(&file_path)
.await .await
@@ -246,6 +250,7 @@ impl ArtifactManager {
/// Read an artifact /// Read an artifact
pub async fn read_artifact(&self, artifact: &Artifact) -> Result<Vec<u8>> { pub async fn read_artifact(&self, artifact: &Artifact) -> Result<Vec<u8>> {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Artifact reads use paths previously created by the artifact manager inside the configured artifact root.
fs::read(&artifact.path) fs::read(&artifact.path)
.await .await
.map_err(|e| Error::Internal(format!("Failed to read artifact: {}", e))) .map_err(|e| Error::Internal(format!("Failed to read artifact: {}", e)))

View File

@@ -474,6 +474,7 @@ impl ActionExecutor {
let actions_dir = pack_dir.join("actions"); let actions_dir = pack_dir.join("actions");
let actions_dir_exists = actions_dir.exists(); let actions_dir_exists = actions_dir.exists();
let actions_dir_contents: Vec<String> = if actions_dir_exists { let actions_dir_contents: Vec<String> = if actions_dir_exists {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Diagnostic directory listing is confined to the action pack directory derived from pack_ref.
std::fs::read_dir(&actions_dir) std::fs::read_dir(&actions_dir)
.map(|entries| { .map(|entries| {
entries entries
@@ -543,6 +544,16 @@ impl ActionExecutor {
selected_runtime_version, selected_runtime_version,
max_stdout_bytes: self.max_stdout_bytes, max_stdout_bytes: self.max_stdout_bytes,
max_stderr_bytes: self.max_stderr_bytes, max_stderr_bytes: self.max_stderr_bytes,
stdout_log_path: Some(
self.artifact_manager
.get_execution_dir(execution.id)
.join("stdout.log"),
),
stderr_log_path: Some(
self.artifact_manager
.get_execution_dir(execution.id)
.join("stderr.log"),
),
parameter_delivery: action.parameter_delivery, parameter_delivery: action.parameter_delivery,
parameter_format: action.parameter_format, parameter_format: action.parameter_format,
output_format: action.output_format, output_format: action.output_format,
@@ -892,6 +903,7 @@ impl ActionExecutor {
// Check if stderr log exists and is non-empty from artifact storage // Check if stderr log exists and is non-empty from artifact storage
let stderr_path = exec_dir.join("stderr.log"); let stderr_path = exec_dir.join("stderr.log");
if stderr_path.exists() { if stderr_path.exists() {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Log paths are fixed artifact filenames inside the execution-scoped directory.
if let Ok(contents) = tokio::fs::read_to_string(&stderr_path).await { if let Ok(contents) = tokio::fs::read_to_string(&stderr_path).await {
if !contents.trim().is_empty() { if !contents.trim().is_empty() {
result_data["stderr_log"] = result_data["stderr_log"] =
@@ -903,6 +915,7 @@ impl ActionExecutor {
// Check if stdout log exists from artifact storage // Check if stdout log exists from artifact storage
let stdout_path = exec_dir.join("stdout.log"); let stdout_path = exec_dir.join("stdout.log");
if stdout_path.exists() { if stdout_path.exists() {
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Log paths are fixed artifact filenames inside the execution-scoped directory.
if let Ok(contents) = tokio::fs::read_to_string(&stdout_path).await { if let Ok(contents) = tokio::fs::read_to_string(&stdout_path).await {
if !contents.is_empty() { if !contents.is_empty() {
result_data["stdout"] = serde_json::json!(contents); result_data["stdout"] = serde_json::json!(contents);
@@ -990,7 +1003,11 @@ impl ActionExecutor {
..Default::default()
};
// was: ExecutionRepository::update(&self.pool, execution_id, input).await?;
let execution = ExecutionRepository::find_by_id(&self.pool, execution_id)
.await?
.ok_or_else(|| anyhow::anyhow!("Execution {} not found", execution_id))?;
ExecutionRepository::update_loaded(&self.pool, &execution, input).await?;
Ok(())
}

View File

@@ -452,7 +452,7 @@ mod tests {
#[test]
fn test_detected_runtimes_json_structure() {
// Test the JSON structure that set_detected_runtimes builds
let runtimes = [ // was: let runtimes = vec![
DetectedRuntime {
name: "python".to_string(),
path: "/usr/bin/python3".to_string(),

View File

@@ -200,6 +200,8 @@ mod tests {
selected_runtime_version: None, selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024, max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024, max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(), parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(), parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(), output_format: OutputFormat::default(),
@@ -233,6 +235,8 @@ mod tests {
selected_runtime_version: None, selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024, max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024, max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(), parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(), parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(), output_format: OutputFormat::default(),

View File

@@ -2,9 +2,10 @@
//!
//! Provides bounded log writers that limit output size to prevent OOM issues.
use std::path::Path;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncWrite, AsyncWriteExt}; // was: use tokio::io::AsyncWrite;
const TRUNCATION_NOTICE_STDOUT: &str = "\n\n[OUTPUT TRUNCATED: stdout exceeded size limit]\n";
const TRUNCATION_NOTICE_STDERR: &str = "\n\n[OUTPUT TRUNCATED: stderr exceeded size limit]\n";
@@ -76,6 +77,15 @@ pub struct BoundedLogWriter {
truncation_notice: &'static str,
}
/// A file-backed writer that applies the same truncation policy as `BoundedLogWriter`.
pub struct BoundedLogFileWriter {
file: tokio::fs::File,
max_bytes: usize,
truncated: bool,
data_bytes_written: usize,
truncation_notice: &'static str,
}
impl BoundedLogWriter { impl BoundedLogWriter {
/// Create a new bounded log writer for stdout /// Create a new bounded log writer for stdout
pub fn new_stdout(max_bytes: usize) -> Self { pub fn new_stdout(max_bytes: usize) -> Self {
@@ -166,6 +176,76 @@ impl BoundedLogWriter {
} }
} }
impl BoundedLogFileWriter {
pub async fn new_stdout(path: &Path, max_bytes: usize) -> std::io::Result<Self> {
Self::create(path, max_bytes, TRUNCATION_NOTICE_STDOUT).await
}
pub async fn new_stderr(path: &Path, max_bytes: usize) -> std::io::Result<Self> {
Self::create(path, max_bytes, TRUNCATION_NOTICE_STDERR).await
}
async fn create(
path: &Path,
max_bytes: usize,
truncation_notice: &'static str,
) -> std::io::Result<Self> {
if let Some(parent) = path.parent() {
tokio::fs::create_dir_all(parent).await?;
}
let file = tokio::fs::OpenOptions::new()
.create(true)
.write(true)
.truncate(true)
.open(path)
.await?;
Ok(Self {
file,
max_bytes,
truncated: false,
data_bytes_written: 0,
truncation_notice,
})
}
pub async fn write_all(&mut self, buf: &[u8]) -> std::io::Result<()> {
if self.truncated {
return Ok(());
}
let effective_limit = self.max_bytes.saturating_sub(NOTICE_RESERVE_BYTES);
let remaining_space = effective_limit.saturating_sub(self.data_bytes_written);
if remaining_space == 0 {
self.add_truncation_notice().await?;
return Ok(());
}
let bytes_to_write = std::cmp::min(buf.len(), remaining_space);
if bytes_to_write > 0 {
self.file.write_all(&buf[..bytes_to_write]).await?;
self.data_bytes_written += bytes_to_write;
}
if bytes_to_write < buf.len() {
self.add_truncation_notice().await?;
}
self.file.flush().await
}
async fn add_truncation_notice(&mut self) -> std::io::Result<()> {
if self.truncated {
return Ok(());
}
self.truncated = true;
self.file.write_all(self.truncation_notice.as_bytes()).await
}
}
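// Illustrative sketch, not part of this change: the file-backed writer applies
// the same cap-and-notice policy as BoundedLogWriter, so a caller can stream
// chunks and rely on the truncation notice once the limit is hit. The temp-dir
// path and the 1 KiB cap below are assumptions for the example.
//
// let path = std::env::temp_dir().join("live-stdout.log");
// let mut writer = BoundedLogFileWriter::new_stdout(&path, 1024).await?;
// writer.write_all(b"first chunk\n").await?;
// writer.write_all(&[b'x'; 4096]).await?; // exceeds the cap; notice is appended
// writer.write_all(b"dropped after truncation\n").await?; // silently ignored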
impl AsyncWrite for BoundedLogWriter {
fn poll_write(
mut self: Pin<&mut Self>,

View File

@@ -48,7 +48,7 @@ pub use dependency::{
DependencyError, DependencyManager, DependencyManagerRegistry, DependencyResult,
DependencySpec, EnvironmentInfo,
};
pub use log_writer::{BoundedLogFileWriter, BoundedLogResult, BoundedLogWriter}; // was: pub use log_writer::{BoundedLogResult, BoundedLogWriter};
pub use parameter_passing::{ParameterDeliveryConfig, PreparedParameters};
// Re-export parameter types from common
@@ -148,6 +148,12 @@ pub struct ExecutionContext {
/// Maximum stderr size in bytes (for log truncation)
pub max_stderr_bytes: usize,
/// Optional live stdout log path for incremental writes during execution.
pub stdout_log_path: Option<PathBuf>,
/// Optional live stderr log path for incremental writes during execution.
pub stderr_log_path: Option<PathBuf>,
/// How parameters should be delivered to the action
pub parameter_delivery: ParameterDelivery,
@@ -185,6 +191,8 @@ impl ExecutionContext {
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),

View File

@@ -5,10 +5,11 @@
use super::{
parameter_passing::{self, ParameterDeliveryConfig},
BoundedLogFileWriter, BoundedLogWriter, ExecutionContext, ExecutionResult, Runtime,
RuntimeError, RuntimeResult,
// was: BoundedLogWriter, ExecutionContext, ExecutionResult, Runtime, RuntimeError, RuntimeResult,
};
use async_trait::async_trait;
use std::path::{Path, PathBuf}; // was: use std::path::PathBuf;
use std::process::Stdio;
use std::time::Instant;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
@@ -45,6 +46,8 @@ impl NativeRuntime {
timeout: Option<u64>,
max_stdout_bytes: usize,
max_stderr_bytes: usize,
stdout_log_path: Option<&Path>,
stderr_log_path: Option<&Path>,
) -> RuntimeResult<ExecutionResult> {
let start = Instant::now();
@@ -131,6 +134,8 @@ impl NativeRuntime {
let mut stdout_writer = BoundedLogWriter::new_stdout(max_stdout_bytes);
let mut stderr_writer = BoundedLogWriter::new_stderr(max_stderr_bytes);
let mut stdout_file = open_live_log_file(stdout_log_path, max_stdout_bytes, true).await?;
let mut stderr_file = open_live_log_file(stderr_log_path, max_stderr_bytes, false).await?;
// Create buffered readers
let mut stdout_reader = BufReader::new(stdout_handle);
@@ -147,6 +152,9 @@ impl NativeRuntime {
if stdout_writer.write_all(&line).await.is_err() {
break;
}
if let Some(file) = stdout_file.as_mut() {
let _ = file.write_all(&line).await;
}
}
Err(_) => break,
}
@@ -164,6 +172,9 @@ impl NativeRuntime {
if stderr_writer.write_all(&line).await.is_err() {
break;
}
if let Some(file) = stderr_file.as_mut() {
let _ = file.write_all(&line).await;
}
}
Err(_) => break,
}
@@ -352,6 +363,8 @@ impl Runtime for NativeRuntime {
context.timeout,
context.max_stdout_bytes,
context.max_stderr_bytes,
context.stdout_log_path.as_deref(),
context.stderr_log_path.as_deref(),
)
.await
}
@@ -401,6 +414,23 @@ impl Runtime for NativeRuntime {
}
}
async fn open_live_log_file(
path: Option<&Path>,
max_bytes: usize,
is_stdout: bool,
) -> std::io::Result<Option<BoundedLogFileWriter>> {
let Some(path) = path else {
return Ok(None);
};
let writer = if is_stdout {
BoundedLogFileWriter::new_stdout(path, max_bytes).await?
} else {
BoundedLogFileWriter::new_stderr(path, max_bytes).await?
};
Ok(Some(writer))
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -962,6 +962,8 @@ impl Runtime for ProcessRuntime {
context.max_stderr_bytes,
context.output_format,
context.cancel_token.clone(),
context.stdout_log_path.as_deref(),
context.stderr_log_path.as_deref(),
)
.await;
@@ -1144,6 +1146,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024,
max_stderr_bytes: 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1179,6 +1183,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024,
max_stderr_bytes: 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1214,6 +1220,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024,
max_stderr_bytes: 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1305,6 +1313,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024 * 1024,
max_stderr_bytes: 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1364,6 +1374,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024 * 1024,
max_stderr_bytes: 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1443,6 +1455,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024 * 1024,
max_stderr_bytes: 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1485,6 +1499,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024 * 1024,
max_stderr_bytes: 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1532,6 +1548,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024 * 1024,
max_stderr_bytes: 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1583,6 +1601,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024 * 1024,
max_stderr_bytes: 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),
@@ -1692,6 +1712,8 @@ mod tests {
selected_runtime_version: None,
max_stdout_bytes: 1024 * 1024,
max_stderr_bytes: 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),


@@ -12,10 +12,10 @@
//! 1. SIGTERM is sent to the process immediately
//! 2. After a 5-second grace period, SIGKILL is sent as a last resort
use super::{BoundedLogWriter, ExecutionResult, OutputFormat, RuntimeResult};
use super::{BoundedLogFileWriter, BoundedLogWriter, ExecutionResult, OutputFormat, RuntimeResult};
use std::collections::HashMap;
use std::io;
use std::path::Path;
use std::path::{Path, PathBuf};
use std::time::Instant;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::Command;
@@ -59,6 +59,8 @@ pub async fn execute_streaming(
max_stderr_bytes,
output_format,
None,
None,
None,
)
.await
}
@@ -93,6 +95,8 @@ pub async fn execute_streaming_cancellable(
max_stderr_bytes: usize,
output_format: OutputFormat,
cancel_token: Option<CancellationToken>,
stdout_log_path: Option<&Path>,
stderr_log_path: Option<&Path>,
) -> RuntimeResult<ExecutionResult> {
let start = Instant::now();
@@ -130,6 +134,8 @@ pub async fn execute_streaming_cancellable(
// Create bounded writers
let mut stdout_writer = BoundedLogWriter::new_stdout(max_stdout_bytes);
let mut stderr_writer = BoundedLogWriter::new_stderr(max_stderr_bytes);
let mut stdout_file = open_live_log_file(stdout_log_path, max_stdout_bytes, true).await?;
let mut stderr_file = open_live_log_file(stderr_log_path, max_stderr_bytes, false).await?;
// Take stdout and stderr streams
let stdout = child.stdout.take().expect("stdout not captured");
@@ -150,6 +156,9 @@ pub async fn execute_streaming_cancellable(
if stdout_writer.write_all(&line).await.is_err() {
break;
}
if let Some(file) = stdout_file.as_mut() {
let _ = file.write_all(&line).await;
}
}
Err(_) => break,
}
@@ -167,6 +176,9 @@ pub async fn execute_streaming_cancellable(
if stderr_writer.write_all(&line).await.is_err() {
break;
}
if let Some(file) = stderr_file.as_mut() {
let _ = file.write_all(&line).await;
}
}
Err(_) => break,
}
@@ -351,6 +363,24 @@ pub async fn execute_streaming_cancellable(
})
}
async fn open_live_log_file(
path: Option<&Path>,
max_bytes: usize,
is_stdout: bool,
) -> io::Result<Option<BoundedLogFileWriter>> {
let Some(path) = path else {
return Ok(None);
};
let path: PathBuf = path.to_path_buf();
let writer = if is_stdout {
BoundedLogFileWriter::new_stdout(&path, max_bytes).await?
} else {
BoundedLogFileWriter::new_stderr(&path, max_bytes).await?
};
Ok(Some(writer))
}
/// Parse stdout content according to the specified output format.
fn configure_child_process(cmd: &mut Command) -> io::Result<()> {
#[cfg(unix)]
@@ -704,6 +734,8 @@ mod tests {
1024 * 1024,
OutputFormat::Text,
Some(cancel_token),
None,
None,
)
.await
.unwrap();
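
The hunks above route each line through a `BoundedLogFileWriter`, but that type's definition is not part of this diff. As a rough sketch of what such a writer plausibly looks like — an assumption for illustration, not the project's actual implementation — a cap-enforcing file writer built on tokio might be:

```rust
// Illustrative sketch only: the real BoundedLogFileWriter is not shown in this
// change. Names and signatures mirror the call sites above (new_stdout/new_stderr
// taking a path and a byte cap, and an async write_all).
use std::path::Path;
use tokio::fs::File;
use tokio::io::AsyncWriteExt;

pub struct BoundedLogFileWriter {
    file: File,
    remaining: usize, // bytes still allowed before the cap is reached
}

impl BoundedLogFileWriter {
    pub async fn new_stdout(path: &Path, max_bytes: usize) -> std::io::Result<Self> {
        Ok(Self {
            file: File::create(path).await?,
            remaining: max_bytes,
        })
    }

    pub async fn new_stderr(path: &Path, max_bytes: usize) -> std::io::Result<Self> {
        // Identical behaviour in this sketch; kept separate to mirror the API used above.
        Self::new_stdout(path, max_bytes).await
    }

    pub async fn write_all(&mut self, buf: &[u8]) -> std::io::Result<()> {
        // Write at most `remaining` bytes, then silently drop anything over the cap.
        let take = buf.len().min(self.remaining);
        if take > 0 {
            self.file.write_all(&buf[..take]).await?;
            self.file.flush().await?;
            self.remaining -= take;
        }
        Ok(())
    }
}
```

The `let _ = file.write_all(&line).await;` call sites above suggest file-write failures are deliberately non-fatal for the execution itself, which is why the sketch keeps the error in the return type and leaves ignoring it to the caller.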


@@ -1,819 +0,0 @@
//! Python Runtime Implementation
//!
//! Executes Python actions using subprocess execution.
use super::{
BoundedLogWriter, DependencyManagerRegistry, DependencySpec, ExecutionContext, ExecutionResult,
OutputFormat, Runtime, RuntimeError, RuntimeResult,
};
use async_trait::async_trait;
use std::path::PathBuf;
use std::process::Stdio;
use std::sync::Arc;
use std::time::Instant;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::Command;
use tokio::time::timeout;
use tracing::{debug, info, warn};
/// Python runtime for executing Python scripts and functions
pub struct PythonRuntime {
/// Python interpreter path (fallback when no venv exists)
python_path: PathBuf,
/// Base directory for storing action code
work_dir: PathBuf,
/// Optional dependency manager registry for isolated environments
dependency_manager: Option<Arc<DependencyManagerRegistry>>,
}
impl PythonRuntime {
/// Create a new Python runtime
pub fn new() -> Self {
Self {
python_path: PathBuf::from("python3"),
work_dir: PathBuf::from("/tmp/attune/actions"),
dependency_manager: None,
}
}
/// Create a Python runtime with custom settings
pub fn with_config(python_path: PathBuf, work_dir: PathBuf) -> Self {
Self {
python_path,
work_dir,
dependency_manager: None,
}
}
/// Create a Python runtime with dependency manager support
pub fn with_dependency_manager(
python_path: PathBuf,
work_dir: PathBuf,
dependency_manager: Arc<DependencyManagerRegistry>,
) -> Self {
Self {
python_path,
work_dir,
dependency_manager: Some(dependency_manager),
}
}
/// Get the Python executable path to use for a given context
///
/// If the action has a pack_ref with dependencies, use the venv Python.
/// Otherwise, use the default Python interpreter.
async fn get_python_executable(&self, context: &ExecutionContext) -> RuntimeResult<PathBuf> {
// Check if we have a dependency manager and can extract pack_ref
if let Some(ref dep_mgr) = self.dependency_manager {
// Extract pack_ref from action_ref (format: "pack_ref.action_name")
if let Some(pack_ref) = context.action_ref.split('.').next() {
// Try to get the executable path for this pack
match dep_mgr.get_executable_path(pack_ref, "python").await {
Ok(python_path) => {
debug!(
"Using pack-specific Python from venv: {}",
python_path.display()
);
return Ok(python_path);
}
Err(e) => {
// Venv doesn't exist or failed - this is OK if pack has no dependencies
debug!(
"No venv found for pack {} ({}), using default Python",
pack_ref, e
);
}
}
}
}
// Fall back to default Python interpreter
debug!("Using default Python interpreter: {:?}", self.python_path);
Ok(self.python_path.clone())
}
/// Generate Python wrapper script that loads parameters and executes the action
fn generate_wrapper_script(&self, context: &ExecutionContext) -> RuntimeResult<String> {
let params_json = serde_json::to_string(&context.parameters)?;
// Use base64 encoding for code to avoid any quote/escape issues
let code_bytes = context.code.as_deref().unwrap_or("").as_bytes();
let code_base64 =
base64::Engine::encode(&base64::engine::general_purpose::STANDARD, code_bytes);
let wrapper = format!(
r#"#!/usr/bin/env python3
import sys
import json
import traceback
import base64
from pathlib import Path
# Global secrets storage (read from stdin, NOT from environment)
_attune_secrets = {{}}
def get_secret(name):
"""
Get a secret value by name.
Secrets are passed securely via stdin and are never exposed in
environment variables or process listings.
Args:
name (str): The name of the secret to retrieve
Returns:
str: The secret value, or None if not found
"""
return _attune_secrets.get(name)
def main():
global _attune_secrets
try:
# Read secrets from stdin FIRST (before executing action code)
# This prevents secrets from being visible in process environment
secrets_line = sys.stdin.readline().strip()
if secrets_line:
_attune_secrets = json.loads(secrets_line)
# Parse parameters
parameters = json.loads('''{}''')
# Decode action code from base64 (avoids quote/escape issues)
action_code = base64.b64decode('{}').decode('utf-8')
# Execute the code in a controlled namespace
# Include get_secret helper function
namespace = {{
'__name__': '__main__',
'parameters': parameters,
'get_secret': get_secret
}}
exec(action_code, namespace)
# Look for main function or run function
if '{}' in namespace:
result = namespace['{}'](**parameters)
elif 'run' in namespace:
result = namespace['run'](**parameters)
elif 'main' in namespace:
result = namespace['main'](**parameters)
else:
# No entry point found, return the namespace (only JSON-serializable values)
def is_json_serializable(obj):
"""Check if an object is JSON serializable"""
if obj is None:
return True
if isinstance(obj, (bool, int, float, str)):
return True
if isinstance(obj, (list, tuple)):
return all(is_json_serializable(item) for item in obj)
if isinstance(obj, dict):
return all(is_json_serializable(k) and is_json_serializable(v)
for k, v in obj.items())
return False
result = {{k: v for k, v in namespace.items()
if not k.startswith('__') and is_json_serializable(v)}}
# Output result as JSON
if result is not None:
print(json.dumps({{'result': result, 'status': 'success'}}))
else:
print(json.dumps({{'status': 'success'}}))
sys.exit(0)
except Exception as e:
error_info = {{
'status': 'error',
'error': str(e),
'error_type': type(e).__name__,
'traceback': traceback.format_exc()
}}
print(json.dumps(error_info), file=sys.stderr)
sys.exit(1)
if __name__ == '__main__':
main()
"#,
params_json, code_base64, context.entry_point, context.entry_point
);
Ok(wrapper)
}
/// Execute with streaming and bounded log collection
async fn execute_with_streaming(
&self,
mut cmd: Command,
secrets: &std::collections::HashMap<String, String>,
timeout_secs: Option<u64>,
max_stdout_bytes: usize,
max_stderr_bytes: usize,
output_format: OutputFormat,
) -> RuntimeResult<ExecutionResult> {
let start = Instant::now();
// Spawn process with piped I/O
let mut child = cmd
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()?;
// Write secrets to stdin
if let Some(mut stdin) = child.stdin.take() {
let secrets_json = serde_json::to_string(secrets)?;
stdin.write_all(secrets_json.as_bytes()).await?;
stdin.write_all(b"\n").await?;
drop(stdin);
}
// Create bounded writers
let mut stdout_writer = BoundedLogWriter::new_stdout(max_stdout_bytes);
let mut stderr_writer = BoundedLogWriter::new_stderr(max_stderr_bytes);
// Take stdout and stderr streams
let stdout = child.stdout.take().expect("stdout not captured");
let stderr = child.stderr.take().expect("stderr not captured");
// Create buffered readers
let mut stdout_reader = BufReader::new(stdout);
let mut stderr_reader = BufReader::new(stderr);
// Stream both outputs concurrently
let stdout_task = async {
let mut line = Vec::new();
loop {
line.clear();
match stdout_reader.read_until(b'\n', &mut line).await {
Ok(0) => break, // EOF
Ok(_) => {
if stdout_writer.write_all(&line).await.is_err() {
break;
}
}
Err(_) => break,
}
}
stdout_writer
};
let stderr_task = async {
let mut line = Vec::new();
loop {
line.clear();
match stderr_reader.read_until(b'\n', &mut line).await {
Ok(0) => break, // EOF
Ok(_) => {
if stderr_writer.write_all(&line).await.is_err() {
break;
}
}
Err(_) => break,
}
}
stderr_writer
};
// Wait for both streams and the process
let (stdout_writer, stderr_writer, wait_result) =
tokio::join!(stdout_task, stderr_task, async {
if let Some(timeout_secs) = timeout_secs {
timeout(std::time::Duration::from_secs(timeout_secs), child.wait()).await
} else {
Ok(child.wait().await)
}
});
let duration_ms = start.elapsed().as_millis() as u64;
// Handle timeout
let status = match wait_result {
Ok(Ok(status)) => status,
Ok(Err(e)) => {
return Err(RuntimeError::ProcessError(format!(
"Process wait failed: {}",
e
)));
}
Err(_) => {
return Ok(ExecutionResult {
exit_code: -1,
stdout: String::new(),
stderr: String::new(),
result: None,
duration_ms,
error: Some(format!(
"Execution timed out after {} seconds",
timeout_secs.unwrap()
)),
stdout_truncated: false,
stderr_truncated: false,
stdout_bytes_truncated: 0,
stderr_bytes_truncated: 0,
});
}
};
// Get results from bounded writers
let stdout_result = stdout_writer.into_result();
let stderr_result = stderr_writer.into_result();
let exit_code = status.code().unwrap_or(-1);
debug!(
"Python execution completed: exit_code={}, duration={}ms, stdout_truncated={}, stderr_truncated={}",
exit_code, duration_ms, stdout_result.truncated, stderr_result.truncated
);
// Parse result from stdout based on output_format
let result = if exit_code == 0 && !stdout_result.content.trim().is_empty() {
match output_format {
OutputFormat::Text => {
// No parsing - text output is captured in stdout field
None
}
OutputFormat::Json => {
// Try to parse full stdout as JSON first (handles multi-line JSON),
// then fall back to last line only (for scripts that log before output)
let trimmed = stdout_result.content.trim();
serde_json::from_str(trimmed).ok().or_else(|| {
trimmed
.lines()
.last()
.and_then(|line| serde_json::from_str(line).ok())
})
}
OutputFormat::Yaml => {
// Try to parse stdout as YAML
serde_yaml_ng::from_str(stdout_result.content.trim()).ok()
}
OutputFormat::Jsonl => {
// Parse each line as JSON and collect into array
let mut items = Vec::new();
for line in stdout_result.content.trim().lines() {
if let Ok(value) = serde_json::from_str::<serde_json::Value>(line) {
items.push(value);
}
}
if items.is_empty() {
None
} else {
Some(serde_json::Value::Array(items))
}
}
}
} else {
None
};
Ok(ExecutionResult {
exit_code,
// Only populate stdout if result wasn't parsed (avoid duplication)
stdout: if result.is_some() {
String::new()
} else {
stdout_result.content.clone()
},
stderr: stderr_result.content.clone(),
result,
duration_ms,
error: if exit_code != 0 {
Some(stderr_result.content)
} else {
None
},
stdout_truncated: stdout_result.truncated,
stderr_truncated: stderr_result.truncated,
stdout_bytes_truncated: stdout_result.bytes_truncated,
stderr_bytes_truncated: stderr_result.bytes_truncated,
})
}
async fn execute_python_code(
&self,
script: String,
secrets: &std::collections::HashMap<String, String>,
env: &std::collections::HashMap<String, String>,
timeout_secs: Option<u64>,
python_path: PathBuf,
max_stdout_bytes: usize,
max_stderr_bytes: usize,
output_format: OutputFormat,
) -> RuntimeResult<ExecutionResult> {
debug!(
"Executing Python script with {} secrets (passed via stdin)",
secrets.len()
);
// Build command
let mut cmd = Command::new(&python_path);
cmd.arg("-c").arg(&script);
// Add environment variables
for (key, value) in env {
cmd.env(key, value);
}
self.execute_with_streaming(
cmd,
secrets,
timeout_secs,
max_stdout_bytes,
max_stderr_bytes,
output_format,
)
.await
}
/// Execute Python script from file
async fn execute_python_file(
&self,
code_path: PathBuf,
secrets: &std::collections::HashMap<String, String>,
env: &std::collections::HashMap<String, String>,
timeout_secs: Option<u64>,
python_path: PathBuf,
max_stdout_bytes: usize,
max_stderr_bytes: usize,
output_format: OutputFormat,
) -> RuntimeResult<ExecutionResult> {
debug!(
"Executing Python file: {:?} with {} secrets",
code_path,
secrets.len()
);
// Build command
let mut cmd = Command::new(&python_path);
cmd.arg(&code_path);
// Add environment variables
for (key, value) in env {
cmd.env(key, value);
}
self.execute_with_streaming(
cmd,
secrets,
timeout_secs,
max_stdout_bytes,
max_stderr_bytes,
output_format,
)
.await
}
}
impl Default for PythonRuntime {
fn default() -> Self {
Self::new()
}
}
impl PythonRuntime {
/// Ensure pack dependencies are installed (called before execution if needed)
///
/// This is a helper method that can be called by the worker service to ensure
/// a pack's Python dependencies are set up before executing actions.
pub async fn ensure_pack_dependencies(
&self,
pack_ref: &str,
spec: &DependencySpec,
) -> RuntimeResult<()> {
if let Some(ref dep_mgr) = self.dependency_manager {
if spec.has_dependencies() {
info!(
"Ensuring Python dependencies for pack: {} ({} dependencies)",
pack_ref,
spec.dependencies.len()
);
dep_mgr
.ensure_environment(pack_ref, spec)
.await
.map_err(|e| {
RuntimeError::SetupError(format!(
"Failed to setup Python environment for {}: {}",
pack_ref, e
))
})?;
info!("Python dependencies ready for pack: {}", pack_ref);
} else {
debug!("Pack {} has no Python dependencies", pack_ref);
}
} else {
warn!("Dependency manager not configured, skipping dependency isolation");
}
Ok(())
}
}
#[async_trait]
impl Runtime for PythonRuntime {
fn name(&self) -> &str {
"python"
}
fn can_execute(&self, context: &ExecutionContext) -> bool {
// Check if action reference suggests Python
let is_python = context.action_ref.contains(".py")
|| context.entry_point.ends_with(".py")
|| context
.code_path
.as_ref()
.map(|p| p.extension().and_then(|e| e.to_str()) == Some("py"))
.unwrap_or(false);
is_python
}
async fn execute(&self, context: ExecutionContext) -> RuntimeResult<ExecutionResult> {
info!(
"Executing Python action: {} (execution_id: {})",
context.action_ref, context.execution_id
);
// Get the appropriate Python executable (venv or default)
let python_path = self.get_python_executable(&context).await?;
// If code_path is provided, execute the file directly
if let Some(code_path) = &context.code_path {
return self
.execute_python_file(
code_path.clone(),
&context.secrets,
&context.env,
context.timeout,
python_path,
context.max_stdout_bytes,
context.max_stderr_bytes,
context.output_format,
)
.await;
}
// Otherwise, generate wrapper script and execute
let script = self.generate_wrapper_script(&context)?;
self.execute_python_code(
script,
&context.secrets,
&context.env,
context.timeout,
python_path,
context.max_stdout_bytes,
context.max_stderr_bytes,
context.output_format,
)
.await
}
async fn setup(&self) -> RuntimeResult<()> {
info!("Setting up Python runtime");
// Ensure work directory exists
tokio::fs::create_dir_all(&self.work_dir)
.await
.map_err(|e| RuntimeError::SetupError(format!("Failed to create work dir: {}", e)))?;
// Verify Python is available
let output = Command::new(&self.python_path)
.arg("--version")
.output()
.await
.map_err(|e| {
RuntimeError::SetupError(format!(
"Python not found at {:?}: {}",
self.python_path, e
))
})?;
if !output.status.success() {
return Err(RuntimeError::SetupError(
"Python interpreter is not working".to_string(),
));
}
let version = String::from_utf8_lossy(&output.stdout);
info!("Python runtime ready: {}", version.trim());
Ok(())
}
async fn cleanup(&self) -> RuntimeResult<()> {
info!("Cleaning up Python runtime");
// Could clean up temporary files here
Ok(())
}
async fn validate(&self) -> RuntimeResult<()> {
debug!("Validating Python runtime");
// Check if Python is available
let output = Command::new(&self.python_path)
.arg("--version")
.output()
.await
.map_err(|e| RuntimeError::SetupError(format!("Python validation failed: {}", e)))?;
if !output.status.success() {
return Err(RuntimeError::SetupError(
"Python interpreter validation failed".to_string(),
));
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::collections::HashMap;
#[tokio::test]
async fn test_python_runtime_simple() {
let runtime = PythonRuntime::new();
let context = ExecutionContext {
execution_id: 1,
action_ref: "test.simple".to_string(),
parameters: {
let mut map = HashMap::new();
map.insert("x".to_string(), serde_json::json!(5));
map.insert("y".to_string(), serde_json::json!(10));
map
},
env: HashMap::new(),
secrets: HashMap::new(),
timeout: Some(10),
working_dir: None,
entry_point: "run".to_string(),
code: Some(
r#"
def run(x, y):
return x + y
"#
.to_string(),
),
code_path: None,
runtime_name: Some("python".to_string()),
runtime_config_override: None,
runtime_env_dir_suffix: None,
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
parameter_delivery: attune_common::models::ParameterDelivery::default(),
parameter_format: attune_common::models::ParameterFormat::default(),
output_format: attune_common::models::OutputFormat::default(),
};
let result = runtime.execute(context).await.unwrap();
assert!(result.is_success());
assert_eq!(result.exit_code, 0);
}
#[tokio::test]
async fn test_python_runtime_timeout() {
let runtime = PythonRuntime::new();
let context = ExecutionContext {
execution_id: 2,
action_ref: "test.timeout".to_string(),
parameters: HashMap::new(),
env: HashMap::new(),
secrets: HashMap::new(),
timeout: Some(1),
working_dir: None,
entry_point: "run".to_string(),
code: Some(
r#"
import time
def run():
time.sleep(10)
return "done"
"#
.to_string(),
),
code_path: None,
runtime_name: Some("python".to_string()),
runtime_config_override: None,
runtime_env_dir_suffix: None,
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
parameter_delivery: attune_common::models::ParameterDelivery::default(),
parameter_format: attune_common::models::ParameterFormat::default(),
output_format: attune_common::models::OutputFormat::default(),
};
let result = runtime.execute(context).await.unwrap();
assert!(!result.is_success());
assert!(result.error.is_some());
let error_msg = result.error.unwrap();
assert!(error_msg.contains("timeout") || error_msg.contains("timed out"));
}
#[tokio::test]
async fn test_python_runtime_error() {
let runtime = PythonRuntime::new();
let context = ExecutionContext {
execution_id: 3,
action_ref: "test.error".to_string(),
parameters: HashMap::new(),
env: HashMap::new(),
secrets: HashMap::new(),
timeout: Some(10),
working_dir: None,
entry_point: "run".to_string(),
code: Some(
r#"
def run():
raise ValueError("Test error")
"#
.to_string(),
),
code_path: None,
runtime_name: Some("python".to_string()),
runtime_config_override: None,
runtime_env_dir_suffix: None,
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
parameter_delivery: attune_common::models::ParameterDelivery::default(),
parameter_format: attune_common::models::ParameterFormat::default(),
output_format: attune_common::models::OutputFormat::default(),
};
let result = runtime.execute(context).await.unwrap();
assert!(!result.is_success());
assert!(result.error.is_some());
}
#[tokio::test]
#[ignore = "Pre-existing failure - secrets not being passed correctly"]
async fn test_python_runtime_with_secrets() {
let runtime = PythonRuntime::new();
let context = ExecutionContext {
execution_id: 4,
action_ref: "test.secrets".to_string(),
parameters: HashMap::new(),
env: HashMap::new(),
secrets: {
let mut s = HashMap::new();
s.insert("api_key".to_string(), "secret_key_12345".to_string());
s.insert("db_password".to_string(), "super_secret_pass".to_string());
s
},
timeout: Some(10),
working_dir: None,
entry_point: "run".to_string(),
code: Some(
r#"
def run():
# Access secrets via get_secret() helper
api_key = get_secret('api_key')
db_pass = get_secret('db_password')
missing = get_secret('nonexistent')
return {
'api_key': api_key,
'db_pass': db_pass,
'missing': missing
}
"#
.to_string(),
),
code_path: None,
runtime_name: Some("python".to_string()),
runtime_config_override: None,
runtime_env_dir_suffix: None,
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
parameter_delivery: attune_common::models::ParameterDelivery::default(),
parameter_format: attune_common::models::ParameterFormat::default(),
output_format: attune_common::models::OutputFormat::default(),
};
let result = runtime.execute(context).await.unwrap();
assert!(result.is_success());
assert_eq!(result.exit_code, 0);
// Verify secrets are accessible in action code
let result_data = result.result.unwrap();
let result_obj = result_data.get("result").unwrap();
assert_eq!(result_obj.get("api_key").unwrap(), "secret_key_12345");
assert_eq!(result_obj.get("db_pass").unwrap(), "super_secret_pass");
assert_eq!(result_obj.get("missing"), Some(&serde_json::Value::Null));
}
}


@@ -171,6 +171,7 @@ impl WorkerService {
let registration = Arc::new(RwLock::new(WorkerRegistration::new(pool.clone(), &config)));
// Initialize artifact manager (legacy, for stdout/stderr log storage)
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Worker artifact/config directories come from trusted process configuration, not request data.
let artifact_base_dir = std::path::PathBuf::from(
config
.worker
@@ -184,6 +185,7 @@ impl WorkerService {
// Initialize artifacts directory for file-backed artifact storage (shared volume).
// Execution processes write artifact files here; the API serves them from the same path.
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Artifact storage root is a trusted deployment configuration value.
let artifacts_dir = std::path::PathBuf::from(&config.artifacts_dir);
if let Err(e) = tokio::fs::create_dir_all(&artifacts_dir).await {
warn!(
@@ -198,7 +200,9 @@ impl WorkerService {
);
}
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Pack/runtime roots are trusted deployment configuration values.
let packs_base_dir = std::path::PathBuf::from(&config.packs_base_dir);
// nosemgrep: rust.actix.path-traversal.tainted-path.tainted-path -- Pack/runtime roots are trusted deployment configuration values.
let runtime_envs_dir = std::path::PathBuf::from(&config.runtime_envs_dir);
// Determine which runtimes to register based on configuration


@@ -86,6 +86,8 @@ fn make_context(action_ref: &str, entry_point: &str, runtime_name: &str) -> Exec
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: ParameterDelivery::default(),
parameter_format: ParameterFormat::default(),
output_format: OutputFormat::default(),


@@ -80,6 +80,8 @@ fn make_python_context(
selected_runtime_version: None,
max_stdout_bytes,
max_stderr_bytes,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
@@ -164,6 +166,8 @@ done
selected_runtime_version: None,
max_stdout_bytes: 400, // Small limit
max_stderr_bytes: 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
@@ -329,6 +333,8 @@ async fn test_shell_process_runtime_truncation() {
selected_runtime_version: None,
max_stdout_bytes: 500,
max_stderr_bytes: 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
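
Every test context in these hunks leaves the new fields as `None`; a test that actually exercises live log files would presumably point them at a temporary directory instead. A hypothetical sketch of that wiring — the field types are inferred from the `.as_deref()` call sites in this diff, and `base_context()` is an invented helper standing in for the field boilerplate shown above:

```rust
// Hypothetical usage sketch — not part of this change.
use std::path::PathBuf;

fn context_with_live_logs() -> ExecutionContext {
    let log_dir: PathBuf = std::env::temp_dir().join("attune-live-log-test");
    std::fs::create_dir_all(&log_dir).expect("create temp log dir");
    ExecutionContext {
        // Assumed to be Option<PathBuf>, matching the `None` defaults above.
        stdout_log_path: Some(log_dir.join("stdout.log")),
        stderr_log_path: Some(log_dir.join("stderr.log")),
        ..base_context() // invented helper returning a fully populated ExecutionContext
    }
}
```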


@@ -112,6 +112,8 @@ print(json.dumps(result))
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json,
@@ -207,6 +209,8 @@ echo "SECURITY_PASS: Secrets not in inherited environment and accessible via mer
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
@@ -272,6 +276,8 @@ print(json.dumps({'secret_a': secrets.get('secret_a')}))
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json,
@@ -318,6 +324,8 @@ print(json.dumps({
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json,
@@ -373,6 +381,8 @@ print("ok")
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
@@ -425,6 +435,8 @@ fi
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
@@ -507,6 +519,8 @@ echo "PASS: No secrets in environment"
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::default(),
@@ -588,6 +602,8 @@ print(json.dumps({"leaked": leaked}))
selected_runtime_version: None,
max_stdout_bytes: 10 * 1024 * 1024,
max_stderr_bytes: 10 * 1024 * 1024,
stdout_log_path: None,
stderr_log_path: None,
parameter_delivery: attune_worker::runtime::ParameterDelivery::default(),
parameter_format: attune_worker::runtime::ParameterFormat::default(),
output_format: attune_worker::runtime::OutputFormat::Json,


@@ -91,6 +91,30 @@ services:
- attune-network
restart: on-failure
# Build and extract statically-linked pack binaries (sensors, etc.)
# These binaries are built with musl for cross-architecture compatibility
# and placed directly into the packs volume for sensor containers to use.
init-pack-binaries:
build:
context: .
dockerfile: docker/Dockerfile.pack-binaries
target: pack-binaries-init
args:
BUILDKIT_INLINE_CACHE: 1
RUST_TARGET: ${PACK_BINARIES_RUST_TARGET:-x86_64-unknown-linux-musl}
container_name: attune-init-pack-binaries
volumes:
- packs_data:/opt/attune/packs
entrypoint:
[
"/bin/sh",
"-c",
"mkdir -p /opt/attune/packs/core/sensors && cp /pack-binaries/attune-core-timer-sensor /opt/attune/packs/core/sensors/attune-core-timer-sensor && chmod +x /opt/attune/packs/core/sensors/attune-core-timer-sensor && echo 'Pack binaries copied successfully'",
]
restart: "no"
networks:
- attune-network
# Initialize builtin packs
# Copies pack files to shared volume and loads them into database
init-packs:
@@ -117,6 +141,8 @@ services:
DEFAULT_ADMIN_PERMISSION_SET_REF: core.admin
command: ["/bin/sh", "/init-packs.sh"]
depends_on:
init-pack-binaries:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
@@ -136,6 +162,7 @@ services:
target: agent-init
args:
BUILDKIT_INLINE_CACHE: 1
RUST_TARGET: ${AGENT_RUST_TARGET:-x86_64-unknown-linux-musl}
container_name: attune-init-agent
volumes:
- agent_bin:/opt/attune/agent
@@ -209,8 +236,8 @@ services:
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
# Message Queue
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
# Cache
# Redis
ATTUNE__CACHE__URL: redis://redis:6379
ATTUNE__REDIS__URL: redis://redis:6379
# Worker config override
ATTUNE__WORKER__WORKER_TYPE: container
ports:
@@ -263,7 +290,7 @@ services:
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE__CACHE__URL: redis://redis:6379
ATTUNE__REDIS__URL: redis://redis:6379
ATTUNE__WORKER__WORKER_TYPE: container
volumes:
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro


@@ -4,18 +4,31 @@
# using musl, suitable for injection into arbitrary runtime containers. # using musl, suitable for injection into arbitrary runtime containers.
# #
# Stages: # Stages:
# builder - Cross-compile with musl for a fully static binary # builder - Cross-compile with cargo-zigbuild + musl for a fully static binary
# agent-binary - Minimal scratch image containing just the binary # agent-binary - Minimal scratch image containing just the binary
# agent-init - BusyBox-based image for use as a Kubernetes init container # agent-init - BusyBox-based image for use as a Kubernetes init container
# or Docker Compose volume-populating service (has `cp`) # or Docker Compose volume-populating service (has `cp`)
# #
# Architecture handling:
# Uses cargo-zigbuild for cross-compilation, which bundles all necessary
# cross-compilation toolchains internally. This allows building for any
# target architecture from any host — e.g., building aarch64 musl binaries
# on an x86_64 host, or vice versa. This matches the CI/CD pipeline approach.
#
# The RUST_TARGET build arg controls the output architecture:
# x86_64-unknown-linux-musl -> amd64 static binary (default)
# aarch64-unknown-linux-musl -> arm64 static binary
#
# Usage: # Usage:
# # Build for the default architecture (x86_64):
# DOCKER_BUILDKIT=1 docker buildx build --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
#
# # Build for arm64:
# DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
#
# # Build the minimal binary-only image: # # Build the minimal binary-only image:
# DOCKER_BUILDKIT=1 docker buildx build --target agent-binary -f docker/Dockerfile.agent -t attune-agent:binary . # DOCKER_BUILDKIT=1 docker buildx build --target agent-binary -f docker/Dockerfile.agent -t attune-agent:binary .
# #
# # Build the init container image (for volume population via `cp`):
# DOCKER_BUILDKIT=1 docker buildx build --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
#
# # Use in docker-compose.yaml to populate a shared volume: # # Use in docker-compose.yaml to populate a shared volume:
# # agent-init: # # agent-init:
# # image: attune-agent:latest # # image: attune-agent:latest
@@ -28,22 +41,41 @@
ARG RUST_VERSION=1.92 ARG RUST_VERSION=1.92
ARG DEBIAN_VERSION=bookworm ARG DEBIAN_VERSION=bookworm
ARG RUST_TARGET=x86_64-unknown-linux-musl
# ============================================================================ # ============================================================================
# Stage 1: Builder - Cross-compile a statically-linked binary with musl # Stage 1: Builder - Cross-compile a statically-linked binary with musl
# ============================================================================ # ============================================================================
FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder
# Install musl toolchain for static linking ARG RUST_TARGET
# Install build dependencies.
# - musl-tools: provides the musl libc headers needed for musl target builds
# - python3 + pip: needed to install ziglang (zig is the cross-compilation backend)
# - pkg-config, libssl-dev: needed for native dependency detection during build
# - file, binutils: for verifying the resulting binaries (file, strip)
RUN apt-get update && apt-get install -y \ RUN apt-get update && apt-get install -y \
musl-tools \ musl-tools \
pkg-config \ pkg-config \
libssl-dev \ libssl-dev \
ca-certificates \ ca-certificates \
file \
binutils \
python3 \
python3-pip \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
# Add the musl target for fully static binaries # Install zig (provides cross-compilation toolchains for all architectures)
RUN rustup target add x86_64-unknown-linux-musl # and cargo-zigbuild (cargo subcommand that uses zig as the linker/compiler).
# This replaces native musl-gcc and avoids the -m64 flag mismatch that occurs
# when the host arch doesn't match the target arch (e.g., building x86_64 musl
# binaries on an arm64 host).
RUN pip3 install --break-system-packages --no-cache-dir ziglang && \
cargo install --locked cargo-zigbuild
# Add the requested musl target for fully static binaries
RUN rustup target add ${RUST_TARGET}
WORKDIR /build WORKDIR /build
@@ -93,25 +125,30 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
# Build layer # Build layer
# Copy real source code and compile only the agent binary with musl # Copy real source code and compile only the agent binaries with musl
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
COPY migrations/ ./migrations/ COPY migrations/ ./migrations/
COPY crates/ ./crates/ COPY crates/ ./crates/
# Build the injected agent binaries, statically linked with musl. # Build the injected agent binaries, statically linked with musl.
# Uses cargo-zigbuild so that cross-compilation works regardless of host arch.
# Uses a dedicated cache ID (agent-target) so the musl target directory # Uses a dedicated cache ID (agent-target) so the musl target directory
# doesn't collide with the glibc target cache used by other Dockerfiles. # doesn't collide with the glibc target cache used by other Dockerfiles.
RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \ RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
--mount=type=cache,target=/usr/local/cargo/git,sharing=shared \ --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \
--mount=type=cache,id=agent-target,target=/build/target,sharing=locked \ --mount=type=cache,id=agent-target,target=/build/target,sharing=locked \
cargo build --release --target x86_64-unknown-linux-musl --bin attune-agent --bin attune-sensor-agent && \ cargo zigbuild --release --target ${RUST_TARGET} --bin attune-agent --bin attune-sensor-agent && \
cp /build/target/x86_64-unknown-linux-musl/release/attune-agent /build/attune-agent && \ cp /build/target/${RUST_TARGET}/release/attune-agent /build/attune-agent && \
cp /build/target/x86_64-unknown-linux-musl/release/attune-sensor-agent /build/attune-sensor-agent cp /build/target/${RUST_TARGET}/release/attune-sensor-agent /build/attune-sensor-agent
# Strip the binaries to minimize size # Strip the binaries to minimize size.
RUN strip /build/attune-agent && strip /build/attune-sensor-agent # When cross-compiling for a different architecture, the host strip may not
# understand the foreign binary format. In that case we skip stripping — the
# binary is still functional, just slightly larger.
RUN (strip /build/attune-agent 2>/dev/null && echo "stripped attune-agent" || echo "strip skipped for attune-agent (cross-arch binary)") && \
(strip /build/attune-sensor-agent 2>/dev/null && echo "stripped attune-sensor-agent" || echo "strip skipped for attune-sensor-agent (cross-arch binary)")
# Verify the binaries are statically linked and functional # Verify the binaries exist and show their details
RUN ls -lh /build/attune-agent /build/attune-sensor-agent && \ RUN ls -lh /build/attune-agent /build/attune-sensor-agent && \
file /build/attune-agent && \ file /build/attune-agent && \
file /build/attune-sensor-agent && \ file /build/attune-sensor-agent && \


@@ -1,12 +1,26 @@
# Dockerfile for building pack binaries independently # Dockerfile for building statically-linked pack binaries independently
# #
# This Dockerfile builds native pack binaries (sensors, etc.) with GLIBC compatibility # This Dockerfile builds native pack binaries (sensors, etc.) as fully static
# The binaries are built separately from service containers and placed in ./packs/ # musl binaries with zero runtime dependencies. Uses cargo-zigbuild for
# cross-compilation, allowing builds for any target architecture from any host
# (e.g., building x86_64 musl binaries on an arm64 Mac, or vice versa).
#
# Architecture handling:
# The RUST_TARGET build arg controls the output architecture:
# x86_64-unknown-linux-musl -> amd64 static binary (default)
# aarch64-unknown-linux-musl -> arm64 static binary
# #
# Usage: # Usage:
# docker build -f docker/Dockerfile.pack-binaries -t attune-pack-builder . # # Build for the default architecture (x86_64):
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.pack-binaries -t attune-pack-builder .
#
# # Build for arm64:
# DOCKER_BUILDKIT=1 docker build --build-arg RUST_TARGET=aarch64-unknown-linux-musl \
# -f docker/Dockerfile.pack-binaries -t attune-pack-builder .
#
# # Extract binaries:
# docker create --name pack-binaries attune-pack-builder # docker create --name pack-binaries attune-pack-builder
# docker cp pack-binaries:/build/pack-binaries/. ./packs/ # docker cp pack-binaries:/pack-binaries/. ./packs/
# docker rm pack-binaries # docker rm pack-binaries
# #
# Or use the provided script: # Or use the provided script:
@@ -14,25 +28,56 @@
ARG RUST_VERSION=1.92 ARG RUST_VERSION=1.92
ARG DEBIAN_VERSION=bookworm ARG DEBIAN_VERSION=bookworm
ARG RUST_TARGET=x86_64-unknown-linux-musl
# ============================================================================ # ============================================================================
# Stage 1: Builder - Build pack binaries with GLIBC 2.36 # Stage 1: Builder - Cross-compile statically-linked pack binaries with musl
# ============================================================================ # ============================================================================
FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder FROM rust:${RUST_VERSION}-${DEBIAN_VERSION} AS builder
# Install build dependencies ARG RUST_TARGET
# Install build dependencies.
# - musl-tools: provides the musl libc headers needed for musl target builds
# - python3 + pip: needed to install ziglang (zig is the cross-compilation backend)
# - pkg-config, libssl-dev: needed for native dependency detection during build
# - file, binutils: for verifying and stripping the resulting binaries
RUN apt-get update && apt-get install -y \ RUN apt-get update && apt-get install -y \
musl-tools \
pkg-config \ pkg-config \
libssl-dev \ libssl-dev \
ca-certificates \ ca-certificates \
file \
binutils \
python3 \
python3-pip \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
# Install zig (provides cross-compilation toolchains for all architectures)
# and cargo-zigbuild (cargo subcommand that uses zig as the linker/compiler).
# This replaces native musl-gcc and avoids the -m64 flag mismatch that occurs
# when the host arch doesn't match the target arch (e.g., building x86_64 musl
# binaries on an arm64 host).
RUN pip3 install --break-system-packages --no-cache-dir ziglang && \
cargo install --locked cargo-zigbuild
# Add the requested musl target for fully static binaries
RUN rustup target add ${RUST_TARGET}
WORKDIR /build WORKDIR /build
# Increase rustc stack size to prevent SIGSEGV during release builds # Increase rustc stack size to prevent SIGSEGV during release builds
ENV RUST_MIN_STACK=67108864 ENV RUST_MIN_STACK=67108864
# Copy workspace configuration # Enable SQLx offline mode — compile-time query checking without a live database
ENV SQLX_OFFLINE=true
# ---------------------------------------------------------------------------
# Dependency caching layer
# Copy only Cargo metadata first so `cargo fetch` is cached when only source
# code changes. This follows the same selective-copy optimization pattern as
# the other active Dockerfiles in this directory.
# ---------------------------------------------------------------------------
COPY Cargo.toml Cargo.lock ./ COPY Cargo.toml Cargo.lock ./
# Copy all workspace member manifests (required for workspace resolution) # Copy all workspace member manifests (required for workspace resolution)
@@ -45,35 +90,63 @@ COPY crates/worker/Cargo.toml ./crates/worker/Cargo.toml
COPY crates/notifier/Cargo.toml ./crates/notifier/Cargo.toml COPY crates/notifier/Cargo.toml ./crates/notifier/Cargo.toml
COPY crates/cli/Cargo.toml ./crates/cli/Cargo.toml
# Create minimal stub sources so cargo can resolve the workspace and fetch deps.
# These are ONLY used for `cargo fetch` — never compiled.
# NOTE: The worker crate has TWO binary targets (main + agent_main) and the
# sensor crate also has two binary targets (main + agent_main), so we create
# stubs for all of them.
RUN mkdir -p crates/common/src && echo "" > crates/common/src/lib.rs && \
    mkdir -p crates/api/src && echo "fn main(){}" > crates/api/src/main.rs && \
    mkdir -p crates/executor/src && echo "fn main(){}" > crates/executor/src/main.rs && \
    mkdir -p crates/executor/benches && echo "fn main(){}" > crates/executor/benches/context_clone.rs && \
    mkdir -p crates/sensor/src && echo "fn main(){}" > crates/sensor/src/main.rs && \
    echo "fn main(){}" > crates/sensor/src/agent_main.rs && \
    mkdir -p crates/core-timer-sensor/src && echo "fn main(){}" > crates/core-timer-sensor/src/main.rs && \
    mkdir -p crates/worker/src && echo "fn main(){}" > crates/worker/src/main.rs && \
    echo "fn main(){}" > crates/worker/src/agent_main.rs && \
    mkdir -p crates/notifier/src && echo "fn main(){}" > crates/notifier/src/main.rs && \
    mkdir -p crates/cli/src && echo "fn main(){}" > crates/cli/src/main.rs
# Download all dependencies (cached unless Cargo.toml/Cargo.lock change)
# registry/git use sharing=shared — cargo handles concurrent reads safely
RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
    --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \
    cargo fetch
# ---------------------------------------------------------------------------
# Build layer
# Copy real source code and compile only the pack binaries with musl
# ---------------------------------------------------------------------------
COPY migrations/ ./migrations/
COPY crates/common/ ./crates/common/
COPY crates/core-timer-sensor/ ./crates/core-timer-sensor/
# Build pack binaries with BuildKit cache mounts, statically linked with musl.
# Uses cargo-zigbuild so that cross-compilation works regardless of host arch.
# - registry/git use sharing=shared (cargo handles concurrent access safely)
# - target uses sharing=locked because zigbuild cross-compilation needs
#   exclusive access to the target directory
# - dedicated cache ID (target-pack-binaries-static) to avoid collisions with
#   other Dockerfiles' target caches
RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=shared \
    --mount=type=cache,target=/usr/local/cargo/git,sharing=shared \
    --mount=type=cache,id=target-pack-binaries-static,target=/build/target,sharing=locked \
    mkdir -p /build/pack-binaries && \
    cargo zigbuild --release --target ${RUST_TARGET} --bin attune-core-timer-sensor && \
    cp /build/target/${RUST_TARGET}/release/attune-core-timer-sensor /build/pack-binaries/attune-core-timer-sensor
# Strip the binary to minimize size.
# When cross-compiling for a different architecture, the host strip may not
# understand the foreign binary format. In that case we skip stripping — the
# binary is still functional, just slightly larger.
RUN (strip /build/pack-binaries/attune-core-timer-sensor 2>/dev/null && \
    echo "stripped attune-core-timer-sensor" || \
    echo "strip skipped for attune-core-timer-sensor (cross-arch binary)")
# Verify binaries were built successfully and are statically linked
RUN ls -lh /build/pack-binaries/attune-core-timer-sensor && \
    file /build/pack-binaries/attune-core-timer-sensor && \
    (ldd /build/pack-binaries/attune-core-timer-sensor 2>&1 || echo "statically linked (no dynamic dependencies)") && \
    /build/pack-binaries/attune-core-timer-sensor --version || echo "Built successfully"
# ============================================================================
@@ -87,3 +160,15 @@ COPY --from=builder /build/pack-binaries/ /pack-binaries/
# Default command (not used in FROM scratch)
CMD ["/bin/sh"]
# ============================================================================
# Stage 3: pack-binaries-init - Init container for volume population
# ============================================================================
# Uses busybox so we have `cp`, `sh`, etc. for use as a Docker Compose
# init service that copies pack binaries into the shared packs volume.
FROM busybox:1.36 AS pack-binaries-init
COPY --from=builder /build/pack-binaries/ /pack-binaries/
# No default entrypoint — docker-compose provides the command
ENTRYPOINT ["/bin/sh"]

View File

@@ -0,0 +1,64 @@
# Attune Docker Dist Bundle
This directory is a distributable Docker bundle built from the main workspace compose setup.
It is designed to run Attune without building the Rust services locally:
- `api`, `executor`, `notifier`, `agent`, and `web` pull published images
- database bootstrap, user bootstrap, and pack loading run from local scripts shipped in this bundle
- workers and sensor still use stock runtime images plus the published injected agent binaries
## Registry Defaults
The compose file defaults to:
- registry: `git.rdrx.app/attune-system`
- tag: `latest`
Override them with env vars:
```bash
export ATTUNE_IMAGE_REGISTRY=git.rdrx.app/attune-system
export ATTUNE_IMAGE_TAG=latest
```
If the registry requires auth:
```bash
docker login git.rdrx.app
```
## Run
From this directory:
```bash
docker compose up -d
```
Or with an explicit tag:
```bash
ATTUNE_IMAGE_TAG=sha-xxxxxxxxxxxx docker compose up -d
```
## Rebuild Bundle
Refresh this bundle and create a tarball from the workspace root:
```bash
bash scripts/package-docker-dist.sh
```
## Included Assets
- `docker-compose.yaml` - published-image compose stack
- `config.docker.yaml` - container config mounted into services
- `docker/` - init scripts and SQL helpers
- `migrations/` - schema migrations for the bootstrap job
- `packs/core/` - builtin core pack content
- `scripts/load_core_pack.py` - pack loader used by `init-packs`
## Current Limitation
The publish workflow does not currently publish dedicated worker or sensor runtime images. This bundle therefore keeps using stock runtime images with the published `attune/agent` image for injection.
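The injection pattern is visible in `docker-compose.yaml`: a stock runtime image mounts the shared `agent_bin` volume (populated by the `init-agent` service) and runs the copied agent binary as its entrypoint. A trimmed sketch of one such worker service (config mount and secret env vars omitted for brevity):

```yaml
worker-python:
  image: python:3.12-slim
  entrypoint: ["/opt/attune/agent/attune-agent"]
  environment:
    ATTUNE_WORKER_TYPE: container
    ATTUNE_WORKER_NAME: worker-python-01
    ATTUNE_API_URL: http://attune-api:8080
  volumes:
    - agent_bin:/opt/attune/agent:ro
    - packs_data:/opt/attune/packs:ro
  depends_on:
    init-agent:
      condition: service_completed_successfully
```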

View File

@@ -0,0 +1,139 @@
# Attune Docker Environment Configuration
#
# This file is mounted into containers at /opt/attune/config/config.yaml.
# It provides base values for Docker deployments.
#
# Sensitive values (jwt_secret, encryption_key) are overridden by environment
# variables set in docker-compose.yaml using the ATTUNE__ prefix convention:
# ATTUNE__SECURITY__JWT_SECRET=...
# ATTUNE__SECURITY__ENCRYPTION_KEY=...
#
# The `config` crate does NOT support ${VAR} shell interpolation in YAML.
# All overrides must use ATTUNE__<SECTION>__<KEY> environment variables.
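# For example, following this convention, ATTUNE__DATABASE__MAX_CONNECTIONS=50
# would override database.max_connections below (section and key are upper-cased
# and joined with double underscores).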
environment: docker
# Docker database (PostgreSQL container)
database:
url: postgresql://attune:attune@postgres:5432/attune
max_connections: 20
min_connections: 5
connect_timeout: 30
idle_timeout: 600
log_statements: false
schema: "public"
# Docker message queue (RabbitMQ container)
message_queue:
url: amqp://attune:attune@rabbitmq:5672
exchange: attune
enable_dlq: true
message_ttl: 3600 # seconds
# Docker cache (Redis container)
redis:
url: redis://redis:6379
pool_size: 10
# API server configuration
server:
host: 0.0.0.0
port: 8080
request_timeout: 60
enable_cors: true
cors_origins:
- http://localhost
- http://localhost:3000
- http://localhost:3001
- http://localhost:3002
- http://localhost:5173
- http://127.0.0.1:3000
- http://127.0.0.1:3001
- http://127.0.0.1:3002
- http://127.0.0.1:5173
- http://web
- http://web:3000
max_body_size: 10485760 # 10MB
# Logging configuration
log:
level: info
format: json # Structured logs for container environments
console: true
# Security settings
# jwt_secret and encryption_key are intentional placeholders — they MUST be
# overridden via ATTUNE__SECURITY__JWT_SECRET and ATTUNE__SECURITY__ENCRYPTION_KEY
# environment variables in docker-compose.yaml (or a .env file).
security:
jwt_secret: override-via-ATTUNE__SECURITY__JWT_SECRET-env-var
jwt_access_expiration: 3600 # 1 hour
jwt_refresh_expiration: 604800 # 7 days
encryption_key: override-via-ATTUNE__SECURITY__ENCRYPTION_KEY-env-var
enable_auth: true
allow_self_registration: false
login_page:
show_local_login: true
show_oidc_login: true
show_ldap_login: true
oidc:
enabled: false
# Uncomment and configure for your OIDC provider:
# discovery_url: https://auth.example.com/.well-known/openid-configuration
# client_id: your-client-id
# client_secret: your-client-secret
# provider_name: sso
# provider_label: SSO Login
# provider_icon_url: https://auth.example.com/favicon.ico
# redirect_uri: http://localhost:3000/auth/callback
# post_logout_redirect_uri: http://localhost:3000/login
# scopes:
# - groups
# Packs directory (mounted volume in containers)
packs_base_dir: /opt/attune/packs
# Runtime environments directory (isolated envs like virtualenvs, node_modules).
# Kept separate from packs so pack directories remain clean and read-only.
# Pattern: {runtime_envs_dir}/{pack_ref}/{runtime_name}
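#   e.g. /opt/attune/runtime_envs/core/python for the core pack's python runtime env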
runtime_envs_dir: /opt/attune/runtime_envs
# Artifacts directory (shared volume for file-based artifact storage).
# File-type artifacts are written here by execution processes and served by the API.
# Pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}
artifacts_dir: /opt/attune/artifacts
# Executor service configuration
executor:
scheduled_timeout: 300 # 5 minutes - fail executions stuck in SCHEDULED
timeout_check_interval: 60 # Check every minute for stale executions
enable_timeout_monitor: true
# Worker service configuration
worker:
worker_type: container
max_concurrent_tasks: 20
heartbeat_interval: 10 # Reduced from 30s for faster stale detection (staleness = 30s)
task_timeout: 300
max_stdout_bytes: 10485760 # 10MB
max_stderr_bytes: 10485760 # 10MB
shutdown_timeout: 30
stream_logs: true
# Sensor service configuration
sensor:
max_concurrent_sensors: 50
heartbeat_interval: 10 # Reduced from 30s for faster stale detection
poll_interval: 10
sensor_timeout: 300
shutdown_timeout: 30
# Notifier service configuration
notifier:
host: 0.0.0.0
port: 8081
max_connections: 1000
# Agent binary distribution (serves the agent binary via API for remote downloads)
agent:
binary_dir: /opt/attune/agent

View File

@@ -0,0 +1,601 @@
name: attune
services:
postgres:
image: timescale/timescaledb:2.17.2-pg16
container_name: attune-postgres
environment:
POSTGRES_USER: attune
POSTGRES_PASSWORD: attune
POSTGRES_DB: attune
PGDATA: /var/lib/postgresql/data/pgdata
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U attune"]
interval: 10s
timeout: 5s
retries: 5
networks:
- attune-network
restart: unless-stopped
migrations:
image: postgres:16-alpine
container_name: attune-migrations
volumes:
- ./migrations:/migrations:ro
- ./docker/run-migrations.sh:/run-migrations.sh:ro
- ./docker/init-roles.sql:/docker/init-roles.sql:ro
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_USER: attune
DB_PASSWORD: attune
DB_NAME: attune
MIGRATIONS_DIR: /migrations
command: ["/bin/sh", "/run-migrations.sh"]
depends_on:
postgres:
condition: service_healthy
networks:
- attune-network
restart: on-failure
init-user:
image: postgres:16-alpine
container_name: attune-init-user
volumes:
- ./docker/init-user.sh:/init-user.sh:ro
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_USER: attune
DB_PASSWORD: attune
DB_NAME: attune
DB_SCHEMA: public
TEST_LOGIN: ${ATTUNE_TEST_LOGIN:-test@attune.local}
TEST_PASSWORD: ${ATTUNE_TEST_PASSWORD:-TestPass123!}
TEST_DISPLAY_NAME: ${ATTUNE_TEST_DISPLAY_NAME:-Test User}
command: ["/bin/sh", "/init-user.sh"]
depends_on:
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
networks:
- attune-network
restart: on-failure
# Build and extract statically-linked pack binaries (sensors, etc.)
# These binaries are built with musl for cross-architecture compatibility
# and placed directly into the packs volume for sensor containers to use.
init-pack-binaries:
image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/pack-builder:${ATTUNE_IMAGE_TAG:-latest}
container_name: attune-init-pack-binaries
volumes:
- packs_data:/opt/attune/packs
entrypoint:
[
"/bin/sh",
"-c",
"mkdir -p /opt/attune/packs/core/sensors && cp /pack-binaries/attune-core-timer-sensor /opt/attune/packs/core/sensors/attune-core-timer-sensor && chmod +x /opt/attune/packs/core/sensors/attune-core-timer-sensor && echo 'Pack binaries copied successfully'",
]
restart: "no"
networks:
- attune-network
init-packs:
image: python:3.11-slim
container_name: attune-init-packs
volumes:
- ./packs:/source/packs:ro
- ./scripts/load_core_pack.py:/scripts/load_core_pack.py:ro
- ./docker/init-packs.sh:/init-packs.sh:ro
- packs_data:/opt/attune/packs
- runtime_envs:/opt/attune/runtime_envs
- artifacts_data:/opt/attune/artifacts
environment:
DB_HOST: postgres
DB_PORT: 5432
DB_USER: attune
DB_PASSWORD: attune
DB_NAME: attune
DB_SCHEMA: public
SOURCE_PACKS_DIR: /source/packs
TARGET_PACKS_DIR: /opt/attune/packs
LOADER_SCRIPT: /scripts/load_core_pack.py
DEFAULT_ADMIN_LOGIN: ${ATTUNE_TEST_LOGIN:-test@attune.local}
DEFAULT_ADMIN_PERMISSION_SET_REF: core.admin
command: ["/bin/sh", "/init-packs.sh"]
depends_on:
init-pack-binaries:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
networks:
- attune-network
restart: on-failure
entrypoint: ""
init-agent:
image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/agent:${ATTUNE_IMAGE_TAG:-latest}
container_name: attune-init-agent
volumes:
- agent_bin:/opt/attune/agent
entrypoint:
[
"/bin/sh",
"-c",
"cp /usr/local/bin/attune-agent /opt/attune/agent/attune-agent && cp /usr/local/bin/attune-sensor-agent /opt/attune/agent/attune-sensor-agent && chmod +x /opt/attune/agent/attune-agent /opt/attune/agent/attune-sensor-agent && /usr/local/bin/attune-agent --version > /opt/attune/agent/attune-agent.version && /usr/local/bin/attune-sensor-agent --version > /opt/attune/agent/attune-sensor-agent.version && echo 'Agent binaries copied successfully'",
]
restart: "no"
networks:
- attune-network
rabbitmq:
image: rabbitmq:3.13-management-alpine
container_name: attune-rabbitmq
environment:
RABBITMQ_DEFAULT_USER: attune
RABBITMQ_DEFAULT_PASS: attune
RABBITMQ_DEFAULT_VHOST: /
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
interval: 10s
timeout: 5s
retries: 5
networks:
- attune-network
restart: unless-stopped
redis:
image: redis:7-alpine
container_name: attune-redis
ports:
- "6379:6379"
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
networks:
- attune-network
restart: unless-stopped
command: redis-server --appendonly yes
api:
image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/api:${ATTUNE_IMAGE_TAG:-latest}
container_name: attune-api
environment:
RUST_LOG: info
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE__REDIS__URL: redis://redis:6379
ATTUNE__WORKER__WORKER_TYPE: container
ports:
- "8080:8080"
volumes:
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- packs_data:/opt/attune/packs:rw
- runtime_envs:/opt/attune/runtime_envs
- artifacts_data:/opt/attune/artifacts
- api_logs:/opt/attune/logs
- agent_bin:/opt/attune/agent:ro
depends_on:
init-agent:
condition: service_completed_successfully
init-packs:
condition: service_completed_successfully
init-user:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
executor:
image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/executor:${ATTUNE_IMAGE_TAG:-latest}
container_name: attune-executor
environment:
RUST_LOG: info
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE__REDIS__URL: redis://redis:6379
ATTUNE__WORKER__WORKER_TYPE: container
volumes:
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- packs_data:/opt/attune/packs:ro
- artifacts_data:/opt/attune/artifacts:ro
- executor_logs:/opt/attune/logs
depends_on:
init-packs:
condition: service_completed_successfully
init-user:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
redis:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "kill -0 1 || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
worker-shell:
image: debian:bookworm-slim
container_name: attune-worker-shell
entrypoint: ["/opt/attune/agent/attune-agent"]
stop_grace_period: 45s
environment:
RUST_LOG: info
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE_WORKER_TYPE: container
ATTUNE_WORKER_NAME: worker-shell-01
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE_API_URL: http://attune-api:8080
volumes:
- agent_bin:/opt/attune/agent:ro
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- packs_data:/opt/attune/packs:ro
- runtime_envs:/opt/attune/runtime_envs
- artifacts_data:/opt/attune/artifacts
- worker_shell_logs:/opt/attune/logs
depends_on:
init-agent:
condition: service_completed_successfully
init-packs:
condition: service_completed_successfully
init-user:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "pgrep -f attune-agent || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
worker-python:
image: python:3.12-slim
container_name: attune-worker-python
entrypoint: ["/opt/attune/agent/attune-agent"]
stop_grace_period: 45s
environment:
RUST_LOG: info
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE_WORKER_TYPE: container
ATTUNE_WORKER_NAME: worker-python-01
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE_API_URL: http://attune-api:8080
volumes:
- agent_bin:/opt/attune/agent:ro
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- packs_data:/opt/attune/packs:ro
- runtime_envs:/opt/attune/runtime_envs
- artifacts_data:/opt/attune/artifacts
- worker_python_logs:/opt/attune/logs
depends_on:
init-agent:
condition: service_completed_successfully
init-packs:
condition: service_completed_successfully
init-user:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "pgrep -f attune-agent || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
worker-node:
image: node:22-slim
container_name: attune-worker-node
entrypoint: ["/opt/attune/agent/attune-agent"]
stop_grace_period: 45s
environment:
RUST_LOG: info
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE_WORKER_TYPE: container
ATTUNE_WORKER_NAME: worker-node-01
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE_API_URL: http://attune-api:8080
volumes:
- agent_bin:/opt/attune/agent:ro
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- packs_data:/opt/attune/packs:ro
- runtime_envs:/opt/attune/runtime_envs
- artifacts_data:/opt/attune/artifacts
- worker_node_logs:/opt/attune/logs
depends_on:
init-agent:
condition: service_completed_successfully
init-packs:
condition: service_completed_successfully
init-user:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "pgrep -f attune-agent || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
worker-full:
image: nikolaik/python-nodejs:python3.12-nodejs22-slim
container_name: attune-worker-full
entrypoint: ["/opt/attune/agent/attune-agent"]
stop_grace_period: 45s
environment:
RUST_LOG: info
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE_WORKER_RUNTIMES: shell,python,node,native
ATTUNE_WORKER_TYPE: container
ATTUNE_WORKER_NAME: worker-full-01
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE_API_URL: http://attune-api:8080
volumes:
- agent_bin:/opt/attune/agent:ro
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- packs_data:/opt/attune/packs:ro
- runtime_envs:/opt/attune/runtime_envs
- artifacts_data:/opt/attune/artifacts
- worker_full_logs:/opt/attune/logs
depends_on:
init-agent:
condition: service_completed_successfully
init-packs:
condition: service_completed_successfully
init-user:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "pgrep -f attune-agent || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
sensor:
image: nikolaik/python-nodejs:python3.12-nodejs22-slim
container_name: attune-sensor
entrypoint: ["/opt/attune/agent/attune-sensor-agent"]
stop_grace_period: 45s
environment:
RUST_LOG: debug
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE_SENSOR_RUNTIMES: shell,python,node,native
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE__WORKER__WORKER_TYPE: container
ATTUNE_API_URL: http://attune-api:8080
ATTUNE_MQ_URL: amqp://attune:attune@rabbitmq:5672
ATTUNE_PACKS_BASE_DIR: /opt/attune/packs
volumes:
- agent_bin:/opt/attune/agent:ro
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- packs_data:/opt/attune/packs:rw
- runtime_envs:/opt/attune/runtime_envs
- sensor_logs:/opt/attune/logs
depends_on:
init-agent:
condition: service_completed_successfully
init-packs:
condition: service_completed_successfully
init-user:
condition: service_completed_successfully
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "kill -0 1 || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
notifier:
image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/notifier:${ATTUNE_IMAGE_TAG:-latest}
container_name: attune-notifier
environment:
RUST_LOG: info
ATTUNE_CONFIG: /opt/attune/config/config.yaml
ATTUNE__SECURITY__JWT_SECRET: ${JWT_SECRET:-docker-dev-secret-change-in-production}
ATTUNE__SECURITY__ENCRYPTION_KEY: ${ENCRYPTION_KEY:-docker-dev-encryption-key-please-change-in-production-32plus}
ATTUNE__DATABASE__URL: postgresql://attune:attune@postgres:5432/attune
ATTUNE__DATABASE__SCHEMA: public
ATTUNE__MESSAGE_QUEUE__URL: amqp://attune:attune@rabbitmq:5672
ATTUNE__WORKER__WORKER_TYPE: container
ports:
- "8081:8081"
volumes:
- ${ATTUNE_DOCKER_CONFIG_PATH:-./config.docker.yaml}:/opt/attune/config/config.yaml:ro
- notifier_logs:/opt/attune/logs
depends_on:
migrations:
condition: service_completed_successfully
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8081/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
networks:
- attune-network
restart: unless-stopped
web:
image: ${ATTUNE_IMAGE_REGISTRY:-git.rdrx.app/attune-system}/attune/web:${ATTUNE_IMAGE_TAG:-latest}
container_name: attune-web
environment:
API_URL: ${API_URL:-http://localhost:8080}
WS_URL: ${WS_URL:-ws://localhost:8081}
ENVIRONMENT: docker
ports:
- "3000:80"
depends_on:
api:
condition: service_healthy
notifier:
condition: service_healthy
healthcheck:
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://localhost/health",
]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
networks:
- attune-network
restart: unless-stopped
volumes:
postgres_data:
driver: local
rabbitmq_data:
driver: local
redis_data:
driver: local
api_logs:
driver: local
executor_logs:
driver: local
worker_shell_logs:
driver: local
worker_python_logs:
driver: local
worker_node_logs:
driver: local
worker_full_logs:
driver: local
sensor_logs:
driver: local
notifier_logs:
driver: local
packs_data:
driver: local
runtime_envs:
driver: local
artifacts_data:
driver: local
agent_bin:
driver: local
networks:
attune-network:
driver: bridge
ipam:
config:
- subnet: 172.28.0.0/16

View File

@@ -0,0 +1,296 @@
#!/bin/sh
# Initialize builtin packs for Attune
# This script copies pack files to the shared volume and registers them in the database
# Designed to run on python:3.11-slim (Debian-based) image
set -e
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration from environment
DB_HOST="${DB_HOST:-postgres}"
DB_PORT="${DB_PORT:-5432}"
DB_USER="${DB_USER:-attune}"
DB_PASSWORD="${DB_PASSWORD:-attune}"
DB_NAME="${DB_NAME:-attune}"
DB_SCHEMA="${DB_SCHEMA:-public}"
# Pack directories
SOURCE_PACKS_DIR="${SOURCE_PACKS_DIR:-/source/packs}"
TARGET_PACKS_DIR="${TARGET_PACKS_DIR:-/opt/attune/packs}"
# Python loader script
LOADER_SCRIPT="${LOADER_SCRIPT:-/scripts/load_core_pack.py}"
DEFAULT_ADMIN_LOGIN="${DEFAULT_ADMIN_LOGIN:-}"
DEFAULT_ADMIN_PERMISSION_SET_REF="${DEFAULT_ADMIN_PERMISSION_SET_REF:-core.admin}"
echo ""
echo -e "${BLUE}╔════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Attune Builtin Packs Initialization ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════╝${NC}"
echo ""
# Install Python dependencies
echo -e "${YELLOW}${NC} Installing Python dependencies..."
if pip install --quiet --no-cache-dir psycopg2-binary pyyaml; then
echo -e "${GREEN}${NC} Python dependencies installed"
else
echo -e "${RED}${NC} Failed to install Python dependencies"
exit 1
fi
echo ""
# Wait for database to be ready (using Python instead of psql to avoid needing postgresql-client)
echo -e "${YELLOW}${NC} Waiting for database to be ready..."
until python3 -c "
import psycopg2, sys
try:
conn = psycopg2.connect(host='$DB_HOST', port=$DB_PORT, user='$DB_USER', password='$DB_PASSWORD', dbname='$DB_NAME', connect_timeout=3)
conn.close()
sys.exit(0)
except Exception:
sys.exit(1)
" 2>/dev/null; do
echo -e "${YELLOW} ...${NC} Database is unavailable - sleeping"
sleep 2
done
echo -e "${GREEN}${NC} Database is ready"
# Create target packs directory if it doesn't exist
echo -e "${YELLOW}${NC} Ensuring packs directory exists..."
mkdir -p "$TARGET_PACKS_DIR"
# Ensure the attune user (uid 1000) can write to the packs directory
# so the API service can install packs at runtime
chown -R 1000:1000 "$TARGET_PACKS_DIR"
echo -e "${GREEN}${NC} Packs directory ready at: $TARGET_PACKS_DIR"
# Initialise runtime environments volume with correct ownership.
# Workers (running as attune uid 1000) need write access to create
# virtualenvs, node_modules, etc. at runtime.
RUNTIME_ENVS_DIR="${RUNTIME_ENVS_DIR:-/opt/attune/runtime_envs}"
if [ -d "$RUNTIME_ENVS_DIR" ] || mkdir -p "$RUNTIME_ENVS_DIR" 2>/dev/null; then
chown -R 1000:1000 "$RUNTIME_ENVS_DIR"
echo -e "${GREEN}${NC} Runtime environments directory ready at: $RUNTIME_ENVS_DIR"
else
echo -e "${YELLOW}${NC} Runtime environments directory not mounted, skipping"
fi
# Initialise artifacts volume with correct ownership.
# The API service (which creates directories for file-backed artifact versions) and
# the workers (which write artifact files during execution) both run as attune uid 1000.
ARTIFACTS_DIR="${ARTIFACTS_DIR:-/opt/attune/artifacts}"
if [ -d "$ARTIFACTS_DIR" ] || mkdir -p "$ARTIFACTS_DIR" 2>/dev/null; then
chown -R 1000:1000 "$ARTIFACTS_DIR"
echo -e "${GREEN}${NC} Artifacts directory ready at: $ARTIFACTS_DIR"
else
echo -e "${YELLOW}${NC} Artifacts directory not mounted, skipping"
fi
# Check if source packs directory exists
if [ ! -d "$SOURCE_PACKS_DIR" ]; then
echo -e "${RED}${NC} Source packs directory not found: $SOURCE_PACKS_DIR"
exit 1
fi
# Find all pack directories (directories with pack.yaml)
echo ""
echo -e "${BLUE}Discovering builtin packs...${NC}"
echo "----------------------------------------"
PACK_COUNT=0
COPIED_COUNT=0
LOADED_COUNT=0
for pack_dir in "$SOURCE_PACKS_DIR"/*; do
if [ -d "$pack_dir" ]; then
pack_name=$(basename "$pack_dir")
pack_yaml="$pack_dir/pack.yaml"
if [ -f "$pack_yaml" ]; then
PACK_COUNT=$((PACK_COUNT + 1))
echo -e "${BLUE}${NC} Found pack: ${GREEN}$pack_name${NC}"
# Check if pack already exists in target
target_pack_dir="$TARGET_PACKS_DIR/$pack_name"
if [ -d "$target_pack_dir" ]; then
# Pack exists, update files to ensure we have latest (especially binaries)
echo -e "${YELLOW}${NC} Pack exists at: $target_pack_dir, updating files..."
if cp -rf "$pack_dir"/* "$target_pack_dir"/; then
echo -e "${GREEN}${NC} Updated pack files at: $target_pack_dir"
else
echo -e "${RED}${NC} Failed to update pack"
exit 1
fi
else
# Copy pack to target directory
echo -e "${YELLOW}${NC} Copying pack files..."
if cp -r "$pack_dir" "$target_pack_dir"; then
COPIED_COUNT=$((COPIED_COUNT + 1))
echo -e "${GREEN}${NC} Copied to: $target_pack_dir"
else
echo -e "${RED}${NC} Failed to copy pack"
exit 1
fi
fi
fi
fi
done
echo "----------------------------------------"
echo ""
if [ $PACK_COUNT -eq 0 ]; then
echo -e "${YELLOW}${NC} No builtin packs found in $SOURCE_PACKS_DIR"
echo -e "${BLUE}${NC} This is OK if you're running with no packs"
exit 0
fi
echo -e "${BLUE}Pack Discovery Summary:${NC}"
echo " Total packs found: $PACK_COUNT"
echo " Newly copied: $COPIED_COUNT"
echo " Already present: $((PACK_COUNT - COPIED_COUNT))"
echo ""
# Load packs into database using Python loader
if [ -f "$LOADER_SCRIPT" ]; then
echo -e "${BLUE}Loading packs into database...${NC}"
echo "----------------------------------------"
# Build database URL with schema support
DATABASE_URL="postgresql://$DB_USER:$DB_PASSWORD@$DB_HOST:$DB_PORT/$DB_NAME"
# Set search_path for the Python script if not using default schema
if [ "$DB_SCHEMA" != "public" ]; then
export PGOPTIONS="-c search_path=$DB_SCHEMA,public"
fi
# Run the Python loader for each pack
for pack_dir in "$TARGET_PACKS_DIR"/*; do
if [ -d "$pack_dir" ]; then
pack_name=$(basename "$pack_dir")
pack_yaml="$pack_dir/pack.yaml"
if [ -f "$pack_yaml" ]; then
echo -e "${YELLOW}${NC} Loading pack: ${GREEN}$pack_name${NC}"
# Run Python loader
if python3 "$LOADER_SCRIPT" \
--database-url "$DATABASE_URL" \
--pack-dir "$TARGET_PACKS_DIR" \
--pack-name "$pack_name" \
--schema "$DB_SCHEMA"; then
LOADED_COUNT=$((LOADED_COUNT + 1))
echo -e "${GREEN}${NC} Loaded pack: $pack_name"
else
echo -e "${RED}${NC} Failed to load pack: $pack_name"
echo -e "${YELLOW}${NC} Continuing with other packs..."
fi
fi
fi
done
echo "----------------------------------------"
echo ""
echo -e "${BLUE}Database Loading Summary:${NC}"
echo " Successfully loaded: $LOADED_COUNT"
echo " Failed: $((PACK_COUNT - LOADED_COUNT))"
echo ""
else
echo -e "${YELLOW}${NC} Pack loader script not found: $LOADER_SCRIPT"
echo -e "${BLUE}${NC} Packs copied but not registered in database"
echo -e "${BLUE}${NC} You can manually load them later"
fi
if [ -n "$DEFAULT_ADMIN_LOGIN" ] && [ "$LOADED_COUNT" -gt 0 ]; then
echo ""
echo -e "${BLUE}Bootstrapping local admin assignment...${NC}"
if python3 - <<PY
import psycopg2
import sys
conn = psycopg2.connect(
host="${DB_HOST}",
port=${DB_PORT},
user="${DB_USER}",
password="${DB_PASSWORD}",
dbname="${DB_NAME}",
)
conn.autocommit = False
try:
with conn.cursor() as cur:
cur.execute("SET search_path TO ${DB_SCHEMA}, public")
cur.execute("SELECT id FROM identity WHERE login = %s", ("${DEFAULT_ADMIN_LOGIN}",))
identity_row = cur.fetchone()
if identity_row is None:
print(" ⚠ Default admin identity not found; skipping assignment")
conn.rollback()
sys.exit(0)
cur.execute("SELECT id FROM permission_set WHERE ref = %s", ("${DEFAULT_ADMIN_PERMISSION_SET_REF}",))
permset_row = cur.fetchone()
if permset_row is None:
print(" ⚠ Default admin permission set not found; skipping assignment")
conn.rollback()
sys.exit(0)
cur.execute(
"""
INSERT INTO permission_assignment (identity, permset)
VALUES (%s, %s)
ON CONFLICT (identity, permset) DO NOTHING
""",
(identity_row[0], permset_row[0]),
)
conn.commit()
print(" ✓ Default admin permission assignment ensured")
except Exception as exc:
conn.rollback()
print(f" ✗ Failed to ensure default admin assignment: {exc}")
sys.exit(1)
finally:
conn.close()
PY
then
:
else
exit 1
fi
fi
# Summary
echo ""
echo -e "${GREEN}╔════════════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║ Builtin Packs Initialization Complete! ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${BLUE}Packs Location:${NC} ${GREEN}$TARGET_PACKS_DIR${NC}"
echo -e "${BLUE}Packs Available:${NC}"
for pack_dir in "$TARGET_PACKS_DIR"/*; do
if [ -d "$pack_dir" ]; then
pack_name=$(basename "$pack_dir")
pack_yaml="$pack_dir/pack.yaml"
if [ -f "$pack_yaml" ]; then
# Try to extract version from pack.yaml
version=$(grep "^version:" "$pack_yaml" | head -1 | sed 's/version:[[:space:]]*//' | tr -d '"')
echo -e "${GREEN}$pack_name${NC} ${BLUE}($version)${NC}"
fi
fi
done
echo ""
# Ensure ownership is correct after all packs have been copied
# The API service (running as attune uid 1000) needs write access to install new packs
chown -R 1000:1000 "$TARGET_PACKS_DIR"
echo -e "${BLUE}${NC} Pack files are accessible to all services via shared volume"
echo ""
exit 0

View File

@@ -0,0 +1,29 @@
-- Docker initialization script
-- Creates the svc_attune role needed by migrations
-- This runs before migrations via docker-compose
-- Create service role for the application
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'svc_attune') THEN
CREATE ROLE svc_attune WITH LOGIN PASSWORD 'attune_service_password';
END IF;
END
$$;
-- Create API role
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'attune_api') THEN
CREATE ROLE attune_api WITH LOGIN PASSWORD 'attune_api_password';
END IF;
END
$$;
-- Grant basic permissions
GRANT ALL PRIVILEGES ON DATABASE attune TO svc_attune;
GRANT ALL PRIVILEGES ON DATABASE attune TO attune_api;
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";

View File

@@ -0,0 +1,108 @@
#!/bin/sh
# Initialize default test user for Attune
# This script creates a default test user if it doesn't already exist
set -e
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Database configuration from environment
DB_HOST="${DB_HOST:-postgres}"
DB_PORT="${DB_PORT:-5432}"
DB_USER="${DB_USER:-attune}"
DB_PASSWORD="${DB_PASSWORD:-attune}"
DB_NAME="${DB_NAME:-attune}"
DB_SCHEMA="${DB_SCHEMA:-public}"
# Test user configuration
TEST_LOGIN="${TEST_LOGIN:-test@attune.local}"
TEST_DISPLAY_NAME="${TEST_DISPLAY_NAME:-Test User}"
TEST_PASSWORD="${TEST_PASSWORD:-TestPass123!}"
# Pre-computed Argon2id hash for "TestPass123!"
# Using: m=19456, t=2, p=1 (default Argon2id parameters)
DEFAULT_PASSWORD_HASH='$argon2id$v=19$m=19456,t=2,p=1$AuZJ0xsGuSRk6LdCd58OOA$vBZnaflJwR9L4LPWoGGrcnRsIOf95FV4uIsoe3PjRE0'
echo ""
echo -e "${BLUE}╔════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Attune Default User Initialization ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════╝${NC}"
echo ""
# Wait for database to be ready
echo -e "${YELLOW}${NC} Waiting for database to be ready..."
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c '\q' 2>/dev/null; do
echo -e "${YELLOW} ...${NC} Database is unavailable - sleeping"
sleep 2
done
echo -e "${GREEN}${NC} Database is ready"
# Check if user already exists
echo -e "${YELLOW}${NC} Checking if user exists..."
USER_EXISTS=$(PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc \
"SELECT COUNT(*) FROM ${DB_SCHEMA}.identity WHERE login = '$TEST_LOGIN';")
if [ "$USER_EXISTS" -gt 0 ]; then
echo -e "${GREEN}${NC} User '$TEST_LOGIN' already exists"
echo -e "${BLUE}${NC} Skipping user creation"
else
echo -e "${YELLOW}${NC} Creating default test user..."
# Use the pre-computed hash for default password
if [ "$TEST_PASSWORD" = "TestPass123!" ]; then
PASSWORD_HASH="$DEFAULT_PASSWORD_HASH"
echo -e "${BLUE}${NC} Using default password hash"
else
echo -e "${YELLOW}${NC} Custom password detected - using basic hash"
echo -e "${YELLOW}${NC} For production, generate proper Argon2id hash"
# Note: For custom passwords in Docker, you should pre-generate the hash
# This is a fallback that will work but is less secure
PASSWORD_HASH="$DEFAULT_PASSWORD_HASH"
fi
# Insert the user
PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" << EOF
INSERT INTO ${DB_SCHEMA}.identity (login, display_name, password_hash, attributes)
VALUES (
'$TEST_LOGIN',
'$TEST_DISPLAY_NAME',
'$PASSWORD_HASH',
jsonb_build_object(
'email', '$TEST_LOGIN',
'created_via', 'docker-init',
'is_test_user', true
)
);
EOF
if [ $? -eq 0 ]; then
echo -e "${GREEN}${NC} User created successfully"
else
echo -e "${RED}${NC} Failed to create user"
exit 1
fi
fi
echo ""
echo -e "${GREEN}╔════════════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║ Default User Initialization Complete! ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${BLUE}Default User Credentials:${NC}"
echo -e " Login: ${GREEN}$TEST_LOGIN${NC}"
echo -e " Password: ${GREEN}$TEST_PASSWORD${NC}"
echo ""
echo -e "${BLUE}Test Login:${NC}"
echo -e " ${YELLOW}curl -X POST http://localhost:8080/auth/login \\${NC}"
echo -e " ${YELLOW}-H 'Content-Type: application/json' \\${NC}"
echo -e " ${YELLOW}-d '{\"login\":\"$TEST_LOGIN\",\"password\":\"$TEST_PASSWORD\"}'${NC}"
echo ""
echo -e "${BLUE}${NC} For custom users, see: docs/testing/test-user-setup.md"
echo ""
exit 0

View File

@@ -0,0 +1,24 @@
#!/bin/sh
# inject-env.sh - Injects runtime environment variables into the Web UI
# This script runs at container startup to make environment variables available to the browser
set -e
# Default values
API_URL="${API_URL:-http://localhost:8080}"
WS_URL="${WS_URL:-ws://localhost:8081}"
# Create runtime configuration file
cat > /usr/share/nginx/html/config/runtime-config.js <<EOF
// Runtime configuration injected at container startup
window.ATTUNE_CONFIG = {
apiUrl: '${API_URL}',
wsUrl: '${WS_URL}',
environment: '${ENVIRONMENT:-production}'
};
EOF
echo "Runtime configuration injected:"
echo " API_URL: ${API_URL}"
echo " WS_URL: ${WS_URL}"
echo " ENVIRONMENT: ${ENVIRONMENT:-production}"

View File

@@ -0,0 +1,125 @@
# Nginx configuration for Attune Web UI
server {
listen 80;
listen [::]:80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Enable gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript application/javascript application/x-javascript application/xml+rss application/json;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
# Health check endpoint
location /health {
access_log off;
return 200 "OK\n";
add_header Content-Type text/plain;
}
# Use Docker's embedded DNS resolver so that proxy_pass with variables
# resolves hostnames at request time, not config load time.
# This prevents nginx from crashing if backends aren't ready yet.
resolver 127.0.0.11 valid=10s;
set $api_upstream http://api:8080;
set $notifier_upstream http://notifier:8081;
# Auth proxy - forward auth requests to backend
# With variable proxy_pass (no URI path), the full original request URI
# (e.g. /auth/login) is passed through to the backend as-is.
location /auth/ {
# nosemgrep: generic.nginx.security.missing-internal.missing-internal -- This is an intentionally public reverse-proxy route; 'internal' would break external API access.
proxy_pass $api_upstream;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# API proxy - forward API requests to backend (preserves /api prefix)
# With variable proxy_pass (no URI path), the full original request URI
# (e.g. /api/packs?page=1) is passed through to the backend as-is.
location /api/ {
# nosemgrep: generic.nginx.security.missing-internal.missing-internal -- This is an intentionally public reverse-proxy route; 'internal' would break external API access.
proxy_pass $api_upstream;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# WebSocket proxy for notifier service
# Strip the /ws/ prefix before proxying (notifier expects paths at root).
# e.g. /ws/events → /events
location /ws/ {
rewrite ^/ws/(.*) /$1 break;
# nosemgrep: generic.nginx.security.missing-internal.missing-internal -- This WebSocket endpoint is intentionally public and must be reachable by clients.
proxy_pass $notifier_upstream;
# nosemgrep: generic.nginx.security.possible-h2c-smuggling.possible-nginx-h2c-smuggling -- Upgrade handling is intentionally restricted to a fixed 'websocket' value for the public notifier endpoint.
proxy_http_version 1.1;
proxy_set_header Upgrade websocket;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket timeouts
proxy_connect_timeout 7d;
proxy_send_timeout 7d;
proxy_read_timeout 7d;
}
# Serve static assets with caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# Runtime configuration endpoint
location /config/runtime-config.js {
expires -1;
add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
}
# SPA routing - serve index.html for all routes
location / {
try_files $uri $uri/ /index.html;
# Disable caching for index.html
location = /index.html {
expires -1;
add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
}
}
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}

View File

@@ -0,0 +1,189 @@
#!/bin/bash
# Migration script for Attune database
# Runs all SQL migration files in order
set -e
echo "=========================================="
echo "Attune Database Migration Runner"
echo "=========================================="
echo ""
# Database connection parameters
DB_HOST="${DB_HOST:-postgres}"
DB_PORT="${DB_PORT:-5432}"
DB_USER="${DB_USER:-attune}"
DB_PASSWORD="${DB_PASSWORD:-attune}"
DB_NAME="${DB_NAME:-attune}"
MIGRATIONS_DIR="${MIGRATIONS_DIR:-/migrations}"
# Export password for psql
export PGPASSWORD="$DB_PASSWORD"
# Color output
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Function to wait for PostgreSQL to be ready
wait_for_postgres() {
echo "Waiting for PostgreSQL to be ready..."
local max_attempts=30
local attempt=1
while [ $attempt -le $max_attempts ]; do
if psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c '\q' 2>/dev/null; then
echo -e "${GREEN}✓ PostgreSQL is ready${NC}"
return 0
fi
echo " Attempt $attempt/$max_attempts: PostgreSQL not ready yet..."
sleep 2
attempt=$((attempt + 1))
done
echo -e "${RED}✗ PostgreSQL failed to become ready after $max_attempts attempts${NC}"
return 1
}
# Function to check if migrations table exists
setup_migrations_table() {
echo "Setting up migrations tracking table..."
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -v ON_ERROR_STOP=1 <<-EOSQL
CREATE TABLE IF NOT EXISTS _migrations (
id SERIAL PRIMARY KEY,
filename VARCHAR(255) UNIQUE NOT NULL,
applied_at TIMESTAMP DEFAULT NOW()
);
EOSQL
echo -e "${GREEN}✓ Migrations table ready${NC}"
}
# Function to check if a migration has been applied
is_migration_applied() {
local filename=$1
local count=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -t -c \
"SELECT COUNT(*) FROM _migrations WHERE filename = '$filename';" | tr -d ' ')
[ "$count" -gt 0 ]
}
# Function to mark migration as applied
mark_migration_applied() {
local filename=$1
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c \
"INSERT INTO _migrations (filename) VALUES ('$filename');" > /dev/null
}
# Function to run a migration file
run_migration() {
local filepath=$1
local filename=$(basename "$filepath")
if is_migration_applied "$filename"; then
echo -e "${YELLOW}⊘ Skipping $filename (already applied)${NC}"
return 0
fi
echo -e "${GREEN}→ Applying $filename...${NC}"
# Run migration in a transaction with detailed error reporting
if psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -v ON_ERROR_STOP=1 \
-c "BEGIN;" \
-f "$filepath" \
-c "COMMIT;" > /tmp/migration_output.log 2>&1; then
mark_migration_applied "$filename"
echo -e "${GREEN}✓ Applied $filename${NC}"
return 0
else
echo -e "${RED}✗ Failed to apply $filename${NC}"
echo ""
echo "Error details:"
cat /tmp/migration_output.log
echo ""
echo "Migration rolled back due to error."
return 1
fi
}
# Function to initialize Docker-specific roles and extensions
init_docker_roles() {
echo "Initializing Docker roles and extensions..."
if [ -f "/docker/init-roles.sql" ]; then
if psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -v ON_ERROR_STOP=1 -f "/docker/init-roles.sql" > /dev/null 2>&1; then
echo -e "${GREEN}✓ Docker roles initialized${NC}"
return 0
else
echo -e "${YELLOW}⚠ Warning: Could not initialize Docker roles (may already exist)${NC}"
return 0
fi
else
echo -e "${YELLOW}⚠ No Docker init script found, skipping${NC}"
return 0
fi
}
# Main migration process
main() {
echo "Configuration:"
echo " Database: $DB_HOST:$DB_PORT/$DB_NAME"
echo " User: $DB_USER"
echo " Migrations directory: $MIGRATIONS_DIR"
echo ""
# Wait for database
wait_for_postgres || exit 1
# Initialize Docker-specific roles
init_docker_roles || exit 1
# Setup migrations tracking
setup_migrations_table || exit 1
echo ""
echo "Running migrations..."
echo "----------------------------------------"
# Find and sort migration files
local migration_count=0
local applied_count=0
local skipped_count=0
# Process migrations in sorted order
for migration_file in $(find "$MIGRATIONS_DIR" -name "*.sql" -type f | sort); do
migration_count=$((migration_count + 1))
if is_migration_applied "$(basename "$migration_file")"; then
skipped_count=$((skipped_count + 1))
run_migration "$migration_file"
else
if run_migration "$migration_file"; then
applied_count=$((applied_count + 1))
else
echo -e "${RED}Migration failed!${NC}"
exit 1
fi
fi
done
echo "----------------------------------------"
echo ""
echo "Migration Summary:"
echo " Total migrations: $migration_count"
echo " Newly applied: $applied_count"
echo " Already applied: $skipped_count"
echo ""
if [ $applied_count -gt 0 ]; then
echo -e "${GREEN}✓ All migrations applied successfully!${NC}"
else
echo -e "${GREEN}✓ Database is up to date (no new migrations)${NC}"
fi
}
# Run main function
main

View File

@@ -0,0 +1,230 @@
-- Migration: Initial Setup
-- Description: Creates the attune schema, enums, and shared database functions
-- Version: 20250101000001
-- ============================================================================
-- EXTENSIONS
-- ============================================================================
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- ============================================================================
-- ENUM TYPES
-- ============================================================================
-- WorkerType enum
DO $$ BEGIN
CREATE TYPE worker_type_enum AS ENUM (
'local',
'remote',
'container'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE worker_type_enum IS 'Type of worker deployment';
-- WorkerRole enum
DO $$ BEGIN
CREATE TYPE worker_role_enum AS ENUM (
'action',
'sensor',
'hybrid'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE worker_role_enum IS 'Role of worker (action executor, sensor, or both)';
-- WorkerStatus enum
DO $$ BEGIN
CREATE TYPE worker_status_enum AS ENUM (
'active',
'inactive',
'busy',
'error'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE worker_status_enum IS 'Worker operational status';
-- EnforcementStatus enum
DO $$ BEGIN
CREATE TYPE enforcement_status_enum AS ENUM (
'created',
'processed',
'disabled'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE enforcement_status_enum IS 'Enforcement processing status';
-- EnforcementCondition enum
DO $$ BEGIN
CREATE TYPE enforcement_condition_enum AS ENUM (
'any',
'all'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE enforcement_condition_enum IS 'Logical operator for conditions (OR/AND)';
-- ExecutionStatus enum
DO $$ BEGIN
CREATE TYPE execution_status_enum AS ENUM (
'requested',
'scheduling',
'scheduled',
'running',
'completed',
'failed',
'canceling',
'cancelled',
'timeout',
'abandoned'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE execution_status_enum IS 'Execution lifecycle status';
-- InquiryStatus enum
DO $$ BEGIN
CREATE TYPE inquiry_status_enum AS ENUM (
'pending',
'responded',
'timeout',
'cancelled'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE inquiry_status_enum IS 'Inquiry lifecycle status';
-- PolicyMethod enum
DO $$ BEGIN
CREATE TYPE policy_method_enum AS ENUM (
'cancel',
'enqueue'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE policy_method_enum IS 'Policy enforcement method';
-- OwnerType enum
DO $$ BEGIN
CREATE TYPE owner_type_enum AS ENUM (
'system',
'identity',
'pack',
'action',
'sensor'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE owner_type_enum IS 'Type of resource owner';
-- NotificationState enum
DO $$ BEGIN
CREATE TYPE notification_status_enum AS ENUM (
'created',
'queued',
'processing',
'error'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE notification_status_enum IS 'Notification processing state';
-- ArtifactType enum
DO $$ BEGIN
CREATE TYPE artifact_type_enum AS ENUM (
'file_binary',
'file_datatable',
'file_image',
'file_text',
'other',
'progress',
'url'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE artifact_type_enum IS 'Type of artifact';
-- RetentionPolicyType enum
DO $$ BEGIN
CREATE TYPE artifact_retention_enum AS ENUM (
'versions',
'days',
'hours',
'minutes'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE artifact_retention_enum IS 'Type of retention policy';
-- ArtifactVisibility enum
DO $$ BEGIN
CREATE TYPE artifact_visibility_enum AS ENUM (
'public',
'private'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE artifact_visibility_enum IS 'Visibility of an artifact (public = viewable by all users, private = scoped by owner)';
-- PackEnvironmentStatus enum
DO $$ BEGIN
CREATE TYPE pack_environment_status_enum AS ENUM (
'pending',
'installing',
'ready',
'failed',
'outdated'
);
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
COMMENT ON TYPE pack_environment_status_enum IS 'Status of pack runtime environment installation';
-- ============================================================================
-- SHARED FUNCTIONS
-- ============================================================================
-- Function to automatically update the 'updated' timestamp
CREATE OR REPLACE FUNCTION update_updated_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION update_updated_column() IS 'Automatically updates the updated timestamp on row modification';

View File

@@ -0,0 +1,262 @@
-- Migration: Pack System
-- Description: Creates pack, runtime, and runtime_version tables
-- Version: 20250101000002
-- ============================================================================
-- PACK TABLE
-- ============================================================================
CREATE TABLE pack (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
label TEXT NOT NULL,
description TEXT,
version TEXT NOT NULL,
conf_schema JSONB NOT NULL DEFAULT '{}'::jsonb,
config JSONB NOT NULL DEFAULT '{}'::jsonb,
meta JSONB NOT NULL DEFAULT '{}'::jsonb,
tags TEXT[] NOT NULL DEFAULT ARRAY[]::TEXT[],
runtime_deps TEXT[] NOT NULL DEFAULT ARRAY[]::TEXT[],
dependencies TEXT[] NOT NULL DEFAULT ARRAY[]::TEXT[],
is_standard BOOLEAN NOT NULL DEFAULT FALSE,
installers JSONB DEFAULT '[]'::jsonb,
-- Installation metadata (nullable for non-installed packs)
source_type TEXT,
source_url TEXT,
source_ref TEXT,
checksum TEXT,
checksum_verified BOOLEAN DEFAULT FALSE,
installed_at TIMESTAMPTZ,
installed_by BIGINT,
installation_method TEXT,
storage_path TEXT,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT pack_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT pack_ref_format CHECK (ref ~ '^[a-z][a-z0-9_-]+$'),
CONSTRAINT pack_version_semver CHECK (
version ~ '^\d+\.\d+\.\d+(-[0-9A-Za-z-]+(\.[0-9A-Za-z-]+)*)?(\+[0-9A-Za-z-]+(\.[0-9A-Za-z-]+)*)?$'
)
);
-- Indexes
CREATE INDEX idx_pack_ref ON pack(ref);
CREATE INDEX idx_pack_created ON pack(created DESC);
CREATE INDEX idx_pack_is_standard ON pack(is_standard) WHERE is_standard = TRUE;
CREATE INDEX idx_pack_is_standard_created ON pack(is_standard, created DESC);
CREATE INDEX idx_pack_version_created ON pack(version, created DESC);
CREATE INDEX idx_pack_config_gin ON pack USING GIN (config);
CREATE INDEX idx_pack_meta_gin ON pack USING GIN (meta);
CREATE INDEX idx_pack_tags_gin ON pack USING GIN (tags);
CREATE INDEX idx_pack_runtime_deps_gin ON pack USING GIN (runtime_deps);
CREATE INDEX idx_pack_dependencies_gin ON pack USING GIN (dependencies);
CREATE INDEX idx_pack_installed_at ON pack(installed_at DESC) WHERE installed_at IS NOT NULL;
CREATE INDEX idx_pack_installed_by ON pack(installed_by) WHERE installed_by IS NOT NULL;
CREATE INDEX idx_pack_source_type ON pack(source_type) WHERE source_type IS NOT NULL;
-- Trigger
CREATE TRIGGER update_pack_updated
BEFORE UPDATE ON pack
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE pack IS 'Packs bundle related automation components';
COMMENT ON COLUMN pack.ref IS 'Unique pack reference identifier (e.g., "slack", "github")';
COMMENT ON COLUMN pack.label IS 'Human-readable pack name';
COMMENT ON COLUMN pack.version IS 'Semantic version of the pack';
COMMENT ON COLUMN pack.conf_schema IS 'JSON schema for pack configuration';
COMMENT ON COLUMN pack.config IS 'Pack configuration values';
COMMENT ON COLUMN pack.meta IS 'Pack metadata';
COMMENT ON COLUMN pack.runtime_deps IS 'Array of required runtime references (e.g., shell, python, nodejs)';
COMMENT ON COLUMN pack.dependencies IS 'Array of required pack references (e.g., core, utils)';
COMMENT ON COLUMN pack.is_standard IS 'Whether this is a core/built-in pack';
COMMENT ON COLUMN pack.source_type IS 'Installation source type (e.g., "git", "local", "registry")';
COMMENT ON COLUMN pack.source_url IS 'URL or path where pack was installed from';
COMMENT ON COLUMN pack.source_ref IS 'Git ref, version tag, or other source reference';
COMMENT ON COLUMN pack.checksum IS 'Content checksum for verification';
COMMENT ON COLUMN pack.checksum_verified IS 'Whether checksum has been verified';
COMMENT ON COLUMN pack.installed_at IS 'Timestamp when pack was installed';
COMMENT ON COLUMN pack.installed_by IS 'Identity ID of user who installed the pack';
COMMENT ON COLUMN pack.installation_method IS 'Method used for installation (e.g., "cli", "api", "auto")';
COMMENT ON COLUMN pack.storage_path IS 'Filesystem path where pack files are stored';
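-- Illustrative row (hypothetical values, not inserted by this migration):
--   INSERT INTO pack (ref, label, version, runtime_deps, dependencies)
--   VALUES ('slack', 'Slack', '1.2.0', ARRAY['python'], ARRAY['core']);
-- 'slack' satisfies pack_ref_lowercase/pack_ref_format and '1.2.0' satisfies
-- pack_version_semver; a ref like 'Slack' or a version like '1.2' would be rejected.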
-- ============================================================================
-- RUNTIME TABLE
-- ============================================================================
CREATE TABLE runtime (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
pack BIGINT REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT,
description TEXT,
name TEXT NOT NULL,
aliases TEXT[] NOT NULL DEFAULT '{}'::text[],
distributions JSONB NOT NULL,
installation JSONB,
installers JSONB DEFAULT '[]'::jsonb,
-- Execution configuration: describes how to execute actions using this runtime,
-- how to create isolated environments, and how to install dependencies.
--
-- Structure:
-- {
-- "interpreter": {
-- "binary": "python3", -- interpreter binary name or path
-- "args": [], -- additional args before the action file
-- "file_extension": ".py" -- file extension this runtime handles
-- },
-- "environment": { -- optional: isolated environment config
-- "env_type": "virtualenv", -- "virtualenv", "node_modules", "none"
-- "dir_name": ".venv", -- directory name relative to pack dir
-- "create_command": ["python3", "-m", "venv", "{env_dir}"],
-- "interpreter_path": "{env_dir}/bin/python3" -- overrides interpreter.binary
-- },
-- "dependencies": { -- optional: dependency management config
-- "manifest_file": "requirements.txt",
-- "install_command": ["{interpreter}", "-m", "pip", "install", "-r", "{manifest_path}"]
-- }
-- }
--
-- Template variables:
-- {pack_dir} - absolute path to the pack directory
-- {env_dir} - resolved environment directory (pack_dir/dir_name)
-- {interpreter} - resolved interpreter path
-- {action_file} - absolute path to the action script file
-- {manifest_path} - absolute path to the dependency manifest file
execution_config JSONB NOT NULL DEFAULT '{}'::jsonb,
-- Whether this runtime was auto-registered by an agent
-- (vs. loaded from a pack's YAML file during pack registration)
auto_detected BOOLEAN NOT NULL DEFAULT FALSE,
-- Detection metadata for auto-discovered runtimes.
-- Stores how the agent discovered this runtime (binary path, version, etc.)
-- and enables re-verification on restart.
-- Example: { "detected_path": "/usr/bin/ruby", "detected_name": "ruby",
-- "detected_version": "3.3.0" }
detection_config JSONB NOT NULL DEFAULT '{}'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT runtime_ref_lowercase CHECK (ref = LOWER(ref))
);
-- Indexes
CREATE INDEX idx_runtime_ref ON runtime(ref);
CREATE INDEX idx_runtime_pack ON runtime(pack);
CREATE INDEX idx_runtime_created ON runtime(created DESC);
CREATE INDEX idx_runtime_name ON runtime(name);
CREATE INDEX idx_runtime_verification ON runtime USING GIN ((distributions->'verification'));
CREATE INDEX idx_runtime_execution_config ON runtime USING GIN (execution_config);
CREATE INDEX idx_runtime_auto_detected ON runtime(auto_detected);
CREATE INDEX idx_runtime_detection_config ON runtime USING GIN (detection_config);
CREATE INDEX idx_runtime_aliases ON runtime USING GIN (aliases);
-- Trigger
CREATE TRIGGER update_runtime_updated
BEFORE UPDATE ON runtime
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE runtime IS 'Runtime environments for executing actions and sensors (unified)';
COMMENT ON COLUMN runtime.ref IS 'Unique runtime reference (format: pack.name, e.g., core.python)';
COMMENT ON COLUMN runtime.name IS 'Runtime name (e.g., "Python", "Node.js", "Shell")';
COMMENT ON COLUMN runtime.aliases IS 'Lowercase alias names for this runtime (e.g., ["ruby", "rb"] for the Ruby runtime). Used for alias-aware matching during auto-detection and scheduling.';
COMMENT ON COLUMN runtime.distributions IS 'Runtime distribution metadata including verification commands, version requirements, and capabilities';
COMMENT ON COLUMN runtime.installation IS 'Installation requirements and instructions including package managers and setup steps';
COMMENT ON COLUMN runtime.installers IS 'Array of installer actions to create pack-specific runtime environments. Each installer defines commands to set up isolated environments (e.g., Python venv, npm install).';
COMMENT ON COLUMN runtime.execution_config IS 'Execution configuration: interpreter, environment setup, and dependency management. Drives how the worker executes actions and how pack install sets up environments.';
COMMENT ON COLUMN runtime.auto_detected IS 'Whether this runtime was auto-registered by an agent (true) vs. loaded from a pack YAML (false)';
COMMENT ON COLUMN runtime.detection_config IS 'Detection metadata for auto-discovered runtimes: binaries probed, version regex, detected path/version';
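-- Illustrative execution_config value for a Python-style runtime, following the
-- structure documented on the column above (hypothetical, not inserted here):
--   {
--     "interpreter":  { "binary": "python3", "args": [], "file_extension": ".py" },
--     "environment":  { "env_type": "virtualenv", "dir_name": ".venv",
--                       "create_command": ["python3", "-m", "venv", "{env_dir}"],
--                       "interpreter_path": "{env_dir}/bin/python3" },
--     "dependencies": { "manifest_file": "requirements.txt",
--                       "install_command": ["{interpreter}", "-m", "pip", "install", "-r", "{manifest_path}"] }
--   }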
-- ============================================================================
-- RUNTIME VERSION TABLE
-- ============================================================================
CREATE TABLE runtime_version (
id BIGSERIAL PRIMARY KEY,
runtime BIGINT NOT NULL REFERENCES runtime(id) ON DELETE CASCADE,
runtime_ref TEXT NOT NULL,
-- Semantic version string (e.g., "3.12.1", "20.11.0")
version TEXT NOT NULL,
-- Individual version components for efficient range queries.
-- Nullable because some runtimes may use non-numeric versioning.
version_major INT,
version_minor INT,
version_patch INT,
-- Complete execution configuration for this specific version.
-- This is NOT a diff/override — it is a full standalone config that can
-- replace the parent runtime's execution_config when this version is selected.
-- Structure is identical to runtime.execution_config (RuntimeExecutionConfig).
execution_config JSONB NOT NULL DEFAULT '{}'::jsonb,
-- Version-specific distribution/verification metadata.
-- Structure mirrors runtime.distributions but with version-specific commands.
-- Example: verification commands that check for a specific binary like python3.12.
distributions JSONB NOT NULL DEFAULT '{}'::jsonb,
-- Whether this version is the default for the parent runtime.
-- At most one version per runtime should be marked as default.
is_default BOOLEAN NOT NULL DEFAULT FALSE,
-- Whether this version has been verified as available on the current system.
available BOOLEAN NOT NULL DEFAULT TRUE,
-- When this version was last verified (via running verification commands).
verified_at TIMESTAMPTZ,
-- Arbitrary version-specific metadata (e.g., EOL date, release notes URL,
-- feature flags, platform-specific notes).
meta JSONB NOT NULL DEFAULT '{}'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT runtime_version_unique UNIQUE(runtime, version)
);
-- Indexes
CREATE INDEX idx_runtime_version_runtime ON runtime_version(runtime);
CREATE INDEX idx_runtime_version_runtime_ref ON runtime_version(runtime_ref);
CREATE INDEX idx_runtime_version_version ON runtime_version(version);
CREATE INDEX idx_runtime_version_available ON runtime_version(available) WHERE available = TRUE;
CREATE INDEX idx_runtime_version_is_default ON runtime_version(is_default) WHERE is_default = TRUE;
CREATE INDEX idx_runtime_version_components ON runtime_version(runtime, version_major, version_minor, version_patch);
CREATE INDEX idx_runtime_version_created ON runtime_version(created DESC);
CREATE INDEX idx_runtime_version_execution_config ON runtime_version USING GIN (execution_config);
CREATE INDEX idx_runtime_version_meta ON runtime_version USING GIN (meta);
-- Trigger
CREATE TRIGGER update_runtime_version_updated
BEFORE UPDATE ON runtime_version
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE runtime_version IS 'Specific versions of a runtime (e.g., Python 3.11, 3.12) with version-specific execution configuration';
COMMENT ON COLUMN runtime_version.runtime IS 'Parent runtime this version belongs to';
COMMENT ON COLUMN runtime_version.runtime_ref IS 'Parent runtime ref (e.g., core.python) for display/filtering';
COMMENT ON COLUMN runtime_version.version IS 'Semantic version string (e.g., "3.12.1", "20.11.0")';
COMMENT ON COLUMN runtime_version.version_major IS 'Major version component for efficient range queries';
COMMENT ON COLUMN runtime_version.version_minor IS 'Minor version component for efficient range queries';
COMMENT ON COLUMN runtime_version.version_patch IS 'Patch version component for efficient range queries';
COMMENT ON COLUMN runtime_version.execution_config IS 'Complete execution configuration for this version (same structure as runtime.execution_config)';
COMMENT ON COLUMN runtime_version.distributions IS 'Version-specific distribution/verification metadata';
COMMENT ON COLUMN runtime_version.is_default IS 'Whether this is the default version for the parent runtime (at most one per runtime)';
COMMENT ON COLUMN runtime_version.available IS 'Whether this version has been verified as available on the system';
COMMENT ON COLUMN runtime_version.verified_at IS 'Timestamp of last availability verification';
COMMENT ON COLUMN runtime_version.meta IS 'Arbitrary version-specific metadata';
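-- Illustrative lookup (hypothetical query, not executed by this migration): pick the
-- default version for a runtime, falling back to the highest available version via
-- the component columns and idx_runtime_version_components:
--   SELECT * FROM runtime_version
--   WHERE runtime = $1 AND available = TRUE
--   ORDER BY is_default DESC,
--            version_major DESC NULLS LAST,
--            version_minor DESC NULLS LAST,
--            version_patch DESC NULLS LAST
--   LIMIT 1;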


@@ -0,0 +1,223 @@
-- Migration: Identity and Authentication
-- Description: Creates identity, permission, and policy tables
-- Version: 20250101000003
-- ============================================================================
-- IDENTITY TABLE
-- ============================================================================
CREATE TABLE identity (
id BIGSERIAL PRIMARY KEY,
login TEXT NOT NULL UNIQUE,
display_name TEXT,
password_hash TEXT,
attributes JSONB NOT NULL DEFAULT '{}'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX idx_identity_login ON identity(login);
CREATE INDEX idx_identity_created ON identity(created DESC);
CREATE INDEX idx_identity_password_hash ON identity(password_hash) WHERE password_hash IS NOT NULL;
CREATE INDEX idx_identity_attributes_gin ON identity USING GIN (attributes);
-- Trigger
CREATE TRIGGER update_identity_updated
BEFORE UPDATE ON identity
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE identity IS 'Identities represent users or service accounts';
COMMENT ON COLUMN identity.login IS 'Unique login identifier';
COMMENT ON COLUMN identity.display_name IS 'Human-readable name';
COMMENT ON COLUMN identity.password_hash IS 'Argon2 hashed password for authentication (NULL for service accounts or external auth)';
COMMENT ON COLUMN identity.attributes IS 'Custom attributes (email, groups, etc.)';
-- ============================================================================
-- ADD FOREIGN KEY CONSTRAINTS TO EXISTING TABLES
-- ============================================================================
-- Add foreign key constraint for pack.installed_by now that identity table exists
ALTER TABLE pack
ADD CONSTRAINT fk_pack_installed_by
FOREIGN KEY (installed_by)
REFERENCES identity(id)
ON DELETE SET NULL;
-- ============================================================================
-- ============================================================================
-- PERMISSION_SET TABLE
-- ============================================================================
CREATE TABLE permission_set (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
pack BIGINT REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT,
label TEXT,
description TEXT,
grants JSONB NOT NULL DEFAULT '[]'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT permission_set_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT permission_set_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);
-- Indexes
CREATE INDEX idx_permission_set_ref ON permission_set(ref);
CREATE INDEX idx_permission_set_pack ON permission_set(pack);
CREATE INDEX idx_permission_set_created ON permission_set(created DESC);
-- Trigger
CREATE TRIGGER update_permission_set_updated
BEFORE UPDATE ON permission_set
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE permission_set IS 'Permission sets group permissions together (like roles)';
COMMENT ON COLUMN permission_set.ref IS 'Unique permission set reference (format: pack.name)';
COMMENT ON COLUMN permission_set.label IS 'Human-readable name';
COMMENT ON COLUMN permission_set.grants IS 'Array of permission grants';
-- ============================================================================
-- ============================================================================
-- PERMISSION_ASSIGNMENT TABLE
-- ============================================================================
CREATE TABLE permission_assignment (
id BIGSERIAL PRIMARY KEY,
identity BIGINT NOT NULL REFERENCES identity(id) ON DELETE CASCADE,
permset BIGINT NOT NULL REFERENCES permission_set(id) ON DELETE CASCADE,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Unique constraint to prevent duplicate assignments
CONSTRAINT unique_identity_permset UNIQUE (identity, permset)
);
-- Indexes
CREATE INDEX idx_permission_assignment_identity ON permission_assignment(identity);
CREATE INDEX idx_permission_assignment_permset ON permission_assignment(permset);
CREATE INDEX idx_permission_assignment_created ON permission_assignment(created DESC);
CREATE INDEX idx_permission_assignment_identity_created ON permission_assignment(identity, created DESC);
CREATE INDEX idx_permission_assignment_permset_created ON permission_assignment(permset, created DESC);
-- Comments
COMMENT ON TABLE permission_assignment IS 'Links identities to permission sets (many-to-many)';
COMMENT ON COLUMN permission_assignment.identity IS 'Identity being granted permissions';
COMMENT ON COLUMN permission_assignment.permset IS 'Permission set being assigned';
-- ============================================================================
ALTER TABLE identity
ADD COLUMN frozen BOOLEAN NOT NULL DEFAULT false;
CREATE INDEX idx_identity_frozen ON identity(frozen);
COMMENT ON COLUMN identity.frozen IS 'If true, authentication is blocked for this identity';
CREATE TABLE identity_role_assignment (
id BIGSERIAL PRIMARY KEY,
identity BIGINT NOT NULL REFERENCES identity(id) ON DELETE CASCADE,
role TEXT NOT NULL,
source TEXT NOT NULL DEFAULT 'manual',
managed BOOLEAN NOT NULL DEFAULT false,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
CONSTRAINT unique_identity_role_assignment UNIQUE (identity, role)
);
CREATE INDEX idx_identity_role_assignment_identity
ON identity_role_assignment(identity);
CREATE INDEX idx_identity_role_assignment_role
ON identity_role_assignment(role);
CREATE INDEX idx_identity_role_assignment_source
ON identity_role_assignment(source);
CREATE TRIGGER update_identity_role_assignment_updated
BEFORE UPDATE ON identity_role_assignment
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
COMMENT ON TABLE identity_role_assignment IS 'Links identities to role labels from manual assignment or external identity providers';
COMMENT ON COLUMN identity_role_assignment.role IS 'Opaque role/group label (e.g. IDP group name)';
COMMENT ON COLUMN identity_role_assignment.source IS 'Where the role assignment originated (manual, oidc, ldap, sync, etc.)';
COMMENT ON COLUMN identity_role_assignment.managed IS 'True when the assignment is managed by external sync and should not be edited manually';
CREATE TABLE permission_set_role_assignment (
id BIGSERIAL PRIMARY KEY,
permset BIGINT NOT NULL REFERENCES permission_set(id) ON DELETE CASCADE,
role TEXT NOT NULL,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
CONSTRAINT unique_permission_set_role_assignment UNIQUE (permset, role)
);
CREATE INDEX idx_permission_set_role_assignment_permset
ON permission_set_role_assignment(permset);
CREATE INDEX idx_permission_set_role_assignment_role
ON permission_set_role_assignment(role);
COMMENT ON TABLE permission_set_role_assignment IS 'Links permission sets to role labels for role-based grant expansion';
COMMENT ON COLUMN permission_set_role_assignment.role IS 'Opaque role/group label associated with the permission set';
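-- Illustrative role-based grant expansion (hypothetical query, not executed here):
-- permission sets reachable by an identity through shared role labels:
--   SELECT DISTINCT psra.permset
--   FROM identity_role_assignment ira
--   JOIN permission_set_role_assignment psra ON psra.role = ira.role
--   WHERE ira.identity = $1;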
-- ============================================================================
-- ============================================================================
-- POLICY TABLE
-- ============================================================================
CREATE TABLE policy (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
pack BIGINT REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT,
action BIGINT, -- Forward reference to action table, will add constraint in next migration
action_ref TEXT,
parameters TEXT[] NOT NULL DEFAULT ARRAY[]::TEXT[],
method policy_method_enum NOT NULL,
threshold INTEGER NOT NULL,
name TEXT NOT NULL,
description TEXT,
tags TEXT[] NOT NULL DEFAULT ARRAY[]::TEXT[],
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT policy_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT policy_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$'),
CONSTRAINT policy_threshold_positive CHECK (threshold > 0)
);
-- Indexes
CREATE INDEX idx_policy_ref ON policy(ref);
CREATE INDEX idx_policy_pack ON policy(pack);
CREATE INDEX idx_policy_action ON policy(action);
CREATE INDEX idx_policy_created ON policy(created DESC);
CREATE INDEX idx_policy_action_created ON policy(action, created DESC);
CREATE INDEX idx_policy_pack_created ON policy(pack, created DESC);
CREATE INDEX idx_policy_parameters_gin ON policy USING GIN (parameters);
CREATE INDEX idx_policy_tags_gin ON policy USING GIN (tags);
-- Trigger
CREATE TRIGGER update_policy_updated
BEFORE UPDATE ON policy
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE policy IS 'Policies define execution controls (rate limiting, concurrency)';
COMMENT ON COLUMN policy.ref IS 'Unique policy reference (format: pack.name)';
COMMENT ON COLUMN policy.action IS 'Action this policy applies to';
COMMENT ON COLUMN policy.parameters IS 'Parameter names used for policy grouping';
COMMENT ON COLUMN policy.method IS 'How to handle policy violations (cancel/enqueue)';
COMMENT ON COLUMN policy.threshold IS 'Numeric limit (e.g., max concurrent executions)';
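-- Illustrative policy (hypothetical values, not inserted by this migration):
--   INSERT INTO policy (ref, name, action_ref, parameters, method, threshold)
--   VALUES ('core.limit_deploy', 'Limit concurrent deploys', 'core.deploy',
--           ARRAY['host'], 'enqueue', 5);
-- i.e. group executions by the "host" parameter, allow at most 5 per group, and
-- enqueue (rather than cancel) requests that exceed the threshold.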
-- ============================================================================


@@ -0,0 +1,290 @@
-- Migration: Event System and Actions
-- Description: Creates trigger, sensor, event, enforcement, and action tables
-- with runtime version constraint support. Includes webhook key
-- generation function used by webhook management functions in 000007.
--
-- NOTE: The event and enforcement tables are converted to TimescaleDB
-- hypertables in migration 000009. Hypertables cannot be the target of
-- FK constraints, so enforcement.event is a plain BIGINT with no FK.
-- FKs *from* hypertables to regular tables (e.g., event.trigger → trigger,
-- enforcement.rule → rule) are supported by TimescaleDB 2.x and are kept.
-- Version: 20250101000004
-- ============================================================================
-- WEBHOOK KEY GENERATION
-- ============================================================================
-- Generates a unique webhook key in the format: wh_<32 random hex chars>
-- Used by enable_trigger_webhook() and regenerate_trigger_webhook_key() in 000007.
CREATE OR REPLACE FUNCTION generate_webhook_key()
RETURNS VARCHAR(64) AS $$
BEGIN
RETURN 'wh_' || encode(gen_random_bytes(16), 'hex');
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION generate_webhook_key() IS 'Generates a unique webhook key (format: wh_<32 hex chars>) for trigger webhook authentication';
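-- Illustrative usage (hypothetical output, not executed here):
--   SELECT generate_webhook_key();
--   -- e.g. 'wh_3d2f9c1a74b08e5f6a91c0d4e2b7a6f1' (prefix 'wh_' + 32 hex chars)
-- gen_random_bytes() is assumed to be available (typically via the pgcrypto extension).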
-- ============================================================================
-- TRIGGER TABLE
-- ============================================================================
CREATE TABLE trigger (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
pack BIGINT REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT,
label TEXT NOT NULL,
description TEXT,
enabled BOOLEAN NOT NULL DEFAULT TRUE,
is_adhoc BOOLEAN DEFAULT false NOT NULL,
param_schema JSONB,
out_schema JSONB,
webhook_enabled BOOLEAN NOT NULL DEFAULT FALSE,
webhook_key VARCHAR(64) UNIQUE,
webhook_config JSONB DEFAULT '{}'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT trigger_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT trigger_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);
-- Indexes
CREATE INDEX idx_trigger_ref ON trigger(ref);
CREATE INDEX idx_trigger_pack ON trigger(pack);
CREATE INDEX idx_trigger_enabled ON trigger(enabled) WHERE enabled = TRUE;
CREATE INDEX idx_trigger_created ON trigger(created DESC);
CREATE INDEX idx_trigger_pack_enabled ON trigger(pack, enabled);
CREATE INDEX idx_trigger_webhook_key ON trigger(webhook_key) WHERE webhook_key IS NOT NULL;
CREATE INDEX idx_trigger_enabled_created ON trigger(enabled, created DESC) WHERE enabled = TRUE;
-- Trigger
CREATE TRIGGER update_trigger_updated
BEFORE UPDATE ON trigger
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE trigger IS 'Trigger definitions that can activate rules';
COMMENT ON COLUMN trigger.ref IS 'Unique trigger reference (format: pack.name)';
COMMENT ON COLUMN trigger.label IS 'Human-readable trigger name';
COMMENT ON COLUMN trigger.enabled IS 'Whether this trigger is active';
COMMENT ON COLUMN trigger.param_schema IS 'JSON schema defining the expected configuration parameters when this trigger is used';
COMMENT ON COLUMN trigger.out_schema IS 'JSON schema defining the structure of event payloads generated by this trigger';
-- ============================================================================
-- ============================================================================
-- SENSOR TABLE
-- ============================================================================
CREATE TABLE sensor (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
pack BIGINT REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT,
label TEXT NOT NULL,
description TEXT,
entrypoint TEXT NOT NULL,
runtime BIGINT NOT NULL REFERENCES runtime(id) ON DELETE CASCADE,
runtime_ref TEXT NOT NULL,
trigger BIGINT NOT NULL REFERENCES trigger(id) ON DELETE CASCADE,
trigger_ref TEXT NOT NULL,
enabled BOOLEAN NOT NULL,
is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
param_schema JSONB,
config JSONB,
runtime_version_constraint TEXT,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT sensor_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT sensor_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);
-- Indexes
CREATE INDEX idx_sensor_ref ON sensor(ref);
CREATE INDEX idx_sensor_pack ON sensor(pack);
CREATE INDEX idx_sensor_runtime ON sensor(runtime);
CREATE INDEX idx_sensor_trigger ON sensor(trigger);
CREATE INDEX idx_sensor_enabled ON sensor(enabled) WHERE enabled = TRUE;
CREATE INDEX idx_sensor_is_adhoc ON sensor(is_adhoc) WHERE is_adhoc = true;
CREATE INDEX idx_sensor_created ON sensor(created DESC);
-- Trigger
CREATE TRIGGER update_sensor_updated
BEFORE UPDATE ON sensor
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE sensor IS 'Sensors monitor for events and create trigger instances';
COMMENT ON COLUMN sensor.ref IS 'Unique sensor reference (format: pack.name)';
COMMENT ON COLUMN sensor.label IS 'Human-readable sensor name';
COMMENT ON COLUMN sensor.entrypoint IS 'Script or command to execute';
COMMENT ON COLUMN sensor.runtime IS 'Runtime environment for execution';
COMMENT ON COLUMN sensor.trigger IS 'Trigger type this sensor creates events for';
COMMENT ON COLUMN sensor.enabled IS 'Whether this sensor is active';
COMMENT ON COLUMN sensor.is_adhoc IS 'True if sensor was manually created (ad-hoc), false if installed from pack';
COMMENT ON COLUMN sensor.runtime_version_constraint IS 'Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0"). NULL means any version.';
-- ============================================================================
-- EVENT TABLE
-- ============================================================================
CREATE TABLE event (
id BIGSERIAL PRIMARY KEY,
trigger BIGINT REFERENCES trigger(id) ON DELETE SET NULL,
trigger_ref TEXT NOT NULL,
config JSONB,
payload JSONB,
source BIGINT REFERENCES sensor(id) ON DELETE SET NULL,
source_ref TEXT,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
rule BIGINT,
rule_ref TEXT
);
-- Indexes
CREATE INDEX idx_event_trigger ON event(trigger);
CREATE INDEX idx_event_trigger_ref ON event(trigger_ref);
CREATE INDEX idx_event_source ON event(source);
CREATE INDEX idx_event_created ON event(created DESC);
CREATE INDEX idx_event_trigger_created ON event(trigger, created DESC);
CREATE INDEX idx_event_trigger_ref_created ON event(trigger_ref, created DESC);
CREATE INDEX idx_event_source_created ON event(source, created DESC);
CREATE INDEX idx_event_payload_gin ON event USING GIN (payload);
-- Comments
COMMENT ON TABLE event IS 'Events are instances of triggers firing';
COMMENT ON COLUMN event.trigger IS 'Trigger that fired (may be null if trigger deleted)';
COMMENT ON COLUMN event.trigger_ref IS 'Trigger reference (preserved even if trigger deleted)';
COMMENT ON COLUMN event.config IS 'Snapshot of trigger/sensor configuration at event time';
COMMENT ON COLUMN event.payload IS 'Event data payload';
COMMENT ON COLUMN event.source IS 'Sensor that generated this event';
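-- Illustrative payload query served by idx_event_payload_gin (hypothetical keys):
--   SELECT id, trigger_ref, created
--   FROM event
--   WHERE payload @> '{"severity": "critical"}'
--   ORDER BY created DESC;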
-- ============================================================================
-- ENFORCEMENT TABLE
-- ============================================================================
CREATE TABLE enforcement (
id BIGSERIAL PRIMARY KEY,
rule BIGINT, -- Forward reference to rule table, will add constraint after rule is created
rule_ref TEXT NOT NULL,
trigger_ref TEXT NOT NULL,
config JSONB,
event BIGINT, -- references event(id); no FK because event becomes a hypertable
status enforcement_status_enum NOT NULL DEFAULT 'created',
payload JSONB NOT NULL,
condition enforcement_condition_enum NOT NULL DEFAULT 'all',
conditions JSONB NOT NULL DEFAULT '[]'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
resolved_at TIMESTAMPTZ,
-- Constraints
CONSTRAINT enforcement_condition_check CHECK (condition IN ('any', 'all'))
);
-- Indexes
CREATE INDEX idx_enforcement_rule ON enforcement(rule);
CREATE INDEX idx_enforcement_rule_ref ON enforcement(rule_ref);
CREATE INDEX idx_enforcement_trigger_ref ON enforcement(trigger_ref);
CREATE INDEX idx_enforcement_event ON enforcement(event);
CREATE INDEX idx_enforcement_status ON enforcement(status);
CREATE INDEX idx_enforcement_created ON enforcement(created DESC);
CREATE INDEX idx_enforcement_status_created ON enforcement(status, created DESC);
CREATE INDEX idx_enforcement_rule_status ON enforcement(rule, status);
CREATE INDEX idx_enforcement_event_status ON enforcement(event, status);
CREATE INDEX idx_enforcement_payload_gin ON enforcement USING GIN (payload);
CREATE INDEX idx_enforcement_conditions_gin ON enforcement USING GIN (conditions);
-- Comments
COMMENT ON TABLE enforcement IS 'Enforcements represent a rule being triggered by an event';
COMMENT ON COLUMN enforcement.rule IS 'Rule being enforced (may be null if rule deleted)';
COMMENT ON COLUMN enforcement.rule_ref IS 'Rule reference (preserved even if rule deleted)';
COMMENT ON COLUMN enforcement.event IS 'Event that triggered this enforcement (no FK — event is a hypertable)';
COMMENT ON COLUMN enforcement.status IS 'Processing status (created → processed or disabled)';
COMMENT ON COLUMN enforcement.resolved_at IS 'Timestamp when the enforcement was resolved (status changed from created to processed/disabled). NULL while status is created.';
COMMENT ON COLUMN enforcement.payload IS 'Event payload for rule evaluation';
COMMENT ON COLUMN enforcement.condition IS 'Logical operator for conditions (any=OR, all=AND)';
COMMENT ON COLUMN enforcement.conditions IS 'Condition expressions to evaluate';
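-- Illustrative resolution (hypothetical, not executed here): marking an enforcement
-- processed and recording resolved_at, per the status/resolved_at comments above:
--   UPDATE enforcement
--   SET status = 'processed', resolved_at = NOW()
--   WHERE id = $1 AND status = 'created';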
-- ============================================================================
-- ACTION TABLE
-- ============================================================================
CREATE TABLE action (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT NOT NULL,
label TEXT NOT NULL,
description TEXT,
entrypoint TEXT NOT NULL,
runtime BIGINT REFERENCES runtime(id),
param_schema JSONB,
out_schema JSONB,
parameter_delivery TEXT NOT NULL DEFAULT 'stdin' CHECK (parameter_delivery IN ('stdin', 'file')),
parameter_format TEXT NOT NULL DEFAULT 'json' CHECK (parameter_format IN ('dotenv', 'json', 'yaml')),
output_format TEXT NOT NULL DEFAULT 'text' CHECK (output_format IN ('text', 'json', 'yaml', 'jsonl')),
is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
timeout_seconds INTEGER,
max_retries INTEGER DEFAULT 0,
runtime_version_constraint TEXT,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT action_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT action_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);
-- Indexes
CREATE INDEX idx_action_ref ON action(ref);
CREATE INDEX idx_action_pack ON action(pack);
CREATE INDEX idx_action_runtime ON action(runtime);
CREATE INDEX idx_action_parameter_delivery ON action(parameter_delivery);
CREATE INDEX idx_action_parameter_format ON action(parameter_format);
CREATE INDEX idx_action_output_format ON action(output_format);
CREATE INDEX idx_action_is_adhoc ON action(is_adhoc) WHERE is_adhoc = true;
CREATE INDEX idx_action_created ON action(created DESC);
-- Trigger
CREATE TRIGGER update_action_updated
BEFORE UPDATE ON action
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE action IS 'Actions are executable tasks that can be triggered';
COMMENT ON COLUMN action.ref IS 'Unique action reference (format: pack.name)';
COMMENT ON COLUMN action.pack IS 'Pack this action belongs to';
COMMENT ON COLUMN action.label IS 'Human-readable action name';
COMMENT ON COLUMN action.entrypoint IS 'Script or command to execute';
COMMENT ON COLUMN action.runtime IS 'Runtime environment for execution';
COMMENT ON COLUMN action.param_schema IS 'JSON schema for action parameters';
COMMENT ON COLUMN action.out_schema IS 'JSON schema for action output';
COMMENT ON COLUMN action.parameter_delivery IS 'How parameters are delivered: stdin (standard input - secure), file (temporary file - secure for large payloads). Environment variables are set separately via execution.env_vars.';
COMMENT ON COLUMN action.parameter_format IS 'Parameter serialization format: json (JSON object - default), dotenv (KEY=''VALUE''), yaml (YAML format)';
COMMENT ON COLUMN action.output_format IS 'Output parsing format: text (no parsing - raw stdout), json (parse stdout as JSON), yaml (parse stdout as YAML), jsonl (parse each line as JSON, collect into array)';
COMMENT ON COLUMN action.is_adhoc IS 'True if action was manually created (ad-hoc), false if installed from pack';
COMMENT ON COLUMN action.timeout_seconds IS 'Worker queue TTL override in seconds. If NULL, uses global worker_queue_ttl_ms config. Allows per-action timeout tuning.';
COMMENT ON COLUMN action.max_retries IS 'Maximum number of automatic retry attempts for failed executions. 0 = no retries (default).';
COMMENT ON COLUMN action.runtime_version_constraint IS 'Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0"). NULL means any version.';
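-- Illustrative parameter delivery (hypothetical): with the defaults
-- parameter_delivery = 'stdin' and parameter_format = 'json', a worker would write
-- something like
--   {"channel": "#ops", "message": "deploy finished"}
-- to the action process's standard input; with output_format = 'json' the process's
-- stdout would be parsed back as a JSON object.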
-- ============================================================================
-- Add foreign key constraint for policy table
ALTER TABLE policy
ADD CONSTRAINT policy_action_fkey
FOREIGN KEY (action) REFERENCES action(id) ON DELETE CASCADE;
-- Note: Foreign key constraints for key table (key_owner_action_fkey, key_owner_sensor_fkey)
-- will be added in migration 000007_supporting_systems.sql after the key table is created
-- Note: Rule table will be created in migration 000005 after execution table exists
-- Note: Foreign key constraints for enforcement.rule and event.rule will be added there


@@ -0,0 +1,410 @@
-- Migration: Execution and Operations
-- Description: Creates execution, inquiry, rule, worker, and notification tables.
-- Includes retry tracking, worker health views, and helper functions.
-- Consolidates former migrations: 000006 (execution_system), 000008
-- (worker_notification), 000014 (worker_table), and 20260209 (phase3).
--
-- NOTE: The execution table is converted to a TimescaleDB hypertable in
-- migration 000009. Hypertables cannot be the target of FK constraints,
-- so columns referencing execution (inquiry.execution, workflow_execution.execution)
-- are plain BIGINT with no FK. Similarly, columns ON the execution table that
-- would self-reference or reference other hypertables (parent, enforcement,
-- original_execution) are plain BIGINT. The action and executor FKs are also
-- omitted since they would need to be dropped during hypertable conversion.
-- Version: 20250101000005
-- ============================================================================
-- EXECUTION TABLE
-- ============================================================================
CREATE TABLE execution (
id BIGSERIAL PRIMARY KEY,
action BIGINT, -- references action(id); no FK because execution becomes a hypertable
action_ref TEXT NOT NULL,
config JSONB,
env_vars JSONB,
parent BIGINT, -- self-reference; no FK because execution becomes a hypertable
enforcement BIGINT, -- references enforcement(id); no FK (both are hypertables)
executor BIGINT, -- references identity(id); no FK because execution becomes a hypertable
worker BIGINT, -- references worker(id); no FK because execution becomes a hypertable
status execution_status_enum NOT NULL DEFAULT 'requested',
result JSONB,
started_at TIMESTAMPTZ, -- set when execution transitions to 'running'
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
is_workflow BOOLEAN DEFAULT false NOT NULL,
workflow_def BIGINT, -- references workflow_definition(id); no FK because execution becomes a hypertable
workflow_task JSONB,
-- Retry tracking (baked in from phase 3)
retry_count INTEGER NOT NULL DEFAULT 0,
max_retries INTEGER,
retry_reason TEXT,
original_execution BIGINT, -- self-reference; no FK because execution becomes a hypertable
updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX idx_execution_action ON execution(action);
CREATE INDEX idx_execution_action_ref ON execution(action_ref);
CREATE INDEX idx_execution_parent ON execution(parent);
CREATE INDEX idx_execution_enforcement ON execution(enforcement);
CREATE INDEX idx_execution_executor ON execution(executor);
CREATE INDEX idx_execution_worker ON execution(worker);
CREATE INDEX idx_execution_status ON execution(status);
CREATE INDEX idx_execution_created ON execution(created DESC);
CREATE INDEX idx_execution_updated ON execution(updated DESC);
CREATE INDEX idx_execution_status_created ON execution(status, created DESC);
CREATE INDEX idx_execution_status_updated ON execution(status, updated DESC);
CREATE INDEX idx_execution_action_status ON execution(action, status);
CREATE INDEX idx_execution_executor_created ON execution(executor, created DESC);
CREATE INDEX idx_execution_worker_created ON execution(worker, created DESC);
CREATE INDEX idx_execution_parent_created ON execution(parent, created DESC);
CREATE INDEX idx_execution_result_gin ON execution USING GIN (result);
CREATE INDEX idx_execution_env_vars_gin ON execution USING GIN (env_vars);
CREATE INDEX idx_execution_original_execution ON execution(original_execution) WHERE original_execution IS NOT NULL;
CREATE INDEX idx_execution_status_retry ON execution(status, retry_count) WHERE status = 'failed' AND retry_count < COALESCE(max_retries, 0);
-- Trigger
CREATE TRIGGER update_execution_updated
BEFORE UPDATE ON execution
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE execution IS 'Executions represent action runs and support nested workflows';
COMMENT ON COLUMN execution.action IS 'Action being executed (may be null if action deleted)';
COMMENT ON COLUMN execution.action_ref IS 'Action reference (preserved even if action deleted)';
COMMENT ON COLUMN execution.config IS 'Snapshot of action configuration at execution time';
COMMENT ON COLUMN execution.env_vars IS 'Environment variables for this execution as key-value pairs (string -> string). These are set in the execution environment and are separate from action parameters. Used for execution context, configuration, and non-sensitive metadata.';
COMMENT ON COLUMN execution.parent IS 'Parent execution ID for workflow hierarchies (no FK — execution is a hypertable)';
COMMENT ON COLUMN execution.enforcement IS 'Enforcement that triggered this execution (no FK — both are hypertables)';
COMMENT ON COLUMN execution.executor IS 'Identity that initiated the execution (no FK — execution is a hypertable)';
COMMENT ON COLUMN execution.worker IS 'Assigned worker handling this execution (no FK — execution is a hypertable)';
COMMENT ON COLUMN execution.status IS 'Current execution lifecycle status';
COMMENT ON COLUMN execution.result IS 'Execution output/results';
COMMENT ON COLUMN execution.retry_count IS 'Current retry attempt number (0 = first attempt, 1 = first retry, etc.)';
COMMENT ON COLUMN execution.max_retries IS 'Maximum retries for this execution. Copied from action.max_retries at creation time.';
COMMENT ON COLUMN execution.retry_reason IS 'Reason for retry (e.g., "worker_unavailable", "transient_error", "manual_retry")';
COMMENT ON COLUMN execution.original_execution IS 'ID of the original execution if this is a retry. Forms a retry chain.';
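-- Illustrative retry scan (hypothetical, not executed here) matching the partial
-- index idx_execution_status_retry:
--   SELECT id, action_ref, retry_count, max_retries
--   FROM execution
--   WHERE status = 'failed' AND retry_count < COALESCE(max_retries, 0);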
-- ============================================================================
-- ============================================================================
-- INQUIRY TABLE
-- ============================================================================
CREATE TABLE inquiry (
id BIGSERIAL PRIMARY KEY,
execution BIGINT NOT NULL, -- references execution(id); no FK because execution is a hypertable
prompt TEXT NOT NULL,
response_schema JSONB,
assigned_to BIGINT REFERENCES identity(id) ON DELETE SET NULL,
status inquiry_status_enum NOT NULL DEFAULT 'pending',
response JSONB,
timeout_at TIMESTAMPTZ,
responded_at TIMESTAMPTZ,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX idx_inquiry_execution ON inquiry(execution);
CREATE INDEX idx_inquiry_assigned_to ON inquiry(assigned_to);
CREATE INDEX idx_inquiry_status ON inquiry(status);
CREATE INDEX idx_inquiry_timeout_at ON inquiry(timeout_at) WHERE timeout_at IS NOT NULL;
CREATE INDEX idx_inquiry_created ON inquiry(created DESC);
CREATE INDEX idx_inquiry_status_created ON inquiry(status, created DESC);
CREATE INDEX idx_inquiry_assigned_status ON inquiry(assigned_to, status);
CREATE INDEX idx_inquiry_execution_status ON inquiry(execution, status);
CREATE INDEX idx_inquiry_response_gin ON inquiry USING GIN (response);
-- Trigger
CREATE TRIGGER update_inquiry_updated
BEFORE UPDATE ON inquiry
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE inquiry IS 'Inquiries enable human-in-the-loop workflows with async user interactions';
COMMENT ON COLUMN inquiry.execution IS 'Execution that is waiting on this inquiry (no FK — execution is a hypertable)';
COMMENT ON COLUMN inquiry.prompt IS 'Question or prompt text for the user';
COMMENT ON COLUMN inquiry.response_schema IS 'JSON schema defining expected response format';
COMMENT ON COLUMN inquiry.assigned_to IS 'Identity who should respond to this inquiry';
COMMENT ON COLUMN inquiry.status IS 'Current inquiry lifecycle status';
COMMENT ON COLUMN inquiry.response IS 'User response data';
COMMENT ON COLUMN inquiry.timeout_at IS 'When this inquiry expires';
COMMENT ON COLUMN inquiry.responded_at IS 'When the response was received';
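-- Illustrative timeout sweep (hypothetical, not executed here), served by
-- idx_inquiry_timeout_at:
--   SELECT id, execution
--   FROM inquiry
--   WHERE status = 'pending' AND timeout_at IS NOT NULL AND timeout_at < NOW();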
-- ============================================================================
-- ============================================================================
-- RULE TABLE
-- ============================================================================
CREATE TABLE rule (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT NOT NULL,
label TEXT NOT NULL,
description TEXT,
action BIGINT REFERENCES action(id) ON DELETE SET NULL,
action_ref TEXT NOT NULL,
trigger BIGINT REFERENCES trigger(id) ON DELETE SET NULL,
trigger_ref TEXT NOT NULL,
conditions JSONB NOT NULL DEFAULT '[]'::jsonb,
action_params JSONB DEFAULT '{}'::jsonb,
trigger_params JSONB DEFAULT '{}'::jsonb,
enabled BOOLEAN NOT NULL,
is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT rule_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT rule_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);
-- Indexes
CREATE INDEX idx_rule_ref ON rule(ref);
CREATE INDEX idx_rule_pack ON rule(pack);
CREATE INDEX idx_rule_action ON rule(action);
CREATE INDEX idx_rule_trigger ON rule(trigger);
CREATE INDEX idx_rule_enabled ON rule(enabled) WHERE enabled = TRUE;
CREATE INDEX idx_rule_is_adhoc ON rule(is_adhoc) WHERE is_adhoc = true;
CREATE INDEX idx_rule_created ON rule(created DESC);
CREATE INDEX idx_rule_trigger_enabled ON rule(trigger, enabled);
CREATE INDEX idx_rule_action_enabled ON rule(action, enabled);
CREATE INDEX idx_rule_pack_enabled ON rule(pack, enabled);
CREATE INDEX idx_rule_action_params_gin ON rule USING GIN (action_params);
CREATE INDEX idx_rule_trigger_params_gin ON rule USING GIN (trigger_params);
-- Trigger
CREATE TRIGGER update_rule_updated
BEFORE UPDATE ON rule
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE rule IS 'Rules link triggers to actions with conditions';
COMMENT ON COLUMN rule.ref IS 'Unique rule reference (format: pack.name)';
COMMENT ON COLUMN rule.label IS 'Human-readable rule name';
COMMENT ON COLUMN rule.action IS 'Action to execute when rule triggers (null if action deleted)';
COMMENT ON COLUMN rule.trigger IS 'Trigger that activates this rule (null if trigger deleted)';
COMMENT ON COLUMN rule.conditions IS 'Condition expressions to evaluate before executing action';
COMMENT ON COLUMN rule.action_params IS 'Parameter overrides for the action';
COMMENT ON COLUMN rule.trigger_params IS 'Parameter overrides for the trigger';
COMMENT ON COLUMN rule.enabled IS 'Whether this rule is active';
COMMENT ON COLUMN rule.is_adhoc IS 'True if rule was manually created (ad-hoc), false if installed from pack';
-- ============================================================================
-- Add foreign key constraints now that rule table exists
ALTER TABLE enforcement
ADD CONSTRAINT enforcement_rule_fkey
FOREIGN KEY (rule) REFERENCES rule(id) ON DELETE SET NULL;
ALTER TABLE event
ADD CONSTRAINT event_rule_fkey
FOREIGN KEY (rule) REFERENCES rule(id) ON DELETE SET NULL;
-- ============================================================================
-- WORKER TABLE
-- ============================================================================
CREATE TABLE worker (
id BIGSERIAL PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
worker_type worker_type_enum NOT NULL,
worker_role worker_role_enum NOT NULL,
runtime BIGINT REFERENCES runtime(id) ON DELETE SET NULL,
host TEXT,
port INTEGER,
status worker_status_enum NOT NULL DEFAULT 'active',
capabilities JSONB,
meta JSONB,
last_heartbeat TIMESTAMPTZ,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX idx_worker_name ON worker(name);
CREATE INDEX idx_worker_type ON worker(worker_type);
CREATE INDEX idx_worker_role ON worker(worker_role);
CREATE INDEX idx_worker_runtime ON worker(runtime);
CREATE INDEX idx_worker_status ON worker(status);
CREATE INDEX idx_worker_last_heartbeat ON worker(last_heartbeat DESC) WHERE last_heartbeat IS NOT NULL;
CREATE INDEX idx_worker_created ON worker(created DESC);
CREATE INDEX idx_worker_status_role ON worker(status, worker_role);
CREATE INDEX idx_worker_capabilities_gin ON worker USING GIN (capabilities);
CREATE INDEX idx_worker_meta_gin ON worker USING GIN (meta);
CREATE INDEX idx_worker_capabilities_health_status ON worker USING GIN ((capabilities -> 'health' -> 'status'));
-- Trigger
CREATE TRIGGER update_worker_updated
BEFORE UPDATE ON worker
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE worker IS 'Worker registration and tracking table for action and sensor workers';
COMMENT ON COLUMN worker.name IS 'Unique worker identifier (typically hostname-based)';
COMMENT ON COLUMN worker.worker_type IS 'Worker deployment type (local or remote)';
COMMENT ON COLUMN worker.worker_role IS 'Worker role (action or sensor)';
COMMENT ON COLUMN worker.runtime IS 'Runtime environment this worker supports (optional)';
COMMENT ON COLUMN worker.host IS 'Worker host address';
COMMENT ON COLUMN worker.port IS 'Worker port number';
COMMENT ON COLUMN worker.status IS 'Worker operational status';
COMMENT ON COLUMN worker.capabilities IS 'Worker capabilities (e.g., max_concurrent_executions, supported runtimes)';
COMMENT ON COLUMN worker.meta IS 'Additional worker metadata';
COMMENT ON COLUMN worker.last_heartbeat IS 'Timestamp of last heartbeat from worker';
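-- Illustrative capabilities value (hypothetical; the "runtimes" key name is assumed)
-- combining scheduling limits with the health block read by healthy_workers below:
--   {
--     "max_concurrent_executions": 10,
--     "runtimes": ["core.python", "core.shell"],
--     "health": { "status": "healthy", "queue_depth": 3, "consecutive_failures": 0 }
--   }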
-- ============================================================================
-- NOTIFICATION TABLE
-- ============================================================================
CREATE TABLE notification (
id BIGSERIAL PRIMARY KEY,
channel TEXT NOT NULL,
entity_type TEXT NOT NULL,
entity TEXT NOT NULL,
activity TEXT NOT NULL,
state notification_status_enum NOT NULL DEFAULT 'created',
content JSONB,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX idx_notification_channel ON notification(channel);
CREATE INDEX idx_notification_entity_type ON notification(entity_type);
CREATE INDEX idx_notification_entity ON notification(entity);
CREATE INDEX idx_notification_state ON notification(state);
CREATE INDEX idx_notification_created ON notification(created DESC);
CREATE INDEX idx_notification_channel_state ON notification(channel, state);
CREATE INDEX idx_notification_entity_type_entity ON notification(entity_type, entity);
CREATE INDEX idx_notification_state_created ON notification(state, created DESC);
CREATE INDEX idx_notification_content_gin ON notification USING GIN (content);
-- Trigger
CREATE TRIGGER update_notification_updated
BEFORE UPDATE ON notification
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Function for pg_notify on notification insert
CREATE OR REPLACE FUNCTION notify_on_insert()
RETURNS TRIGGER AS $$
DECLARE
payload TEXT;
BEGIN
-- Build JSON payload with id, entity, and activity
payload := json_build_object(
'id', NEW.id,
'entity_type', NEW.entity_type,
'entity', NEW.entity,
'activity', NEW.activity
)::text;
-- Send notification to the specified channel
PERFORM pg_notify(NEW.channel, payload);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger to send pg_notify on notification insert
CREATE TRIGGER notify_on_notification_insert
AFTER INSERT ON notification
FOR EACH ROW
EXECUTE FUNCTION notify_on_insert();
-- Comments
COMMENT ON TABLE notification IS 'System notifications about entity changes for real-time updates';
COMMENT ON COLUMN notification.channel IS 'Notification channel (typically table name)';
COMMENT ON COLUMN notification.entity_type IS 'Type of entity (table name)';
COMMENT ON COLUMN notification.entity IS 'Entity identifier (typically ID or ref)';
COMMENT ON COLUMN notification.activity IS 'Activity type (e.g., "created", "updated", "completed")';
COMMENT ON COLUMN notification.state IS 'Processing state of notification';
COMMENT ON COLUMN notification.content IS 'Optional notification payload data';
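-- Illustrative flow (hypothetical values, not executed here): a consumer runs
--   LISTEN execution;
-- and an insert such as
--   INSERT INTO notification (channel, entity_type, entity, activity)
--   VALUES ('execution', 'execution', '42', 'created');
-- causes notify_on_insert() to emit the payload
--   {"id": <new id>, "entity_type": "execution", "entity": "42", "activity": "created"}
-- on the 'execution' channel.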
-- ============================================================================
-- WORKER HEALTH VIEWS AND FUNCTIONS
-- ============================================================================
-- View for healthy workers (convenience for queries)
CREATE OR REPLACE VIEW healthy_workers AS
SELECT
w.id,
w.name,
w.worker_type,
w.worker_role,
w.runtime,
w.status,
w.capabilities,
w.last_heartbeat,
(w.capabilities -> 'health' ->> 'status')::TEXT as health_status,
(w.capabilities -> 'health' ->> 'queue_depth')::INTEGER as queue_depth,
(w.capabilities -> 'health' ->> 'consecutive_failures')::INTEGER as consecutive_failures
FROM worker w
WHERE
w.status = 'active'
AND w.last_heartbeat > NOW() - INTERVAL '30 seconds'
AND (
-- Healthy if no health info (backward compatible)
w.capabilities -> 'health' IS NULL
OR
-- Or explicitly marked healthy
w.capabilities -> 'health' ->> 'status' IN ('healthy', 'degraded')
);
COMMENT ON VIEW healthy_workers IS 'Workers that are active, have fresh heartbeat, and are healthy or degraded (not unhealthy)';
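-- Illustrative scheduling query (hypothetical, not executed here): least-loaded
-- healthy action workers first:
--   SELECT id, name, queue_depth
--   FROM healthy_workers
--   WHERE worker_role = 'action'
--   ORDER BY queue_depth NULLS FIRST;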
-- Function to get worker queue depth estimate
CREATE OR REPLACE FUNCTION get_worker_queue_depth(worker_id_param BIGINT)
RETURNS INTEGER AS $$
BEGIN
RETURN (
SELECT (capabilities -> 'health' ->> 'queue_depth')::INTEGER
FROM worker
WHERE id = worker_id_param
);
END;
$$ LANGUAGE plpgsql STABLE;
COMMENT ON FUNCTION get_worker_queue_depth IS 'Extract current queue depth from worker health metadata';
-- Function to check if execution is retriable
CREATE OR REPLACE FUNCTION is_execution_retriable(execution_id_param BIGINT)
RETURNS BOOLEAN AS $$
DECLARE
exec_record RECORD;
BEGIN
SELECT
e.retry_count,
e.max_retries,
e.status
INTO exec_record
FROM execution e
WHERE e.id = execution_id_param;
IF NOT FOUND THEN
RETURN FALSE;
END IF;
-- Can retry if:
-- 1. Status is failed
-- 2. max_retries is set and > 0
-- 3. retry_count < max_retries
RETURN (
exec_record.status = 'failed'
AND exec_record.max_retries IS NOT NULL
AND exec_record.max_retries > 0
AND exec_record.retry_count < exec_record.max_retries
);
END;
$$ LANGUAGE plpgsql STABLE;
COMMENT ON FUNCTION is_execution_retriable IS 'Check if a failed execution can be automatically retried based on retry limits';


@@ -0,0 +1,145 @@
-- Migration: Workflow System
-- Description: Creates workflow_definition and workflow_execution tables
-- (workflow_task_execution consolidated into execution.workflow_task JSONB)
--
-- NOTE: The execution table is converted to a TimescaleDB hypertable in
-- migration 000009. Hypertables cannot be the target of FK constraints,
-- so workflow_execution.execution is a plain BIGINT with no FK.
-- execution.workflow_def also has no FK (added as plain BIGINT in 000005)
-- since execution is a hypertable and FKs from hypertables are only
-- supported for simple cases — we omit it for consistency.
-- Version: 20250101000006
-- ============================================================================
-- WORKFLOW DEFINITION TABLE
-- ============================================================================
CREATE TABLE workflow_definition (
id BIGSERIAL PRIMARY KEY,
ref VARCHAR(255) NOT NULL UNIQUE,
pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
pack_ref VARCHAR(255) NOT NULL,
label VARCHAR(255) NOT NULL,
description TEXT,
version VARCHAR(50) NOT NULL,
param_schema JSONB,
out_schema JSONB,
definition JSONB NOT NULL,
tags TEXT[] DEFAULT '{}',
created TIMESTAMPTZ DEFAULT NOW() NOT NULL,
updated TIMESTAMPTZ DEFAULT NOW() NOT NULL
);
-- Indexes
CREATE INDEX idx_workflow_def_pack ON workflow_definition(pack);
CREATE INDEX idx_workflow_def_ref ON workflow_definition(ref);
CREATE INDEX idx_workflow_def_tags ON workflow_definition USING gin(tags);
-- Trigger
CREATE TRIGGER update_workflow_definition_updated
BEFORE UPDATE ON workflow_definition
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE workflow_definition IS 'Stores workflow definitions (YAML parsed to JSON)';
COMMENT ON COLUMN workflow_definition.ref IS 'Unique workflow reference (e.g., pack_name.workflow_name)';
COMMENT ON COLUMN workflow_definition.definition IS 'Complete workflow specification including tasks, variables, and transitions';
COMMENT ON COLUMN workflow_definition.param_schema IS 'JSON schema for workflow input parameters';
COMMENT ON COLUMN workflow_definition.out_schema IS 'JSON schema for workflow output';
-- ============================================================================
-- WORKFLOW EXECUTION TABLE
-- ============================================================================
CREATE TABLE workflow_execution (
id BIGSERIAL PRIMARY KEY,
execution BIGINT NOT NULL, -- references execution(id); no FK because execution is a hypertable
workflow_def BIGINT NOT NULL REFERENCES workflow_definition(id) ON DELETE CASCADE,
current_tasks TEXT[] DEFAULT '{}',
completed_tasks TEXT[] DEFAULT '{}',
failed_tasks TEXT[] DEFAULT '{}',
skipped_tasks TEXT[] DEFAULT '{}',
variables JSONB DEFAULT '{}',
task_graph JSONB NOT NULL,
status execution_status_enum NOT NULL DEFAULT 'requested',
error_message TEXT,
paused BOOLEAN DEFAULT false NOT NULL,
pause_reason TEXT,
created TIMESTAMPTZ DEFAULT NOW() NOT NULL,
updated TIMESTAMPTZ DEFAULT NOW() NOT NULL
);
-- Indexes
CREATE INDEX idx_workflow_exec_execution ON workflow_execution(execution);
CREATE INDEX idx_workflow_exec_workflow_def ON workflow_execution(workflow_def);
CREATE INDEX idx_workflow_exec_status ON workflow_execution(status);
CREATE INDEX idx_workflow_exec_paused ON workflow_execution(paused) WHERE paused = true;
-- Trigger
CREATE TRIGGER update_workflow_execution_updated
BEFORE UPDATE ON workflow_execution
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE workflow_execution IS 'Runtime state tracking for workflow executions. execution column has no FK — execution is a hypertable.';
COMMENT ON COLUMN workflow_execution.variables IS 'Workflow-scoped variables, updated via publish directives';
COMMENT ON COLUMN workflow_execution.task_graph IS 'Execution graph with dependencies and transitions';
COMMENT ON COLUMN workflow_execution.current_tasks IS 'Array of task names currently executing';
COMMENT ON COLUMN workflow_execution.paused IS 'True if workflow execution is paused (can be resumed)';
-- ============================================================================
-- MODIFY ACTION TABLE - Add Workflow Support
-- ============================================================================
ALTER TABLE action
ADD COLUMN workflow_def BIGINT REFERENCES workflow_definition(id) ON DELETE CASCADE;
CREATE INDEX idx_action_workflow_def ON action(workflow_def);
COMMENT ON COLUMN action.workflow_def IS 'Reference to workflow definition (non-null means this action is a workflow)';
-- NOTE: execution.workflow_def has no FK constraint because execution is a
-- TimescaleDB hypertable (converted in migration 000009). The column was
-- created as a plain BIGINT in migration 000005.
-- ============================================================================
-- WORKFLOW VIEWS
-- ============================================================================
CREATE VIEW workflow_execution_summary AS
SELECT
we.id,
we.execution,
wd.ref as workflow_ref,
wd.label as workflow_label,
wd.version as workflow_version,
we.status,
we.paused,
array_length(we.current_tasks, 1) as current_task_count,
array_length(we.completed_tasks, 1) as completed_task_count,
array_length(we.failed_tasks, 1) as failed_task_count,
array_length(we.skipped_tasks, 1) as skipped_task_count,
we.error_message,
we.created,
we.updated
FROM workflow_execution we
JOIN workflow_definition wd ON we.workflow_def = wd.id;
COMMENT ON VIEW workflow_execution_summary IS 'Summary view of workflow executions with task counts';
CREATE VIEW workflow_action_link AS
SELECT
wd.id as workflow_def_id,
wd.ref as workflow_ref,
wd.label,
wd.version,
a.id as action_id,
a.ref as action_ref,
a.pack as pack_id,
a.pack_ref
FROM workflow_definition wd
LEFT JOIN action a ON a.workflow_def = wd.id;
COMMENT ON VIEW workflow_action_link IS 'Links workflow definitions to their corresponding action records';


@@ -0,0 +1,779 @@
-- Migration: Supporting Systems
-- Description: Creates keys, artifacts, queue_stats, pack_environment, and pack_testing
--              tables, plus the webhook management functions.
-- Consolidates former migrations: 000009 (keys_artifacts), 000010 (webhook_system),
-- 000011 (pack_environments), and 000012 (pack_testing).
-- Version: 20250101000007
-- ============================================================================
-- KEY TABLE
-- ============================================================================
CREATE TABLE key (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL UNIQUE,
owner_type owner_type_enum NOT NULL,
owner TEXT,
owner_identity BIGINT REFERENCES identity(id),
owner_pack BIGINT REFERENCES pack(id),
owner_pack_ref TEXT,
owner_action BIGINT, -- Forward reference to action table
owner_action_ref TEXT,
owner_sensor BIGINT, -- Forward reference to sensor table
owner_sensor_ref TEXT,
name TEXT NOT NULL,
encrypted BOOLEAN NOT NULL,
encryption_key_hash TEXT,
value TEXT NOT NULL,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-- Constraints
CONSTRAINT key_ref_lowercase CHECK (ref = LOWER(ref)),
CONSTRAINT key_ref_format CHECK (ref ~ '^[^.]+(\.[^.]+)*$')
);
-- Unique index on owner_type, owner, name
CREATE UNIQUE INDEX idx_key_unique ON key(owner_type, owner, name);
-- Indexes
CREATE INDEX idx_key_ref ON key(ref);
CREATE INDEX idx_key_owner_type ON key(owner_type);
CREATE INDEX idx_key_owner_identity ON key(owner_identity);
CREATE INDEX idx_key_owner_pack ON key(owner_pack);
CREATE INDEX idx_key_owner_action ON key(owner_action);
CREATE INDEX idx_key_owner_sensor ON key(owner_sensor);
CREATE INDEX idx_key_created ON key(created DESC);
CREATE INDEX idx_key_owner_type_owner ON key(owner_type, owner);
CREATE INDEX idx_key_owner_identity_name ON key(owner_identity, name);
CREATE INDEX idx_key_owner_pack_name ON key(owner_pack, name);
-- Function to validate and set owner fields
CREATE OR REPLACE FUNCTION validate_key_owner()
RETURNS TRIGGER AS $$
DECLARE
owner_count INTEGER := 0;
BEGIN
-- Count how many owner fields are set
IF NEW.owner_identity IS NOT NULL THEN owner_count := owner_count + 1; END IF;
IF NEW.owner_pack IS NOT NULL THEN owner_count := owner_count + 1; END IF;
IF NEW.owner_action IS NOT NULL THEN owner_count := owner_count + 1; END IF;
IF NEW.owner_sensor IS NOT NULL THEN owner_count := owner_count + 1; END IF;
-- System owner should have no owner fields set
IF NEW.owner_type = 'system' THEN
IF owner_count > 0 THEN
RAISE EXCEPTION 'System owner cannot have specific owner fields set';
END IF;
NEW.owner := 'system';
-- All other types must have exactly one owner field set
ELSIF owner_count != 1 THEN
RAISE EXCEPTION 'Exactly one owner field must be set for owner_type %', NEW.owner_type;
-- Validate owner_type matches the populated field and set owner
ELSIF NEW.owner_type = 'identity' THEN
IF NEW.owner_identity IS NULL THEN
RAISE EXCEPTION 'owner_identity must be set for owner_type identity';
END IF;
NEW.owner := NEW.owner_identity::TEXT;
ELSIF NEW.owner_type = 'pack' THEN
IF NEW.owner_pack IS NULL THEN
RAISE EXCEPTION 'owner_pack must be set for owner_type pack';
END IF;
NEW.owner := NEW.owner_pack::TEXT;
ELSIF NEW.owner_type = 'action' THEN
IF NEW.owner_action IS NULL THEN
RAISE EXCEPTION 'owner_action must be set for owner_type action';
END IF;
NEW.owner := NEW.owner_action::TEXT;
ELSIF NEW.owner_type = 'sensor' THEN
IF NEW.owner_sensor IS NULL THEN
RAISE EXCEPTION 'owner_sensor must be set for owner_type sensor';
END IF;
NEW.owner := NEW.owner_sensor::TEXT;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger to validate owner fields
CREATE TRIGGER validate_key_owner_trigger
BEFORE INSERT OR UPDATE ON key
FOR EACH ROW
EXECUTE FUNCTION validate_key_owner();
-- Trigger for updated timestamp
CREATE TRIGGER update_key_updated
BEFORE UPDATE ON key
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE key IS 'Keys store configuration values and secrets with ownership scoping';
COMMENT ON COLUMN key.ref IS 'Unique key reference (format: [owner.]name)';
COMMENT ON COLUMN key.owner_type IS 'Type of owner (system, identity, pack, action, sensor)';
COMMENT ON COLUMN key.owner IS 'Owner identifier (auto-populated by trigger)';
COMMENT ON COLUMN key.owner_identity IS 'Identity owner (if owner_type=identity)';
COMMENT ON COLUMN key.owner_pack IS 'Pack owner (if owner_type=pack)';
COMMENT ON COLUMN key.owner_pack_ref IS 'Pack reference for owner_pack';
COMMENT ON COLUMN key.owner_action IS 'Action owner (if owner_type=action)';
COMMENT ON COLUMN key.owner_sensor IS 'Sensor owner (if owner_type=sensor)';
COMMENT ON COLUMN key.name IS 'Key name within owner scope';
COMMENT ON COLUMN key.encrypted IS 'Whether the value is encrypted';
COMMENT ON COLUMN key.encryption_key_hash IS 'Hash of encryption key used';
COMMENT ON COLUMN key.value IS 'The actual value (encrypted if encrypted=true)';
-- Add foreign key constraints for action and sensor references
ALTER TABLE key
ADD CONSTRAINT key_owner_action_fkey
FOREIGN KEY (owner_action) REFERENCES action(id) ON DELETE CASCADE;
ALTER TABLE key
ADD CONSTRAINT key_owner_sensor_fkey
FOREIGN KEY (owner_sensor) REFERENCES sensor(id) ON DELETE CASCADE;
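-- Example (illustrative only, not part of this migration): inserting a
-- pack-scoped key. The validate_key_owner trigger checks that exactly one
-- owner column is set and fills in `owner`; the pack id 42 and ref/name
-- values below are hypothetical.
--
--   INSERT INTO key (ref, owner_type, owner_pack, owner_pack_ref, name, encrypted, value)
--   VALUES ('examplepack.api_token', 'pack', 42, 'examplepack', 'api_token', false, 'abc123');
--   -- After the BEFORE INSERT trigger runs, owner = '42'.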
-- ============================================================================
-- ARTIFACT TABLE
-- ============================================================================
CREATE TABLE artifact (
id BIGSERIAL PRIMARY KEY,
ref TEXT NOT NULL,
scope owner_type_enum NOT NULL DEFAULT 'system',
owner TEXT NOT NULL DEFAULT '',
type artifact_type_enum NOT NULL,
visibility artifact_visibility_enum NOT NULL DEFAULT 'private',
retention_policy artifact_retention_enum NOT NULL DEFAULT 'versions',
retention_limit INTEGER NOT NULL DEFAULT 1,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX idx_artifact_ref ON artifact(ref);
CREATE INDEX idx_artifact_scope ON artifact(scope);
CREATE INDEX idx_artifact_owner ON artifact(owner);
CREATE INDEX idx_artifact_type ON artifact(type);
CREATE INDEX idx_artifact_created ON artifact(created DESC);
CREATE INDEX idx_artifact_scope_owner ON artifact(scope, owner);
CREATE INDEX idx_artifact_type_created ON artifact(type, created DESC);
CREATE INDEX idx_artifact_visibility ON artifact(visibility);
CREATE INDEX idx_artifact_visibility_scope ON artifact(visibility, scope, owner);
-- Trigger
CREATE TRIGGER update_artifact_updated
BEFORE UPDATE ON artifact
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE artifact IS 'Artifacts track files, logs, and outputs from executions';
COMMENT ON COLUMN artifact.ref IS 'Artifact reference/path';
COMMENT ON COLUMN artifact.scope IS 'Owner type (system, identity, pack, action, sensor)';
COMMENT ON COLUMN artifact.owner IS 'Owner identifier';
COMMENT ON COLUMN artifact.type IS 'Artifact type (file, url, progress, etc.)';
COMMENT ON COLUMN artifact.visibility IS 'Visibility level: public (all users) or private (scoped by scope/owner)';
COMMENT ON COLUMN artifact.retention_policy IS 'How to retain artifacts (versions, days, hours, minutes)';
COMMENT ON COLUMN artifact.retention_limit IS 'Numeric limit for retention policy';
-- ============================================================================
-- QUEUE_STATS TABLE
-- ============================================================================
CREATE TABLE queue_stats (
action_id BIGINT PRIMARY KEY REFERENCES action(id) ON DELETE CASCADE,
queue_length INTEGER NOT NULL DEFAULT 0,
active_count INTEGER NOT NULL DEFAULT 0,
max_concurrent INTEGER NOT NULL DEFAULT 1,
oldest_enqueued_at TIMESTAMPTZ,
total_enqueued BIGINT NOT NULL DEFAULT 0,
total_completed BIGINT NOT NULL DEFAULT 0,
last_updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX idx_queue_stats_last_updated ON queue_stats(last_updated);
-- Comments
COMMENT ON TABLE queue_stats IS 'Real-time queue statistics for action execution ordering';
COMMENT ON COLUMN queue_stats.action_id IS 'Foreign key to action table';
COMMENT ON COLUMN queue_stats.queue_length IS 'Number of executions waiting in queue';
COMMENT ON COLUMN queue_stats.active_count IS 'Number of currently running executions';
COMMENT ON COLUMN queue_stats.max_concurrent IS 'Maximum concurrent executions allowed';
COMMENT ON COLUMN queue_stats.oldest_enqueued_at IS 'Timestamp of oldest queued execution (NULL if queue empty)';
COMMENT ON COLUMN queue_stats.total_enqueued IS 'Total executions enqueued since queue creation';
COMMENT ON COLUMN queue_stats.total_completed IS 'Total executions completed since queue creation';
COMMENT ON COLUMN queue_stats.last_updated IS 'Timestamp of last statistics update';
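-- Example (illustrative only, not part of this migration): finding actions
-- whose queues are backed up beyond their concurrency limit (assumes the
-- action table's ref column, as used by the views in earlier migrations):
--
--   SELECT a.ref, qs.queue_length, qs.active_count, qs.max_concurrent, qs.oldest_enqueued_at
--   FROM queue_stats qs
--   JOIN action a ON a.id = qs.action_id
--   WHERE qs.queue_length > 0 AND qs.active_count >= qs.max_concurrent
--   ORDER BY qs.oldest_enqueued_at;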
-- ============================================================================
-- PACK ENVIRONMENT TABLE
-- ============================================================================
CREATE TABLE IF NOT EXISTS pack_environment (
id BIGSERIAL PRIMARY KEY,
pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
pack_ref TEXT NOT NULL,
runtime BIGINT NOT NULL REFERENCES runtime(id) ON DELETE CASCADE,
runtime_ref TEXT NOT NULL,
env_path TEXT NOT NULL,
status pack_environment_status_enum NOT NULL DEFAULT 'pending',
installed_at TIMESTAMPTZ,
last_verified TIMESTAMPTZ,
install_log TEXT,
install_error TEXT,
metadata JSONB DEFAULT '{}'::jsonb,
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(pack, runtime)
);
-- Indexes
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack ON pack_environment(pack);
CREATE INDEX IF NOT EXISTS idx_pack_environment_runtime ON pack_environment(runtime);
CREATE INDEX IF NOT EXISTS idx_pack_environment_status ON pack_environment(status);
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack_ref ON pack_environment(pack_ref);
CREATE INDEX IF NOT EXISTS idx_pack_environment_runtime_ref ON pack_environment(runtime_ref);
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack_runtime ON pack_environment(pack, runtime);
-- Trigger for updated timestamp
CREATE TRIGGER update_pack_environment_updated
BEFORE UPDATE ON pack_environment
FOR EACH ROW
EXECUTE FUNCTION update_updated_column();
-- Comments
COMMENT ON TABLE pack_environment IS 'Tracks pack-specific runtime environments for dependency isolation';
COMMENT ON COLUMN pack_environment.pack IS 'Pack that owns this environment';
COMMENT ON COLUMN pack_environment.pack_ref IS 'Pack reference for quick lookup';
COMMENT ON COLUMN pack_environment.runtime IS 'Runtime used for this environment';
COMMENT ON COLUMN pack_environment.runtime_ref IS 'Runtime reference for quick lookup';
COMMENT ON COLUMN pack_environment.env_path IS 'Filesystem path to the environment directory (e.g., /opt/attune/packenvs/mypack/python)';
COMMENT ON COLUMN pack_environment.status IS 'Current installation status';
COMMENT ON COLUMN pack_environment.installed_at IS 'When the environment was successfully installed';
COMMENT ON COLUMN pack_environment.last_verified IS 'Last time the environment was verified as working';
COMMENT ON COLUMN pack_environment.install_log IS 'Installation output logs';
COMMENT ON COLUMN pack_environment.install_error IS 'Error message if installation failed';
COMMENT ON COLUMN pack_environment.metadata IS 'Additional metadata (installed packages, versions, etc.)';
-- ============================================================================
-- PACK ENVIRONMENT: Update existing runtimes with installer metadata
-- ============================================================================
-- Python runtime installers
UPDATE runtime
SET installers = jsonb_build_object(
'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
'installers', jsonb_build_array(
jsonb_build_object(
'name', 'create_venv',
'description', 'Create Python virtual environment',
'command', 'python3',
'args', jsonb_build_array('-m', 'venv', '{env_path}'),
'cwd', '{pack_path}',
'env', jsonb_build_object(),
'order', 1,
'optional', false
),
jsonb_build_object(
'name', 'upgrade_pip',
'description', 'Upgrade pip to latest version',
'command', '{env_path}/bin/pip',
'args', jsonb_build_array('install', '--upgrade', 'pip'),
'cwd', '{pack_path}',
'env', jsonb_build_object(),
'order', 2,
'optional', true
),
jsonb_build_object(
'name', 'install_requirements',
'description', 'Install pack Python dependencies',
'command', '{env_path}/bin/pip',
'args', jsonb_build_array('install', '-r', '{pack_path}/requirements.txt'),
'cwd', '{pack_path}',
'env', jsonb_build_object(),
'order', 3,
'optional', false,
'condition', jsonb_build_object(
'file_exists', '{pack_path}/requirements.txt'
)
)
),
'executable_templates', jsonb_build_object(
'python', '{env_path}/bin/python',
'pip', '{env_path}/bin/pip'
)
)
WHERE ref = 'core.python';
-- Node.js runtime installers
UPDATE runtime
SET installers = jsonb_build_object(
'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
'installers', jsonb_build_array(
jsonb_build_object(
'name', 'npm_install',
'description', 'Install Node.js dependencies',
'command', 'npm',
'args', jsonb_build_array('install', '--prefix', '{env_path}'),
'cwd', '{pack_path}',
'env', jsonb_build_object(
'NODE_PATH', '{env_path}/node_modules'
),
'order', 1,
'optional', false,
'condition', jsonb_build_object(
'file_exists', '{pack_path}/package.json'
)
)
),
'executable_templates', jsonb_build_object(
'node', 'node',
'npm', 'npm'
),
'env_vars', jsonb_build_object(
'NODE_PATH', '{env_path}/node_modules'
)
)
WHERE ref = 'core.nodejs';
-- Shell runtime (no environment needed, uses system shell)
UPDATE runtime
SET installers = jsonb_build_object(
'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
'installers', jsonb_build_array(),
'executable_templates', jsonb_build_object(
'sh', 'sh',
'bash', 'bash'
),
'requires_environment', false
)
WHERE ref = 'core.shell';
-- Native runtime (no environment needed, binaries are standalone)
UPDATE runtime
SET installers = jsonb_build_object(
'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
'installers', jsonb_build_array(),
'executable_templates', jsonb_build_object(),
'requires_environment', false
)
WHERE ref = 'core.native';
-- Built-in sensor runtime (internal, no environment)
UPDATE runtime
SET installers = jsonb_build_object(
'installers', jsonb_build_array(),
'requires_environment', false
)
WHERE ref = 'core.sensor.builtin';
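-- Example (illustrative only, not part of this migration): expanding a
-- runtime's installer steps in execution order. The executor is expected to
-- substitute the {env_path}/{pack_path} placeholders at install time.
--
--   SELECT step->>'name'                  AS step_name,
--          step->>'command'               AS command,
--          (step->>'order')::int          AS step_order,
--          (step->>'optional')::boolean   AS optional
--   FROM runtime,
--        jsonb_array_elements(installers->'installers') AS step
--   WHERE ref = 'core.python'
--   ORDER BY (step->>'order')::int;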
-- ============================================================================
-- PACK ENVIRONMENT: Helper functions
-- ============================================================================
-- Function to get environment path for a pack/runtime combination
CREATE OR REPLACE FUNCTION get_pack_environment_path(p_pack_ref TEXT, p_runtime_ref TEXT)
RETURNS TEXT AS $$
DECLARE
v_runtime_name TEXT;
v_base_template TEXT;
v_result TEXT;
BEGIN
-- Get runtime name and base path template
SELECT
LOWER(name),
installers->>'base_path_template'
INTO v_runtime_name, v_base_template
FROM runtime
WHERE ref = p_runtime_ref;
IF v_base_template IS NULL THEN
v_base_template := '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}';
END IF;
-- Replace template variables
v_result := v_base_template;
v_result := REPLACE(v_result, '{pack_ref}', p_pack_ref);
v_result := REPLACE(v_result, '{runtime_ref}', p_runtime_ref);
v_result := REPLACE(v_result, '{runtime_name_lower}', v_runtime_name);
RETURN v_result;
END;
$$ LANGUAGE plpgsql STABLE;  -- reads the runtime table, so STABLE rather than IMMUTABLE
COMMENT ON FUNCTION get_pack_environment_path IS 'Calculate the filesystem path for a pack runtime environment';
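-- Example (illustrative only): for a hypothetical pack ref 'examplepack', and
-- assuming the core.python runtime's name lowercases to 'python':
--   SELECT get_pack_environment_path('examplepack', 'core.python');
--   -- → '/opt/attune/packenvs/examplepack/python'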
-- Function to check if a runtime requires an environment
CREATE OR REPLACE FUNCTION runtime_requires_environment(p_runtime_ref TEXT)
RETURNS BOOLEAN AS $$
DECLARE
v_requires BOOLEAN;
BEGIN
SELECT COALESCE((installers->>'requires_environment')::boolean, true)
INTO v_requires
FROM runtime
WHERE ref = p_runtime_ref;
RETURN COALESCE(v_requires, false);
END;
$$ LANGUAGE plpgsql STABLE;
COMMENT ON FUNCTION runtime_requires_environment IS 'Check if a runtime needs a pack-specific environment';
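-- Example (illustrative only): shell and native runtimes opt out via
-- installers->>'requires_environment' = false, while runtimes without the key
-- default to true:
--   SELECT runtime_requires_environment('core.shell');   -- → false
--   SELECT runtime_requires_environment('core.python');  -- → true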
-- ============================================================================
-- PACK ENVIRONMENT: Status view
-- ============================================================================
CREATE OR REPLACE VIEW v_pack_environment_status AS
SELECT
pe.id,
pe.pack,
p.ref AS pack_ref,
p.label AS pack_name,
pe.runtime,
r.ref AS runtime_ref,
r.name AS runtime_name,
pe.env_path,
pe.status,
pe.installed_at,
pe.last_verified,
CASE
WHEN pe.status = 'ready' AND pe.last_verified < NOW() - INTERVAL '7 days' THEN true
ELSE false
END AS needs_verification,
CASE
WHEN pe.status = 'ready' THEN 'healthy'
WHEN pe.status = 'failed' THEN 'unhealthy'
WHEN pe.status IN ('pending', 'installing') THEN 'provisioning'
WHEN pe.status = 'outdated' THEN 'needs_update'
ELSE 'unknown'
END AS health_status,
pe.install_error,
pe.created,
pe.updated
FROM pack_environment pe
JOIN pack p ON pe.pack = p.id
JOIN runtime r ON pe.runtime = r.id;
COMMENT ON VIEW v_pack_environment_status IS 'Consolidated view of pack environment status with health indicators';
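-- Example (illustrative only, not part of this migration): environments that
-- need attention (failed installs or stale verification):
--
--   SELECT pack_ref, runtime_ref, status, health_status, needs_verification, install_error
--   FROM v_pack_environment_status
--   WHERE health_status = 'unhealthy' OR needs_verification;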
-- ============================================================================
-- PACK TEST EXECUTION TABLE
-- ============================================================================
CREATE TABLE IF NOT EXISTS pack_test_execution (
id BIGSERIAL PRIMARY KEY,
pack_id BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
pack_version VARCHAR(50) NOT NULL,
execution_time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
trigger_reason VARCHAR(50) NOT NULL, -- 'install', 'update', 'manual', 'validation'
total_tests INT NOT NULL,
passed INT NOT NULL,
failed INT NOT NULL,
skipped INT NOT NULL,
pass_rate DECIMAL(5,4) NOT NULL, -- 0.0000 to 1.0000
duration_ms BIGINT NOT NULL,
result JSONB NOT NULL, -- Full test result structure
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
CONSTRAINT valid_test_counts CHECK (total_tests >= 0 AND passed >= 0 AND failed >= 0 AND skipped >= 0),
CONSTRAINT valid_pass_rate CHECK (pass_rate >= 0.0 AND pass_rate <= 1.0),
CONSTRAINT valid_trigger_reason CHECK (trigger_reason IN ('install', 'update', 'manual', 'validation'))
);
-- Indexes for efficient queries
CREATE INDEX idx_pack_test_execution_pack_id ON pack_test_execution(pack_id);
CREATE INDEX idx_pack_test_execution_time ON pack_test_execution(execution_time DESC);
CREATE INDEX idx_pack_test_execution_pass_rate ON pack_test_execution(pass_rate);
CREATE INDEX idx_pack_test_execution_trigger ON pack_test_execution(trigger_reason);
-- Comments for documentation
COMMENT ON TABLE pack_test_execution IS 'Tracks pack test execution results for validation and auditing';
COMMENT ON COLUMN pack_test_execution.pack_id IS 'Reference to the pack being tested';
COMMENT ON COLUMN pack_test_execution.pack_version IS 'Version of the pack at test time';
COMMENT ON COLUMN pack_test_execution.trigger_reason IS 'What triggered the test: install, update, manual, validation';
COMMENT ON COLUMN pack_test_execution.pass_rate IS 'Percentage of tests passed (0.0 to 1.0)';
COMMENT ON COLUMN pack_test_execution.result IS 'Full JSON structure with detailed test results';
-- Pack test result summary view (all test executions with pack info)
CREATE OR REPLACE VIEW pack_test_summary AS
SELECT
p.id AS pack_id,
p.ref AS pack_ref,
p.label AS pack_label,
pte.id AS test_execution_id,
pte.pack_version,
pte.execution_time AS test_time,
pte.trigger_reason,
pte.total_tests,
pte.passed,
pte.failed,
pte.skipped,
pte.pass_rate,
pte.duration_ms,
ROW_NUMBER() OVER (PARTITION BY p.id ORDER BY pte.execution_time DESC) AS rn
FROM pack p
LEFT JOIN pack_test_execution pte ON p.id = pte.pack_id
WHERE pte.id IS NOT NULL;
COMMENT ON VIEW pack_test_summary IS 'Summary of all pack test executions with pack details';
-- Latest test results per pack view
CREATE OR REPLACE VIEW pack_latest_test AS
SELECT
pack_id,
pack_ref,
pack_label,
test_execution_id,
pack_version,
test_time,
trigger_reason,
total_tests,
passed,
failed,
skipped,
pass_rate,
duration_ms
FROM pack_test_summary
WHERE rn = 1;
COMMENT ON VIEW pack_latest_test IS 'Latest test results for each pack';
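-- Example (illustrative only, not part of this migration): packs whose most
-- recent test run had failures:
--
--   SELECT pack_ref, pack_version, failed, pass_rate, test_time
--   FROM pack_latest_test
--   WHERE failed > 0
--   ORDER BY test_time DESC;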
-- Function to get pack test statistics
CREATE OR REPLACE FUNCTION get_pack_test_stats(p_pack_id BIGINT)
RETURNS TABLE (
total_executions BIGINT,
successful_executions BIGINT,
failed_executions BIGINT,
avg_pass_rate DECIMAL,
avg_duration_ms BIGINT,
last_test_time TIMESTAMPTZ,
last_test_passed BOOLEAN
) AS $$
BEGIN
RETURN QUERY
SELECT
COUNT(*)::BIGINT AS total_executions,
COUNT(*) FILTER (WHERE passed = total_tests)::BIGINT AS successful_executions,
COUNT(*) FILTER (WHERE failed > 0)::BIGINT AS failed_executions,
AVG(pass_rate) AS avg_pass_rate,
AVG(duration_ms)::BIGINT AS avg_duration_ms,
MAX(execution_time) AS last_test_time,
(SELECT failed = 0 FROM pack_test_execution
WHERE pack_id = p_pack_id
ORDER BY execution_time DESC
LIMIT 1) AS last_test_passed
FROM pack_test_execution
WHERE pack_id = p_pack_id;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION get_pack_test_stats IS 'Get statistical summary of test executions for a pack';
-- Function to check if pack has recent passing tests
CREATE OR REPLACE FUNCTION pack_has_passing_tests(
p_pack_id BIGINT,
p_hours_ago INT DEFAULT 24
)
RETURNS BOOLEAN AS $$
DECLARE
v_has_passing_tests BOOLEAN;
BEGIN
SELECT EXISTS(
SELECT 1
FROM pack_test_execution
WHERE pack_id = p_pack_id
AND execution_time > NOW() - (p_hours_ago || ' hours')::INTERVAL
AND failed = 0
AND total_tests > 0
) INTO v_has_passing_tests;
RETURN v_has_passing_tests;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION pack_has_passing_tests IS 'Check if pack has recent passing test executions';
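-- Example (illustrative only): both helpers take a pack id; the id 42 below
-- is hypothetical.
--   SELECT * FROM get_pack_test_stats(42);
--   SELECT pack_has_passing_tests(42);        -- default 24-hour window
--   SELECT pack_has_passing_tests(42, 168);   -- 7-day window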
-- Add trigger to update pack metadata on test execution
CREATE OR REPLACE FUNCTION update_pack_test_metadata()
RETURNS TRIGGER AS $$
BEGIN
-- Could update pack table with last_tested timestamp if we add that column
-- For now, just a placeholder for future functionality
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_update_pack_test_metadata
AFTER INSERT ON pack_test_execution
FOR EACH ROW
EXECUTE FUNCTION update_pack_test_metadata();
COMMENT ON TRIGGER trigger_update_pack_test_metadata ON pack_test_execution IS 'Updates pack metadata when tests are executed';
-- ============================================================================
-- WEBHOOK FUNCTIONS
-- ============================================================================
-- Drop existing functions to avoid signature conflicts
DROP FUNCTION IF EXISTS enable_trigger_webhook(BIGINT, JSONB);
DROP FUNCTION IF EXISTS enable_trigger_webhook(BIGINT);
DROP FUNCTION IF EXISTS disable_trigger_webhook(BIGINT);
DROP FUNCTION IF EXISTS regenerate_trigger_webhook_key(BIGINT);
-- Function to enable webhooks for a trigger
CREATE OR REPLACE FUNCTION enable_trigger_webhook(
p_trigger_id BIGINT,
p_config JSONB DEFAULT '{}'::jsonb
)
RETURNS TABLE(
webhook_enabled BOOLEAN,
webhook_key VARCHAR(255),
webhook_url TEXT
) AS $$
DECLARE
v_webhook_key VARCHAR(255);
v_api_base_url TEXT := 'http://localhost:8080'; -- Default, should be configured
BEGIN
-- Check if trigger exists
IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
END IF;
-- Generate webhook key if one doesn't exist
SELECT t.webhook_key INTO v_webhook_key
FROM trigger t
WHERE t.id = p_trigger_id;
IF v_webhook_key IS NULL THEN
v_webhook_key := generate_webhook_key();
END IF;
-- Update trigger to enable webhooks
UPDATE trigger
SET
webhook_enabled = TRUE,
webhook_key = v_webhook_key,
webhook_config = p_config,
updated = NOW()
WHERE id = p_trigger_id;
-- Return webhook details
RETURN QUERY SELECT
TRUE,
v_webhook_key,
v_api_base_url || '/api/v1/webhooks/' || v_webhook_key;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION enable_trigger_webhook(BIGINT, JSONB) IS
'Enables webhooks for a trigger with optional configuration. Generates a new webhook key if one does not exist. Returns webhook details.';
-- Function to disable webhooks for a trigger
CREATE OR REPLACE FUNCTION disable_trigger_webhook(
p_trigger_id BIGINT
)
RETURNS BOOLEAN AS $$
BEGIN
-- Check if trigger exists
IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
END IF;
-- Update trigger to disable webhooks
-- Set webhook_key to NULL when disabling to remove it from API responses
UPDATE trigger
SET
webhook_enabled = FALSE,
webhook_key = NULL,
updated = NOW()
WHERE id = p_trigger_id;
RETURN TRUE;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION disable_trigger_webhook(BIGINT) IS
'Disables webhooks for a trigger. Webhook key is removed when disabled.';
-- Function to regenerate webhook key for a trigger
CREATE OR REPLACE FUNCTION regenerate_trigger_webhook_key(
p_trigger_id BIGINT
)
RETURNS TABLE(
webhook_key VARCHAR(255),
previous_key_revoked BOOLEAN
) AS $$
DECLARE
v_new_key VARCHAR(255);
v_old_key VARCHAR(255);
v_webhook_enabled BOOLEAN;
BEGIN
-- Check if trigger exists
IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
END IF;
-- Get current webhook state
SELECT t.webhook_key, t.webhook_enabled INTO v_old_key, v_webhook_enabled
FROM trigger t
WHERE t.id = p_trigger_id;
-- Check if webhooks are enabled
IF NOT v_webhook_enabled THEN
RAISE EXCEPTION 'Webhooks are not enabled for trigger %', p_trigger_id;
END IF;
-- Generate new key
v_new_key := generate_webhook_key();
-- Update trigger with new key
UPDATE trigger
SET
webhook_key = v_new_key,
updated = NOW()
WHERE id = p_trigger_id;
-- Return new key and whether old key was present
RETURN QUERY SELECT
v_new_key,
(v_old_key IS NOT NULL);
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION regenerate_trigger_webhook_key(BIGINT) IS
'Regenerates webhook key for a trigger. Returns new key and whether a previous key was revoked.';
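-- Example (illustrative only, not part of this migration): webhook lifecycle
-- for a hypothetical trigger id 7.
--   SELECT * FROM enable_trigger_webhook(7);           -- creates key, returns webhook_url
--   SELECT * FROM regenerate_trigger_webhook_key(7);   -- rotates the key
--   SELECT disable_trigger_webhook(7);                 -- disables and clears the key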
-- Verify all webhook functions exist
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = current_schema()
AND p.proname = 'enable_trigger_webhook'
) THEN
RAISE EXCEPTION 'enable_trigger_webhook function not found after migration';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = current_schema()
AND p.proname = 'disable_trigger_webhook'
) THEN
RAISE EXCEPTION 'disable_trigger_webhook function not found after migration';
END IF;
IF NOT EXISTS (
SELECT 1 FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = current_schema()
AND p.proname = 'regenerate_trigger_webhook_key'
) THEN
RAISE EXCEPTION 'regenerate_trigger_webhook_key function not found after migration';
END IF;
RAISE NOTICE 'All webhook functions successfully created';
END $$;


@@ -0,0 +1,428 @@
-- Migration: LISTEN/NOTIFY Triggers
-- Description: Consolidated PostgreSQL LISTEN/NOTIFY triggers for real-time event notifications
-- Version: 20250101000008
-- ============================================================================
-- EXECUTION CHANGE NOTIFICATION
-- ============================================================================
-- Function to notify on execution creation
CREATE OR REPLACE FUNCTION notify_execution_created()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
enforcement_rule_ref TEXT;
enforcement_trigger_ref TEXT;
BEGIN
-- Lookup enforcement details if this execution is linked to an enforcement
IF NEW.enforcement IS NOT NULL THEN
SELECT rule_ref, trigger_ref
INTO enforcement_rule_ref, enforcement_trigger_ref
FROM enforcement
WHERE id = NEW.enforcement;
END IF;
payload := json_build_object(
'entity_type', 'execution',
'entity_id', NEW.id,
'id', NEW.id,
'action_id', NEW.action,
'action_ref', NEW.action_ref,
'status', NEW.status,
'enforcement', NEW.enforcement,
'rule_ref', enforcement_rule_ref,
'trigger_ref', enforcement_trigger_ref,
'parent', NEW.parent,
'result', NEW.result,
'started_at', NEW.started_at,
'workflow_task', NEW.workflow_task,
'created', NEW.created,
'updated', NEW.updated
);
PERFORM pg_notify('execution_created', payload::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Function to notify on execution status changes
CREATE OR REPLACE FUNCTION notify_execution_status_changed()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
enforcement_rule_ref TEXT;
enforcement_trigger_ref TEXT;
BEGIN
-- Only notify on updates, not inserts
IF TG_OP = 'UPDATE' AND OLD.status IS DISTINCT FROM NEW.status THEN
-- Lookup enforcement details if this execution is linked to an enforcement
IF NEW.enforcement IS NOT NULL THEN
SELECT rule_ref, trigger_ref
INTO enforcement_rule_ref, enforcement_trigger_ref
FROM enforcement
WHERE id = NEW.enforcement;
END IF;
payload := json_build_object(
'entity_type', 'execution',
'entity_id', NEW.id,
'id', NEW.id,
'action_id', NEW.action,
'action_ref', NEW.action_ref,
'status', NEW.status,
'old_status', OLD.status,
'enforcement', NEW.enforcement,
'rule_ref', enforcement_rule_ref,
'trigger_ref', enforcement_trigger_ref,
'parent', NEW.parent,
'result', NEW.result,
'started_at', NEW.started_at,
'workflow_task', NEW.workflow_task,
'created', NEW.created,
'updated', NEW.updated
);
PERFORM pg_notify('execution_status_changed', payload::text);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on execution table for creation
CREATE TRIGGER execution_created_notify
AFTER INSERT ON execution
FOR EACH ROW
EXECUTE FUNCTION notify_execution_created();
-- Trigger on execution table for status changes
CREATE TRIGGER execution_status_changed_notify
AFTER UPDATE ON execution
FOR EACH ROW
EXECUTE FUNCTION notify_execution_status_changed();
COMMENT ON FUNCTION notify_execution_created() IS 'Sends execution creation notifications via PostgreSQL LISTEN/NOTIFY';
COMMENT ON FUNCTION notify_execution_status_changed() IS 'Sends execution status change notifications via PostgreSQL LISTEN/NOTIFY';
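-- Example (illustrative only, not part of this migration): a service
-- subscribes to these channels on a regular connection and receives the JSON
-- payloads built above (entity_type, entity_id, status, old_status, ...):
--   LISTEN execution_created;
--   LISTEN execution_status_changed;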
-- ============================================================================
-- EVENT CREATION NOTIFICATION
-- ============================================================================
-- Function to notify on event creation
CREATE OR REPLACE FUNCTION notify_event_created()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
BEGIN
payload := json_build_object(
'entity_type', 'event',
'entity_id', NEW.id,
'id', NEW.id,
'trigger', NEW.trigger,
'trigger_ref', NEW.trigger_ref,
'source', NEW.source,
'source_ref', NEW.source_ref,
'rule', NEW.rule,
'rule_ref', NEW.rule_ref,
'payload', NEW.payload,
'created', NEW.created
);
PERFORM pg_notify('event_created', payload::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on event table
CREATE TRIGGER event_created_notify
AFTER INSERT ON event
FOR EACH ROW
EXECUTE FUNCTION notify_event_created();
COMMENT ON FUNCTION notify_event_created() IS 'Sends event creation notifications via PostgreSQL LISTEN/NOTIFY';
-- ============================================================================
-- ENFORCEMENT CHANGE NOTIFICATION
-- ============================================================================
-- Function to notify on enforcement creation
CREATE OR REPLACE FUNCTION notify_enforcement_created()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
BEGIN
payload := json_build_object(
'entity_type', 'enforcement',
'entity_id', NEW.id,
'id', NEW.id,
'rule', NEW.rule,
'rule_ref', NEW.rule_ref,
'trigger_ref', NEW.trigger_ref,
'event', NEW.event,
'status', NEW.status,
'condition', NEW.condition,
'conditions', NEW.conditions,
'config', NEW.config,
'payload', NEW.payload,
'created', NEW.created,
'resolved_at', NEW.resolved_at
);
PERFORM pg_notify('enforcement_created', payload::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on enforcement table
CREATE TRIGGER enforcement_created_notify
AFTER INSERT ON enforcement
FOR EACH ROW
EXECUTE FUNCTION notify_enforcement_created();
COMMENT ON FUNCTION notify_enforcement_created() IS 'Sends enforcement creation notifications via PostgreSQL LISTEN/NOTIFY';
-- Function to notify on enforcement status changes
CREATE OR REPLACE FUNCTION notify_enforcement_status_changed()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
BEGIN
-- Only notify on updates when status actually changed
IF TG_OP = 'UPDATE' AND OLD.status IS DISTINCT FROM NEW.status THEN
payload := json_build_object(
'entity_type', 'enforcement',
'entity_id', NEW.id,
'id', NEW.id,
'rule', NEW.rule,
'rule_ref', NEW.rule_ref,
'trigger_ref', NEW.trigger_ref,
'event', NEW.event,
'status', NEW.status,
'old_status', OLD.status,
'condition', NEW.condition,
'conditions', NEW.conditions,
'config', NEW.config,
'payload', NEW.payload,
'created', NEW.created,
'resolved_at', NEW.resolved_at
);
PERFORM pg_notify('enforcement_status_changed', payload::text);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on enforcement table for status changes
CREATE TRIGGER enforcement_status_changed_notify
AFTER UPDATE ON enforcement
FOR EACH ROW
EXECUTE FUNCTION notify_enforcement_status_changed();
COMMENT ON FUNCTION notify_enforcement_status_changed() IS 'Sends enforcement status change notifications via PostgreSQL LISTEN/NOTIFY';
-- ============================================================================
-- INQUIRY NOTIFICATIONS
-- ============================================================================
-- Function to notify on inquiry creation
CREATE OR REPLACE FUNCTION notify_inquiry_created()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
BEGIN
payload := json_build_object(
'entity_type', 'inquiry',
'entity_id', NEW.id,
'id', NEW.id,
'execution', NEW.execution,
'status', NEW.status,
'ttl', NEW.ttl,
'created', NEW.created
);
PERFORM pg_notify('inquiry_created', payload::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Function to notify on inquiry response
CREATE OR REPLACE FUNCTION notify_inquiry_responded()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
BEGIN
-- Only notify when status changes to 'responded'
IF TG_OP = 'UPDATE' AND NEW.status = 'responded' AND OLD.status != 'responded' THEN
payload := json_build_object(
'entity_type', 'inquiry',
'entity_id', NEW.id,
'id', NEW.id,
'execution', NEW.execution,
'status', NEW.status,
'response', NEW.response,
'updated', NEW.updated
);
PERFORM pg_notify('inquiry_responded', payload::text);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on inquiry table for creation
CREATE TRIGGER inquiry_created_notify
AFTER INSERT ON inquiry
FOR EACH ROW
EXECUTE FUNCTION notify_inquiry_created();
-- Trigger on inquiry table for responses
CREATE TRIGGER inquiry_responded_notify
AFTER UPDATE ON inquiry
FOR EACH ROW
EXECUTE FUNCTION notify_inquiry_responded();
COMMENT ON FUNCTION notify_inquiry_created() IS 'Sends inquiry creation notifications via PostgreSQL LISTEN/NOTIFY';
COMMENT ON FUNCTION notify_inquiry_responded() IS 'Sends inquiry response notifications via PostgreSQL LISTEN/NOTIFY';
-- ============================================================================
-- WORKFLOW EXECUTION NOTIFICATIONS
-- ============================================================================
-- Function to notify on workflow execution status changes
CREATE OR REPLACE FUNCTION notify_workflow_execution_status_changed()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
BEGIN
-- Only notify for workflow executions when status changes
IF TG_OP = 'UPDATE' AND NEW.is_workflow = true AND OLD.status IS DISTINCT FROM NEW.status THEN
payload := json_build_object(
'entity_type', 'execution',
'entity_id', NEW.id,
'id', NEW.id,
'action_ref', NEW.action_ref,
'status', NEW.status,
'old_status', OLD.status,
'workflow_def', NEW.workflow_def,
'parent', NEW.parent,
'created', NEW.created,
'updated', NEW.updated
);
PERFORM pg_notify('workflow_execution_status_changed', payload::text);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on execution table for workflow status changes
CREATE TRIGGER workflow_execution_status_changed_notify
AFTER UPDATE ON execution
FOR EACH ROW
WHEN (NEW.is_workflow = true)
EXECUTE FUNCTION notify_workflow_execution_status_changed();
COMMENT ON FUNCTION notify_workflow_execution_status_changed() IS 'Sends workflow execution status change notifications via PostgreSQL LISTEN/NOTIFY';
-- ============================================================================
-- ARTIFACT NOTIFICATIONS
-- ============================================================================
-- Function to notify on artifact creation
CREATE OR REPLACE FUNCTION notify_artifact_created()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
BEGIN
payload := json_build_object(
'entity_type', 'artifact',
'entity_id', NEW.id,
'id', NEW.id,
'ref', NEW.ref,
'type', NEW.type,
'visibility', NEW.visibility,
'name', NEW.name,
'execution', NEW.execution,
'scope', NEW.scope,
'owner', NEW.owner,
'content_type', NEW.content_type,
'size_bytes', NEW.size_bytes,
'created', NEW.created
);
PERFORM pg_notify('artifact_created', payload::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on artifact table for creation
CREATE TRIGGER artifact_created_notify
AFTER INSERT ON artifact
FOR EACH ROW
EXECUTE FUNCTION notify_artifact_created();
COMMENT ON FUNCTION notify_artifact_created() IS 'Sends artifact creation notifications via PostgreSQL LISTEN/NOTIFY';
-- Function to notify on artifact updates (progress appends, data changes)
CREATE OR REPLACE FUNCTION notify_artifact_updated()
RETURNS TRIGGER AS $$
DECLARE
payload JSON;
latest_percent DOUBLE PRECISION;
latest_message TEXT;
entry_count INTEGER;
BEGIN
-- Only notify on actual changes
IF TG_OP = 'UPDATE' THEN
-- Extract progress summary from data array if this is a progress artifact
IF NEW.type = 'progress' AND NEW.data IS NOT NULL AND jsonb_typeof(NEW.data) = 'array' THEN
entry_count := jsonb_array_length(NEW.data);
IF entry_count > 0 THEN
latest_percent := (NEW.data -> (entry_count - 1) ->> 'percent')::DOUBLE PRECISION;
latest_message := NEW.data -> (entry_count - 1) ->> 'message';
END IF;
END IF;
payload := json_build_object(
'entity_type', 'artifact',
'entity_id', NEW.id,
'id', NEW.id,
'ref', NEW.ref,
'type', NEW.type,
'visibility', NEW.visibility,
'name', NEW.name,
'execution', NEW.execution,
'scope', NEW.scope,
'owner', NEW.owner,
'content_type', NEW.content_type,
'size_bytes', NEW.size_bytes,
'progress_percent', latest_percent,
'progress_message', latest_message,
'progress_entries', entry_count,
'created', NEW.created,
'updated', NEW.updated
);
PERFORM pg_notify('artifact_updated', payload::text);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Trigger on artifact table for updates
CREATE TRIGGER artifact_updated_notify
AFTER UPDATE ON artifact
FOR EACH ROW
EXECUTE FUNCTION notify_artifact_updated();
COMMENT ON FUNCTION notify_artifact_updated() IS 'Sends artifact update notifications via PostgreSQL LISTEN/NOTIFY (includes progress summary for progress-type artifacts)';
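-- Example (illustrative only): subscribing to artifact progress updates:
--   LISTEN artifact_updated;
-- For a progress-type artifact whose data array ends with
-- {"percent": 80, "message": "uploading"}, the notification payload carries
-- progress_percent = 80, progress_message = 'uploading', and progress_entries
-- equal to the array length, so a UI can render progress without refetching
-- the artifact row.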


@@ -0,0 +1,616 @@
-- Migration: TimescaleDB Entity History and Analytics
-- Description: Creates append-only history hypertables for execution and worker tables.
-- Uses JSONB diff format to track field-level changes via PostgreSQL triggers.
-- Converts the event, enforcement, and execution tables into TimescaleDB
-- hypertables (events are immutable; enforcements are updated exactly once;
-- executions are updated ~4 times during their lifecycle).
-- Includes continuous aggregates for dashboard analytics.
-- See docs/plans/timescaledb-entity-history.md for full design.
--
-- NOTE: FK constraints that would reference hypertable targets were never
-- created in earlier migrations (000004, 000005, 000006), so no DROP
-- CONSTRAINT statements are needed here.
-- Version: 20250101000009
-- ============================================================================
-- EXTENSION
-- ============================================================================
CREATE EXTENSION IF NOT EXISTS timescaledb;
-- ============================================================================
-- HELPER FUNCTIONS
-- ============================================================================
-- Returns a small {digest, size, type} object instead of the full JSONB value.
-- Used in history triggers for columns that can be arbitrarily large (e.g. result).
-- The full value is always available on the live row.
CREATE OR REPLACE FUNCTION _jsonb_digest_summary(val JSONB)
RETURNS JSONB AS $$
BEGIN
IF val IS NULL THEN
RETURN NULL;
END IF;
RETURN jsonb_build_object(
'digest', 'md5:' || md5(val::text),
'size', octet_length(val::text),
'type', jsonb_typeof(val)
);
END;
$$ LANGUAGE plpgsql IMMUTABLE;
COMMENT ON FUNCTION _jsonb_digest_summary(JSONB) IS
'Returns a compact {digest, size, type} summary of a JSONB value for use in history tables. '
'The digest is md5 of the text representation — sufficient for change-detection, not for security.';
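-- Example (illustrative only):
--   SELECT _jsonb_digest_summary('{"stdout": "ok"}'::jsonb);
--   -- → {"digest": "md5:<32 hex chars>", "size": <octet length>, "type": "object"}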
-- ============================================================================
-- HISTORY TABLES
-- ============================================================================
-- ----------------------------------------------------------------------------
-- execution_history
-- ----------------------------------------------------------------------------
CREATE TABLE execution_history (
time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
operation TEXT NOT NULL,
entity_id BIGINT NOT NULL,
entity_ref TEXT,
changed_fields TEXT[] NOT NULL DEFAULT '{}',
old_values JSONB,
new_values JSONB
);
SELECT create_hypertable('execution_history', 'time',
chunk_time_interval => INTERVAL '1 day');
CREATE INDEX idx_execution_history_entity
ON execution_history (entity_id, time DESC);
CREATE INDEX idx_execution_history_entity_ref
ON execution_history (entity_ref, time DESC);
CREATE INDEX idx_execution_history_status_changes
ON execution_history (time DESC)
WHERE 'status' = ANY(changed_fields);
CREATE INDEX idx_execution_history_changed_fields
ON execution_history USING GIN (changed_fields);
COMMENT ON TABLE execution_history IS 'Append-only history of field-level changes to the execution table (TimescaleDB hypertable)';
COMMENT ON COLUMN execution_history.time IS 'When the change occurred (hypertable partitioning dimension)';
COMMENT ON COLUMN execution_history.operation IS 'INSERT, UPDATE, or DELETE';
COMMENT ON COLUMN execution_history.entity_id IS 'execution.id of the changed row';
COMMENT ON COLUMN execution_history.entity_ref IS 'Denormalized action_ref for JOIN-free queries';
COMMENT ON COLUMN execution_history.changed_fields IS 'Array of field names that changed (empty for INSERT/DELETE)';
COMMENT ON COLUMN execution_history.old_values IS 'Previous values of changed fields (NULL for INSERT)';
COMMENT ON COLUMN execution_history.new_values IS 'New values of changed fields (NULL for DELETE)';
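-- Example (illustrative only, not part of this migration): reconstructing the
-- status timeline of a single execution (the id 123 is hypothetical):
--
--   SELECT time, old_values->>'status' AS from_status, new_values->>'status' AS to_status
--   FROM execution_history
--   WHERE entity_id = 123 AND 'status' = ANY(changed_fields)
--   ORDER BY time;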
-- ----------------------------------------------------------------------------
-- worker_history
-- ----------------------------------------------------------------------------
CREATE TABLE worker_history (
time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
operation TEXT NOT NULL,
entity_id BIGINT NOT NULL,
entity_ref TEXT,
changed_fields TEXT[] NOT NULL DEFAULT '{}',
old_values JSONB,
new_values JSONB
);
SELECT create_hypertable('worker_history', 'time',
chunk_time_interval => INTERVAL '7 days');
CREATE INDEX idx_worker_history_entity
ON worker_history (entity_id, time DESC);
CREATE INDEX idx_worker_history_entity_ref
ON worker_history (entity_ref, time DESC);
CREATE INDEX idx_worker_history_status_changes
ON worker_history (time DESC)
WHERE 'status' = ANY(changed_fields);
CREATE INDEX idx_worker_history_changed_fields
ON worker_history USING GIN (changed_fields);
COMMENT ON TABLE worker_history IS 'Append-only history of field-level changes to the worker table (TimescaleDB hypertable)';
COMMENT ON COLUMN worker_history.entity_ref IS 'Denormalized worker name for JOIN-free queries';
-- ============================================================================
-- CONVERT EVENT TABLE TO HYPERTABLE
-- ============================================================================
-- Events are immutable after insert — they are never updated. Instead of
-- maintaining a separate event_history table to track changes that never
-- happen, we convert the event table itself into a TimescaleDB hypertable
-- partitioned on `created`. This gives us automatic time-based partitioning,
-- compression, and retention for free.
--
-- No FK constraints reference event(id) — enforcement.event was created as a
-- plain BIGINT in migration 000004 (hypertables cannot be FK targets).
-- ----------------------------------------------------------------------------
-- Replace the single-column PK with a composite PK that includes the
-- partitioning column (required by TimescaleDB).
ALTER TABLE event DROP CONSTRAINT event_pkey;
ALTER TABLE event ADD PRIMARY KEY (id, created);
SELECT create_hypertable('event', 'created',
chunk_time_interval => INTERVAL '1 day',
migrate_data => true);
COMMENT ON TABLE event IS 'Events are instances of triggers firing (TimescaleDB hypertable partitioned on created)';
-- ============================================================================
-- CONVERT ENFORCEMENT TABLE TO HYPERTABLE
-- ============================================================================
-- Enforcements are created and then updated exactly once (status changes from
-- `created` to `processed` or `disabled` within ~1 second). This single update
-- happens well before the 7-day compression window, so UPDATE on uncompressed
-- chunks works without issues.
--
-- No FK constraints reference enforcement(id) — execution.enforcement was
-- created as a plain BIGINT in migration 000005.
-- ----------------------------------------------------------------------------
ALTER TABLE enforcement DROP CONSTRAINT enforcement_pkey;
ALTER TABLE enforcement ADD PRIMARY KEY (id, created);
SELECT create_hypertable('enforcement', 'created',
chunk_time_interval => INTERVAL '1 day',
migrate_data => true);
COMMENT ON TABLE enforcement IS 'Enforcements represent rule triggering by events (TimescaleDB hypertable partitioned on created)';
-- ============================================================================
-- CONVERT EXECUTION TABLE TO HYPERTABLE
-- ============================================================================
-- Executions are updated ~4 times during their lifecycle (requested → scheduled
-- → running → completed/failed), completing within at most ~1 day — well before
-- the 7-day compression window. The `updated` column and its BEFORE UPDATE
-- trigger are preserved (used by timeout monitor and UI).
--
-- No FK constraints reference execution(id) — inquiry.execution,
-- workflow_execution.execution, execution.parent, and execution.original_execution
-- were all created as plain BIGINT columns in migrations 000005 and 000006.
--
-- The existing execution_history hypertable and its trigger are preserved —
-- they track field-level diffs of each update, which remains valuable for
-- a mutable table.
-- ----------------------------------------------------------------------------
ALTER TABLE execution DROP CONSTRAINT execution_pkey;
ALTER TABLE execution ADD PRIMARY KEY (id, created);
SELECT create_hypertable('execution', 'created',
chunk_time_interval => INTERVAL '1 day',
migrate_data => true);
COMMENT ON TABLE execution IS 'Executions represent action runs with workflow support (TimescaleDB hypertable partitioned on created). Updated ~4 times during lifecycle, completing within ~1 day (well before 7-day compression window).';
-- ============================================================================
-- TRIGGER FUNCTIONS
-- ============================================================================
-- ----------------------------------------------------------------------------
-- execution history trigger
-- Tracked fields: status, result, executor, worker, workflow_task, env_vars, started_at
-- Note: result uses _jsonb_digest_summary() to avoid storing large payloads
-- ----------------------------------------------------------------------------
CREATE OR REPLACE FUNCTION record_execution_history()
RETURNS TRIGGER AS $$
DECLARE
changed TEXT[] := '{}';
old_vals JSONB := '{}';
new_vals JSONB := '{}';
BEGIN
IF TG_OP = 'INSERT' THEN
INSERT INTO execution_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
VALUES (NOW(), 'INSERT', NEW.id, NEW.action_ref, '{}', NULL,
jsonb_build_object(
'status', NEW.status,
'action_ref', NEW.action_ref,
'executor', NEW.executor,
'worker', NEW.worker,
'parent', NEW.parent,
'enforcement', NEW.enforcement,
'started_at', NEW.started_at
));
RETURN NEW;
END IF;
IF TG_OP = 'DELETE' THEN
INSERT INTO execution_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
VALUES (NOW(), 'DELETE', OLD.id, OLD.action_ref, '{}', NULL, NULL);
RETURN OLD;
END IF;
-- UPDATE: detect which fields changed
IF OLD.status IS DISTINCT FROM NEW.status THEN
changed := array_append(changed, 'status');
old_vals := old_vals || jsonb_build_object('status', OLD.status);
new_vals := new_vals || jsonb_build_object('status', NEW.status);
END IF;
-- Result: store a compact digest instead of the full JSONB to avoid bloat.
-- The live execution row always has the complete result.
IF OLD.result IS DISTINCT FROM NEW.result THEN
changed := array_append(changed, 'result');
old_vals := old_vals || jsonb_build_object('result', _jsonb_digest_summary(OLD.result));
new_vals := new_vals || jsonb_build_object('result', _jsonb_digest_summary(NEW.result));
END IF;
IF OLD.executor IS DISTINCT FROM NEW.executor THEN
changed := array_append(changed, 'executor');
old_vals := old_vals || jsonb_build_object('executor', OLD.executor);
new_vals := new_vals || jsonb_build_object('executor', NEW.executor);
END IF;
IF OLD.worker IS DISTINCT FROM NEW.worker THEN
changed := array_append(changed, 'worker');
old_vals := old_vals || jsonb_build_object('worker', OLD.worker);
new_vals := new_vals || jsonb_build_object('worker', NEW.worker);
END IF;
IF OLD.workflow_task IS DISTINCT FROM NEW.workflow_task THEN
changed := array_append(changed, 'workflow_task');
old_vals := old_vals || jsonb_build_object('workflow_task', OLD.workflow_task);
new_vals := new_vals || jsonb_build_object('workflow_task', NEW.workflow_task);
END IF;
IF OLD.env_vars IS DISTINCT FROM NEW.env_vars THEN
changed := array_append(changed, 'env_vars');
old_vals := old_vals || jsonb_build_object('env_vars', OLD.env_vars);
new_vals := new_vals || jsonb_build_object('env_vars', NEW.env_vars);
END IF;
IF OLD.started_at IS DISTINCT FROM NEW.started_at THEN
changed := array_append(changed, 'started_at');
old_vals := old_vals || jsonb_build_object('started_at', OLD.started_at);
new_vals := new_vals || jsonb_build_object('started_at', NEW.started_at);
END IF;
-- Only record if something actually changed
IF array_length(changed, 1) > 0 THEN
INSERT INTO execution_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
VALUES (NOW(), 'UPDATE', NEW.id, NEW.action_ref, changed, old_vals, new_vals);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION record_execution_history() IS 'Records field-level changes to execution table in execution_history hypertable';
-- ----------------------------------------------------------------------------
-- worker history trigger
-- Tracked fields: name, status, capabilities, meta, host, port
-- Excludes: last_heartbeat (not tracked, so heartbeat-only updates produce no history rows)
-- ----------------------------------------------------------------------------
CREATE OR REPLACE FUNCTION record_worker_history()
RETURNS TRIGGER AS $$
DECLARE
changed TEXT[] := '{}';
old_vals JSONB := '{}';
new_vals JSONB := '{}';
BEGIN
IF TG_OP = 'INSERT' THEN
INSERT INTO worker_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
VALUES (NOW(), 'INSERT', NEW.id, NEW.name, '{}', NULL,
jsonb_build_object(
'name', NEW.name,
'worker_type', NEW.worker_type,
'worker_role', NEW.worker_role,
'status', NEW.status,
'host', NEW.host,
'port', NEW.port
));
RETURN NEW;
END IF;
IF TG_OP = 'DELETE' THEN
INSERT INTO worker_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
VALUES (NOW(), 'DELETE', OLD.id, OLD.name, '{}', NULL, NULL);
RETURN OLD;
END IF;
-- UPDATE: detect which fields changed
IF OLD.name IS DISTINCT FROM NEW.name THEN
changed := array_append(changed, 'name');
old_vals := old_vals || jsonb_build_object('name', OLD.name);
new_vals := new_vals || jsonb_build_object('name', NEW.name);
END IF;
IF OLD.status IS DISTINCT FROM NEW.status THEN
changed := array_append(changed, 'status');
old_vals := old_vals || jsonb_build_object('status', OLD.status);
new_vals := new_vals || jsonb_build_object('status', NEW.status);
END IF;
IF OLD.capabilities IS DISTINCT FROM NEW.capabilities THEN
changed := array_append(changed, 'capabilities');
old_vals := old_vals || jsonb_build_object('capabilities', OLD.capabilities);
new_vals := new_vals || jsonb_build_object('capabilities', NEW.capabilities);
END IF;
IF OLD.meta IS DISTINCT FROM NEW.meta THEN
changed := array_append(changed, 'meta');
old_vals := old_vals || jsonb_build_object('meta', OLD.meta);
new_vals := new_vals || jsonb_build_object('meta', NEW.meta);
END IF;
IF OLD.host IS DISTINCT FROM NEW.host THEN
changed := array_append(changed, 'host');
old_vals := old_vals || jsonb_build_object('host', OLD.host);
new_vals := new_vals || jsonb_build_object('host', NEW.host);
END IF;
IF OLD.port IS DISTINCT FROM NEW.port THEN
changed := array_append(changed, 'port');
old_vals := old_vals || jsonb_build_object('port', OLD.port);
new_vals := new_vals || jsonb_build_object('port', NEW.port);
END IF;
-- Only record if something besides last_heartbeat changed.
-- Pure heartbeat-only updates are excluded to avoid high-volume noise.
IF array_length(changed, 1) > 0 THEN
INSERT INTO worker_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
VALUES (NOW(), 'UPDATE', NEW.id, NEW.name, changed, old_vals, new_vals);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION record_worker_history() IS 'Records field-level changes to worker table in worker_history hypertable. Excludes heartbeat-only updates.';
-- ============================================================================
-- ATTACH TRIGGERS TO OPERATIONAL TABLES
-- ============================================================================
CREATE TRIGGER execution_history_trigger
AFTER INSERT OR UPDATE OR DELETE ON execution
FOR EACH ROW
EXECUTE FUNCTION record_execution_history();
CREATE TRIGGER worker_history_trigger
AFTER INSERT OR UPDATE OR DELETE ON worker
FOR EACH ROW
EXECUTE FUNCTION record_worker_history();
-- ============================================================================
-- COMPRESSION POLICIES
-- ============================================================================
-- History tables
ALTER TABLE execution_history SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'entity_id',
timescaledb.compress_orderby = 'time DESC'
);
SELECT add_compression_policy('execution_history', INTERVAL '7 days');
ALTER TABLE worker_history SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'entity_id',
timescaledb.compress_orderby = 'time DESC'
);
SELECT add_compression_policy('worker_history', INTERVAL '7 days');
-- Event table (hypertable)
ALTER TABLE event SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'trigger_ref',
timescaledb.compress_orderby = 'created DESC'
);
SELECT add_compression_policy('event', INTERVAL '7 days');
-- Enforcement table (hypertable)
ALTER TABLE enforcement SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'rule_ref',
timescaledb.compress_orderby = 'created DESC'
);
SELECT add_compression_policy('enforcement', INTERVAL '7 days');
-- Execution table (hypertable)
ALTER TABLE execution SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'action_ref',
timescaledb.compress_orderby = 'created DESC'
);
SELECT add_compression_policy('execution', INTERVAL '7 days');
-- ============================================================================
-- RETENTION POLICIES
-- ============================================================================
SELECT add_retention_policy('execution_history', INTERVAL '90 days');
SELECT add_retention_policy('worker_history', INTERVAL '180 days');
SELECT add_retention_policy('event', INTERVAL '90 days');
SELECT add_retention_policy('enforcement', INTERVAL '90 days');
SELECT add_retention_policy('execution', INTERVAL '90 days');
-- ============================================================================
-- CONTINUOUS AGGREGATES
-- ============================================================================
-- Drop existing continuous aggregates if they exist, so this migration can be
-- re-run safely after a partial failure. (TimescaleDB continuous aggregates
-- must be dropped with CASCADE to remove their associated policies.)
DROP MATERIALIZED VIEW IF EXISTS execution_status_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS execution_throughput_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS event_volume_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS worker_status_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS enforcement_volume_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS execution_volume_hourly CASCADE;
-- ----------------------------------------------------------------------------
-- execution_status_hourly
-- Tracks execution status transitions per hour, grouped by action_ref and new status.
-- Powers: execution throughput chart, failure rate widget, status breakdown over time.
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW execution_status_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', time) AS bucket,
entity_ref AS action_ref,
new_values->>'status' AS new_status,
COUNT(*) AS transition_count
FROM execution_history
WHERE 'status' = ANY(changed_fields)
GROUP BY bucket, entity_ref, new_values->>'status'
WITH NO DATA;
SELECT add_continuous_aggregate_policy('execution_status_hourly',
start_offset => INTERVAL '7 days',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '30 minutes'
);
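-- Illustrative consumer query (not part of this migration): a failure-rate
-- widget might roll the hourly buckets up like this, assuming 'failed' is one
-- of the execution status values:
--   SELECT bucket,
--          SUM(transition_count) FILTER (WHERE new_status = 'failed') AS failed,
--          SUM(transition_count) AS total
--   FROM execution_status_hourly
--   WHERE bucket >= NOW() - INTERVAL '24 hours'
--   GROUP BY bucket
--   ORDER BY bucket;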
-- ----------------------------------------------------------------------------
-- execution_throughput_hourly
-- Tracks total execution creation volume per hour, regardless of status.
-- Powers: execution throughput sparkline on the dashboard.
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW execution_throughput_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', time) AS bucket,
entity_ref AS action_ref,
COUNT(*) AS execution_count
FROM execution_history
WHERE operation = 'INSERT'
GROUP BY bucket, entity_ref
WITH NO DATA;
SELECT add_continuous_aggregate_policy('execution_throughput_hourly',
start_offset => INTERVAL '7 days',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '30 minutes'
);
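-- Illustrative consumer query (not part of this migration): the dashboard
-- sparkline could collapse per-action counts into a single hourly series:
--   SELECT bucket, SUM(execution_count) AS executions
--   FROM execution_throughput_hourly
--   WHERE bucket >= NOW() - INTERVAL '7 days'
--   GROUP BY bucket
--   ORDER BY bucket;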
-- ----------------------------------------------------------------------------
-- event_volume_hourly
-- Tracks event creation volume per hour by trigger ref.
-- Powers: event throughput monitoring widget.
-- NOTE: Queries the event table directly (it is now a hypertable) instead of
-- a separate event_history table.
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW event_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', created) AS bucket,
trigger_ref,
COUNT(*) AS event_count
FROM event
GROUP BY bucket, trigger_ref
WITH NO DATA;
SELECT add_continuous_aggregate_policy('event_volume_hourly',
start_offset => INTERVAL '7 days',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '30 minutes'
);
-- ----------------------------------------------------------------------------
-- worker_status_hourly
-- Tracks worker status changes per hour (online/offline/draining transitions).
-- Powers: worker health trends widget.
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW worker_status_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', time) AS bucket,
entity_ref AS worker_name,
new_values->>'status' AS new_status,
COUNT(*) AS transition_count
FROM worker_history
WHERE 'status' = ANY(changed_fields)
GROUP BY bucket, entity_ref, new_values->>'status'
WITH NO DATA;
SELECT add_continuous_aggregate_policy('worker_status_hourly',
start_offset => INTERVAL '30 days',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '1 hour'
);
-- ----------------------------------------------------------------------------
-- enforcement_volume_hourly
-- Tracks enforcement creation volume per hour by rule ref.
-- Powers: rule activation rate monitoring.
-- NOTE: Queries the enforcement table directly (it is now a hypertable)
-- instead of a separate enforcement_history table.
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW enforcement_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', created) AS bucket,
rule_ref,
COUNT(*) AS enforcement_count
FROM enforcement
GROUP BY bucket, rule_ref
WITH NO DATA;
SELECT add_continuous_aggregate_policy('enforcement_volume_hourly',
start_offset => INTERVAL '7 days',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '30 minutes'
);
-- ----------------------------------------------------------------------------
-- execution_volume_hourly
-- Tracks execution creation volume per hour by action_ref and status.
-- This queries the execution hypertable directly (like event_volume_hourly
-- queries the event table). Complements the existing execution_status_hourly
-- and execution_throughput_hourly aggregates which query execution_history.
--
-- Use case: direct execution volume monitoring without relying on the history
-- trigger (belt-and-suspenders, plus captures the initial status at creation).
-- ----------------------------------------------------------------------------
CREATE MATERIALIZED VIEW execution_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', created) AS bucket,
action_ref,
status AS initial_status,
COUNT(*) AS execution_count
FROM execution
GROUP BY bucket, action_ref, status
WITH NO DATA;
SELECT add_continuous_aggregate_policy('execution_volume_hourly',
start_offset => INTERVAL '7 days',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '30 minutes'
);
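-- Illustrative consumer query (not part of this migration): creation volume by
-- initial status for one action over the last day ('core.http_request' is just
-- an example action ref):
--   SELECT bucket, initial_status, execution_count
--   FROM execution_volume_hourly
--   WHERE action_ref = 'core.http_request'
--     AND bucket >= NOW() - INTERVAL '24 hours'
--   ORDER BY bucket, initial_status;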
-- ============================================================================
-- INITIAL REFRESH NOTE
-- ============================================================================
-- NOTE: refresh_continuous_aggregate() cannot run inside a transaction block,
-- and the migration runner wraps each file in BEGIN/COMMIT. The continuous
-- aggregate policies configured above will automatically backfill data within
-- their first scheduled interval (30 minutes to 1 hour). On a fresh database there
-- is no history data to backfill anyway.
--
-- If you need an immediate manual refresh after migration, run outside a
-- transaction:
-- CALL refresh_continuous_aggregate('execution_status_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('execution_throughput_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('event_volume_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('worker_status_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('enforcement_volume_hourly', NULL, NOW());
-- CALL refresh_continuous_aggregate('execution_volume_hourly', NULL, NOW());

View File

@@ -0,0 +1,202 @@
-- Migration: Artifact Content System
-- Description: Enhances the artifact table with content fields (name, description,
-- content_type, size_bytes, execution link, structured data, visibility)
-- and creates the artifact_version table for versioned file/data storage.
--
-- The artifact table now serves as the "header" for a logical artifact,
-- while artifact_version rows hold the actual immutable content snapshots.
-- Progress-type artifacts store their live state directly in artifact.data
-- (append-style updates without creating new versions).
--
-- Version: 20250101000010
-- ============================================================================
-- ENHANCE ARTIFACT TABLE
-- ============================================================================
-- Human-readable name (e.g. "Build Log", "Test Results")
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS name TEXT;
-- Optional longer description
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS description TEXT;
-- MIME content type (e.g. "application/json", "text/plain", "image/png")
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS content_type TEXT;
-- Total size in bytes of the latest version's content (NULL for progress artifacts)
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS size_bytes BIGINT;
-- Execution that produced/owns this artifact (plain BIGINT, no FK — execution is a hypertable)
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS execution BIGINT;
-- Structured data for progress-type artifacts and small structured payloads.
-- Progress artifacts append entries here; file artifacts may store parsed metadata.
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS data JSONB;
-- Visibility: public artifacts are viewable by all authenticated users;
-- private artifacts are restricted based on the artifact's scope/owner.
-- The scope (identity, action, pack, etc.) + owner fields define who can access
-- a private artifact. Full RBAC enforcement is deferred — for now the column
-- enables filtering and is available for future permission checks.
ALTER TABLE artifact ADD COLUMN IF NOT EXISTS visibility artifact_visibility_enum NOT NULL DEFAULT 'private';
-- New indexes for the added columns
CREATE INDEX IF NOT EXISTS idx_artifact_execution ON artifact(execution);
CREATE INDEX IF NOT EXISTS idx_artifact_name ON artifact(name);
CREATE INDEX IF NOT EXISTS idx_artifact_execution_type ON artifact(execution, type);
CREATE INDEX IF NOT EXISTS idx_artifact_visibility ON artifact(visibility);
CREATE INDEX IF NOT EXISTS idx_artifact_visibility_scope ON artifact(visibility, scope, owner);
-- Comments for new columns
COMMENT ON COLUMN artifact.name IS 'Human-readable artifact name';
COMMENT ON COLUMN artifact.description IS 'Optional description of the artifact';
COMMENT ON COLUMN artifact.content_type IS 'MIME content type (e.g. application/json, text/plain)';
COMMENT ON COLUMN artifact.size_bytes IS 'Size of latest version content in bytes';
COMMENT ON COLUMN artifact.execution IS 'Execution that produced this artifact (no FK — execution is a hypertable)';
COMMENT ON COLUMN artifact.data IS 'Structured JSONB data for progress artifacts or metadata';
COMMENT ON COLUMN artifact.visibility IS 'Access visibility: public (all users) or private (scope/owner-restricted)';
-- ============================================================================
-- ARTIFACT_VERSION TABLE
-- ============================================================================
-- Each row is an immutable snapshot of artifact content. File-type artifacts get
-- a new version on each upload; progress-type artifacts do NOT use versions
-- (they update artifact.data directly).
CREATE TABLE artifact_version (
id BIGSERIAL PRIMARY KEY,
-- Parent artifact
artifact BIGINT NOT NULL REFERENCES artifact(id) ON DELETE CASCADE,
-- Monotonically increasing version number within the artifact (1-based)
version INTEGER NOT NULL,
-- MIME content type for this specific version (may differ from parent)
content_type TEXT,
-- Size of the content in bytes
size_bytes BIGINT,
-- Binary content (file uploads, DB-stored). NULL for file-backed versions.
content BYTEA,
-- Structured content (JSON payloads, parsed results, etc.)
content_json JSONB,
-- Relative path from artifacts_dir root for disk-stored content.
-- When set, content BYTEA is NULL — file lives on shared volume.
-- Pattern: {ref_slug}/v{version}.{ext}
-- e.g., "mypack/build_log/v1.txt"
file_path TEXT,
-- Free-form metadata about this version (e.g. commit hash, build number)
meta JSONB,
-- Who or what created this version (identity ref, action ref, "system", etc.)
created_by TEXT,
-- Immutable — no updated column
created TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Unique constraint: one version number per artifact
ALTER TABLE artifact_version
ADD CONSTRAINT uq_artifact_version_artifact_version UNIQUE (artifact, version);
-- Indexes
CREATE INDEX idx_artifact_version_artifact ON artifact_version(artifact);
CREATE INDEX idx_artifact_version_artifact_version ON artifact_version(artifact, version DESC);
CREATE INDEX idx_artifact_version_created ON artifact_version(created DESC);
CREATE INDEX idx_artifact_version_file_path ON artifact_version(file_path) WHERE file_path IS NOT NULL;
-- Comments
COMMENT ON TABLE artifact_version IS 'Immutable content snapshots for artifacts (file uploads, structured data)';
COMMENT ON COLUMN artifact_version.artifact IS 'Parent artifact this version belongs to';
COMMENT ON COLUMN artifact_version.version IS 'Version number (1-based, monotonically increasing per artifact)';
COMMENT ON COLUMN artifact_version.content_type IS 'MIME content type for this version';
COMMENT ON COLUMN artifact_version.size_bytes IS 'Size of content in bytes';
COMMENT ON COLUMN artifact_version.content IS 'Binary content (file data)';
COMMENT ON COLUMN artifact_version.content_json IS 'Structured JSON content';
COMMENT ON COLUMN artifact_version.meta IS 'Free-form metadata about this version';
COMMENT ON COLUMN artifact_version.created_by IS 'Who created this version (identity ref, action ref, system)';
COMMENT ON COLUMN artifact_version.file_path IS 'Relative path from artifacts_dir root for disk-stored content. When set, content BYTEA is NULL — file lives on shared volume.';
-- ============================================================================
-- HELPER FUNCTION: next_artifact_version
-- ============================================================================
-- Returns the next version number for an artifact (MAX(version) + 1, or 1 if none).
CREATE OR REPLACE FUNCTION next_artifact_version(p_artifact_id BIGINT)
RETURNS INTEGER AS $$
DECLARE
v_next INTEGER;
BEGIN
SELECT COALESCE(MAX(version), 0) + 1
INTO v_next
FROM artifact_version
WHERE artifact = p_artifact_id;
RETURN v_next;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION next_artifact_version IS 'Returns the next version number for the given artifact';
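-- Illustrative usage (not part of this migration): callers inserting a new
-- version would typically combine the helper with the insert, e.g.:
--   INSERT INTO artifact_version (artifact, version, content_type, size_bytes, content_json)
--   VALUES (42, next_artifact_version(42), 'application/json', 128, '{"ok": true}');
-- (42 is a hypothetical artifact id.)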
-- ============================================================================
-- RETENTION ENFORCEMENT FUNCTION
-- ============================================================================
-- Called after inserting a new version to enforce the artifact retention policy.
-- For 'versions' policy: deletes oldest versions beyond the limit.
-- Time-based policies (days/hours/minutes) are handled by a scheduled job (not this trigger).
CREATE OR REPLACE FUNCTION enforce_artifact_retention()
RETURNS TRIGGER AS $$
DECLARE
v_policy artifact_retention_enum;
v_limit INTEGER;
v_count INTEGER;
BEGIN
SELECT retention_policy, retention_limit
INTO v_policy, v_limit
FROM artifact
WHERE id = NEW.artifact;
IF v_policy = 'versions' AND v_limit > 0 THEN
-- Count existing versions
SELECT COUNT(*) INTO v_count
FROM artifact_version
WHERE artifact = NEW.artifact;
-- If over limit, delete the oldest ones
IF v_count > v_limit THEN
DELETE FROM artifact_version
WHERE id IN (
SELECT id
FROM artifact_version
WHERE artifact = NEW.artifact
ORDER BY version ASC
LIMIT (v_count - v_limit)
);
END IF;
END IF;
-- Update parent artifact size_bytes with the new version's size
UPDATE artifact
SET size_bytes = NEW.size_bytes,
content_type = COALESCE(NEW.content_type, content_type)
WHERE id = NEW.artifact;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_enforce_artifact_retention
AFTER INSERT ON artifact_version
FOR EACH ROW
EXECUTE FUNCTION enforce_artifact_retention();
COMMENT ON FUNCTION enforce_artifact_retention IS 'Enforces version-count retention policy and syncs size to parent artifact';

View File

@@ -0,0 +1,17 @@
-- Migration: Convert key.value from TEXT to JSONB
--
-- This allows keys to store structured data (objects, arrays, numbers, booleans)
-- in addition to plain strings. Existing string values are wrapped in JSON string
-- literals so they remain valid and accessible.
--
-- Before: value TEXT NOT NULL (e.g., 'my-secret-token')
-- After: value JSONB NOT NULL (e.g., '"my-secret-token"' or '{"user":"admin","pass":"s3cret"}')
-- Step 1: Convert existing TEXT values to JSONB.
-- to_jsonb(text) wraps a plain string as a JSON string literal, e.g.:
-- 'hello' -> '"hello"'
-- This preserves all existing values: both encrypted values (base64 strings)
-- and plain-text values simply become JSON strings.
ALTER TABLE key
ALTER COLUMN value TYPE JSONB
USING to_jsonb(value);
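-- Illustrative queries after the conversion (not part of this migration;
-- WHERE id = 1 is a placeholder, not a real row):
--   SELECT value #>> '{}'  FROM key WHERE id = 1;  -- unwrap a JSON string value as text
--   SELECT value->>'user'  FROM key WHERE id = 1;  -- read a field of an object value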

View File

@@ -0,0 +1,348 @@
# Attune Database Migrations
This directory contains SQL migrations for the Attune automation platform database schema.
## Overview
Migrations are numbered and executed in order. Each migration file is named with a timestamp prefix to ensure proper ordering:
```
YYYYMMDDHHMMSS_description.sql
```
## Migration Files
The schema is organized into 5 logical migration files:
| File | Description |
|------|-------------|
| `20250101000001_initial_setup.sql` | Creates schema, service role, all enum types, and shared functions |
| `20250101000002_core_tables.sql` | Creates pack, runtime, worker, identity, permission_set, permission_assignment, policy, and key tables |
| `20250101000003_event_system.sql` | Creates trigger, sensor, event, and enforcement tables |
| `20250101000004_execution_system.sql` | Creates action, rule, execution, inquiry, workflow orchestration tables (workflow_definition, workflow_execution, workflow_task_execution), and workflow views |
| `20250101000005_supporting_tables.sql` | Creates notification, artifact, and queue_stats tables with performance indexes |
### Migration Dependencies
The migrations must be run in order due to foreign key dependencies:
1. **Initial Setup** - Foundation (schema, enums, functions)
2. **Core Tables** - Base entities (pack, runtime, worker, identity, permissions, policy, key)
3. **Event System** - Event monitoring (trigger, sensor, event, enforcement)
4. **Execution System** - Action execution (action, rule, execution, inquiry)
5. **Supporting Tables** - Auxiliary features (notification, artifact)
## Running Migrations
### Using SQLx CLI
```bash
# Install sqlx-cli if not already installed
cargo install sqlx-cli --no-default-features --features postgres
# Run all pending migrations
sqlx migrate run
# Check migration status
sqlx migrate info
# Revert last migration (if needed)
sqlx migrate revert
```
### Manual Execution
You can also run migrations manually using `psql`:
```bash
# Run all migrations in order
for file in migrations/202501*.sql; do
psql -U postgres -d attune -f "$file"
done
```
Or individually:
```bash
psql -U postgres -d attune -f migrations/20250101000001_initial_setup.sql
psql -U postgres -d attune -f migrations/20250101000002_core_tables.sql
# ... etc
```
## Database Setup
### Prerequisites
1. PostgreSQL 14 or later installed
2. Create the database:
```bash
createdb attune
```
3. Set environment variable:
```bash
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/attune"
```
### Initial Setup
```bash
# Navigate to workspace root
cd /path/to/attune
# Run migrations
sqlx migrate run
# Verify tables were created
psql -U postgres -d attune -c "\dt attune.*"
```
## Schema Overview
The Attune schema includes 22 tables organized into logical groups:
### Core Tables (Migration 2)
- **pack**: Automation component bundles
- **runtime**: Execution environments (Python, Node.js, containers)
- **worker**: Execution workers
- **identity**: Users and service accounts
- **permission_set**: Permission groups (like roles)
- **permission_assignment**: Identity-permission links (many-to-many)
- **policy**: Execution policies (rate limiting, concurrency)
- **key**: Secure configuration and secrets storage
### Event System (Migration 3)
- **trigger**: Event type definitions
- **sensor**: Event monitors that watch for triggers
- **event**: Event instances (trigger firings)
- **enforcement**: Rule activation instances
### Execution System (Migration 4)
- **action**: Executable operations (can be workflows)
- **rule**: Trigger-to-action automation logic
- **execution**: Action execution instances (supports workflows)
- **inquiry**: Human-in-the-loop interactions (approvals, inputs)
- **workflow_definition**: YAML-based workflow definitions (composable action graphs)
- **workflow_execution**: Runtime state tracking for workflow executions
- **workflow_task_execution**: Individual task executions within workflows
### Supporting Tables (Migration 5)
- **notification**: Real-time system notifications (uses PostgreSQL LISTEN/NOTIFY)
- **artifact**: Execution outputs (files, logs, progress data)
- **queue_stats**: Real-time execution queue statistics for FIFO ordering
## Key Features
### Automatic Timestamps
All tables include `created` and `updated` timestamps that are automatically managed by the `update_updated_column()` trigger function.
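The function itself is defined in `20250101000001_initial_setup.sql`; the snippet below is only a sketch of the common pattern (a `BEFORE UPDATE` trigger that stamps `NEW.updated`), not a verbatim copy of the migration:
```sql
-- Sketch only; the authoritative definition lives in migration 1.
CREATE OR REPLACE FUNCTION update_updated_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated = NOW();   -- stamp the row being written
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attached per table (trigger name here is illustrative):
CREATE TRIGGER pack_set_updated
    BEFORE UPDATE ON pack
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();
```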
### Reference Preservation
Tables use both ID foreign keys and `*_ref` text columns. The ref columns preserve string references even when the referenced entity is deleted, maintaining complete audit trails.
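As an illustration (hypothetical table and column names, not the actual DDL), the convention pairs a nullable foreign key with a required `*_ref` text column; the `ON DELETE SET NULL` also previews the soft-delete convention described next:
```sql
-- Hypothetical sketch of the id + *_ref convention.
CREATE TABLE example_entity (
    id       BIGSERIAL PRIMARY KEY,
    pack     BIGINT REFERENCES pack(id) ON DELETE SET NULL,  -- becomes NULL if the pack is deleted
    pack_ref TEXT NOT NULL,                                   -- string reference survives for audit trails
    created  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```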
### Soft Deletes
Foreign keys strategically use:
- `ON DELETE CASCADE` - For dependent data that should be removed
- `ON DELETE SET NULL` - To preserve historical records while breaking the link
### Validation Constraints
- **Reference format validation** - Lowercase, specific patterns (e.g., `pack.name`)
- **Semantic version validation** - For pack versions
- **Ownership validation** - Custom trigger for key table ownership rules
- **Range checks** - Port numbers, positive thresholds, etc.
### Performance Optimization
- **B-tree indexes** - On frequently queried columns (IDs, refs, status, timestamps)
- **Partial indexes** - For filtered queries (e.g., `enabled = TRUE`)
- **GIN indexes** - On JSONB and array columns for fast containment queries
- **Composite indexes** - For common multi-column query patterns
### PostgreSQL Features
- **JSONB** - Flexible schema storage for configurations, payloads, results
- **Array types** - Multi-value fields (tags, parameters, dependencies)
- **Custom enum types** - Constrained string values with type safety
- **Triggers** - Data validation, timestamp management, notifications
- **pg_notify** - Real-time notifications via PostgreSQL's LISTEN/NOTIFY
## Service Role
The migrations create a `svc_attune` role with appropriate permissions. **Change the password in production:**
```sql
ALTER ROLE svc_attune WITH PASSWORD 'secure_password_here';
```
The default password is `attune_service_password` (only for development).
## Rollback Strategy
### Complete Reset
To completely reset the database:
```bash
# Drop and recreate
dropdb attune
createdb attune
sqlx migrate run
```
Or drop just the schema:
```sql
psql -U postgres -d attune -c "DROP SCHEMA attune CASCADE;"
```
Then re-run migrations.
### Individual Migration Revert
With SQLx CLI:
```bash
sqlx migrate revert
```
Or manually remove from tracking:
```sql
DELETE FROM _sqlx_migrations WHERE version = 20250101000001;
```
## Best Practices
1. **Never edit existing migrations** - Create new migrations to modify schema
2. **Test migrations** - Always test on a copy of production data first
3. **Backup before migrating** - Backup production database before applying migrations
4. **Review changes** - Review all migrations before applying to production
5. **Version control** - Keep migrations in version control (they are!)
6. **Document changes** - Add comments to complex migrations
## Development Workflow
1. Create new migration file with timestamp:
```bash
touch migrations/$(date +%Y%m%d%H%M%S)_description.sql
```
2. Write migration SQL (follow existing patterns)
3. Test migration:
```bash
sqlx migrate run
```
4. Verify changes:
```bash
psql -U postgres -d attune
\d+ attune.table_name
```
5. Commit to version control
## Production Deployment
1. **Backup** production database
2. **Review** all pending migrations
3. **Test** migrations on staging environment with production data copy
4. **Schedule** maintenance window if needed
5. **Apply** migrations:
```bash
sqlx migrate run
```
6. **Verify** application functionality
7. **Monitor** for errors in logs
## Troubleshooting
### Migration already applied
If you need to re-run a migration:
```bash
# Remove from migration tracking (SQLx)
psql -U postgres -d attune -c "DELETE FROM _sqlx_migrations WHERE version = 20250101000001;"
# Then re-run
sqlx migrate run
```
### Permission denied
Ensure the PostgreSQL user has sufficient permissions:
```sql
GRANT ALL PRIVILEGES ON DATABASE attune TO postgres;
GRANT ALL PRIVILEGES ON SCHEMA attune TO postgres;
```
### Connection refused
Check PostgreSQL is running:
```bash
# macOS / manual installs
pg_ctl status
# Linux (systemd)
sudo systemctl status postgresql
# Check connectivity
psql -U postgres -c "SELECT version();"
```
### Foreign key constraint violations
Ensure migrations run in the correct order. The consolidated migrations handle forward references correctly:
- Migration 2 creates tables with forward references (commented as such)
- Migrations 3 and 4 then add the corresponding foreign key constraints
## Schema Diagram
```
┌─────────────┐
│ pack │◄──┐
└─────────────┘ │
▲ │
│ │
┌──────┴──────────┴──────┐
│ runtime │ trigger │ ... │ (Core entities reference pack)
└─────────┴─────────┴─────┘
▲ ▲
│ │
┌──────┴──────┐ │
│ sensor │──┘ (Sensors reference both runtime and trigger)
└─────────────┘
┌─────────────┐ ┌──────────────┐
│ event │────►│ enforcement │ (Events trigger enforcements)
└─────────────┘ └──────────────┘
┌──────────────┐
│ execution │ (Enforcements create executions)
└──────────────┘
```
## Workflow Orchestration
Migration 4 includes comprehensive workflow orchestration support:
- **workflow_definition**: Stores parsed YAML workflow definitions with tasks, variables, and transitions
- **workflow_execution**: Tracks runtime state including current/completed/failed tasks and variables
- **workflow_task_execution**: Individual task execution tracking with retry and timeout support
- **Action table extensions**: `is_workflow` and `workflow_def` columns link actions to workflows
- **Helper views**: Three views for querying workflow state (summary, task detail, action links)
## Queue Statistics
Migration 5 includes the queue_stats table for execution ordering:
- Tracks per-action queue length, active executions, and concurrency limits
- Enables FIFO queue management with database persistence
- Supports monitoring and API visibility of execution queues
## Additional Resources
- [SQLx Documentation](https://github.com/launchbadge/sqlx)
- [PostgreSQL Documentation](https://www.postgresql.org/docs/)
- [Attune Architecture Documentation](../docs/architecture.md)
- [Attune Data Model Documentation](../docs/data-model.md)

View File

@@ -0,0 +1,270 @@
# Core Pack Dependencies
**Philosophy:** The core pack has **zero runtime dependencies** beyond standard system utilities.
## Why Zero Dependencies?
1. **Portability:** Works in any environment with standard Unix utilities
2. **Reliability:** No version conflicts, no package installation failures
3. **Security:** Minimal attack surface, no third-party library vulnerabilities
4. **Performance:** Fast startup, no runtime initialization overhead
5. **Simplicity:** Easy to audit, test, and maintain
## Required System Utilities
All core pack actions rely only on utilities available in standard Linux/Unix environments:
| Utility | Purpose | Used By |
|---------|---------|---------|
| `bash` | Shell scripting | All shell actions |
| `jq` | JSON parsing/generation | All actions (parameter handling) |
| `curl` | HTTP client | `http_request.sh` |
| Standard Unix tools | Text processing, file operations | Various actions |
These utilities are:
- ✅ Pre-installed in all Attune worker containers
- ✅ Standard across Linux distributions
- ✅ Stable, well-tested, and widely used
- ✅ Available via package managers if needed
## No Runtime Dependencies
The core pack **does not require:**
- ❌ Python interpreter or packages
- ❌ Node.js runtime or npm modules
- ❌ Ruby, Perl, or other scripting languages
- ❌ Third-party libraries or frameworks
- ❌ Package installations at runtime
## Action Implementation Guidelines
### ✅ Preferred Approaches
**Use bash + standard utilities:**
```bash
#!/bin/bash
# Read params with jq
INPUT=$(cat)
PARAM=$(echo "$INPUT" | jq -r '.param // "default"')
# Process with standard tools
RESULT=$(echo "$PARAM" | tr '[:lower:]' '[:upper:]')
# Output with jq
jq -n --arg result "$RESULT" '{result: $result}'
```
**Use curl for HTTP:**
```bash
# Make HTTP requests with curl
curl -s -X POST "$URL" \
-H "Content-Type: application/json" \
-d '{"key": "value"}'
```
**Use jq for JSON processing:**
```bash
# Parse JSON responses
echo "$RESPONSE" | jq '.data.items[] | .name'
# Generate JSON output
jq -n \
--arg status "success" \
--argjson count 42 \
'{status: $status, count: $count}'
```
### ❌ Avoid
**Don't add runtime dependencies:**
```bash
# ❌ DON'T DO THIS
pip install requests
python3 script.py
# ❌ DON'T DO THIS
npm install axios
node script.js
# ❌ DON'T DO THIS
gem install httparty
ruby script.rb
```
**Don't use language-specific features:**
```python
# ❌ DON'T DO THIS in core pack
#!/usr/bin/env python3
import requests # External dependency!
response = requests.get(url)
```
Instead, use bash + curl:
```bash
# ✅ DO THIS in core pack
#!/bin/bash
response=$(curl -s "$url")
```
## When Runtime Dependencies Are Acceptable
For **custom packs** (not core pack), runtime dependencies are fine:
- ✅ Pack-specific Python libraries (installed in pack virtualenv)
- ✅ Pack-specific npm modules (installed in pack node_modules)
- ✅ Language runtimes (Python, Node.js) for complex logic
- ✅ Specialized tools for specific integrations
The core pack serves as a foundation with zero dependencies. Custom packs can have dependencies managed via:
- `requirements.txt` for Python packages
- `package.json` for Node.js modules
- Pack runtime environments (isolated per pack)
## Migration from Runtime Dependencies
If an action currently uses a runtime dependency, consider:
1. **Can it be done with bash + standard utilities?**
- Yes → Rewrite in bash
- No → Consider if it belongs in core pack
2. **Is the functionality complex?**
- Simple HTTP/JSON → Use curl + jq
- Complex API client → Move to custom pack
3. **Is it a specialized integration?**
- Yes → Move to integration-specific pack
- No → Keep in core pack with bash implementation
### Example: http_request Migration
**Before (Python with dependency):**
```python
#!/usr/bin/env python3
import requests # ❌ External dependency
response = requests.get(url, headers=headers)
print(response.json())
```
**After (Bash with standard utilities):**
```bash
#!/bin/bash
# ✅ No dependencies beyond curl + jq
response=$(curl -s -H "Authorization: Bearer $TOKEN" "$URL")
echo "$response" | jq '.'
```
## Testing Without Dependencies
Core pack actions can be tested anywhere with standard utilities:
```bash
# Local testing (no installation needed)
echo '{"param": "value"}' | ./action.sh
# Docker testing (minimal base image; mount the action so it exists in the container)
echo '{"param": "value"}' | docker run --rm -i -v "$PWD:/work" -w /work alpine:latest \
  sh -c 'apk add --no-cache bash jq curl > /dev/null && ./action.sh'
# CI/CD testing (standard tools available)
./action.sh < test-params.json
```
## Benefits Realized
### For Developers
- No dependency management overhead
- Immediate action execution (no runtime setup)
- Easy to test locally
- Simple to audit and debug
### For Operators
- No version conflicts between packs
- No package installation failures
- Faster container startup
- Smaller container images
### For Security
- Minimal attack surface
- No third-party library vulnerabilities
- Easier to audit (standard tools only)
- No supply chain risks
### For Performance
- Fast action startup (no runtime initialization)
- Low memory footprint
- No package loading overhead
- Efficient resource usage
## Standard Utility Reference
### jq (JSON Processing)
```bash
# Parse input
VALUE=$(echo "$JSON" | jq -r '.key')
# Generate output
jq -n --arg val "$VALUE" '{result: $val}'
# Transform data
echo "$JSON" | jq '.items[] | select(.active)'
```
### curl (HTTP Client)
```bash
# GET request
curl -s "$URL"
# POST with JSON
curl -s -X POST "$URL" \
-H "Content-Type: application/json" \
-d '{"key": "value"}'
# With authentication
curl -s -H "Authorization: Bearer $TOKEN" "$URL"
```
### Standard Text Tools
```bash
# grep - Pattern matching
echo "$TEXT" | grep "pattern"
# sed - Text transformation
echo "$TEXT" | sed 's/old/new/g'
# awk - Text processing
echo "$TEXT" | awk '{print $1}'
# tr - Character translation
echo "$TEXT" | tr '[:lower:]' '[:upper:]'
```
## Future Considerations
The core pack will:
- ✅ Continue to have zero runtime dependencies
- ✅ Use only standard Unix utilities
- ✅ Serve as a reference implementation
- ✅ Provide foundational actions for workflows
Custom packs may:
- ✅ Have runtime dependencies (Python, Node.js, etc.)
- ✅ Use specialized libraries for integrations
- ✅ Require specific tools or SDKs
- ✅ Manage dependencies via pack environments
## Summary
**Core Pack = Zero Dependencies + Standard Utilities**
This philosophy ensures the core pack is:
- Portable across all environments
- Reliable without version conflicts
- Secure with minimal attack surface
- Performant with fast startup
- Simple to test and maintain
For actions requiring runtime dependencies, create custom packs with proper dependency management via `requirements.txt`, `package.json`, or similar mechanisms.

View File

@@ -0,0 +1,361 @@
# Attune Core Pack
The **Core Pack** is the foundational system pack for Attune, providing essential automation components including timer triggers, HTTP utilities, and basic shell actions.
## Overview
The core pack is automatically installed with Attune and provides the building blocks for creating automation workflows. It includes:
- **Timer Triggers**: Interval-based, cron-based, and one-shot datetime timers
- **HTTP Actions**: Make HTTP requests to external APIs
- **Shell Actions**: Execute basic shell commands (echo, sleep, noop)
- **Built-in Sensors**: System sensors for monitoring time-based events
## Components
### Actions
#### `core.echo`
Outputs a message to stdout.
**Parameters:**
- `message` (string, required): Message to echo
- `uppercase` (boolean, optional): Convert message to uppercase
**Example:**
```yaml
action: core.echo
parameters:
message: "Hello, Attune!"
uppercase: false
```
---
#### `core.sleep`
Pauses execution for a specified duration.
**Parameters:**
- `seconds` (integer, required): Number of seconds to sleep (0-3600)
- `message` (string, optional): Optional message to display before sleeping
**Example:**
```yaml
action: core.sleep
parameters:
seconds: 30
message: "Waiting 30 seconds..."
```
---
#### `core.noop`
Does nothing - useful for testing and placeholder workflows.
**Parameters:**
- `message` (string, optional): Optional message to log
- `exit_code` (integer, optional): Exit code to return (default: 0)
**Example:**
```yaml
action: core.noop
parameters:
message: "Testing workflow structure"
```
---
#### `core.http_request`
Make HTTP requests to external APIs with full control over headers, authentication, and request body.
**Parameters:**
- `url` (string, required): URL to send the request to
- `method` (string, optional): HTTP method (GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS)
- `headers` (object, optional): HTTP headers as key-value pairs
- `body` (string, optional): Request body for POST/PUT/PATCH
- `json_body` (object, optional): JSON request body (alternative to `body`)
- `query_params` (object, optional): URL query parameters
- `timeout` (integer, optional): Request timeout in seconds (default: 30)
- `verify_ssl` (boolean, optional): Verify SSL certificates (default: true)
- `auth_type` (string, optional): Authentication type (none, basic, bearer)
- `auth_username` (string, optional): Username for basic auth
- `auth_password` (string, secret, optional): Password for basic auth
- `auth_token` (string, secret, optional): Bearer token
- `follow_redirects` (boolean, optional): Follow HTTP redirects (default: true)
- `max_redirects` (integer, optional): Maximum redirects to follow (default: 10)
**Output:**
- `status_code` (integer): HTTP status code
- `headers` (object): Response headers
- `body` (string): Response body as text
- `json` (object): Parsed JSON response (if applicable)
- `elapsed_ms` (integer): Request duration in milliseconds
- `url` (string): Final URL after redirects
- `success` (boolean): Whether request was successful (2xx status)
**Example:**
```yaml
action: core.http_request
parameters:
url: "https://api.example.com/users"
method: "POST"
json_body:
name: "John Doe"
email: "john@example.com"
headers:
Content-Type: "application/json"
auth_type: "bearer"
auth_token: "${secret:api_token}"
```
---
### Triggers
#### `core.intervaltimer`
Fires at regular intervals based on time unit and interval.
**Parameters:**
- `unit` (string, required): Time unit (seconds, minutes, hours)
- `interval` (integer, required): Number of time units between triggers
**Payload:**
- `type`: "interval"
- `interval_seconds`: Total interval in seconds
- `fired_at`: ISO 8601 timestamp
- `execution_count`: Number of times fired
- `sensor_ref`: Reference to the sensor
**Example:**
```yaml
trigger: core.intervaltimer
config:
unit: "minutes"
interval: 5
```
---
#### `core.crontimer`
Fires based on cron schedule expressions.
**Parameters:**
- `expression` (string, required): Cron expression (6 fields: second minute hour day month weekday)
- `timezone` (string, optional): Timezone (default: UTC)
- `description` (string, optional): Human-readable schedule description
**Payload:**
- `type`: "cron"
- `fired_at`: ISO 8601 timestamp
- `scheduled_at`: When trigger was scheduled to fire
- `expression`: The cron expression
- `timezone`: Timezone used
- `next_fire_at`: Next scheduled fire time
- `execution_count`: Number of times fired
- `sensor_ref`: Reference to the sensor
**Cron Format:**
```
┌───────── second (0-59)
│ ┌─────── minute (0-59)
│ │ ┌───── hour (0-23)
│ │ │ ┌─── day of month (1-31)
│ │ │ │ ┌─ month (1-12)
│ │ │ │ │ ┌ day of week (0-6, 0=Sunday)
│ │ │ │ │ │
* * * * * *
```
**Examples:**
- `0 0 * * * *` - Every hour
- `0 0 0 * * *` - Every day at midnight
- `0 */15 * * * *` - Every 15 minutes
- `0 30 8 * * 1-5` - 8:30 AM on weekdays
---
#### `core.datetimetimer`
Fires once at a specific date and time.
**Parameters:**
- `fire_at` (string, required): ISO 8601 timestamp when timer should fire
- `timezone` (string, optional): Timezone (default: UTC)
- `description` (string, optional): Human-readable description
**Payload:**
- `type`: "one_shot"
- `fire_at`: Scheduled fire time
- `fired_at`: Actual fire time
- `timezone`: Timezone used
- `delay_ms`: Delay between scheduled and actual fire time
- `sensor_ref`: Reference to the sensor
**Example:**
```yaml
trigger: core.datetimetimer
config:
fire_at: "2024-12-31T23:59:59Z"
description: "New Year's countdown"
```
---
### Sensors
#### `core.interval_timer_sensor`
Built-in sensor that monitors time and fires interval timer triggers.
**Configuration:**
- `check_interval_seconds` (integer, optional): How often to check triggers (default: 1)
This sensor automatically runs as part of the Attune sensor service and manages all interval timer trigger instances.
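Unlike the triggers and actions above, this sensor has no example block in the pack; a minimal sketch, assuming its configuration follows the same `config:` key/value convention used by the trigger examples (the actual wiring may differ):
```yaml
# Sketch only — the actual location and shape of sensor configuration may differ.
sensor: core.interval_timer_sensor
config:
  check_interval_seconds: 1   # how often the sensor checks its timers
```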
---
## Configuration
The core pack supports the following configuration options:
```yaml
# config.yaml
packs:
core:
max_action_timeout: 300 # Maximum action timeout in seconds
enable_debug_logging: false # Enable debug logging
```
## Dependencies
### Python Dependencies
- `requests>=2.28.0` - For HTTP request action
- `croniter>=1.4.0` - For cron timer parsing (future)
### Runtime Dependencies
- Shell (bash/sh) - For shell-based actions
- Python 3.8+ - For Python-based actions and sensors
## Installation
The core pack is automatically installed with Attune. No manual installation is required.
To verify the core pack is loaded:
```bash
# Using CLI
attune pack list | grep core
# Using API
curl http://localhost:8080/api/v1/packs/core
```
## Usage Examples
### Example 1: Echo Every 10 Seconds
Create a rule that echoes "Hello, World!" every 10 seconds:
```yaml
ref: core.hello_world_rule
trigger: core.intervaltimer
trigger_config:
unit: "seconds"
interval: 10
action: core.echo
action_params:
message: "Hello, World!"
uppercase: false
```
### Example 2: HTTP Health Check Every 5 Minutes
Monitor an API endpoint every 5 minutes:
```yaml
ref: core.health_check_rule
trigger: core.intervaltimer
trigger_config:
unit: "minutes"
interval: 5
action: core.http_request
action_params:
url: "https://api.example.com/health"
method: "GET"
timeout: 10
```
### Example 3: Daily Report at Midnight
Generate a report every day at midnight:
```yaml
ref: core.daily_report_rule
trigger: core.crontimer
trigger_config:
expression: "0 0 0 * * *"
timezone: "UTC"
description: "Daily at midnight"
action: core.http_request
action_params:
url: "https://api.example.com/reports/generate"
method: "POST"
```
### Example 4: One-Time Reminder
Set a one-time reminder for a specific date and time:
```yaml
ref: core.meeting_reminder
trigger: core.datetimetimer
trigger_config:
fire_at: "2024-06-15T14:00:00Z"
description: "Team meeting reminder"
action: core.echo
action_params:
message: "Team meeting starts in 15 minutes!"
```
## Development
### Adding New Actions
1. Create action metadata file: `actions/<action_name>.yaml`
2. Create action implementation: `actions/<action_name>.sh` or `actions/<action_name>.py`
3. Make script executable: `chmod +x actions/<action_name>.sh`
4. Update pack manifest if needed
5. Test the action
### Testing Actions Locally
Test actions directly by setting environment variables:
```bash
# Test echo action
export ATTUNE_ACTION_MESSAGE="Test message"
export ATTUNE_ACTION_UPPERCASE=true
./actions/echo.sh
# Test HTTP request action
export ATTUNE_ACTION_URL="https://httpbin.org/get"
export ATTUNE_ACTION_METHOD="GET"
python3 actions/http_request.py
```
## Contributing
The core pack is part of the Attune project. Contributions are welcome!
1. Follow the existing code style and structure
2. Add tests for new actions/sensors
3. Update documentation
4. Submit a pull request
## License
The core pack is licensed under the same license as Attune.
## Support
- Documentation: https://docs.attune.io/packs/core
- Issues: https://github.com/attune-io/attune/issues
- Discussions: https://github.com/attune-io/attune/discussions

View File

@@ -0,0 +1,305 @@
# Core Pack Setup Guide
This guide explains how to set up and load the Attune core pack into your database.
## Overview
The **core pack** is Attune's built-in system pack that provides essential automation components including:
- **Timer Triggers**: Interval-based, cron-based, and datetime triggers
- **Basic Actions**: Echo, sleep, noop, and HTTP request actions
- **Built-in Sensors**: Interval timer sensor for time-based automation
The core pack must be loaded into the database before it can be used in rules and workflows.
## Prerequisites
Before loading the core pack, ensure:
1. **PostgreSQL is running** and accessible
2. **Database migrations are applied**: `sqlx migrate run`
3. **Python 3.8+** is installed (for the loader script)
4. **Required Python packages** are installed:
```bash
pip install psycopg2-binary pyyaml
```
## Loading Methods
### Method 1: Python Loader Script (Recommended)
The Python loader script reads the pack YAML files and creates database entries automatically.
**Usage:**
```bash
# From the project root
python3 scripts/load_core_pack.py
# With custom database URL
python3 scripts/load_core_pack.py --database-url "postgresql://user:pass@localhost:5432/attune"
# With custom pack directory
python3 scripts/load_core_pack.py --pack-dir ./packs
```
**What it does:**
- Reads `pack.yaml` for pack metadata
- Loads all trigger definitions from `triggers/*.yaml`
- Loads all action definitions from `actions/*.yaml`
- Loads all sensor definitions from `sensors/*.yaml`
- Creates or updates database entries (idempotent)
- Uses transactions (all-or-nothing)
**Output:**
```
============================================================
Core Pack Loader
============================================================
→ Loading pack metadata...
✓ Pack 'core' loaded (ID: 1)
→ Loading triggers...
✓ Trigger 'core.intervaltimer' (ID: 1)
✓ Trigger 'core.crontimer' (ID: 2)
✓ Trigger 'core.datetimetimer' (ID: 3)
→ Loading actions...
✓ Action 'core.echo' (ID: 1)
✓ Action 'core.sleep' (ID: 2)
✓ Action 'core.noop' (ID: 3)
✓ Action 'core.http_request' (ID: 4)
→ Loading sensors...
✓ Sensor 'core.interval_timer_sensor' (ID: 1)
============================================================
✓ Core pack loaded successfully!
============================================================
Pack ID: 1
Triggers: 3
Actions: 4
Sensors: 1
```
### Method 2: SQL Seed Script
For simpler setups or CI/CD, you can use the SQL seed script directly.
**Usage:**
```bash
psql $DATABASE_URL -f scripts/seed_core_pack.sql
```
**Note:** The SQL script may not include all pack metadata and is less flexible than the Python loader.
### Method 3: CLI (Future)
Once the CLI pack management commands are fully implemented:
```bash
attune pack register ./packs/core
```
## Verification
After loading, verify the core pack is available:
### Using CLI
```bash
# List all packs
attune pack list
# Show core pack details
attune pack show core
# List core pack actions
attune action list --pack core
# List core pack triggers
attune trigger list --pack core
```
### Using API
```bash
# Get pack info
curl http://localhost:8080/api/v1/packs/core | jq
# List actions
curl http://localhost:8080/api/v1/packs/core/actions | jq
# List triggers
curl http://localhost:8080/api/v1/packs/core/triggers | jq
```
### Using Database
```sql
-- Check pack exists
SELECT * FROM attune.pack WHERE ref = 'core';
-- Count components
SELECT
(SELECT COUNT(*) FROM attune.trigger WHERE pack_ref = 'core') as triggers,
(SELECT COUNT(*) FROM attune.action WHERE pack_ref = 'core') as actions,
(SELECT COUNT(*) FROM attune.sensor WHERE pack_ref = 'core') as sensors;
```
## Testing the Core Pack
### 1. Test Actions Directly
Test actions using environment variables:
```bash
# Test echo action
export ATTUNE_ACTION_MESSAGE="Hello, Attune!"
export ATTUNE_ACTION_UPPERCASE=false
./packs/core/actions/echo.sh
# Test sleep action
export ATTUNE_ACTION_SECONDS=2
export ATTUNE_ACTION_MESSAGE="Sleeping..."
./packs/core/actions/sleep.sh
# Test HTTP request action
export ATTUNE_ACTION_URL="https://httpbin.org/get"
export ATTUNE_ACTION_METHOD="GET"
python3 packs/core/actions/http_request.py
```
### 2. Run Pack Test Suite
```bash
# Run comprehensive test suite
./packs/core/test_core_pack.sh
```
### 3. Create a Test Rule
Create a simple rule to test the core pack integration:
```bash
# Create a rule that echoes every 10 seconds
attune rule create \
--name "test_timer_echo" \
--trigger "core.intervaltimer" \
--trigger-config '{"unit":"seconds","interval":10}' \
--action "core.echo" \
--action-params '{"message":"Timer triggered!"}' \
--enabled
```
## Updating the Core Pack
To update the core pack after making changes:
1. Edit the relevant YAML files in `packs/core/`
2. Re-run the loader script:
```bash
python3 scripts/load_core_pack.py
```
3. The loader will update existing entries (upsert; see the sketch below)
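For context, the upsert in step 3 is conceptually an `INSERT ... ON CONFLICT DO UPDATE` keyed on the component's ref. A hedged sketch follows; the column names and conflict key are assumptions, and the real statements live in `scripts/load_core_pack.py`:
```sql
-- Conceptual sketch of the loader's upsert; not copied from the loader itself.
INSERT INTO attune.pack (ref, name, version, enabled)
VALUES ('core', 'Core Pack', '1.0.0', TRUE)
ON CONFLICT (ref) DO UPDATE
SET name    = EXCLUDED.name,
    version = EXCLUDED.version,
    enabled = EXCLUDED.enabled;
```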
## Troubleshooting
### "Failed to connect to database"
- Verify PostgreSQL is running: `pg_isready`
- Check `DATABASE_URL` environment variable
- Test connection: `psql $DATABASE_URL -c "SELECT 1"`
### "pack.yaml not found"
- Ensure you're running from the project root
- Check the `--pack-dir` argument points to the correct directory
- Verify `packs/core/pack.yaml` exists
### "ModuleNotFoundError: No module named 'psycopg2'"
```bash
pip install psycopg2-binary pyyaml
```
### "Pack loaded but not visible in API"
- Restart the API service to reload pack data
- Check pack is enabled: `SELECT enabled FROM attune.pack WHERE ref = 'core'`
### Actions not executing
- Verify action scripts are executable: `chmod +x packs/core/actions/*.sh`
- Check worker service is running and can access the packs directory
- Verify runtime configuration is correct
## Development Workflow
When developing new core pack components:
1. **Add new action:**
- Create `actions/new_action.yaml` with metadata
- Create `actions/new_action.sh` (or `.py`) with implementation
- Make script executable: `chmod +x actions/new_action.sh`
- Test locally: `export ATTUNE_ACTION_*=... && ./actions/new_action.sh`
- Load into database: `python3 scripts/load_core_pack.py`
2. **Add new trigger:**
- Create `triggers/new_trigger.yaml` with metadata
- Load into database: `python3 scripts/load_core_pack.py`
- Create sensor if needed
3. **Add new sensor:**
- Create `sensors/new_sensor.yaml` with metadata
- Create `sensors/new_sensor.py` with implementation
- Load into database: `python3 scripts/load_core_pack.py`
- Restart sensor service
## Environment Variables
The loader script supports the following environment variables:
- `DATABASE_URL` - PostgreSQL connection string
- Default: `postgresql://postgres:postgres@localhost:5432/attune`
- Example: `postgresql://user:pass@db.example.com:5432/attune`
- `ATTUNE_PACKS_DIR` - Base directory for packs
- Default: `./packs`
- Example: `/opt/attune/packs`
## CI/CD Integration
For automated deployments:
```yaml
# Example GitHub Actions workflow
- name: Load Core Pack
run: |
python3 scripts/load_core_pack.py \
--database-url "${{ secrets.DATABASE_URL }}"
env:
DATABASE_URL: ${{ secrets.DATABASE_URL }}
```
## Next Steps
After loading the core pack:
1. **Create your first rule** using core triggers and actions
2. **Enable sensors** to start generating events
3. **Monitor executions** via the API or Web UI
4. **Explore pack documentation** in `README.md`
## Additional Resources
- **Pack README**: `packs/core/README.md` - Comprehensive component documentation
- **Testing Guide**: `packs/core/TESTING.md` - Testing procedures
- **API Documentation**: `docs/api-packs.md` - Pack management API
- **Action Development**: `docs/action-development.md` - Creating custom actions
## Support
If you encounter issues:
1. Check this troubleshooting section
2. Review logs from services (api, executor, worker, sensor)
3. Verify database state with SQL queries
4. File an issue with detailed error messages and logs
---
**Last Updated:** 2025-01-20
**Core Pack Version:** 1.0.0

View File

@@ -0,0 +1,410 @@
# Core Pack Testing Guide
Quick reference for testing core pack actions and sensors locally.
---
## Prerequisites
```bash
# Ensure scripts are executable
chmod +x packs/core/actions/*.sh
chmod +x packs/core/actions/*.py
chmod +x packs/core/sensors/*.py
# Install Python dependencies
pip install "requests>=2.28.0"
```
---
## Testing Actions
Actions receive parameters via environment variables prefixed with `ATTUNE_ACTION_`.
### Test `core.echo`
```bash
# Basic echo
export ATTUNE_ACTION_MESSAGE="Hello, Attune!"
./packs/core/actions/echo.sh
# With uppercase conversion
export ATTUNE_ACTION_MESSAGE="test message"
export ATTUNE_ACTION_UPPERCASE=true
./packs/core/actions/echo.sh
```
**Expected Output:**
```
Hello, Attune!
TEST MESSAGE
```
---
### Test `core.sleep`
```bash
# Sleep for 2 seconds
export ATTUNE_ACTION_SECONDS=2
export ATTUNE_ACTION_MESSAGE="Sleeping..."
time ./packs/core/actions/sleep.sh
```
**Expected Output:**
```
Sleeping...
Slept for 2 seconds
real 0m2.004s
```
---
### Test `core.noop`
```bash
# No operation with message
export ATTUNE_ACTION_MESSAGE="Testing noop"
./packs/core/actions/noop.sh
# With custom exit code
export ATTUNE_ACTION_EXIT_CODE=0
./packs/core/actions/noop.sh
echo "Exit code: $?"
```
**Expected Output:**
```
[NOOP] Testing noop
No operation completed successfully
Exit code: 0
```
---
### Test `core.http_request`
```bash
# Simple GET request
export ATTUNE_ACTION_URL="https://httpbin.org/get"
export ATTUNE_ACTION_METHOD="GET"
python3 ./packs/core/actions/http_request.py
# POST with JSON body
export ATTUNE_ACTION_URL="https://httpbin.org/post"
export ATTUNE_ACTION_METHOD="POST"
export ATTUNE_ACTION_JSON_BODY='{"name": "test", "value": 123}'
python3 ./packs/core/actions/http_request.py
# With custom headers
export ATTUNE_ACTION_URL="https://httpbin.org/headers"
export ATTUNE_ACTION_METHOD="GET"
export ATTUNE_ACTION_HEADERS='{"X-Custom-Header": "test-value"}'
python3 ./packs/core/actions/http_request.py
# With query parameters
export ATTUNE_ACTION_URL="https://httpbin.org/get"
export ATTUNE_ACTION_METHOD="GET"
export ATTUNE_ACTION_QUERY_PARAMS='{"foo": "bar", "page": "1"}'
python3 ./packs/core/actions/http_request.py
# With timeout
export ATTUNE_ACTION_URL="https://httpbin.org/delay/5"
export ATTUNE_ACTION_METHOD="GET"
export ATTUNE_ACTION_TIMEOUT=2
python3 ./packs/core/actions/http_request.py
```
**Expected Output:**
```json
{
"status_code": 200,
"headers": {
"Content-Type": "application/json",
...
},
"body": "...",
"json": {
"args": {},
"headers": {...},
...
},
"elapsed_ms": 234,
"url": "https://httpbin.org/get",
"success": true
}
```
---
## Testing Sensors
Sensors receive configuration via environment variables prefixed with `ATTUNE_SENSOR_`.
### Test `core.interval_timer_sensor`
```bash
# Create test trigger instances JSON
export ATTUNE_SENSOR_TRIGGERS='[
{
"id": 1,
"ref": "core.intervaltimer",
"config": {
"unit": "seconds",
"interval": 5
}
}
]'
# Run sensor (will output events every 5 seconds)
python3 ./packs/core/sensors/interval_timer_sensor.py
```
**Expected Output:**
```
Interval Timer Sensor started (check_interval=1s)
{"type": "interval", "interval_seconds": 5, "fired_at": "2024-01-20T12:00:00Z", "execution_count": 1, "sensor_ref": "core.interval_timer_sensor", "trigger_instance_id": 1, "trigger_ref": "core.intervaltimer"}
{"type": "interval", "interval_seconds": 5, "fired_at": "2024-01-20T12:00:05Z", "execution_count": 2, "sensor_ref": "core.interval_timer_sensor", "trigger_instance_id": 1, "trigger_ref": "core.intervaltimer"}
...
```
Press `Ctrl+C` to stop the sensor.
---
## Testing with Multiple Trigger Instances
```bash
# Test multiple timers
export ATTUNE_SENSOR_TRIGGERS='[
{
"id": 1,
"ref": "core.intervaltimer",
"config": {"unit": "seconds", "interval": 3}
},
{
"id": 2,
"ref": "core.intervaltimer",
"config": {"unit": "seconds", "interval": 5}
},
{
"id": 3,
"ref": "core.intervaltimer",
"config": {"unit": "seconds", "interval": 10}
}
]'
python3 ./packs/core/sensors/interval_timer_sensor.py
```
You should see events firing at different intervals (3s, 5s, 10s).
---
## Validation Tests
### Validate YAML Schemas
```bash
# Install yamllint (optional)
pip install yamllint
# Validate all YAML files
yamllint packs/core/**/*.yaml
```
### Validate JSON Schemas
```bash
# Parse an action's YAML and print its parameters section as JSON
# (quick sanity check that the parameter schema is well-formed)
python3 -c "
import json, yaml
with open('packs/core/actions/http_request.yaml') as f:
    data = yaml.safe_load(f)
print(json.dumps(data.get('parameters', {}), indent=2))
"
```
---
## Error Testing
### Test Invalid Parameters
```bash
# Invalid seconds value for sleep
export ATTUNE_ACTION_SECONDS=-1
./packs/core/actions/sleep.sh
# Expected: ERROR: seconds must be between 0 and 3600
# Invalid exit code for noop
export ATTUNE_ACTION_EXIT_CODE=999
./packs/core/actions/noop.sh
# Expected: ERROR: exit_code must be between 0 and 255
# Missing required parameter for HTTP request
unset ATTUNE_ACTION_URL
python3 ./packs/core/actions/http_request.py
# Expected: ERROR: Required parameter 'url' not provided
```
---
## Performance Testing
### Measure Action Execution Time
```bash
# Echo action
time for i in {1..100}; do
export ATTUNE_ACTION_MESSAGE="Test $i"
./packs/core/actions/echo.sh > /dev/null
done
# HTTP request action
time for i in {1..10}; do
export ATTUNE_ACTION_URL="https://httpbin.org/get"
python3 ./packs/core/actions/http_request.py > /dev/null
done
```
---
## Integration Testing (with Attune Services)
### Prerequisites
```bash
# Start Attune services
docker-compose up -d postgres rabbitmq redis
# Run migrations
sqlx migrate run
# Load core pack (future)
# attune pack load packs/core
```
### Test Action Execution via API
```bash
# Create execution manually
curl -X POST http://localhost:8080/api/v1/executions \
-H "Content-Type: application/json" \
-d '{
"action_ref": "core.echo",
"parameters": {
"message": "API test",
"uppercase": true
}
}'
# Check execution status
curl http://localhost:8080/api/v1/executions/{execution_id}
```
### Test Sensor via Sensor Service
```bash
# Start sensor service (future)
# cargo run --bin attune-sensor
# Check events created
curl http://localhost:8080/api/v1/events?limit=10
```
---
## Troubleshooting
### Action Not Executing
```bash
# Check file permissions
ls -la packs/core/actions/
# Ensure scripts are executable
chmod +x packs/core/actions/*.sh
chmod +x packs/core/actions/*.py
```
### Python Import Errors
```bash
# Install required packages
pip install "requests>=2.28.0"
# Verify Python version
python3 --version # Should be 3.8+
```
### Environment Variables Not Working
```bash
# Print all ATTUNE_* environment variables
env | grep ATTUNE_
# Test with explicit export
export ATTUNE_ACTION_MESSAGE="test"
echo $ATTUNE_ACTION_MESSAGE
```
---
## Automated Test Script
Create a test script `test_core_pack.sh`:
```bash
#!/bin/bash
set -e
echo "Testing Core Pack Actions..."
# Test echo
echo "→ Testing core.echo..."
export ATTUNE_ACTION_MESSAGE="Test"
./packs/core/actions/echo.sh > /dev/null
echo "✓ core.echo passed"
# Test sleep
echo "→ Testing core.sleep..."
export ATTUNE_ACTION_SECONDS=1
./packs/core/actions/sleep.sh > /dev/null
echo "✓ core.sleep passed"
# Test noop
echo "→ Testing core.noop..."
export ATTUNE_ACTION_MESSAGE="test"
./packs/core/actions/noop.sh > /dev/null
echo "✓ core.noop passed"
# Test HTTP request
echo "→ Testing core.http_request..."
export ATTUNE_ACTION_URL="https://httpbin.org/get"
export ATTUNE_ACTION_METHOD="GET"
python3 ./packs/core/actions/http_request.py > /dev/null
echo "✓ core.http_request passed"
echo ""
echo "All tests passed! ✓"
```
Run with:
```bash
chmod +x test_core_pack.sh
./test_core_pack.sh
```
---
## Next Steps
1. Implement pack loader to register components in database
2. Update worker service to execute actions from filesystem
3. Update sensor service to run sensors from filesystem
4. Add comprehensive integration tests
5. Create CLI commands for pack management
See `docs/core-pack-integration.md` for implementation details.

Some files were not shown because too many files have changed in this diff.