87 Commits

Author SHA1 Message Date
David Culbreth
7ef2b59b23 working on arm64 native
Some checks failed
CI / Rustfmt (push) Successful in 24s
CI / Cargo Audit & Deny (push) Successful in 36s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 48s
CI / Web Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
CI / Clippy (push) Failing after 1m53s
Publish Images / Publish Docker Dist Bundle (push) Failing after 8s
Publish Images / Publish web (amd64) (push) Successful in 56s
CI / Security Advisory Checks (push) Successful in 38s
Publish Images / Publish web (arm64) (push) Successful in 3m29s
CI / Tests (push) Successful in 9m21s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 12m28s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m20s
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/agent (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
2026-03-27 16:37:46 -05:00
3a13bf754a fixing docker compose distribution
Some checks failed
CI / Rustfmt (push) Successful in 20s
CI / Clippy (push) Successful in 2m3s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 1m21s
CI / Security Blocking Checks (push) Successful in 10s
CI / Web Advisory Checks (push) Successful in 1m3s
CI / Security Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Tests (push) Successful in 8m46s
Publish Images / Publish web (arm64) (push) Successful in 3m20s
Publish Images / Publish Docker Dist Bundle (push) Failing after 9s
Publish Images / Publish web (amd64) (push) Successful in 52s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m20s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m30s
Publish Images / Publish agent (amd64) (push) Successful in 29s
Publish Images / Publish executor (amd64) (push) Successful in 35s
Publish Images / Publish api (amd64) (push) Successful in 42s
Publish Images / Publish notifier (amd64) (push) Successful in 35s
Publish Images / Publish agent (arm64) (push) Successful in 1m3s
Publish Images / Publish api (arm64) (push) Successful in 1m55s
Publish Images / Publish executor (arm64) (push) Successful in 2m1s
Publish Images / Publish notifier (arm64) (push) Successful in 1m54s
Publish Images / Publish manifest attune/agent (push) Successful in 10s
Publish Images / Publish manifest attune/api (push) Successful in 12s
Publish Images / Publish manifest attune/executor (push) Successful in 10s
Publish Images / Publish manifest attune/notifier (push) Successful in 9s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 15:39:07 -05:00
f4ef823f43 fixing audit finding
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 53s
CI / Web Advisory Checks (push) Successful in 34s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 36s
CI / Clippy (push) Successful in 2m8s
Publish Images / Publish Docker Dist Bundle (push) Failing after 8s
Publish Images / Publish web (amd64) (push) Successful in 53s
Publish Images / Publish web (arm64) (push) Successful in 3m28s
CI / Tests (push) Successful in 9m20s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m21s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m23s
Publish Images / Publish agent (amd64) (push) Successful in 23s
Publish Images / Publish api (amd64) (push) Successful in 33s
Publish Images / Publish notifier (amd64) (push) Successful in 38s
Publish Images / Publish executor (amd64) (push) Successful in 54s
Publish Images / Publish agent (arm64) (push) Successful in 59s
Publish Images / Publish executor (arm64) (push) Successful in 1m55s
Publish Images / Publish api (arm64) (push) Successful in 2m2s
Publish Images / Publish notifier (arm64) (push) Successful in 2m3s
Publish Images / Publish manifest attune/agent (push) Successful in 10s
Publish Images / Publish manifest attune/executor (push) Successful in 19s
Publish Images / Publish manifest attune/api (push) Successful in 21s
Publish Images / Publish manifest attune/notifier (push) Successful in 12s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 14:05:53 -05:00
ab7d31de2f fixing docker compose distribution 2026-03-26 14:04:57 -05:00
938c271ff5 distributable, please
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 36s
CI / Security Blocking Checks (push) Successful in 6s
CI / Web Blocking Checks (push) Successful in 53s
CI / Web Advisory Checks (push) Successful in 34s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 38s
CI / Clippy (push) Successful in 2m7s
Publish Images / Publish Docker Dist Bundle (push) Failing after 19s
Publish Images / Publish web (amd64) (push) Successful in 49s
Publish Images / Publish web (arm64) (push) Successful in 3m31s
CI / Tests (push) Successful in 8m48s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m42s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m19s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 38s
Publish Images / Publish notifier (amd64) (push) Successful in 42s
Publish Images / Publish executor (amd64) (push) Successful in 46s
Publish Images / Publish agent (arm64) (push) Successful in 56s
Publish Images / Publish api (arm64) (push) Successful in 1m52s
Publish Images / Publish executor (arm64) (push) Successful in 2m2s
Publish Images / Publish notifier (arm64) (push) Successful in 2m3s
Publish Images / Publish manifest attune/agent (push) Successful in 6s
Publish Images / Publish manifest attune/api (push) Successful in 11s
Publish Images / Publish manifest attune/executor (push) Successful in 10s
Publish Images / Publish manifest attune/notifier (push) Successful in 8s
Publish Images / Publish manifest attune/web (push) Successful in 8s
2026-03-26 12:26:23 -05:00
da8055cb79 publishable docker compose?
Some checks failed
CI / Cargo Audit & Deny (push) Successful in 31s
CI / Rustfmt (push) Successful in 18s
CI / Security Blocking Checks (push) Successful in 6s
CI / Web Blocking Checks (push) Successful in 52s
CI / Web Advisory Checks (push) Successful in 31s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
CI / Security Advisory Checks (push) Successful in 38s
CI / Clippy (push) Successful in 1m58s
Publish Images / Publish Docker Dist Bundle (push) Failing after 21s
Publish Images / Publish web (amd64) (push) Successful in 50s
Publish Images / Publish web (arm64) (push) Successful in 3m26s
CI / Tests (push) Successful in 9m1s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m25s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m42s
Publish Images / Publish agent (amd64) (push) Successful in 28s
Publish Images / Publish api (amd64) (push) Successful in 45s
Publish Images / Publish executor (amd64) (push) Successful in 46s
Publish Images / Publish notifier (amd64) (push) Successful in 49s
Publish Images / Publish agent (arm64) (push) Successful in 1m0s
Publish Images / Publish api (arm64) (push) Successful in 1m51s
Publish Images / Publish executor (arm64) (push) Successful in 2m1s
Publish Images / Publish notifier (arm64) (push) Successful in 2m1s
Publish Images / Publish manifest attune/agent (push) Successful in 6s
Publish Images / Publish manifest attune/api (push) Successful in 10s
Publish Images / Publish manifest attune/executor (push) Successful in 7s
Publish Images / Publish manifest attune/notifier (push) Successful in 9s
Publish Images / Publish manifest attune/web (push) Successful in 7s
2026-03-26 08:46:18 -05:00
03a239d22b manifest publish retries and more descriptive logs.
All checks were successful
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 34s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Blocking Checks (push) Successful in 52s
CI / Web Advisory Checks (push) Successful in 38s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
CI / Clippy (push) Successful in 2m1s
CI / Security Advisory Checks (push) Successful in 1m24s
Publish Images / Publish web (amd64) (push) Successful in 46s
Publish Images / Publish web (arm64) (push) Successful in 3m23s
CI / Tests (push) Successful in 8m54s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m27s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m19s
Publish Images / Publish agent (amd64) (push) Successful in 23s
Publish Images / Publish api (amd64) (push) Successful in 37s
Publish Images / Publish executor (amd64) (push) Successful in 47s
Publish Images / Publish agent (arm64) (push) Successful in 1m1s
Publish Images / Publish notifier (amd64) (push) Successful in 40s
Publish Images / Publish api (arm64) (push) Successful in 1m51s
Publish Images / Publish executor (arm64) (push) Successful in 2m1s
Publish Images / Publish notifier (arm64) (push) Successful in 1m49s
Publish Images / Publish manifest attune/agent (push) Successful in 7s
Publish Images / Publish manifest attune/executor (push) Successful in 8s
Publish Images / Publish manifest attune/notifier (push) Successful in 9s
Publish Images / Publish manifest attune/api (push) Successful in 18s
Publish Images / Publish manifest attune/web (push) Successful in 8s
2026-03-26 07:40:07 -05:00
ba83958337 trying to fix manifest push
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 35s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Blocking Checks (push) Successful in 51s
CI / Web Advisory Checks (push) Successful in 37s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Clippy (push) Successful in 2m9s
CI / Security Advisory Checks (push) Successful in 1m25s
Publish Images / Publish web (amd64) (push) Successful in 52s
Publish Images / Publish web (arm64) (push) Successful in 3m27s
CI / Tests (push) Successful in 8m48s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m50s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m29s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 37s
Publish Images / Publish executor (amd64) (push) Successful in 40s
Publish Images / Publish agent (arm64) (push) Successful in 1m2s
Publish Images / Publish notifier (amd64) (push) Successful in 38s
Publish Images / Publish executor (arm64) (push) Successful in 1m57s
Publish Images / Publish api (arm64) (push) Successful in 1m58s
Publish Images / Publish notifier (arm64) (push) Successful in 2m6s
Publish Images / Publish manifest attune/agent (push) Successful in 12s
Publish Images / Publish manifest attune/api (push) Successful in 11s
Publish Images / Publish manifest attune/notifier (push) Successful in 13s
Publish Images / Publish manifest attune/executor (push) Successful in 16s
Publish Images / Publish manifest attune/web (push) Failing after 37s
2026-03-25 17:29:27 -05:00
c11bc1a2bf trying to fix manifest push
Some checks failed
CI / Rustfmt (push) Successful in 23s
CI / Clippy (push) Successful in 2m6s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Web Blocking Checks (push) Successful in 52s
CI / Security Blocking Checks (push) Successful in 6s
CI / Web Advisory Checks (push) Successful in 36s
CI / Security Advisory Checks (push) Successful in 38s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
Publish Images / Publish web (arm64) (push) Successful in 3m26s
CI / Tests (push) Successful in 8m52s
Publish Images / Publish web (amd64) (push) Successful in 1m8s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m29s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m46s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 40s
Publish Images / Publish executor (amd64) (push) Successful in 39s
Publish Images / Publish agent (arm64) (push) Successful in 57s
Publish Images / Publish notifier (amd64) (push) Successful in 41s
Publish Images / Publish api (arm64) (push) Successful in 2m3s
Publish Images / Publish executor (arm64) (push) Successful in 2m2s
Publish Images / Publish notifier (arm64) (push) Successful in 1m57s
Publish Images / Publish manifest attune/api (push) Failing after 10s
Publish Images / Publish manifest attune/agent (push) Successful in 12s
Publish Images / Publish manifest attune/executor (push) Successful in 11s
Publish Images / Publish manifest attune/notifier (push) Successful in 11s
Publish Images / Publish manifest attune/web (push) Failing after 8s
2026-03-25 17:10:36 -05:00
eb82755137 trying different urls? not sure why publishing is only working for the arm64 builds
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Security Blocking Checks (push) Has been cancelled
CI / Tests (push) Has been cancelled
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Web Blocking Checks (push) Has been cancelled
Publish Images / Resolve Publish Metadata (push) Successful in 1s
Publish Images / Publish web (amd64) (push) Successful in 45s
Publish Images / Publish web (arm64) (push) Successful in 3m19s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m24s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m43s
Publish Images / Publish agent (amd64) (push) Successful in 27s
Publish Images / Publish api (amd64) (push) Successful in 41s
Publish Images / Publish agent (arm64) (push) Successful in 1m0s
Publish Images / Publish notifier (amd64) (push) Successful in 40s
Publish Images / Publish executor (arm64) (push) Successful in 1m58s
Publish Images / Publish notifier (arm64) (push) Successful in 1m53s
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Successful in 45s
Publish Images / Publish api (arm64) (push) Successful in 2m2s
Publish Images / Publish manifest attune/agent (push) Failing after 1s
2026-03-25 14:29:15 -05:00
058f392616 updating the publisher, again
Some checks failed
CI / Cargo Audit & Deny (push) Successful in 1m11s
CI / Rustfmt (push) Successful in 1m20s
CI / Security Blocking Checks (push) Successful in 9s
CI / Clippy (push) Successful in 2m1s
CI / Web Advisory Checks (push) Successful in 1m9s
CI / Web Blocking Checks (push) Successful in 1m26s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 39s
Publish Images / Publish web (arm64) (push) Successful in 3m50s
CI / Tests (push) Successful in 9m4s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m17s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 12m21s
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
Publish Images / Publish web (amd64) (push) Failing after 47s
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/agent (push) Has been skipped
2026-03-25 13:10:44 -05:00
0264a66b5a renaming container artifacts and adding project linking stage
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Clippy (push) Successful in 2m3s
CI / Cargo Audit & Deny (push) Successful in 34s
CI / Web Blocking Checks (push) Successful in 1m27s
CI / Security Blocking Checks (push) Successful in 15s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 1m25s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Tests (push) Successful in 8m56s
Publish Images / Publish web (arm64) (push) Failing after 3m49s
Publish Images / Publish web (amd64) (push) Failing after 1m28s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 12m21s
Publish Images / Build Rust Bundles (arm64) (push) Failing after 12m28s
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish manifest attune/api (push) Has been skipped
Publish Images / Publish manifest attune/executor (push) Has been skipped
Publish Images / Publish manifest attune/agent (push) Has been skipped
Publish Images / Publish manifest attune/notifier (push) Has been skipped
Publish Images / Publish manifest attune/web (push) Has been skipped
2026-03-25 12:39:47 -05:00
542e72a454 fixing glibc version check
Some checks failed
CI / Clippy (push) Successful in 2m1s
CI / Rustfmt (push) Successful in 21s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 53s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Advisory Checks (push) Successful in 37s
CI / Security Advisory Checks (push) Successful in 36s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
Publish Images / Publish web (arm64) (push) Successful in 3m39s
CI / Tests (push) Successful in 8m37s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 12m21s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 12m15s
Publish Images / Publish agent (amd64) (push) Successful in 26s
Publish Images / Publish api (amd64) (push) Successful in 39s
Publish Images / Publish executor (amd64) (push) Successful in 37s
Publish Images / Publish notifier (amd64) (push) Successful in 37s
Publish Images / Publish agent (arm64) (push) Successful in 1m34s
Publish Images / Publish executor (arm64) (push) Successful in 2m12s
Publish Images / Publish api (arm64) (push) Successful in 2m22s
Publish Images / Publish manifest attune-executor (push) Has been skipped
Publish Images / Publish manifest attune-notifier (push) Has been skipped
Publish Images / Publish manifest attune-web (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Successful in 2m10s
Publish Images / Publish web (amd64) (push) Successful in 47s
Publish Images / Publish manifest attune-agent (push) Failing after 2s
Publish Images / Publish manifest attune-api (push) Failing after 1s
2026-03-25 11:17:50 -05:00
a118563366 building? hopefully?
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Clippy (push) Successful in 2m3s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 52s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Advisory Checks (push) Successful in 36s
CI / Security Advisory Checks (push) Successful in 43s
Publish Images / Resolve Publish Metadata (push) Successful in 2s
Publish Images / Publish web (arm64) (push) Failing after 3m53s
CI / Tests (push) Successful in 8m45s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 8m57s
Publish Images / Publish web (amd64) (push) Successful in 48s
Publish Images / Publish agent (amd64) (push) Has been cancelled
Publish Images / Publish api (amd64) (push) Has been cancelled
Publish Images / Publish executor (amd64) (push) Has been cancelled
Publish Images / Publish notifier (amd64) (push) Has been cancelled
Publish Images / Publish agent (arm64) (push) Has been cancelled
Publish Images / Publish api (arm64) (push) Has been cancelled
Publish Images / Publish executor (arm64) (push) Has been cancelled
Publish Images / Build Rust Bundles (arm64) (push) Has been cancelled
Publish Images / Publish notifier (arm64) (push) Has been cancelled
Publish Images / Publish manifest attune-agent (push) Has been cancelled
Publish Images / Publish manifest attune-api (push) Has been cancelled
Publish Images / Publish manifest attune-executor (push) Has been cancelled
Publish Images / Publish manifest attune-notifier (push) Has been cancelled
Publish Images / Publish manifest attune-web (push) Has been cancelled
2026-03-25 10:52:07 -05:00
a057ad5db5 adjusting publish pipeline to cross-compile because rpis are slow
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Clippy (push) Failing after 2m3s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Web Blocking Checks (push) Successful in 51s
CI / Security Blocking Checks (push) Successful in 5s
CI / Web Advisory Checks (push) Successful in 38s
CI / Security Advisory Checks (push) Successful in 36s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
Publish Images / Publish web (arm64) (push) Successful in 3m34s
Publish Images / Build Rust Bundles (amd64) (push) Failing after 4m1s
CI / Tests (push) Successful in 8m47s
Publish Images / Publish web (amd64) (push) Failing after 46s
Publish Images / Build Rust Bundles (arm64) (push) Failing after 4m3s
Publish Images / Publish agent (arm64) (push) Has been skipped
Publish Images / Publish api (arm64) (push) Has been skipped
Publish Images / Publish agent (amd64) (push) Has been skipped
Publish Images / Publish api (amd64) (push) Has been skipped
Publish Images / Publish executor (arm64) (push) Has been skipped
Publish Images / Publish notifier (arm64) (push) Has been skipped
Publish Images / Publish executor (amd64) (push) Has been skipped
Publish Images / Publish notifier (amd64) (push) Has been skipped
Publish Images / Publish manifest attune-agent (push) Has been skipped
Publish Images / Publish manifest attune-api (push) Has been skipped
Publish Images / Publish manifest attune-executor (push) Has been skipped
Publish Images / Publish manifest attune-notifier (push) Has been skipped
Publish Images / Publish manifest attune-web (push) Has been skipped
2026-03-25 10:07:48 -05:00
8e273ec683 more adjustments to publisher 2026-03-25 08:14:06 -05:00
16f1c2f079 matching runner tags after changing runner tags
Some checks failed
CI / Rustfmt (push) Successful in 1m4s
CI / Clippy (push) Failing after 1m46s
CI / Cargo Audit & Deny (push) Successful in 34s
CI / Web Blocking Checks (push) Successful in 1m24s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 1m26s
Publish Images / Resolve Publish Metadata (push) Successful in 1s
CI / Tests (push) Successful in 8m51s
Publish Images / Publish web (amd64) (push) Successful in 1m4s
Publish Images / Build Rust Bundles (amd64) (push) Successful in 10m59s
Publish Images / Build Rust Bundles (arm64) (push) Successful in 1h19m31s
Publish Images / Publish agent (amd64) (push) Failing after 14s
Publish Images / Publish executor (amd64) (push) Failing after 12s
Publish Images / Publish api (amd64) (push) Failing after 32s
Publish Images / Publish notifier (amd64) (push) Failing after 14s
Publish Images / Publish api (arm64) (push) Failing after 1m58s
Publish Images / Publish executor (arm64) (push) Failing after 49s
Publish Images / Publish notifier (arm64) (push) Failing after 48s
Publish Images / Publish web (arm64) (push) Successful in 3m47s
Publish Images / Publish agent (arm64) (push) Failing after 4m13s
Publish Images / Publish manifest attune-agent (push) Has been skipped
Publish Images / Publish manifest attune-api (push) Has been skipped
Publish Images / Publish manifest attune-executor (push) Has been skipped
Publish Images / Publish manifest attune-notifier (push) Has been skipped
Publish Images / Publish manifest attune-web (push) Has been skipped
2026-03-25 01:22:50 -05:00
62307e8c65 publishing with intentional architecture
Some checks failed
Publish Images / Resolve Publish Metadata (push) Successful in 18s
Publish Images / Publish web (arm64) (push) Successful in 7m16s
CI / Rustfmt (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Tests (push) Has been cancelled
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Web Blocking Checks (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
Publish Images / Publish agent (amd64) (push) Has been cancelled
Publish Images / Publish api (amd64) (push) Has been cancelled
Publish Images / Publish executor (amd64) (push) Has been cancelled
Publish Images / Publish notifier (amd64) (push) Has been cancelled
Publish Images / Publish agent (arm64) (push) Has been cancelled
Publish Images / Publish api (arm64) (push) Has been cancelled
Publish Images / Publish executor (arm64) (push) Has been cancelled
Publish Images / Publish notifier (arm64) (push) Has been cancelled
Publish Images / Publish web (amd64) (push) Has been cancelled
Publish Images / Build Rust Bundles (amd64) (push) Has started running
Publish Images / Publish manifest attune-agent (push) Has been cancelled
Publish Images / Publish manifest attune-api (push) Has been cancelled
Publish Images / Publish manifest attune-executor (push) Has been cancelled
Publish Images / Publish manifest attune-notifier (push) Has been cancelled
Publish Images / Build Rust Bundles (arm64) (push) Has been cancelled
Publish Images / Publish manifest attune-web (push) Has been cancelled
2026-03-25 01:10:10 -05:00
2ebb03b868 first pass at access control setup 2026-03-24 14:45:07 -05:00
af5175b96a removing no-longer-used dockerfiles.
Some checks failed
CI / Cargo Audit & Deny (push) Successful in 1m10s
CI / Security Blocking Checks (push) Successful in 10s
CI / Web Advisory Checks (push) Successful in 1m13s
CI / Clippy (push) Failing after 2m50s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 1s
CI / Security Advisory Checks (push) Successful in 1m24s
Publish Images And Chart / Publish init-packs (push) Failing after 12s
CI / Rustfmt (push) Successful in 4m22s
Publish Images And Chart / Publish web (push) Successful in 45s
Publish Images And Chart / Publish worker (push) Failing after 54s
Publish Images And Chart / Publish agent (push) Successful in 4m14s
CI / Web Blocking Checks (push) Successful in 9m31s
CI / Tests (push) Successful in 9m41s
Publish Images And Chart / Publish migrations (push) Failing after 13s
Publish Images And Chart / Publish sensor (push) Failing after 12s
Publish Images And Chart / Publish init-user (push) Failing after 2m3s
Publish Images And Chart / Publish api (push) Successful in 8m55s
Publish Images And Chart / Publish notifier (push) Successful in 8m53s
Publish Images And Chart / Publish executor (push) Successful in 1h16m29s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
2026-03-23 13:05:53 -05:00
8af8c1af9c first iteration of agent-style worker and sensor containers. 2026-03-23 12:49:15 -05:00
d4c6240485 agent workers 2026-03-21 10:05:02 -05:00
4d5a3b1bf5 agent-style workers 2026-03-21 08:27:20 -05:00
8ba7e3bb84 [wip] universal workers 2026-03-21 07:32:11 -05:00
0782675a2b purging unused Dockerfiles 2026-03-20 21:21:44 -05:00
5a18c73572 trying to make the pipeline builds work, desperately. 2026-03-20 20:15:44 -05:00
1c16f65476 addressing configuration dependency issues
Some checks failed
CI / Rustfmt (push) Successful in 59s
CI / Web Blocking Checks (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Tests (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Cargo Audit & Deny (push) Has been cancelled
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 3s
Publish Images And Chart / Publish init-packs (push) Successful in 47s
Publish Images And Chart / Publish sensor (push) Failing after 23s
Publish Images And Chart / Publish init-user (push) Successful in 1m51s
Publish Images And Chart / Publish migrations (push) Successful in 1m57s
Publish Images And Chart / Publish web (push) Successful in 57s
Publish Images And Chart / Publish api (push) Failing after 48s
Publish Images And Chart / Publish worker (push) Failing after 1m23s
Publish Images And Chart / Publish executor (push) Failing after 1m9s
Publish Images And Chart / Publish notifier (push) Failing after 1h44m16s
Publish Images And Chart / Publish Helm Chart (push) Has been cancelled
2026-03-20 19:50:44 -05:00
ae8029f9c4 patching npm audit finding
Some checks failed
CI / Tests (push) Has been cancelled
CI / Rustfmt (push) Has been cancelled
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Web Blocking Checks (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Clippy (push) Has been cancelled
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 2s
Publish Images And Chart / Publish init-user (push) Failing after 25s
Publish Images And Chart / Publish sensor (push) Failing after 22s
Publish Images And Chart / Publish migrations (push) Failing after 1m2s
Publish Images And Chart / Publish web (push) Failing after 50s
Publish Images And Chart / Publish worker (push) Failing after 50s
Publish Images And Chart / Publish executor (push) Has been cancelled
Publish Images And Chart / Publish api (push) Has been cancelled
Publish Images And Chart / Publish notifier (push) Has been cancelled
Publish Images And Chart / Publish init-packs (push) Failing after 1m21s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
2026-03-20 17:06:32 -05:00
882ba0da84 attempting more pipeline changes for local cluster registries 2026-03-20 17:04:57 -05:00
ee4fc31b9d attempting more pipeline changes for local cluster registries
Some checks failed
CI / Rustfmt (push) Successful in 57s
CI / Web Blocking Checks (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Tests (push) Has been cancelled
CI / Cargo Audit & Deny (push) Has been cancelled
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 1s
Publish Images And Chart / Publish init-packs (push) Failing after 14s
Publish Images And Chart / Publish migrations (push) Failing after 17s
Publish Images And Chart / Publish init-user (push) Failing after 35s
Publish Images And Chart / Publish sensor (push) Failing after 17s
Publish Images And Chart / Publish api (push) Failing after 15s
Publish Images And Chart / Publish web (push) Failing after 39s
Publish Images And Chart / Publish worker (push) Failing after 40s
Publish Images And Chart / Publish executor (push) Failing after 16s
Publish Images And Chart / Publish notifier (push) Failing after 38s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
2026-03-20 16:59:01 -05:00
c791495572 attempting more pipeline changes for local cluster registries
Some checks failed
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Web Blocking Checks (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Rustfmt (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Tests (push) Has been cancelled
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 2s
Publish Images And Chart / Publish Helm Chart (push) Blocked by required conditions
Publish Images And Chart / Publish notifier (push) Waiting to run
Publish Images And Chart / Publish init-packs (push) Failing after 1m19s
Publish Images And Chart / Publish init-user (push) Failing after 1m7s
Publish Images And Chart / Publish migrations (push) Failing after 1m7s
Publish Images And Chart / Publish sensor (push) Failing after 45s
Publish Images And Chart / Publish web (push) Failing after 3m28s
Publish Images And Chart / Publish worker (push) Failing after 48s
Publish Images And Chart / Publish api (push) Has been cancelled
Publish Images And Chart / Publish executor (push) Has been cancelled
2026-03-20 16:48:41 -05:00
35182ccb28 attempting more pipeline changes for local cluster registries
Some checks failed
CI / Rustfmt (push) Successful in 54s
CI / Security Advisory Checks (push) Waiting to run
CI / Web Blocking Checks (push) Has been cancelled
CI / Tests (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 5s
Publish Images And Chart / Publish init-packs (push) Has started running
Publish Images And Chart / Publish init-user (push) Failing after 40s
Publish Images And Chart / Publish migrations (push) Failing after 39s
Publish Images And Chart / Publish sensor (push) Failing after 34s
Publish Images And Chart / Publish web (push) Failing after 37s
Publish Images And Chart / Publish worker (push) Failing after 39s
Publish Images And Chart / Publish api (push) Failing after 37s
Publish Images And Chart / Publish executor (push) Failing after 36s
Publish Images And Chart / Publish notifier (push) Failing after 36s
Publish Images And Chart / Publish Helm Chart (push) Has been cancelled
2026-03-20 16:40:20 -05:00
16e6b69fc7 updating publish workflow again
Some checks failed
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Tests (push) Has been cancelled
CI / Web Blocking Checks (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Rustfmt (push) Has been cancelled
CI / Clippy (push) Has been cancelled
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 2s
Publish Images And Chart / Publish init-packs (push) Failing after 33s
Publish Images And Chart / Publish migrations (push) Failing after 21s
Publish Images And Chart / Publish init-user (push) Has started running
Publish Images And Chart / Publish sensor (push) Has started running
Publish Images And Chart / Publish web (push) Has started running
Publish Images And Chart / Publish worker (push) Has been cancelled
Publish Images And Chart / Publish api (push) Has been cancelled
Publish Images And Chart / Publish executor (push) Has been cancelled
Publish Images And Chart / Publish notifier (push) Has been cancelled
Publish Images And Chart / Publish Helm Chart (push) Has been cancelled
2026-03-20 16:33:55 -05:00
a7962eec09 auto-detect cluster registry host
Some checks failed
CI / Rustfmt (push) Successful in 53s
CI / Cargo Audit & Deny (push) Successful in 2m4s
CI / Web Blocking Checks (push) Successful in 4m47s
CI / Security Blocking Checks (push) Successful in 55s
CI / Tests (push) Successful in 8m51s
CI / Security Advisory Checks (push) Successful in 39s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 2s
Publish Images And Chart / Publish init-packs (push) Failing after 15s
Publish Images And Chart / Publish init-user (push) Failing after 13s
CI / Web Advisory Checks (push) Successful in 1m31s
Publish Images And Chart / Publish migrations (push) Failing after 12s
Publish Images And Chart / Publish web (push) Failing after 13s
Publish Images And Chart / Publish worker (push) Failing after 12s
Publish Images And Chart / Publish sensor (push) Failing after 38s
Publish Images And Chart / Publish api (push) Failing after 13s
Publish Images And Chart / Publish notifier (push) Failing after 8s
Publish Images And Chart / Publish executor (push) Failing after 33s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
CI / Clippy (push) Successful in 19m26s
2026-03-20 16:12:45 -05:00
2182be1008 adding git hooks to catch pipeline issues before pushing
Some checks failed
CI / Rustfmt (push) Successful in 51s
CI / Clippy (push) Successful in 2m8s
CI / Web Blocking Checks (push) Successful in 47s
CI / Security Blocking Checks (push) Successful in 9s
CI / Cargo Audit & Deny (push) Successful in 2m3s
CI / Web Advisory Checks (push) Successful in 25s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 1s
Publish Images And Chart / Publish init-packs (push) Failing after 11s
Publish Images And Chart / Publish init-user (push) Failing after 11s
Publish Images And Chart / Publish migrations (push) Failing after 11s
Publish Images And Chart / Publish sensor (push) Failing after 6s
Publish Images And Chart / Publish web (push) Failing after 9s
Publish Images And Chart / Publish worker (push) Failing after 7s
Publish Images And Chart / Publish api (push) Failing after 7s
Publish Images And Chart / Publish executor (push) Failing after 10s
Publish Images And Chart / Publish notifier (push) Failing after 10s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
CI / Security Advisory Checks (push) Successful in 6m20s
CI / Tests (push) Successful in 1h35m45s
2026-03-20 12:56:17 -05:00
43b27044bb formatting 2026-03-20 12:38:12 -05:00
4df621c5c8 adding some initial SSO providers, updating publish workflow
Some checks failed
CI / Rustfmt (push) Failing after 21s
CI / Cargo Audit & Deny (push) Failing after 33s
CI / Web Blocking Checks (push) Successful in 50s
CI / Security Blocking Checks (push) Successful in 7s
CI / Web Advisory Checks (push) Successful in 33s
CI / Security Advisory Checks (push) Successful in 34s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 1s
Publish Images And Chart / Publish init-packs (push) Failing after 11s
Publish Images And Chart / Publish init-user (push) Failing after 10s
Publish Images And Chart / Publish migrations (push) Failing after 11s
Publish Images And Chart / Publish sensor (push) Failing after 10s
Publish Images And Chart / Publish web (push) Failing after 10s
Publish Images And Chart / Publish worker (push) Failing after 10s
Publish Images And Chart / Publish api (push) Failing after 7s
Publish Images And Chart / Publish executor (push) Failing after 9s
Publish Images And Chart / Publish notifier (push) Failing after 10s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
CI / Clippy (push) Successful in 18m52s
CI / Tests (push) Has been cancelled
2026-03-20 12:37:24 -05:00
57fa3bf7cf added oidc adapter
Some checks failed
CI / Rustfmt (push) Failing after 56s
CI / Clippy (push) Successful in 2m4s
CI / Web Blocking Checks (push) Successful in 50s
CI / Cargo Audit & Deny (push) Successful in 2m2s
CI / Security Blocking Checks (push) Successful in 10s
CI / Security Advisory Checks (push) Successful in 41s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 3s
Publish Images And Chart / Publish init-packs (push) Failing after 13s
Publish Images And Chart / Publish init-user (push) Failing after 11s
CI / Web Advisory Checks (push) Successful in 1m38s
Publish Images And Chart / Publish migrations (push) Failing after 11s
Publish Images And Chart / Publish web (push) Failing after 10s
Publish Images And Chart / Publish worker (push) Failing after 10s
Publish Images And Chart / Publish sensor (push) Failing after 31s
Publish Images And Chart / Publish api (push) Failing after 10s
Publish Images And Chart / Publish notifier (push) Failing after 11s
Publish Images And Chart / Publish executor (push) Failing after 31s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
CI / Tests (push) Successful in 1h34m2s
2026-03-18 16:35:21 -05:00
1d59ff5de4 fixing lints
Some checks failed
CI / Rustfmt (push) Successful in 52s
CI / Clippy (push) Failing after 22m37s
CI / Cargo Audit & Deny (push) Successful in 2m11s
CI / Security Blocking Checks (push) Successful in 52s
CI / Web Advisory Checks (push) Failing after 20m36s
CI / Web Blocking Checks (push) Failing after 38m23s
CI / Security Advisory Checks (push) Failing after 11m48s
CI / Tests (push) Failing after 1h32m20s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 5s
Publish Images And Chart / Publish migrations (push) Failing after 39s
Publish Images And Chart / Publish sensor (push) Failing after 33s
Publish Images And Chart / Publish web (push) Failing after 34s
Publish Images And Chart / Publish init-user (push) Failing after 2m0s
Publish Images And Chart / Publish worker (push) Failing after 33s
Publish Images And Chart / Publish api (push) Failing after 32s
Publish Images And Chart / Publish executor (push) Failing after 34s
Publish Images And Chart / Publish notifier (push) Failing after 37s
Publish Images And Chart / Publish init-packs (push) Failing after 12m15s
Publish Images And Chart / Publish Helm Chart (push) Has been cancelled
2026-03-17 14:51:19 -05:00
f96861d417 properly handling patch updates
Some checks failed
CI / Clippy (push) Failing after 3m6s
CI / Rustfmt (push) Failing after 3m9s
CI / Cargo Audit & Deny (push) Successful in 5m2s
CI / Tests (push) Successful in 8m15s
CI / Security Blocking Checks (push) Successful in 10s
CI / Web Advisory Checks (push) Successful in 1m4s
CI / Web Blocking Checks (push) Failing after 4m52s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 2s
CI / Security Advisory Checks (push) Successful in 1m31s
Publish Images And Chart / Publish init-user (push) Failing after 30s
Publish Images And Chart / Publish init-packs (push) Failing after 1m41s
Publish Images And Chart / Publish migrations (push) Failing after 10s
Publish Images And Chart / Publish web (push) Failing after 11s
Publish Images And Chart / Publish sensor (push) Failing after 32s
Publish Images And Chart / Publish worker (push) Failing after 11s
Publish Images And Chart / Publish executor (push) Failing after 11s
Publish Images And Chart / Publish notifier (push) Failing after 9s
Publish Images And Chart / Publish api (push) Failing after 31s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
2026-03-17 12:17:58 -05:00
643023b6d5 updating dockerignore
Some checks failed
CI / Rustfmt (push) Successful in 19s
CI / Clippy (push) Successful in 1m57s
CI / Cargo Audit & Deny (push) Successful in 31s
CI / Web Blocking Checks (push) Successful in 1m36s
CI / Security Blocking Checks (push) Successful in 11s
CI / Web Advisory Checks (push) Successful in 34s
CI / Security Advisory Checks (push) Successful in 1m32s
CI / Tests (push) Successful in 8m54s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 3s
Publish Images And Chart / Publish Helm Chart (push) Has been cancelled
Publish Images And Chart / Publish init-user (push) Failing after 8s
Publish Images And Chart / Publish sensor (push) Failing after 13s
Publish Images And Chart / Publish init-packs (push) Failing after 19s
Publish Images And Chart / Publish worker (push) Failing after 13s
Publish Images And Chart / Publish api (push) Failing after 14s
Publish Images And Chart / Publish notifier (push) Failing after 13s
Publish Images And Chart / Publish executor (push) Failing after 10s
Publish Images And Chart / Publish web (push) Failing after 11m22s
Publish Images And Chart / Publish migrations (push) Failing after 11m37s
2026-03-16 09:11:51 -05:00
feb070c165 [wip] helmchart
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Clippy (push) Successful in 1m55s
CI / Cargo Audit & Deny (push) Successful in 42s
CI / Web Blocking Checks (push) Successful in 1m24s
CI / Tests (push) Successful in 8m21s
CI / Web Advisory Checks (push) Successful in 1m12s
Publish Images And Chart / Publish init-user (push) Failing after 1m0s
Publish Images And Chart / Publish migrations (push) Failing after 23s
Publish Images And Chart / Publish sensor (push) Failing after 19s
Publish Images And Chart / Publish worker (push) Failing after 17s
Publish Images And Chart / Publish api (push) Failing after 17s
Publish Images And Chart / Publish executor (push) Failing after 17s
Publish Images And Chart / Publish web (push) Failing after 1m33s
Publish Images And Chart / Publish notifier (push) Failing after 54s
CI / Security Blocking Checks (push) Successful in 10s
CI / Security Advisory Checks (push) Successful in 1m25s
Publish Images And Chart / Resolve Publish Metadata (push) Successful in 2s
Publish Images And Chart / Publish init-packs (push) Failing after 27s
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
2026-03-16 08:31:19 -05:00
6a86dd7ca6 [wip]helmchart
Some checks failed
CI / Rustfmt (push) Successful in 1m30s
Publish Images And Chart / Resolve Publish Metadata (push) Failing after 2s
Publish Images And Chart / Publish init-packs (push) Has been skipped
Publish Images And Chart / Publish init-user (push) Has been skipped
Publish Images And Chart / Publish migrations (push) Has been skipped
Publish Images And Chart / Publish sensor (push) Has been skipped
Publish Images And Chart / Publish web (push) Has been skipped
Publish Images And Chart / Publish worker (push) Has been skipped
Publish Images And Chart / Publish api (push) Has been skipped
Publish Images And Chart / Publish executor (push) Has been skipped
Publish Images And Chart / Publish notifier (push) Has been skipped
Publish Images And Chart / Publish Helm Chart (push) Has been skipped
CI / Web Blocking Checks (push) Successful in 1m55s
CI / Security Advisory Checks (push) Failing after 13m14s
CI / Web Advisory Checks (push) Failing after 13m20s
CI / Security Blocking Checks (push) Failing after 13m31s
CI / Cargo Audit & Deny (push) Failing after 14m51s
CI / Tests (push) Failing after 14m53s
CI / Clippy (push) Failing after 14m59s
2026-03-14 18:11:10 -05:00
6307888722 fixing tests
All checks were successful
CI / Rustfmt (push) Successful in 23s
CI / Cargo Audit & Deny (push) Successful in 34s
CI / Web Blocking Checks (push) Successful in 49s
CI / Security Blocking Checks (push) Successful in 9s
CI / Clippy (push) Successful in 2m6s
CI / Web Advisory Checks (push) Successful in 24s
CI / Security Advisory Checks (push) Successful in 37s
CI / Tests (push) Successful in 7m39s
2026-03-11 14:53:15 -05:00
9b0ff4a6d2 linting
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 36s
CI / Web Blocking Checks (push) Successful in 50s
CI / Security Blocking Checks (push) Successful in 9s
CI / Clippy (push) Successful in 2m2s
CI / Web Advisory Checks (push) Successful in 33s
CI / Security Advisory Checks (push) Successful in 38s
CI / Tests (push) Failing after 8m12s
2026-03-11 12:55:24 -05:00
5c0ff6f271 fixing lint issues
Some checks failed
CI / Rustfmt (push) Successful in 23s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Clippy (push) Failing after 1m55s
CI / Web Blocking Checks (push) Successful in 47s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Advisory Checks (push) Successful in 30s
CI / Security Advisory Checks (push) Successful in 31s
CI / Tests (push) Failing after 8m6s
2026-03-11 11:57:06 -05:00
1645ad84ee fixing lint issues
Some checks failed
CI / Cargo Audit & Deny (push) Has been cancelled
CI / Web Blocking Checks (push) Has been cancelled
CI / Security Blocking Checks (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Security Advisory Checks (push) Has been cancelled
CI / Rustfmt (push) Has started running
CI / Clippy (push) Has been cancelled
CI / Tests (push) Has been cancelled
2026-03-11 11:56:57 -05:00
765afc7d76 cargo format
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Web Blocking Checks (push) Failing after 29s
CI / Security Blocking Checks (push) Successful in 9s
CI / Clippy (push) Failing after 1m59s
CI / Web Advisory Checks (push) Successful in 31s
CI / Security Advisory Checks (push) Successful in 36s
CI / Tests (push) Failing after 8m8s
2026-03-11 11:24:50 -05:00
b5d6bb2243 more polish on workflows
Some checks failed
CI / Rustfmt (push) Failing after 25s
CI / Clippy (push) Failing after 2m3s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Web Blocking Checks (push) Failing after 26s
CI / Security Blocking Checks (push) Successful in 8s
CI / Security Advisory Checks (push) Has been cancelled
CI / Web Advisory Checks (push) Has been cancelled
CI / Tests (push) Has been cancelled
2026-03-11 11:21:28 -05:00
a7ed135af2 more edge case resolution on workflow builder
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Failing after 26s
CI / Security Blocking Checks (push) Successful in 8s
CI / Clippy (push) Failing after 2m0s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 37s
CI / Tests (push) Failing after 7m33s
2026-03-11 09:29:17 -05:00
71ea3f34ca cancelling actions works now 2026-03-10 19:53:20 -05:00
5b45b17fa6 [wip] single runtime handling 2026-03-10 09:30:57 -05:00
9e7e35cbe3 [wip] workflow cancellation policy
Some checks failed
CI / Rustfmt (push) Successful in 21s
CI / Cargo Audit & Deny (push) Successful in 32s
CI / Web Blocking Checks (push) Successful in 50s
CI / Security Blocking Checks (push) Successful in 9s
CI / Clippy (push) Failing after 1m58s
CI / Web Advisory Checks (push) Successful in 34s
CI / Security Advisory Checks (push) Successful in 1m26s
CI / Tests (push) Successful in 8m47s
2026-03-09 14:08:01 -05:00
87d830f952 [wip] cli capability parity
Some checks failed
CI / Rustfmt (push) Successful in 23s
CI / Cargo Audit & Deny (push) Successful in 30s
CI / Web Blocking Checks (push) Successful in 48s
CI / Security Blocking Checks (push) Successful in 8s
CI / Clippy (push) Failing after 1m55s
CI / Web Advisory Checks (push) Successful in 35s
CI / Security Advisory Checks (push) Successful in 37s
CI / Tests (push) Successful in 8m5s
2026-03-06 16:58:50 -06:00
48b6ca6bd7 marking integration tests
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Clippy (push) Failing after 1m54s
CI / Cargo Audit & Deny (push) Successful in 33s
CI / Web Blocking Checks (push) Successful in 49s
CI / Security Blocking Checks (push) Successful in 8s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 37s
CI / Tests (push) Failing after 8m46s
2026-03-05 16:33:06 -06:00
4b0000c116 no more cargo advisories ignored 2026-03-05 15:48:35 -06:00
9af3192d1d hopefully resolving cargo audit
Some checks failed
CI / Rustfmt (push) Successful in 19s
CI / Cargo Audit & Deny (push) Successful in 29s
CI / Web Blocking Checks (push) Successful in 48s
CI / Security Blocking Checks (push) Successful in 8s
CI / Clippy (push) Successful in 2m2s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 35s
CI / Tests (push) Failing after 7m54s
2026-03-05 14:30:29 -06:00
649648896e Update license from MIT to Apache 2.0
Some checks failed
CI / Rustfmt (push) Successful in 23s
CI / Clippy (push) Successful in 2m0s
CI / Cargo Audit & Deny (push) Failing after 32s
CI / Web Blocking Checks (push) Successful in 50s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Successful in 36s
CI / Tests (push) Failing after 7m57s
2026-03-05 09:48:42 -06:00
a00f7c80fb audit stuff 2026-03-05 09:27:59 -06:00
c61fe26713 eslint and build
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Failing after 32s
CI / Web Blocking Checks (push) Successful in 47s
CI / Security Blocking Checks (push) Successful in 9s
CI / Clippy (push) Successful in 2m9s
CI / Web Advisory Checks (push) Successful in 37s
CI / Security Advisory Checks (push) Successful in 34s
CI / Tests (push) Failing after 8m37s
2026-03-05 08:18:07 -06:00
179180d604 eslint
Some checks failed
CI / Rustfmt (push) Successful in 22s
CI / Cargo Audit & Deny (push) Failing after 1m2s
CI / Web Blocking Checks (push) Failing after 35s
CI / Security Blocking Checks (push) Successful in 8s
CI / Clippy (push) Successful in 2m43s
CI / Web Advisory Checks (push) Successful in 35s
CI / Security Advisory Checks (push) Successful in 37s
CI / Tests (push) Failing after 9m28s
2026-03-05 06:52:55 -06:00
f54eef3a14 formatting
Some checks failed
CI / Security Blocking Checks (push) Successful in 34s
CI / Web Blocking Checks (push) Failing after 1m44s
CI / Rust Blocking Checks (push) Failing after 9m57s
CI / Web Advisory Checks (push) Successful in 1m14s
CI / Security Advisory Checks (push) Successful in 1m28s
2026-03-04 23:46:31 -06:00
13749409cd making linters happy
Some checks failed
CI / Rust Blocking Checks (push) Failing after 22s
CI / Web Blocking Checks (push) Failing after 26s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Advisory Checks (push) Successful in 32s
CI / Security Advisory Checks (push) Has been cancelled
2026-03-04 23:44:45 -06:00
6a5a3c2b78 trying again with ci pipeline
Some checks failed
CI / Rust Blocking Checks (push) Failing after 1m42s
CI / Web Blocking Checks (push) Failing after 29s
CI / Security Blocking Checks (push) Successful in 9s
CI / Web Advisory Checks (push) Successful in 36s
CI / Security Advisory Checks (push) Successful in 1m28s
2026-03-04 22:44:37 -06:00
95765f50a8 formatting 2026-03-04 22:42:23 -06:00
67a1c02543 trying to run a gitea workflow
Some checks failed
CI / Security Advisory Checks (push) Waiting to run
CI / Rust Blocking Checks (push) Failing after 47s
CI / Web Blocking Checks (push) Failing after 46s
CI / Security Blocking Checks (push) Failing after 8s
CI / Web Advisory Checks (push) Failing after 9s
2026-03-04 22:36:16 -06:00
7438f92502 working on workflows 2026-03-04 22:02:34 -06:00
b54aa3ec26 artifact management 2026-03-03 14:16:23 -06:00
8299e5efcb artifacts! 2026-03-03 13:42:41 -06:00
5da940639a WIP 2026-03-02 19:27:52 -06:00
42a9f1d31a branding v1 2026-03-02 12:03:20 -06:00
bbe94d75f8 proper sql filtering 2026-03-01 20:43:48 -06:00
6b9d7d6cf2 still working on workflows. 2026-02-27 16:57:10 -06:00
daeff10f18 [WIP] Workflows 2026-02-27 16:34:17 -06:00
570c52e623 correctly processing enforcements 2026-02-26 15:35:39 -06:00
b43495b26d change capture 2026-02-26 14:34:02 -06:00
7ee3604eb1 [WIP] change capture 2026-02-25 23:40:50 -06:00
495b81236a node running, runtime version awareness 2026-02-25 23:24:07 -06:00
e89b5991ec concurrent action execution 2026-02-25 14:16:56 -06:00
adb9f30464 inputs for workflows 2026-02-25 08:34:38 -06:00
91dfc52a1f adding chart meta to supported backend data 2026-02-24 15:57:55 -06:00
80c8eaaf22 more workflow editor polish 2026-02-24 12:30:33 -06:00
7d942f5dca splines! 2026-02-24 09:28:39 -06:00
4c81ba1de8 workflow builder, first edition 2026-02-23 22:51:49 -06:00
53a3fbb6b1 [WIP] workflow builder 2026-02-23 20:45:10 -06:00
d629da32fa sql migration rollup 2026-02-20 14:25:43 -06:00
a84c07082c sensors using keys 2026-02-20 14:11:06 -06:00
667 changed files with 110501 additions and 15098 deletions

0
.codex_write_test Normal file

.dockerignore

@@ -50,8 +50,8 @@ web/node_modules/
web/dist/
web/.vite/
# SQLx offline data (generated at build time)
#.sqlx/
# SQLx offline data (generated when using `cargo sqlx prepare`)
# .sqlx/
# Configuration files (copied selectively)
config.development.yaml
@@ -61,6 +61,7 @@ config.example.yaml
# Scripts (not needed in runtime)
scripts/
!scripts/load_core_pack.py
# Cargo lock (workspace handles this)
# Uncomment if you want deterministic builds:

298
.gitea/workflows/ci.yml Normal file

@@ -0,0 +1,298 @@
name: CI

on:
  pull_request:
  push:
    branches:
      - main
      - master

env:
  CARGO_TERM_COLOR: always
  RUST_MIN_STACK: 67108864
  CARGO_INCREMENTAL: 0
  CARGO_NET_RETRY: 10
  RUSTUP_MAX_RETRIES: 10
  # Gitea Actions runner tool cache. Actions like setup-node/setup-python can reuse this.
  RUNNER_TOOL_CACHE: /toolcache

jobs:
  rust-fmt:
    name: Rustfmt
    runs-on: build-amd64
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cache Rust toolchain
        uses: actions/cache@v4
        with:
          path: |
            ~/.rustup/toolchains
            ~/.rustup/update-hashes
          key: rustup-rustfmt-${{ runner.os }}-stable-v1
          restore-keys: |
            rustup-${{ runner.os }}-stable-v1
            rustup-
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
        with:
          components: rustfmt
      - name: Rustfmt
        run: cargo fmt --all -- --check

  rust-clippy:
    name: Clippy
    runs-on: build-amd64
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cache Rust toolchain
        uses: actions/cache@v4
        with:
          path: |
            ~/.rustup/toolchains
            ~/.rustup/update-hashes
          key: rustup-clippy-${{ runner.os }}-stable-v1
          restore-keys: |
            rustup-${{ runner.os }}-stable-v1
            rustup-
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
        with:
          components: clippy
      - name: Cache Cargo registry + index
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry/index
            ~/.cargo/registry/cache
            ~/.cargo/git/db
          key: cargo-registry-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            cargo-registry-
      - name: Cache Cargo build artifacts
        uses: actions/cache@v4
        with:
          path: target
          key: cargo-clippy-${{ hashFiles('**/Cargo.lock') }}-${{ hashFiles('**/*.rs', '**/Cargo.toml') }}
          restore-keys: |
            cargo-clippy-${{ hashFiles('**/Cargo.lock') }}-
            cargo-clippy-
      - name: Clippy
        run: cargo clippy --workspace --all-targets --all-features -- -D warnings

  rust-test:
    name: Tests
    runs-on: build-amd64
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cache Rust toolchain
        uses: actions/cache@v4
        with:
          path: |
            ~/.rustup/toolchains
            ~/.rustup/update-hashes
          key: rustup-test-${{ runner.os }}-stable-v1
          restore-keys: |
            rustup-${{ runner.os }}-stable-v1
            rustup-
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
      - name: Cache Cargo registry + index
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry/index
            ~/.cargo/registry/cache
            ~/.cargo/git/db
          key: cargo-registry-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            cargo-registry-
      - name: Cache Cargo build artifacts
        uses: actions/cache@v4
        with:
          path: target
          key: cargo-test-${{ hashFiles('**/Cargo.lock') }}-${{ hashFiles('**/*.rs', '**/Cargo.toml') }}
          restore-keys: |
            cargo-test-${{ hashFiles('**/Cargo.lock') }}-
            cargo-test-
      - name: Tests
        run: cargo test --workspace --all-features

  rust-audit:
    name: Cargo Audit & Deny
    runs-on: build-amd64
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cache Rust toolchain
        uses: actions/cache@v4
        with:
          path: |
            ~/.rustup/toolchains
            ~/.rustup/update-hashes
          key: rustup-audit-${{ runner.os }}-stable-v1
          restore-keys: |
            rustup-${{ runner.os }}-stable-v1
            rustup-
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
      - name: Cache Cargo registry + index
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry/index
            ~/.cargo/registry/cache
            ~/.cargo/git/db
          key: cargo-registry-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            cargo-registry-
      - name: Cache cargo-binstall and installed binaries
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/cargo-binstall
            ~/.cargo/bin/cargo-deny
          key: cargo-security-tools-v2
      - name: Install cargo-binstall
        run: |
          if ! command -v cargo-binstall &> /dev/null; then
            curl -L --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/cargo-bins/cargo-binstall/main/install-from-binstall-release.sh | bash
          fi
      - name: Install security tools (pre-built binaries)
        run: |
          command -v cargo-deny &> /dev/null || cargo binstall --no-confirm --locked cargo-deny
      - name: Cargo Deny
        run: cargo deny check

  web-blocking:
    name: Web Blocking Checks
    runs-on: build-amd64
    defaults:
      run:
        working-directory: web
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "22"
          cache: "npm"
          cache-dependency-path: web/package-lock.json
      - name: Install dependencies
        run: npm ci
      - name: ESLint
        run: npm run lint
      - name: TypeScript
        run: npm run typecheck
      - name: Build
        run: npm run build

  security-blocking:
    name: Security Blocking Checks
    runs-on: build-amd64
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install Gitleaks
        run: |
          mkdir -p "$HOME/bin"
          GITLEAKS_VERSION="8.24.2"
          ARCH="$(uname -m)"
          case "$ARCH" in
            x86_64) ARCH="x64" ;;
            aarch64|arm64) ARCH="arm64" ;;
            *)
              echo "Unsupported architecture: $ARCH"
              exit 1
              ;;
          esac
          curl -sSfL \
            -o /tmp/gitleaks.tar.gz \
            "https://github.com/gitleaks/gitleaks/releases/download/v${GITLEAKS_VERSION}/gitleaks_${GITLEAKS_VERSION}_linux_${ARCH}.tar.gz"
          tar -xzf /tmp/gitleaks.tar.gz -C "$HOME/bin" gitleaks
          chmod +x "$HOME/bin/gitleaks"
      - name: Gitleaks
        run: |
          "$HOME/bin/gitleaks" git \
            --report-format sarif \
            --report-path gitleaks.sarif \
            --config .gitleaks.toml

  web-advisory:
    name: Web Advisory Checks
    runs-on: build-amd64
    continue-on-error: true
    defaults:
      run:
        working-directory: web
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "22"
          cache: "npm"
          cache-dependency-path: web/package-lock.json
      - name: Install dependencies
        run: npm ci
      - name: Knip
        run: npm run knip
        continue-on-error: true
      - name: NPM Audit (prod deps)
        run: npm audit --omit=dev
        continue-on-error: true

  security-advisory:
    name: Security Advisory Checks
    runs-on: build-amd64
    continue-on-error: true
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install Semgrep
        run: pip install semgrep
      - name: Semgrep
        run: semgrep scan --config p/default --error
        continue-on-error: true

1062
.gitea/workflows/publish.yml Normal file

File diff suppressed because it is too large

15
.githooks/pre-commit Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
set -euo pipefail
repo_root="$(git rev-parse --show-toplevel)"
cd "$repo_root"
echo "Formatting Rust code..."
cargo fmt --all
echo "Refreshing staged Rust files..."
git add --all '*.rs'
echo "Running pre-commit checks..."
make pre-commit

10
.gitignore vendored

@@ -1,6 +1,5 @@
# Rust
target/
Cargo.lock
**/*.rs.bk
*.pdb
@@ -12,6 +11,7 @@ Cargo.lock
# Configuration files (keep *.example.yaml)
config.yaml
config.*.yaml
!docker/distributable/config.docker.yaml
!config.example.yaml
!config.development.yaml
!config.test.yaml
@@ -36,6 +36,7 @@ logs/
# Build artifacts
dist/
build/
artifacts/
# Testing
coverage/
@@ -70,8 +71,6 @@ ENV/
# Node (if used for tooling)
node_modules/
package-lock.json
yarn.lock
tests/pids/*
# Docker
@@ -81,3 +80,8 @@ docker-compose.override.yml
*.pid
packs.examples/
packs.external/
codex/
# Compiled pack binaries (built via Docker or build-pack-binaries.sh)
packs/core/sensors/attune-core-timer-sensor

16
.gitleaks.toml Normal file

@@ -0,0 +1,16 @@
title = "attune-gitleaks-config"
[allowlist]
description = "Known development credentials and examples"
regexes = [
'''test@attune\.local''',
'''TestPass123!''',
'''JWT_SECRET''',
'''ENCRYPTION_KEY''',
]
paths = [
'''^docs/''',
'''^reference/''',
'''^web/openapi\.json$''',
'''^work-summary/''',
]

6
.gitmodules vendored Normal file

@@ -0,0 +1,6 @@
[submodule "packs.external/python_example"]
path = packs.external/python_example
url = https://git.rdrx.app/attune-packs/python_example.git
[submodule "packs.external/nodejs_example"]
path = packs.external/nodejs_example
url = https://git.rdrx.app/attune-packs/nodejs_example.git

6
.semgrepignore Normal file

@@ -0,0 +1,6 @@
target/
web/dist/
web/node_modules/
web/src/api/
packs.dev/
packs.external/

372
AGENTS.md

File diff suppressed because one or more lines are too long


@@ -1,430 +0,0 @@
# Attune Project Rules
## Project Overview
Attune is an **event-driven automation and orchestration platform** built in Rust, similar to StackStorm. It enables building complex workflows triggered by events with multi-tenancy, RBAC, and human-in-the-loop capabilities.
## Development Status: Pre-Production
**This project is under active development with no users, deployments, or stable releases.**
### Breaking Changes Policy
- **Breaking changes are explicitly allowed and encouraged** when they improve the architecture, API design, or developer experience
- **No backward compatibility required** - there are no existing versions to support
- **Database migrations can be modified or consolidated** - no production data exists
- **API contracts can change freely** - no external integrations depend on them; only internal interfaces with other services and the web UI must be maintained.
- **Configuration formats can be redesigned** - no existing config files need migration
- **Service interfaces can be refactored** - no live deployments to worry about
When this project reaches v1.0 or gets its first production deployment, this section should be removed and replaced with appropriate stability guarantees and versioning policies.
## Languages & Core Technologies
- **Primary Language**: Rust 2021 edition
- **Database**: PostgreSQL 14+ (primary data store + LISTEN/NOTIFY pub/sub)
- **Message Queue**: RabbitMQ 3.12+ (via lapin)
- **Cache**: Redis 7.0+ (optional)
- **Web UI**: TypeScript + React 19 + Vite
- **Async Runtime**: Tokio
- **Web Framework**: Axum 0.8
- **ORM**: SQLx (compile-time query checking)
## Project Structure (Cargo Workspace)
```
attune/
├── Cargo.toml # Workspace root
├── config.{development,test}.yaml # Environment configs
├── Makefile # Common dev tasks
├── crates/ # Rust services
│ ├── common/ # Shared library (models, db, repos, mq, config, error)
│ ├── api/ # REST API service (8080)
│ ├── executor/ # Execution orchestration service
│ ├── worker/ # Action execution service (multi-runtime)
│ ├── sensor/ # Event monitoring service
│ ├── notifier/ # Real-time notification service
│ └── cli/ # Command-line interface
├── migrations/ # SQLx database migrations (17 tables)
├── web/ # React web UI (Vite + TypeScript)
├── packs/ # Pack bundles
│ └── core/ # Core pack (timers, HTTP, etc.)
├── docs/ # Technical documentation
├── scripts/ # Helper scripts (DB setup, testing)
└── tests/ # Integration tests
```
## Service Architecture (Distributed Microservices)
1. **attune-api**: REST API gateway, JWT auth, all client interactions
2. **attune-executor**: Manages execution lifecycle, scheduling, policy enforcement
3. **attune-worker**: Executes actions in multiple runtimes (Python/Node.js/containers)
4. **attune-sensor**: Monitors triggers, generates events
5. **attune-notifier**: Real-time notifications via PostgreSQL LISTEN/NOTIFY + WebSocket
**Communication**: Services communicate via RabbitMQ for async operations
## Docker Compose Orchestration
**All Attune services run via Docker Compose.**
- **Compose file**: `docker-compose.yaml` (root directory)
- **Configuration**: `config.docker.yaml` (Docker-specific settings)
- **Default user**: `test@attune.local` / `TestPass123!` (auto-created)
**Services**:
- **Infrastructure**: postgres, rabbitmq, redis
- **Init** (run-once): migrations, init-user, init-packs
- **Application**: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000)
**Commands**:
```bash
docker compose up -d # Start all services
docker compose down # Stop all services
docker compose logs -f <svc> # View logs
```
**Key environment overrides**: `JWT_SECRET`, `ENCRYPTION_KEY` (required for production)
## Domain Model & Event Flow
**Critical Event Flow**:
```
Sensor → Trigger fires → Event created → Rule evaluates →
Enforcement created → Execution scheduled → Worker executes Action
```
**Key Entities** (all in `public` schema, IDs are `i64`):
- **Pack**: Bundle of automation components (actions, sensors, rules, triggers)
- **Trigger**: Event type definition (e.g., "webhook_received")
- **Sensor**: Monitors for trigger conditions, creates events
- **Event**: Instance of a trigger firing with payload
- **Action**: Executable task with parameters
- **Rule**: Links triggers to actions with conditional logic
- **Enforcement**: Represents a rule activation
- **Execution**: Single action run; supports parent-child relationships for workflows
- **Workflow Tasks**: Workflow-specific metadata stored in `execution.workflow_task` JSONB field
- **Inquiry**: Human-in-the-loop async interaction (approvals, inputs)
- **Identity**: User/service account with RBAC permissions
- **Key**: Encrypted secrets storage
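A rough sketch tying these conventions together (field names are illustrative, not the actual schema; the real model lives in `crates/common/src/models.rs`):
```rust
// Illustrative only: i64 IDs, DB-managed timestamps, and JSONB fields
// carried as serde_json::Value.
use chrono::{DateTime, Utc};
use sqlx::FromRow;

#[derive(Debug, FromRow)]
pub struct Execution {
    pub id: i64,                                  // BIGSERIAL, never i32 or uuid
    pub parent_id: Option<i64>,                   // parent-child links for workflows
    pub action_id: i64,
    pub parameters: serde_json::Value,            // flexible JSONB parameters
    pub workflow_task: Option<serde_json::Value>, // workflow metadata (JSONB)
    pub created: DateTime<Utc>,                   // auto-managed by DB trigger
    pub updated: DateTime<Utc>,
}
```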
## Key Tools & Libraries
### Shared Dependencies (workspace-level)
- **Async**: tokio, async-trait, futures
- **Web**: axum, tower, tower-http
- **Database**: sqlx (with postgres, json, chrono, uuid features)
- **Serialization**: serde, serde_json, serde_yaml_ng
- **Logging**: tracing, tracing-subscriber
- **Error Handling**: anyhow, thiserror
- **Config**: config crate (YAML + env vars)
- **Validation**: validator
- **Auth**: jsonwebtoken, argon2
- **CLI**: clap
- **OpenAPI**: utoipa, utoipa-swagger-ui
- **Message Queue**: lapin (RabbitMQ)
- **HTTP Client**: reqwest
- **Testing**: mockall, tempfile, serial_test
### Web UI Dependencies
- **Framework**: React 19 + react-router-dom
- **State**: Zustand, @tanstack/react-query
- **HTTP**: axios (with generated OpenAPI client)
- **Styling**: Tailwind CSS
- **Icons**: lucide-react
- **Build**: Vite, TypeScript
## Configuration System
- **Primary**: YAML config files (`config.yaml`, `config.{env}.yaml`)
- **Overrides**: Environment variables with prefix `ATTUNE__` and separator `__`
- Example: `ATTUNE__DATABASE__URL`, `ATTUNE__SERVER__PORT`
- **Loading Priority**: Base config → env-specific config → env vars
- **Required for Production**: `JWT_SECRET`, `ENCRYPTION_KEY` (32+ chars)
- **Location**: Root directory or `ATTUNE_CONFIG` env var path
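A minimal sketch of this loading order using the `config` crate (the settings struct and file names here are illustrative):
```rust
// Loading priority: base config -> env-specific config -> ATTUNE__* env vars.
use config::{Config, Environment, File};
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Settings {
    database: DatabaseSettings,
}

#[derive(Debug, Deserialize)]
struct DatabaseSettings {
    url: String,
}

fn load(env_name: &str) -> Result<Settings, config::ConfigError> {
    Config::builder()
        .add_source(File::with_name("config").required(false)) // base config
        .add_source(File::with_name(&format!("config.{env_name}")).required(false)) // env-specific
        .add_source(
            // ATTUNE__DATABASE__URL overrides database.url
            Environment::with_prefix("ATTUNE")
                .prefix_separator("__")
                .separator("__"),
        )
        .build()?
        .try_deserialize()
}
```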
## Authentication & Security
- **Auth Type**: JWT (access tokens: 1h, refresh tokens: 7d)
- **Password Hashing**: Argon2id
- **Protected Routes**: Use `RequireAuth(user)` extractor in Axum
- **Secrets Storage**: AES-GCM encrypted in `key` table with scoped ownership
- **User Info**: Stored in `identity` table
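Usage sketch for a protected route; the `RequireAuth` extractor belongs to the API crate, and its import path and user fields are assumptions here:
```rust
// Hypothetical protected handler; RequireAuth rejects requests without a
// valid JWT access token before the handler body runs.
use axum::{routing::get, Json, Router};
use serde_json::json;

use crate::auth::RequireAuth; // assumed import path

async fn me(RequireAuth(user): RequireAuth) -> Json<serde_json::Value> {
    // `user` was decoded from a validated access token (1h lifetime).
    Json(json!({ "id": user.id, "email": user.email }))
}

fn protected_routes() -> Router {
    Router::new().route("/api/v1/me", get(me))
}
```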
## Code Conventions & Patterns
### General
- **Error Handling**: Use `attune_common::error::Error` and `Result<T>` type alias
- **Async Everywhere**: All I/O operations use async/await with Tokio
- **Module Structure**: Public API exposed via `mod.rs` with `pub use` re-exports
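Sketch of the shared error convention (variants are illustrative; see `crates/common/src/error.rs` for the real set):
```rust
use thiserror::Error;

#[derive(Debug, Error)]
pub enum Error {
    #[error("database error: {0}")]
    Database(#[from] sqlx::Error), // `?` on SQLx calls converts automatically
    #[error("not found: {0}")]
    NotFound(String),
}

// Workspace-wide alias used throughout service code.
pub type Result<T> = std::result::Result<T, Error>;
```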
### Database Layer
- **Schema**: All tables use unqualified names; schema determined by PostgreSQL `search_path`
- **Production**: Always uses `public` schema (configured explicitly in `config.production.yaml`)
- **Tests**: Each test uses isolated schema (e.g., `test_a1b2c3d4`) for true parallel execution
- **Schema Resolution**: PostgreSQL `search_path` mechanism, NO hardcoded schema prefixes in queries
- **Models**: Defined in `common/src/models.rs` with `#[derive(FromRow)]` for SQLx
- **Repositories**: One per entity in `common/src/repositories/`, provides CRUD + specialized queries
- **Pattern**: Services MUST interact with DB only through repository layer (no direct queries)
- **Transactions**: Use SQLx transactions for multi-table operations
- **IDs**: All IDs are `i64` (BIGSERIAL in PostgreSQL)
- **Timestamps**: `created`/`updated` columns auto-managed by DB triggers
- **JSON Fields**: Use `serde_json::Value` for flexible attributes/parameters, including `execution.workflow_task` JSONB
- **Enums**: PostgreSQL enum types mapped with `#[sqlx(type_name = "...")]`
- **Workflow Tasks**: Stored as JSONB in `execution.workflow_task` (consolidated from separate table 2026-01-27)
**Table Count**: 17 tables total in the schema
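Sketch of the repository pattern and enum mapping described above (table, column, and type names are assumed; it reuses the `Execution` sketch from the domain model section):
```rust
use sqlx::PgPool;

// PostgreSQL enum mapped via #[sqlx(type_name = "...")]; variant casing assumed.
#[derive(Debug, sqlx::Type)]
#[sqlx(type_name = "execution_status", rename_all = "lowercase")]
pub enum ExecutionStatus {
    Pending,
    Running,
    Succeeded,
    Failed,
}

pub struct ExecutionRepository {
    pool: PgPool,
}

impl ExecutionRepository {
    pub async fn find_by_id(&self, id: i64) -> sqlx::Result<Option<Execution>> {
        // Unqualified table name: the schema comes from search_path.
        sqlx::query_as::<_, Execution>("SELECT * FROM execution WHERE id = $1")
            .bind(id)
            .fetch_optional(&self.pool)
            .await
    }
}
```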
### Pack File Loading
- **Pack Base Directory**: Configured via `packs_base_dir` in config (defaults to `/opt/attune/packs`, development uses `./packs`)
- **Action Script Resolution**: Worker constructs file paths as `{packs_base_dir}/{pack_ref}/actions/{entrypoint}`
- **Runtime Selection**: Determined by action's runtime field (e.g., "Shell", "Python") - compared case-insensitively
- **Parameter Passing**: Shell actions receive parameters as environment variables with `ATTUNE_ACTION_` prefix
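A sketch of how a worker could launch a shell action under these rules (the helper name, its signature, and the uppercase key casing are assumptions):
```rust
use std::path::Path;
use std::process::{Command, ExitStatus};

fn run_shell_action(
    packs_base_dir: &Path,
    pack_ref: &str,
    entrypoint: &str,
    params: &[(String, String)],
) -> std::io::Result<ExitStatus> {
    // {packs_base_dir}/{pack_ref}/actions/{entrypoint}
    let script = packs_base_dir.join(pack_ref).join("actions").join(entrypoint);
    let mut cmd = Command::new(&script);
    for (key, value) in params {
        // e.g. parameter "timeout" becomes ATTUNE_ACTION_TIMEOUT (casing assumed)
        cmd.env(format!("ATTUNE_ACTION_{}", key.to_uppercase()), value);
    }
    cmd.status()
}
```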
### API Service (`crates/api`)
- **Structure**: `routes/` (endpoints) + `dto/` (request/response) + `auth/` + `middleware/`
- **Responses**: Standardized `ApiResponse<T>` wrapper with `data` field
- **Protected Routes**: Apply `RequireAuth` middleware
- **OpenAPI**: Documented with `utoipa` attributes (`#[utoipa::path]`)
- **Error Handling**: Custom `ApiError` type with proper HTTP status codes
- **Available at**: `http://localhost:8080` (dev), `/api-spec/openapi.json` for spec
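Sketch of the response wrapper plus a `utoipa`-annotated handler (derives and the endpoint are illustrative):
```rust
use serde::Serialize;

#[derive(Serialize)]
pub struct ApiResponse<T> {
    pub data: T, // standardized envelope: payload always under `data`
}

#[utoipa::path(
    get,
    path = "/api/v1/actions",
    responses((status = 200, description = "List action refs"))
)]
async fn list_actions() -> axum::Json<ApiResponse<Vec<String>>> {
    axum::Json(ApiResponse {
        data: vec!["core.local".to_string()],
    })
}
```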
### Common Library (`crates/common`)
- **Modules**: `models`, `repositories`, `db`, `config`, `error`, `mq`, `crypto`, `utils`, `workflow`, `pack_registry`
- **Exports**: Commonly used types re-exported from `lib.rs`
- **Repository Layer**: All DB access goes through repositories in `repositories/`
- **Message Queue**: Abstractions in `mq/` for RabbitMQ communication
### Web UI (`web/`)
- **Generated Client**: OpenAPI client auto-generated from API spec
- Run: `npm run generate:api` (requires API running on :8080)
- Location: `src/api/`
- **State Management**: Zustand for global state, TanStack Query for server state
- **Styling**: Tailwind utility classes
- **Dev Server**: `npm run dev` (typically :3000 or :5173)
- **Build**: `npm run build`
## Development Workflow
### Common Commands (Makefile)
```bash
make build # Build all services
make build-release # Release build
make test # Run all tests
make test-integration # Run integration tests
make fmt # Format code
make clippy # Run linter
make lint # fmt + clippy
make run-api # Run API service
make run-executor # Run executor service
make run-worker # Run worker service
make run-sensor # Run sensor service
make run-notifier # Run notifier service
make db-create # Create database
make db-migrate # Run migrations
make db-reset # Drop & recreate DB
```
### Database Operations
- **Migrations**: Located in `migrations/`, applied via `sqlx migrate run`
- **Test DB**: Separate `attune_test` database, setup with `make db-test-setup`
- **Schema**: All tables in `public` schema with auto-updating timestamps
- **Core Pack**: Load with `./scripts/load-core-pack.sh` after DB setup
### Testing
- **Architecture**: Schema-per-test isolation (each test gets unique `test_<uuid>` schema)
- **Parallel Execution**: Tests run concurrently without `#[serial]` constraints (4-8x faster)
- **Unit Tests**: In module files alongside code
- **Integration Tests**: In `tests/` directory
- **Test DB Required**: Use `make db-test-setup` before integration tests
- **Run**: `cargo test` or `make test` (parallel by default)
- **Verbose**: `cargo test -- --nocapture --test-threads=1`
- **Cleanup**: Schemas auto-dropped on test completion; orphaned schemas cleaned via `./scripts/cleanup-test-schemas.sh`
- **SQLx Offline Mode**: Enabled for compile-time query checking without live DB; regenerate with `cargo sqlx prepare`
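Sketch of the schema-per-test mechanics (helper names are illustrative; the real harness also runs migrations and scopes `search_path` per connection):
```rust
use sqlx::{Executor, PgPool};
use uuid::Uuid;

// Create a unique schema such as test_a1b2c3... and point search_path at it.
async fn create_test_schema(pool: &PgPool) -> sqlx::Result<String> {
    let schema = format!("test_{}", Uuid::new_v4().simple());
    pool.execute(format!("CREATE SCHEMA {schema}").as_str()).await?;
    pool.execute(format!("SET search_path TO {schema}").as_str()).await?;
    Ok(schema)
}

// Dropped on completion; orphans are handled by cleanup-test-schemas.sh.
async fn drop_test_schema(pool: &PgPool, schema: &str) -> sqlx::Result<()> {
    pool.execute(format!("DROP SCHEMA {schema} CASCADE").as_str()).await?;
    Ok(())
}
```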
### CLI Tool
```bash
cargo install --path crates/cli # Install CLI
attune auth login # Login
attune pack list # List packs
attune action execute <ref> --param key=value
attune execution list # Monitor executions
```
## Test Failure Protocol
**Proactively investigate and fix test failures when discovered, even if unrelated to the current task.**
### Guidelines:
- **ALWAYS report test failures** to the user with relevant error output
- **ALWAYS run tests** after making changes: `make test` or `cargo test`
- **DO fix immediately** if the cause is obvious and fixable in 1-2 attempts
- **DO ask the user** if the failure is complex, requires architectural changes, or you're unsure of the cause
- **NEVER silently ignore** test failures or skip tests without approval
- **Gather context**: Run with `cargo test -- --nocapture --test-threads=1` for details
### Priority:
- **Critical** (build/compile failures): Fix immediately
- **Related** (affects current work): Fix before proceeding
- **Unrelated**: Report and ask if you should fix now or defer
When reporting, ask: "Should I fix this first or continue with [original task]?"
## Code Quality: Zero Warnings Policy
**Maintain zero compiler warnings across the workspace.** Clean builds ensure new issues are immediately visible.
### Workflow
- **Check after changes:** `cargo check --all-targets --workspace`
- **Before completing work:** Fix or document any warnings introduced
- **End of session:** Verify zero warnings before finishing
### Handling Warnings
- **Fix first:** Remove dead code, unused imports, unnecessary variables
- **Prefix `_`:** For intentionally unused variables that document intent
- **Use `#[allow(dead_code)]`:** For API methods intended for future use (add doc comment explaining why)
- **Never ignore blindly:** Every suppression needs a clear rationale
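The two sanctioned suppression patterns, sketched with placeholder signatures:
```rust
fn handle_event(_payload: serde_json::Value) {
    // `_` prefix documents an intentionally unused parameter.
}

/// Completes the repository API surface; reserved for upcoming workflow work.
#[allow(dead_code)]
fn find_by_parent(_parent_id: i64) {
    // The doc comment above records why the suppression exists.
}
```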
### Conservative Approach
- Preserve methods that complete a logical API surface
- Keep test helpers that are part of shared infrastructure
- When uncertain about removal, ask the user
### Red Flags
- ❌ Introducing new warnings
- ❌ Blanket `#[allow(warnings)]` without specific justification
- ❌ Accumulating warnings over time
## File Naming & Location Conventions
### When Adding Features:
- **New API Endpoint**:
- Route handler in `crates/api/src/routes/<domain>.rs`
- DTO in `crates/api/src/dto/<domain>.rs`
- Update `routes/mod.rs` and main router
- **New Domain Model**:
- Add to `crates/common/src/models.rs`
- Create migration in `migrations/YYYYMMDDHHMMSS_description.sql`
- Add repository in `crates/common/src/repositories/<entity>.rs`
- **New Service**: Add to `crates/` and update workspace `Cargo.toml` members
- **Configuration**: Update `crates/common/src/config.rs` with serde defaults
- **Documentation**: Add to `docs/` directory
### Important Files
- `crates/common/src/models.rs` - All domain models
- `crates/common/src/error.rs` - Error types
- `crates/common/src/config.rs` - Configuration structure
- `crates/api/src/routes/mod.rs` - API routing
- `config.development.yaml` - Dev configuration
- `Cargo.toml` - Workspace dependencies
- `Makefile` - Development commands
## Common Pitfalls to Avoid
1. **NEVER** bypass repositories - always use the repository layer for DB access
2. **NEVER** forget `RequireAuth` middleware on protected endpoints
3. **NEVER** hardcode service URLs - use configuration
4. **NEVER** commit secrets in config files (use env vars in production)
5. **NEVER** hardcode schema prefixes in SQL queries - rely on PostgreSQL `search_path` mechanism
6. **ALWAYS** use PostgreSQL enum type mappings for custom enums
7. **ALWAYS** use transactions for multi-table operations
8. **ALWAYS** start with `attune/` or correct crate name when specifying file paths
9. **ALWAYS** convert runtime names to lowercase for comparison (database may store capitalized)
10. **REMEMBER** IDs are `i64`, not `i32` or `uuid`
11. **REMEMBER** schema is determined by `search_path`, not hardcoded in queries (production uses `public`; tests use isolated per-test schemas)
12. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
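Pitfall 9 sketched (function name is illustrative):
```rust
fn runtime_matches(stored: &str, expected: &str) -> bool {
    // The database may store "Shell" or "Python"; compare case-insensitively.
    stored.eq_ignore_ascii_case(expected)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn capitalized_runtime_still_matches() {
        assert!(runtime_matches("Python", "python"));
    }
}
```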
## Deployment
- **Target**: Distributed deployment with separate service instances
- **Docker**: Dockerfiles for each service (planned in `docker/` dir)
- **Config**: Use environment variables for secrets in production
- **Database**: PostgreSQL 14+ with connection pooling
- **Message Queue**: RabbitMQ required for service communication
- **Web UI**: Static files served separately or via API service
## Current Development Status
- ✅ **Complete**: Database migrations (17 tables), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic), Executor service (core functionality), Worker service (shell/Python execution)
- 🔄 **In Progress**: Sensor service, advanced workflow features, Python runtime dependency management
- 📋 **Planned**: Notifier service, execution policies, monitoring, pack registry system
## Quick Reference
### Start Development Environment
```bash
# Start PostgreSQL and RabbitMQ
# Load core pack: ./scripts/load-core-pack.sh
# Start API: make run-api
# Start Web UI: cd web && npm run dev
```
### File Path Examples
- Models: `attune/crates/common/src/models.rs`
- API routes: `attune/crates/api/src/routes/actions.rs`
- Repositories: `attune/crates/common/src/repositories/execution.rs`
- Migrations: `attune/migrations/*.sql`
- Web UI: `attune/web/src/`
- Config: `attune/config.development.yaml`
### Documentation Locations
- API docs: `attune/docs/api-*.md`
- Configuration: `attune/docs/configuration.md`
- Architecture: `attune/docs/*-architecture.md`, `attune/docs/*-service.md`
- Testing: `attune/docs/testing-*.md`, `attune/docs/running-tests.md`, `attune/docs/schema-per-test.md`
- AI Agent Work Summaries: `attune/work-summary/*.md`
- Deployment: `attune/docs/production-deployment.md`
- DO NOT create additional documentation files in the root of the project. All new documentation describing how to use the system should be placed in the `attune/docs` directory, and documentation describing the work performed should be placed in the `attune/work-summary` directory.
## Work Summary & Reporting
**Avoid redundant summarization - summarize changes once at completion, not continuously.**
### Guidelines:
- **Report progress** during work: brief status updates, blockers, questions
- **Summarize once** at completion: consolidated overview of all changes made
- **Work summaries**: Write to `attune/work-summary/*.md` only at task completion, not incrementally
- **Avoid duplication**: Don't re-explain the same changes multiple times in different formats
- **What changed, not how**: Focus on outcomes and impacts, not play-by-play narration
### Good Pattern:
```
[Making changes with tool calls and brief progress notes]
...
[At completion]
"I've completed the task. Here's a summary of changes: [single consolidated overview]"
```
### Bad Pattern:
```
[Makes changes]
"So I changed X, Y, and Z..."
[More changes]
"To summarize, I modified X, Y, and Z..."
[Writes work summary]
"In this session I updated X, Y, and Z..."
```
## Maintaining the AGENTS.md file
**IMPORTANT: Keep this file up-to-date as the project evolves.**
After making changes to the project, you MUST update this `AGENTS.md` file if any of the following occur:
- **New dependencies added or major dependencies removed** (check package.json, Cargo.toml, requirements.txt, etc.)
- **Project structure changes**: new directories/modules created, existing ones renamed or removed
- **Architecture changes**: new layers, patterns, or major refactoring that affects how components interact
- **New frameworks or tools adopted** (e.g., switching from REST to GraphQL, adding a new testing framework)
- **Deployment or infrastructure changes** (new CI/CD pipelines, different hosting, containerization added)
- **New major features** that introduce new subsystems or significantly change existing ones
- **Style guide or coding convention updates**
### `AGENTS.md` Content inclusion policy
- DO NOT simply summarize changes in the `AGENTS.md` file. If there are existing sections that need updating due to changes in the application architecture or project structure, update them accordingly.
- When relevant, work summaries should instead be written to `attune/work-summary/*.md`
### Update procedure:
1. After completing your changes, review if they affect any section of `AGENTS.md`
2. If yes, immediately update the relevant sections
3. Add a brief comment at the top of `AGENTS.md` with the date and what was updated (optional but helpful)
### Update format:
When updating, be surgical - modify only the affected sections rather than rewriting the entire file. Maintain the existing structure and tone.
**Treat `AGENTS.md` as living documentation.** An outdated `AGENTS.md` file is worse than no `AGENTS.md` file, as it will mislead future AI agents and waste time.
## Project Documentation Index
{{DOCUMENTATION_INDEX}}

7105
Cargo.lock generated Normal file

File diff suppressed because it is too large

Cargo.toml

@@ -14,14 +14,14 @@ members = [
[workspace.package]
version = "0.1.0"
edition = "2021"
authors = ["Attune Team"]
license = "MIT"
repository = "https://github.com/yourusername/attune"
authors = ["David Culbreth"]
license = "Apache-2.0"
repository = "https://git.rdrx.app/attune-system/attune"
[workspace.dependencies]
# Async runtime
tokio = { version = "1.42", features = ["full"] }
tokio-util = "0.7"
tokio = { version = "1.50", features = ["full"] }
tokio-util = { version = "0.7", features = ["io"] }
tokio-stream = { version = "0.1", features = ["sync"] }
# Web framework
@@ -52,27 +52,32 @@ config = "0.15"
chrono = { version = "0.4", features = ["serde"] }
# UUID
uuid = { version = "1.11", features = ["v4", "serde"] }
uuid = { version = "1.22", features = ["v4", "serde"] }
# Validation
validator = { version = "0.20", features = ["derive"] }
# CLI
clap = { version = "4.5", features = ["derive"] }
clap = { version = "4.6", features = ["derive"] }
# Message queue / PubSub
# RabbitMQ
lapin = "3.7"
lapin = "4.3"
# Redis
redis = { version = "1.0", features = ["tokio-comp", "connection-manager"] }
# JSON Schema
schemars = { version = "1.2", features = ["chrono04"] }
jsonschema = "0.38"
jsonschema = "0.44"
# OpenAPI/Swagger
utoipa = { version = "5.4", features = ["chrono", "uuid"] }
# JWT
jsonwebtoken = { version = "10.3", features = ["hmac", "sha2"] }
hmac = "0.12"
signature = "2.2"
# Encryption
argon2 = "0.5"
ring = "0.17"
@@ -81,24 +86,39 @@ aes-gcm = "0.10"
sha2 = "0.10"
# Regular expressions
regex = "1.11"
regex = "1.12"
# HTTP client
reqwest = { version = "0.13", features = ["json"] }
reqwest-eventsource = "0.6"
hyper = { version = "1.0", features = ["full"] }
hyper = { version = "1.8", features = ["full"] }
# File system utilities
walkdir = "2.4"
walkdir = "2.5"
# Archive/compression
tar = "0.4"
flate2 = "1.1"
# WebSocket client
tokio-tungstenite = { version = "0.28", features = ["rustls-tls-native-roots"] }
# URL parsing
url = "2.5"
# Async utilities
async-trait = "0.1"
futures = "0.3"
# Version matching
semver = { version = "1.0", features = ["serde"] }
# Temp files
tempfile = "3.27"
# Testing
mockall = "0.14"
tempfile = "3.8"
serial_test = "3.2"
serial_test = "3.4"
# Concurrent data structures
dashmap = "6.1"

202
LICENSE Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

200
Makefile

@@ -2,7 +2,13 @@
	check fmt clippy install-tools db-create db-migrate db-reset docker-build \
	docker-up docker-down docker-cache-warm docker-stop-system-services dev watch generate-agents-index \
	docker-build-workers docker-build-worker-base docker-build-worker-python \
	docker-build-worker-node docker-build-worker-full
	docker-build-worker-node docker-build-worker-full deny ci-rust ci-web-blocking ci-web-advisory \
	ci-security-blocking ci-security-advisory ci-blocking ci-advisory \
	fmt-check pre-commit install-git-hooks \
	build-agent docker-build-agent docker-build-agent-arm64 docker-build-agent-all \
	run-agent run-agent-release \
	docker-up-agent docker-down-agent \
	docker-build-pack-binaries docker-build-pack-binaries-arm64 docker-build-pack-binaries-all

# Default target
help:
@@ -18,13 +24,18 @@ help:
	@echo " make test - Run all tests"
	@echo " make test-common - Run tests for common library"
	@echo " make test-api - Run tests for API service"
	@echo " make test-integration - Run integration tests"
	@echo " make test-integration - Run integration tests (common + API)"
	@echo " make test-integration-api - Run API integration tests (requires DB)"
	@echo " make check - Check code without building"
	@echo ""
	@echo "Code Quality:"
	@echo " make fmt - Format all code"
	@echo " make fmt-check - Verify formatting without changing files"
	@echo " make clippy - Run linter"
	@echo " make lint - Run both fmt and clippy"
	@echo " make deny - Run cargo-deny checks"
	@echo " make pre-commit - Run the git pre-commit checks locally"
	@echo " make install-git-hooks - Configure git to use the repo hook scripts"
	@echo ""
	@echo "Running Services:"
	@echo " make run-api - Run API service"
@@ -53,6 +64,21 @@ help:
	@echo " make docker-up - Start services with docker compose"
	@echo " make docker-down - Stop services"
	@echo ""
	@echo "Agent (Universal Worker):"
	@echo " make build-agent - Build statically-linked agent binary (musl)"
	@echo " make docker-build-agent - Build agent Docker image (amd64, default)"
	@echo " make docker-build-agent-arm64 - Build agent Docker image (arm64)"
	@echo " make docker-build-agent-all - Build agent Docker images (amd64 + arm64)"
	@echo " make run-agent - Run agent in development mode"
	@echo " make run-agent-release - Run agent in release mode"
	@echo " make docker-up-agent - Start all services + agent workers (ruby, etc.)"
	@echo " make docker-down-agent - Stop agent stack"
	@echo ""
	@echo "Pack Binaries:"
	@echo " make docker-build-pack-binaries - Build pack binaries Docker image (amd64, default)"
	@echo " make docker-build-pack-binaries-arm64 - Build pack binaries Docker image (arm64)"
	@echo " make docker-build-pack-binaries-all - Build pack binaries Docker images (amd64 + arm64)"
	@echo ""
	@echo "Development:"
	@echo " make watch - Watch and rebuild on changes"
	@echo " make install-tools - Install development tools"
@@ -61,6 +87,9 @@ help:
	@echo " make generate-agents-index - Generate AGENTS.md index for AI agents"
	@echo ""

# Increase rustc stack size to prevent SIGSEGV during compilation
export RUST_MIN_STACK:=67108864

# Building
build:
	cargo build
@@ -84,13 +113,18 @@ test-api:
test-verbose:
	cargo test -- --nocapture --test-threads=1
test-integration:
test-integration: test-integration-api
	@echo "Setting up test database..."
	@make db-test-setup
	@echo "Running integration tests..."
	@echo "Running common integration tests..."
	cargo test --test '*' -p attune-common -- --test-threads=1
	@echo "Integration tests complete"
test-integration-api:
	@echo "Running API integration tests..."
	cargo test -p attune-api -- --ignored --test-threads=1
	@echo "API integration tests complete"
test-with-db: db-test-setup test-integration
	@echo "All tests with database complete"
@@ -101,6 +135,9 @@ check:
fmt:
	cargo fmt --all
fmt-check:
	cargo fmt --all -- --check
clippy:
	cargo clippy --all-features -- -D warnings
@@ -209,38 +246,86 @@ docker-build-api:
docker-build-web:
	docker compose build web
# Build worker images
docker-build-workers: docker-build-worker-base docker-build-worker-python docker-build-worker-node docker-build-worker-full
	@echo "✅ All worker images built successfully"
# Agent binary (statically-linked for injection into any container)
AGENT_RUST_TARGET ?= x86_64-unknown-linux-musl
docker-build-worker-base:
	@echo "Building base worker (shell only)..."
	DOCKER_BUILDKIT=1 docker build --target worker-base -t attune-worker:base -f docker/Dockerfile.worker .
	@echo "✅ Base worker image built: attune-worker:base"
# Pack binaries (statically-linked for packs volume)
PACK_BINARIES_RUST_TARGET ?= x86_64-unknown-linux-musl
docker-build-worker-python:
	@echo "Building Python worker (shell + python)..."
	DOCKER_BUILDKIT=1 docker build --target worker-python -t attune-worker:python -f docker/Dockerfile.worker .
	@echo "✅ Python worker image built: attune-worker:python"
build-agent:
	@echo "Installing musl target (if not already installed)..."
	rustup target add $(AGENT_RUST_TARGET) 2>/dev/null || true
	@echo "Building statically-linked worker and sensor agent binaries..."
	SQLX_OFFLINE=true cargo build --release --target $(AGENT_RUST_TARGET) --bin attune-agent --bin attune-sensor-agent
	strip target/$(AGENT_RUST_TARGET)/release/attune-agent
	strip target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent
	@echo "✅ Agent binaries built:"
	@echo " - target/$(AGENT_RUST_TARGET)/release/attune-agent"
	@echo " - target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent"
	@ls -lh target/$(AGENT_RUST_TARGET)/release/attune-agent
	@ls -lh target/$(AGENT_RUST_TARGET)/release/attune-sensor-agent
docker-build-worker-node:
	@echo "Building Node.js worker (shell + node)..."
	DOCKER_BUILDKIT=1 docker build --target worker-node -t attune-worker:node -f docker/Dockerfile.worker .
	@echo "✅ Node.js worker image built: attune-worker:node"
docker-build-agent:
	@echo "Building agent Docker image ($(AGENT_RUST_TARGET))..."
	DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(AGENT_RUST_TARGET) --target agent-init -f docker/Dockerfile.agent -t attune-agent:latest .
	@echo "✅ Agent image built: attune-agent:latest ($(AGENT_RUST_TARGET))"
docker-build-worker-full:
	@echo "Building full worker (all runtimes)..."
	DOCKER_BUILDKIT=1 docker build --target worker-full -t attune-worker:full -f docker/Dockerfile.worker .
	@echo "✅ Full worker image built: attune-worker:full"
docker-build-agent-arm64:
	@echo "Building arm64 agent Docker image..."
	DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target agent-init -f docker/Dockerfile.agent -t attune-agent:arm64 .
	@echo "✅ Agent image built: attune-agent:arm64"
docker-build-agent-all:
	@echo "Building agent Docker images for all architectures..."
	$(MAKE) docker-build-agent
	$(MAKE) docker-build-agent-arm64
	@echo "✅ All agent images built: attune-agent:latest (amd64), attune-agent:arm64"
run-agent:
	cargo run --bin attune-agent
run-agent-release:
	cargo run --bin attune-agent --release
# Pack binaries (statically-linked for packs volume)
docker-build-pack-binaries:
	@echo "Building pack binaries Docker image ($(PACK_BINARIES_RUST_TARGET))..."
	DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=$(PACK_BINARIES_RUST_TARGET) --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:latest .
	@echo "✅ Pack binaries image built: attune-pack-builder:latest ($(PACK_BINARIES_RUST_TARGET))"
docker-build-pack-binaries-arm64:
	@echo "Building arm64 pack binaries Docker image..."
	DOCKER_BUILDKIT=1 docker buildx build --build-arg RUST_TARGET=aarch64-unknown-linux-musl --target pack-binaries-init -f docker/Dockerfile.pack-binaries -t attune-pack-builder:arm64 .
	@echo "✅ Pack binaries image built: attune-pack-builder:arm64"
docker-build-pack-binaries-all:
	@echo "Building pack binaries Docker images for all architectures..."
	$(MAKE) docker-build-pack-binaries
	$(MAKE) docker-build-pack-binaries-arm64
	@echo "✅ All pack binary images built: attune-pack-builder:latest (amd64), attune-pack-builder:arm64"
run-sensor-agent:
	cargo run --bin attune-sensor-agent
run-sensor-agent-release:
	cargo run --bin attune-sensor-agent --release
docker-up:
	@echo "Starting all services with Docker Compose..."
	docker compose up -d
docker-up-agent:
	@echo "Starting all services + agent-based workers..."
	docker compose -f docker-compose.yaml -f docker-compose.agent.yaml up -d
docker-down:
	@echo "Stopping all services..."
	docker compose down
docker-down-agent:
	@echo "Stopping all services (including agent workers)..."
	docker compose -f docker-compose.yaml -f docker-compose.agent.yaml down
docker-down-volumes:
	@echo "Stopping all services and removing volumes (WARNING: deletes data)..."
	docker compose down -v
@@ -312,9 +397,60 @@ coverage:
update:
	cargo update
# Audit dependencies for security issues
# Audit dependencies for security issues (ignores configured in deny.toml)
audit:
	cargo audit
	cargo deny check advisories
deny:
	cargo deny check
ci-rust:
	cargo fmt --all -- --check
	cargo clippy --workspace --all-targets --all-features -- -D warnings
	cargo test --workspace --all-features
	cargo deny check
ci-web-blocking:
	cd web && npm ci
	cd web && npm run lint
	cd web && npm run typecheck
	cd web && npm run build
ci-web-pre-commit:
	cd web && npm ci
	cd web && npm run lint
	cd web && npm run typecheck
ci-web-advisory:
	cd web && npm ci
	cd web && npm run knip
	cd web && npm audit --omit=dev
ci-security-blocking:
	mkdir -p $$HOME/bin
	GITLEAKS_VERSION="8.24.2"; \
	ARCH="$$(uname -m)"; \
	case "$$ARCH" in \
	  x86_64) ARCH="x64" ;; \
	  aarch64|arm64) ARCH="arm64" ;; \
	  *) echo "Unsupported architecture: $$ARCH"; exit 1 ;; \
	esac; \
	curl -sSfL \
	  -o /tmp/gitleaks.tar.gz \
	  "https://github.com/gitleaks/gitleaks/releases/download/v$$GITLEAKS_VERSION/gitleaks_$$GITLEAKS_VERSION"_linux_"$$ARCH".tar.gz; \
	tar -xzf /tmp/gitleaks.tar.gz -C $$HOME/bin gitleaks; \
	chmod +x $$HOME/bin/gitleaks
	$$HOME/bin/gitleaks git --report-format sarif --report-path gitleaks.sarif --config .gitleaks.toml
ci-security-advisory:
	pip install semgrep
	semgrep scan --config p/default --error
ci-blocking: ci-rust ci-web-blocking ci-security-blocking
	@echo "✅ Blocking CI checks passed!"
ci-advisory: ci-web-advisory ci-security-advisory
	@echo "Advisory CI checks complete."
# Check dependency tree
tree:
@@ -325,10 +461,16 @@ licenses:
	cargo license --json > licenses.json
	@echo "License information saved to licenses.json"
# All-in-one check before committing
pre-commit: fmt clippy test
	@echo "✅ All checks passed! Ready to commit."
# Blocking checks run by the git pre-commit hook after formatting.
# Keep the local web step fast; full production builds stay in CI.
pre-commit: deny ci-web-pre-commit ci-security-blocking
	@echo "✅ Pre-commit checks passed."
install-git-hooks:
	git config core.hooksPath .githooks
	chmod +x .githooks/pre-commit
	@echo "✅ Git hooks configured to use .githooks/"
# CI simulation
ci: check clippy test
ci: ci-blocking ci-advisory
	@echo "✅ CI checks passed!"

6
charts/attune/Chart.yaml Normal file

@@ -0,0 +1,6 @@
apiVersion: v2
name: attune
description: Helm chart for deploying the Attune automation platform
type: application
version: 0.1.0
appVersion: "0.1.0"

charts/attune/templates/NOTES.txt

@@ -0,0 +1,26 @@
1. Set `global.imageRegistry`, `global.imageNamespace`, and `global.imageTag` so the chart pulls the images published by the Gitea workflow.
2. Set `web.config.apiUrl` and `web.config.wsUrl` to browser-reachable endpoints before exposing the web UI.
3. The shared `packs`, `runtime_envs`, and `artifacts` PVCs default to `ReadWriteMany`; your cluster storage class must support RWX or you need to override those claims.
{{- if .Values.agentWorkers }}
Agent-based workers enabled:
{{- range .Values.agentWorkers }}
- {{ .name }}: image={{ .image }}, replicas={{ .replicas | default 1 }}
{{- if .runtimes }} runtimes={{ join "," .runtimes }}{{ else }} runtimes=auto-detect{{ end }}
{{- end }}
Each agent worker uses an init container to copy the statically-linked
attune-agent binary into the worker pod via an emptyDir volume. The agent
auto-detects available runtimes in the container and registers with Attune.
The default sensor deployment also uses the same injection pattern, copying
`attune-sensor-agent` into the pod before starting a stock runtime image.
To add more agent workers, append entries to `agentWorkers` in your values:
agentWorkers:
  - name: my-runtime
    image: my-org/my-image:latest
    replicas: 1
    runtimes: [] # auto-detect
{{- end }}

charts/attune/templates/_helpers.tpl

@@ -0,0 +1,113 @@
{{- define "attune.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "attune.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name (include "attune.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- define "attune.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" -}}
{{- end -}}
{{- define "attune.labels" -}}
helm.sh/chart: {{ include "attune.chart" . }}
app.kubernetes.io/name: {{ include "attune.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{- define "attune.selectorLabels" -}}
app.kubernetes.io/name: {{ include "attune.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- define "attune.componentLabels" -}}
{{ include "attune.selectorLabels" .root }}
app.kubernetes.io/component: {{ .component }}
{{- end -}}
{{- define "attune.image" -}}
{{- $root := .root -}}
{{- $image := .image -}}
{{- $registry := $root.Values.global.imageRegistry -}}
{{- $namespace := $root.Values.global.imageNamespace -}}
{{- $repository := $image.repository -}}
{{- $tag := default $root.Values.global.imageTag $image.tag -}}
{{- if and $registry $namespace -}}
{{- printf "%s/%s/%s:%s" $registry $namespace $repository $tag -}}
{{- else if $registry -}}
{{- printf "%s/%s:%s" $registry $repository $tag -}}
{{- else -}}
{{- printf "%s:%s" $repository $tag -}}
{{- end -}}
{{- end -}}
{{- define "attune.secretName" -}}
{{- if .Values.security.existingSecret -}}
{{- .Values.security.existingSecret -}}
{{- else -}}
{{- printf "%s-secrets" (include "attune.fullname" .) -}}
{{- end -}}
{{- end -}}
{{- define "attune.postgresqlServiceName" -}}
{{- if .Values.database.host -}}
{{- .Values.database.host -}}
{{- else -}}
{{- printf "%s-postgresql" (include "attune.fullname" .) -}}
{{- end -}}
{{- end -}}
{{- define "attune.rabbitmqServiceName" -}}
{{- if .Values.rabbitmq.host -}}
{{- .Values.rabbitmq.host -}}
{{- else -}}
{{- printf "%s-rabbitmq" (include "attune.fullname" .) -}}
{{- end -}}
{{- end -}}
{{- define "attune.redisServiceName" -}}
{{- if .Values.redis.host -}}
{{- .Values.redis.host -}}
{{- else -}}
{{- printf "%s-redis" (include "attune.fullname" .) -}}
{{- end -}}
{{- end -}}
{{- define "attune.databaseUrl" -}}
{{- if .Values.database.url -}}
{{- .Values.database.url -}}
{{- else -}}
{{- printf "postgresql://%s:%s@%s:%v/%s" .Values.database.username .Values.database.password (include "attune.postgresqlServiceName" .) .Values.database.port .Values.database.database -}}
{{- end -}}
{{- end -}}
{{- define "attune.rabbitmqUrl" -}}
{{- if .Values.rabbitmq.url -}}
{{- .Values.rabbitmq.url -}}
{{- else -}}
{{- printf "amqp://%s:%s@%s:%v" .Values.rabbitmq.username .Values.rabbitmq.password (include "attune.rabbitmqServiceName" .) .Values.rabbitmq.port -}}
{{- end -}}
{{- end -}}
{{- define "attune.redisUrl" -}}
{{- if .Values.redis.url -}}
{{- .Values.redis.url -}}
{{- else -}}
{{- printf "redis://%s:%v" (include "attune.redisServiceName" .) .Values.redis.port -}}
{{- end -}}
{{- end -}}
{{- define "attune.apiServiceName" -}}
{{- printf "%s-api" (include "attune.fullname" .) -}}
{{- end -}}
{{- define "attune.notifierServiceName" -}}
{{- printf "%s-notifier" (include "attune.fullname" .) -}}
{{- end -}}

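For reference, a sketch of how the `attune.image` helper above composes an image reference; the registry and namespace values are illustrative:

```yaml
global:
  imageRegistry: registry.example.com   # hypothetical registry
  imageNamespace: attune
  imageTag: edge
images:
  api:
    repository: attune-api
    tag: ""                             # empty tag falls back to global.imageTag
# attune.image renders: registry.example.com/attune/attune-api:edge
# With imageRegistry unset, it degrades to plain attune-api:edge
```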

@@ -0,0 +1,137 @@
{{- range .Values.agentWorkers }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "attune.fullname" $ }}-agent-worker-{{ .name }}
labels:
{{- include "attune.labels" $ | nindent 4 }}
app.kubernetes.io/component: agent-worker-{{ .name }}
spec:
replicas: {{ .replicas | default 1 }}
selector:
matchLabels:
{{- include "attune.selectorLabels" $ | nindent 6 }}
app.kubernetes.io/component: agent-worker-{{ .name }}
template:
metadata:
labels:
{{- include "attune.selectorLabels" $ | nindent 8 }}
app.kubernetes.io/component: agent-worker-{{ .name }}
spec:
{{- if $.Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml $.Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
{{- if .runtimeClassName }}
runtimeClassName: {{ .runtimeClassName }}
{{- end }}
{{- if .nodeSelector }}
nodeSelector:
{{- toYaml .nodeSelector | nindent 8 }}
{{- end }}
{{- if .tolerations }}
tolerations:
{{- toYaml .tolerations | nindent 8 }}
{{- end }}
{{- if .stopGracePeriod }}
terminationGracePeriodSeconds: {{ .stopGracePeriod }}
{{- else }}
terminationGracePeriodSeconds: 45
{{- end }}
initContainers:
- name: agent-loader
image: {{ include "attune.image" (dict "root" $ "image" $.Values.images.agent) }}
imagePullPolicy: {{ $.Values.images.agent.pullPolicy }}
command: ["cp", "/usr/local/bin/attune-agent", "/opt/attune/agent/attune-agent"]
volumeMounts:
- name: agent-bin
mountPath: /opt/attune/agent
- name: wait-for-schema
image: postgres:16-alpine
command: ["/bin/sh", "-ec"]
args:
- |
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc "SELECT to_regclass('${DB_SCHEMA}.identity')" | grep -q identity; do
echo "waiting for schema";
sleep 2;
done
envFrom:
- secretRef:
name: {{ include "attune.secretName" $ }}
- name: wait-for-packs
image: busybox:1.36
command: ["/bin/sh", "-ec"]
args:
- |
until [ -f /opt/attune/packs/core/pack.yaml ]; do
echo "waiting for packs";
sleep 2;
done
volumeMounts:
- name: packs
mountPath: /opt/attune/packs
containers:
- name: worker
image: {{ .image }}
{{- if .imagePullPolicy }}
imagePullPolicy: {{ .imagePullPolicy }}
{{- end }}
command: ["/opt/attune/agent/attune-agent"]
envFrom:
- secretRef:
name: {{ include "attune.secretName" $ }}
env:
- name: ATTUNE_CONFIG
value: /opt/attune/config.yaml
- name: ATTUNE__DATABASE__SCHEMA
value: {{ $.Values.database.schema | quote }}
- name: ATTUNE_WORKER_TYPE
value: container
- name: ATTUNE_WORKER_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ATTUNE_API_URL
value: http://{{ include "attune.apiServiceName" $ }}:{{ $.Values.api.service.port }}
- name: RUST_LOG
value: {{ .logLevel | default "info" }}
{{- if .runtimes }}
- name: ATTUNE_WORKER_RUNTIMES
value: {{ join "," .runtimes | quote }}
{{- end }}
{{- if .env }}
{{- toYaml .env | nindent 12 }}
{{- end }}
resources:
{{- toYaml (.resources | default dict) | nindent 12 }}
volumeMounts:
- name: agent-bin
mountPath: /opt/attune/agent
readOnly: true
- name: config
mountPath: /opt/attune/config.yaml
subPath: config.yaml
- name: packs
mountPath: /opt/attune/packs
readOnly: true
- name: runtime-envs
mountPath: /opt/attune/runtime_envs
- name: artifacts
mountPath: /opt/attune/artifacts
volumes:
- name: agent-bin
emptyDir: {}
- name: config
configMap:
name: {{ include "attune.fullname" $ }}-config
- name: packs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" $ }}-packs
- name: runtime-envs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" $ }}-runtime-envs
- name: artifacts
persistentVolumeClaim:
claimName: {{ include "attune.fullname" $ }}-artifacts
{{- end }}

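The loader/emptyDir handoff above generalizes beyond this chart. A standalone sketch of the same pattern with illustrative names, assuming the agent image ships its binary at `/usr/local/bin`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inject-demo
spec:
  initContainers:
    - name: loader
      image: attune-agent:edge          # assumed agent image reference
      command: ["cp", "/usr/local/bin/attune-agent", "/shared/attune-agent"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: runtime
      image: python:3.12-slim           # any stock runtime image
      command: ["/shared/attune-agent"]
      volumeMounts:
        - name: shared
          mountPath: /shared
          readOnly: true
  volumes:
    - name: shared
      emptyDir: {}                      # scratch volume shared by both containers
```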

@@ -0,0 +1,542 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "attune.apiServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
type: {{ .Values.api.service.type }}
selector:
{{- include "attune.componentLabels" (dict "root" . "component" "api") | nindent 4 }}
ports:
- name: http
port: {{ .Values.api.service.port }}
targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "attune.apiServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.api.replicaCount }}
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "api") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "api") | nindent 8 }}
spec:
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
initContainers:
- name: wait-for-schema
image: postgres:16-alpine
command: ["/bin/sh", "-ec"]
args:
- |
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc "SELECT to_regclass('${DB_SCHEMA}.identity')" | grep -q identity; do
echo "waiting for schema";
sleep 2;
done
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
- name: wait-for-packs
image: busybox:1.36
command: ["/bin/sh", "-ec"]
args:
- |
until [ -f /opt/attune/packs/core/pack.yaml ]; do
echo "waiting for packs";
sleep 2;
done
volumeMounts:
- name: packs
mountPath: /opt/attune/packs
containers:
- name: api
image: {{ include "attune.image" (dict "root" . "image" .Values.images.api) }}
imagePullPolicy: {{ .Values.images.api.pullPolicy }}
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
env:
- name: ATTUNE_CONFIG
value: /opt/attune/config.yaml
- name: ATTUNE__DATABASE__SCHEMA
value: {{ .Values.database.schema | quote }}
- name: ATTUNE__WORKER__WORKER_TYPE
value: container
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 20
periodSeconds: 15
resources:
{{- toYaml .Values.api.resources | nindent 12 }}
volumeMounts:
- name: config
mountPath: /opt/attune/config.yaml
subPath: config.yaml
- name: packs
mountPath: /opt/attune/packs
- name: runtime-envs
mountPath: /opt/attune/runtime_envs
- name: artifacts
mountPath: /opt/attune/artifacts
volumes:
- name: config
configMap:
name: {{ include "attune.fullname" . }}-config
- name: packs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-packs
- name: runtime-envs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-runtime-envs
- name: artifacts
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-artifacts
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "attune.fullname" . }}-executor
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.executor.replicaCount }}
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "executor") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "executor") | nindent 8 }}
spec:
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
initContainers:
- name: wait-for-schema
image: postgres:16-alpine
command: ["/bin/sh", "-ec"]
args:
- |
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc "SELECT to_regclass('${DB_SCHEMA}.identity')" | grep -q identity; do
echo "waiting for schema";
sleep 2;
done
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
- name: wait-for-packs
image: busybox:1.36
command: ["/bin/sh", "-ec"]
args:
- |
until [ -f /opt/attune/packs/core/pack.yaml ]; do
echo "waiting for packs";
sleep 2;
done
volumeMounts:
- name: packs
mountPath: /opt/attune/packs
containers:
- name: executor
image: {{ include "attune.image" (dict "root" . "image" .Values.images.executor) }}
imagePullPolicy: {{ .Values.images.executor.pullPolicy }}
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
env:
- name: ATTUNE_CONFIG
value: /opt/attune/config.yaml
- name: ATTUNE__DATABASE__SCHEMA
value: {{ .Values.database.schema | quote }}
- name: ATTUNE__WORKER__WORKER_TYPE
value: container
resources:
{{- toYaml .Values.executor.resources | nindent 12 }}
volumeMounts:
- name: config
mountPath: /opt/attune/config.yaml
subPath: config.yaml
- name: packs
mountPath: /opt/attune/packs
- name: artifacts
mountPath: /opt/attune/artifacts
volumes:
- name: config
configMap:
name: {{ include "attune.fullname" . }}-config
- name: packs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-packs
- name: artifacts
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-artifacts
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "attune.fullname" . }}-worker
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.worker.replicaCount }}
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "worker") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "worker") | nindent 8 }}
spec:
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
initContainers:
- name: wait-for-schema
image: postgres:16-alpine
command: ["/bin/sh", "-ec"]
args:
- |
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc "SELECT to_regclass('${DB_SCHEMA}.identity')" | grep -q identity; do
echo "waiting for schema";
sleep 2;
done
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
- name: wait-for-packs
image: busybox:1.36
command: ["/bin/sh", "-ec"]
args:
- |
until [ -f /opt/attune/packs/core/pack.yaml ]; do
echo "waiting for packs";
sleep 2;
done
volumeMounts:
- name: packs
mountPath: /opt/attune/packs
containers:
- name: worker
image: {{ include "attune.image" (dict "root" . "image" .Values.images.worker) }}
imagePullPolicy: {{ .Values.images.worker.pullPolicy }}
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
env:
- name: ATTUNE_CONFIG
value: /opt/attune/config.yaml
- name: ATTUNE__DATABASE__SCHEMA
value: {{ .Values.database.schema | quote }}
- name: ATTUNE_WORKER_RUNTIMES
value: {{ .Values.worker.runtimes | quote }}
- name: ATTUNE_WORKER_TYPE
value: container
- name: ATTUNE_WORKER_NAME
value: {{ .Values.worker.name | quote }}
- name: ATTUNE_API_URL
value: http://{{ include "attune.apiServiceName" . }}:{{ .Values.api.service.port }}
resources:
{{- toYaml .Values.worker.resources | nindent 12 }}
volumeMounts:
- name: config
mountPath: /opt/attune/config.yaml
subPath: config.yaml
- name: packs
mountPath: /opt/attune/packs
- name: runtime-envs
mountPath: /opt/attune/runtime_envs
- name: artifacts
mountPath: /opt/attune/artifacts
volumes:
- name: config
configMap:
name: {{ include "attune.fullname" . }}-config
- name: packs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-packs
- name: runtime-envs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-runtime-envs
- name: artifacts
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-artifacts
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "attune.fullname" . }}-sensor
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.sensor.replicaCount }}
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "sensor") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "sensor") | nindent 8 }}
spec:
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
terminationGracePeriodSeconds: 45
initContainers:
- name: sensor-agent-loader
image: {{ include "attune.image" (dict "root" . "image" .Values.images.agent) }}
imagePullPolicy: {{ .Values.images.agent.pullPolicy }}
command: ["cp", "/usr/local/bin/attune-sensor-agent", "/opt/attune/agent/attune-sensor-agent"]
volumeMounts:
- name: agent-bin
mountPath: /opt/attune/agent
- name: wait-for-schema
image: postgres:16-alpine
command: ["/bin/sh", "-ec"]
args:
- |
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc "SELECT to_regclass('${DB_SCHEMA}.identity')" | grep -q identity; do
echo "waiting for schema";
sleep 2;
done
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
- name: wait-for-packs
image: busybox:1.36
command: ["/bin/sh", "-ec"]
args:
- |
until [ -f /opt/attune/packs/core/pack.yaml ]; do
echo "waiting for packs";
sleep 2;
done
volumeMounts:
- name: packs
mountPath: /opt/attune/packs
containers:
- name: sensor
image: {{ include "attune.image" (dict "root" . "image" .Values.images.sensor) }}
imagePullPolicy: {{ .Values.images.sensor.pullPolicy }}
command: ["/opt/attune/agent/attune-sensor-agent"]
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
env:
- name: ATTUNE_CONFIG
value: /opt/attune/config.yaml
- name: ATTUNE__DATABASE__SCHEMA
value: {{ .Values.database.schema | quote }}
- name: ATTUNE__WORKER__WORKER_TYPE
value: container
- name: ATTUNE_SENSOR_RUNTIMES
value: {{ .Values.sensor.runtimes | quote }}
- name: ATTUNE_API_URL
value: http://{{ include "attune.apiServiceName" . }}:{{ .Values.api.service.port }}
- name: ATTUNE_MQ_URL
value: {{ include "attune.rabbitmqUrl" . | quote }}
- name: ATTUNE_PACKS_BASE_DIR
value: /opt/attune/packs
- name: RUST_LOG
value: {{ .Values.sensor.logLevel | quote }}
resources:
{{- toYaml .Values.sensor.resources | nindent 12 }}
volumeMounts:
- name: agent-bin
mountPath: /opt/attune/agent
readOnly: true
- name: config
mountPath: /opt/attune/config.yaml
subPath: config.yaml
- name: packs
mountPath: /opt/attune/packs
readOnly: true
- name: runtime-envs
mountPath: /opt/attune/runtime_envs
volumes:
- name: agent-bin
emptyDir: {}
- name: config
configMap:
name: {{ include "attune.fullname" . }}-config
- name: packs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-packs
- name: runtime-envs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-runtime-envs
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "attune.notifierServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
type: {{ .Values.notifier.service.type }}
selector:
{{- include "attune.componentLabels" (dict "root" . "component" "notifier") | nindent 4 }}
ports:
- name: ws
port: {{ .Values.notifier.service.port }}
targetPort: ws
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "attune.notifierServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.notifier.replicaCount }}
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "notifier") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "notifier") | nindent 8 }}
spec:
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
initContainers:
- name: wait-for-schema
image: postgres:16-alpine
command: ["/bin/sh", "-ec"]
args:
- |
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc "SELECT to_regclass('${DB_SCHEMA}.identity')" | grep -q identity; do
echo "waiting for schema";
sleep 2;
done
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
containers:
- name: notifier
image: {{ include "attune.image" (dict "root" . "image" .Values.images.notifier) }}
imagePullPolicy: {{ .Values.images.notifier.pullPolicy }}
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
env:
- name: ATTUNE_CONFIG
value: /opt/attune/config.yaml
- name: ATTUNE__DATABASE__SCHEMA
value: {{ .Values.database.schema | quote }}
- name: ATTUNE__WORKER__WORKER_TYPE
value: container
ports:
- name: ws
containerPort: 8081
readinessProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 20
periodSeconds: 15
resources:
{{- toYaml .Values.notifier.resources | nindent 12 }}
volumeMounts:
- name: config
mountPath: /opt/attune/config.yaml
subPath: config.yaml
volumes:
- name: config
configMap:
name: {{ include "attune.fullname" . }}-config
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "attune.fullname" . }}-web
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
type: {{ .Values.web.service.type }}
selector:
{{- include "attune.componentLabels" (dict "root" . "component" "web") | nindent 4 }}
ports:
- name: http
port: {{ .Values.web.service.port }}
targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "attune.fullname" . }}-web
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.web.replicaCount }}
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "web") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "web") | nindent 8 }}
spec:
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
containers:
- name: web
image: {{ include "attune.image" (dict "root" . "image" .Values.images.web) }}
imagePullPolicy: {{ .Values.images.web.pullPolicy }}
env:
- name: API_URL
value: {{ .Values.web.config.apiUrl | quote }}
- name: WS_URL
value: {{ .Values.web.config.wsUrl | quote }}
- name: ENVIRONMENT
value: {{ .Values.web.config.environment | quote }}
ports:
- name: http
containerPort: 80
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 20
periodSeconds: 15
resources:
{{- toYaml .Values.web.resources | nindent 12 }}

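Every Deployment and Service above carries the shared selector labels plus a per-component label, so the whole stack can be inspected in one query. Assuming a release named `attune`:

```sh
kubectl get deploy,svc -l app.kubernetes.io/instance=attune \
  -L app.kubernetes.io/component
```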

@@ -0,0 +1,9 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "attune.fullname" . }}-config
labels:
{{- include "attune.labels" . | nindent 4 }}
data:
config.yaml: |
{{ .Files.Get "files/config.docker.yaml" | indent 4 }}

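Because the ConfigMap inlines `files/config.docker.yaml` at a four-space indent, a dry render catches indentation mistakes before install. A sketch, with the chart path and template filename assumed:

```sh
helm template attune ./charts/attune --show-only templates/configmap.yaml
```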

@@ -0,0 +1,225 @@
{{- if .Values.database.postgresql.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "attune.postgresqlServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
selector:
{{- include "attune.componentLabels" (dict "root" . "component" "postgresql") | nindent 4 }}
ports:
- name: postgres
port: {{ .Values.database.port }}
targetPort: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "attune.postgresqlServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
serviceName: {{ include "attune.postgresqlServiceName" . }}
replicas: 1
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "postgresql") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "postgresql") | nindent 8 }}
spec:
containers:
- name: postgresql
image: "{{ .Values.database.postgresql.image.repository }}:{{ .Values.database.postgresql.image.tag }}"
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_USER
value: {{ .Values.database.username | quote }}
- name: POSTGRES_PASSWORD
value: {{ .Values.database.password | quote }}
- name: POSTGRES_DB
value: {{ .Values.database.database | quote }}
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
ports:
- name: postgres
containerPort: 5432
livenessProbe:
exec:
command: ["pg_isready", "-U", "{{ .Values.database.username }}"]
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
exec:
command: ["pg_isready", "-U", "{{ .Values.database.username }}"]
initialDelaySeconds: 10
periodSeconds: 10
resources:
{{- toYaml .Values.database.postgresql.resources | nindent 12 }}
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
{{- toYaml .Values.database.postgresql.persistence.accessModes | nindent 10 }}
resources:
requests:
storage: {{ .Values.database.postgresql.persistence.size }}
{{- if .Values.database.postgresql.persistence.storageClassName }}
storageClassName: {{ .Values.database.postgresql.persistence.storageClassName }}
{{- end }}
{{- end }}
{{- if .Values.rabbitmq.enabled }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "attune.rabbitmqServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
selector:
{{- include "attune.componentLabels" (dict "root" . "component" "rabbitmq") | nindent 4 }}
ports:
- name: amqp
port: {{ .Values.rabbitmq.port }}
targetPort: amqp
- name: management
port: {{ .Values.rabbitmq.managementPort }}
targetPort: management
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "attune.rabbitmqServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
serviceName: {{ include "attune.rabbitmqServiceName" . }}
replicas: 1
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "rabbitmq") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "rabbitmq") | nindent 8 }}
spec:
containers:
- name: rabbitmq
image: "{{ .Values.rabbitmq.image.repository }}:{{ .Values.rabbitmq.image.tag }}"
imagePullPolicy: IfNotPresent
env:
- name: RABBITMQ_DEFAULT_USER
value: {{ .Values.rabbitmq.username | quote }}
- name: RABBITMQ_DEFAULT_PASS
value: {{ .Values.rabbitmq.password | quote }}
- name: RABBITMQ_DEFAULT_VHOST
value: /
ports:
- name: amqp
containerPort: 5672
- name: management
containerPort: 15672
livenessProbe:
exec:
command: ["rabbitmq-diagnostics", "-q", "ping"]
initialDelaySeconds: 20
periodSeconds: 15
readinessProbe:
exec:
command: ["rabbitmq-diagnostics", "-q", "ping"]
initialDelaySeconds: 10
periodSeconds: 10
resources:
{{- toYaml .Values.rabbitmq.resources | nindent 12 }}
volumeMounts:
- name: data
mountPath: /var/lib/rabbitmq
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
{{- toYaml .Values.rabbitmq.persistence.accessModes | nindent 10 }}
resources:
requests:
storage: {{ .Values.rabbitmq.persistence.size }}
{{- if .Values.rabbitmq.persistence.storageClassName }}
storageClassName: {{ .Values.rabbitmq.persistence.storageClassName }}
{{- end }}
{{- end }}
{{- if .Values.redis.enabled }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "attune.redisServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
selector:
{{- include "attune.componentLabels" (dict "root" . "component" "redis") | nindent 4 }}
ports:
- name: redis
port: {{ .Values.redis.port }}
targetPort: redis
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "attune.redisServiceName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
serviceName: {{ include "attune.redisServiceName" . }}
replicas: 1
selector:
matchLabels:
{{- include "attune.componentLabels" (dict "root" . "component" "redis") | nindent 6 }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "redis") | nindent 8 }}
spec:
containers:
- name: redis
image: "{{ .Values.redis.image.repository }}:{{ .Values.redis.image.tag }}"
imagePullPolicy: IfNotPresent
command: ["redis-server", "--appendonly", "yes"]
ports:
- name: redis
containerPort: 6379
livenessProbe:
exec:
command: ["redis-cli", "ping"]
initialDelaySeconds: 15
periodSeconds: 10
readinessProbe:
exec:
command: ["redis-cli", "ping"]
initialDelaySeconds: 10
periodSeconds: 10
resources:
{{- toYaml .Values.redis.resources | nindent 12 }}
volumeMounts:
- name: data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
{{- toYaml .Values.redis.persistence.accessModes | nindent 10 }}
resources:
requests:
storage: {{ .Values.redis.persistence.size }}
{{- if .Values.redis.persistence.storageClassName }}
storageClassName: {{ .Values.redis.persistence.storageClassName }}
{{- end }}
{{- end }}

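To run against managed services instead of these bundled StatefulSets, disable each dependency and supply a host or full URL; the service-name and URL helpers shown earlier then short-circuit to the override. Hostnames and credentials below are illustrative:

```yaml
database:
  postgresql:
    enabled: false
  host: pg.internal.example.com         # or set database.url directly
rabbitmq:
  enabled: false
  url: amqp://attune:s3cret@mq.internal.example.com:5672
redis:
  enabled: false
  url: redis://cache.internal.example.com:6379
```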

@@ -0,0 +1,35 @@
{{- if .Values.web.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "attune.fullname" . }}-web
labels:
{{- include "attune.labels" . | nindent 4 }}
{{- with .Values.web.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.web.ingress.className }}
ingressClassName: {{ .Values.web.ingress.className }}
{{- end }}
rules:
{{- range .Values.web.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "attune.fullname" $ }}-web
port:
number: {{ $.Values.web.service.port }}
{{- end }}
{{- end }}
{{- with .Values.web.ingress.tls }}
tls:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

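A sketch of values that enable this Ingress with TLS; the hostname, class, and secret name are illustrative, and the TLS secret is assumed to exist already:

```yaml
web:
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: attune.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: attune-web-tls
        hosts:
          - attune.example.com
```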

@@ -0,0 +1,154 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "attune.fullname" . }}-migrations
labels:
{{- include "attune.labels" . | nindent 4 }}
app.kubernetes.io/component: migrations
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-weight: "-20"
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
ttlSecondsAfterFinished: {{ .Values.jobs.migrations.ttlSecondsAfterFinished }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "migrations") | nindent 8 }}
spec:
restartPolicy: OnFailure
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
containers:
- name: migrations
image: {{ include "attune.image" (dict "root" . "image" .Values.images.migrations) }}
imagePullPolicy: {{ .Values.images.migrations.pullPolicy }}
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
env:
- name: MIGRATIONS_DIR
value: /migrations
resources:
{{- toYaml .Values.jobs.migrations.resources | nindent 12 }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "attune.fullname" . }}-init-user
labels:
{{- include "attune.labels" . | nindent 4 }}
app.kubernetes.io/component: init-user
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-weight: "-10"
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
ttlSecondsAfterFinished: {{ .Values.jobs.initUser.ttlSecondsAfterFinished }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "init-user") | nindent 8 }}
spec:
restartPolicy: OnFailure
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
containers:
- name: init-user
image: {{ include "attune.image" (dict "root" . "image" .Values.images.initUser) }}
imagePullPolicy: {{ .Values.images.initUser.pullPolicy }}
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
command: ["/bin/sh", "-ec"]
args:
- |
until PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tAc "SELECT to_regclass('${DB_SCHEMA}.identity')" | grep -q identity; do
echo "waiting for database schema";
sleep 2;
done
exec /init-user.sh
resources:
{{- toYaml .Values.jobs.initUser.resources | nindent 12 }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "attune.fullname" . }}-init-packs
labels:
{{- include "attune.labels" . | nindent 4 }}
app.kubernetes.io/component: init-packs
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-weight: "0"
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
ttlSecondsAfterFinished: {{ .Values.jobs.initPacks.ttlSecondsAfterFinished }}
template:
metadata:
labels:
{{- include "attune.componentLabels" (dict "root" . "component" "init-packs") | nindent 8 }}
spec:
restartPolicy: OnFailure
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
{{- end }}
containers:
- name: init-packs
image: {{ include "attune.image" (dict "root" . "image" .Values.images.initPacks) }}
imagePullPolicy: {{ .Values.images.initPacks.pullPolicy }}
envFrom:
- secretRef:
name: {{ include "attune.secretName" . }}
command: ["/bin/sh", "-ec"]
args:
- |
until python3 - <<'PY'
import os
import psycopg2
conn = psycopg2.connect(
host=os.environ["DB_HOST"],
port=os.environ["DB_PORT"],
user=os.environ["DB_USER"],
password=os.environ["DB_PASSWORD"],
dbname=os.environ["DB_NAME"],
)
try:
with conn.cursor() as cur:
cur.execute("SET search_path TO %s, public" % os.environ["DB_SCHEMA"])
cur.execute("SELECT to_regclass(%s)", (f"{os.environ['DB_SCHEMA']}.identity",))
value = cur.fetchone()[0]
raise SystemExit(0 if value else 1)
finally:
conn.close()
PY
do
echo "waiting for database schema";
sleep 2;
done
exec /init-packs.sh
volumeMounts:
- name: packs
mountPath: /opt/attune/packs
- name: runtime-envs
mountPath: /opt/attune/runtime_envs
- name: artifacts
mountPath: /opt/attune/artifacts
resources:
{{- toYaml .Values.jobs.initPacks.resources | nindent 12 }}
volumes:
- name: packs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-packs
- name: runtime-envs
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-runtime-envs
- name: artifacts
persistentVolumeClaim:
claimName: {{ include "attune.fullname" . }}-artifacts

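The hook weights order the Jobs deterministically: migrations (-20) runs first, init-user (-10) second, init-packs (0) last, and the `before-hook-creation,hook-succeeded` delete policy removes each Job once it succeeds. To watch them fly by during an upgrade, assuming a release named `attune`:

```sh
kubectl get jobs -l app.kubernetes.io/instance=attune --watch
```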

@@ -0,0 +1,53 @@
{{- if .Values.sharedStorage.packs.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "attune.fullname" . }}-packs
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
accessModes:
{{- toYaml .Values.sharedStorage.packs.accessModes | nindent 4 }}
resources:
requests:
storage: {{ .Values.sharedStorage.packs.size }}
{{- if .Values.sharedStorage.packs.storageClassName }}
storageClassName: {{ .Values.sharedStorage.packs.storageClassName }}
{{- end }}
---
{{- end }}
{{- if .Values.sharedStorage.runtimeEnvs.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "attune.fullname" . }}-runtime-envs
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
accessModes:
{{- toYaml .Values.sharedStorage.runtimeEnvs.accessModes | nindent 4 }}
resources:
requests:
storage: {{ .Values.sharedStorage.runtimeEnvs.size }}
{{- if .Values.sharedStorage.runtimeEnvs.storageClassName }}
storageClassName: {{ .Values.sharedStorage.runtimeEnvs.storageClassName }}
{{- end }}
---
{{- end }}
{{- if .Values.sharedStorage.artifacts.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "attune.fullname" . }}-artifacts
labels:
{{- include "attune.labels" . | nindent 4 }}
spec:
accessModes:
{{- toYaml .Values.sharedStorage.artifacts.accessModes | nindent 4 }}
resources:
requests:
storage: {{ .Values.sharedStorage.artifacts.size }}
{{- if .Values.sharedStorage.artifacts.storageClassName }}
storageClassName: {{ .Values.sharedStorage.artifacts.storageClassName }}
{{- end }}
{{- end }}

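These claims default to ReadWriteMany (see values.yaml below), so the cluster needs an RWX-capable provisioner. On clusters where the default class is RWO-only, an override along these lines selects one explicitly; the class name is illustrative:

```yaml
sharedStorage:
  packs:
    storageClassName: nfs-client
  runtimeEnvs:
    storageClassName: nfs-client
  artifacts:
    storageClassName: nfs-client
```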

@@ -0,0 +1,31 @@
{{- if not .Values.security.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "attune.secretName" . }}
labels:
{{- include "attune.labels" . | nindent 4 }}
type: Opaque
stringData:
ATTUNE__SECURITY__JWT_SECRET: {{ .Values.security.jwtSecret | quote }}
ATTUNE__SECURITY__ENCRYPTION_KEY: {{ .Values.security.encryptionKey | quote }}
ATTUNE__DATABASE__URL: {{ include "attune.databaseUrl" . | quote }}
ATTUNE__MESSAGE_QUEUE__URL: {{ include "attune.rabbitmqUrl" . | quote }}
ATTUNE__REDIS__URL: {{ include "attune.redisUrl" . | quote }}
DB_HOST: {{ include "attune.postgresqlServiceName" . | quote }}
DB_PORT: {{ .Values.database.port | quote }}
DB_USER: {{ .Values.database.username | quote }}
DB_PASSWORD: {{ .Values.database.password | quote }}
DB_NAME: {{ .Values.database.database | quote }}
DB_SCHEMA: {{ .Values.database.schema | quote }}
TEST_LOGIN: {{ .Values.bootstrap.testUser.login | quote }}
TEST_DISPLAY_NAME: {{ .Values.bootstrap.testUser.displayName | quote }}
TEST_PASSWORD: {{ .Values.bootstrap.testUser.password | quote }}
DEFAULT_ADMIN_LOGIN: {{ .Values.bootstrap.testUser.login | quote }}
DEFAULT_ADMIN_PERMISSION_SET_REF: "core.admin"
SOURCE_PACKS_DIR: "/source/packs"
TARGET_PACKS_DIR: "/opt/attune/packs"
RUNTIME_ENVS_DIR: "/opt/attune/runtime_envs"
ARTIFACTS_DIR: "/opt/attune/artifacts"
LOADER_SCRIPT: "/scripts/load_core_pack.py"
{{- end }}

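To keep credentials out of values files entirely, pre-create a Secret carrying the same keys as the stringData block above and point the chart at it; the secret name here is an assumption:

```yaml
security:
  existingSecret: attune-prod-secrets   # must define every key listed above
```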
charts/attune/values.yaml (new file, 253 lines)

@@ -0,0 +1,253 @@
nameOverride: ""
fullnameOverride: ""
global:
imageRegistry: ""
imageNamespace: ""
imageTag: edge
imagePullSecrets: []
security:
existingSecret: ""
jwtSecret: change-me-in-production
encryptionKey: change-me-in-production-32-bytes-minimum
database:
schema: public
username: attune
password: attune
database: attune
host: ""
port: 5432
url: ""
postgresql:
enabled: true
image:
repository: timescale/timescaledb
tag: 2.17.2-pg16
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 20Gi
storageClassName: ""
resources: {}
rabbitmq:
username: attune
password: attune
host: ""
port: 5672
url: ""
managementPort: 15672
enabled: true
image:
repository: rabbitmq
tag: 3.13-management-alpine
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 8Gi
storageClassName: ""
resources: {}
redis:
enabled: true
host: ""
port: 6379
url: ""
image:
repository: redis
tag: 7-alpine
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 8Gi
storageClassName: ""
resources: {}
bootstrap:
testUser:
login: test@attune.local
displayName: Test User
password: TestPass123!
sharedStorage:
packs:
enabled: true
accessModes:
- ReadWriteMany
size: 2Gi
storageClassName: ""
runtimeEnvs:
enabled: true
accessModes:
- ReadWriteMany
size: 10Gi
storageClassName: ""
artifacts:
enabled: true
accessModes:
- ReadWriteMany
size: 20Gi
storageClassName: ""
images:
api:
repository: attune-api
tag: ""
pullPolicy: IfNotPresent
executor:
repository: attune-executor
tag: ""
pullPolicy: IfNotPresent
worker:
repository: attune-worker
tag: ""
pullPolicy: IfNotPresent
sensor:
repository: nikolaik/python-nodejs
tag: python3.12-nodejs22-slim
pullPolicy: IfNotPresent
notifier:
repository: attune-notifier
tag: ""
pullPolicy: IfNotPresent
web:
repository: attune-web
tag: ""
pullPolicy: IfNotPresent
migrations:
repository: attune-migrations
tag: ""
pullPolicy: IfNotPresent
initUser:
repository: attune-init-user
tag: ""
pullPolicy: IfNotPresent
initPacks:
repository: attune-init-packs
tag: ""
pullPolicy: IfNotPresent
agent:
repository: attune-agent
tag: ""
pullPolicy: IfNotPresent
jobs:
migrations:
ttlSecondsAfterFinished: 300
resources: {}
initUser:
ttlSecondsAfterFinished: 300
resources: {}
initPacks:
ttlSecondsAfterFinished: 300
resources: {}
api:
replicaCount: 1
service:
type: ClusterIP
port: 8080
resources: {}
executor:
replicaCount: 1
resources: {}
worker:
replicaCount: 1
runtimes: shell,python,node,native
name: worker-full-01
resources: {}
sensor:
replicaCount: 1
runtimes: shell,python,node,native
logLevel: debug
resources: {}
notifier:
replicaCount: 1
service:
type: ClusterIP
port: 8081
resources: {}
web:
replicaCount: 1
service:
type: ClusterIP
port: 80
config:
environment: kubernetes
apiUrl: http://localhost:8080
wsUrl: ws://localhost:8081
resources: {}
ingress:
enabled: false
className: ""
annotations: {}
hosts:
- host: attune.local
paths:
- path: /
pathType: Prefix
tls: []
# Agent-based workers
# These deploy the universal worker agent into any container image.
# The agent auto-detects available runtimes (python, ruby, node, etc.)
# and registers with the Attune platform.
#
# Each entry creates a separate Deployment with an init container that
# copies the statically-linked agent binary into the worker container.
#
# Supported fields per worker:
# name (required) - Unique name for this worker (used in resource names)
# image (required) - Container image with your desired runtime(s)
# replicas (optional) - Number of pod replicas (default: 1)
# runtimes (optional) - List of runtimes to expose; [] = auto-detect
# resources (optional) - Kubernetes resource requests/limits
# env (optional) - Extra environment variables (list of {name, value})
# imagePullPolicy (optional) - Pull policy for the worker image
# logLevel (optional) - RUST_LOG level (default: "info")
# runtimeClassName (optional) - Kubernetes RuntimeClass (e.g., "nvidia" for GPU)
# nodeSelector (optional) - Node selector map for pod scheduling
# tolerations (optional) - Tolerations list for pod scheduling
# stopGracePeriod (optional) - Termination grace period in seconds (default: 45)
#
# Examples:
# agentWorkers:
# - name: ruby
# image: ruby:3.3
# replicas: 2
# runtimes: [] # auto-detect
# resources: {}
#
# - name: python-gpu
# image: nvidia/cuda:12.3.1-runtime-ubuntu22.04
# replicas: 1
# runtimes: [python, shell]
# runtimeClassName: nvidia
# nodeSelector:
# gpu: "true"
# tolerations:
# - key: nvidia.com/gpu
# operator: Exists
# effect: NoSchedule
# resources:
# limits:
# nvidia.com/gpu: 1
#
# - name: custom
# image: my-org/my-custom-image:latest
# replicas: 1
# runtimes: []
# env:
# - name: MY_CUSTOM_VAR
# value: my-value
agentWorkers: []

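The placeholder `jwtSecret` and `encryptionKey` must be replaced for any real install. One way to do that at install time, with the chart path assumed:

```sh
helm install attune ./charts/attune \
  --set security.jwtSecret="$(openssl rand -hex 32)" \
  --set security.encryptionKey="$(openssl rand -hex 32)"
```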

@@ -46,6 +46,22 @@ security:
jwt_refresh_expiration: 2592000 # 30 days
encryption_key: test-encryption-key-32-chars-okay
enable_auth: true
allow_self_registration: true
oidc:
enabled: false
discovery_url: https://auth.rdrx.app/.well-known/openid-configuration
client_id: 31d194737840d32bd3afe6474826976bae346d77247a158c4dc43887278eb605
client_secret: null
redirect_uri: http://localhost:3000/auth/callback
post_logout_redirect_uri: http://localhost:3000/login
scopes:
- groups
ldap:
enabled: false
url: ldap://localhost:389
bind_dn_template: "uid={login},ou=users,dc=example,dc=com"
provider_name: ldap
provider_label: Development LDAP
# Packs directory (where pack action files are located)
packs_base_dir: ./packs
@@ -55,6 +71,11 @@ packs_base_dir: ./packs
# Pattern: {runtime_envs_dir}/{pack_ref}/{runtime_name}
runtime_envs_dir: ./runtime_envs
# Artifacts directory (shared volume for file-based artifact storage).
# File-type artifacts are written here by execution processes and served by the API.
# Pattern: {artifacts_dir}/{ref_slug}/v{version}.{ext}
artifacts_dir: ./artifacts
# Worker service configuration
worker:
service_name: attune-worker-e2e
@@ -104,3 +125,8 @@ executor:
scheduled_timeout: 120 # 2 minutes (faster feedback in dev)
timeout_check_interval: 30 # Check every 30 seconds
enable_timeout_monitor: true
# Agent binary distribution (optional - for local development)
# Binary is built via: make build-agent
# agent:
# binary_dir: ./target/x86_64-unknown-linux-musl/release


@@ -86,6 +86,48 @@ security:
# Enable authentication
enable_auth: true
# Login page defaults for the web UI. Users can still override with:
# /login?auth=direct
# /login?auth=<provider_name>
login_page:
show_local_login: true
show_oidc_login: true
show_ldap_login: true
# Optional OIDC browser login configuration
oidc:
enabled: false
discovery_url: https://auth.example.com/.well-known/openid-configuration
client_id: your-confidential-client-id
provider_name: sso
provider_label: Example SSO
provider_icon_url: https://auth.example.com/assets/logo.svg
client_secret: your-confidential-client-secret
redirect_uri: http://localhost:3000/auth/callback
post_logout_redirect_uri: http://localhost:3000/login
scopes:
- groups
# Optional LDAP authentication configuration
ldap:
enabled: false
url: ldap://ldap.example.com:389
# Direct-bind mode: construct DN from template
# bind_dn_template: "uid={login},ou=users,dc=example,dc=com"
# Search-and-bind mode: search for user with a service account
user_search_base: "ou=users,dc=example,dc=com"
user_filter: "(uid={login})"
search_bind_dn: "cn=readonly,dc=example,dc=com"
search_bind_password: "readonly-password"
login_attr: uid
email_attr: mail
display_name_attr: cn
group_attr: memberOf
starttls: false
danger_skip_tls_verify: false
provider_name: ldap
provider_label: Company LDAP
# Worker configuration (optional, for worker services)
# Uncomment and configure if running worker processes
# worker:

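Before enabling the LDAP block above, the search-and-bind settings can be smoke-tested from a shell. A sketch reusing the example's illustrative server, base DN, and service account:

```sh
ldapsearch -H ldap://ldap.example.com:389 \
  -D "cn=readonly,dc=example,dc=com" -w readonly-password \
  -b "ou=users,dc=example,dc=com" "(uid=alice)" uid mail cn memberOf
```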

@@ -48,6 +48,7 @@ security:
jwt_refresh_expiration: 3600 # 1 hour
encryption_key: test-encryption-key-32-chars-okay
enable_auth: true
allow_self_registration: true
# Test packs directory (use /tmp for tests to avoid permission issues)
packs_base_dir: /tmp/attune-test-packs


@@ -26,7 +26,9 @@ async-trait = { workspace = true }
futures = { workspace = true }
# Web framework
axum = { workspace = true }
axum = { workspace = true, features = ["multipart"] }
axum-extra = { version = "0.10", features = ["cookie"] }
cookie = "0.18"
tower = { workspace = true }
tower-http = { workspace = true }
@@ -67,21 +69,32 @@ jsonschema = { workspace = true }
# HTTP client
reqwest = { workspace = true }
openidconnect = "4.0"
ldap3 = { version = "0.12", default-features = false, features = ["sync", "tls-rustls-ring"] }
url = { workspace = true }
# Archive/compression
tar = { workspace = true }
flate2 = { workspace = true }
# Temp files (used for pack upload extraction)
tempfile = { workspace = true }
# Authentication
jsonwebtoken = { version = "10.2", features = ["rust_crypto"] }
argon2 = { workspace = true }
rand = "0.9"
rand = "0.10"
# HMAC and cryptography
hmac = "0.12"
sha1 = "0.10"
sha2 = { workspace = true }
hex = "0.4"
subtle = "2.6"
# OpenAPI/Swagger
utoipa = { workspace = true, features = ["axum_extras"] }
utoipa-swagger-ui = { version = "9.0", features = ["axum"] }
jsonwebtoken = { workspace = true, features = ["rust_crypto"] }
[dev-dependencies]
mockall = { workspace = true }

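The new `multipart` feature on axum is what the pack-upload extraction (see the tempfile comment above) relies on. A minimal sketch of the kind of handler it enables; the route and field handling are hypothetical, not taken from this commit:

```rust
use axum::{extract::Multipart, routing::post, Router};

// Hypothetical upload handler: drains each multipart field in turn.
async fn upload_pack(mut multipart: Multipart) -> &'static str {
    while let Ok(Some(field)) = multipart.next_field().await {
        // name() borrows the field, so capture it before bytes() consumes it.
        let name = field.name().unwrap_or("unnamed").to_string();
        let bytes = field.bytes().await.unwrap_or_default();
        println!("received field {name}: {} bytes", bytes.len());
    }
    "ok"
}

fn main() {
    // Wire the route into whichever server setup the service uses.
    let _app: Router = Router::new().route("/packs/upload", post(upload_pack));
}
```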

@@ -1,389 +1,11 @@
//! JWT token generation and validation
//!
//! This module re-exports all JWT functionality from `attune_common::auth::jwt`.
//! The canonical implementation lives in the common crate so that all services
//! (API, worker, sensor) share the same token types and signing logic.
use chrono::{Duration, Utc};
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use serde::{Deserialize, Serialize};
use thiserror::Error;
#[derive(Debug, Error)]
pub enum JwtError {
#[error("Failed to encode JWT: {0}")]
EncodeError(String),
#[error("Failed to decode JWT: {0}")]
DecodeError(String),
#[error("Token has expired")]
Expired,
#[error("Invalid token")]
Invalid,
}
/// JWT Claims structure
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Claims {
/// Subject (identity ID)
pub sub: String,
/// Identity login
pub login: String,
/// Issued at (Unix timestamp)
pub iat: i64,
/// Expiration time (Unix timestamp)
pub exp: i64,
/// Token type (access or refresh)
#[serde(default)]
pub token_type: TokenType,
/// Optional scope (e.g., "sensor", "service")
#[serde(skip_serializing_if = "Option::is_none")]
pub scope: Option<String>,
/// Optional metadata (e.g., trigger_types for sensors)
#[serde(skip_serializing_if = "Option::is_none")]
pub metadata: Option<serde_json::Value>,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "lowercase")]
pub enum TokenType {
Access,
Refresh,
Sensor,
}
impl Default for TokenType {
fn default() -> Self {
Self::Access
}
}
/// Configuration for JWT tokens
#[derive(Debug, Clone)]
pub struct JwtConfig {
/// Secret key for signing tokens
pub secret: String,
/// Access token expiration duration (in seconds)
pub access_token_expiration: i64,
/// Refresh token expiration duration (in seconds)
pub refresh_token_expiration: i64,
}
impl Default for JwtConfig {
fn default() -> Self {
Self {
secret: "insecure_default_secret_change_in_production".to_string(),
access_token_expiration: 3600, // 1 hour
refresh_token_expiration: 604800, // 7 days
}
}
}
/// Generate a JWT access token
///
/// # Arguments
/// * `identity_id` - The identity ID
/// * `login` - The identity login
/// * `config` - JWT configuration
///
/// # Returns
/// * `Result<String, JwtError>` - The encoded JWT token
pub fn generate_access_token(
identity_id: i64,
login: &str,
config: &JwtConfig,
) -> Result<String, JwtError> {
generate_token(identity_id, login, config, TokenType::Access)
}
/// Generate a JWT refresh token
///
/// # Arguments
/// * `identity_id` - The identity ID
/// * `login` - The identity login
/// * `config` - JWT configuration
///
/// # Returns
/// * `Result<String, JwtError>` - The encoded JWT token
pub fn generate_refresh_token(
identity_id: i64,
login: &str,
config: &JwtConfig,
) -> Result<String, JwtError> {
generate_token(identity_id, login, config, TokenType::Refresh)
}
/// Generate a JWT token
///
/// # Arguments
/// * `identity_id` - The identity ID
/// * `login` - The identity login
/// * `config` - JWT configuration
/// * `token_type` - Type of token to generate
///
/// # Returns
/// * `Result<String, JwtError>` - The encoded JWT token
pub fn generate_token(
identity_id: i64,
login: &str,
config: &JwtConfig,
token_type: TokenType,
) -> Result<String, JwtError> {
let now = Utc::now();
let expiration = match token_type {
TokenType::Access => config.access_token_expiration,
TokenType::Refresh => config.refresh_token_expiration,
TokenType::Sensor => 86400, // Sensor tokens handled separately via generate_sensor_token()
};
let exp = (now + Duration::seconds(expiration)).timestamp();
let claims = Claims {
sub: identity_id.to_string(),
login: login.to_string(),
iat: now.timestamp(),
exp,
token_type,
scope: None,
metadata: None,
};
encode(
&Header::default(),
&claims,
&EncodingKey::from_secret(config.secret.as_bytes()),
)
.map_err(|e| JwtError::EncodeError(e.to_string()))
}
/// Generate a sensor token with specific trigger types
///
/// # Arguments
/// * `identity_id` - The identity ID for the sensor
/// * `sensor_ref` - The sensor reference (e.g., "sensor:core.timer")
/// * `trigger_types` - List of trigger types this sensor can create events for
/// * `config` - JWT configuration
/// * `ttl_seconds` - Time to live in seconds (default: 24 hours)
///
/// # Returns
/// * `Result<String, JwtError>` - The encoded JWT token
pub fn generate_sensor_token(
identity_id: i64,
sensor_ref: &str,
trigger_types: Vec<String>,
config: &JwtConfig,
ttl_seconds: Option<i64>,
) -> Result<String, JwtError> {
let now = Utc::now();
let expiration = ttl_seconds.unwrap_or(86400); // Default: 24 hours
let exp = (now + Duration::seconds(expiration)).timestamp();
let metadata = serde_json::json!({
"trigger_types": trigger_types,
});
let claims = Claims {
sub: identity_id.to_string(),
login: sensor_ref.to_string(),
iat: now.timestamp(),
exp,
token_type: TokenType::Sensor,
scope: Some("sensor".to_string()),
metadata: Some(metadata),
};
encode(
&Header::default(),
&claims,
&EncodingKey::from_secret(config.secret.as_bytes()),
)
.map_err(|e| JwtError::EncodeError(e.to_string()))
}
/// Validate and decode a JWT token
///
/// # Arguments
/// * `token` - The JWT token string
/// * `config` - JWT configuration
///
/// # Returns
/// * `Result<Claims, JwtError>` - The decoded claims if valid
pub fn validate_token(token: &str, config: &JwtConfig) -> Result<Claims, JwtError> {
let validation = Validation::default();
decode::<Claims>(
token,
&DecodingKey::from_secret(config.secret.as_bytes()),
&validation,
)
.map(|data| data.claims)
.map_err(|e| {
if e.to_string().contains("ExpiredSignature") {
JwtError::Expired
} else {
JwtError::DecodeError(e.to_string())
}
})
}
/// Extract token from Authorization header
///
/// # Arguments
/// * `auth_header` - The Authorization header value
///
/// # Returns
/// * `Option<&str>` - The token if present and valid format
pub fn extract_token_from_header(auth_header: &str) -> Option<&str> {
if auth_header.starts_with("Bearer ") {
Some(&auth_header[7..])
} else {
None
}
}
#[cfg(test)]
mod tests {
use super::*;
fn test_config() -> JwtConfig {
JwtConfig {
secret: "test_secret_key_for_testing".to_string(),
access_token_expiration: 3600,
refresh_token_expiration: 604800,
}
}
#[test]
fn test_generate_and_validate_access_token() {
let config = test_config();
let token =
generate_access_token(123, "testuser", &config).expect("Failed to generate token");
let claims = validate_token(&token, &config).expect("Failed to validate token");
assert_eq!(claims.sub, "123");
assert_eq!(claims.login, "testuser");
assert_eq!(claims.token_type, TokenType::Access);
}
#[test]
fn test_generate_and_validate_refresh_token() {
let config = test_config();
let token =
generate_refresh_token(456, "anotheruser", &config).expect("Failed to generate token");
let claims = validate_token(&token, &config).expect("Failed to validate token");
assert_eq!(claims.sub, "456");
assert_eq!(claims.login, "anotheruser");
assert_eq!(claims.token_type, TokenType::Refresh);
}
#[test]
fn test_invalid_token() {
let config = test_config();
let result = validate_token("invalid.token.here", &config);
assert!(result.is_err());
}
#[test]
fn test_token_with_wrong_secret() {
let config = test_config();
let token = generate_access_token(789, "user", &config).expect("Failed to generate token");
let wrong_config = JwtConfig {
secret: "different_secret".to_string(),
..config
};
let result = validate_token(&token, &wrong_config);
assert!(result.is_err());
}
#[test]
fn test_expired_token() {
// Create a token that's already expired by setting exp in the past
let now = Utc::now().timestamp();
let expired_claims = Claims {
sub: "999".to_string(),
login: "expireduser".to_string(),
iat: now - 3600,
exp: now - 1800, // Expired 30 minutes ago
token_type: TokenType::Access,
scope: None,
metadata: None,
};
let config = test_config();
let expired_token = encode(
&Header::default(),
&expired_claims,
&EncodingKey::from_secret(config.secret.as_bytes()),
)
.expect("Failed to encode token");
// Validate the expired token
let result = validate_token(&expired_token, &config);
assert!(matches!(result, Err(JwtError::Expired)));
}
#[test]
fn test_extract_token_from_header() {
let header = "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9";
let token = extract_token_from_header(header);
assert_eq!(token, Some("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"));
let invalid_header = "Token abc123";
let token = extract_token_from_header(invalid_header);
assert_eq!(token, None);
let no_token = "Bearer ";
let token = extract_token_from_header(no_token);
assert_eq!(token, Some(""));
}
#[test]
fn test_claims_serialization() {
let claims = Claims {
sub: "123".to_string(),
login: "testuser".to_string(),
iat: 1234567890,
exp: 1234571490,
token_type: TokenType::Access,
scope: None,
metadata: None,
};
let json = serde_json::to_string(&claims).expect("Failed to serialize");
let deserialized: Claims = serde_json::from_str(&json).expect("Failed to deserialize");
assert_eq!(claims.sub, deserialized.sub);
assert_eq!(claims.login, deserialized.login);
assert_eq!(claims.token_type, deserialized.token_type);
}
#[test]
fn test_generate_sensor_token() {
let config = test_config();
let trigger_types = vec!["core.timer".to_string(), "core.webhook".to_string()];
let token = generate_sensor_token(
999,
"sensor:core.timer",
trigger_types.clone(),
&config,
Some(86400),
)
.expect("Failed to generate sensor token");
let claims = validate_token(&token, &config).expect("Failed to validate token");
assert_eq!(claims.sub, "999");
assert_eq!(claims.login, "sensor:core.timer");
assert_eq!(claims.token_type, TokenType::Sensor);
assert_eq!(claims.scope, Some("sensor".to_string()));
let metadata = claims.metadata.expect("Metadata should be present");
let trigger_types_from_token = metadata["trigger_types"]
.as_array()
.expect("trigger_types should be an array");
assert_eq!(trigger_types_from_token.len(), 2);
}
}
pub use attune_common::auth::jwt::{
extract_token_from_header, generate_access_token, generate_execution_token,
generate_refresh_token, generate_sensor_token, generate_token, validate_token, Claims,
JwtConfig, JwtError, TokenType,
};

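Because the module now re-exports the common-crate implementation under the same path, call sites compile unchanged. A minimal sketch of a consumer inside the API crate, using only signatures visible in the removed code:

```rust
use crate::auth::jwt::{generate_access_token, validate_token, JwtConfig, JwtError};

fn issue_and_check() -> Result<(), JwtError> {
    let config = JwtConfig::default();
    let token = generate_access_token(42, "alice", &config)?;
    let claims = validate_token(&token, &config)?;
    assert_eq!(claims.login, "alice");
    Ok(())
}
```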
crates/api/src/auth/ldap.rs (new file, 504 lines)

@@ -0,0 +1,504 @@
//! LDAP authentication helpers for username/password login.
use attune_common::{
config::LdapConfig,
repositories::{
identity::{
CreateIdentityInput, IdentityRepository, IdentityRoleAssignmentRepository,
UpdateIdentityInput,
},
Create, Update,
},
};
use ldap3::{dn_escape, ldap_escape, Ldap, LdapConnAsync, LdapConnSettings, Scope, SearchEntry};
use serde::{Deserialize, Serialize};
use serde_json::json;
use sha2::{Digest, Sha256};
use crate::{
auth::jwt::{generate_access_token, generate_refresh_token},
dto::TokenResponse,
middleware::error::ApiError,
state::SharedState,
};
/// Claims extracted from the LDAP directory for an authenticated user.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LdapUserClaims {
/// The LDAP server URL the user was authenticated against.
pub server_url: String,
/// The user's full distinguished name.
pub dn: String,
/// Login attribute value (uid, sAMAccountName, etc.).
pub login: Option<String>,
/// Email address.
pub email: Option<String>,
/// Display name (cn).
pub display_name: Option<String>,
/// Group memberships (memberOf values).
pub groups: Vec<String>,
}
/// The result of a successful LDAP authentication.
#[derive(Debug, Clone)]
pub struct LdapAuthenticatedIdentity {
pub token_response: TokenResponse,
}
/// Authenticate a user against the configured LDAP directory.
///
/// This performs a bind (either direct or search+bind) to verify
/// the user's credentials, then fetches their attributes and upserts
/// the identity in the database.
pub async fn authenticate(
state: &SharedState,
login: &str,
password: &str,
) -> Result<LdapAuthenticatedIdentity, ApiError> {
let ldap_config = ldap_config(state)?;
// Connect and authenticate
let claims = if ldap_config.bind_dn_template.is_some() {
direct_bind(&ldap_config, login, password).await?
} else {
search_and_bind(&ldap_config, login, password).await?
};
// Upsert identity in DB and issue JWT tokens
let identity = upsert_identity(state, &claims).await?;
if identity.frozen {
return Err(ApiError::Forbidden(
"Identity is frozen and cannot authenticate".to_string(),
));
}
let access_token = generate_access_token(identity.id, &identity.login, &state.jwt_config)?;
let refresh_token = generate_refresh_token(identity.id, &identity.login, &state.jwt_config)?;
let token_response = TokenResponse::new(
access_token,
refresh_token,
state.jwt_config.access_token_expiration,
)
.with_user(
identity.id,
identity.login.clone(),
identity.display_name.clone(),
);
Ok(LdapAuthenticatedIdentity { token_response })
}
// ---------------------------------------------------------------------------
// Internal helpers
// ---------------------------------------------------------------------------
fn ldap_config(state: &SharedState) -> Result<LdapConfig, ApiError> {
let config = state
.config
.security
.ldap
.clone()
.filter(|ldap| ldap.enabled)
.ok_or_else(|| {
ApiError::NotImplemented("LDAP authentication is not configured".to_string())
})?;
// Reject partial service-account configuration: having exactly one of
// search_bind_dn / search_bind_password is almost certainly a config
// error and would silently fall back to anonymous search, which is a
// very different security posture than the admin intended.
let has_dn = config.search_bind_dn.is_some();
let has_pw = config.search_bind_password.is_some();
if has_dn != has_pw {
let missing = if has_dn {
"search_bind_password"
} else {
"search_bind_dn"
};
return Err(ApiError::InternalServerError(format!(
"LDAP misconfiguration: search_bind_dn and search_bind_password must both be set \
or both be omitted (missing {missing})"
)));
}
Ok(config)
}
/// Build an `LdapConnSettings` from the config.
fn conn_settings(config: &LdapConfig) -> LdapConnSettings {
let mut settings = LdapConnSettings::new();
if config.starttls {
settings = settings.set_starttls(true);
}
if config.danger_skip_tls_verify {
settings = settings.set_no_tls_verify(true);
}
settings
}
/// Open a new LDAP connection.
async fn connect(config: &LdapConfig) -> Result<Ldap, ApiError> {
let settings = conn_settings(config);
let url = config.url.as_deref().unwrap_or_default();
let (conn, ldap) = LdapConnAsync::with_settings(settings, url)
.await
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to connect to LDAP server: {err}"))
})?;
// Drive the connection in the background
ldap3::drive!(conn);
Ok(ldap)
}
/// Direct-bind authentication: construct the DN from the template and bind.
async fn direct_bind(
config: &LdapConfig,
login: &str,
password: &str,
) -> Result<LdapUserClaims, ApiError> {
let template = config.bind_dn_template.as_deref().unwrap_or_default();
// Escape the login value for safe interpolation into a Distinguished Name
// (RFC 4514). Without this, characters like `,`, `+`, `"`, `\`, `<`, `>`,
// `;`, `=`, NUL, `#` (leading), or space (leading/trailing) in the username
// would alter the DN structure.
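    // For example, dn_escape("jdoe,ou=x") yields "jdoe\2cou\3dx" (see the
    // escaping tests at the bottom of this file).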
let escaped_login = dn_escape(login);
let bind_dn = template.replace("{login}", &escaped_login);
let mut ldap = connect(config).await?;
// Bind as the user
let result = ldap
.simple_bind(&bind_dn, password)
.await
.map_err(|err| ApiError::InternalServerError(format!("LDAP bind failed: {err}")))?;
if result.rc != 0 {
let _ = ldap.unbind().await;
return Err(ApiError::Unauthorized(
"Invalid LDAP credentials".to_string(),
));
}
// Fetch user attributes
let claims = fetch_user_attributes(config, &mut ldap, &bind_dn).await?;
let _ = ldap.unbind().await;
Ok(claims)
}
/// Search-and-bind authentication:
/// 1. Bind as the service account (or anonymous)
/// 2. Search for the user entry (must match exactly one)
/// 3. Re-bind as the user with their DN + password
async fn search_and_bind(
config: &LdapConfig,
login: &str,
password: &str,
) -> Result<LdapUserClaims, ApiError> {
let search_base = config.user_search_base.as_deref().ok_or_else(|| {
ApiError::InternalServerError(
"LDAP user_search_base is required when bind_dn_template is not set".to_string(),
)
})?;
let mut ldap = connect(config).await?;
// Step 1: Bind as service account or anonymous.
// Partial config (only one of dn/password) is already rejected by
    // ldap_config(), so this check covers every valid configuration state.
if let (Some(bind_dn), Some(bind_pw)) = (
config.search_bind_dn.as_deref(),
config.search_bind_password.as_deref(),
) {
let result = ldap.simple_bind(bind_dn, bind_pw).await.map_err(|err| {
ApiError::InternalServerError(format!("LDAP service bind failed: {err}"))
})?;
if result.rc != 0 {
let _ = ldap.unbind().await;
return Err(ApiError::InternalServerError(
"LDAP service account bind failed — check search_bind_dn and search_bind_password"
.to_string(),
));
}
}
    // With no service account configured, the search proceeds anonymously on the already-open connection.
// Step 2: Search for the user.
// Escape the login value for safe interpolation into an LDAP search filter
// (RFC 4515). Without this, characters like `(`, `)`, `*`, `\`, and NUL in
// the username could broaden the filter, match unintended entries, or break
// the search entirely.
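    // For example, ldap_escape("admin)(|(uid=*))") yields
    // "admin\29\28|\28uid=\2a\29\29" (see the filter-escaping test below).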
let escaped_login = ldap_escape(login);
let filter = config.user_filter.replace("{login}", &escaped_login);
let attrs = vec![
config.login_attr.as_str(),
config.email_attr.as_str(),
config.display_name_attr.as_str(),
config.group_attr.as_str(),
"dn",
];
let (results, _result) = ldap
.search(search_base, Scope::Subtree, &filter, attrs)
.await
.map_err(|err| ApiError::InternalServerError(format!("LDAP user search failed: {err}")))?
.success()
.map_err(|err| ApiError::InternalServerError(format!("LDAP search error: {err}")))?;
// The search must return exactly one entry. Zero means the user was not
// found; more than one means the filter or directory layout is ambiguous
// and we must not guess which identity to authenticate.
let result_count = results.len();
if result_count == 0 {
let _ = ldap.unbind().await;
return Err(ApiError::Unauthorized(
"Invalid LDAP credentials".to_string(),
));
}
if result_count > 1 {
let _ = ldap.unbind().await;
return Err(ApiError::InternalServerError(format!(
"LDAP user search returned {result_count} entries (expected exactly 1) — \
tighten the user_filter or user_search_base to ensure uniqueness"
)));
}
// SAFETY: result_count == 1 guaranteed by the checks above.
let entry = results
.into_iter()
.next()
.expect("checked result_count == 1");
let search_entry = SearchEntry::construct(entry);
let user_dn = search_entry.dn.clone();
// Step 3: Re-bind as the user
let result = ldap
.simple_bind(&user_dn, password)
.await
.map_err(|err| ApiError::InternalServerError(format!("LDAP user bind failed: {err}")))?;
if result.rc != 0 {
let _ = ldap.unbind().await;
return Err(ApiError::Unauthorized(
"Invalid LDAP credentials".to_string(),
));
}
let claims = extract_claims(config, &search_entry);
let _ = ldap.unbind().await;
Ok(claims)
}
/// Fetch the user's LDAP attributes after a successful bind.
async fn fetch_user_attributes(
config: &LdapConfig,
ldap: &mut Ldap,
user_dn: &str,
) -> Result<LdapUserClaims, ApiError> {
let attrs = vec![
config.login_attr.as_str(),
config.email_attr.as_str(),
config.display_name_attr.as_str(),
config.group_attr.as_str(),
];
let (results, _result) = ldap
.search(user_dn, Scope::Base, "(objectClass=*)", attrs)
.await
.map_err(|err| {
ApiError::InternalServerError(format!(
"LDAP attribute fetch failed for DN {user_dn}: {err}"
))
})?
.success()
.map_err(|err| {
ApiError::InternalServerError(format!("LDAP attribute search error: {err}"))
})?;
let entry = results.into_iter().next().ok_or_else(|| {
ApiError::InternalServerError(format!("LDAP entry not found for DN: {user_dn}"))
})?;
let search_entry = SearchEntry::construct(entry);
Ok(extract_claims(config, &search_entry))
}
/// Extract user claims from an LDAP search entry.
fn extract_claims(config: &LdapConfig, entry: &SearchEntry) -> LdapUserClaims {
let first_attr =
|name: &str| -> Option<String> { entry.attrs.get(name).and_then(|v| v.first()).cloned() };
let groups = entry
.attrs
.get(&config.group_attr)
.cloned()
.unwrap_or_default();
LdapUserClaims {
server_url: config.url.clone().unwrap_or_default(),
dn: entry.dn.clone(),
login: first_attr(&config.login_attr),
email: first_attr(&config.email_attr),
display_name: first_attr(&config.display_name_attr),
groups,
}
}
/// Upsert an identity row for the LDAP-authenticated user.
async fn upsert_identity(
state: &SharedState,
claims: &LdapUserClaims,
) -> Result<attune_common::models::identity::Identity, ApiError> {
let existing =
IdentityRepository::find_by_ldap_dn(&state.db, &claims.server_url, &claims.dn).await?;
let desired_login = derive_login(claims);
let display_name = claims.display_name.clone();
let attributes = json!({ "ldap": claims });
match existing {
Some(identity) => {
let updated = UpdateIdentityInput {
display_name,
password_hash: None,
attributes: Some(attributes),
frozen: None,
};
let identity = IdentityRepository::update(&state.db, identity.id, updated)
.await
.map_err(ApiError::from)?;
sync_roles(&state.db, identity.id, "ldap", &claims.groups).await?;
Ok(identity)
}
None => {
// Avoid login collisions
let login = match IdentityRepository::find_by_login(&state.db, &desired_login).await? {
Some(_) => fallback_dn_login(claims),
None => desired_login,
};
let identity = IdentityRepository::create(
&state.db,
CreateIdentityInput {
login,
display_name,
password_hash: None,
attributes,
},
)
.await
.map_err(ApiError::from)?;
sync_roles(&state.db, identity.id, "ldap", &claims.groups).await?;
Ok(identity)
}
}
}
async fn sync_roles(
db: &sqlx::PgPool,
identity_id: i64,
source: &str,
roles: &[String],
) -> Result<(), ApiError> {
IdentityRoleAssignmentRepository::replace_managed_roles(db, identity_id, source, roles)
.await
.map_err(Into::into)
}
/// Derive the login name from LDAP claims.
fn derive_login(claims: &LdapUserClaims) -> String {
claims
.login
.clone()
.or_else(|| claims.email.clone())
.unwrap_or_else(|| fallback_dn_login(claims))
}
/// Generate a deterministic fallback login from the LDAP server URL + DN.
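///
/// For example, the same server URL and DN always hash to the same
/// `ldap:<24 hex chars>` login (see the determinism test below).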
fn fallback_dn_login(claims: &LdapUserClaims) -> String {
let mut hasher = Sha256::new();
hasher.update(claims.server_url.as_bytes());
hasher.update(b":");
hasher.update(claims.dn.as_bytes());
let digest = hex::encode(hasher.finalize());
format!("ldap:{}", &digest[..24])
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn direct_bind_dn_escapes_special_characters() {
// Simulate what direct_bind does with the template
let template = "uid={login},ou=users,dc=example,dc=com";
let malicious_login = "admin,ou=admins,dc=evil,dc=com";
let escaped = dn_escape(malicious_login);
let bind_dn = template.replace("{login}", &escaped);
// The commas in the login value must be escaped so they don't
// introduce additional RDN components.
assert!(
bind_dn.contains("\\2c"),
"commas in login must be escaped in DN: {bind_dn}"
);
assert!(
bind_dn.starts_with("uid=admin\\2cou\\3dadmins\\2cdc\\3devil\\2cdc\\3dcom,ou=users"),
"DN structure must be preserved: {bind_dn}"
);
}
#[test]
fn search_filter_escapes_special_characters() {
let filter_template = "(uid={login})";
let malicious_login = "admin)(|(uid=*))";
let escaped = ldap_escape(malicious_login);
let filter = filter_template.replace("{login}", &escaped);
// The parentheses and asterisk must be escaped so they don't
// alter the filter structure.
assert!(
!filter.contains(")("),
"parentheses in login must be escaped in filter: {filter}"
);
assert!(
filter.contains("\\28"),
"open-paren must be hex-escaped: {filter}"
);
assert!(
filter.contains("\\29"),
"close-paren must be hex-escaped: {filter}"
);
assert!(
filter.contains("\\2a"),
"asterisk must be hex-escaped: {filter}"
);
}
#[test]
fn dn_escape_preserves_safe_usernames() {
let safe = "jdoe";
let escaped = dn_escape(safe);
assert_eq!(escaped.as_ref(), "jdoe");
}
#[test]
fn filter_escape_preserves_safe_usernames() {
let safe = "jdoe";
let escaped = ldap_escape(safe);
assert_eq!(escaped.as_ref(), "jdoe");
}
#[test]
fn fallback_dn_login_is_deterministic() {
let claims = LdapUserClaims {
server_url: "ldap://ldap.example.com".to_string(),
dn: "uid=test,ou=users,dc=example,dc=com".to_string(),
login: None,
email: None,
display_name: None,
groups: vec![],
};
let a = fallback_dn_login(&claims);
let b = fallback_dn_login(&claims);
assert_eq!(a, b);
assert!(a.starts_with("ldap:"));
assert_eq!(a.len(), "ldap:".len() + 24);
}
}


@@ -2,7 +2,7 @@
use axum::{
extract::{Request, State},
-    http::{header::AUTHORIZATION, StatusCode},
+    http::{header::AUTHORIZATION, HeaderMap, StatusCode},
middleware::Next,
response::{IntoResponse, Response},
Json,
@@ -10,7 +10,11 @@ use axum::{
use serde_json::json;
use std::sync::Arc;
-use super::jwt::{extract_token_from_header, validate_token, Claims, JwtConfig, TokenType};
+use attune_common::auth::jwt::{
+    extract_token_from_header, validate_token, Claims, JwtConfig, TokenType,
+};
+use super::oidc::{cookie_authenticated_user, ACCESS_COOKIE_NAME};
/// Authentication middleware state
#[derive(Clone)]
@@ -48,21 +52,7 @@ pub async fn require_auth(
mut request: Request,
next: Next,
) -> Result<Response, AuthError> {
-    // Extract Authorization header
-    let auth_header = request
-        .headers()
-        .get(AUTHORIZATION)
-        .and_then(|h| h.to_str().ok())
-        .ok_or(AuthError::MissingToken)?;
-    // Extract token from Bearer scheme
-    let token = extract_token_from_header(auth_header).ok_or(AuthError::InvalidToken)?;
-    // Validate token
-    let claims = validate_token(token, &auth.jwt_config).map_err(|e| match e {
-        super::jwt::JwtError::Expired => AuthError::ExpiredToken,
-        _ => AuthError::InvalidToken,
-    })?;
+    let claims = extract_claims(request.headers(), &auth.jwt_config)?;
// Add claims to request extensions
request
@@ -88,25 +78,19 @@ impl axum::extract::FromRequestParts<crate::state::SharedState> for RequireAuth
return Ok(RequireAuth(user.clone()));
}
-        // Otherwise, extract and validate token directly from header
-        // Extract Authorization header
-        let auth_header = parts
-            .headers
-            .get(AUTHORIZATION)
-            .and_then(|h| h.to_str().ok())
-            .ok_or(AuthError::MissingToken)?;
+        let claims = if let Some(user) =
+            cookie_authenticated_user(&parts.headers, state).map_err(map_cookie_auth_error)?
+        {
+            user.claims
+        } else {
+            extract_claims(&parts.headers, &state.jwt_config)?
+        };
-        // Extract token from Bearer scheme
-        let token = extract_token_from_header(auth_header).ok_or(AuthError::InvalidToken)?;
-        // Validate token using jwt_config from app state
-        let claims = validate_token(token, &state.jwt_config).map_err(|e| match e {
-            super::jwt::JwtError::Expired => AuthError::ExpiredToken,
-            _ => AuthError::InvalidToken,
-        })?;
-        // Allow both access tokens and sensor tokens
-        if claims.token_type != TokenType::Access && claims.token_type != TokenType::Sensor {
+        // Allow access, sensor, and execution-scoped tokens
+        if claims.token_type != TokenType::Access
+            && claims.token_type != TokenType::Sensor
+            && claims.token_type != TokenType::Execution
+        {
return Err(AuthError::InvalidToken);
}
@@ -114,6 +98,33 @@ impl axum::extract::FromRequestParts<crate::state::SharedState> for RequireAuth
}
}
+fn extract_claims(headers: &HeaderMap, jwt_config: &JwtConfig) -> Result<Claims, AuthError> {
+    if let Some(auth_header) = headers.get(AUTHORIZATION).and_then(|h| h.to_str().ok()) {
+        let token = extract_token_from_header(auth_header).ok_or(AuthError::InvalidToken)?;
+        return validate_token(token, jwt_config).map_err(|e| match e {
+            super::jwt::JwtError::Expired => AuthError::ExpiredToken,
+            _ => AuthError::InvalidToken,
+        });
+    }
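+    // An access cookie is present but no bearer token was supplied; report
+    // InvalidToken (a failed credential) rather than MissingToken.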
+    if headers
+        .get(axum::http::header::COOKIE)
+        .and_then(|value| value.to_str().ok())
+        .is_some_and(|cookies| cookies.contains(ACCESS_COOKIE_NAME))
+    {
+        return Err(AuthError::InvalidToken);
+    }
+    Err(AuthError::MissingToken)
+}
+fn map_cookie_auth_error(error: crate::middleware::error::ApiError) -> AuthError {
+    match error {
+        crate::middleware::error::ApiError::Unauthorized(_) => AuthError::InvalidToken,
+        _ => AuthError::InvalidToken,
+    }
+}
/// Authentication errors
#[derive(Debug)]
pub enum AuthError {
@@ -154,7 +165,7 @@ mod tests {
login: "testuser".to_string(),
iat: 1234567890,
exp: 1234571490,
-            token_type: super::super::jwt::TokenType::Access,
+            token_type: TokenType::Access,
scope: None,
metadata: None,
};


@@ -1,7 +1,9 @@
//! Authentication and authorization module
pub mod jwt;
+pub mod ldap;
pub mod middleware;
+pub mod oidc;
pub mod password;
pub use jwt::{generate_token, validate_token, Claims};

crates/api/src/auth/oidc.rs Normal file

@@ -0,0 +1,803 @@
//! OpenID Connect helpers for browser login.
use attune_common::{
config::OidcConfig,
repositories::{
identity::{
CreateIdentityInput, IdentityRepository, IdentityRoleAssignmentRepository,
UpdateIdentityInput,
},
Create, Update,
},
};
use axum::{
http::{header, HeaderMap, HeaderValue, StatusCode},
response::{IntoResponse, Redirect, Response},
};
use axum_extra::extract::cookie::{Cookie, SameSite};
use cookie::time::Duration as CookieDuration;
use jsonwebtoken::{
decode, decode_header,
jwk::{AlgorithmParameters, JwkSet},
Algorithm, DecodingKey, Validation,
};
use openidconnect::{
core::{CoreAuthenticationFlow, CoreClient, CoreProviderMetadata, CoreUserInfoClaims},
reqwest::Client as OidcHttpClient,
AuthorizationCode, ClientId, ClientSecret, CsrfToken, LocalizedClaim, Nonce,
OAuth2TokenResponse, PkceCodeChallenge, PkceCodeVerifier, RedirectUrl, Scope,
TokenResponse as OidcTokenResponse,
};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value as JsonValue};
use sha2::{Digest, Sha256};
use url::{form_urlencoded::byte_serialize, Url};
use crate::{
auth::jwt::{generate_access_token, generate_refresh_token, validate_token},
dto::{CurrentUserResponse, TokenResponse},
middleware::error::ApiError,
state::SharedState,
};
pub const ACCESS_COOKIE_NAME: &str = "attune_access_token";
pub const REFRESH_COOKIE_NAME: &str = "attune_refresh_token";
pub const OIDC_ID_TOKEN_COOKIE_NAME: &str = "attune_oidc_id_token";
pub const OIDC_STATE_COOKIE_NAME: &str = "attune_oidc_state";
pub const OIDC_NONCE_COOKIE_NAME: &str = "attune_oidc_nonce";
pub const OIDC_PKCE_COOKIE_NAME: &str = "attune_oidc_pkce_verifier";
pub const OIDC_REDIRECT_COOKIE_NAME: &str = "attune_oidc_redirect_to";
const LOGIN_CALLBACK_PATH: &str = "/login/callback";
#[derive(Debug, Clone, Deserialize)]
pub struct OidcDiscoveryDocument {
#[serde(flatten)]
pub metadata: CoreProviderMetadata,
#[serde(default)]
pub end_session_endpoint: Option<String>,
}
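// `end_session_endpoint` (RP-initiated logout) is flattened in alongside the
// core metadata because, to our knowledge, the standard provider-metadata
// type does not expose it.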
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OidcIdentityClaims {
pub issuer: String,
pub sub: String,
pub email: Option<String>,
pub email_verified: Option<bool>,
pub name: Option<String>,
pub preferred_username: Option<String>,
pub groups: Vec<String>,
}
#[derive(Debug, Clone, Deserialize)]
struct VerifiedIdTokenClaims {
iss: String,
sub: String,
#[serde(default)]
nonce: Option<String>,
#[serde(default)]
email: Option<String>,
#[serde(default)]
email_verified: Option<bool>,
#[serde(default)]
name: Option<String>,
#[serde(default)]
preferred_username: Option<String>,
#[serde(default)]
groups: Vec<String>,
}
#[derive(Debug, Clone)]
pub struct OidcAuthenticatedIdentity {
pub current_user: CurrentUserResponse,
pub token_response: TokenResponse,
pub id_token: String,
}
#[derive(Debug, Clone)]
pub struct OidcLoginRedirect {
pub authorization_url: String,
pub cookies: Vec<Cookie<'static>>,
}
#[derive(Debug, Clone)]
pub struct OidcLogoutRedirect {
pub redirect_url: String,
pub cookies: Vec<Cookie<'static>>,
}
#[derive(Debug, Deserialize)]
pub struct OidcCallbackQuery {
pub code: Option<String>,
pub state: Option<String>,
pub error: Option<String>,
pub error_description: Option<String>,
}
pub async fn build_login_redirect(
state: &SharedState,
redirect_to: Option<&str>,
) -> Result<OidcLoginRedirect, ApiError> {
let oidc = oidc_config(state)?;
let discovery = fetch_discovery_document(&oidc).await?;
let _http_client = OidcHttpClient::builder()
.redirect(openidconnect::reqwest::redirect::Policy::none())
.build()
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}"))
})?;
let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default();
let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| {
ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}"))
})?;
let client_secret = oidc.client_secret.clone().ok_or_else(|| {
ApiError::InternalServerError("OIDC client secret is missing".to_string())
})?;
let client_id = oidc.client_id.clone().unwrap_or_default();
let client = CoreClient::from_provider_metadata(
discovery.metadata.clone(),
ClientId::new(client_id),
Some(ClientSecret::new(client_secret)),
)
.set_redirect_uri(redirect_uri);
let redirect_target = sanitize_redirect_target(redirect_to);
let pkce = PkceCodeChallenge::new_random_sha256();
let (auth_url, csrf_state, nonce) = client
.authorize_url(
CoreAuthenticationFlow::AuthorizationCode,
CsrfToken::new_random,
Nonce::new_random,
)
.add_scope(Scope::new("openid".to_string()))
.add_scope(Scope::new("email".to_string()))
.add_scope(Scope::new("profile".to_string()))
.add_scopes(
oidc.scopes
.iter()
.filter(|scope| !matches!(scope.as_str(), "openid" | "email" | "profile"))
.cloned()
.map(Scope::new),
)
.set_pkce_challenge(pkce.0)
.url();
Ok(OidcLoginRedirect {
authorization_url: auth_url.to_string(),
cookies: vec![
build_cookie(
state,
OIDC_STATE_COOKIE_NAME,
csrf_state.secret().to_string(),
600,
true,
),
build_cookie(
state,
OIDC_NONCE_COOKIE_NAME,
nonce.secret().to_string(),
600,
true,
),
build_cookie(
state,
OIDC_PKCE_COOKIE_NAME,
pkce.1.secret().to_string(),
600,
true,
),
build_cookie(
state,
OIDC_REDIRECT_COOKIE_NAME,
redirect_target,
600,
false,
),
],
})
}
pub async fn handle_callback(
state: &SharedState,
headers: &HeaderMap,
query: &OidcCallbackQuery,
) -> Result<OidcAuthenticatedIdentity, ApiError> {
if let Some(error) = &query.error {
let description = query
.error_description
.as_deref()
.unwrap_or("OpenID Connect login failed");
return Err(ApiError::Unauthorized(format!("{error}: {description}")));
}
let code = query
.code
.as_ref()
.ok_or_else(|| ApiError::BadRequest("Missing authorization code".to_string()))?;
let returned_state = query
.state
.as_ref()
.ok_or_else(|| ApiError::BadRequest("Missing OIDC state".to_string()))?;
let expected_state = get_cookie_value(headers, OIDC_STATE_COOKIE_NAME)
.ok_or_else(|| ApiError::Unauthorized("Missing OIDC state cookie".to_string()))?;
let expected_nonce = get_cookie_value(headers, OIDC_NONCE_COOKIE_NAME)
.ok_or_else(|| ApiError::Unauthorized("Missing OIDC nonce cookie".to_string()))?;
let pkce_verifier = get_cookie_value(headers, OIDC_PKCE_COOKIE_NAME)
.ok_or_else(|| ApiError::Unauthorized("Missing OIDC PKCE verifier cookie".to_string()))?;
if returned_state != &expected_state {
return Err(ApiError::Unauthorized(
"OIDC state validation failed".to_string(),
));
}
let oidc = oidc_config(state)?;
let discovery = fetch_discovery_document(&oidc).await?;
let http_client = OidcHttpClient::builder()
.redirect(openidconnect::reqwest::redirect::Policy::none())
.build()
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to build OIDC HTTP client: {err}"))
})?;
let redirect_uri_str = oidc.redirect_uri.clone().unwrap_or_default();
let redirect_uri = RedirectUrl::new(redirect_uri_str).map_err(|err| {
ApiError::InternalServerError(format!("Invalid OIDC redirect URI: {err}"))
})?;
let client_secret = oidc.client_secret.clone().ok_or_else(|| {
ApiError::InternalServerError("OIDC client secret is missing".to_string())
})?;
let client_id = oidc.client_id.clone().unwrap_or_default();
let client = CoreClient::from_provider_metadata(
discovery.metadata.clone(),
ClientId::new(client_id),
Some(ClientSecret::new(client_secret)),
)
.set_redirect_uri(redirect_uri);
let token_response = client
.exchange_code(AuthorizationCode::new(code.clone()))
.map_err(|err| {
ApiError::InternalServerError(format!("OIDC token request is misconfigured: {err}"))
})?
.set_pkce_verifier(PkceCodeVerifier::new(pkce_verifier))
.request_async(&http_client)
.await
.map_err(|err| ApiError::Unauthorized(format!("OIDC token exchange failed: {err}")))?;
let id_token = token_response.id_token().ok_or_else(|| {
ApiError::Unauthorized("OIDC provider did not return an ID token".to_string())
})?;
let raw_id_token = id_token.to_string();
let claims = verify_id_token(&raw_id_token, &discovery, &oidc, &expected_nonce).await?;
let mut oidc_claims = OidcIdentityClaims {
issuer: claims.iss,
sub: claims.sub,
email: claims.email,
email_verified: claims.email_verified,
name: claims.name,
preferred_username: claims.preferred_username,
groups: claims.groups,
};
if let Ok(userinfo_request) = client.user_info(token_response.access_token().to_owned(), None) {
if let Ok(userinfo) = userinfo_request.request_async(&http_client).await {
merge_userinfo_claims(&mut oidc_claims, &userinfo);
}
}
let identity = upsert_identity(state, &oidc_claims).await?;
if identity.frozen {
return Err(ApiError::Forbidden(
"Identity is frozen and cannot authenticate".to_string(),
));
}
let access_token = generate_access_token(identity.id, &identity.login, &state.jwt_config)?;
let refresh_token = generate_refresh_token(identity.id, &identity.login, &state.jwt_config)?;
let token_response = TokenResponse::new(
access_token,
refresh_token,
state.jwt_config.access_token_expiration,
)
.with_user(
identity.id,
identity.login.clone(),
identity.display_name.clone(),
);
Ok(OidcAuthenticatedIdentity {
current_user: CurrentUserResponse {
id: identity.id,
login: identity.login.clone(),
display_name: identity.display_name.clone(),
},
id_token: raw_id_token,
token_response,
})
}
pub async fn build_logout_redirect(
state: &SharedState,
headers: &HeaderMap,
) -> Result<OidcLogoutRedirect, ApiError> {
let oidc = oidc_config(state)?;
let discovery = fetch_discovery_document(&oidc).await?;
let post_logout_redirect_uri = oidc
.post_logout_redirect_uri
.clone()
.unwrap_or_else(|| "/login".to_string());
let redirect_url = if let Some(end_session_endpoint) = discovery.end_session_endpoint {
let mut url = Url::parse(&end_session_endpoint).map_err(|err| {
ApiError::InternalServerError(format!("Invalid end_session_endpoint: {err}"))
})?;
{
let mut pairs = url.query_pairs_mut();
if let Some(id_token_hint) = get_cookie_value(headers, OIDC_ID_TOKEN_COOKIE_NAME) {
pairs.append_pair("id_token_hint", &id_token_hint);
}
pairs.append_pair("post_logout_redirect_uri", &post_logout_redirect_uri);
pairs.append_pair("client_id", oidc.client_id.as_deref().unwrap_or_default());
}
String::from(url)
} else {
post_logout_redirect_uri
};
Ok(OidcLogoutRedirect {
redirect_url,
cookies: clear_auth_cookies(state),
})
}
pub fn clear_auth_cookies(state: &SharedState) -> Vec<Cookie<'static>> {
[
ACCESS_COOKIE_NAME,
REFRESH_COOKIE_NAME,
OIDC_ID_TOKEN_COOKIE_NAME,
OIDC_STATE_COOKIE_NAME,
OIDC_NONCE_COOKIE_NAME,
OIDC_PKCE_COOKIE_NAME,
OIDC_REDIRECT_COOKIE_NAME,
]
.into_iter()
.map(|name| remove_cookie(state, name))
.collect()
}
pub fn build_auth_cookies(
state: &SharedState,
token_response: &TokenResponse,
id_token: &str,
) -> Vec<Cookie<'static>> {
let mut cookies = vec![
build_cookie(
state,
ACCESS_COOKIE_NAME,
token_response.access_token.clone(),
state.jwt_config.access_token_expiration,
true,
),
build_cookie(
state,
REFRESH_COOKIE_NAME,
token_response.refresh_token.clone(),
state.jwt_config.refresh_token_expiration,
true,
),
];
if !id_token.is_empty() {
cookies.push(build_cookie(
state,
OIDC_ID_TOKEN_COOKIE_NAME,
id_token.to_string(),
state.jwt_config.refresh_token_expiration,
true,
));
}
cookies
}
pub fn apply_cookies_to_headers(
headers: &mut HeaderMap,
cookies: &[Cookie<'static>],
) -> Result<(), ApiError> {
for cookie in cookies {
let value = HeaderValue::from_str(&cookie.to_string()).map_err(|err| {
ApiError::InternalServerError(format!("Failed to serialize cookie header: {err}"))
})?;
headers.append(header::SET_COOKIE, value);
}
Ok(())
}
pub fn oidc_callback_redirect_response(
state: &SharedState,
token_response: &TokenResponse,
redirect_to: Option<String>,
id_token: &str,
) -> Result<Response, ApiError> {
let redirect_target = sanitize_redirect_target(redirect_to.as_deref());
let redirect_url = format!(
"{LOGIN_CALLBACK_PATH}#access_token={}&refresh_token={}&expires_in={}&redirect_to={}",
encode_fragment_value(&token_response.access_token),
encode_fragment_value(&token_response.refresh_token),
token_response.expires_in,
encode_fragment_value(&redirect_target),
);
let mut response = Redirect::temporary(&redirect_url).into_response();
let mut cookies = build_auth_cookies(state, token_response, id_token);
cookies.push(remove_cookie(state, OIDC_STATE_COOKIE_NAME));
cookies.push(remove_cookie(state, OIDC_NONCE_COOKIE_NAME));
cookies.push(remove_cookie(state, OIDC_PKCE_COOKIE_NAME));
cookies.push(remove_cookie(state, OIDC_REDIRECT_COOKIE_NAME));
apply_cookies_to_headers(response.headers_mut(), &cookies)?;
Ok(response)
}
pub fn cookie_authenticated_user(
headers: &HeaderMap,
state: &SharedState,
) -> Result<Option<crate::auth::middleware::AuthenticatedUser>, ApiError> {
let Some(token) = get_cookie_value(headers, ACCESS_COOKIE_NAME) else {
return Ok(None);
};
let claims = validate_token(&token, &state.jwt_config).map_err(ApiError::from)?;
Ok(Some(crate::auth::middleware::AuthenticatedUser { claims }))
}
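/// Return the first value of the cookie named `name` across all `Cookie` headers.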
pub fn get_cookie_value(headers: &HeaderMap, name: &str) -> Option<String> {
headers
.get_all(header::COOKIE)
.iter()
.filter_map(|value| value.to_str().ok())
.flat_map(|value| value.split(';'))
.filter_map(|part| {
let mut pieces = part.trim().splitn(2, '=');
let key = pieces.next()?.trim();
let value = pieces.next()?.trim();
if key == name {
Some(value.to_string())
} else {
None
}
})
.next()
}
fn oidc_config(state: &SharedState) -> Result<OidcConfig, ApiError> {
state
.config
.security
.oidc
.clone()
.filter(|oidc| oidc.enabled)
.ok_or_else(|| {
ApiError::NotImplemented("OIDC authentication is not configured".to_string())
})
}
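/// Fetch and parse the provider's OIDC discovery document.
///
/// By convention this lives at the issuer's
/// `/.well-known/openid-configuration` endpoint; the exact URL comes from
/// `discovery_url` in the OIDC config.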
async fn fetch_discovery_document(oidc: &OidcConfig) -> Result<OidcDiscoveryDocument, ApiError> {
let discovery_url = oidc.discovery_url.as_deref().unwrap_or_default();
let discovery = reqwest::get(discovery_url).await.map_err(|err| {
ApiError::InternalServerError(format!("Failed to fetch OIDC discovery document: {err}"))
})?;
if !discovery.status().is_success() {
return Err(ApiError::InternalServerError(format!(
"OIDC discovery request failed with status {}",
discovery.status()
)));
}
discovery
.json::<OidcDiscoveryDocument>()
.await
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to parse OIDC discovery document: {err}"))
})
}
async fn upsert_identity(
state: &SharedState,
oidc_claims: &OidcIdentityClaims,
) -> Result<attune_common::models::identity::Identity, ApiError> {
let existing_by_subject =
IdentityRepository::find_by_oidc_subject(&state.db, &oidc_claims.issuer, &oidc_claims.sub)
.await?;
let desired_login = derive_login(oidc_claims);
let display_name = derive_display_name(oidc_claims);
let attributes = json!({
"oidc": oidc_claims,
});
match existing_by_subject {
Some(identity) => {
let updated = UpdateIdentityInput {
display_name,
password_hash: None,
attributes: Some(attributes.clone()),
frozen: None,
};
let identity = IdentityRepository::update(&state.db, identity.id, updated)
.await
.map_err(ApiError::from)?;
sync_roles(&state.db, identity.id, "oidc", &oidc_claims.groups).await?;
Ok(identity)
}
None => {
let login = match IdentityRepository::find_by_login(&state.db, &desired_login).await? {
Some(_) => fallback_subject_login(oidc_claims),
None => desired_login,
};
let identity = IdentityRepository::create(
&state.db,
CreateIdentityInput {
login,
display_name,
password_hash: None,
attributes,
},
)
.await
.map_err(ApiError::from)?;
sync_roles(&state.db, identity.id, "oidc", &oidc_claims.groups).await?;
Ok(identity)
}
}
}
async fn sync_roles(
db: &sqlx::PgPool,
identity_id: i64,
source: &str,
roles: &[String],
) -> Result<(), ApiError> {
IdentityRoleAssignmentRepository::replace_managed_roles(db, identity_id, source, roles)
.await
.map_err(Into::into)
}
fn derive_login(oidc_claims: &OidcIdentityClaims) -> String {
oidc_claims
.email
.clone()
.or_else(|| oidc_claims.preferred_username.clone())
.unwrap_or_else(|| fallback_subject_login(oidc_claims))
}
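/// Verify a raw ID token against the provider's JWKS.
///
/// Checks the signature (RSA algorithms only), issuer, audience (the
/// configured client_id), the required spec claims, and the nonce issued
/// during the login redirect.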
async fn verify_id_token(
raw_id_token: &str,
discovery: &OidcDiscoveryDocument,
oidc: &OidcConfig,
expected_nonce: &str,
) -> Result<VerifiedIdTokenClaims, ApiError> {
let header = decode_header(raw_id_token).map_err(|err| {
ApiError::Unauthorized(format!("OIDC ID token header decode failed: {err}"))
})?;
let algorithm = match header.alg {
Algorithm::RS256 => Algorithm::RS256,
Algorithm::RS384 => Algorithm::RS384,
Algorithm::RS512 => Algorithm::RS512,
other => {
return Err(ApiError::Unauthorized(format!(
"OIDC ID token uses unsupported signing algorithm: {other:?}"
)))
}
};
let jwks = reqwest::get(discovery.metadata.jwks_uri().url().as_str())
.await
.map_err(|err| ApiError::InternalServerError(format!("Failed to fetch OIDC JWKS: {err}")))?
.json::<JwkSet>()
.await
.map_err(|err| {
ApiError::InternalServerError(format!("Failed to parse OIDC JWKS: {err}"))
})?;
let jwk = jwks
.keys
.iter()
.find(|jwk| {
jwk.common.key_id == header.kid
&& matches!(
jwk.common.public_key_use,
Some(jsonwebtoken::jwk::PublicKeyUse::Signature)
)
&& matches!(
jwk.algorithm,
AlgorithmParameters::RSA(_) | AlgorithmParameters::EllipticCurve(_)
)
})
.ok_or_else(|| ApiError::Unauthorized("OIDC signing key not found in JWKS".to_string()))?;
let decoding_key = DecodingKey::from_jwk(jwk)
.map_err(|err| ApiError::Unauthorized(format!("OIDC JWK decode failed: {err}")))?;
let issuer = discovery.metadata.issuer().to_string();
let mut validation = Validation::new(algorithm);
validation.set_issuer(&[issuer.as_str()]);
validation.set_audience(&[oidc.client_id.as_deref().unwrap_or_default()]);
validation.set_required_spec_claims(&["exp", "iat", "iss", "sub", "aud"]);
validation.validate_nbf = false;
let token = decode::<VerifiedIdTokenClaims>(raw_id_token, &decoding_key, &validation)
.map_err(|err| ApiError::Unauthorized(format!("OIDC ID token validation failed: {err}")))?;
if token.claims.nonce.as_deref() != Some(expected_nonce) {
return Err(ApiError::Unauthorized(
"OIDC nonce validation failed".to_string(),
));
}
Ok(token.claims)
}
fn derive_display_name(oidc_claims: &OidcIdentityClaims) -> Option<String> {
oidc_claims
.name
.clone()
.or_else(|| oidc_claims.preferred_username.clone())
.or_else(|| oidc_claims.email.clone())
}
fn fallback_subject_login(oidc_claims: &OidcIdentityClaims) -> String {
let mut hasher = Sha256::new();
hasher.update(oidc_claims.issuer.as_bytes());
hasher.update(b":");
hasher.update(oidc_claims.sub.as_bytes());
let digest = hex::encode(hasher.finalize());
format!("oidc:{}", &digest[..24])
}
fn extract_groups_from_claims<T>(claims: &T) -> Vec<String>
where
T: Serialize,
{
let Ok(json) = serde_json::to_value(claims) else {
return Vec::new();
};
match json.get("groups") {
Some(JsonValue::Array(values)) => values
.iter()
.filter_map(|value| value.as_str().map(ToString::to_string))
.collect(),
Some(JsonValue::String(value)) => vec![value.to_string()],
_ => Vec::new(),
}
}
fn merge_userinfo_claims(oidc_claims: &mut OidcIdentityClaims, userinfo: &CoreUserInfoClaims) {
if oidc_claims.email.is_none() {
oidc_claims.email = userinfo.email().map(|email| email.as_str().to_string());
}
if oidc_claims.name.is_none() {
oidc_claims.name = userinfo.name().and_then(first_localized_claim);
}
if oidc_claims.preferred_username.is_none() {
oidc_claims.preferred_username = userinfo
.preferred_username()
.map(|username| username.as_str().to_string());
}
if oidc_claims.groups.is_empty() {
oidc_claims.groups = extract_groups_from_claims(userinfo.additional_claims());
}
}
fn first_localized_claim<T>(claim: &LocalizedClaim<T>) -> Option<String>
where
T: std::ops::Deref<Target = String>,
{
claim
.iter()
.next()
.map(|(_, value)| value.as_str().to_string())
}
fn build_cookie(
state: &SharedState,
name: &'static str,
value: String,
max_age_seconds: i64,
http_only: bool,
) -> Cookie<'static> {
let mut cookie = Cookie::build((name, value))
.path("/")
.same_site(SameSite::Lax)
.http_only(http_only)
.max_age(CookieDuration::seconds(max_age_seconds))
.build();
if should_use_secure_cookies(state) {
cookie.set_secure(true);
}
cookie
}
fn remove_cookie(state: &SharedState, name: &'static str) -> Cookie<'static> {
let mut cookie = Cookie::build((name, String::new()))
.path("/")
.same_site(SameSite::Lax)
.http_only(true)
.max_age(CookieDuration::seconds(0))
.build();
cookie.make_removal();
if should_use_secure_cookies(state) {
cookie.set_secure(true);
}
cookie
}
fn should_use_secure_cookies(state: &SharedState) -> bool {
state.config.is_production()
|| state
.config
.security
.oidc
.as_ref()
.and_then(|oidc| oidc.redirect_uri.as_deref())
.map(|uri| uri.starts_with("https://"))
.unwrap_or(false)
}
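/// Keep only same-site, absolute-path redirect targets.
///
/// For example, "/executions/42" passes through unchanged, while
/// "https://example.com" and the protocol-relative "//example.com" both fall
/// back to "/" (see the tests below).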
fn sanitize_redirect_target(redirect_to: Option<&str>) -> String {
let fallback = "/".to_string();
let Some(redirect_to) = redirect_to else {
return fallback;
};
if redirect_to.starts_with('/') && !redirect_to.starts_with("//") {
redirect_to.to_string()
} else {
fallback
}
}
pub fn unauthorized_redirect(location: &str) -> Response {
let mut response = Redirect::to(location).into_response();
*response.status_mut() = StatusCode::FOUND;
response
}
fn encode_fragment_value(value: &str) -> String {
byte_serialize(value.as_bytes()).collect()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn sanitize_redirect_target_rejects_external_urls() {
assert_eq!(sanitize_redirect_target(Some("https://example.com")), "/");
assert_eq!(sanitize_redirect_target(Some("//example.com")), "/");
assert_eq!(
sanitize_redirect_target(Some("/executions/42")),
"/executions/42"
);
}
#[test]
fn extract_groups_from_claims_accepts_array_and_string() {
let array_claims = serde_json::json!({ "groups": ["admins", "operators"] });
let string_claims = serde_json::json!({ "groups": "admins" });
assert_eq!(
extract_groups_from_claims(&array_claims),
vec!["admins".to_string(), "operators".to_string()]
);
assert_eq!(
extract_groups_from_claims(&string_claims),
vec!["admins".to_string()]
);
}
}

crates/api/src/authz.rs Normal file

@@ -0,0 +1,154 @@
//! RBAC authorization service for API handlers.
//!
//! This module evaluates grants assigned to user identities via
//! `permission_set` and `permission_assignment`.
use crate::{
auth::{jwt::TokenType, middleware::AuthenticatedUser},
middleware::ApiError,
};
use attune_common::{
rbac::{Action, AuthorizationContext, Grant, Resource},
repositories::{
identity::{IdentityRepository, IdentityRoleAssignmentRepository, PermissionSetRepository},
FindById,
},
};
use sqlx::PgPool;
#[derive(Debug, Clone)]
pub struct AuthorizationCheck {
pub resource: Resource,
pub action: Action,
pub context: AuthorizationContext,
}
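// A minimal usage sketch (hypothetical values; assumes `AuthorizationContext`
// implements `Default`):
//
//     authz
//         .authorize(&user, AuthorizationCheck {
//             resource: Resource::Executions,
//             action: Action::Read,
//             context: AuthorizationContext::default(),
//         })
//         .await?;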
#[derive(Clone)]
pub struct AuthorizationService {
db: PgPool,
}
impl AuthorizationService {
pub fn new(db: PgPool) -> Self {
Self { db }
}
pub async fn authorize(
&self,
user: &AuthenticatedUser,
mut check: AuthorizationCheck,
) -> Result<(), ApiError> {
// Non-access tokens are governed by dedicated scope checks in route logic.
// They are not evaluated through identity RBAC grants.
if user.claims.token_type != TokenType::Access {
return Ok(());
}
let identity_id = user.identity_id().map_err(|_| {
ApiError::Unauthorized("Invalid authentication subject in access token".to_string())
})?;
// Ensure identity exists and load identity attributes used by attribute constraints.
let identity = IdentityRepository::find_by_id(&self.db, identity_id)
.await?
.ok_or_else(|| ApiError::Unauthorized("Identity not found".to_string()))?;
check.context.identity_id = identity_id;
check.context.identity_attributes = match identity.attributes {
serde_json::Value::Object(map) => map.into_iter().collect(),
_ => Default::default(),
};
let grants = self.load_effective_grants(identity_id).await?;
let allowed = Self::is_allowed(&grants, check.resource, check.action, &check.context);
if !allowed {
return Err(ApiError::Forbidden(format!(
"Insufficient permissions: {}:{}",
resource_name(check.resource),
action_name(check.action)
)));
}
Ok(())
}
pub async fn effective_grants(&self, user: &AuthenticatedUser) -> Result<Vec<Grant>, ApiError> {
if user.claims.token_type != TokenType::Access {
return Ok(Vec::new());
}
let identity_id = user.identity_id().map_err(|_| {
ApiError::Unauthorized("Invalid authentication subject in access token".to_string())
})?;
self.load_effective_grants(identity_id).await
}
pub fn is_allowed(
grants: &[Grant],
resource: Resource,
action: Action,
context: &AuthorizationContext,
) -> bool {
grants.iter().any(|g| g.allows(resource, action, context))
}
async fn load_effective_grants(&self, identity_id: i64) -> Result<Vec<Grant>, ApiError> {
let mut permission_sets =
PermissionSetRepository::find_by_identity(&self.db, identity_id).await?;
let roles =
IdentityRoleAssignmentRepository::find_role_names_by_identity(&self.db, identity_id)
.await?;
let role_permission_sets = PermissionSetRepository::find_by_roles(&self.db, &roles).await?;
permission_sets.extend(role_permission_sets);
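        // Deduplicate by id: a permission set can be granted both directly
        // and via a role.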
let mut seen_permission_sets = std::collections::HashSet::new();
permission_sets.retain(|permission_set| seen_permission_sets.insert(permission_set.id));
let mut grants = Vec::new();
for permission_set in permission_sets {
let set_grants: Vec<Grant> =
serde_json::from_value(permission_set.grants).map_err(|e| {
ApiError::InternalServerError(format!(
"Invalid grant schema in permission set '{}': {}",
permission_set.r#ref, e
))
})?;
grants.extend(set_grants);
}
Ok(grants)
}
}
fn resource_name(resource: Resource) -> &'static str {
match resource {
Resource::Packs => "packs",
Resource::Actions => "actions",
Resource::Rules => "rules",
Resource::Triggers => "triggers",
Resource::Executions => "executions",
Resource::Events => "events",
Resource::Enforcements => "enforcements",
Resource::Inquiries => "inquiries",
Resource::Keys => "keys",
Resource::Artifacts => "artifacts",
Resource::Identities => "identities",
Resource::Permissions => "permissions",
}
}
fn action_name(action: Action) -> &'static str {
match action {
Action::Read => "read",
Action::Create => "create",
Action::Update => "update",
Action::Delete => "delete",
Action::Execute => "execute",
Action::Cancel => "cancel",
Action::Respond => "respond",
Action::Manage => "manage",
Action::Decrypt => "decrypt",
}
}


@@ -25,9 +25,8 @@ pub struct CreateActionRequest {
pub label: String,
/// Action description
#[validate(length(min = 1))]
#[schema(example = "Posts a message to a Slack channel")]
pub description: String,
pub description: Option<String>,
/// Entry point for action execution (e.g., path to script, function name)
#[validate(length(min = 1, max = 1024))]
@@ -38,14 +37,19 @@ pub struct CreateActionRequest {
#[schema(example = 1)]
pub runtime: Option<i64>,
-    /// Parameter schema (JSON Schema) defining expected inputs
+    /// Optional semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Object, nullable = true, example = json!({"type": "object", "properties": {"channel": {"type": "string"}, "message": {"type": "string"}}}))]
#[schema(example = ">=3.12", nullable = true)]
pub runtime_version_constraint: Option<String>,
/// Parameter schema (StackStorm-style) defining expected inputs with inline required/secret
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Object, nullable = true, example = json!({"channel": {"type": "string", "description": "Slack channel", "required": true}, "message": {"type": "string", "description": "Message text", "required": true}}))]
pub param_schema: Option<JsonValue>,
-    /// Output schema (JSON Schema) defining expected outputs
+    /// Output schema (flat format) defining expected outputs with inline required/secret
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Object, nullable = true, example = json!({"type": "object", "properties": {"message_id": {"type": "string"}}}))]
#[schema(value_type = Object, nullable = true, example = json!({"message_id": {"type": "string", "description": "ID of the sent message", "required": true}}))]
pub out_schema: Option<JsonValue>,
}
@@ -58,7 +62,6 @@ pub struct UpdateActionRequest {
pub label: Option<String>,
/// Action description
#[validate(length(min = 1))]
#[schema(example = "Posts a message to a Slack channel with enhanced features")]
pub description: Option<String>,
@@ -71,7 +74,10 @@ pub struct UpdateActionRequest {
#[schema(example = 1)]
pub runtime: Option<i64>,
-    /// Parameter schema
+    /// Optional semver version constraint patch for the runtime.
+    pub runtime_version_constraint: Option<RuntimeVersionConstraintPatch>,
+    /// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
@@ -80,6 +86,14 @@ pub struct UpdateActionRequest {
pub out_schema: Option<JsonValue>,
}
+/// Explicit patch operation for a nullable runtime version constraint.
+#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
+#[serde(tag = "op", content = "value", rename_all = "snake_case")]
+pub enum RuntimeVersionConstraintPatch {
+    Set(String),
+    Clear,
+}
/// Response DTO for action information
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ActionResponse {
@@ -105,7 +119,7 @@ pub struct ActionResponse {
/// Action description
#[schema(example = "Posts a message to a Slack channel")]
-    pub description: String,
+    pub description: Option<String>,
/// Entry point
#[schema(example = "/actions/slack/post_message.py")]
@@ -115,7 +129,12 @@ pub struct ActionResponse {
#[schema(example = 1)]
pub runtime: Option<i64>,
-    /// Parameter schema
+    /// Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0")
+    #[serde(skip_serializing_if = "Option::is_none")]
+    #[schema(example = ">=3.12", nullable = true)]
+    pub runtime_version_constraint: Option<String>,
+    /// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
@@ -123,6 +142,11 @@ pub struct ActionResponse {
#[schema(value_type = Object, nullable = true)]
pub out_schema: Option<JsonValue>,
+    /// Workflow definition ID (non-null if this action is a workflow)
+    #[serde(skip_serializing_if = "Option::is_none")]
+    #[schema(example = 42, nullable = true)]
+    pub workflow_def: Option<i64>,
/// Whether this is an ad-hoc action (not from pack installation)
#[schema(example = false)]
pub is_adhoc: bool,
@@ -157,7 +181,7 @@ pub struct ActionSummary {
/// Action description
#[schema(example = "Posts a message to a Slack channel")]
-    pub description: String,
+    pub description: Option<String>,
/// Entry point
#[schema(example = "/actions/slack/post_message.py")]
@@ -167,6 +191,16 @@ pub struct ActionSummary {
#[schema(example = 1)]
pub runtime: Option<i64>,
+    /// Semver version constraint for the runtime
+    #[serde(skip_serializing_if = "Option::is_none")]
+    #[schema(example = ">=3.12", nullable = true)]
+    pub runtime_version_constraint: Option<String>,
+    /// Workflow definition ID (non-null if this action is a workflow)
+    #[serde(skip_serializing_if = "Option::is_none")]
+    #[schema(example = 42, nullable = true)]
+    pub workflow_def: Option<i64>,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -188,8 +222,10 @@ impl From<attune_common::models::action::Action> for ActionResponse {
description: action.description,
entrypoint: action.entrypoint,
runtime: action.runtime,
+            runtime_version_constraint: action.runtime_version_constraint,
param_schema: action.param_schema,
out_schema: action.out_schema,
+            workflow_def: action.workflow_def,
is_adhoc: action.is_adhoc,
created: action.created,
updated: action.updated,
@@ -208,6 +244,8 @@ impl From<attune_common::models::action::Action> for ActionSummary {
description: action.description,
entrypoint: action.entrypoint,
runtime: action.runtime,
+            runtime_version_constraint: action.runtime_version_constraint,
+            workflow_def: action.workflow_def,
created: action.created,
updated: action.updated,
}
@@ -281,9 +319,10 @@ mod tests {
r#ref: "".to_string(), // Invalid: empty
pack_ref: "test-pack".to_string(),
label: "Test Action".to_string(),
description: "Test description".to_string(),
description: Some("Test description".to_string()),
entrypoint: "/actions/test.py".to_string(),
runtime: None,
+            runtime_version_constraint: None,
param_schema: None,
out_schema: None,
};
@@ -297,9 +336,10 @@ mod tests {
r#ref: "test.action".to_string(),
pack_ref: "test-pack".to_string(),
label: "Test Action".to_string(),
description: "Test description".to_string(),
description: Some("Test description".to_string()),
entrypoint: "/actions/test.py".to_string(),
runtime: None,
+            runtime_version_constraint: None,
param_schema: None,
out_schema: None,
};
@@ -314,6 +354,7 @@ mod tests {
description: None,
entrypoint: None,
runtime: None,
+            runtime_version_constraint: None,
param_schema: None,
out_schema: None,
};


@@ -0,0 +1,358 @@
//! Analytics DTOs for API requests and responses
//!
//! These types represent the API-facing view of analytics data derived from
//! TimescaleDB continuous aggregates over entity history hypertables.
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use utoipa::{IntoParams, ToSchema};
use attune_common::repositories::analytics::{
AnalyticsTimeRange, EnforcementVolumeBucket, EventVolumeBucket, ExecutionStatusBucket,
ExecutionThroughputBucket, FailureRateSummary, WorkerStatusBucket,
};
// ---------------------------------------------------------------------------
// Query parameters
// ---------------------------------------------------------------------------
/// Common query parameters for analytics endpoints.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct AnalyticsQueryParams {
/// Start of time range (ISO 8601). Defaults to 24 hours ago.
#[param(example = "2026-02-25T00:00:00Z")]
pub since: Option<DateTime<Utc>>,
/// End of time range (ISO 8601). Defaults to now.
#[param(example = "2026-02-26T00:00:00Z")]
pub until: Option<DateTime<Utc>>,
/// Number of hours to look back from now (alternative to since/until).
/// Ignored if `since` is provided.
#[param(example = 24, minimum = 1, maximum = 8760)]
pub hours: Option<i64>,
}
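// Range resolution, as implemented in `to_time_range` below: an explicit
// `since`/`until` pair wins; otherwise `hours` (default 24, clamped to
// 1..=8760) anchors the window, e.g. `?hours=6` => [now - 6h, now] and
// `?until=T&hours=6` => [T - 6h, T].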
impl AnalyticsQueryParams {
/// Convert to the repository-level time range.
pub fn to_time_range(&self) -> AnalyticsTimeRange {
match (&self.since, &self.until) {
(Some(since), Some(until)) => AnalyticsTimeRange {
since: *since,
until: *until,
},
(Some(since), None) => AnalyticsTimeRange {
since: *since,
until: Utc::now(),
},
(None, Some(until)) => {
let hours = self.hours.unwrap_or(24).clamp(1, 8760);
AnalyticsTimeRange {
since: *until - chrono::Duration::hours(hours),
until: *until,
}
}
(None, None) => {
let hours = self.hours.unwrap_or(24).clamp(1, 8760);
AnalyticsTimeRange::last_hours(hours)
}
}
}
}
/// Path parameter for filtering analytics by a specific entity ref.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct AnalyticsRefParam {
/// Optional entity ref filter (action_ref, trigger_ref, rule_ref, or worker name)
#[param(example = "core.http_request")]
pub entity_ref: Option<String>,
}
// ---------------------------------------------------------------------------
// Response types
// ---------------------------------------------------------------------------
/// A single data point in an hourly time series.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct TimeSeriesPoint {
/// Start of the 1-hour bucket (ISO 8601)
#[schema(example = "2026-02-26T10:00:00Z")]
pub bucket: DateTime<Utc>,
/// The series label (e.g., status name, action ref). Null for aggregate totals.
#[schema(example = "completed")]
pub label: Option<String>,
/// The count value for this bucket
#[schema(example = 42)]
pub value: i64,
}
/// Response for execution status transitions over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ExecutionStatusTimeSeriesResponse {
/// Time range start
pub since: DateTime<Utc>,
/// Time range end
pub until: DateTime<Utc>,
/// Data points: one per (bucket, status) pair
pub data: Vec<TimeSeriesPoint>,
}
/// Response for execution throughput over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ExecutionThroughputResponse {
/// Time range start
pub since: DateTime<Utc>,
/// Time range end
pub until: DateTime<Utc>,
/// Data points: one per bucket (total executions created)
pub data: Vec<TimeSeriesPoint>,
}
/// Response for event volume over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct EventVolumeResponse {
/// Time range start
pub since: DateTime<Utc>,
/// Time range end
pub until: DateTime<Utc>,
/// Data points: one per bucket (total events created)
pub data: Vec<TimeSeriesPoint>,
}
/// Response for worker status transitions over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct WorkerStatusTimeSeriesResponse {
/// Time range start
pub since: DateTime<Utc>,
/// Time range end
pub until: DateTime<Utc>,
/// Data points: one per (bucket, status) pair
pub data: Vec<TimeSeriesPoint>,
}
/// Response for enforcement volume over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct EnforcementVolumeResponse {
/// Time range start
pub since: DateTime<Utc>,
/// Time range end
pub until: DateTime<Utc>,
/// Data points: one per bucket (total enforcements created)
pub data: Vec<TimeSeriesPoint>,
}
/// Response for the execution failure rate summary.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct FailureRateResponse {
/// Time range start
pub since: DateTime<Utc>,
/// Time range end
pub until: DateTime<Utc>,
/// Total executions reaching a terminal state in the window
#[schema(example = 100)]
pub total_terminal: i64,
/// Number of failed executions
#[schema(example = 12)]
pub failed_count: i64,
/// Number of timed-out executions
#[schema(example = 3)]
pub timeout_count: i64,
/// Number of completed executions
#[schema(example = 85)]
pub completed_count: i64,
    /// Failure rate as a percentage (0.0 to 100.0)
#[schema(example = 15.0)]
pub failure_rate_pct: f64,
}
/// Combined dashboard analytics response.
///
/// Returns all key metrics in a single response for the dashboard page,
/// avoiding multiple round-trips.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct DashboardAnalyticsResponse {
/// Time range start
pub since: DateTime<Utc>,
/// Time range end
pub until: DateTime<Utc>,
/// Execution throughput per hour
pub execution_throughput: Vec<TimeSeriesPoint>,
/// Execution status transitions per hour
pub execution_status: Vec<TimeSeriesPoint>,
/// Event volume per hour
pub event_volume: Vec<TimeSeriesPoint>,
/// Enforcement volume per hour
pub enforcement_volume: Vec<TimeSeriesPoint>,
/// Worker status transitions per hour
pub worker_status: Vec<TimeSeriesPoint>,
/// Execution failure rate summary
pub failure_rate: FailureRateResponse,
}
// ---------------------------------------------------------------------------
// Conversion helpers
// ---------------------------------------------------------------------------
impl From<ExecutionStatusBucket> for TimeSeriesPoint {
fn from(b: ExecutionStatusBucket) -> Self {
Self {
bucket: b.bucket,
label: b.new_status,
value: b.transition_count,
}
}
}
impl From<ExecutionThroughputBucket> for TimeSeriesPoint {
fn from(b: ExecutionThroughputBucket) -> Self {
Self {
bucket: b.bucket,
label: b.action_ref,
value: b.execution_count,
}
}
}
impl From<EventVolumeBucket> for TimeSeriesPoint {
fn from(b: EventVolumeBucket) -> Self {
Self {
bucket: b.bucket,
label: b.trigger_ref,
value: b.event_count,
}
}
}
impl From<WorkerStatusBucket> for TimeSeriesPoint {
fn from(b: WorkerStatusBucket) -> Self {
Self {
bucket: b.bucket,
label: b.new_status,
value: b.transition_count,
}
}
}
impl From<EnforcementVolumeBucket> for TimeSeriesPoint {
fn from(b: EnforcementVolumeBucket) -> Self {
Self {
bucket: b.bucket,
label: b.rule_ref,
value: b.enforcement_count,
}
}
}
impl FailureRateResponse {
/// Create from the repository summary plus the query time range.
pub fn from_summary(summary: FailureRateSummary, range: &AnalyticsTimeRange) -> Self {
Self {
since: range.since,
until: range.until,
total_terminal: summary.total_terminal,
failed_count: summary.failed_count,
timeout_count: summary.timeout_count,
completed_count: summary.completed_count,
failure_rate_pct: summary.failure_rate_pct,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_query_params_defaults() {
let params = AnalyticsQueryParams {
since: None,
until: None,
hours: None,
};
let range = params.to_time_range();
let diff = range.until - range.since;
assert!((diff.num_hours() - 24).abs() <= 1);
}
#[test]
fn test_query_params_custom_hours() {
let params = AnalyticsQueryParams {
since: None,
until: None,
hours: Some(6),
};
let range = params.to_time_range();
let diff = range.until - range.since;
assert!((diff.num_hours() - 6).abs() <= 1);
}
#[test]
fn test_query_params_hours_clamped() {
let params = AnalyticsQueryParams {
since: None,
until: None,
hours: Some(99999),
};
let range = params.to_time_range();
let diff = range.until - range.since;
// Clamped to 8760 hours (1 year)
assert!((diff.num_hours() - 8760).abs() <= 1);
}
#[test]
fn test_query_params_explicit_range() {
let since = Utc::now() - chrono::Duration::hours(48);
let until = Utc::now();
let params = AnalyticsQueryParams {
since: Some(since),
until: Some(until),
hours: Some(6), // ignored when since is provided
};
let range = params.to_time_range();
assert_eq!(range.since, since);
assert_eq!(range.until, until);
}
#[test]
fn test_failure_rate_response_from_summary() {
let summary = FailureRateSummary {
total_terminal: 100,
failed_count: 12,
timeout_count: 3,
completed_count: 85,
failure_rate_pct: 15.0,
};
let range = AnalyticsTimeRange::last_hours(24);
let response = FailureRateResponse::from_summary(summary, &range);
assert_eq!(response.total_terminal, 100);
assert_eq!(response.failed_count, 12);
assert_eq!(response.failure_rate_pct, 15.0);
}
#[test]
fn test_time_series_point_from_execution_status_bucket() {
let bucket = ExecutionStatusBucket {
bucket: Utc::now(),
action_ref: Some("core.http".into()),
new_status: Some("completed".into()),
transition_count: 10,
};
let point: TimeSeriesPoint = bucket.into();
assert_eq!(point.label.as_deref(), Some("completed"));
assert_eq!(point.value, 10);
}
#[test]
fn test_time_series_point_from_event_volume_bucket() {
let bucket = EventVolumeBucket {
bucket: Utc::now(),
trigger_ref: Some("core.timer".into()),
event_count: 25,
};
let point: TimeSeriesPoint = bucket.into();
assert_eq!(point.label.as_deref(), Some("core.timer"));
assert_eq!(point.value, 25);
}
}


@@ -0,0 +1,607 @@
//! Artifact DTOs for API requests and responses
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};
use attune_common::models::enums::{
ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType,
};
// ============================================================================
// Artifact DTOs
// ============================================================================
/// Request DTO for creating a new artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreateArtifactRequest {
/// Artifact reference (unique identifier, e.g. "build.log", "test.results")
#[schema(example = "mypack.build_log")]
pub r#ref: String,
/// Owner scope type
#[schema(example = "action")]
pub scope: OwnerType,
/// Owner identifier (ref string of the owning entity)
#[schema(example = "mypack.deploy")]
pub owner: String,
/// Artifact type
#[schema(example = "file_text")]
pub r#type: ArtifactType,
/// Visibility level (public = all users, private = scope/owner restricted).
/// If omitted, defaults to `public` for progress artifacts and `private` for all others.
pub visibility: Option<ArtifactVisibility>,
/// Retention policy type
#[serde(default = "default_retention_policy")]
#[schema(example = "versions")]
pub retention_policy: RetentionPolicyType,
/// Retention limit (number of versions, days, hours, or minutes depending on policy)
#[serde(default = "default_retention_limit")]
#[schema(example = 5)]
pub retention_limit: i32,
/// Human-readable name
#[schema(example = "Build Log")]
pub name: Option<String>,
/// Optional description
#[schema(example = "Output log from the build action")]
pub description: Option<String>,
/// MIME content type (e.g. "text/plain", "application/json")
#[schema(example = "text/plain")]
pub content_type: Option<String>,
/// Execution ID that produced this artifact
#[schema(example = 42)]
pub execution: Option<i64>,
/// Initial structured data (for progress-type artifacts or metadata)
#[schema(value_type = Option<Object>)]
pub data: Option<JsonValue>,
}
fn default_retention_policy() -> RetentionPolicyType {
RetentionPolicyType::Versions
}
fn default_retention_limit() -> i32 {
5
}
/// Request DTO for updating an existing artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct UpdateArtifactRequest {
/// Updated owner scope
pub scope: Option<OwnerType>,
/// Updated owner identifier
pub owner: Option<String>,
/// Updated artifact type
pub r#type: Option<ArtifactType>,
/// Updated visibility
pub visibility: Option<ArtifactVisibility>,
/// Updated retention policy
pub retention_policy: Option<RetentionPolicyType>,
/// Updated retention limit
pub retention_limit: Option<i32>,
/// Updated name
pub name: Option<ArtifactStringPatch>,
/// Updated description
pub description: Option<ArtifactStringPatch>,
/// Updated content type
pub content_type: Option<ArtifactStringPatch>,
/// Updated execution patch (set a new execution ID or clear the link)
pub execution: Option<ArtifactExecutionPatch>,
/// Updated structured data (replaces existing data entirely)
pub data: Option<ArtifactJsonPatch>,
}
/// Explicit patch operation for a nullable execution link.
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum ArtifactExecutionPatch {
Set(i64),
Clear,
}
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum ArtifactStringPatch {
Set(String),
Clear,
}
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum ArtifactJsonPatch {
Set(JsonValue),
Clear,
}
/// Request DTO for appending to a progress-type artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct AppendProgressRequest {
/// The entry to append to the progress data array.
/// Can be any JSON value (string, object, number, etc.)
#[schema(value_type = Object, example = json!({"step": "compile", "status": "done", "duration_ms": 1234}))]
pub entry: JsonValue,
}
/// Request DTO for setting the full data payload on an artifact
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct SetDataRequest {
/// The data to set (replaces existing data entirely)
#[schema(value_type = Object)]
pub data: JsonValue,
}
/// Response DTO for artifact information
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactResponse {
/// Artifact ID
#[schema(example = 1)]
pub id: i64,
/// Artifact reference
#[schema(example = "mypack.build_log")]
pub r#ref: String,
/// Owner scope type
pub scope: OwnerType,
/// Owner identifier
#[schema(example = "mypack.deploy")]
pub owner: String,
/// Artifact type
pub r#type: ArtifactType,
/// Visibility level
pub visibility: ArtifactVisibility,
/// Retention policy
pub retention_policy: RetentionPolicyType,
/// Retention limit
#[schema(example = 5)]
pub retention_limit: i32,
/// Human-readable name
#[schema(example = "Build Log")]
pub name: Option<String>,
/// Description
pub description: Option<String>,
/// MIME content type
#[schema(example = "text/plain")]
pub content_type: Option<String>,
/// Size of the latest version in bytes
pub size_bytes: Option<i64>,
/// Execution that produced this artifact
pub execution: Option<i64>,
/// Structured data (progress entries, metadata, etc.)
#[serde(skip_serializing_if = "Option::is_none")]
pub data: Option<JsonValue>,
/// Creation timestamp
pub created: DateTime<Utc>,
/// Last update timestamp
pub updated: DateTime<Utc>,
}
/// Simplified artifact for list endpoints
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactSummary {
/// Artifact ID
pub id: i64,
/// Artifact reference
pub r#ref: String,
/// Artifact type
pub r#type: ArtifactType,
/// Visibility level
pub visibility: ArtifactVisibility,
/// Human-readable name
pub name: Option<String>,
/// MIME content type
pub content_type: Option<String>,
/// Size of latest version in bytes
pub size_bytes: Option<i64>,
/// Execution that produced this artifact
pub execution: Option<i64>,
/// Owner scope
pub scope: OwnerType,
/// Owner identifier
pub owner: String,
/// Creation timestamp
pub created: DateTime<Utc>,
/// Last update timestamp
pub updated: DateTime<Utc>,
}
/// Query parameters for filtering artifacts
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct ArtifactQueryParams {
/// Filter by owner scope type
pub scope: Option<OwnerType>,
/// Filter by owner identifier
pub owner: Option<String>,
/// Filter by artifact type
pub r#type: Option<ArtifactType>,
/// Filter by visibility
pub visibility: Option<ArtifactVisibility>,
/// Filter by execution ID
pub execution: Option<i64>,
/// Search by name (case-insensitive substring match)
pub name: Option<String>,
/// Page number (1-based)
#[serde(default = "default_page")]
#[param(example = 1, minimum = 1)]
pub page: u32,
/// Items per page
#[serde(default = "default_per_page")]
#[param(example = 20, minimum = 1, maximum = 100)]
pub per_page: u32,
}
impl ArtifactQueryParams {
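/// Row offset for SQL pagination. Note: this multiplies by the raw
/// `per_page`, not the capped value from `limit()`, so callers that rely on
/// the 100-item cap should pass `limit()` alongside `offset()`.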
pub fn offset(&self) -> u32 {
(self.page.saturating_sub(1)) * self.per_page
}
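/// Effective page size, capped at 100 items regardless of the requested
/// `per_page`.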
pub fn limit(&self) -> u32 {
self.per_page.min(100)
}
}
fn default_page() -> u32 {
1
}
fn default_per_page() -> u32 {
20
}
// ============================================================================
// ArtifactVersion DTOs
// ============================================================================
/// Request DTO for creating a new artifact version with JSON content
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreateVersionJsonRequest {
/// Structured JSON content for this version
#[schema(value_type = Object)]
pub content: JsonValue,
/// MIME content type override (defaults to "application/json")
pub content_type: Option<String>,
/// Free-form metadata about this version
#[schema(value_type = Option<Object>)]
pub meta: Option<JsonValue>,
/// Who created this version (e.g. action ref, identity, "system")
pub created_by: Option<String>,
}
/// Request DTO for creating a new file-backed artifact version.
/// No file content is included — the caller writes the file directly to
/// `$ATTUNE_ARTIFACTS_DIR/{file_path}` after receiving the response.
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreateFileVersionRequest {
/// MIME content type (e.g. "text/plain", "application/octet-stream")
#[schema(example = "text/plain")]
pub content_type: Option<String>,
/// Free-form metadata about this version
#[schema(value_type = Option<Object>)]
pub meta: Option<JsonValue>,
/// Who created this version (e.g. action ref, identity, "system")
pub created_by: Option<String>,
}
/// Request DTO for the upsert-and-allocate endpoint.
///
/// Looks up an artifact by ref (creating it if it doesn't exist), then
/// allocates a new file-backed version and returns the `file_path` where
/// the caller should write the file on the shared artifact volume.
///
/// This replaces the multi-step create → 409-handling → allocate dance
/// with a single API call.
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct AllocateFileVersionByRefRequest {
// -- Artifact metadata (used only when creating a new artifact) ----------
/// Owner scope type (default: action)
#[schema(example = "action")]
pub scope: Option<OwnerType>,
/// Owner identifier (ref string of the owning entity)
#[schema(example = "python_example.artifact_demo")]
pub owner: Option<String>,
/// Artifact type (must be a file-backed type; default: file_text)
#[schema(example = "file_text")]
pub r#type: Option<ArtifactType>,
/// Visibility level. If omitted, uses type-aware default.
pub visibility: Option<ArtifactVisibility>,
/// Retention policy type (default: versions)
pub retention_policy: Option<RetentionPolicyType>,
/// Retention limit (default: 10)
pub retention_limit: Option<i32>,
/// Human-readable name
#[schema(example = "Demo Log")]
pub name: Option<String>,
/// Optional description
pub description: Option<String>,
/// Execution ID to link this artifact to
#[schema(example = 42)]
pub execution: Option<i64>,
// -- Version metadata ----------------------------------------------------
/// MIME content type for this version (e.g. "text/plain")
#[schema(example = "text/plain")]
pub content_type: Option<String>,
/// Free-form metadata about this version
#[schema(value_type = Option<Object>)]
pub meta: Option<JsonValue>,
/// Who created this version (e.g. action ref, identity, "system")
pub created_by: Option<String>,
}
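// Illustrative flow (endpoint path and exact field values assumed, not from
// this file):
//   1. POST a body like the one below; the response carries the `file_path`
//      allocated for the new version.
//   2. Write the file contents to $ATTUNE_ARTIFACTS_DIR/{file_path} on the
//      shared artifact volume.
//
// {
//   "scope": "action",
//   "owner": "python_example.artifact_demo",
//   "type": "file_text",
//   "content_type": "text/plain",
//   "created_by": "python_example.artifact_demo"
// }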
/// Response DTO for an artifact version (without binary content)
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactVersionResponse {
/// Version ID
pub id: i64,
/// Parent artifact ID
pub artifact: i64,
/// Version number (1-based)
pub version: i32,
/// MIME content type
pub content_type: Option<String>,
/// Size of content in bytes
pub size_bytes: Option<i64>,
/// Structured JSON content (if this version has JSON data)
#[serde(skip_serializing_if = "Option::is_none")]
pub content_json: Option<JsonValue>,
/// Relative file path for disk-backed versions (from artifacts_dir root).
/// When present, the file content lives on the shared volume, not in the DB.
#[serde(skip_serializing_if = "Option::is_none")]
pub file_path: Option<String>,
/// Free-form metadata
#[serde(skip_serializing_if = "Option::is_none")]
pub meta: Option<JsonValue>,
/// Who created this version
pub created_by: Option<String>,
/// Creation timestamp
pub created: DateTime<Utc>,
}
/// Simplified version for list endpoints
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ArtifactVersionSummary {
/// Version ID
pub id: i64,
/// Version number
pub version: i32,
/// MIME content type
pub content_type: Option<String>,
/// Size of content in bytes
pub size_bytes: Option<i64>,
/// Relative file path for disk-backed versions
#[serde(skip_serializing_if = "Option::is_none")]
pub file_path: Option<String>,
/// Who created this version
pub created_by: Option<String>,
/// Creation timestamp
pub created: DateTime<Utc>,
}
// ============================================================================
// Conversions
// ============================================================================
impl From<attune_common::models::artifact::Artifact> for ArtifactResponse {
fn from(a: attune_common::models::artifact::Artifact) -> Self {
Self {
id: a.id,
r#ref: a.r#ref,
scope: a.scope,
owner: a.owner,
r#type: a.r#type,
visibility: a.visibility,
retention_policy: a.retention_policy,
retention_limit: a.retention_limit,
name: a.name,
description: a.description,
content_type: a.content_type,
size_bytes: a.size_bytes,
execution: a.execution,
data: a.data,
created: a.created,
updated: a.updated,
}
}
}
impl From<attune_common::models::artifact::Artifact> for ArtifactSummary {
fn from(a: attune_common::models::artifact::Artifact) -> Self {
Self {
id: a.id,
r#ref: a.r#ref,
r#type: a.r#type,
visibility: a.visibility,
name: a.name,
content_type: a.content_type,
size_bytes: a.size_bytes,
execution: a.execution,
scope: a.scope,
owner: a.owner,
created: a.created,
updated: a.updated,
}
}
}
impl From<attune_common::models::artifact_version::ArtifactVersion> for ArtifactVersionResponse {
fn from(v: attune_common::models::artifact_version::ArtifactVersion) -> Self {
Self {
id: v.id,
artifact: v.artifact,
version: v.version,
content_type: v.content_type,
size_bytes: v.size_bytes,
content_json: v.content_json,
file_path: v.file_path,
meta: v.meta,
created_by: v.created_by,
created: v.created,
}
}
}
impl From<attune_common::models::artifact_version::ArtifactVersion> for ArtifactVersionSummary {
fn from(v: attune_common::models::artifact_version::ArtifactVersion) -> Self {
Self {
id: v.id,
version: v.version,
content_type: v.content_type,
size_bytes: v.size_bytes,
file_path: v.file_path,
created_by: v.created_by,
created: v.created,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_query_params_defaults() {
let json = r#"{}"#;
let params: ArtifactQueryParams = serde_json::from_str(json).unwrap();
assert_eq!(params.page, 1);
assert_eq!(params.per_page, 20);
assert!(params.scope.is_none());
assert!(params.r#type.is_none());
assert!(params.visibility.is_none());
}
#[test]
fn test_query_params_offset() {
let params = ArtifactQueryParams {
scope: None,
owner: None,
r#type: None,
visibility: None,
execution: None,
name: None,
page: 3,
per_page: 20,
};
assert_eq!(params.offset(), 40);
}
#[test]
fn test_query_params_limit_cap() {
let params = ArtifactQueryParams {
scope: None,
owner: None,
r#type: None,
visibility: None,
execution: None,
name: None,
page: 1,
per_page: 200,
};
assert_eq!(params.limit(), 100);
}
#[test]
fn test_create_request_defaults() {
let json = r#"{
"ref": "test.artifact",
"scope": "system",
"owner": "",
"type": "file_text"
}"#;
let req: CreateArtifactRequest = serde_json::from_str(json).unwrap();
assert_eq!(req.retention_policy, RetentionPolicyType::Versions);
assert_eq!(req.retention_limit, 5);
assert!(
req.visibility.is_none(),
"Omitting visibility should deserialize as None (server applies type-aware default)"
);
}
#[test]
fn test_append_progress_request() {
let json = r#"{"entry": {"step": "build", "status": "done"}}"#;
let req: AppendProgressRequest = serde_json::from_str(json).unwrap();
assert!(req.entry.is_object());
}
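// Illustrative test (not in the original file): pins down the adjacently
// tagged wire format produced by #[serde(tag = "op", content = "value")].
#[test]
fn test_execution_patch_wire_format() {
let set = serde_json::to_value(ArtifactExecutionPatch::Set(42)).unwrap();
assert_eq!(set, serde_json::json!({"op": "set", "value": 42}));
// Unit variants of adjacently tagged enums serialize with the tag only.
let clear = serde_json::to_value(ArtifactExecutionPatch::Clear).unwrap();
assert_eq!(clear, serde_json::json!({"op": "clear"}));
}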
}


@@ -136,3 +136,63 @@ pub struct CurrentUserResponse {
#[schema(example = "Administrator")]
pub display_name: Option<String>,
}
/// Public authentication settings for the login page.
#[derive(Debug, Clone, Serialize, Deserialize, ToSchema)]
pub struct AuthSettingsResponse {
/// Whether authentication is enabled for the server.
#[schema(example = true)]
pub authentication_enabled: bool,
/// Whether local username/password login is configured.
#[schema(example = true)]
pub local_password_enabled: bool,
/// Whether local username/password login should be shown by default.
#[schema(example = true)]
pub local_password_visible_by_default: bool,
/// Whether OIDC login is configured and enabled.
#[schema(example = false)]
pub oidc_enabled: bool,
/// Whether OIDC login should be shown by default.
#[schema(example = false)]
pub oidc_visible_by_default: bool,
/// Provider name for `?auth=<provider>`.
#[schema(example = "sso")]
pub oidc_provider_name: Option<String>,
/// User-facing provider label for the login button.
#[schema(example = "Example SSO")]
pub oidc_provider_label: Option<String>,
/// Optional icon URL shown beside the provider label.
#[schema(example = "https://auth.example.com/assets/logo.svg")]
pub oidc_provider_icon_url: Option<String>,
/// Whether LDAP login is configured and enabled.
#[schema(example = false)]
pub ldap_enabled: bool,
/// Whether LDAP login should be shown by default.
#[schema(example = false)]
pub ldap_visible_by_default: bool,
/// Provider name for `?auth=<provider>`.
#[schema(example = "ldap")]
pub ldap_provider_name: Option<String>,
/// User-facing provider label for the login button.
#[schema(example = "Company LDAP")]
pub ldap_provider_label: Option<String>,
/// Optional icon URL shown beside the provider label.
#[schema(example = "https://ldap.example.com/assets/logo.svg")]
pub ldap_provider_icon_url: Option<String>,
/// Whether unauthenticated self-service registration is allowed.
#[schema(example = false)]
pub self_registration_enabled: bool,
}
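// Illustrative serialized form (values mirror the schema examples above):
// {
//   "authentication_enabled": true,
//   "local_password_enabled": true,
//   "local_password_visible_by_default": true,
//   "oidc_enabled": false,
//   "oidc_visible_by_default": false,
//   "oidc_provider_name": "sso",
//   "oidc_provider_label": "Example SSO",
//   "ldap_enabled": false,
//   "ldap_visible_by_default": false,
//   "ldap_provider_name": "ldap",
//   "ldap_provider_label": "Company LDAP",
//   "self_registration_enabled": false
// }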


@@ -53,10 +53,6 @@ pub struct EventResponse {
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
/// Last update timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub updated: DateTime<Utc>,
}
impl From<Event> for EventResponse {
@@ -72,7 +68,6 @@ impl From<Event> for EventResponse {
rule: event.rule,
rule_ref: event.rule_ref,
created: event.created,
updated: event.updated,
}
}
}
@@ -230,9 +225,9 @@ pub struct EnforcementResponse {
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
/// Last update timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub updated: DateTime<Utc>,
/// Timestamp when the enforcement was resolved (status changed from created to processed/disabled)
#[schema(example = "2024-01-13T10:30:01Z", nullable = true)]
pub resolved_at: Option<DateTime<Utc>>,
}
impl From<Enforcement> for EnforcementResponse {
@@ -249,7 +244,7 @@ impl From<Enforcement> for EnforcementResponse {
condition: enforcement.condition,
conditions: enforcement.conditions,
created: enforcement.created,
updated: enforcement.updated,
resolved_at: enforcement.resolved_at,
}
}
}
@@ -324,6 +319,10 @@ pub struct EnforcementQueryParams {
#[param(example = "core.webhook")]
pub trigger_ref: Option<String>,
/// Filter by rule reference
#[param(example = "core.on_webhook")]
pub rule_ref: Option<String>,
/// Page number (1-indexed)
#[serde(default = "default_page")]
#[param(example = 1, minimum = 1)]


@@ -6,6 +6,8 @@ use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};
use attune_common::models::enums::ExecutionStatus;
use attune_common::models::execution::WorkflowTaskMetadata;
use attune_common::repositories::execution::ExecutionWithRefs;
/// Request DTO for creating a manual execution
#[derive(Debug, Clone, Deserialize, ToSchema)]
@@ -50,10 +52,14 @@ pub struct ExecutionResponse {
#[schema(example = 1)]
pub enforcement: Option<i64>,
/// Executor ID (worker/executor that ran this)
/// Identity ID that initiated this execution
#[schema(example = 1)]
pub executor: Option<i64>,
/// Worker ID currently assigned to this execution
#[schema(example = 1)]
pub worker: Option<i64>,
/// Execution status
#[schema(example = "succeeded")]
pub status: ExecutionStatus,
@@ -62,6 +68,17 @@ pub struct ExecutionResponse {
#[schema(value_type = Object, example = json!({"message_id": "1234567890.123456"}))]
pub result: Option<JsonValue>,
/// When the execution actually started running (worker picked it up).
/// Null if the execution hasn't started running yet.
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(example = "2024-01-13T10:31:00Z", nullable = true)]
pub started_at: Option<DateTime<Utc>>,
/// Workflow task metadata (only populated for workflow task executions)
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Option<Object>, nullable = true)]
pub workflow_task: Option<WorkflowTaskMetadata>,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -102,6 +119,17 @@ pub struct ExecutionSummary {
#[schema(example = "core.timer")]
pub trigger_ref: Option<String>,
/// When the execution actually started running (worker picked it up).
/// Null if the execution hasn't started running yet.
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(example = "2024-01-13T10:31:00Z", nullable = true)]
pub started_at: Option<DateTime<Utc>>,
/// Workflow task metadata (only populated for workflow task executions)
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Option<Object>, nullable = true)]
pub workflow_task: Option<WorkflowTaskMetadata>,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -150,6 +178,12 @@ pub struct ExecutionQueryParams {
#[param(example = 1)]
pub parent: Option<i64>,
/// If true, only return top-level executions (those without a parent).
/// Useful for the "By Workflow" view where child tasks are loaded separately.
#[serde(default)]
#[param(example = false)]
pub top_level_only: Option<bool>,
/// Page number (for pagination)
#[serde(default = "default_page")]
#[param(example = 1, minimum = 1)]
@@ -186,10 +220,13 @@ impl From<attune_common::models::execution::Execution> for ExecutionResponse {
parent: execution.parent,
enforcement: execution.enforcement,
executor: execution.executor,
worker: execution.worker,
status: execution.status,
result: execution
.result
.map(|r| serde_json::to_value(r).unwrap_or(JsonValue::Null)),
started_at: execution.started_at,
workflow_task: execution.workflow_task,
created: execution.created,
updated: execution.updated,
}
@@ -207,12 +244,34 @@ impl From<attune_common::models::execution::Execution> for ExecutionSummary {
enforcement: execution.enforcement,
rule_ref: None, // Populated separately via enforcement lookup
trigger_ref: None, // Populated separately via enforcement lookup
started_at: execution.started_at,
workflow_task: execution.workflow_task,
created: execution.created,
updated: execution.updated,
}
}
}
/// Convert from the joined query result (execution + enforcement refs).
/// `rule_ref` and `trigger_ref` are already populated from the SQL JOIN.
impl From<ExecutionWithRefs> for ExecutionSummary {
fn from(row: ExecutionWithRefs) -> Self {
Self {
id: row.id,
action_ref: row.action_ref,
status: row.status,
parent: row.parent,
enforcement: row.enforcement,
rule_ref: row.rule_ref,
trigger_ref: row.trigger_ref,
started_at: row.started_at,
workflow_task: row.workflow_task,
created: row.created,
updated: row.updated,
}
}
}
fn default_page() -> u32 {
1
}
@@ -256,6 +315,7 @@ mod tests {
action_ref: None,
enforcement: None,
parent: None,
top_level_only: None,
pack_name: None,
rule_ref: None,
trigger_ref: None,
@@ -274,6 +334,7 @@ mod tests {
action_ref: None,
enforcement: None,
parent: None,
top_level_only: None,
pack_name: None,
rule_ref: None,
trigger_ref: None,


@@ -0,0 +1,211 @@
//! History DTOs for API requests and responses
//!
//! These types represent the API-facing view of entity history records
//! stored in TimescaleDB hypertables.
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};
use attune_common::models::entity_history::HistoryEntityType;
/// Response DTO for a single entity history record.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct HistoryRecordResponse {
/// When the change occurred
#[schema(example = "2026-02-26T10:30:00Z")]
pub time: DateTime<Utc>,
/// The operation: `INSERT`, `UPDATE`, or `DELETE`
#[schema(example = "UPDATE")]
pub operation: String,
/// The primary key of the changed entity
#[schema(example = 42)]
pub entity_id: i64,
/// Denormalized human-readable identifier (e.g., action_ref, worker name)
#[schema(example = "core.http_request")]
pub entity_ref: Option<String>,
/// Names of fields that changed (empty for INSERT/DELETE)
#[schema(example = json!(["status", "result"]))]
pub changed_fields: Vec<String>,
/// Previous values of changed fields (null for INSERT)
#[schema(value_type = Object, example = json!({"status": "requested"}))]
pub old_values: Option<JsonValue>,
/// New values of changed fields (null for DELETE)
#[schema(value_type = Object, example = json!({"status": "running"}))]
pub new_values: Option<JsonValue>,
}
impl From<attune_common::models::entity_history::EntityHistoryRecord> for HistoryRecordResponse {
fn from(record: attune_common::models::entity_history::EntityHistoryRecord) -> Self {
Self {
time: record.time,
operation: record.operation,
entity_id: record.entity_id,
entity_ref: record.entity_ref,
changed_fields: record.changed_fields,
old_values: record.old_values,
new_values: record.new_values,
}
}
}
/// Query parameters for filtering history records.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct HistoryQueryParams {
/// Filter by entity ID
#[param(example = 42)]
pub entity_id: Option<i64>,
/// Filter by entity ref (e.g., action_ref, worker name)
#[param(example = "core.http_request")]
pub entity_ref: Option<String>,
/// Filter by operation type: `INSERT`, `UPDATE`, or `DELETE`
#[param(example = "UPDATE")]
pub operation: Option<String>,
/// Only include records where this field was changed
#[param(example = "status")]
pub changed_field: Option<String>,
/// Only include records at or after this time (ISO 8601)
#[param(example = "2026-02-01T00:00:00Z")]
pub since: Option<DateTime<Utc>>,
/// Only include records at or before this time (ISO 8601)
#[param(example = "2026-02-28T23:59:59Z")]
pub until: Option<DateTime<Utc>>,
/// Page number (1-based)
#[serde(default = "default_page")]
#[param(example = 1, minimum = 1)]
pub page: u32,
/// Number of items per page
#[serde(default = "default_page_size")]
#[param(example = 50, minimum = 1, maximum = 1000)]
pub page_size: u32,
}
fn default_page() -> u32 {
1
}
fn default_page_size() -> u32 {
50
}
impl HistoryQueryParams {
/// Convert to the repository-level query params.
pub fn to_repo_params(
&self,
) -> attune_common::repositories::entity_history::HistoryQueryParams {
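// Clamp page_size to the documented 1..=1000 bound, then translate the
// 1-based page number into a SQL offset.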
let limit = (self.page_size.clamp(1, 1000)) as i64;
let offset = ((self.page.saturating_sub(1)) as i64) * limit;
attune_common::repositories::entity_history::HistoryQueryParams {
entity_id: self.entity_id,
entity_ref: self.entity_ref.clone(),
operation: self.operation.clone(),
changed_field: self.changed_field.clone(),
since: self.since,
until: self.until,
limit: Some(limit),
offset: Some(offset),
}
}
}
/// Path parameter for the entity type segment.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct HistoryEntityTypePath {
/// Entity type: `execution` or `worker`
pub entity_type: String,
}
impl HistoryEntityTypePath {
/// Parse the entity type string, returning a typed enum or an error message.
pub fn parse(&self) -> Result<HistoryEntityType, String> {
self.entity_type.parse::<HistoryEntityType>()
}
}
/// Path parameters for entity-specific history (e.g., `/executions/42/history`).
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct EntityIdPath {
/// The entity's primary key
pub id: i64,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_query_params_defaults() {
let json = r#"{}"#;
let params: HistoryQueryParams = serde_json::from_str(json).unwrap();
assert_eq!(params.page, 1);
assert_eq!(params.page_size, 50);
assert!(params.entity_id.is_none());
assert!(params.operation.is_none());
}
#[test]
fn test_query_params_to_repo_params() {
let params = HistoryQueryParams {
entity_id: Some(42),
entity_ref: None,
operation: Some("UPDATE".to_string()),
changed_field: Some("status".to_string()),
since: None,
until: None,
page: 3,
page_size: 20,
};
let repo = params.to_repo_params();
assert_eq!(repo.entity_id, Some(42));
assert_eq!(repo.operation, Some("UPDATE".to_string()));
assert_eq!(repo.changed_field, Some("status".to_string()));
assert_eq!(repo.limit, Some(20));
assert_eq!(repo.offset, Some(40)); // (3-1) * 20
}
#[test]
fn test_query_params_page_size_cap() {
let params = HistoryQueryParams {
entity_id: None,
entity_ref: None,
operation: None,
changed_field: None,
since: None,
until: None,
page: 1,
page_size: 5000,
};
let repo = params.to_repo_params();
assert_eq!(repo.limit, Some(1000));
}
#[test]
fn test_entity_type_path_parse() {
let path = HistoryEntityTypePath {
entity_type: "execution".to_string(),
};
assert_eq!(path.parse().unwrap(), HistoryEntityType::Execution);
let path = HistoryEntityTypePath {
entity_type: "unknown".to_string(),
};
assert!(path.parse().is_err());
}
}


@@ -137,8 +137,8 @@ pub struct CreateInquiryRequest {
#[schema(example = "Approve deployment to production?")]
pub prompt: String,
/// Optional JSON schema for the expected response format
#[schema(value_type = Object, example = json!({"type": "object", "properties": {"approved": {"type": "boolean"}}}))]
/// Optional schema for the expected response format (flat format with inline required/secret)
#[schema(value_type = Object, example = json!({"approved": {"type": "boolean", "description": "Whether the deployment is approved", "required": true}}))]
pub response_schema: Option<JsonSchema>,
/// Optional identity ID to assign this inquiry to


@@ -2,6 +2,7 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};
use validator::Validate;
@@ -61,9 +62,9 @@ pub struct KeyResponse {
#[schema(example = true)]
pub encrypted: bool,
/// The secret value (decrypted if encrypted)
#[schema(example = "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")]
pub value: String,
/// The secret value (decrypted if encrypted). Can be a string, object, array, number, or boolean.
#[schema(value_type = Value, example = json!("ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"))]
pub value: JsonValue,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
@@ -194,21 +195,16 @@ pub struct CreateKeyRequest {
#[schema(example = "GitHub API Token")]
pub name: String,
/// The secret value to store
#[validate(length(min = 1, max = 10000))]
#[schema(example = "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")]
pub value: String,
/// The secret value to store. Can be a string, object, array, number, or boolean.
#[schema(value_type = Value, example = json!("ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"))]
pub value: JsonValue,
/// Whether to encrypt the value (recommended: true)
#[serde(default = "default_encrypted")]
#[schema(example = true)]
/// Whether to encrypt the value at rest (default: false; use --encrypt / -e from CLI)
#[serde(default)]
#[schema(example = false)]
pub encrypted: bool,
}
fn default_encrypted() -> bool {
true
}
/// Request to update an existing key/secret
#[derive(Debug, Clone, Serialize, Deserialize, Validate, ToSchema)]
pub struct UpdateKeyRequest {
@@ -217,10 +213,9 @@ pub struct UpdateKeyRequest {
#[schema(example = "GitHub API Token (Updated)")]
pub name: Option<String>,
/// Update the secret value
#[validate(length(min = 1, max = 10000))]
#[schema(example = "ghp_new_token_xxxxxxxxxxxxxxxxxxxxxxxx")]
pub value: Option<String>,
/// Update the secret value. Can be a string, object, array, number, or boolean.
#[schema(value_type = Option<Value>, example = json!("ghp_new_token_xxxxxxxxxxxxxxxxxxxxxxxx"))]
pub value: Option<JsonValue>,
/// Update encryption status (re-encrypts if changing from false to true)
#[schema(example = true)]


@@ -1,22 +1,37 @@
//! Data Transfer Objects (DTOs) for API requests and responses
pub mod action;
pub mod analytics;
pub mod artifact;
pub mod auth;
pub mod common;
pub mod event;
pub mod execution;
pub mod history;
pub mod inquiry;
pub mod key;
pub mod pack;
pub mod permission;
pub mod rule;
pub mod runtime;
pub mod trigger;
pub mod webhook;
pub mod workflow;
pub use action::{ActionResponse, ActionSummary, CreateActionRequest, UpdateActionRequest};
pub use analytics::{
AnalyticsQueryParams, DashboardAnalyticsResponse, EventVolumeResponse,
ExecutionStatusTimeSeriesResponse, ExecutionThroughputResponse, FailureRateResponse,
TimeSeriesPoint,
};
pub use artifact::{
AppendProgressRequest, ArtifactQueryParams, ArtifactResponse, ArtifactSummary,
ArtifactVersionResponse, ArtifactVersionSummary, CreateArtifactRequest,
CreateVersionJsonRequest, SetDataRequest, UpdateArtifactRequest,
};
pub use auth::{
ChangePasswordRequest, CurrentUserResponse, LoginRequest, RefreshTokenRequest, RegisterRequest,
TokenResponse,
AuthSettingsResponse, ChangePasswordRequest, CurrentUserResponse, LoginRequest,
RefreshTokenRequest, RegisterRequest, TokenResponse,
};
pub use common::{
ApiResponse, PaginatedResponse, PaginationMeta, PaginationParams, SuccessResponse,
@@ -25,14 +40,24 @@ pub use event::{
EnforcementQueryParams, EnforcementResponse, EnforcementSummary, EventQueryParams,
EventResponse, EventSummary,
};
pub use execution::{CreateExecutionRequest, ExecutionQueryParams, ExecutionResponse, ExecutionSummary};
pub use execution::{
CreateExecutionRequest, ExecutionQueryParams, ExecutionResponse, ExecutionSummary,
};
pub use history::{HistoryEntityTypePath, HistoryQueryParams, HistoryRecordResponse};
pub use inquiry::{
CreateInquiryRequest, InquiryQueryParams, InquiryRespondRequest, InquiryResponse,
InquirySummary, UpdateInquiryRequest,
};
pub use key::{CreateKeyRequest, KeyQueryParams, KeyResponse, KeySummary, UpdateKeyRequest};
pub use pack::{CreatePackRequest, PackResponse, PackSummary, UpdatePackRequest};
pub use permission::{
CreateIdentityRequest, CreateIdentityRoleAssignmentRequest, CreatePermissionAssignmentRequest,
CreatePermissionSetRoleAssignmentRequest, IdentityResponse, IdentityRoleAssignmentResponse,
IdentitySummary, PermissionAssignmentResponse, PermissionSetQueryParams,
PermissionSetRoleAssignmentResponse, PermissionSetSummary, UpdateIdentityRequest,
};
pub use rule::{CreateRuleRequest, RuleResponse, RuleSummary, UpdateRuleRequest};
pub use runtime::{CreateRuntimeRequest, RuntimeResponse, RuntimeSummary, UpdateRuntimeRequest};
pub use trigger::{
CreateSensorRequest, CreateTriggerRequest, SensorResponse, SensorSummary, TriggerResponse,
TriggerSummary, UpdateSensorRequest, UpdateTriggerRequest,


@@ -28,9 +28,9 @@ pub struct CreatePackRequest {
#[schema(example = "1.0.0")]
pub version: String,
/// Configuration schema (JSON Schema)
/// Configuration schema (flat format with inline required/secret per parameter)
#[serde(default = "default_empty_object")]
#[schema(value_type = Object, example = json!({"type": "object", "properties": {"api_token": {"type": "string"}}}))]
#[schema(value_type = Object, example = json!({"api_token": {"type": "string", "description": "API authentication key", "required": true, "secret": true}}))]
pub conf_schema: JsonValue,
/// Pack configuration values
@@ -95,11 +95,6 @@ pub struct InstallPackRequest {
#[schema(example = "main")]
pub ref_spec: Option<String>,
/// Force reinstall if pack already exists
#[serde(default)]
#[schema(example = false)]
pub force: bool,
/// Skip running pack tests during installation
#[serde(default)]
#[schema(example = false)]
@@ -134,7 +129,7 @@ pub struct UpdatePackRequest {
/// Pack description
#[schema(example = "Enhanced Slack integration with new features")]
pub description: Option<String>,
pub description: Option<PackDescriptionPatch>,
/// Pack version
#[validate(length(min = 1, max = 50))]
@@ -170,6 +165,13 @@ pub struct UpdatePackRequest {
pub is_standard: Option<bool>,
}
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum PackDescriptionPatch {
Set(String),
Clear,
}
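// Serializes with the same adjacently tagged layout as the artifact patches:
// {"op": "set", "value": "..."} to set and {"op": "clear"} to null the field.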
/// Response DTO for pack information
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct PackResponse {


@@ -0,0 +1,110 @@
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};
use validator::Validate;
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct PermissionSetQueryParams {
#[serde(default)]
pub pack_ref: Option<String>,
}
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct IdentitySummary {
pub id: i64,
pub login: String,
pub display_name: Option<String>,
pub frozen: bool,
pub attributes: JsonValue,
pub roles: Vec<String>,
}
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct IdentityRoleAssignmentResponse {
pub id: i64,
pub identity_id: i64,
pub role: String,
pub source: String,
pub managed: bool,
pub created: chrono::DateTime<chrono::Utc>,
pub updated: chrono::DateTime<chrono::Utc>,
}
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct IdentityResponse {
pub id: i64,
pub login: String,
pub display_name: Option<String>,
pub frozen: bool,
pub attributes: JsonValue,
pub roles: Vec<IdentityRoleAssignmentResponse>,
pub direct_permissions: Vec<PermissionAssignmentResponse>,
}
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct PermissionSetSummary {
pub id: i64,
pub r#ref: String,
pub pack_ref: Option<String>,
pub label: Option<String>,
pub description: Option<String>,
pub grants: JsonValue,
pub roles: Vec<PermissionSetRoleAssignmentResponse>,
}
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct PermissionAssignmentResponse {
pub id: i64,
pub identity_id: i64,
pub permission_set_id: i64,
pub permission_set_ref: String,
pub created: chrono::DateTime<chrono::Utc>,
}
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct PermissionSetRoleAssignmentResponse {
pub id: i64,
pub permission_set_id: i64,
pub permission_set_ref: Option<String>,
pub role: String,
pub created: chrono::DateTime<chrono::Utc>,
}
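/// Assigns a permission set (by ref) to an identity. The target identity may
/// be given by `identity_id` or `identity_login`; presumably exactly one is
/// required, with the handler resolving a login to its ID.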
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct CreatePermissionAssignmentRequest {
pub identity_id: Option<i64>,
pub identity_login: Option<String>,
pub permission_set_ref: String,
}
#[derive(Debug, Clone, Deserialize, Validate, ToSchema)]
pub struct CreateIdentityRoleAssignmentRequest {
#[validate(length(min = 1, max = 255))]
pub role: String,
}
#[derive(Debug, Clone, Deserialize, Validate, ToSchema)]
pub struct CreatePermissionSetRoleAssignmentRequest {
#[validate(length(min = 1, max = 255))]
pub role: String,
}
#[derive(Debug, Clone, Deserialize, Validate, ToSchema)]
pub struct CreateIdentityRequest {
#[validate(length(min = 3, max = 255))]
pub login: String,
#[validate(length(max = 255))]
pub display_name: Option<String>,
#[validate(length(min = 8, max = 128))]
pub password: Option<String>,
#[serde(default)]
pub attributes: JsonValue,
}
#[derive(Debug, Clone, Deserialize, ToSchema)]
pub struct UpdateIdentityRequest {
pub display_name: Option<String>,
pub password: Option<String>,
pub attributes: Option<JsonValue>,
pub frozen: Option<bool>,
}


@@ -25,9 +25,8 @@ pub struct CreateRuleRequest {
pub label: String,
/// Rule description
#[validate(length(min = 1))]
#[schema(example = "Send Slack notification when an error occurs")]
pub description: String,
pub description: Option<String>,
/// Action reference to execute when rule matches
#[validate(length(min = 1, max = 255))]
@@ -69,7 +68,6 @@ pub struct UpdateRuleRequest {
pub label: Option<String>,
/// Rule description
#[validate(length(min = 1))]
#[schema(example = "Enhanced error notification with filtering")]
pub description: Option<String>,
@@ -115,7 +113,7 @@ pub struct RuleResponse {
/// Rule description
#[schema(example = "Send Slack notification when an error occurs")]
pub description: String,
pub description: Option<String>,
/// Action ID (null if the referenced action has been deleted)
#[schema(example = 1)]
@@ -183,7 +181,7 @@ pub struct RuleSummary {
/// Rule description
#[schema(example = "Send Slack notification when an error occurs")]
pub description: String,
pub description: Option<String>,
/// Action reference
#[schema(example = "slack.post_message")]
@@ -297,7 +295,7 @@ mod tests {
r#ref: "".to_string(), // Invalid: empty
pack_ref: "test-pack".to_string(),
label: "Test Rule".to_string(),
description: "Test description".to_string(),
description: Some("Test description".to_string()),
action_ref: "test.action".to_string(),
trigger_ref: "test.trigger".to_string(),
conditions: default_empty_object(),
@@ -315,7 +313,7 @@ mod tests {
r#ref: "test.rule".to_string(),
pack_ref: "test-pack".to_string(),
label: "Test Rule".to_string(),
description: "Test description".to_string(),
description: Some("Test description".to_string()),
action_ref: "test.action".to_string(),
trigger_ref: "test.trigger".to_string(),
conditions: serde_json::json!({


@@ -0,0 +1,181 @@
//! Runtime DTOs for API requests and responses
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use utoipa::ToSchema;
use validator::Validate;
/// Request DTO for creating a runtime.
#[derive(Debug, Clone, Deserialize, Validate, ToSchema)]
pub struct CreateRuntimeRequest {
/// Unique reference identifier (e.g. "core.python", "core.nodejs")
#[validate(length(min = 1, max = 255))]
#[schema(example = "core.python")]
pub r#ref: String,
/// Optional pack reference this runtime belongs to
#[validate(length(min = 1, max = 255))]
#[schema(example = "core", nullable = true)]
pub pack_ref: Option<String>,
/// Optional human-readable description
#[validate(length(min = 1))]
#[schema(example = "Python runtime with virtualenv support", nullable = true)]
pub description: Option<String>,
/// Display name
#[validate(length(min = 1, max = 255))]
#[schema(example = "Python")]
pub name: String,
/// Distribution metadata used for verification and platform support
#[serde(default)]
#[schema(value_type = Object, example = json!({"linux": {"supported": true}}))]
pub distributions: JsonValue,
/// Optional installation metadata
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Object, nullable = true, example = json!({"method": "system"}))]
pub installation: Option<JsonValue>,
/// Runtime execution configuration
#[serde(default)]
#[schema(value_type = Object, example = json!({"interpreter": {"command": "python3"}}))]
pub execution_config: JsonValue,
}
/// Request DTO for updating a runtime.
#[derive(Debug, Clone, Deserialize, Validate, ToSchema)]
pub struct UpdateRuntimeRequest {
/// Optional human-readable description patch.
pub description: Option<NullableStringPatch>,
/// Display name
#[validate(length(min = 1, max = 255))]
#[schema(example = "Python 3")]
pub name: Option<String>,
/// Distribution metadata used for verification and platform support
#[schema(value_type = Object, nullable = true)]
pub distributions: Option<JsonValue>,
/// Optional installation metadata patch.
pub installation: Option<NullableJsonPatch>,
/// Runtime execution configuration
#[schema(value_type = Object, nullable = true)]
pub execution_config: Option<JsonValue>,
}
/// Explicit patch operation for nullable string fields.
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum NullableStringPatch {
#[schema(title = "SetString")]
Set(String),
Clear,
}
/// Explicit patch operation for nullable JSON fields.
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum NullableJsonPatch {
#[schema(title = "SetJson")]
Set(JsonValue),
Clear,
}
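// The #[schema(title = ...)] attributes above presumably exist to give the
// generated OpenAPI variants distinct names, since several patch enums in
// this crate share the Set/Clear shape.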
/// Full runtime response.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct RuntimeResponse {
#[schema(example = 1)]
pub id: i64,
#[schema(example = "core.python")]
pub r#ref: String,
#[schema(example = 1, nullable = true)]
pub pack: Option<i64>,
#[schema(example = "core", nullable = true)]
pub pack_ref: Option<String>,
#[schema(example = "Python runtime with virtualenv support", nullable = true)]
pub description: Option<String>,
#[schema(example = "Python")]
pub name: String,
#[schema(value_type = Object)]
pub distributions: JsonValue,
#[schema(value_type = Object, nullable = true)]
pub installation: Option<JsonValue>,
#[schema(value_type = Object)]
pub execution_config: JsonValue,
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
#[schema(example = "2024-01-13T10:30:00Z")]
pub updated: DateTime<Utc>,
}
/// Runtime summary for list views.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct RuntimeSummary {
#[schema(example = 1)]
pub id: i64,
#[schema(example = "core.python")]
pub r#ref: String,
#[schema(example = "core", nullable = true)]
pub pack_ref: Option<String>,
#[schema(example = "Python runtime with virtualenv support", nullable = true)]
pub description: Option<String>,
#[schema(example = "Python")]
pub name: String,
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
#[schema(example = "2024-01-13T10:30:00Z")]
pub updated: DateTime<Utc>,
}
impl From<attune_common::models::runtime::Runtime> for RuntimeResponse {
fn from(runtime: attune_common::models::runtime::Runtime) -> Self {
Self {
id: runtime.id,
r#ref: runtime.r#ref,
pack: runtime.pack,
pack_ref: runtime.pack_ref,
description: runtime.description,
name: runtime.name,
distributions: runtime.distributions,
installation: runtime.installation,
execution_config: runtime.execution_config,
created: runtime.created,
updated: runtime.updated,
}
}
}
impl From<attune_common::models::runtime::Runtime> for RuntimeSummary {
fn from(runtime: attune_common::models::runtime::Runtime) -> Self {
Self {
id: runtime.id,
r#ref: runtime.r#ref,
pack_ref: runtime.pack_ref,
description: runtime.description,
name: runtime.name,
created: runtime.created,
updated: runtime.updated,
}
}
}


@@ -28,14 +28,14 @@ pub struct CreateTriggerRequest {
#[schema(example = "Triggers when a webhook is received")]
pub description: Option<String>,
/// Parameter schema (JSON Schema) defining event payload structure
/// Parameter schema (StackStorm-style) defining trigger configuration with inline required/secret
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Object, nullable = true, example = json!({"type": "object", "properties": {"url": {"type": "string"}}}))]
#[schema(value_type = Object, nullable = true, example = json!({"url": {"type": "string", "description": "Webhook URL", "required": true}}))]
pub param_schema: Option<JsonValue>,
/// Output schema (JSON Schema) defining event data structure
/// Output schema (flat format) defining event data structure with inline required/secret
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Object, nullable = true, example = json!({"type": "object", "properties": {"payload": {"type": "object"}}}))]
#[schema(value_type = Object, nullable = true, example = json!({"payload": {"type": "object", "description": "Event payload data", "required": true}}))]
pub out_schema: Option<JsonValue>,
/// Whether the trigger is enabled
@@ -54,21 +54,35 @@ pub struct UpdateTriggerRequest {
/// Trigger description
#[schema(example = "Updated webhook trigger description")]
pub description: Option<String>,
pub description: Option<TriggerStringPatch>,
/// Parameter schema
/// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
pub param_schema: Option<TriggerJsonPatch>,
/// Output schema
#[schema(value_type = Object, nullable = true)]
pub out_schema: Option<JsonValue>,
pub out_schema: Option<TriggerJsonPatch>,
/// Whether the trigger is enabled
#[schema(example = true)]
pub enabled: Option<bool>,
}
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum TriggerStringPatch {
Set(String),
Clear,
}
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum TriggerJsonPatch {
Set(JsonValue),
Clear,
}
/// Response DTO for trigger information
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct TriggerResponse {
@@ -100,7 +114,7 @@ pub struct TriggerResponse {
#[schema(example = true)]
pub enabled: bool,
/// Parameter schema
/// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
@@ -189,9 +203,8 @@ pub struct CreateSensorRequest {
pub label: String,
/// Sensor description
#[validate(length(min = 1))]
#[schema(example = "Monitors CPU usage and generates events")]
pub description: String,
pub description: Option<String>,
/// Entry point for sensor execution (e.g., path to script, function name)
#[validate(length(min = 1, max = 1024))]
@@ -208,9 +221,9 @@ pub struct CreateSensorRequest {
#[schema(example = "monitoring.cpu_threshold")]
pub trigger_ref: String,
/// Parameter schema (JSON Schema) for sensor configuration
/// Parameter schema (flat format) for sensor configuration
#[serde(skip_serializing_if = "Option::is_none")]
#[schema(value_type = Object, nullable = true, example = json!({"type": "object", "properties": {"threshold": {"type": "number"}}}))]
#[schema(value_type = Object, nullable = true, example = json!({"threshold": {"type": "number", "description": "Alert threshold", "required": true}}))]
pub param_schema: Option<JsonValue>,
/// Configuration values for this sensor instance (conforms to param_schema)
@@ -233,7 +246,6 @@ pub struct UpdateSensorRequest {
pub label: Option<String>,
/// Sensor description
#[validate(length(min = 1))]
#[schema(example = "Enhanced CPU monitoring with alerts")]
pub description: Option<String>,
@@ -242,15 +254,22 @@ pub struct UpdateSensorRequest {
#[schema(example = "/sensors/monitoring/cpu_monitor_v2.py")]
pub entrypoint: Option<String>,
/// Parameter schema
/// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
pub param_schema: Option<SensorJsonPatch>,
/// Whether the sensor is enabled
#[schema(example = false)]
pub enabled: Option<bool>,
}
#[derive(Debug, Clone, Deserialize, Serialize, ToSchema)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
pub enum SensorJsonPatch {
Set(JsonValue),
Clear,
}
/// Response DTO for sensor information
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct SensorResponse {
@@ -276,7 +295,7 @@ pub struct SensorResponse {
/// Sensor description
#[schema(example = "Monitors CPU usage and generates events")]
pub description: String,
pub description: Option<String>,
/// Entry point
#[schema(example = "/sensors/monitoring/cpu_monitor.py")]
@@ -302,7 +321,7 @@ pub struct SensorResponse {
#[schema(example = true)]
pub enabled: bool,
/// Parameter schema
/// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
@@ -336,7 +355,7 @@ pub struct SensorSummary {
/// Sensor description
#[schema(example = "Monitors CPU usage and generates events")]
pub description: String,
pub description: Option<String>,
/// Trigger reference
#[schema(example = "monitoring.cpu_threshold")]
@@ -478,7 +497,7 @@ mod tests {
r#ref: "test.sensor".to_string(),
pack_ref: "test-pack".to_string(),
label: "Test Sensor".to_string(),
description: "Test description".to_string(),
description: Some("Test description".to_string()),
entrypoint: "/sensors/test.py".to_string(),
runtime_ref: "python3".to_string(),
trigger_ref: "test.trigger".to_string(),


@@ -6,6 +6,50 @@ use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};
use validator::Validate;
/// Request DTO for saving a workflow file to disk and syncing to DB
#[derive(Debug, Clone, Deserialize, Validate, ToSchema)]
pub struct SaveWorkflowFileRequest {
/// Workflow name (becomes filename: {name}.workflow.yaml)
#[validate(length(min = 1, max = 255))]
#[schema(example = "deploy_app")]
pub name: String,
/// Human-readable label
#[validate(length(min = 1, max = 255))]
#[schema(example = "Deploy Application")]
pub label: String,
/// Workflow description
#[schema(example = "Deploys an application to the target environment")]
pub description: Option<String>,
/// Workflow version (semantic versioning recommended)
#[validate(length(min = 1, max = 50))]
#[schema(example = "1.0.0")]
pub version: String,
/// Pack reference this workflow belongs to
#[validate(length(min = 1, max = 255))]
#[schema(example = "core")]
pub pack_ref: String,
/// The full workflow definition as JSON (will be serialized to YAML on disk)
#[schema(value_type = Object)]
pub definition: JsonValue,
/// Parameter schema (flat format with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
/// Output schema (flat format)
#[schema(value_type = Object, nullable = true)]
pub out_schema: Option<JsonValue>,
/// Tags for categorization
#[schema(example = json!(["deployment", "automation"]))]
pub tags: Option<Vec<String>>,
}
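// Illustrative request (made-up values): presumably written under the pack's
// directory as deploy_app.workflow.yaml, with `definition` serialized to YAML.
// {
//   "name": "deploy_app",
//   "label": "Deploy Application",
//   "version": "1.0.0",
//   "pack_ref": "core",
//   "definition": {"tasks": []}
// }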
/// Request DTO for creating a new workflow
#[derive(Debug, Clone, Deserialize, Validate, ToSchema)]
pub struct CreateWorkflowRequest {
@@ -33,12 +77,12 @@ pub struct CreateWorkflowRequest {
#[schema(example = "1.0.0")]
pub version: String,
/// Parameter schema (JSON Schema) defining expected inputs
#[schema(value_type = Object, example = json!({"type": "object", "properties": {"severity": {"type": "string"}, "channel": {"type": "string"}}}))]
/// Parameter schema (StackStorm-style) defining expected inputs with inline required/secret
#[schema(value_type = Object, example = json!({"severity": {"type": "string", "description": "Incident severity", "required": true}, "channel": {"type": "string", "description": "Notification channel"}}))]
pub param_schema: Option<JsonValue>,
/// Output schema (JSON Schema) defining expected outputs
#[schema(value_type = Object, example = json!({"type": "object", "properties": {"incident_id": {"type": "string"}}}))]
/// Output schema (flat format) defining expected outputs with inline required/secret
#[schema(value_type = Object, example = json!({"incident_id": {"type": "string", "description": "Unique incident identifier", "required": true}}))]
pub out_schema: Option<JsonValue>,
/// Workflow definition (complete workflow YAML structure as JSON)
@@ -48,10 +92,6 @@ pub struct CreateWorkflowRequest {
/// Tags for categorization and search
#[schema(example = json!(["incident", "slack", "approval"]))]
pub tags: Option<Vec<String>>,
/// Whether the workflow is enabled
#[schema(example = true)]
pub enabled: Option<bool>,
}
/// Request DTO for updating a workflow
@@ -71,7 +111,7 @@ pub struct UpdateWorkflowRequest {
#[schema(example = "1.1.0")]
pub version: Option<String>,
/// Parameter schema
/// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
@@ -86,10 +126,6 @@ pub struct UpdateWorkflowRequest {
/// Tags
#[schema(example = json!(["incident", "slack", "approval", "automation"]))]
pub tags: Option<Vec<String>>,
/// Whether the workflow is enabled
#[schema(example = true)]
pub enabled: Option<bool>,
}
/// Response DTO for workflow information
@@ -123,7 +159,7 @@ pub struct WorkflowResponse {
#[schema(example = "1.0.0")]
pub version: String,
/// Parameter schema
/// Parameter schema (StackStorm-style with inline required/secret)
#[schema(value_type = Object, nullable = true)]
pub param_schema: Option<JsonValue>,
@@ -139,10 +175,6 @@ pub struct WorkflowResponse {
#[schema(example = json!(["incident", "slack", "approval"]))]
pub tags: Vec<String>,
/// Whether the workflow is enabled
#[schema(example = true)]
pub enabled: bool,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -183,10 +215,6 @@ pub struct WorkflowSummary {
#[schema(example = json!(["incident", "slack", "approval"]))]
pub tags: Vec<String>,
/// Whether the workflow is enabled
#[schema(example = true)]
pub enabled: bool,
/// Creation timestamp
#[schema(example = "2024-01-13T10:30:00Z")]
pub created: DateTime<Utc>,
@@ -211,7 +239,6 @@ impl From<attune_common::models::workflow::WorkflowDefinition> for WorkflowRespo
out_schema: workflow.out_schema,
definition: workflow.definition,
tags: workflow.tags,
enabled: workflow.enabled,
created: workflow.created,
updated: workflow.updated,
}
@@ -229,7 +256,6 @@ impl From<attune_common::models::workflow::WorkflowDefinition> for WorkflowSumma
description: workflow.description,
version: workflow.version,
tags: workflow.tags,
enabled: workflow.enabled,
created: workflow.created,
updated: workflow.updated,
}
@@ -243,10 +269,6 @@ pub struct WorkflowSearchParams {
#[param(example = "incident,approval")]
pub tags: Option<String>,
/// Filter by enabled status
#[param(example = true)]
pub enabled: Option<bool>,
/// Search term for label/description (case-insensitive)
#[param(example = "incident")]
pub search: Option<String>,
@@ -272,7 +294,6 @@ mod tests {
out_schema: None,
definition: serde_json::json!({"tasks": []}),
tags: None,
enabled: None,
};
assert!(req.validate().is_err());
@@ -290,7 +311,6 @@ mod tests {
out_schema: None,
definition: serde_json::json!({"tasks": []}),
tags: Some(vec!["test".to_string()]),
enabled: Some(true),
};
assert!(req.validate().is_ok());
@@ -306,7 +326,6 @@ mod tests {
out_schema: None,
definition: None,
tags: None,
enabled: None,
};
// Should be valid even with all None values
@@ -317,7 +336,6 @@ mod tests {
fn test_workflow_search_params() {
let params = WorkflowSearchParams {
tags: Some("incident,approval".to_string()),
enabled: Some(true),
search: Some("response".to_string()),
pack_ref: Some("core".to_string()),
};


@@ -5,6 +5,7 @@
//! It is primarily used by the binary target and integration tests.
pub mod auth;
pub mod authz;
pub mod dto;
pub mod middleware;
pub mod openapi;

View File

@@ -33,8 +33,92 @@ struct Args {
port: Option<u16>,
}
/// Attempt to connect to RabbitMQ and create a publisher.
/// Returns the publisher on success.
async fn try_connect_publisher(mq_url: &str) -> Result<Publisher> {
let mq_connection = Connection::connect(mq_url).await?;
// Setup common message queue infrastructure (exchanges and DLX)
let mq_setup_config = attune_common::mq::MessageQueueConfig::default();
if let Err(e) = mq_connection
.setup_common_infrastructure(&mq_setup_config)
.await
{
warn!(
"Failed to setup common MQ infrastructure (may already exist): {}",
e
);
}
let publisher = Publisher::new(
&mq_connection,
PublisherConfig {
confirm_publish: true,
timeout_secs: 30,
exchange: "attune.executions".to_string(),
},
)
.await?;
Ok(publisher)
}
/// Background task that keeps trying to establish the MQ publisher connection.
/// Once connected it installs the publisher into `state`, then monitors the
/// connection health and reconnects if it drops.
async fn mq_reconnect_loop(state: Arc<AppState>, mq_url: String) {
// Retry delay sequence (seconds): 1, 2, 4, 8, 16, 30, 30, …
let delays: &[u64] = &[1, 2, 4, 8, 16, 30];
let mut attempt: usize = 0;
loop {
let delay = delays.get(attempt).copied().unwrap_or(30);
match try_connect_publisher(&mq_url).await {
Ok(publisher) => {
info!(
"Message queue publisher connected (attempt {})",
attempt + 1
);
state.set_publisher(Arc::new(publisher)).await;
attempt = 0; // reset backoff after a successful connect
// Poll liveness: the publisher will error on use when the
// underlying channel is gone. We do a lightweight wait here so
// we notice disconnections and attempt to reconnect.
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
if state.get_publisher().await.is_none() {
// Something cleared the publisher externally; re-enter
// the outer connect loop.
break;
}
// TODO: add a real health-check ping when the lapin API
// exposes one (e.g. channel.basic_noop). For now a broken
// publisher will be detected on the first failed publish and
// can be cleared by the handler to trigger reconnection here.
}
}
Err(e) => {
warn!(
"Failed to connect to message queue (attempt {}, retrying in {}s): {}",
attempt + 1,
delay,
e
);
tokio::time::sleep(tokio::time::Duration::from_secs(delay)).await;
attempt = attempt.saturating_add(1);
}
}
}
}
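The inner liveness poll above only notices a disconnect once something clears the publisher slot (per the TODO). A self-contained sketch of that handler-side pattern, assuming the tokio crate; `Slot` is a toy stand-in for `AppState`'s `set_publisher`/`get_publisher` pair, and `clear` is the hypothetical method a failed publish would call:

use std::sync::Arc;
use tokio::sync::RwLock;

// Toy stand-in for the publisher slot on AppState; the real slot holds an
// Arc<Publisher> rather than a String.
#[derive(Default)]
struct Slot(RwLock<Option<Arc<String>>>);

impl Slot {
    async fn set(&self, p: Arc<String>) {
        *self.0.write().await = Some(p);
    }
    async fn get(&self) -> Option<Arc<String>> {
        self.0.read().await.clone()
    }
    // Hypothetical: the piece the TODO alludes to. A handler whose publish
    // fails clears the slot...
    async fn clear(&self) {
        *self.0.write().await = None;
    }
}

#[tokio::main]
async fn main() {
    let slot = Arc::new(Slot::default());
    slot.set(Arc::new("publisher".to_string())).await;

    // ...and the reconnect loop's 10-second liveness poll then sees None,
    // breaks out of its inner loop, and re-runs try_connect_publisher with
    // fresh backoff.
    slot.clear().await;
    assert!(slot.get().await.is_none());
}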
#[tokio::main]
async fn main() -> Result<()> {
// Install a JWT crypto provider that supports both Attune's HS tokens
// and external RS256 OIDC identity tokens.
let _ = jsonwebtoken::crypto::rust_crypto::DEFAULT_PROVIDER.install_default();
// Initialize tracing subscriber
tracing_subscriber::fmt()
.with_target(false)
@@ -66,59 +150,21 @@ async fn main() -> Result<()> {
let database = Database::new(&config.database).await?;
info!("Database connection established");
// Initialize message queue connection and publisher (optional)
let mut state = AppState::new(database.pool().clone(), config.clone());
// Initialize application state (publisher starts as None)
let state = Arc::new(AppState::new(database.pool().clone(), config.clone()));
// Spawn background MQ reconnect loop if a message queue is configured.
// The loop will keep retrying until it connects, then install the publisher
// into the shared state so request handlers can use it immediately.
if let Some(ref mq_config) = config.message_queue {
info!("Connecting to message queue...");
match Connection::connect(&mq_config.url).await {
Ok(mq_connection) => {
info!("Message queue connection established");
// Setup common message queue infrastructure (exchanges and DLX)
let mq_setup_config = attune_common::mq::MessageQueueConfig::default();
match mq_connection
.setup_common_infrastructure(&mq_setup_config)
.await
{
Ok(_) => info!("Common message queue infrastructure setup completed"),
Err(e) => {
warn!(
"Failed to setup common MQ infrastructure (may already exist): {}",
e
);
}
}
// Create publisher
match Publisher::new(
&mq_connection,
PublisherConfig {
confirm_publish: true,
timeout_secs: 30,
exchange: "attune.executions".to_string(),
},
)
.await
{
Ok(publisher) => {
info!("Message queue publisher initialized");
state = state.with_publisher(Arc::new(publisher));
}
Err(e) => {
warn!("Failed to create publisher: {}", e);
warn!("Executions will not be queued for processing");
}
}
}
Err(e) => {
warn!("Failed to connect to message queue: {}", e);
warn!("Executions will not be queued for processing");
}
}
info!("Message queue configured starting background connection loop...");
let mq_url = mq_config.url.clone();
let state_clone = state.clone();
tokio::spawn(async move {
mq_reconnect_loop(state_clone, mq_url).await;
});
} else {
warn!("Message queue not configured");
warn!("Executions will not be queued for processing");
warn!("Message queue not configured executions will not be queued for processing");
}
info!(
@@ -143,7 +189,7 @@ async fn main() -> Result<()> {
info!("PostgreSQL notification listener started");
// Create and start server
let server = Server::new(std::sync::Arc::new(state));
let server = Server::new(state.clone());
info!("Attune API Service is ready");

View File

@@ -148,8 +148,42 @@ impl From<sqlx::Error> for ApiError {
match err {
sqlx::Error::RowNotFound => ApiError::NotFound("Resource not found".to_string()),
sqlx::Error::Database(db_err) => {
// Check for unique constraint violations
if let Some(constraint) = db_err.constraint() {
// PostgreSQL error codes:
// 23505 = unique_violation → 409 Conflict
// 23503 = foreign_key_violation → 422 Unprocessable Entity
// 23514 = check_violation → 422 Unprocessable Entity
// P0001 = raise_exception → 400 Bad Request (trigger-raised errors)
let pg_code = db_err.code().map(|c| c.to_string()).unwrap_or_default();
if pg_code == "23505" {
// Unique constraint violation — duplicate key
let detail = db_err
.constraint()
.map(|c| format!(" ({})", c))
.unwrap_or_default();
ApiError::Conflict(format!("Already exists{}", detail))
} else if pg_code == "23503" {
// Foreign key violation — the referenced row doesn't exist
let detail = db_err
.constraint()
.map(|c| format!(" ({})", c))
.unwrap_or_default();
ApiError::UnprocessableEntity(format!(
"Referenced entity does not exist{}",
detail
))
} else if pg_code == "23514" {
// CHECK constraint violation — value doesn't meet constraint
let detail = db_err
.constraint()
.map(|c| format!(": {}", c))
.unwrap_or_default();
ApiError::UnprocessableEntity(format!("Validation constraint failed{}", detail))
} else if pg_code == "P0001" {
// RAISE EXCEPTION from a trigger or function
// Extract the human-readable message from the exception
let msg = db_err.message().to_string();
ApiError::BadRequest(msg)
} else if let Some(constraint) = db_err.constraint() {
ApiError::Conflict(format!("Constraint violation: {}", constraint))
} else {
ApiError::DatabaseError(format!("Database error: {}", db_err))

View File

@@ -10,8 +10,8 @@ use crate::dto::{
ActionResponse, ActionSummary, CreateActionRequest, QueueStatsResponse, UpdateActionRequest,
},
auth::{
ChangePasswordRequest, CurrentUserResponse, LoginRequest, RefreshTokenRequest,
RegisterRequest, TokenResponse,
AuthSettingsResponse, ChangePasswordRequest, CurrentUserResponse, LoginRequest,
RefreshTokenRequest, RegisterRequest, TokenResponse,
},
common::{ApiResponse, PaginatedResponse, PaginationMeta, SuccessResponse},
event::{EnforcementResponse, EnforcementSummary, EventResponse, EventSummary},
@@ -26,7 +26,15 @@ use crate::dto::{
PackWorkflowSyncResponse, PackWorkflowValidationResponse, RegisterPackRequest,
UpdatePackRequest, WorkflowSyncResult,
},
permission::{
CreateIdentityRequest, CreateIdentityRoleAssignmentRequest,
CreatePermissionAssignmentRequest, CreatePermissionSetRoleAssignmentRequest,
IdentityResponse, IdentityRoleAssignmentResponse, IdentitySummary,
PermissionAssignmentResponse, PermissionSetRoleAssignmentResponse, PermissionSetSummary,
UpdateIdentityRequest,
},
rule::{CreateRuleRequest, RuleResponse, RuleSummary, UpdateRuleRequest},
runtime::{CreateRuntimeRequest, RuntimeResponse, RuntimeSummary, UpdateRuntimeRequest},
trigger::{
CreateSensorRequest, CreateTriggerRequest, SensorResponse, SensorSummary, TriggerResponse,
TriggerSummary, UpdateSensorRequest, UpdateTriggerRequest,
@@ -63,7 +71,9 @@ use crate::dto::{
crate::routes::health::liveness,
// Authentication
crate::routes::auth::auth_settings,
crate::routes::auth::login,
crate::routes::auth::ldap_login,
crate::routes::auth::register,
crate::routes::auth::refresh_token,
crate::routes::auth::get_current_user,
@@ -92,6 +102,14 @@ use crate::dto::{
crate::routes::actions::delete_action,
crate::routes::actions::get_queue_stats,
// Runtimes
crate::routes::runtimes::list_runtimes,
crate::routes::runtimes::list_runtimes_by_pack,
crate::routes::runtimes::get_runtime,
crate::routes::runtimes::create_runtime,
crate::routes::runtimes::update_runtime,
crate::routes::runtimes::delete_runtime,
// Triggers
crate::routes::triggers::list_triggers,
crate::routes::triggers::list_enabled_triggers,
@@ -160,6 +178,23 @@ use crate::dto::{
crate::routes::keys::update_key,
crate::routes::keys::delete_key,
// Permissions
crate::routes::permissions::list_identities,
crate::routes::permissions::get_identity,
crate::routes::permissions::create_identity,
crate::routes::permissions::update_identity,
crate::routes::permissions::delete_identity,
crate::routes::permissions::list_permission_sets,
crate::routes::permissions::list_identity_permissions,
crate::routes::permissions::create_permission_assignment,
crate::routes::permissions::delete_permission_assignment,
crate::routes::permissions::create_identity_role_assignment,
crate::routes::permissions::delete_identity_role_assignment,
crate::routes::permissions::create_permission_set_role_assignment,
crate::routes::permissions::delete_permission_set_role_assignment,
crate::routes::permissions::freeze_identity,
crate::routes::permissions::unfreeze_identity,
// Workflows
crate::routes::workflows::list_workflows,
crate::routes::workflows::list_workflows_by_pack,
@@ -173,15 +208,21 @@ use crate::dto::{
crate::routes::webhooks::disable_webhook,
crate::routes::webhooks::regenerate_webhook_key,
crate::routes::webhooks::receive_webhook,
// Agent
crate::routes::agent::download_agent_binary,
crate::routes::agent::agent_info,
),
components(
schemas(
// Common types
ApiResponse<TokenResponse>,
ApiResponse<AuthSettingsResponse>,
ApiResponse<CurrentUserResponse>,
ApiResponse<PackResponse>,
ApiResponse<PackInstallResponse>,
ApiResponse<ActionResponse>,
ApiResponse<RuntimeResponse>,
ApiResponse<TriggerResponse>,
ApiResponse<SensorResponse>,
ApiResponse<RuleResponse>,
@@ -190,10 +231,13 @@ use crate::dto::{
ApiResponse<EnforcementResponse>,
ApiResponse<InquiryResponse>,
ApiResponse<KeyResponse>,
ApiResponse<IdentityResponse>,
ApiResponse<PermissionAssignmentResponse>,
ApiResponse<WorkflowResponse>,
ApiResponse<QueueStatsResponse>,
PaginatedResponse<PackSummary>,
PaginatedResponse<ActionSummary>,
PaginatedResponse<RuntimeSummary>,
PaginatedResponse<TriggerSummary>,
PaginatedResponse<SensorSummary>,
PaginatedResponse<RuleSummary>,
@@ -202,12 +246,14 @@ use crate::dto::{
PaginatedResponse<EnforcementSummary>,
PaginatedResponse<InquirySummary>,
PaginatedResponse<KeySummary>,
PaginatedResponse<IdentitySummary>,
PaginatedResponse<WorkflowSummary>,
PaginationMeta,
SuccessResponse,
// Auth DTOs
LoginRequest,
crate::routes::auth::LdapLoginRequest,
RegisterRequest,
RefreshTokenRequest,
ChangePasswordRequest,
@@ -233,6 +279,25 @@ use crate::dto::{
attune_common::models::pack_test::PackTestSummary,
PaginatedResponse<attune_common::models::pack_test::PackTestSummary>,
// Permission DTOs
CreateIdentityRequest,
UpdateIdentityRequest,
IdentityResponse,
PermissionSetSummary,
PermissionAssignmentResponse,
CreatePermissionAssignmentRequest,
CreateIdentityRoleAssignmentRequest,
IdentityRoleAssignmentResponse,
CreatePermissionSetRoleAssignmentRequest,
PermissionSetRoleAssignmentResponse,
// Runtime DTOs
CreateRuntimeRequest,
UpdateRuntimeRequest,
RuntimeResponse,
RuntimeSummary,
IdentitySummary,
// Action DTOs
CreateActionRequest,
UpdateActionRequest,
@@ -293,6 +358,10 @@ use crate::dto::{
WebhookReceiverRequest,
WebhookReceiverResponse,
ApiResponse<WebhookReceiverResponse>,
// Agent DTOs
crate::routes::agent::AgentBinaryInfo,
crate::routes::agent::AgentArchInfo,
)
),
modifiers(&SecurityAddon),
@@ -311,6 +380,7 @@ use crate::dto::{
(name = "secrets", description = "Secret management endpoints"),
(name = "workflows", description = "Workflow management endpoints"),
(name = "webhooks", description = "Webhook management and receiver endpoints"),
(name = "agent", description = "Agent binary distribution endpoints"),
)
)]
pub struct ApiDoc;
@@ -393,18 +463,57 @@ mod tests {
// We have 59 unique paths with 83 total operations (HTTP methods)
// This test ensures we don't accidentally remove endpoints
assert!(
path_count >= 57,
"Expected at least 57 unique API paths, found {}",
path_count >= 59,
"Expected at least 59 unique API paths, found {}",
path_count
);
assert!(
operation_count >= 81,
"Expected at least 81 API operations, found {}",
operation_count >= 83,
"Expected at least 83 API operations, found {}",
operation_count
);
println!("Total API paths: {}", path_count);
println!("Total API operations: {}", operation_count);
}
#[test]
fn test_auth_endpoints_registered() {
let doc = ApiDoc::openapi();
let expected_auth_paths = vec![
"/auth/settings",
"/auth/login",
"/auth/ldap/login",
"/auth/register",
"/auth/refresh",
"/auth/me",
"/auth/change-password",
];
for path in &expected_auth_paths {
assert!(
doc.paths.paths.contains_key(*path),
"Expected auth endpoint {} to be registered in OpenAPI spec, but it was missing. \
Registered paths: {:?}",
path,
doc.paths.paths.keys().collect::<Vec<_>>()
);
}
}
#[test]
fn test_ldap_login_request_schema_registered() {
let doc = ApiDoc::openapi();
let components = doc.components.as_ref().expect("components should exist");
assert!(
components.schemas.contains_key("LdapLoginRequest"),
"Expected LdapLoginRequest schema to be registered in OpenAPI components. \
Registered schemas: {:?}",
components.schemas.keys().collect::<Vec<_>>()
);
}
}

View File

@@ -10,19 +10,21 @@ use axum::{
use std::sync::Arc;
use validator::Validate;
use attune_common::rbac::{Action, AuthorizationContext, Resource};
use attune_common::repositories::{
action::{ActionRepository, CreateActionInput, UpdateActionInput},
action::{ActionRepository, ActionSearchFilters, CreateActionInput, UpdateActionInput},
pack::PackRepository,
queue_stats::QueueStatsRepository,
Create, Delete, FindByRef, List, Update,
Create, Delete, FindByRef, Patch, Update,
};
use crate::{
auth::middleware::RequireAuth,
authz::{AuthorizationCheck, AuthorizationService},
dto::{
action::{
ActionResponse, ActionSummary, CreateActionRequest, QueueStatsResponse,
UpdateActionRequest,
RuntimeVersionConstraintPatch, UpdateActionRequest,
},
common::{PaginatedResponse, PaginationParams},
ApiResponse, SuccessResponse,
@@ -47,21 +49,20 @@ pub async fn list_actions(
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
// Get all actions (we'll implement pagination in repository later)
let actions = ActionRepository::list(&state.db).await?;
// All filtering and pagination happen in a single SQL query.
let filters = ActionSearchFilters {
pack: None,
query: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = actions.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(actions.len());
let result = ActionRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_actions: Vec<ActionSummary> = actions[start..end]
.iter()
.map(|a| ActionSummary::from(a.clone()))
.collect();
let paginated_actions: Vec<ActionSummary> =
result.rows.into_iter().map(ActionSummary::from).collect();
let response = PaginatedResponse::new(paginated_actions, &pagination, total);
let response = PaginatedResponse::new(paginated_actions, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -92,21 +93,20 @@ pub async fn list_actions_by_pack(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
// Get actions for this pack
let actions = ActionRepository::find_by_pack(&state.db, pack.id).await?;
// All filtering and pagination happen in a single SQL query.
let filters = ActionSearchFilters {
pack: Some(pack.id),
query: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = actions.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(actions.len());
let result = ActionRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_actions: Vec<ActionSummary> = actions[start..end]
.iter()
.map(|a| ActionSummary::from(a.clone()))
.collect();
let paginated_actions: Vec<ActionSummary> =
result.rows.into_iter().map(ActionSummary::from).collect();
let response = PaginatedResponse::new(paginated_actions, &pagination, total);
let response = PaginatedResponse::new(paginated_actions, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -155,14 +155,17 @@ pub async fn get_action(
)]
pub async fn create_action(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Json(request): Json<CreateActionRequest>,
) -> ApiResult<impl IntoResponse> {
// Validate request
request.validate()?;
// Check if action with same ref already exists
if let Some(_) = ActionRepository::find_by_ref(&state.db, &request.r#ref).await? {
if ActionRepository::find_by_ref(&state.db, &request.r#ref)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Action with ref '{}' already exists",
request.r#ref
@@ -174,6 +177,26 @@ pub async fn create_action(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", request.pack_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.pack_ref = Some(pack.r#ref.clone());
ctx.target_ref = Some(request.r#ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Actions,
action: Action::Create,
context: ctx,
},
)
.await?;
}
// If runtime is specified, we could verify it exists (future enhancement)
// For now, the database foreign key constraint will handle invalid runtime IDs
@@ -186,6 +209,7 @@ pub async fn create_action(
description: request.description,
entrypoint: request.entrypoint,
runtime: request.runtime,
runtime_version_constraint: request.runtime_version_constraint,
param_schema: request.param_schema,
out_schema: request.out_schema,
is_adhoc: true, // Actions created via API are ad-hoc (not from pack installation)
@@ -217,7 +241,7 @@ pub async fn create_action(
)]
pub async fn update_action(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Path(action_ref): Path<String>,
Json(request): Json<UpdateActionRequest>,
) -> ApiResult<impl IntoResponse> {
@@ -229,14 +253,42 @@ pub async fn update_action(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Action '{}' not found", action_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(existing_action.id);
ctx.target_ref = Some(existing_action.r#ref.clone());
ctx.pack_ref = Some(existing_action.pack_ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Actions,
action: Action::Update,
context: ctx,
},
)
.await?;
}
// Create update input
let update_input = UpdateActionInput {
label: request.label,
description: request.description,
description: request.description.map(Patch::Set),
entrypoint: request.entrypoint,
runtime: request.runtime,
runtime_version_constraint: request.runtime_version_constraint.map(|patch| match patch {
RuntimeVersionConstraintPatch::Set(value) => Patch::Set(value),
RuntimeVersionConstraintPatch::Clear => Patch::Clear,
}),
param_schema: request.param_schema,
out_schema: request.out_schema,
parameter_delivery: None,
parameter_format: None,
output_format: None,
};
let action = ActionRepository::update(&state.db, existing_action.id, update_input).await?;
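For readers unfamiliar with the tri-state update pattern used here: `Option` layered over `Patch` distinguishes "leave unchanged" from "set" and "clear". The real `Patch` lives in attune_common::repositories and is not shown in this diff; a minimal stand-in illustrating the semantics:

#[derive(Debug, PartialEq)]
enum Patch<T> {
    Set(T),
    Clear,
}

fn main() {
    // None                -> column left untouched by the UPDATE
    // Some(Patch::Set(v)) -> column written with v
    // Some(Patch::Clear)  -> column set to NULL
    let untouched: Option<Patch<String>> = None;
    let written = Some(Patch::Set("^1.0".to_string()));
    let nulled: Option<Patch<String>> = Some(Patch::Clear);

    assert!(untouched.is_none());
    assert_eq!(written, Some(Patch::Set("^1.0".to_string())));
    assert_eq!(nulled, Some(Patch::Clear));
}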
@@ -263,7 +315,7 @@ pub async fn update_action(
)]
pub async fn delete_action(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Path(action_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
// Check if action exists
@@ -271,6 +323,27 @@ pub async fn delete_action(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Action '{}' not found", action_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(action.id);
ctx.target_ref = Some(action.r#ref.clone());
ctx.pack_ref = Some(action.pack_ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Actions,
action: Action::Delete,
context: ctx,
},
)
.await?;
}
// Delete the action
let deleted = ActionRepository::delete(&state.db, action.id).await?;

View File

@@ -0,0 +1,482 @@
//! Agent binary download endpoints
//!
//! Provides endpoints for downloading the attune-agent binary for injection
//! into arbitrary containers. This supports deployments where shared Docker
//! volumes are impractical (Kubernetes, ECS, remote Docker hosts).
use axum::{
body::Body,
extract::{Query, State},
http::{header, HeaderMap, StatusCode},
response::IntoResponse,
routing::get,
Json, Router,
};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use subtle::ConstantTimeEq;
use tokio::fs;
use tokio_util::io::ReaderStream;
use utoipa::{IntoParams, ToSchema};
use crate::state::AppState;
/// Query parameters for the binary download endpoint
#[derive(Debug, Deserialize, IntoParams)]
pub struct BinaryDownloadParams {
/// Target architecture (x86_64, aarch64). Defaults to x86_64.
#[param(example = "x86_64")]
pub arch: Option<String>,
/// Optional bootstrap token for authentication
pub token: Option<String>,
}
/// Agent binary metadata
#[derive(Debug, Serialize, ToSchema)]
pub struct AgentBinaryInfo {
/// Available architectures
pub architectures: Vec<AgentArchInfo>,
/// Agent version (from build)
pub version: String,
}
/// Per-architecture binary info
#[derive(Debug, Serialize, ToSchema)]
pub struct AgentArchInfo {
/// Architecture name
pub arch: String,
/// Binary size in bytes
pub size_bytes: u64,
/// Whether this binary is available
pub available: bool,
}
/// Validate that the architecture name is safe (no path traversal) and normalize it.
fn validate_arch(arch: &str) -> Result<&str, (StatusCode, Json<serde_json::Value>)> {
match arch {
"x86_64" | "aarch64" => Ok(arch),
// Accept arm64 as an alias for aarch64
"arm64" => Ok("aarch64"),
_ => Err((
StatusCode::BAD_REQUEST,
Json(serde_json::json!({
"error": "Invalid architecture",
"message": format!("Unsupported architecture '{}'. Supported: x86_64, aarch64", arch),
})),
)),
}
}
/// Validate bootstrap token if configured.
///
/// If the agent config has a `bootstrap_token` set, the request must provide it
/// via the `X-Agent-Token` header or the `token` query parameter. If no token
/// is configured, access is unrestricted.
fn validate_token(
config: &attune_common::config::Config,
headers: &HeaderMap,
query_token: &Option<String>,
) -> Result<(), (StatusCode, Json<serde_json::Value>)> {
let expected_token = config
.agent
.as_ref()
.and_then(|ac| ac.bootstrap_token.as_ref());
let expected_token = match expected_token {
Some(t) => t,
None => {
use std::sync::Once;
static WARN_ONCE: Once = Once::new();
WARN_ONCE.call_once(|| {
tracing::warn!(
"Agent binary download endpoint has no bootstrap_token configured. \
Anyone with network access to the API can download the agent binary. \
Set agent.bootstrap_token in config to restrict access."
);
});
return Ok(());
}
};
// Check X-Agent-Token header first, then query param
let provided_token = headers
.get("x-agent-token")
.and_then(|v| v.to_str().ok())
.map(|s| s.to_string())
.or_else(|| query_token.clone());
match provided_token {
Some(ref t) if bool::from(t.as_bytes().ct_eq(expected_token.as_bytes())) => Ok(()),
Some(_) => Err((
StatusCode::UNAUTHORIZED,
Json(serde_json::json!({
"error": "Invalid token",
"message": "The provided bootstrap token is invalid",
})),
)),
None => Err((
StatusCode::UNAUTHORIZED,
Json(serde_json::json!({
"error": "Token required",
"message": "A bootstrap token is required. Provide via X-Agent-Token header or token query parameter.",
})),
)),
}
}
/// Download the agent binary
///
/// Returns the statically-linked attune-agent binary for the requested architecture.
/// The binary can be injected into any container to turn it into an Attune worker.
#[utoipa::path(
get,
path = "/api/v1/agent/binary",
params(BinaryDownloadParams),
responses(
(status = 200, description = "Agent binary", content_type = "application/octet-stream"),
(status = 400, description = "Invalid architecture"),
(status = 401, description = "Invalid or missing bootstrap token"),
(status = 404, description = "Agent binary not found"),
(status = 503, description = "Agent binary distribution not configured"),
),
tag = "agent"
)]
pub async fn download_agent_binary(
State(state): State<Arc<AppState>>,
headers: HeaderMap,
Query(params): Query<BinaryDownloadParams>,
) -> Result<impl IntoResponse, (StatusCode, Json<serde_json::Value>)> {
// Validate bootstrap token if configured
validate_token(&state.config, &headers, &params.token)?;
let agent_config = state.config.agent.as_ref().ok_or_else(|| {
(
StatusCode::SERVICE_UNAVAILABLE,
Json(serde_json::json!({
"error": "Not configured",
"message": "Agent binary distribution is not configured. Set agent.binary_dir in config.",
})),
)
})?;
let arch = params.arch.as_deref().unwrap_or("x86_64");
let arch = validate_arch(arch)?;
let binary_dir = std::path::Path::new(&agent_config.binary_dir);
// Try arch-specific binary first, then fall back to generic name.
// IMPORTANT: The generic `attune-agent` binary is only safe to serve for
// x86_64 requests, because the current build pipeline produces an
// x86_64-unknown-linux-musl binary. Serving it for aarch64/arm64 would
// give the caller an incompatible executable (exec format error).
let arch_specific = binary_dir.join(format!("attune-agent-{}", arch));
let generic = binary_dir.join("attune-agent");
let binary_path = if arch_specific.exists() {
arch_specific
} else if arch == "x86_64" && generic.exists() {
tracing::debug!(
"Arch-specific binary not found at {:?}, falling back to generic {:?} (safe for x86_64)",
arch_specific,
generic
);
generic
} else {
tracing::warn!(
"Agent binary not found. Checked: {:?} and {:?}",
arch_specific,
generic
);
return Err((
StatusCode::NOT_FOUND,
Json(serde_json::json!({
"error": "Not found",
"message": format!(
"Agent binary not found for architecture '{}'. Ensure the agent binary is built and placed in '{}'.",
arch,
agent_config.binary_dir
),
})),
));
};
// Get file metadata for Content-Length
let metadata = fs::metadata(&binary_path).await.map_err(|e| {
tracing::error!(
"Failed to read agent binary metadata at {:?}: {}",
binary_path,
e
);
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({
"error": "Internal error",
"message": "Failed to read agent binary",
})),
)
})?;
// Open file for streaming
let file = fs::File::open(&binary_path).await.map_err(|e| {
tracing::error!("Failed to open agent binary at {:?}: {}", binary_path, e);
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({
"error": "Internal error",
"message": "Failed to open agent binary",
})),
)
})?;
let stream = ReaderStream::new(file);
let body = Body::from_stream(stream);
let headers_response = [
(header::CONTENT_TYPE, "application/octet-stream".to_string()),
(
header::CONTENT_DISPOSITION,
"attachment; filename=\"attune-agent\"".to_string(),
),
(header::CONTENT_LENGTH, metadata.len().to_string()),
(header::CACHE_CONTROL, "public, max-age=3600".to_string()),
];
tracing::info!(
arch = arch,
size_bytes = metadata.len(),
path = ?binary_path,
"Serving agent binary download"
);
Ok((headers_response, body))
}
/// Get agent binary metadata
///
/// Returns information about available agent binaries, including
/// supported architectures and binary sizes.
#[utoipa::path(
get,
path = "/api/v1/agent/info",
responses(
(status = 200, description = "Agent binary info", body = AgentBinaryInfo),
(status = 503, description = "Agent binary distribution not configured"),
),
tag = "agent"
)]
pub async fn agent_info(
State(state): State<Arc<AppState>>,
) -> Result<impl IntoResponse, (StatusCode, Json<serde_json::Value>)> {
let agent_config = state.config.agent.as_ref().ok_or_else(|| {
(
StatusCode::SERVICE_UNAVAILABLE,
Json(serde_json::json!({
"error": "Not configured",
"message": "Agent binary distribution is not configured.",
})),
)
})?;
let binary_dir = std::path::Path::new(&agent_config.binary_dir);
let architectures = ["x86_64", "aarch64"];
let mut arch_infos = Vec::new();
for arch in &architectures {
let arch_specific = binary_dir.join(format!("attune-agent-{}", arch));
let generic = binary_dir.join("attune-agent");
// Only fall back to the generic binary for x86_64, since the build
// pipeline currently produces x86_64-only generic binaries.
let (available, size_bytes) = if arch_specific.exists() {
match fs::metadata(&arch_specific).await {
Ok(m) => (true, m.len()),
Err(_) => (false, 0),
}
} else if *arch == "x86_64" && generic.exists() {
match fs::metadata(&generic).await {
Ok(m) => (true, m.len()),
Err(_) => (false, 0),
}
} else {
(false, 0)
};
arch_infos.push(AgentArchInfo {
arch: arch.to_string(),
size_bytes,
available,
});
}
Ok(Json(AgentBinaryInfo {
architectures: arch_infos,
version: env!("CARGO_PKG_VERSION").to_string(),
}))
}
/// Create agent routes
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
.route("/agent/binary", get(download_agent_binary))
.route("/agent/info", get(agent_info))
}
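For orientation, a minimal client sketch of the download flow these routes expose, assuming the reqwest and tokio crates and a local API on port 8080 (both assumptions, not part of this diff); the bootstrap token goes in the X-Agent-Token header, and "arm64" would be normalized to aarch64 server-side:

use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let resp = reqwest::Client::new()
        .get("http://localhost:8080/api/v1/agent/binary?arch=aarch64")
        .header("X-Agent-Token", "s3cret-bootstrap")
        .send()
        .await?
        .error_for_status()?; // 400/401/404/503 map to the responses above

    // The endpoint streams the binary; buffer it and write it to disk.
    let bytes = resp.bytes().await?;
    std::fs::File::create("attune-agent")?.write_all(&bytes)?;
    Ok(())
}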
#[cfg(test)]
mod tests {
use super::*;
use attune_common::config::AgentConfig;
use axum::http::{HeaderMap, HeaderValue};
// ── validate_arch tests ─────────────────────────────────────────
#[test]
fn test_validate_arch_valid_x86_64() {
let result = validate_arch("x86_64");
assert!(result.is_ok());
assert_eq!(result.unwrap(), "x86_64");
}
#[test]
fn test_validate_arch_valid_aarch64() {
let result = validate_arch("aarch64");
assert!(result.is_ok());
assert_eq!(result.unwrap(), "aarch64");
}
#[test]
fn test_validate_arch_arm64_alias() {
// "arm64" is an alias for "aarch64"
let result = validate_arch("arm64");
assert!(result.is_ok());
assert_eq!(result.unwrap(), "aarch64");
}
#[test]
fn test_validate_arch_invalid() {
let result = validate_arch("mips");
assert!(result.is_err());
let (status, body) = result.unwrap_err();
assert_eq!(status, StatusCode::BAD_REQUEST);
assert_eq!(body.0["error"], "Invalid architecture");
}
// ── validate_token tests ────────────────────────────────────────
/// Helper: build a minimal Config with the given agent config.
/// Only the `agent` field is relevant for `validate_token`.
fn test_config(agent: Option<AgentConfig>) -> attune_common::config::Config {
let manifest_dir = std::env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".to_string());
let config_path = format!("{}/../../config.test.yaml", manifest_dir);
let mut config = attune_common::config::Config::load_from_file(&config_path)
.expect("Failed to load test config");
config.agent = agent;
config
}
#[test]
fn test_validate_token_no_config() {
// When no agent config is set at all, no token is required.
let config = test_config(None);
let headers = HeaderMap::new();
let query_token = None;
let result = validate_token(&config, &headers, &query_token);
assert!(result.is_ok());
}
#[test]
fn test_validate_token_no_bootstrap_token_configured() {
// Agent config exists but bootstrap_token is None → no token required.
let config = test_config(Some(AgentConfig {
binary_dir: "/tmp/test".to_string(),
bootstrap_token: None,
}));
let headers = HeaderMap::new();
let query_token = None;
let result = validate_token(&config, &headers, &query_token);
assert!(result.is_ok());
}
#[test]
fn test_validate_token_valid_from_header() {
let config = test_config(Some(AgentConfig {
binary_dir: "/tmp/test".to_string(),
bootstrap_token: Some("s3cret-bootstrap".to_string()),
}));
let mut headers = HeaderMap::new();
headers.insert(
"x-agent-token",
HeaderValue::from_static("s3cret-bootstrap"),
);
let query_token = None;
let result = validate_token(&config, &headers, &query_token);
assert!(result.is_ok());
}
#[test]
fn test_validate_token_valid_from_query() {
let config = test_config(Some(AgentConfig {
binary_dir: "/tmp/test".to_string(),
bootstrap_token: Some("s3cret-bootstrap".to_string()),
}));
let headers = HeaderMap::new();
let query_token = Some("s3cret-bootstrap".to_string());
let result = validate_token(&config, &headers, &query_token);
assert!(result.is_ok());
}
#[test]
fn test_validate_token_invalid() {
let config = test_config(Some(AgentConfig {
binary_dir: "/tmp/test".to_string(),
bootstrap_token: Some("correct-token".to_string()),
}));
let mut headers = HeaderMap::new();
headers.insert("x-agent-token", HeaderValue::from_static("wrong-token"));
let query_token = None;
let result = validate_token(&config, &headers, &query_token);
assert!(result.is_err());
let (status, body) = result.unwrap_err();
assert_eq!(status, StatusCode::UNAUTHORIZED);
assert_eq!(body.0["error"], "Invalid token");
}
#[test]
fn test_validate_token_missing_when_required() {
// bootstrap_token is configured but caller provides nothing.
let config = test_config(Some(AgentConfig {
binary_dir: "/tmp/test".to_string(),
bootstrap_token: Some("required-token".to_string()),
}));
let headers = HeaderMap::new();
let query_token = None;
let result = validate_token(&config, &headers, &query_token);
assert!(result.is_err());
let (status, body) = result.unwrap_err();
assert_eq!(status, StatusCode::UNAUTHORIZED);
assert_eq!(body.0["error"], "Token required");
}
#[test]
fn test_validate_token_header_takes_precedence_over_query() {
// When both header and query provide a token, the header value is
// checked first (it appears first in the or_else chain). Provide a
// valid token in the header and an invalid one in the query — should
// succeed because the header matches.
let config = test_config(Some(AgentConfig {
binary_dir: "/tmp/test".to_string(),
bootstrap_token: Some("the-real-token".to_string()),
}));
let mut headers = HeaderMap::new();
headers.insert("x-agent-token", HeaderValue::from_static("the-real-token"));
let query_token = Some("wrong-token".to_string());
let result = validate_token(&config, &headers, &query_token);
assert!(result.is_ok());
}
}

View File

@@ -0,0 +1,304 @@
//! Analytics API routes
//!
//! Provides read-only access to TimescaleDB continuous aggregates for dashboard
//! widgets and time-series analytics. All data is pre-computed by TimescaleDB
//! continuous aggregate policies — these endpoints simply query the materialized views.
use axum::{
extract::{Query, State},
http::StatusCode,
response::IntoResponse,
routing::get,
Json, Router,
};
use std::sync::Arc;
use attune_common::repositories::analytics::AnalyticsRepository;
use crate::{
auth::middleware::RequireAuth,
dto::{
analytics::{
AnalyticsQueryParams, DashboardAnalyticsResponse, EnforcementVolumeResponse,
EventVolumeResponse, ExecutionStatusTimeSeriesResponse, ExecutionThroughputResponse,
FailureRateResponse, TimeSeriesPoint, WorkerStatusTimeSeriesResponse,
},
common::ApiResponse,
},
middleware::ApiResult,
state::AppState,
};
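The AnalyticsRepository itself is not part of this diff. As a rough sketch of what one of these reads looks like, assuming sqlx with the chrono feature and a continuous aggregate view named execution_throughput_hourly with bucket/count columns (all assumptions):

use chrono::{DateTime, Utc};

// Stand-in for the TimeRange produced by AnalyticsQueryParams::to_time_range().
pub struct TimeRange {
    pub since: DateTime<Utc>,
    pub until: DateTime<Utc>,
}

// A continuous aggregate is queried like any ordinary view; TimescaleDB's
// refresh policies keep it materialized in the background.
pub async fn execution_throughput_hourly(
    db: &sqlx::PgPool,
    range: &TimeRange,
) -> sqlx::Result<Vec<(DateTime<Utc>, i64)>> {
    sqlx::query_as(
        "SELECT bucket, count FROM execution_throughput_hourly \
         WHERE bucket >= $1 AND bucket < $2 ORDER BY bucket",
    )
    .bind(range.since)
    .bind(range.until)
    .fetch_all(db)
    .await
}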
/// Get a combined dashboard analytics payload.
///
/// Returns all key metrics in a single response to avoid multiple round-trips
/// from the dashboard page. Includes execution throughput, status transitions,
/// event volume, enforcement volume, worker status, and failure rate.
#[utoipa::path(
get,
path = "/api/v1/analytics/dashboard",
tag = "analytics",
params(AnalyticsQueryParams),
responses(
(status = 200, description = "Dashboard analytics", body = inline(ApiResponse<DashboardAnalyticsResponse>)),
),
security(("bearer_auth" = []))
)]
pub async fn get_dashboard_analytics(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
let range = query.to_time_range();
// Run all aggregate queries concurrently
let (throughput, status, events, enforcements, workers, failure_rate) = tokio::try_join!(
AnalyticsRepository::execution_throughput_hourly(&state.db, &range),
AnalyticsRepository::execution_status_hourly(&state.db, &range),
AnalyticsRepository::event_volume_hourly(&state.db, &range),
AnalyticsRepository::enforcement_volume_hourly(&state.db, &range),
AnalyticsRepository::worker_status_hourly(&state.db, &range),
AnalyticsRepository::execution_failure_rate(&state.db, &range),
)?;
let response = DashboardAnalyticsResponse {
since: range.since,
until: range.until,
execution_throughput: throughput.into_iter().map(Into::into).collect(),
execution_status: status.into_iter().map(Into::into).collect(),
event_volume: events.into_iter().map(Into::into).collect(),
enforcement_volume: enforcements.into_iter().map(Into::into).collect(),
worker_status: workers.into_iter().map(Into::into).collect(),
failure_rate: FailureRateResponse::from_summary(failure_rate, &range),
};
Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}
/// Get execution status transitions over time.
///
/// Returns hourly buckets of execution status transitions (e.g., how many
/// executions moved to "completed", "failed", "running" per hour).
#[utoipa::path(
get,
path = "/api/v1/analytics/executions/status",
tag = "analytics",
params(AnalyticsQueryParams),
responses(
(status = 200, description = "Execution status transitions", body = inline(ApiResponse<ExecutionStatusTimeSeriesResponse>)),
),
security(("bearer_auth" = []))
)]
pub async fn get_execution_status_analytics(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
let range = query.to_time_range();
let rows = AnalyticsRepository::execution_status_hourly(&state.db, &range).await?;
let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();
let response = ExecutionStatusTimeSeriesResponse {
since: range.since,
until: range.until,
data,
};
Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}
/// Get execution throughput over time.
///
/// Returns hourly buckets of execution creation counts.
#[utoipa::path(
get,
path = "/api/v1/analytics/executions/throughput",
tag = "analytics",
params(AnalyticsQueryParams),
responses(
(status = 200, description = "Execution throughput", body = inline(ApiResponse<ExecutionThroughputResponse>)),
),
security(("bearer_auth" = []))
)]
pub async fn get_execution_throughput_analytics(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
let range = query.to_time_range();
let rows = AnalyticsRepository::execution_throughput_hourly(&state.db, &range).await?;
let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();
let response = ExecutionThroughputResponse {
since: range.since,
until: range.until,
data,
};
Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}
/// Get the execution failure rate summary.
///
/// Returns aggregate failure/timeout/completion counts and the failure rate
/// percentage over the requested time range.
#[utoipa::path(
get,
path = "/api/v1/analytics/executions/failure-rate",
tag = "analytics",
params(AnalyticsQueryParams),
responses(
(status = 200, description = "Failure rate summary", body = inline(ApiResponse<FailureRateResponse>)),
),
security(("bearer_auth" = []))
)]
pub async fn get_failure_rate_analytics(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
let range = query.to_time_range();
let summary = AnalyticsRepository::execution_failure_rate(&state.db, &range).await?;
let response = FailureRateResponse::from_summary(summary, &range);
Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}
/// Get event volume over time.
///
/// Returns hourly buckets of event creation counts, aggregated across all triggers.
#[utoipa::path(
get,
path = "/api/v1/analytics/events/volume",
tag = "analytics",
params(AnalyticsQueryParams),
responses(
(status = 200, description = "Event volume", body = inline(ApiResponse<EventVolumeResponse>)),
),
security(("bearer_auth" = []))
)]
pub async fn get_event_volume_analytics(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
let range = query.to_time_range();
let rows = AnalyticsRepository::event_volume_hourly(&state.db, &range).await?;
let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();
let response = EventVolumeResponse {
since: range.since,
until: range.until,
data,
};
Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}
/// Get worker status transitions over time.
///
/// Returns hourly buckets of worker status changes (online/offline/draining).
#[utoipa::path(
get,
path = "/api/v1/analytics/workers/status",
tag = "analytics",
params(AnalyticsQueryParams),
responses(
(status = 200, description = "Worker status transitions", body = inline(ApiResponse<WorkerStatusTimeSeriesResponse>)),
),
security(("bearer_auth" = []))
)]
pub async fn get_worker_status_analytics(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
let range = query.to_time_range();
let rows = AnalyticsRepository::worker_status_hourly(&state.db, &range).await?;
let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();
let response = WorkerStatusTimeSeriesResponse {
since: range.since,
until: range.until,
data,
};
Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}
/// Get enforcement volume over time.
///
/// Returns hourly buckets of enforcement creation counts, aggregated across all rules.
#[utoipa::path(
get,
path = "/api/v1/analytics/enforcements/volume",
tag = "analytics",
params(AnalyticsQueryParams),
responses(
(status = 200, description = "Enforcement volume", body = inline(ApiResponse<EnforcementVolumeResponse>)),
),
security(("bearer_auth" = []))
)]
pub async fn get_enforcement_volume_analytics(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
let range = query.to_time_range();
let rows = AnalyticsRepository::enforcement_volume_hourly(&state.db, &range).await?;
let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();
let response = EnforcementVolumeResponse {
since: range.since,
until: range.until,
data,
};
Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}
// ---------------------------------------------------------------------------
// Router
// ---------------------------------------------------------------------------
/// Build the analytics routes.
///
/// Mounts:
/// - `GET /analytics/dashboard` — combined dashboard payload
/// - `GET /analytics/executions/status` — execution status transitions
/// - `GET /analytics/executions/throughput` — execution creation throughput
/// - `GET /analytics/executions/failure-rate` — failure rate summary
/// - `GET /analytics/events/volume` — event creation volume
/// - `GET /analytics/workers/status` — worker status transitions
/// - `GET /analytics/enforcements/volume` — enforcement creation volume
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
.route("/analytics/dashboard", get(get_dashboard_analytics))
.route(
"/analytics/executions/status",
get(get_execution_status_analytics),
)
.route(
"/analytics/executions/throughput",
get(get_execution_throughput_analytics),
)
.route(
"/analytics/executions/failure-rate",
get(get_failure_rate_analytics),
)
.route("/analytics/events/volume", get(get_event_volume_analytics))
.route(
"/analytics/workers/status",
get(get_worker_status_analytics),
)
.route(
"/analytics/enforcements/volume",
get(get_enforcement_volume_analytics),
)
}

File diff suppressed because it is too large

View File

@@ -1,7 +1,9 @@
//! Authentication routes
use axum::{
extract::State,
extract::{Query, State},
http::HeaderMap,
response::{IntoResponse, Redirect, Response},
routing::{get, post},
Json, Router,
};
@@ -21,11 +23,16 @@ use crate::{
TokenType,
},
middleware::RequireAuth,
oidc::{
apply_cookies_to_headers, build_login_redirect, build_logout_redirect,
cookie_authenticated_user, get_cookie_value, oidc_callback_redirect_response,
OidcCallbackQuery, REFRESH_COOKIE_NAME,
},
verify_password,
},
dto::{
ApiResponse, ChangePasswordRequest, CurrentUserResponse, LoginRequest, RefreshTokenRequest,
RegisterRequest, SuccessResponse, TokenResponse,
ApiResponse, AuthSettingsResponse, ChangePasswordRequest, CurrentUserResponse,
LoginRequest, RefreshTokenRequest, RegisterRequest, SuccessResponse, TokenResponse,
},
middleware::error::ApiError,
state::SharedState,
@@ -63,7 +70,12 @@ pub struct SensorTokenResponse {
/// Create authentication routes
pub fn routes() -> Router<SharedState> {
Router::new()
.route("/settings", get(auth_settings))
.route("/login", post(login))
.route("/oidc/login", get(oidc_login))
.route("/callback", get(oidc_callback))
.route("/ldap/login", post(ldap_login))
.route("/logout", get(logout))
.route("/register", post(register))
.route("/refresh", post(refresh_token))
.route("/me", get(get_current_user))
@@ -72,6 +84,63 @@ pub fn routes() -> Router<SharedState> {
.route("/internal/sensor-token", post(create_sensor_token_internal))
}
/// Authentication settings endpoint
///
/// GET /auth/settings
#[utoipa::path(
get,
path = "/auth/settings",
tag = "auth",
responses(
(status = 200, description = "Authentication settings", body = inline(ApiResponse<AuthSettingsResponse>))
)
)]
pub async fn auth_settings(
State(state): State<SharedState>,
) -> Result<Json<ApiResponse<AuthSettingsResponse>>, ApiError> {
let oidc = state
.config
.security
.oidc
.as_ref()
.filter(|oidc| oidc.enabled);
let ldap = state
.config
.security
.ldap
.as_ref()
.filter(|ldap| ldap.enabled);
let response = AuthSettingsResponse {
authentication_enabled: state.config.security.enable_auth,
local_password_enabled: state.config.security.enable_auth,
local_password_visible_by_default: state.config.security.enable_auth
&& state.config.security.login_page.show_local_login,
oidc_enabled: oidc.is_some(),
oidc_visible_by_default: oidc.is_some() && state.config.security.login_page.show_oidc_login,
oidc_provider_name: oidc.map(|oidc| oidc.provider_name.clone()),
oidc_provider_label: oidc.map(|oidc| {
oidc.provider_label
.clone()
.unwrap_or_else(|| oidc.provider_name.clone())
}),
oidc_provider_icon_url: oidc.and_then(|oidc| oidc.provider_icon_url.clone()),
ldap_enabled: ldap.is_some(),
ldap_visible_by_default: ldap.is_some() && state.config.security.login_page.show_ldap_login,
ldap_provider_name: ldap.map(|ldap| ldap.provider_name.clone()),
ldap_provider_label: ldap.map(|ldap| {
ldap.provider_label
.clone()
.unwrap_or_else(|| ldap.provider_name.clone())
}),
ldap_provider_icon_url: ldap.and_then(|ldap| ldap.provider_icon_url.clone()),
self_registration_enabled: state.config.security.allow_self_registration,
};
Ok(Json(ApiResponse::new(response)))
}
/// Login endpoint
///
/// POST /auth/login
@@ -100,6 +169,12 @@ pub async fn login(
.await?
.ok_or_else(|| ApiError::Unauthorized("Invalid login or password".to_string()))?;
if identity.frozen {
return Err(ApiError::Forbidden(
"Identity is frozen and cannot authenticate".to_string(),
));
}
// Check if identity has a password set
let password_hash = identity
.password_hash
@@ -152,13 +227,22 @@ pub async fn register(
State(state): State<SharedState>,
Json(payload): Json<RegisterRequest>,
) -> Result<Json<ApiResponse<TokenResponse>>, ApiError> {
if !state.config.security.allow_self_registration {
return Err(ApiError::Forbidden(
"Self-service registration is disabled; identities must be provisioned by an administrator or identity provider".to_string(),
));
}
// Validate request
payload
.validate()
.map_err(|e| ApiError::ValidationError(format!("Invalid registration request: {}", e)))?;
// Check if login already exists
if let Some(_) = IdentityRepository::find_by_login(&state.db, &payload.login).await? {
if IdentityRepository::find_by_login(&state.db, &payload.login)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Identity with login '{}' already exists",
payload.login
@@ -168,7 +252,7 @@ pub async fn register(
// Hash password
let password_hash = hash_password(&payload.password)?;
// Create identity with password hash
// Registration creates an identity only; permission assignments are managed separately.
let input = CreateIdentityInput {
login: payload.login.clone(),
display_name: payload.display_name,
@@ -212,15 +296,22 @@ pub async fn register(
)]
pub async fn refresh_token(
State(state): State<SharedState>,
Json(payload): Json<RefreshTokenRequest>,
) -> Result<Json<ApiResponse<TokenResponse>>, ApiError> {
// Validate request
payload
.validate()
.map_err(|e| ApiError::ValidationError(format!("Invalid refresh token request: {}", e)))?;
headers: HeaderMap,
payload: Option<Json<RefreshTokenRequest>>,
) -> Result<Response, ApiError> {
let browser_cookie_refresh = payload.is_none();
let refresh_token = if let Some(Json(payload)) = payload {
payload.validate().map_err(|e| {
ApiError::ValidationError(format!("Invalid refresh token request: {}", e))
})?;
payload.refresh_token
} else {
get_cookie_value(&headers, REFRESH_COOKIE_NAME)
.ok_or_else(|| ApiError::Unauthorized("Missing refresh token".to_string()))?
};
// Validate refresh token
let claims = validate_token(&payload.refresh_token, &state.jwt_config)
let claims = validate_token(&refresh_token, &state.jwt_config)
.map_err(|_| ApiError::Unauthorized("Invalid or expired refresh token".to_string()))?;
// Ensure it's a refresh token
@@ -239,6 +330,12 @@ pub async fn refresh_token(
.await?
.ok_or_else(|| ApiError::Unauthorized("Identity not found".to_string()))?;
if identity.frozen {
return Err(ApiError::Forbidden(
"Identity is frozen and cannot authenticate".to_string(),
));
}
// Generate new tokens
let access_token = generate_access_token(identity.id, &identity.login, &state.jwt_config)?;
let refresh_token = generate_refresh_token(identity.id, &identity.login, &state.jwt_config)?;
@@ -248,8 +345,18 @@ pub async fn refresh_token(
refresh_token,
state.jwt_config.access_token_expiration,
);
let response_body = Json(ApiResponse::new(response.clone()));
Ok(Json(ApiResponse::new(response)))
if browser_cookie_refresh {
let mut http_response = response_body.into_response();
apply_cookies_to_headers(
http_response.headers_mut(),
&crate::auth::oidc::build_auth_cookies(&state, &response, ""),
)?;
return Ok(http_response);
}
Ok(response_body.into_response())
}
/// Get current user endpoint
@@ -270,15 +377,27 @@ pub async fn refresh_token(
)]
pub async fn get_current_user(
State(state): State<SharedState>,
RequireAuth(user): RequireAuth,
headers: HeaderMap,
user: Result<RequireAuth, crate::auth::middleware::AuthError>,
) -> Result<Json<ApiResponse<CurrentUserResponse>>, ApiError> {
let identity_id = user.identity_id()?;
let authenticated_user = match user {
Ok(RequireAuth(user)) => user,
Err(_) => cookie_authenticated_user(&headers, &state)?
.ok_or_else(|| ApiError::Unauthorized("Unauthorized".to_string()))?,
};
let identity_id = authenticated_user.identity_id()?;
// Fetch identity from database
let identity = IdentityRepository::find_by_id(&state.db, identity_id)
.await?
.ok_or_else(|| ApiError::NotFound("Identity not found".to_string()))?;
if identity.frozen {
return Err(ApiError::Forbidden(
"Identity is frozen and cannot authenticate".to_string(),
));
}
let response = CurrentUserResponse {
id: identity.id,
login: identity.login,
@@ -288,6 +407,106 @@ pub async fn get_current_user(
Ok(Json(ApiResponse::new(response)))
}
/// Request body for LDAP login.
#[derive(Debug, Serialize, Deserialize, Validate, ToSchema)]
pub struct LdapLoginRequest {
/// User login name (uid, sAMAccountName, etc.)
#[validate(length(min = 1, max = 255))]
pub login: String,
/// User password
#[validate(length(min = 1, max = 512))]
pub password: String,
}
#[derive(Debug, Deserialize)]
pub struct OidcLoginParams {
pub redirect_to: Option<String>,
}
/// Begin browser OIDC login by redirecting to the provider.
pub async fn oidc_login(
State(state): State<SharedState>,
Query(params): Query<OidcLoginParams>,
) -> Result<Response, ApiError> {
let login_redirect = build_login_redirect(&state, params.redirect_to.as_deref()).await?;
let mut response = Redirect::temporary(&login_redirect.authorization_url).into_response();
apply_cookies_to_headers(response.headers_mut(), &login_redirect.cookies)?;
Ok(response)
}
/// Handle the OIDC authorization code callback.
pub async fn oidc_callback(
State(state): State<SharedState>,
headers: HeaderMap,
Query(query): Query<OidcCallbackQuery>,
) -> Result<Response, ApiError> {
let redirect_to = get_cookie_value(&headers, crate::auth::oidc::OIDC_REDIRECT_COOKIE_NAME);
let authenticated = crate::auth::oidc::handle_callback(&state, &headers, &query).await?;
oidc_callback_redirect_response(
&state,
&authenticated.token_response,
redirect_to,
&authenticated.id_token,
)
}
/// Authenticate via LDAP directory.
///
/// POST /auth/ldap/login
#[utoipa::path(
post,
path = "/auth/ldap/login",
tag = "auth",
request_body = LdapLoginRequest,
responses(
(status = 200, description = "Successfully authenticated via LDAP", body = inline(ApiResponse<TokenResponse>)),
(status = 401, description = "Invalid LDAP credentials"),
(status = 501, description = "LDAP not configured")
)
)]
pub async fn ldap_login(
State(state): State<SharedState>,
Json(payload): Json<LdapLoginRequest>,
) -> Result<Json<ApiResponse<TokenResponse>>, ApiError> {
payload
.validate()
.map_err(|e| ApiError::ValidationError(format!("Invalid LDAP login request: {e}")))?;
let authenticated =
crate::auth::ldap::authenticate(&state, &payload.login, &payload.password).await?;
Ok(Json(ApiResponse::new(authenticated.token_response)))
}
/// Logout the current browser session and optionally redirect through the provider logout flow.
pub async fn logout(
State(state): State<SharedState>,
headers: HeaderMap,
) -> Result<Response, ApiError> {
let oidc_enabled = state
.config
.security
.oidc
.as_ref()
.is_some_and(|oidc| oidc.enabled);
let response = if oidc_enabled {
let logout_redirect = build_logout_redirect(&state, &headers).await?;
let mut response = Redirect::temporary(&logout_redirect.redirect_url).into_response();
apply_cookies_to_headers(response.headers_mut(), &logout_redirect.cookies)?;
response
} else {
let mut response = Redirect::temporary("/login").into_response();
apply_cookies_to_headers(
response.headers_mut(),
&crate::auth::oidc::clear_auth_cookies(&state),
)?;
response
};
Ok(response)
}
/// Change password endpoint
///
/// POST /auth/change-password
@@ -350,6 +569,7 @@ pub async fn change_password(
display_name: None,
password_hash: Some(new_password_hash),
attributes: None,
frozen: None,
};
IdentityRepository::update(&state.db, identity_id, update_input).await?;

View File

@@ -16,9 +16,12 @@ use validator::Validate;
use attune_common::{
mq::{EventCreatedPayload, MessageEnvelope, MessageType},
repositories::{
event::{CreateEventInput, EnforcementRepository, EventRepository},
event::{
CreateEventInput, EnforcementRepository, EnforcementSearchFilters, EventRepository,
EventSearchFilters,
},
trigger::TriggerRepository,
Create, FindById, FindByRef, List,
Create, FindById, FindByRef,
},
};
@@ -40,7 +43,9 @@ use crate::{
#[derive(Debug, Clone, Serialize, Deserialize, Validate, ToSchema)]
pub struct CreateEventRequest {
/// Trigger reference (e.g., "core.timer", "core.webhook")
/// Also accepts "trigger_type" for compatibility with the sensor interface spec.
#[validate(length(min = 1))]
#[serde(alias = "trigger_type")]
#[schema(example = "core.timer")]
pub trigger_ref: String,
@@ -77,6 +82,17 @@ pub async fn create_event(
State(state): State<Arc<AppState>>,
Json(payload): Json<CreateEventRequest>,
) -> ApiResult<impl IntoResponse> {
// Only sensor and execution tokens may create events directly.
// User sessions must go through the webhook receiver instead.
use crate::auth::jwt::TokenType;
if user.0.claims.token_type == TokenType::Access {
return Err(ApiError::Forbidden(
"Events may only be created by sensor services. To fire an event as a user, \
enable webhooks on the trigger and POST to its webhook URL."
.to_string(),
));
}
// Validate request
payload
.validate()
@@ -123,7 +139,6 @@ pub async fn create_event(
};
// Determine source (sensor) from authenticated user if it's a sensor token
-use crate::auth::jwt::TokenType;
let (source_id, source_ref) = match user.0.claims.token_type {
TokenType::Sensor => {
// Extract sensor reference from login
@@ -165,7 +180,7 @@ pub async fn create_event(
let event = EventRepository::create(&state.db, input).await?;
// Publish EventCreated message to message queue if publisher is available
-if let Some(ref publisher) = state.publisher {
+if let Some(publisher) = state.get_publisher().await {
let message_payload = EventCreatedPayload {
event_id: event.id,
trigger_id: event.trigger,
@@ -218,53 +233,27 @@ pub async fn list_events(
State(state): State<Arc<AppState>>,
Query(query): Query<EventQueryParams>,
) -> ApiResult<impl IntoResponse> {
-// Get events based on filters
-let events = if let Some(trigger_id) = query.trigger {
-    // Filter by trigger ID
-    EventRepository::find_by_trigger(&state.db, trigger_id).await?
-} else if let Some(trigger_ref) = &query.trigger_ref {
-    // Filter by trigger reference
-    EventRepository::find_by_trigger_ref(&state.db, trigger_ref).await?
-} else {
-    // Get all events
-    EventRepository::list(&state.db).await?
+// All filtering and pagination happen in a single SQL query.
+let filters = EventSearchFilters {
+    trigger: query.trigger,
+    trigger_ref: query.trigger_ref.clone(),
+    source: query.source,
+    rule_ref: query.rule_ref.clone(),
+    limit: query.limit(),
+    offset: query.offset(),
};
-// Apply additional filters in memory
-let mut filtered_events = events;
+let result = EventRepository::search(&state.db, &filters).await?;
-if let Some(source_id) = query.source {
-    filtered_events.retain(|e| e.source == Some(source_id));
-}
+let paginated_events: Vec<EventSummary> =
+    result.rows.into_iter().map(EventSummary::from).collect();
-if let Some(rule_ref) = &query.rule_ref {
-    let rule_ref_lower = rule_ref.to_lowercase();
-    filtered_events.retain(|e| {
-        e.rule_ref
-            .as_ref()
-            .map(|r| r.to_lowercase().contains(&rule_ref_lower))
-            .unwrap_or(false)
-    });
-}
-// Calculate pagination
-let total = filtered_events.len() as u64;
-let start = query.offset() as usize;
-let end = (start + query.limit() as usize).min(filtered_events.len());
-// Get paginated slice
-let paginated_events: Vec<EventSummary> = filtered_events[start..end]
-    .iter()
-    .map(|event| EventSummary::from(event.clone()))
-    .collect();
// Convert query params to pagination params for response
let pagination_params = PaginationParams {
    page: query.page,
    page_size: query.per_page,
};
-let response = PaginatedResponse::new(paginated_events, &pagination_params, total);
+let response = PaginatedResponse::new(paginated_events, &pagination_params, result.total);
Ok((StatusCode::OK, Json(response)))
}
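// For illustration, a request like GET /events?trigger_ref=core.timer&page=2&per_page=25
// would presumably produce (assuming limit()/offset() derive from page/per_page):
//
//   EventSearchFilters {
//       trigger: None,
//       trigger_ref: Some("core.timer".to_string()),
//       source: None,
//       rule_ref: None,
//       limit: 25,   // per_page
//       offset: 25,  // (page - 1) * per_page
//   }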
@@ -317,46 +306,32 @@ pub async fn list_enforcements(
State(state): State<Arc<AppState>>,
Query(query): Query<EnforcementQueryParams>,
) -> ApiResult<impl IntoResponse> {
-// Get enforcements based on filters
-let enforcements = if let Some(status) = query.status {
-    // Filter by status
-    EnforcementRepository::find_by_status(&state.db, status).await?
-} else if let Some(rule_id) = query.rule {
-    // Filter by rule ID
-    EnforcementRepository::find_by_rule(&state.db, rule_id).await?
-} else if let Some(event_id) = query.event {
-    // Filter by event ID
-    EnforcementRepository::find_by_event(&state.db, event_id).await?
-} else {
-    // Get all enforcements
-    EnforcementRepository::list(&state.db).await?
+// All filtering and pagination happen in a single SQL query.
+// Filters are combinable (AND), not mutually exclusive.
+let filters = EnforcementSearchFilters {
+    status: query.status,
+    rule: query.rule,
+    event: query.event,
+    trigger_ref: query.trigger_ref.clone(),
+    rule_ref: query.rule_ref.clone(),
+    limit: query.limit(),
+    offset: query.offset(),
};
-// Apply additional filters in memory
-let mut filtered_enforcements = enforcements;
+let result = EnforcementRepository::search(&state.db, &filters).await?;
-if let Some(trigger_ref) = &query.trigger_ref {
-    filtered_enforcements.retain(|e| e.trigger_ref == *trigger_ref);
-}
-// Calculate pagination
-let total = filtered_enforcements.len() as u64;
-let start = query.offset() as usize;
-let end = (start + query.limit() as usize).min(filtered_enforcements.len());
-// Get paginated slice
-let paginated_enforcements: Vec<EnforcementSummary> = filtered_enforcements[start..end]
-    .iter()
-    .map(|enforcement| EnforcementSummary::from(enforcement.clone()))
+let paginated_enforcements: Vec<EnforcementSummary> = result
+    .rows
+    .into_iter()
+    .map(EnforcementSummary::from)
    .collect();
// Convert query params to pagination params for response
let pagination_params = PaginationParams {
    page: query.page,
    page_size: query.per_page,
};
-let response = PaginatedResponse::new(paginated_enforcements, &pagination_params, total);
+let response = PaginatedResponse::new(paginated_enforcements, &pagination_params, result.total);
Ok((StatusCode::OK, Json(response)))
}

View File

@@ -10,20 +10,30 @@ use axum::{
routing::get,
Json, Router,
};
use chrono::Utc;
use futures::stream::{Stream, StreamExt};
use std::sync::Arc;
use tokio_stream::wrappers::BroadcastStream;
use attune_common::models::enums::ExecutionStatus;
-use attune_common::mq::{ExecutionRequestedPayload, MessageEnvelope, MessageType};
+use attune_common::mq::{
+    ExecutionCancelRequestedPayload, ExecutionRequestedPayload, MessageEnvelope, MessageType,
+    Publisher,
+};
use attune_common::repositories::{
    action::ActionRepository,
-    execution::{CreateExecutionInput, ExecutionRepository},
-    Create, EnforcementRepository, FindById, FindByRef, List,
+    execution::{
+        CreateExecutionInput, ExecutionRepository, ExecutionSearchFilters, UpdateExecutionInput,
+    },
+    workflow::{WorkflowDefinitionRepository, WorkflowExecutionRepository},
+    Create, FindById, FindByRef, Update,
};
use attune_common::workflow::{CancellationPolicy, WorkflowDefinition};
use sqlx::Row;
use crate::{
auth::middleware::RequireAuth,
authz::{AuthorizationCheck, AuthorizationService},
dto::{
common::{PaginatedResponse, PaginationParams},
execution::{
@@ -34,6 +44,7 @@ use crate::{
middleware::{ApiError, ApiResult},
state::AppState,
};
use attune_common::rbac::{Action, AuthorizationContext, Resource};
/// Create a new execution (manual execution)
///
@@ -53,7 +64,7 @@ use crate::{
)]
pub async fn create_execution(
State(state): State<Arc<AppState>>,
-RequireAuth(_user): RequireAuth,
+RequireAuth(user): RequireAuth,
Json(request): Json<CreateExecutionRequest>,
) -> ApiResult<impl IntoResponse> {
// Validate that the action exists
@@ -61,6 +72,29 @@ pub async fn create_execution(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Action '{}' not found", request.action_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut action_ctx = AuthorizationContext::new(identity_id);
action_ctx.target_id = Some(action.id);
action_ctx.target_ref = Some(action.r#ref.clone());
action_ctx.pack_ref = Some(action.pack_ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Actions,
action: Action::Execute,
context: action_ctx,
},
)
.await?;
}
// Create execution input
let execution_input = CreateExecutionInput {
action: Some(action.id),
@@ -76,6 +110,7 @@ pub async fn create_execution(
parent: None,
enforcement: None,
executor: None,
worker: None,
status: ExecutionStatus::Requested,
result: None,
workflow_task: None, // Non-workflow execution
@@ -98,7 +133,7 @@ pub async fn create_execution(
.with_source("api-service")
.with_correlation_id(uuid::Uuid::new_v4());
-if let Some(publisher) = &state.publisher {
+if let Some(publisher) = state.get_publisher().await {
publisher.publish_envelope(&message).await.map_err(|e| {
ApiError::InternalServerError(format!("Failed to publish message: {}", e))
})?;
@@ -125,113 +160,37 @@ pub async fn list_executions(
RequireAuth(_user): RequireAuth,
Query(query): Query<ExecutionQueryParams>,
) -> ApiResult<impl IntoResponse> {
-// Get executions based on filters
-let executions = if let Some(status) = query.status {
-    // Filter by status
-    ExecutionRepository::find_by_status(&state.db, status).await?
-} else if let Some(enforcement_id) = query.enforcement {
-    // Filter by enforcement
-    ExecutionRepository::find_by_enforcement(&state.db, enforcement_id).await?
-} else {
-    // Get all executions
-    ExecutionRepository::list(&state.db).await?
+// All filtering, pagination, and the enforcement JOIN happen in a single
+// SQL query — no in-memory filtering or post-fetch lookups.
+let filters = ExecutionSearchFilters {
+    status: query.status,
+    action_ref: query.action_ref.clone(),
+    pack_name: query.pack_name.clone(),
+    rule_ref: query.rule_ref.clone(),
+    trigger_ref: query.trigger_ref.clone(),
+    executor: query.executor,
+    result_contains: query.result_contains.clone(),
+    enforcement: query.enforcement,
+    parent: query.parent,
+    top_level_only: query.top_level_only == Some(true),
+    limit: query.limit(),
+    offset: query.offset(),
};
-// Apply additional filters in memory (could be optimized with database queries)
-let mut filtered_executions = executions;
+let result = ExecutionRepository::search(&state.db, &filters).await?;
-if let Some(action_ref) = &query.action_ref {
-    filtered_executions.retain(|e| e.action_ref == *action_ref);
-}
-if let Some(pack_name) = &query.pack_name {
-    filtered_executions.retain(|e| {
-        // action_ref format is "pack.action"
-        e.action_ref.starts_with(&format!("{}.", pack_name))
-    });
-}
-if let Some(result_search) = &query.result_contains {
-    let search_lower = result_search.to_lowercase();
-    filtered_executions.retain(|e| {
-        if let Some(result) = &e.result {
-            // Convert result to JSON string and search case-insensitively
-            let result_str = serde_json::to_string(result).unwrap_or_default();
-            result_str.to_lowercase().contains(&search_lower)
-        } else {
-            false
-        }
-    });
-}
-if let Some(parent_id) = query.parent {
-    filtered_executions.retain(|e| e.parent == Some(parent_id));
-}
-if let Some(executor_id) = query.executor {
-    filtered_executions.retain(|e| e.executor == Some(executor_id));
-}
-// Fetch enforcements for all executions to populate rule_ref and trigger_ref
-let enforcement_ids: Vec<i64> = filtered_executions
-    .iter()
-    .filter_map(|e| e.enforcement)
+let paginated_executions: Vec<ExecutionSummary> = result
+    .rows
+    .into_iter()
+    .map(ExecutionSummary::from)
    .collect();
-let enforcement_map: std::collections::HashMap<i64, _> = if !enforcement_ids.is_empty() {
-    let enforcements = EnforcementRepository::list(&state.db).await?;
-    enforcements.into_iter().map(|enf| (enf.id, enf)).collect()
-} else {
-    std::collections::HashMap::new()
-};
-// Filter by rule_ref if specified
-if let Some(rule_ref) = &query.rule_ref {
-    filtered_executions.retain(|e| {
-        e.enforcement
-            .and_then(|enf_id| enforcement_map.get(&enf_id))
-            .map(|enf| enf.rule_ref == *rule_ref)
-            .unwrap_or(false)
-    });
-}
-// Filter by trigger_ref if specified
-if let Some(trigger_ref) = &query.trigger_ref {
-    filtered_executions.retain(|e| {
-        e.enforcement
-            .and_then(|enf_id| enforcement_map.get(&enf_id))
-            .map(|enf| enf.trigger_ref == *trigger_ref)
-            .unwrap_or(false)
-    });
-}
-// Calculate pagination
-let total = filtered_executions.len() as u64;
-let start = query.offset() as usize;
-let end = (start + query.limit() as usize).min(filtered_executions.len());
-// Get paginated slice and populate rule_ref/trigger_ref from enforcements
-let paginated_executions: Vec<ExecutionSummary> = filtered_executions[start..end]
-    .iter()
-    .map(|e| {
-        let mut summary = ExecutionSummary::from(e.clone());
-        if let Some(enf_id) = e.enforcement {
-            if let Some(enforcement) = enforcement_map.get(&enf_id) {
-                summary.rule_ref = Some(enforcement.rule_ref.clone());
-                summary.trigger_ref = Some(enforcement.trigger_ref.clone());
-            }
-        }
-        summary
-    })
-    .collect();
// Convert query params to pagination params for response
let pagination_params = PaginationParams {
    page: query.page,
    page_size: query.per_page,
};
-let response = PaginatedResponse::new(paginated_executions, &pagination_params, total);
+let response = PaginatedResponse::new(paginated_executions, &pagination_params, result.total);
Ok((StatusCode::OK, Json(response)))
}
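// For intuition, the single query ExecutionRepository::search runs presumably looks
// something like this (illustrative SQL, not the actual repository implementation):
//
//   SELECT e.*, enf.rule_ref, enf.trigger_ref, COUNT(*) OVER () AS total
//   FROM execution e
//   LEFT JOIN enforcement enf ON enf.id = e.enforcement
//   WHERE ($1::text IS NULL OR e.action_ref = $1)
//     AND ($2::text IS NULL OR enf.rule_ref = $2)
//     -- ... one clause per optional filter ...
//   ORDER BY e.id DESC
//   LIMIT $3 OFFSET $4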
@@ -306,21 +265,23 @@ pub async fn list_executions_by_status(
}
};
-// Get executions by status
-let executions = ExecutionRepository::find_by_status(&state.db, status).await?;
+// Use the search method for SQL-side filtering + pagination.
+let filters = ExecutionSearchFilters {
+    status: Some(status),
+    limit: pagination.limit(),
+    offset: pagination.offset(),
+    ..Default::default()
+};
-// Calculate pagination
-let total = executions.len() as u64;
-let start = ((pagination.page - 1) * pagination.limit()) as usize;
-let end = (start + pagination.limit() as usize).min(executions.len());
+let result = ExecutionRepository::search(&state.db, &filters).await?;
-// Get paginated slice
-let paginated_executions: Vec<ExecutionSummary> = executions[start..end]
-    .iter()
-    .map(|e| ExecutionSummary::from(e.clone()))
+let paginated_executions: Vec<ExecutionSummary> = result
+    .rows
+    .into_iter()
+    .map(ExecutionSummary::from)
    .collect();
-let response = PaginatedResponse::new(paginated_executions, &pagination, total);
+let response = PaginatedResponse::new(paginated_executions, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -346,21 +307,23 @@ pub async fn list_executions_by_enforcement(
Path(enforcement_id): Path<i64>,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
-// Get executions by enforcement
-let executions = ExecutionRepository::find_by_enforcement(&state.db, enforcement_id).await?;
+// Use the search method for SQL-side filtering + pagination.
+let filters = ExecutionSearchFilters {
+    enforcement: Some(enforcement_id),
+    limit: pagination.limit(),
+    offset: pagination.offset(),
+    ..Default::default()
+};
-// Calculate pagination
-let total = executions.len() as u64;
-let start = ((pagination.page - 1) * pagination.limit()) as usize;
-let end = (start + pagination.limit() as usize).min(executions.len());
+let result = ExecutionRepository::search(&state.db, &filters).await?;
-// Get paginated slice
-let paginated_executions: Vec<ExecutionSummary> = executions[start..end]
-    .iter()
-    .map(|e| ExecutionSummary::from(e.clone()))
+let paginated_executions: Vec<ExecutionSummary> = result
+    .rows
+    .into_iter()
+    .map(ExecutionSummary::from)
    .collect();
-let response = PaginatedResponse::new(paginated_executions, &pagination, total);
+let response = PaginatedResponse::new(paginated_executions, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -380,34 +343,37 @@ pub async fn get_execution_stats(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
) -> ApiResult<impl IntoResponse> {
-// Get all executions (limited by repository to 1000)
-let executions = ExecutionRepository::list(&state.db).await?;
+// Use a single SQL query with COUNT + GROUP BY instead of fetching all rows.
+let rows = sqlx::query(
+    "SELECT status::text AS status, COUNT(*) AS cnt FROM execution GROUP BY status",
+)
+.fetch_all(&state.db)
+.await?;
-// Calculate statistics
-let total = executions.len();
-let completed = executions
-    .iter()
-    .filter(|e| e.status == attune_common::models::enums::ExecutionStatus::Completed)
-    .count();
-let failed = executions
-    .iter()
-    .filter(|e| e.status == attune_common::models::enums::ExecutionStatus::Failed)
-    .count();
-let running = executions
-    .iter()
-    .filter(|e| e.status == attune_common::models::enums::ExecutionStatus::Running)
-    .count();
-let pending = executions
-    .iter()
-    .filter(|e| {
-        matches!(
-            e.status,
-            attune_common::models::enums::ExecutionStatus::Requested
-                | attune_common::models::enums::ExecutionStatus::Scheduling
-                | attune_common::models::enums::ExecutionStatus::Scheduled
-        )
-    })
-    .count();
+let mut completed: i64 = 0;
+let mut failed: i64 = 0;
+let mut running: i64 = 0;
+let mut pending: i64 = 0;
+let mut cancelled: i64 = 0;
+let mut timeout: i64 = 0;
+let mut abandoned: i64 = 0;
+let mut total: i64 = 0;
+for row in &rows {
+    let status: &str = row.get("status");
+    let cnt: i64 = row.get("cnt");
+    total += cnt;
+    match status {
+        "completed" => completed = cnt,
+        "failed" => failed = cnt,
+        "running" => running = cnt,
+        "requested" | "scheduling" | "scheduled" => pending += cnt,
+        "cancelled" | "canceling" => cancelled += cnt,
+        "timeout" => timeout = cnt,
+        "abandoned" => abandoned = cnt,
+        _ => {}
+    }
+}
let stats = serde_json::json!({
"total": total,
@@ -415,9 +381,9 @@ pub async fn get_execution_stats(
"failed": failed,
"running": running,
"pending": pending,
"cancelled": executions.iter().filter(|e| e.status == attune_common::models::enums::ExecutionStatus::Cancelled).count(),
"timeout": executions.iter().filter(|e| e.status == attune_common::models::enums::ExecutionStatus::Timeout).count(),
"abandoned": executions.iter().filter(|e| e.status == attune_common::models::enums::ExecutionStatus::Abandoned).count(),
"cancelled": cancelled,
"timeout": timeout,
"abandoned": abandoned,
});
let response = ApiResponse::new(stats);
@@ -425,6 +391,467 @@ pub async fn get_execution_stats(
Ok((StatusCode::OK, Json(response)))
}
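// Shape of the resulting payload, with made-up counts (100 + 9 + 3 + 12 + 2 + 1 + 1 = 128):
//
//   { "total": 128, "completed": 100, "failed": 9, "running": 3,
//     "pending": 12, "cancelled": 2, "timeout": 1, "abandoned": 1 }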
/// Cancel a running execution
///
/// This endpoint requests cancellation of an execution. The execution must be in a
/// cancellable state (requested, scheduling, scheduled, running, or canceling).
/// For running executions, the worker will send SIGINT to the process, then SIGTERM
/// after a 10-second grace period if it hasn't stopped.
///
/// **Workflow cascading**: When a workflow (parent) execution is cancelled, all of
/// its incomplete child task executions are also cancelled. Children that haven't
/// reached a worker yet are set to Cancelled immediately; children that are running
/// receive a cancel MQ message so their worker can gracefully stop the process.
/// The workflow_execution record is also marked as Cancelled to prevent the
/// scheduler from dispatching any further tasks.
#[utoipa::path(
post,
path = "/api/v1/executions/{id}/cancel",
tag = "executions",
params(
("id" = i64, Path, description = "Execution ID")
),
responses(
(status = 200, description = "Cancellation requested", body = inline(ApiResponse<ExecutionResponse>)),
(status = 404, description = "Execution not found"),
(status = 409, description = "Execution is not in a cancellable state"),
),
security(("bearer_auth" = []))
)]
pub async fn cancel_execution(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
// Load the execution
let execution = ExecutionRepository::find_by_id(&state.db, id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Execution with ID {} not found", id)))?;
// Check if the execution is in a cancellable state
let cancellable = matches!(
execution.status,
ExecutionStatus::Requested
| ExecutionStatus::Scheduling
| ExecutionStatus::Scheduled
| ExecutionStatus::Running
| ExecutionStatus::Canceling
);
if !cancellable {
return Err(ApiError::Conflict(format!(
"Execution {} is in status '{}' and cannot be cancelled",
id,
format!("{:?}", execution.status).to_lowercase()
)));
}
// If already canceling, just return the current state
if execution.status == ExecutionStatus::Canceling {
let response = ApiResponse::new(ExecutionResponse::from(execution));
return Ok((StatusCode::OK, Json(response)));
}
let publisher = state.get_publisher().await;
// For executions that haven't reached a worker yet, cancel immediately
if matches!(
execution.status,
ExecutionStatus::Requested | ExecutionStatus::Scheduling | ExecutionStatus::Scheduled
) {
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Cancelled),
result: Some(
serde_json::json!({"error": "Cancelled by user before execution started"}),
),
..Default::default()
};
let updated = ExecutionRepository::update(&state.db, id, update).await?;
let delegated_to_executor = publish_status_change_to_executor(
publisher.as_deref(),
&execution,
ExecutionStatus::Cancelled,
"api-service",
)
.await;
if !delegated_to_executor {
cancel_workflow_children(&state.db, publisher.as_deref(), id).await;
}
let response = ApiResponse::new(ExecutionResponse::from(updated));
return Ok((StatusCode::OK, Json(response)));
}
// For running executions, set status to Canceling and send cancel message to the worker
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Canceling),
..Default::default()
};
let updated = ExecutionRepository::update(&state.db, id, update).await?;
let delegated_to_executor = publish_status_change_to_executor(
publisher.as_deref(),
&execution,
ExecutionStatus::Canceling,
"api-service",
)
.await;
// Send cancel request to the worker via MQ
if let Some(worker_id) = execution.worker {
send_cancel_to_worker(publisher.as_deref(), id, worker_id).await;
} else {
tracing::warn!(
"Execution {} has no worker assigned; marked as canceling but no MQ message sent",
id
);
}
if !delegated_to_executor {
cancel_workflow_children(&state.db, publisher.as_deref(), id).await;
}
let response = ApiResponse::new(ExecutionResponse::from(updated));
Ok((StatusCode::OK, Json(response)))
}
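// A minimal worker-side sketch (assumptions: tokio and nix crates; this is not
// the actual worker implementation) of the SIGINT -> 10s grace -> SIGTERM
// escalation described in the endpoint doc comment above:
use std::time::Duration;

use nix::sys::signal::{kill, Signal};
use nix::unistd::Pid;

async fn stop_gracefully(child: &mut tokio::process::Child) {
    let Some(pid) = child.id() else { return };
    let pid = Pid::from_raw(pid as i32);
    // Ask the process to stop.
    let _ = kill(pid, Signal::SIGINT);
    // Give it a 10-second grace period, then escalate.
    if tokio::time::timeout(Duration::from_secs(10), child.wait())
        .await
        .is_err()
    {
        let _ = kill(pid, Signal::SIGTERM);
    }
}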
/// Send a cancel MQ message to a specific worker for a specific execution.
async fn send_cancel_to_worker(publisher: Option<&Publisher>, execution_id: i64, worker_id: i64) {
let payload = ExecutionCancelRequestedPayload {
execution_id,
worker_id,
};
let envelope = MessageEnvelope::new(MessageType::ExecutionCancelRequested, payload)
.with_source("api-service")
.with_correlation_id(uuid::Uuid::new_v4());
if let Some(publisher) = publisher {
let routing_key = format!("execution.cancel.worker.{}", worker_id);
let exchange = "attune.executions";
if let Err(e) = publisher
.publish_envelope_with_routing(&envelope, exchange, &routing_key)
.await
{
tracing::error!(
"Failed to publish cancel request for execution {}: {}",
execution_id,
e
);
}
} else {
tracing::warn!(
"No MQ publisher available to send cancel request for execution {}",
execution_id
);
}
}
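// The routing key targets one worker, so each worker can bind its own queue
// with an exact-match key on the "attune.executions" exchange. A helper that
// mirrors the convention used above (illustrative, not part of the module):
fn cancel_routing_key(worker_id: i64) -> String {
    format!("execution.cancel.worker.{worker_id}")
}
// e.g. worker 42 consumes messages published with key "execution.cancel.worker.42".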
async fn publish_status_change_to_executor(
publisher: Option<&Publisher>,
execution: &attune_common::models::Execution,
new_status: ExecutionStatus,
source: &str,
) -> bool {
let Some(publisher) = publisher else {
return false;
};
let new_status = match new_status {
ExecutionStatus::Requested => "requested",
ExecutionStatus::Scheduling => "scheduling",
ExecutionStatus::Scheduled => "scheduled",
ExecutionStatus::Running => "running",
ExecutionStatus::Completed => "completed",
ExecutionStatus::Failed => "failed",
ExecutionStatus::Canceling => "canceling",
ExecutionStatus::Cancelled => "cancelled",
ExecutionStatus::Timeout => "timeout",
ExecutionStatus::Abandoned => "abandoned",
};
let payload = attune_common::mq::ExecutionStatusChangedPayload {
execution_id: execution.id,
action_ref: execution.action_ref.clone(),
previous_status: format!("{:?}", execution.status).to_lowercase(),
new_status: new_status.to_string(),
changed_at: Utc::now(),
};
let envelope = MessageEnvelope::new(MessageType::ExecutionStatusChanged, payload)
.with_source(source)
.with_correlation_id(uuid::Uuid::new_v4());
if let Err(e) = publisher.publish_envelope(&envelope).await {
tracing::error!(
"Failed to publish status change for execution {} to executor: {}",
execution.id,
e
);
return false;
}
true
}
/// Resolve the [`CancellationPolicy`] for a workflow parent execution.
///
/// Looks up the `workflow_execution` → `workflow_definition` chain and
/// deserialises the stored definition to extract the policy. Returns
/// [`CancellationPolicy::AllowFinish`] (the default) when any lookup
/// step fails so that the safest behaviour is used as a fallback.
async fn resolve_cancellation_policy(
db: &sqlx::PgPool,
parent_execution_id: i64,
) -> CancellationPolicy {
let wf_exec =
match WorkflowExecutionRepository::find_by_execution(db, parent_execution_id).await {
Ok(Some(wf)) => wf,
_ => return CancellationPolicy::default(),
};
let wf_def = match WorkflowDefinitionRepository::find_by_id(db, wf_exec.workflow_def).await {
Ok(Some(def)) => def,
_ => return CancellationPolicy::default(),
};
// Deserialise the stored JSON definition to extract the policy field.
match serde_json::from_value::<WorkflowDefinition>(wf_def.definition) {
Ok(def) => def.cancellation_policy,
Err(e) => {
tracing::warn!(
"Failed to deserialise workflow definition for workflow_def {}: {}. \
Falling back to AllowFinish cancellation policy.",
wf_exec.workflow_def,
e
);
CancellationPolicy::default()
}
}
}
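// Given the fallback above, the policy type presumably looks like this (a
// sketch; the real definition lives in attune_common::workflow and likely
// carries serde derives as well):
#[derive(Debug, Clone, Copy, PartialEq, Default)]
pub enum CancellationPolicy {
    /// Let running tasks finish; just stop dispatching new ones.
    #[default]
    AllowFinish,
    /// Actively cancel running tasks as well.
    CancelRunning,
}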
/// Cancel all incomplete child executions of a workflow parent execution.
///
/// This handles the workflow cascade: when a workflow execution is cancelled,
/// its child task executions must also be cancelled to prevent further work.
/// Additionally, the `workflow_execution` record is marked Cancelled so the
/// scheduler's `advance_workflow` will short-circuit and not dispatch new tasks.
///
/// Behaviour depends on the workflow's [`CancellationPolicy`]:
///
/// - **`AllowFinish`** (default): Children in pre-running states (Requested,
/// Scheduling, Scheduled) are set to Cancelled immediately. Running children
/// are left alone and will complete naturally; `advance_workflow` sees the
/// cancelled `workflow_execution` and will not dispatch further tasks.
///
/// - **`CancelRunning`**: Pre-running children are cancelled as above.
/// Running children also receive a cancel MQ message so their worker can
/// gracefully stop the process (SIGINT → SIGTERM → SIGKILL).
async fn cancel_workflow_children(
db: &sqlx::PgPool,
publisher: Option<&Publisher>,
parent_execution_id: i64,
) {
// Determine the cancellation policy from the workflow definition.
let policy = resolve_cancellation_policy(db, parent_execution_id).await;
cancel_workflow_children_with_policy(db, publisher, parent_execution_id, policy).await;
}
/// Inner implementation that carries the resolved [`CancellationPolicy`]
/// through recursive calls so that nested child workflows inherit the
/// top-level policy.
async fn cancel_workflow_children_with_policy(
db: &sqlx::PgPool,
publisher: Option<&Publisher>,
parent_execution_id: i64,
policy: CancellationPolicy,
) {
// Find all child executions that are still incomplete
let children: Vec<attune_common::models::Execution> = match sqlx::query_as::<
_,
attune_common::models::Execution,
>(&format!(
"SELECT {} FROM execution WHERE parent = $1 AND status NOT IN ('completed', 'failed', 'timeout', 'cancelled', 'abandoned')",
attune_common::repositories::execution::SELECT_COLUMNS
))
.bind(parent_execution_id)
.fetch_all(db)
.await
{
Ok(rows) => rows,
Err(e) => {
tracing::error!(
"Failed to fetch child executions for parent {}: {}",
parent_execution_id,
e
);
return;
}
};
if children.is_empty() {
return;
}
tracing::info!(
"Cascading cancellation from execution {} to {} child execution(s) (policy: {:?})",
parent_execution_id,
children.len(),
policy,
);
for child in &children {
let child_id = child.id;
if matches!(
child.status,
ExecutionStatus::Requested | ExecutionStatus::Scheduling | ExecutionStatus::Scheduled
) {
// Pre-running: cancel immediately in DB (both policies)
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Cancelled),
result: Some(serde_json::json!({
"error": "Cancelled: parent workflow execution was cancelled"
})),
..Default::default()
};
if let Err(e) = ExecutionRepository::update(db, child_id, update).await {
tracing::error!("Failed to cancel child execution {}: {}", child_id, e);
} else {
tracing::info!("Cancelled pre-running child execution {}", child_id);
}
} else if matches!(
child.status,
ExecutionStatus::Running | ExecutionStatus::Canceling
) {
match policy {
CancellationPolicy::CancelRunning => {
// Running: set to Canceling and send MQ message to the worker
if child.status != ExecutionStatus::Canceling {
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Canceling),
..Default::default()
};
if let Err(e) = ExecutionRepository::update(db, child_id, update).await {
tracing::error!(
"Failed to set child execution {} to canceling: {}",
child_id,
e
);
}
}
if let Some(worker_id) = child.worker {
send_cancel_to_worker(publisher, child_id, worker_id).await;
}
}
CancellationPolicy::AllowFinish => {
// Running tasks are allowed to complete naturally.
// advance_workflow will see the cancelled workflow_execution
// and will not dispatch any further tasks.
tracing::info!(
"AllowFinish policy: leaving running child execution {} alone",
child_id
);
}
}
}
// Recursively cancel grandchildren (nested workflows)
// Use Box::pin to allow the recursive async call
Box::pin(cancel_workflow_children_with_policy(
db, publisher, child_id, policy,
))
.await;
}
// Also mark any associated workflow_execution record as Cancelled so that
// advance_workflow short-circuits and does not dispatch new tasks.
// A workflow_execution is linked to the parent execution via its `execution` column.
if let Ok(Some(wf_exec)) =
WorkflowExecutionRepository::find_by_execution(db, parent_execution_id).await
{
if !matches!(
wf_exec.status,
ExecutionStatus::Completed | ExecutionStatus::Failed | ExecutionStatus::Cancelled
) {
let wf_update = attune_common::repositories::workflow::UpdateWorkflowExecutionInput {
status: Some(ExecutionStatus::Cancelled),
error_message: Some(
"Cancelled: parent workflow execution was cancelled".to_string(),
),
current_tasks: Some(vec![]),
completed_tasks: None,
failed_tasks: None,
skipped_tasks: None,
variables: None,
paused: None,
pause_reason: None,
};
if let Err(e) = WorkflowExecutionRepository::update(db, wf_exec.id, wf_update).await {
tracing::error!("Failed to cancel workflow_execution {}: {}", wf_exec.id, e);
} else {
tracing::info!(
"Cancelled workflow_execution {} for parent execution {}",
wf_exec.id,
parent_execution_id
);
}
}
}
// If no children are still running (all were pre-running or were
// cancelled), finalize the parent execution as Cancelled immediately.
// Without this, the parent would stay stuck in "Canceling" because no
// task completion would trigger advance_workflow to finalize it.
let still_running: Vec<attune_common::models::Execution> = match sqlx::query_as::<
_,
attune_common::models::Execution,
>(&format!(
"SELECT {} FROM execution WHERE parent = $1 AND status IN ('running', 'canceling', 'scheduling', 'scheduled', 'requested')",
attune_common::repositories::execution::SELECT_COLUMNS
))
.bind(parent_execution_id)
.fetch_all(db)
.await
{
Ok(rows) => rows,
Err(e) => {
tracing::error!(
"Failed to check remaining children for parent {}: {}",
parent_execution_id,
e
);
return;
}
};
if still_running.is_empty() {
// No children left in flight — finalize the parent execution now.
let update = UpdateExecutionInput {
status: Some(ExecutionStatus::Cancelled),
result: Some(serde_json::json!({
"error": "Workflow cancelled",
"succeeded": false,
})),
..Default::default()
};
if let Err(e) = ExecutionRepository::update(db, parent_execution_id, update).await {
tracing::error!(
"Failed to finalize parent execution {} as Cancelled: {}",
parent_execution_id,
e
);
} else {
tracing::info!(
"Finalized parent execution {} as Cancelled (no running children remain)",
parent_execution_id
);
}
}
}
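// Worked example (hypothetical IDs): cancelling parent execution 10 whose
// children are 11 (scheduled) and 12 (running), under CancelRunning:
//   - 11 is set to Cancelled directly in the DB;
//   - 12 moves to Canceling and its worker receives a cancel MQ message;
//   - the workflow_execution row for 10 is marked Cancelled;
//   - 12 is still in flight, so 10 stays Canceling until the worker reports back.
// Under AllowFinish, 12 would instead run to completion and only then would
// the workflow finalize.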
/// Create execution routes
/// Stream execution updates via Server-Sent Events
///
@@ -511,6 +938,10 @@ pub fn routes() -> Router<Arc<AppState>> {
.route("/executions/stats", get(get_execution_stats))
.route("/executions/stream", get(stream_execution_updates))
.route("/executions/{id}", get(get_execution))
.route(
"/executions/{id}/cancel",
axum::routing::post(cancel_execution),
)
.route(
"/executions/status/{status}",
get(list_executions_by_status),

View File

@@ -0,0 +1,191 @@
//! Entity history API routes
//!
//! Provides read-only access to the TimescaleDB entity history hypertables.
//! History records are written by PostgreSQL triggers — these endpoints only query them.
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::IntoResponse,
routing::get,
Json, Router,
};
use std::sync::Arc;
use attune_common::models::entity_history::HistoryEntityType;
use attune_common::repositories::entity_history::EntityHistoryRepository;
use crate::{
auth::middleware::RequireAuth,
dto::{
common::{PaginatedResponse, PaginationMeta, PaginationParams},
history::{HistoryQueryParams, HistoryRecordResponse},
},
middleware::{ApiError, ApiResult},
state::AppState,
};
/// List history records for a given entity type.
///
/// Supported entity types: `execution`, `worker`.
/// Returns a paginated list of change records ordered by time descending.
#[utoipa::path(
get,
path = "/api/v1/history/{entity_type}",
tag = "history",
params(
("entity_type" = String, Path, description = "Entity type: execution or worker"),
HistoryQueryParams,
),
responses(
(status = 200, description = "Paginated list of history records", body = PaginatedResponse<HistoryRecordResponse>),
(status = 400, description = "Invalid entity type"),
),
security(("bearer_auth" = []))
)]
pub async fn list_entity_history(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(entity_type_str): Path<String>,
Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
let entity_type = parse_entity_type(&entity_type_str)?;
let repo_params = query.to_repo_params();
let (records, total) = tokio::try_join!(
EntityHistoryRepository::query(&state.db, entity_type, &repo_params),
EntityHistoryRepository::count(&state.db, entity_type, &repo_params),
)?;
let data: Vec<HistoryRecordResponse> = records.into_iter().map(Into::into).collect();
let pagination_params = PaginationParams {
page: query.page,
page_size: query.page_size,
};
let response = PaginatedResponse {
data,
pagination: PaginationMeta::new(
pagination_params.page,
pagination_params.page_size,
total as u64,
),
};
Ok((StatusCode::OK, Json(response)))
}
/// Get history for a specific execution by ID.
///
/// Returns all change records for the given execution, ordered by time descending.
#[utoipa::path(
get,
path = "/api/v1/executions/{id}/history",
tag = "history",
params(
("id" = i64, Path, description = "Execution ID"),
HistoryQueryParams,
),
responses(
(status = 200, description = "History records for the execution", body = PaginatedResponse<HistoryRecordResponse>),
),
security(("bearer_auth" = []))
)]
pub async fn get_execution_history(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(id): Path<i64>,
Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
get_entity_history_by_id(&state, HistoryEntityType::Execution, id, query).await
}
/// Get history for a specific worker by ID.
///
/// Returns all change records for the given worker, ordered by time descending.
#[utoipa::path(
get,
path = "/api/v1/workers/{id}/history",
tag = "history",
params(
("id" = i64, Path, description = "Worker ID"),
HistoryQueryParams,
),
responses(
(status = 200, description = "History records for the worker", body = PaginatedResponse<HistoryRecordResponse>),
),
security(("bearer_auth" = []))
)]
pub async fn get_worker_history(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(id): Path<i64>,
Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
get_entity_history_by_id(&state, HistoryEntityType::Worker, id, query).await
}
// ---------------------------------------------------------------------------
// Shared helpers
// ---------------------------------------------------------------------------
/// Parse and validate the entity type path parameter.
fn parse_entity_type(s: &str) -> Result<HistoryEntityType, ApiError> {
s.parse::<HistoryEntityType>().map_err(ApiError::BadRequest)
}
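// parse_entity_type relies on HistoryEntityType implementing FromStr with a
// String error (inferred from the map_err into ApiError::BadRequest above).
// A sketch of what that impl in attune_common presumably looks like:
impl std::str::FromStr for HistoryEntityType {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "execution" => Ok(Self::Execution),
            "worker" => Ok(Self::Worker),
            other => Err(format!(
                "Invalid entity type '{other}': expected 'execution' or 'worker'"
            )),
        }
    }
}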
/// Shared implementation for `GET /<entities>/:id/history` endpoints.
async fn get_entity_history_by_id(
state: &AppState,
entity_type: HistoryEntityType,
entity_id: i64,
query: HistoryQueryParams,
) -> ApiResult<impl IntoResponse> {
// Override entity_id from the path — ignore any entity_id in query params
let mut repo_params = query.to_repo_params();
repo_params.entity_id = Some(entity_id);
let (records, total) = tokio::try_join!(
EntityHistoryRepository::query(&state.db, entity_type, &repo_params),
EntityHistoryRepository::count(&state.db, entity_type, &repo_params),
)?;
let data: Vec<HistoryRecordResponse> = records.into_iter().map(Into::into).collect();
let pagination_params = PaginationParams {
page: query.page,
page_size: query.page_size,
};
let response = PaginatedResponse {
data,
pagination: PaginationMeta::new(
pagination_params.page,
pagination_params.page_size,
total as u64,
),
};
Ok((StatusCode::OK, Json(response)))
}
// ---------------------------------------------------------------------------
// Router
// ---------------------------------------------------------------------------
/// Build the history routes.
///
/// Mounts:
/// - `GET /history/:entity_type` — generic history query
/// - `GET /executions/:id/history` — execution-specific history
/// - `GET /workers/:id/history` — worker-specific history (note: currently no /workers base route exists)
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
// Generic history endpoint
.route("/history/{entity_type}", get(list_entity_history))
// Entity-specific convenience endpoints
.route("/executions/{id}/history", get(get_execution_history))
.route("/workers/{id}/history", get(get_worker_history))
}

View File

@@ -14,8 +14,10 @@ use attune_common::{
mq::{InquiryRespondedPayload, MessageEnvelope, MessageType},
repositories::{
execution::ExecutionRepository,
-inquiry::{CreateInquiryInput, InquiryRepository, UpdateInquiryInput},
-Create, Delete, FindById, List, Update,
+inquiry::{
+    CreateInquiryInput, InquiryRepository, InquirySearchFilters, UpdateInquiryInput,
+},
+Create, Delete, FindById, Update,
},
};
@@ -51,45 +53,30 @@ pub async fn list_inquiries(
State(state): State<Arc<AppState>>,
Query(query): Query<InquiryQueryParams>,
) -> ApiResult<impl IntoResponse> {
-// Get inquiries based on filters
-let inquiries = if let Some(status) = query.status {
-    // Filter by status
-    InquiryRepository::find_by_status(&state.db, status).await?
-} else if let Some(execution_id) = query.execution {
-    // Filter by execution
-    InquiryRepository::find_by_execution(&state.db, execution_id).await?
-} else {
-    // Get all inquiries
-    InquiryRepository::list(&state.db).await?
+// All filtering and pagination happen in a single SQL query.
+// Filters are combinable (AND), not mutually exclusive.
+let limit = query.limit.unwrap_or(50).min(500) as u32;
+let offset = query.offset.unwrap_or(0) as u32;
+let filters = InquirySearchFilters {
+    status: query.status,
+    execution: query.execution,
+    assigned_to: query.assigned_to,
+    limit,
+    offset,
};
-// Apply additional filters in memory
-let mut filtered_inquiries = inquiries;
+let result = InquiryRepository::search(&state.db, &filters).await?;
-if let Some(assigned_to) = query.assigned_to {
-    filtered_inquiries.retain(|i| i.assigned_to == Some(assigned_to));
-}
+let paginated_inquiries: Vec<InquirySummary> =
+    result.rows.into_iter().map(InquirySummary::from).collect();
-// Calculate pagination
-let total = filtered_inquiries.len() as u64;
-let offset = query.offset.unwrap_or(0);
-let limit = query.limit.unwrap_or(50).min(500);
-let start = offset;
-let end = (start + limit).min(filtered_inquiries.len());
-// Get paginated slice
-let paginated_inquiries: Vec<InquirySummary> = filtered_inquiries[start..end]
-    .iter()
-    .map(|inquiry| InquirySummary::from(inquiry.clone()))
-    .collect();
// Convert to pagination params for response
let pagination_params = PaginationParams {
-    page: (offset / limit.max(1)) as u32 + 1,
-    page_size: limit as u32,
+    page: (offset / limit.max(1)) + 1,
+    page_size: limit,
};
-let response = PaginatedResponse::new(paginated_inquiries, &pagination_params, total);
+let response = PaginatedResponse::new(paginated_inquiries, &pagination_params, result.total);
Ok((StatusCode::OK, Json(response)))
}
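// Quick sanity check (a sketch test) of the offset -> page conversion above:
// offset 100 with limit 50 lands on 1-based page 3, and limit.max(1) guards
// against a zero limit.
#[cfg(test)]
mod offset_page_math {
    #[test]
    fn offset_to_page() {
        let (offset, limit): (u32, u32) = (100, 50);
        assert_eq!(offset / limit.max(1) + 1, 3);
    }
}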
@@ -161,20 +148,21 @@ pub async fn list_inquiries_by_status(
}
};
-let inquiries = InquiryRepository::find_by_status(&state.db, status).await?;
+// Use the search method for SQL-side filtering + pagination.
+let filters = InquirySearchFilters {
+    status: Some(status),
+    execution: None,
+    assigned_to: None,
+    limit: pagination.limit(),
+    offset: pagination.offset(),
+};
-// Calculate pagination
-let total = inquiries.len() as u64;
-let start = ((pagination.page - 1) * pagination.limit()) as usize;
-let end = (start + pagination.limit() as usize).min(inquiries.len());
+let result = InquiryRepository::search(&state.db, &filters).await?;
-// Get paginated slice
-let paginated_inquiries: Vec<InquirySummary> = inquiries[start..end]
-    .iter()
-    .map(|inquiry| InquirySummary::from(inquiry.clone()))
-    .collect();
+let paginated_inquiries: Vec<InquirySummary> =
+    result.rows.into_iter().map(InquirySummary::from).collect();
-let response = PaginatedResponse::new(paginated_inquiries, &pagination, total);
+let response = PaginatedResponse::new(paginated_inquiries, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -209,20 +197,21 @@ pub async fn list_inquiries_by_execution(
ApiError::NotFound(format!("Execution with ID {} not found", execution_id))
})?;
-let inquiries = InquiryRepository::find_by_execution(&state.db, execution_id).await?;
+// Use the search method for SQL-side filtering + pagination.
+let filters = InquirySearchFilters {
+    status: None,
+    execution: Some(execution_id),
+    assigned_to: None,
+    limit: pagination.limit(),
+    offset: pagination.offset(),
+};
-// Calculate pagination
-let total = inquiries.len() as u64;
-let start = ((pagination.page - 1) * pagination.limit()) as usize;
-let end = (start + pagination.limit() as usize).min(inquiries.len());
+let result = InquiryRepository::search(&state.db, &filters).await?;
-// Get paginated slice
-let paginated_inquiries: Vec<InquirySummary> = inquiries[start..end]
-    .iter()
-    .map(|inquiry| InquirySummary::from(inquiry.clone()))
-    .collect();
+let paginated_inquiries: Vec<InquirySummary> =
+    result.rows.into_iter().map(InquirySummary::from).collect();
-let response = PaginatedResponse::new(paginated_inquiries, &pagination, total);
+let response = PaginatedResponse::new(paginated_inquiries, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -414,7 +403,7 @@ pub async fn respond_to_inquiry(
let updated_inquiry = InquiryRepository::update(&state.db, id, update_input).await?;
// Publish InquiryResponded message if publisher is available
-if let Some(publisher) = &state.publisher {
+if let Some(publisher) = state.get_publisher().await {
let user_id = user
.0
.identity_id()

View File

@@ -11,12 +11,20 @@ use std::sync::Arc;
use validator::Validate;
use attune_common::repositories::{
-key::{CreateKeyInput, KeyRepository, UpdateKeyInput},
-Create, Delete, List, Update,
+action::ActionRepository,
+key::{CreateKeyInput, KeyRepository, KeySearchFilters, UpdateKeyInput},
+pack::PackRepository,
+trigger::SensorRepository,
+Create, Delete, FindByRef, Update,
};
use attune_common::{
models::{key::Key, OwnerType},
rbac::{Action, AuthorizationContext, Resource},
};
-use crate::auth::RequireAuth;
+use crate::auth::{jwt::TokenType, RequireAuth};
use crate::{
authz::{AuthorizationCheck, AuthorizationService},
dto::{
common::{PaginatedResponse, PaginationParams},
key::{CreateKeyRequest, KeyQueryParams, KeyResponse, KeySummary, UpdateKeyRequest},
@@ -38,44 +46,53 @@ use crate::{
security(("bearer_auth" = []))
)]
pub async fn list_keys(
-_user: RequireAuth,
+user: RequireAuth,
State(state): State<Arc<AppState>>,
Query(query): Query<KeyQueryParams>,
) -> ApiResult<impl IntoResponse> {
-// Get keys based on filters
-let keys = if let Some(owner_type) = query.owner_type {
-    // Filter by owner type
-    KeyRepository::find_by_owner_type(&state.db, owner_type).await?
-} else {
-    // Get all keys
-    KeyRepository::list(&state.db).await?
+// All filtering and pagination happen in a single SQL query.
+let filters = KeySearchFilters {
+    owner_type: query.owner_type,
+    owner: query.owner.clone(),
+    limit: query.limit(),
+    offset: query.offset(),
};
-// Apply additional filters in memory
-let mut filtered_keys = keys;
+let result = KeyRepository::search(&state.db, &filters).await?;
+let mut rows = result.rows;
-if let Some(owner) = &query.owner {
-    filtered_keys.retain(|k| k.owner.as_ref() == Some(owner));
+if user.0.claims.token_type == TokenType::Access {
+    let identity_id = user
+        .0
+        .identity_id()
+        .map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
+    let authz = AuthorizationService::new(state.db.clone());
+    let grants = authz.effective_grants(&user.0).await?;
+    // Ensure the principal can read at least some key records.
+    let can_read_any_key = grants
+        .iter()
+        .any(|g| g.resource == Resource::Keys && g.actions.contains(&Action::Read));
+    if !can_read_any_key {
+        return Err(ApiError::Forbidden(
+            "Insufficient permissions: keys:read".to_string(),
+        ));
+    }
+    rows.retain(|key| {
+        let ctx = key_authorization_context(identity_id, key);
+        AuthorizationService::is_allowed(&grants, Resource::Keys, Action::Read, &ctx)
+    });
}
-// Calculate pagination
-let total = filtered_keys.len() as u64;
-let start = query.offset() as usize;
-let end = (start + query.limit() as usize).min(filtered_keys.len());
+let paginated_keys: Vec<KeySummary> = rows.into_iter().map(KeySummary::from).collect();
-// Get paginated slice (values redacted in summary)
-let paginated_keys: Vec<KeySummary> = filtered_keys[start..end]
-    .iter()
-    .map(|key| KeySummary::from(key.clone()))
-    .collect();
// Convert query params to pagination params for response
let pagination_params = PaginationParams {
    page: query.page,
    page_size: query.per_page,
};
-let response = PaginatedResponse::new(paginated_keys, &pagination_params, total);
+let response = PaginatedResponse::new(paginated_keys, &pagination_params, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -95,7 +112,7 @@ pub async fn list_keys(
security(("bearer_auth" = []))
)]
pub async fn get_key(
-_user: RequireAuth,
+user: RequireAuth,
State(state): State<Arc<AppState>>,
Path(key_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
@@ -103,24 +120,75 @@ pub async fn get_key(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Key '{}' not found", key_ref)))?;
-// Decrypt value if encrypted
-if key.encrypted {
-    let encryption_key = state
-        .config
-        .security
-        .encryption_key
-        .as_ref()
-        .ok_or_else(|| {
-            ApiError::InternalServerError("Encryption key not configured on server".to_string())
-        })?;
+// For encrypted keys, track whether this caller is permitted to see the value.
+// Non-Access tokens (sensor, execution) always get full access.
+let can_decrypt = if user.0.claims.token_type == TokenType::Access {
+    let identity_id = user
+        .0
+        .identity_id()
+        .map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
+    let authz = AuthorizationService::new(state.db.clone());
-    let decrypted_value =
-        attune_common::crypto::decrypt(&key.value, encryption_key).map_err(|e| {
+    // Basic read check — hide behind 404 to prevent enumeration.
+    authz
+        .authorize(
+            &user.0,
+            AuthorizationCheck {
+                resource: Resource::Keys,
+                action: Action::Read,
+                context: key_authorization_context(identity_id, &key),
+            },
+        )
+        .await
+        .map_err(|_| ApiError::NotFound(format!("Key '{}' not found", key_ref)))?;
+    // For encrypted keys, separately check Keys::Decrypt.
+    // Failing this is not an error — we just return the value as null.
+    if key.encrypted {
+        authz
+            .authorize(
+                &user.0,
+                AuthorizationCheck {
+                    resource: Resource::Keys,
+                    action: Action::Decrypt,
+                    context: key_authorization_context(identity_id, &key),
+                },
+            )
+            .await
+            .is_ok()
+    } else {
+        true
+    }
+} else {
+    true
+};
+// Decrypt value if encrypted and caller has permission.
+// If they lack Keys::Decrypt, return null rather than the ciphertext.
+if key.encrypted {
+    if can_decrypt {
+        let encryption_key =
+            state
+                .config
+                .security
+                .encryption_key
+                .as_ref()
+                .ok_or_else(|| {
+                    ApiError::InternalServerError(
+                        "Encryption key not configured on server".to_string(),
+                    )
+                })?;
+        let decrypted_value = attune_common::crypto::decrypt_json(&key.value, encryption_key)
+            .map_err(|e| {
                tracing::error!("Failed to decrypt key '{}': {}", key_ref, e);
                ApiError::InternalServerError(format!("Failed to decrypt key: {}", e))
            })?;
        key.value = decrypted_value;
+    } else {
+        key.value = serde_json::Value::Null;
+    }
}
let response = ApiResponse::new(KeyResponse::from(key));
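// Net behaviour for an Access-token caller (worked example, hypothetical grants):
//   keys:read only            -> 200 with "value": null (ciphertext never leaks)
//   keys:read + keys:decrypt  -> 200 with the decrypted JSON value
//   no keys:read on this key  -> 404, hiding the key's existence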
@@ -142,21 +210,121 @@ pub async fn get_key(
security(("bearer_auth" = []))
)]
pub async fn create_key(
-_user: RequireAuth,
+user: RequireAuth,
State(state): State<Arc<AppState>>,
Json(request): Json<CreateKeyRequest>,
) -> ApiResult<impl IntoResponse> {
// Validate request
request.validate()?;
if user.0.claims.token_type == TokenType::Access {
let identity_id = user
.0
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.owner_identity_id = request.owner_identity;
ctx.owner_type = Some(request.owner_type);
ctx.owner_ref = requested_key_owner_ref(&request);
ctx.encrypted = Some(request.encrypted);
ctx.target_ref = Some(request.r#ref.clone());
authz
.authorize(
&user.0,
AuthorizationCheck {
resource: Resource::Keys,
action: Action::Create,
context: ctx,
},
)
.await?;
}
// Check if key with same ref already exists
-if let Some(_) = KeyRepository::find_by_ref(&state.db, &request.r#ref).await? {
+if KeyRepository::find_by_ref(&state.db, &request.r#ref)
+    .await?
+    .is_some()
+{
return Err(ApiError::Conflict(format!(
"Key with ref '{}' already exists",
request.r#ref
)));
}
// Auto-resolve owner IDs from refs when only the ref is provided.
// This makes the API more ergonomic for sensors and other clients that
// know the owner ref but not the numeric database ID.
let mut owner_sensor = request.owner_sensor;
let mut owner_action = request.owner_action;
let mut owner_pack = request.owner_pack;
match request.owner_type {
OwnerType::Sensor => {
if owner_sensor.is_none() {
if let Some(ref sensor_ref) = request.owner_sensor_ref {
if let Some(sensor) =
SensorRepository::find_by_ref(&state.db, sensor_ref).await?
{
tracing::debug!(
"Auto-resolved owner_sensor from ref '{}' to id {}",
sensor_ref,
sensor.id
);
owner_sensor = Some(sensor.id);
} else {
return Err(ApiError::BadRequest(format!(
"Sensor with ref '{}' not found",
sensor_ref
)));
}
}
}
}
OwnerType::Action => {
if owner_action.is_none() {
if let Some(ref action_ref) = request.owner_action_ref {
if let Some(action) =
ActionRepository::find_by_ref(&state.db, action_ref).await?
{
tracing::debug!(
"Auto-resolved owner_action from ref '{}' to id {}",
action_ref,
action.id
);
owner_action = Some(action.id);
} else {
return Err(ApiError::BadRequest(format!(
"Action with ref '{}' not found",
action_ref
)));
}
}
}
}
OwnerType::Pack => {
if owner_pack.is_none() {
if let Some(ref pack_ref) = request.owner_pack_ref {
if let Some(pack) = PackRepository::find_by_ref(&state.db, pack_ref).await? {
tracing::debug!(
"Auto-resolved owner_pack from ref '{}' to id {}",
pack_ref,
pack.id
);
owner_pack = Some(pack.id);
} else {
return Err(ApiError::BadRequest(format!(
"Pack with ref '{}' not found",
pack_ref
)));
}
}
}
}
_ => {}
}
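// Worked example (hypothetical refs): a client that only knows its sensor ref can send
//   { "ref": "my.key", "owner_type": "sensor", "owner_sensor_ref": "core.timer", ... }
// and the numeric owner_sensor FK is resolved here; an unknown ref yields a 400.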
// Encrypt value if requested
let (value, encryption_key_hash) = if request.encrypted {
let encryption_key = state
@@ -170,11 +338,11 @@ pub async fn create_key(
)
})?;
-let encrypted_value = attune_common::crypto::encrypt(&request.value, encryption_key)
+let encrypted_value = attune_common::crypto::encrypt_json(&request.value, encryption_key)
    .map_err(|e| {
        tracing::error!("Failed to encrypt key value: {}", e);
        ApiError::InternalServerError(format!("Failed to encrypt value: {}", e))
    })?;
let key_hash = attune_common::crypto::hash_encryption_key(encryption_key);
@@ -190,11 +358,11 @@ pub async fn create_key(
owner_type: request.owner_type,
owner: request.owner,
owner_identity: request.owner_identity,
-owner_pack: request.owner_pack,
+owner_pack,
owner_pack_ref: request.owner_pack_ref,
-owner_action: request.owner_action,
+owner_action,
owner_action_ref: request.owner_action_ref,
-owner_sensor: request.owner_sensor,
+owner_sensor,
owner_sensor_ref: request.owner_sensor_ref,
name: request.name,
encrypted: request.encrypted,
@@ -207,10 +375,11 @@ pub async fn create_key(
// Return decrypted value in response
if key.encrypted {
let encryption_key = state.config.security.encryption_key.as_ref().unwrap();
-key.value = attune_common::crypto::decrypt(&key.value, encryption_key).map_err(|e| {
+key.value =
+    attune_common::crypto::decrypt_json(&key.value, encryption_key).map_err(|e| {
        tracing::error!("Failed to decrypt newly created key: {}", e);
        ApiError::InternalServerError(format!("Failed to decrypt value: {}", e))
    })?;
}
let response = ApiResponse::with_message(KeyResponse::from(key), "Key created successfully");
@@ -235,7 +404,7 @@ pub async fn create_key(
security(("bearer_auth" = []))
)]
pub async fn update_key(
-_user: RequireAuth,
+user: RequireAuth,
State(state): State<Arc<AppState>>,
Path(key_ref): Path<String>,
Json(request): Json<UpdateKeyRequest>,
@@ -248,6 +417,24 @@ pub async fn update_key(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Key '{}' not found", key_ref)))?;
if user.0.claims.token_type == TokenType::Access {
let identity_id = user
.0
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
authz
.authorize(
&user.0,
AuthorizationCheck {
resource: Resource::Keys,
action: Action::Update,
context: key_authorization_context(identity_id, &existing),
},
)
.await?;
}
// Handle value update with encryption
let (value, encrypted, encryption_key_hash) = if let Some(new_value) = request.value {
let should_encrypt = request.encrypted.unwrap_or(existing.encrypted);
@@ -265,11 +452,11 @@ pub async fn update_key(
)
})?;
-let encrypted_value = attune_common::crypto::encrypt(&new_value, encryption_key)
+let encrypted_value = attune_common::crypto::encrypt_json(&new_value, encryption_key)
    .map_err(|e| {
        tracing::error!("Failed to encrypt key value: {}", e);
        ApiError::InternalServerError(format!("Failed to encrypt value: {}", e))
    })?;
let key_hash = attune_common::crypto::hash_encryption_key(encryption_key);
@@ -303,7 +490,7 @@ pub async fn update_key(
ApiError::InternalServerError("Encryption key not configured on server".to_string())
})?;
-updated_key.value = attune_common::crypto::decrypt(&updated_key.value, encryption_key)
+updated_key.value = attune_common::crypto::decrypt_json(&updated_key.value, encryption_key)
.map_err(|e| {
tracing::error!("Failed to decrypt updated key '{}': {}", key_ref, e);
ApiError::InternalServerError(format!("Failed to decrypt value: {}", e))
@@ -331,7 +518,7 @@ pub async fn update_key(
security(("bearer_auth" = []))
)]
pub async fn delete_key(
-_user: RequireAuth,
+user: RequireAuth,
State(state): State<Arc<AppState>>,
Path(key_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
@@ -340,6 +527,24 @@ pub async fn delete_key(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Key '{}' not found", key_ref)))?;
if user.0.claims.token_type == TokenType::Access {
let identity_id = user
.0
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
authz
.authorize(
&user.0,
AuthorizationCheck {
resource: Resource::Keys,
action: Action::Delete,
context: key_authorization_context(identity_id, &key),
},
)
.await?;
}
// Delete the key
let deleted = KeyRepository::delete(&state.db, key.id).await?;
@@ -361,3 +566,45 @@ pub fn routes() -> Router<Arc<AppState>> {
get(get_key).put(update_key).delete(delete_key),
)
}
fn key_authorization_context(identity_id: i64, key: &Key) -> AuthorizationContext {
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(key.id);
ctx.target_ref = Some(key.r#ref.clone());
ctx.owner_identity_id = key.owner_identity;
ctx.owner_type = Some(key.owner_type);
ctx.owner_ref = key_owner_ref(
key.owner_type,
key.owner.as_deref(),
key.owner_pack_ref.as_deref(),
key.owner_action_ref.as_deref(),
key.owner_sensor_ref.as_deref(),
);
ctx.encrypted = Some(key.encrypted);
ctx
}
fn requested_key_owner_ref(request: &CreateKeyRequest) -> Option<String> {
key_owner_ref(
request.owner_type,
request.owner.as_deref(),
request.owner_pack_ref.as_deref(),
request.owner_action_ref.as_deref(),
request.owner_sensor_ref.as_deref(),
)
}
fn key_owner_ref(
owner_type: OwnerType,
owner: Option<&str>,
owner_pack_ref: Option<&str>,
owner_action_ref: Option<&str>,
owner_sensor_ref: Option<&str>,
) -> Option<String> {
match owner_type {
OwnerType::Pack => owner_pack_ref.map(str::to_string),
OwnerType::Action => owner_action_ref.map(str::to_string),
OwnerType::Sensor => owner_sensor_ref.map(str::to_string),
_ => owner.map(str::to_string),
}
}
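For reference, a minimal illustration of how this owner-ref resolution dispatches, written the way one might exercise it in a unit test (the refs and values are hypothetical, not from this diff):
// Hypothetical: a Pack-owned key resolves to its pack ref, and the generic
// `owner` field is ignored when a type-specific ref applies.
assert_eq!(
key_owner_ref(OwnerType::Pack, None, Some("net_tools"), None, None),
Some("net_tools".to_string())
);
assert_eq!(
key_owner_ref(OwnerType::Action, Some("ignored"), None, Some("net_tools.scan"), None),
Some("net_tools.scan".to_string())
);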

View File

@@ -1,27 +1,39 @@
//! API route modules
pub mod actions;
pub mod agent;
pub mod analytics;
pub mod artifacts;
pub mod auth;
pub mod events;
pub mod executions;
pub mod health;
pub mod history;
pub mod inquiries;
pub mod keys;
pub mod packs;
pub mod permissions;
pub mod rules;
pub mod runtimes;
pub mod triggers;
pub mod webhooks;
pub mod workflows;
pub use actions::routes as action_routes;
pub use agent::routes as agent_routes;
pub use analytics::routes as analytics_routes;
pub use artifacts::routes as artifact_routes;
pub use auth::routes as auth_routes;
pub use events::routes as event_routes;
pub use executions::routes as execution_routes;
pub use health::routes as health_routes;
pub use history::routes as history_routes;
pub use inquiries::routes as inquiry_routes;
pub use keys::routes as key_routes;
pub use packs::routes as pack_routes;
pub use permissions::routes as permission_routes;
pub use rules::routes as rule_routes;
pub use runtimes::routes as runtime_routes;
pub use triggers::routes as trigger_routes;
pub use webhooks::routes as webhook_routes;
pub use workflows::routes as workflow_routes;
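A minimal sketch of how these per-module routers are typically composed; the real wiring (shared state, middleware, the /api/v1 prefix) lives elsewhere in the crate, so the `merge` usage here is an assumption:
use std::sync::Arc;
use axum::Router;
// Hypothetical composition sketch; AppState is the crate's shared state type.
fn api_router() -> Router<Arc<AppState>> {
Router::new()
.merge(key_routes())
.merge(pack_routes())
.merge(permission_routes())
.merge(rule_routes())
}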

View File

@@ -1,7 +1,7 @@
//! Pack management API routes
use axum::{
extract::{Multipart, Path, Query, State},
http::StatusCode,
response::IntoResponse,
routing::get,
@@ -13,22 +13,26 @@ use validator::Validate;
use attune_common::models::pack_test::PackTestResult;
use attune_common::mq::{MessageEnvelope, MessageType, PackRegisteredPayload};
use attune_common::rbac::{Action, AuthorizationContext, Resource};
use attune_common::repositories::{
pack::{CreatePackInput, UpdatePackInput},
Create, Delete, FindById, FindByRef, PackRepository, PackTestRepository, Pagination, Patch,
Update,
};
use attune_common::workflow::{PackWorkflowService, PackWorkflowServiceConfig};
use crate::{
auth::middleware::RequireAuth,
authz::{AuthorizationCheck, AuthorizationService},
dto::{
common::{PaginatedResponse, PaginationParams},
pack::{
BuildPackEnvsRequest, BuildPackEnvsResponse, CreatePackRequest, DownloadPacksRequest,
DownloadPacksResponse, GetPackDependenciesRequest, GetPackDependenciesResponse,
InstallPackRequest, PackDescriptionPatch, PackInstallResponse, PackResponse,
PackSummary, PackWorkflowSyncResponse, PackWorkflowValidationResponse,
RegisterPackRequest, RegisterPacksRequest, RegisterPacksResponse, UpdatePackRequest,
WorkflowSyncResult,
},
ApiResponse, SuccessResponse,
},
@@ -115,7 +119,7 @@ pub async fn get_pack(
)]
pub async fn create_pack(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Json(request): Json<CreatePackRequest>,
) -> ApiResult<impl IntoResponse> {
// Validate request
@@ -129,6 +133,25 @@ pub async fn create_pack(
)));
}
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_ref = Some(request.r#ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Packs,
action: Action::Create,
context: ctx,
},
)
.await?;
}
// Create pack input
let pack_input = CreatePackInput {
r#ref: request.r#ref,
@@ -202,7 +225,7 @@ pub async fn create_pack(
)]
pub async fn update_pack(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(pack_ref): Path<String>,
Json(request): Json<UpdatePackRequest>,
) -> ApiResult<impl IntoResponse> {
@@ -214,10 +237,33 @@ pub async fn update_pack(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(existing_pack.id);
ctx.target_ref = Some(existing_pack.r#ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Packs,
action: Action::Update,
context: ctx,
},
)
.await?;
}
// Create update input
let update_input = UpdatePackInput {
label: request.label,
description: request.description.map(|patch| match patch {
PackDescriptionPatch::Set(value) => Patch::Set(value),
PackDescriptionPatch::Clear => Patch::Clear,
}),
version: request.version,
conf_schema: request.conf_schema,
config: request.config,
@@ -284,7 +330,7 @@ pub async fn update_pack(
)]
pub async fn delete_pack(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(pack_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
// Check if pack exists
@@ -292,6 +338,26 @@ pub async fn delete_pack(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(pack.id);
ctx.target_ref = Some(pack.r#ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Packs,
action: Action::Delete,
context: ctx,
},
)
.await?;
}
// Delete the pack from the database (cascades to actions, triggers, sensors, rules, etc.
// Foreign keys on execution, event, enforcement, and rule tables use ON DELETE SET NULL
// so historical records are preserved with their text ref fields intact.)
@@ -445,6 +511,206 @@ async fn execute_and_store_pack_tests(
Some(Ok(result))
}
/// Upload and register a pack from a tar.gz archive (multipart/form-data)
///
/// The archive should be a gzipped tar containing the pack directory at its root
/// (i.e. the archive should unpack to files like `pack.yaml`, `actions/`, etc.).
/// The multipart field name must be `pack`.
///
/// Optional form fields:
/// - `force`: `"true"` to overwrite an existing pack with the same ref
/// - `skip_tests`: `"true"` to skip test execution after registration
#[utoipa::path(
post,
path = "/api/v1/packs/upload",
tag = "packs",
request_body(content = String, content_type = "multipart/form-data"),
responses(
(status = 201, description = "Pack uploaded and registered successfully", body = inline(ApiResponse<PackInstallResponse>)),
(status = 400, description = "Invalid archive or missing pack.yaml"),
(status = 409, description = "Pack already exists (use force=true to overwrite)"),
),
security(("bearer_auth" = []))
)]
pub async fn upload_pack(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
mut multipart: Multipart,
) -> ApiResult<impl IntoResponse> {
use std::io::Cursor;
const MAX_PACK_SIZE: usize = 100 * 1024 * 1024; // 100 MB
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Packs,
action: Action::Create,
context: AuthorizationContext::new(identity_id),
},
)
.await?;
}
let mut pack_bytes: Option<Vec<u8>> = None;
let mut force = false;
let mut skip_tests = false;
// Parse multipart fields
while let Some(field) = multipart
.next_field()
.await
.map_err(|e| ApiError::BadRequest(format!("Multipart error: {}", e)))?
{
match field.name() {
Some("pack") => {
let data = field.bytes().await.map_err(|e| {
ApiError::BadRequest(format!("Failed to read pack data: {}", e))
})?;
if data.len() > MAX_PACK_SIZE {
return Err(ApiError::BadRequest(format!(
"Pack archive too large: {} bytes (max {} bytes)",
data.len(),
MAX_PACK_SIZE
)));
}
pack_bytes = Some(data.to_vec());
}
Some("force") => {
let val = field.text().await.map_err(|e| {
ApiError::BadRequest(format!("Failed to read force field: {}", e))
})?;
force = val.trim().eq_ignore_ascii_case("true");
}
Some("skip_tests") => {
let val = field.text().await.map_err(|e| {
ApiError::BadRequest(format!("Failed to read skip_tests field: {}", e))
})?;
skip_tests = val.trim().eq_ignore_ascii_case("true");
}
_ => {
// Consume and ignore unknown fields
let _ = field.bytes().await;
}
}
}
let pack_data = pack_bytes.ok_or_else(|| {
ApiError::BadRequest("Missing required 'pack' field in multipart upload".to_string())
})?;
// Extract the tar.gz archive into a temporary directory
let temp_extract_dir = tempfile::tempdir().map_err(|e| {
ApiError::InternalServerError(format!("Failed to create temp directory: {}", e))
})?;
{
let cursor = Cursor::new(&pack_data[..]);
let gz = flate2::read::GzDecoder::new(cursor);
let mut archive = tar::Archive::new(gz);
archive.unpack(temp_extract_dir.path()).map_err(|e| {
ApiError::BadRequest(format!(
"Failed to extract pack archive (must be a valid .tar.gz): {}",
e
))
})?;
}
// Find pack.yaml — it may be at the root or inside a single subdirectory
// (e.g. when GitHub tarballs add a top-level directory)
let pack_root = find_pack_root(temp_extract_dir.path()).ok_or_else(|| {
ApiError::BadRequest(
"Could not find pack.yaml in the uploaded archive. \
Ensure the archive contains pack.yaml at its root or in a single top-level directory."
.to_string(),
)
})?;
// Read pack ref from pack.yaml to determine the final storage path
let pack_yaml_path = pack_root.join("pack.yaml");
let pack_yaml_content = std::fs::read_to_string(&pack_yaml_path)
.map_err(|e| ApiError::InternalServerError(format!("Failed to read pack.yaml: {}", e)))?;
let pack_yaml: serde_yaml_ng::Value = serde_yaml_ng::from_str(&pack_yaml_content)
.map_err(|e| ApiError::BadRequest(format!("Failed to parse pack.yaml: {}", e)))?;
let pack_ref = pack_yaml
.get("ref")
.and_then(|v| v.as_str())
.ok_or_else(|| ApiError::BadRequest("Missing 'ref' field in pack.yaml".to_string()))?
.to_string();
// Move pack to permanent storage
use attune_common::pack_registry::PackStorage;
let storage = PackStorage::new(&state.config.packs_base_dir);
let final_path = storage
.install_pack(&pack_root, &pack_ref, None)
.map_err(|e| {
ApiError::InternalServerError(format!("Failed to move pack to storage: {}", e))
})?;
tracing::info!(
"Pack '{}' uploaded and stored at {:?}",
pack_ref,
final_path
);
// Register the pack in the database
let pack_id = register_pack_internal(
state.clone(),
user.claims.sub,
final_path.to_string_lossy().to_string(),
force,
skip_tests,
)
.await
.inspect_err(|_e| {
// Clean up permanent storage on failure
let _ = std::fs::remove_dir_all(&final_path);
})?;
// Fetch the registered pack
let pack = PackRepository::find_by_id(&state.db, pack_id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack with ID {} not found", pack_id)))?;
let response = ApiResponse::with_message(
PackInstallResponse {
pack: PackResponse::from(pack),
test_result: None,
tests_skipped: skip_tests,
},
"Pack uploaded and registered successfully",
);
Ok((StatusCode::CREATED, Json(response)))
}
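For orientation, a minimal client-side sketch of this upload endpoint, assuming the reqwest crate with its multipart feature; the field names match the handler above, while the URL, file name, and token are placeholders:
use reqwest::multipart::{Form, Part};
// Hypothetical client sketch, not part of this repository.
async fn upload_pack_archive(bytes: Vec<u8>, token: &str) -> reqwest::Result<()> {
let form = Form::new()
.part("pack", Part::bytes(bytes).file_name("my_pack.tar.gz"))
.text("force", "true") // overwrite an existing pack with the same ref
.text("skip_tests", "false"); // run pack tests after registration
reqwest::Client::new()
.post("http://localhost:8080/api/v1/packs/upload")
.bearer_auth(token)
.multipart(form)
.send()
.await?
.error_for_status()?;
Ok(())
}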
/// Walk the extracted directory and find the directory that contains `pack.yaml`.
/// Returns the path of the directory containing `pack.yaml`, or `None` if not found.
fn find_pack_root(base: &std::path::Path) -> Option<PathBuf> {
// Check root first
if base.join("pack.yaml").exists() {
return Some(base.to_path_buf());
}
// Check one level deep (e.g. GitHub tarballs: repo-main/pack.yaml)
if let Ok(entries) = std::fs::read_dir(base) {
for entry in entries.flatten() {
let path = entry.path();
if path.is_dir() && path.join("pack.yaml").exists() {
return Some(path);
}
}
}
None
}
/// Register a pack from local filesystem
#[utoipa::path(
post,
@@ -466,6 +732,23 @@ pub async fn register_pack(
// Validate request
request.validate()?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Packs,
action: Action::Create,
context: AuthorizationContext::new(identity_id),
},
)
.await?;
}
// Call internal registration logic
let pack_id = register_pack_internal(
state.clone(),
@@ -545,70 +828,103 @@ async fn register_pack_internal(
.and_then(|v| v.as_str())
.map(|s| s.to_string());
// Extract common metadata fields used for both create and update
let conf_schema = pack_yaml
.get("config_schema")
.and_then(|v| serde_json::to_value(v).ok())
.unwrap_or_else(|| serde_json::json!({}));
let meta = pack_yaml
.get("metadata")
.and_then(|v| serde_json::to_value(v).ok())
.unwrap_or_else(|| serde_json::json!({}));
let tags: Vec<String> = pack_yaml
.get("keywords")
.and_then(|v| v.as_sequence())
.map(|seq| {
seq.iter()
.filter_map(|v| v.as_str().map(|s| s.to_string()))
.collect()
})
.unwrap_or_default();
let runtime_deps: Vec<String> = pack_yaml
.get("runtime_deps")
.and_then(|v| v.as_sequence())
.map(|seq| {
seq.iter()
.filter_map(|v| v.as_str().map(|s| s.to_string()))
.collect()
})
.unwrap_or_default();
let dependencies: Vec<String> = pack_yaml
.get("dependencies")
.and_then(|v| v.as_sequence())
.map(|seq| {
seq.iter()
.filter_map(|v| v.as_str().map(|s| s.to_string()))
.collect()
})
.unwrap_or_default();
// Check if pack already exists — update in place to preserve IDs
let existing_pack = PackRepository::find_by_ref(&state.db, &pack_ref).await?;
let is_new_pack;
let pack = if let Some(existing) = existing_pack {
if !force {
return Err(ApiError::Conflict(format!(
"Pack '{}' already exists. Use force=true to reinstall.",
pack_ref
)));
}
// Update existing pack in place — preserves pack ID and all child entity IDs
let update_input = UpdatePackInput {
label: Some(label),
description: Some(match description {
Some(value) => Patch::Set(value),
None => Patch::Clear,
}),
version: Some(version.clone()),
conf_schema: Some(conf_schema),
config: None, // preserve user-set config
meta: Some(meta),
tags: Some(tags),
runtime_deps: Some(runtime_deps),
dependencies: Some(dependencies),
is_standard: None,
installers: None,
};
let updated = PackRepository::update(&state.db, existing.id, update_input).await?;
tracing::info!(
"Updated existing pack '{}' (ID: {}) in place",
pack_ref,
updated.id
);
is_new_pack = false;
updated
} else {
// Create new pack
let pack_input = CreatePackInput {
r#ref: pack_ref.clone(),
label,
description,
version: version.clone(),
conf_schema,
config: serde_json::json!({}),
meta,
tags,
runtime_deps,
dependencies,
is_standard: false,
installers: serde_json::json!({}),
};
is_new_pack = true;
PackRepository::create(&state.db, pack_input).await?
};
// Auto-sync workflows after pack creation
let packs_base_dir = PathBuf::from(&state.config.packs_base_dir);
let service_config = PackWorkflowServiceConfig {
@@ -648,14 +964,18 @@ async fn register_pack_internal(
match component_loader.load_all(&pack_path).await {
Ok(load_result) => {
tracing::info!(
"Pack '{}' components loaded: {} runtimes, {} triggers, {} actions, {} sensors ({} skipped, {} warnings)",
"Pack '{}' components loaded: {} created, {} updated, {} skipped, {} removed, {} warnings \
(runtimes: {}/{}, triggers: {}/{}, actions: {}/{}, sensors: {}/{})",
pack.r#ref,
load_result.runtimes_loaded,
load_result.triggers_loaded,
load_result.actions_loaded,
load_result.sensors_loaded,
load_result.total_loaded(),
load_result.total_updated(),
load_result.total_skipped(),
load_result.warnings.len()
load_result.removed,
load_result.warnings.len(),
load_result.runtimes_loaded, load_result.runtimes_updated,
load_result.triggers_loaded, load_result.triggers_updated,
load_result.actions_loaded, load_result.actions_updated,
load_result.sensors_loaded, load_result.sensors_updated,
);
for warning in &load_result.warnings {
tracing::warn!("Pack component warning: {}", warning);
@@ -671,6 +991,10 @@ async fn register_pack_internal(
}
}
// Since entities are now updated in place (IDs preserved), ad-hoc rules
// and cross-pack FK references survive reinstallation automatically.
// No need to save/restore rules or re-link FKs.
// Set up runtime environments for the pack's actions.
// This creates virtualenvs, installs dependencies, etc. based on each
// runtime's execution_config from the database.
@@ -725,58 +1049,58 @@ async fn register_pack_internal(
// a best-effort optimisation for non-Docker (bare-metal) setups
// where the API host has the interpreter available.
if let Some(ref env_cfg) = exec_config.environment {
if env_cfg.env_type != "none" {
if !env_dir.exists() && !env_cfg.create_command.is_empty() {
// Ensure parent directories exist
if let Some(parent) = env_dir.parent() {
let _ = std::fs::create_dir_all(parent);
}
if env_cfg.env_type != "none"
&& !env_dir.exists()
&& !env_cfg.create_command.is_empty()
{
// Ensure parent directories exist
if let Some(parent) = env_dir.parent() {
let _ = std::fs::create_dir_all(parent);
}
let vars = exec_config
.build_template_vars_with_env(&pack_path, Some(&env_dir));
let resolved_cmd = attune_common::models::runtime::RuntimeExecutionConfig::resolve_command(
let vars = exec_config
.build_template_vars_with_env(&pack_path, Some(&env_dir));
let resolved_cmd = attune_common::models::runtime::RuntimeExecutionConfig::resolve_command(
&env_cfg.create_command,
&vars,
);
tracing::info!(
"Attempting to create {} environment (best-effort) at {}: {:?}",
env_cfg.env_type,
env_dir.display(),
resolved_cmd
);
tracing::info!(
"Attempting to create {} environment (best-effort) at {}: {:?}",
env_cfg.env_type,
env_dir.display(),
resolved_cmd
);
if let Some((program, args)) = resolved_cmd.split_first() {
match tokio::process::Command::new(program)
.args(args)
.current_dir(&pack_path)
.output()
.await
{
Ok(output) if output.status.success() => {
tracing::info!(
"Created {} environment at {}",
env_cfg.env_type,
env_dir.display()
);
}
Ok(output) => {
let stderr =
String::from_utf8_lossy(&output.stderr);
tracing::info!(
if let Some((program, args)) = resolved_cmd.split_first() {
match tokio::process::Command::new(program)
.args(args)
.current_dir(&pack_path)
.output()
.await
{
Ok(output) if output.status.success() => {
tracing::info!(
"Created {} environment at {}",
env_cfg.env_type,
env_dir.display()
);
}
Ok(output) => {
let stderr = String::from_utf8_lossy(&output.stderr);
tracing::info!(
"Environment creation skipped in API service (exit {}): {}. \
The worker will create it on first execution.",
output.status.code().unwrap_or(-1),
stderr.trim()
);
}
Err(e) => {
tracing::info!(
}
Err(e) => {
tracing::info!(
"Runtime '{}' not available in API service: {}. \
The worker will create the environment on first execution.",
program, e
);
}
}
}
}
@@ -880,11 +1204,12 @@ async fn register_pack_internal(
let test_passed = result.status == "passed";
if !test_passed && !force {
// Tests failed and force is not set — only delete if we just created this pack.
// If we updated an existing pack, deleting would destroy the original.
if is_new_pack {
let _ = PackRepository::delete(&state.db, pack.id).await;
}
return Err(ApiError::BadRequest("Pack registration failed: tests did not pass. Use force=true to register anyway.".to_string()));
}
if !test_passed && force {
@@ -898,7 +1223,9 @@ async fn register_pack_internal(
tracing::warn!("Failed to execute tests for pack '{}': {}", pack.r#ref, e);
// If tests can't be executed and force is not set, fail the registration
if !force {
if is_new_pack {
let _ = PackRepository::delete(&state.db, pack.id).await;
}
return Err(ApiError::BadRequest(format!(
"Pack registration failed: could not execute tests. Error: {}. Use force=true to register anyway.",
e
@@ -916,7 +1243,7 @@ async fn register_pack_internal(
// Publish pack.registered event so workers can proactively set up
// runtime environments (virtualenvs, node_modules, etc.).
if let Some(publisher) = state.get_publisher().await {
let runtime_names = attune_common::pack_environment::collect_runtime_names_for_pack(
&state.db, pack.id, &pack_path,
)
@@ -964,7 +1291,6 @@ async fn register_pack_internal(
responses(
(status = 201, description = "Pack installed successfully", body = ApiResponse<PackInstallResponse>),
(status = 400, description = "Invalid request or tests failed", body = ApiResponse<String>),
(status = 409, description = "Pack already exists", body = ApiResponse<String>),
(status = 501, description = "Not implemented yet", body = ApiResponse<String>),
),
security(("bearer_auth" = []))
@@ -984,6 +1310,23 @@ pub async fn install_pack(
tracing::info!("Installing pack from source: {}", request.source);
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Packs,
action: Action::Create,
context: AuthorizationContext::new(identity_id),
},
)
.await?;
}
// Get user ID early to avoid borrow issues
let user_id = user.identity_id().ok();
let user_sub = user.claims.sub.clone();
@@ -1122,19 +1465,20 @@ pub async fn install_pack(
tracing::info!("Pack moved to permanent storage: {:?}", final_path);
// Register the pack in database (from permanent storage location).
// Remote installs always force-overwrite: if you're pulling from a remote,
// the intent is to get that pack installed regardless of local state.
let pack_id = register_pack_internal(
state.clone(),
user_sub,
final_path.to_string_lossy().to_string(),
true, // always force for remote installs
request.skip_tests,
)
.await
.inspect_err(|_e| {
// Clean up the permanent storage if registration fails
let _ = std::fs::remove_dir_all(&final_path);
})?;
// Fetch the registered pack
@@ -2023,6 +2367,23 @@ pub async fn register_packs_batch(
RequireAuth(user): RequireAuth,
Json(request): Json<RegisterPacksRequest>,
) -> ApiResult<Json<ApiResponse<RegisterPacksResponse>>> {
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Packs,
action: Action::Create,
context: AuthorizationContext::new(identity_id),
},
)
.await?;
}
let start = std::time::Instant::now();
let mut registered = Vec::new();
let mut failed = Vec::new();
@@ -2105,6 +2466,7 @@ pub fn routes() -> Router<Arc<AppState>> {
axum::routing::post(register_packs_batch),
)
.route("/packs/install", axum::routing::post(install_pack))
.route("/packs/upload", axum::routing::post(upload_pack))
.route("/packs/download", axum::routing::post(download_packs))
.route(
"/packs/dependencies",

View File

@@ -0,0 +1,877 @@
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::IntoResponse,
routing::{delete, get, post},
Json, Router,
};
use std::sync::Arc;
use validator::Validate;
use attune_common::{
models::identity::{Identity, IdentityRoleAssignment},
rbac::{Action, AuthorizationContext, Resource},
repositories::{
identity::{
CreateIdentityInput, CreateIdentityRoleAssignmentInput,
CreatePermissionAssignmentInput, CreatePermissionSetRoleAssignmentInput,
IdentityRepository, IdentityRoleAssignmentRepository, PermissionAssignmentRepository,
PermissionSetRepository, PermissionSetRoleAssignmentRepository, UpdateIdentityInput,
},
Create, Delete, FindById, FindByRef, List, Update,
},
};
use crate::{
auth::hash_password,
auth::middleware::RequireAuth,
authz::{AuthorizationCheck, AuthorizationService},
dto::{
common::{PaginatedResponse, PaginationParams},
ApiResponse, CreateIdentityRequest, CreateIdentityRoleAssignmentRequest,
CreatePermissionAssignmentRequest, CreatePermissionSetRoleAssignmentRequest,
IdentityResponse, IdentityRoleAssignmentResponse, IdentitySummary,
PermissionAssignmentResponse, PermissionSetQueryParams,
PermissionSetRoleAssignmentResponse, PermissionSetSummary, SuccessResponse,
UpdateIdentityRequest,
},
middleware::{ApiError, ApiResult},
state::AppState,
};
#[utoipa::path(
get,
path = "/api/v1/identities",
tag = "permissions",
params(PaginationParams),
responses(
(status = 200, description = "List identities", body = PaginatedResponse<IdentitySummary>)
),
security(("bearer_auth" = []))
)]
pub async fn list_identities(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Query(query): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Identities, Action::Read).await?;
let identities = IdentityRepository::list(&state.db).await?;
let total = identities.len() as u64;
let start = query.offset() as usize;
let end = (start + query.limit() as usize).min(identities.len());
let page_items = if start >= identities.len() {
Vec::new()
} else {
identities[start..end].to_vec()
};
let mut summaries = Vec::with_capacity(page_items.len());
for identity in page_items {
let role_assignments =
IdentityRoleAssignmentRepository::find_by_identity(&state.db, identity.id).await?;
let roles = role_assignments.into_iter().map(|ra| ra.role).collect();
let mut summary = IdentitySummary::from(identity);
summary.roles = roles;
summaries.push(summary);
}
Ok((
StatusCode::OK,
Json(PaginatedResponse::new(summaries, &query, total)),
))
}
#[utoipa::path(
get,
path = "/api/v1/identities/{id}",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity ID")
),
responses(
(status = 200, description = "Identity details", body = inline(ApiResponse<IdentityResponse>)),
(status = 404, description = "Identity not found")
),
security(("bearer_auth" = []))
)]
pub async fn get_identity(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(identity_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Identities, Action::Read).await?;
let identity = IdentityRepository::find_by_id(&state.db, identity_id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Identity '{}' not found", identity_id)))?;
let roles = IdentityRoleAssignmentRepository::find_by_identity(&state.db, identity_id).await?;
let assignments =
PermissionAssignmentRepository::find_by_identity(&state.db, identity_id).await?;
let permission_sets = PermissionSetRepository::find_by_identity(&state.db, identity_id).await?;
let permission_set_refs = permission_sets
.into_iter()
.map(|ps| (ps.id, ps.r#ref))
.collect::<std::collections::HashMap<_, _>>();
Ok((
StatusCode::OK,
Json(ApiResponse::new(IdentityResponse {
id: identity.id,
login: identity.login,
display_name: identity.display_name,
frozen: identity.frozen,
attributes: identity.attributes,
roles: roles
.into_iter()
.map(IdentityRoleAssignmentResponse::from)
.collect(),
direct_permissions: assignments
.into_iter()
.filter_map(|assignment| {
permission_set_refs.get(&assignment.permset).cloned().map(
|permission_set_ref| PermissionAssignmentResponse {
id: assignment.id,
identity_id: assignment.identity,
permission_set_id: assignment.permset,
permission_set_ref,
created: assignment.created,
},
)
})
.collect(),
})),
))
}
#[utoipa::path(
post,
path = "/api/v1/identities",
tag = "permissions",
request_body = CreateIdentityRequest,
responses(
(status = 201, description = "Identity created", body = inline(ApiResponse<IdentityResponse>)),
(status = 409, description = "Identity already exists")
),
security(("bearer_auth" = []))
)]
pub async fn create_identity(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Json(request): Json<CreateIdentityRequest>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Identities, Action::Create).await?;
request.validate()?;
let password_hash = match request.password {
Some(password) => Some(hash_password(&password)?),
None => None,
};
let identity = IdentityRepository::create(
&state.db,
CreateIdentityInput {
login: request.login,
display_name: request.display_name,
password_hash,
attributes: request.attributes,
},
)
.await?;
Ok((
StatusCode::CREATED,
Json(ApiResponse::new(IdentityResponse::from(identity))),
))
}
#[utoipa::path(
put,
path = "/api/v1/identities/{id}",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity ID")
),
request_body = UpdateIdentityRequest,
responses(
(status = 200, description = "Identity updated", body = inline(ApiResponse<IdentityResponse>)),
(status = 404, description = "Identity not found")
),
security(("bearer_auth" = []))
)]
pub async fn update_identity(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(identity_id): Path<i64>,
Json(request): Json<UpdateIdentityRequest>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Identities, Action::Update).await?;
IdentityRepository::find_by_id(&state.db, identity_id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Identity '{}' not found", identity_id)))?;
let password_hash = match request.password {
Some(password) => Some(hash_password(&password)?),
None => None,
};
let identity = IdentityRepository::update(
&state.db,
identity_id,
UpdateIdentityInput {
display_name: request.display_name,
password_hash,
attributes: request.attributes,
frozen: request.frozen,
},
)
.await?;
Ok((
StatusCode::OK,
Json(ApiResponse::new(IdentityResponse::from(identity))),
))
}
#[utoipa::path(
delete,
path = "/api/v1/identities/{id}",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity ID")
),
responses(
(status = 200, description = "Identity deleted", body = inline(ApiResponse<SuccessResponse>)),
(status = 404, description = "Identity not found")
),
security(("bearer_auth" = []))
)]
pub async fn delete_identity(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(identity_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Identities, Action::Delete).await?;
let caller_identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
if caller_identity_id == identity_id {
return Err(ApiError::BadRequest(
"Refusing to delete the currently authenticated identity".to_string(),
));
}
let deleted = IdentityRepository::delete(&state.db, identity_id).await?;
if !deleted {
return Err(ApiError::NotFound(format!(
"Identity '{}' not found",
identity_id
)));
}
Ok((
StatusCode::OK,
Json(ApiResponse::new(SuccessResponse::new(
"Identity deleted successfully",
))),
))
}
#[utoipa::path(
get,
path = "/api/v1/permissions/sets",
tag = "permissions",
params(PermissionSetQueryParams),
responses(
(status = 200, description = "List permission sets", body = Vec<PermissionSetSummary>)
),
security(("bearer_auth" = []))
)]
pub async fn list_permission_sets(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Query(query): Query<PermissionSetQueryParams>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Read).await?;
let mut permission_sets = PermissionSetRepository::list(&state.db).await?;
if let Some(pack_ref) = &query.pack_ref {
permission_sets.retain(|ps| ps.pack_ref.as_deref() == Some(pack_ref.as_str()));
}
let mut response = Vec::with_capacity(permission_sets.len());
for permission_set in permission_sets {
let permission_set_ref = permission_set.r#ref.clone();
let roles = PermissionSetRoleAssignmentRepository::find_by_permission_set(
&state.db,
permission_set.id,
)
.await?;
response.push(PermissionSetSummary {
id: permission_set.id,
r#ref: permission_set.r#ref,
pack_ref: permission_set.pack_ref,
label: permission_set.label,
description: permission_set.description,
grants: permission_set.grants,
roles: roles
.into_iter()
.map(|assignment| PermissionSetRoleAssignmentResponse {
id: assignment.id,
permission_set_id: assignment.permset,
permission_set_ref: Some(permission_set_ref.clone()),
role: assignment.role,
created: assignment.created,
})
.collect(),
});
}
Ok((StatusCode::OK, Json(response)))
}
#[utoipa::path(
get,
path = "/api/v1/identities/{id}/permissions",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity ID")
),
responses(
(status = 200, description = "List permission assignments for an identity", body = Vec<PermissionAssignmentResponse>),
(status = 404, description = "Identity not found")
),
security(("bearer_auth" = []))
)]
pub async fn list_identity_permissions(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(identity_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Read).await?;
IdentityRepository::find_by_id(&state.db, identity_id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Identity '{}' not found", identity_id)))?;
let assignments =
PermissionAssignmentRepository::find_by_identity(&state.db, identity_id).await?;
let permission_sets = PermissionSetRepository::find_by_identity(&state.db, identity_id).await?;
let permission_set_refs = permission_sets
.into_iter()
.map(|ps| (ps.id, ps.r#ref))
.collect::<std::collections::HashMap<_, _>>();
let response: Vec<PermissionAssignmentResponse> = assignments
.into_iter()
.filter_map(|assignment| {
permission_set_refs
.get(&assignment.permset)
.cloned()
.map(|permission_set_ref| PermissionAssignmentResponse {
id: assignment.id,
identity_id: assignment.identity,
permission_set_id: assignment.permset,
permission_set_ref,
created: assignment.created,
})
})
.collect();
Ok((StatusCode::OK, Json(response)))
}
#[utoipa::path(
post,
path = "/api/v1/permissions/assignments",
tag = "permissions",
request_body = CreatePermissionAssignmentRequest,
responses(
(status = 201, description = "Permission assignment created", body = inline(ApiResponse<PermissionAssignmentResponse>)),
(status = 404, description = "Identity or permission set not found"),
(status = 409, description = "Assignment already exists")
),
security(("bearer_auth" = []))
)]
pub async fn create_permission_assignment(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Json(request): Json<CreatePermissionAssignmentRequest>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Manage).await?;
let identity = resolve_identity(&state, &request).await?;
let permission_set =
PermissionSetRepository::find_by_ref(&state.db, &request.permission_set_ref)
.await?
.ok_or_else(|| {
ApiError::NotFound(format!(
"Permission set '{}' not found",
request.permission_set_ref
))
})?;
let assignment = PermissionAssignmentRepository::create(
&state.db,
CreatePermissionAssignmentInput {
identity: identity.id,
permset: permission_set.id,
},
)
.await?;
let response = PermissionAssignmentResponse {
id: assignment.id,
identity_id: assignment.identity,
permission_set_id: assignment.permset,
permission_set_ref: permission_set.r#ref,
created: assignment.created,
};
Ok((StatusCode::CREATED, Json(ApiResponse::new(response))))
}
#[utoipa::path(
delete,
path = "/api/v1/permissions/assignments/{id}",
tag = "permissions",
params(
("id" = i64, Path, description = "Permission assignment ID")
),
responses(
(status = 200, description = "Permission assignment deleted", body = inline(ApiResponse<SuccessResponse>)),
(status = 404, description = "Assignment not found")
),
security(("bearer_auth" = []))
)]
pub async fn delete_permission_assignment(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(assignment_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Manage).await?;
let existing = PermissionAssignmentRepository::find_by_id(&state.db, assignment_id)
.await?
.ok_or_else(|| {
ApiError::NotFound(format!(
"Permission assignment '{}' not found",
assignment_id
))
})?;
let deleted = PermissionAssignmentRepository::delete(&state.db, existing.id).await?;
if !deleted {
return Err(ApiError::NotFound(format!(
"Permission assignment '{}' not found",
assignment_id
)));
}
Ok((
StatusCode::OK,
Json(ApiResponse::new(SuccessResponse::new(
"Permission assignment deleted successfully",
))),
))
}
#[utoipa::path(
post,
path = "/api/v1/identities/{id}/roles",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity ID")
),
request_body = CreateIdentityRoleAssignmentRequest,
responses(
(status = 201, description = "Identity role assignment created", body = inline(ApiResponse<IdentityRoleAssignmentResponse>)),
(status = 404, description = "Identity not found")
),
security(("bearer_auth" = []))
)]
pub async fn create_identity_role_assignment(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(identity_id): Path<i64>,
Json(request): Json<CreateIdentityRoleAssignmentRequest>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Manage).await?;
request.validate()?;
IdentityRepository::find_by_id(&state.db, identity_id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Identity '{}' not found", identity_id)))?;
let assignment = IdentityRoleAssignmentRepository::create(
&state.db,
CreateIdentityRoleAssignmentInput {
identity: identity_id,
role: request.role,
source: "manual".to_string(),
managed: false,
},
)
.await?;
Ok((
StatusCode::CREATED,
Json(ApiResponse::new(IdentityRoleAssignmentResponse::from(
assignment,
))),
))
}
#[utoipa::path(
delete,
path = "/api/v1/identities/roles/{id}",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity role assignment ID")
),
responses(
(status = 200, description = "Identity role assignment deleted", body = inline(ApiResponse<SuccessResponse>)),
(status = 404, description = "Identity role assignment not found")
),
security(("bearer_auth" = []))
)]
pub async fn delete_identity_role_assignment(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(assignment_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Manage).await?;
let assignment = IdentityRoleAssignmentRepository::find_by_id(&state.db, assignment_id)
.await?
.ok_or_else(|| {
ApiError::NotFound(format!(
"Identity role assignment '{}' not found",
assignment_id
))
})?;
if assignment.managed {
return Err(ApiError::BadRequest(
"Managed role assignments must be updated through the identity provider sync"
.to_string(),
));
}
IdentityRoleAssignmentRepository::delete(&state.db, assignment_id).await?;
Ok((
StatusCode::OK,
Json(ApiResponse::new(SuccessResponse::new(
"Identity role assignment deleted successfully",
))),
))
}
#[utoipa::path(
post,
path = "/api/v1/permissions/sets/{id}/roles",
tag = "permissions",
params(
("id" = i64, Path, description = "Permission set ID")
),
request_body = CreatePermissionSetRoleAssignmentRequest,
responses(
(status = 201, description = "Permission set role assignment created", body = inline(ApiResponse<PermissionSetRoleAssignmentResponse>)),
(status = 404, description = "Permission set not found")
),
security(("bearer_auth" = []))
)]
pub async fn create_permission_set_role_assignment(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(permission_set_id): Path<i64>,
Json(request): Json<CreatePermissionSetRoleAssignmentRequest>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Manage).await?;
request.validate()?;
let permission_set = PermissionSetRepository::find_by_id(&state.db, permission_set_id)
.await?
.ok_or_else(|| {
ApiError::NotFound(format!("Permission set '{}' not found", permission_set_id))
})?;
let assignment = PermissionSetRoleAssignmentRepository::create(
&state.db,
CreatePermissionSetRoleAssignmentInput {
permset: permission_set_id,
role: request.role,
},
)
.await?;
Ok((
StatusCode::CREATED,
Json(ApiResponse::new(PermissionSetRoleAssignmentResponse {
id: assignment.id,
permission_set_id: assignment.permset,
permission_set_ref: Some(permission_set.r#ref),
role: assignment.role,
created: assignment.created,
})),
))
}
#[utoipa::path(
delete,
path = "/api/v1/permissions/sets/roles/{id}",
tag = "permissions",
params(
("id" = i64, Path, description = "Permission set role assignment ID")
),
responses(
(status = 200, description = "Permission set role assignment deleted", body = inline(ApiResponse<SuccessResponse>)),
(status = 404, description = "Permission set role assignment not found")
),
security(("bearer_auth" = []))
)]
pub async fn delete_permission_set_role_assignment(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(assignment_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(&state, &user, Resource::Permissions, Action::Manage).await?;
PermissionSetRoleAssignmentRepository::find_by_id(&state.db, assignment_id)
.await?
.ok_or_else(|| {
ApiError::NotFound(format!(
"Permission set role assignment '{}' not found",
assignment_id
))
})?;
PermissionSetRoleAssignmentRepository::delete(&state.db, assignment_id).await?;
Ok((
StatusCode::OK,
Json(ApiResponse::new(SuccessResponse::new(
"Permission set role assignment deleted successfully",
))),
))
}
#[utoipa::path(
post,
path = "/api/v1/identities/{id}/freeze",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity ID")
),
responses(
(status = 200, description = "Identity frozen", body = inline(ApiResponse<SuccessResponse>)),
(status = 404, description = "Identity not found")
),
security(("bearer_auth" = []))
)]
pub async fn freeze_identity(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(identity_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
set_identity_frozen(&state, &user, identity_id, true).await
}
#[utoipa::path(
post,
path = "/api/v1/identities/{id}/unfreeze",
tag = "permissions",
params(
("id" = i64, Path, description = "Identity ID")
),
responses(
(status = 200, description = "Identity unfrozen", body = inline(ApiResponse<SuccessResponse>)),
(status = 404, description = "Identity not found")
),
security(("bearer_auth" = []))
)]
pub async fn unfreeze_identity(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Path(identity_id): Path<i64>,
) -> ApiResult<impl IntoResponse> {
set_identity_frozen(&state, &user, identity_id, false).await
}
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
.route("/identities", get(list_identities).post(create_identity))
.route(
"/identities/{id}",
get(get_identity)
.put(update_identity)
.delete(delete_identity),
)
.route(
"/identities/{id}/roles",
post(create_identity_role_assignment),
)
.route(
"/identities/{id}/permissions",
get(list_identity_permissions),
)
.route("/identities/{id}/freeze", post(freeze_identity))
.route("/identities/{id}/unfreeze", post(unfreeze_identity))
.route(
"/identities/roles/{id}",
delete(delete_identity_role_assignment),
)
.route("/permissions/sets", get(list_permission_sets))
.route(
"/permissions/sets/{id}/roles",
post(create_permission_set_role_assignment),
)
.route(
"/permissions/sets/roles/{id}",
delete(delete_permission_set_role_assignment),
)
.route(
"/permissions/assignments",
post(create_permission_assignment),
)
.route(
"/permissions/assignments/{id}",
delete(delete_permission_assignment),
)
}
async fn authorize_permissions(
state: &Arc<AppState>,
user: &crate::auth::middleware::AuthenticatedUser,
resource: Resource,
action: Action,
) -> ApiResult<()> {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
authz
.authorize(
user,
AuthorizationCheck {
resource,
action,
context: AuthorizationContext::new(identity_id),
},
)
.await
}
async fn resolve_identity(
state: &Arc<AppState>,
request: &CreatePermissionAssignmentRequest,
) -> ApiResult<Identity> {
match (request.identity_id, request.identity_login.as_deref()) {
(Some(identity_id), None) => IdentityRepository::find_by_id(&state.db, identity_id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Identity '{}' not found", identity_id))),
(None, Some(identity_login)) => {
IdentityRepository::find_by_login(&state.db, identity_login)
.await?
.ok_or_else(|| {
ApiError::NotFound(format!("Identity '{}' not found", identity_login))
})
}
(Some(_), Some(_)) => Err(ApiError::BadRequest(
"Provide either identity_id or identity_login, not both".to_string(),
)),
(None, None) => Err(ApiError::BadRequest(
"Either identity_id or identity_login is required".to_string(),
)),
}
}
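A quick illustration of the either-or contract this resolver enforces; the request bodies below are hypothetical examples, not fixtures from this diff:
// Hypothetical bodies for POST /api/v1/permissions/assignments.
let by_id = serde_json::json!({ "identity_id": 42, "permission_set_ref": "core.admin" });
let by_login = serde_json::json!({ "identity_login": "alice", "permission_set_ref": "core.admin" });
// Supplying both identity fields, or neither, is rejected by resolve_identity
// with 400 Bad Request.
let invalid = serde_json::json!({ "identity_id": 42, "identity_login": "alice", "permission_set_ref": "core.admin" });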
impl From<Identity> for IdentitySummary {
fn from(value: Identity) -> Self {
Self {
id: value.id,
login: value.login,
display_name: value.display_name,
frozen: value.frozen,
attributes: value.attributes,
roles: Vec::new(),
}
}
}
impl From<IdentityRoleAssignment> for IdentityRoleAssignmentResponse {
fn from(value: IdentityRoleAssignment) -> Self {
Self {
id: value.id,
identity_id: value.identity,
role: value.role,
source: value.source,
managed: value.managed,
created: value.created,
updated: value.updated,
}
}
}
impl From<Identity> for IdentityResponse {
fn from(value: Identity) -> Self {
Self {
id: value.id,
login: value.login,
display_name: value.display_name,
frozen: value.frozen,
attributes: value.attributes,
roles: Vec::new(),
direct_permissions: Vec::new(),
}
}
}
async fn set_identity_frozen(
state: &Arc<AppState>,
user: &crate::auth::middleware::AuthenticatedUser,
identity_id: i64,
frozen: bool,
) -> ApiResult<impl IntoResponse> {
authorize_permissions(state, user, Resource::Identities, Action::Update).await?;
let caller_identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
if caller_identity_id == identity_id && frozen {
return Err(ApiError::BadRequest(
"Refusing to freeze the currently authenticated identity".to_string(),
));
}
IdentityRepository::find_by_id(&state.db, identity_id)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Identity '{}' not found", identity_id)))?;
IdentityRepository::update(
&state.db,
identity_id,
UpdateIdentityInput {
display_name: None,
password_hash: None,
attributes: None,
frozen: Some(frozen),
},
)
.await?;
let message = if frozen {
"Identity frozen successfully"
} else {
"Identity unfrozen successfully"
};
Ok((
StatusCode::OK,
Json(ApiResponse::new(SuccessResponse::new(message))),
))
}
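Together with delete_identity above, these guards keep an operator from locking themselves out. A sketch of the expected outcomes, assuming a caller whose identity ID is 7 (the IDs are illustrative):
// caller 7 freezes identity 9   -> 200 OK, identity 9 frozen
// caller 7 freezes identity 7   -> 400 Bad Request (self-freeze refused)
// caller 7 unfreezes identity 7 -> 200 OK (self-unfreeze is allowed)
// caller 7 deletes identity 7   -> 400 Bad Request (self-delete refused)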

View File

@@ -14,16 +14,18 @@ use validator::Validate;
use attune_common::mq::{
MessageEnvelope, MessageType, RuleCreatedPayload, RuleDisabledPayload, RuleEnabledPayload,
};
use attune_common::rbac::{Action, AuthorizationContext, Resource};
use attune_common::repositories::{
action::ActionRepository,
pack::PackRepository,
rule::{CreateRuleInput, RuleRepository, UpdateRuleInput},
rule::{CreateRuleInput, RuleRepository, RuleSearchFilters, UpdateRuleInput},
trigger::TriggerRepository,
Create, Delete, FindByRef, Patch, Update,
};
use crate::{
auth::middleware::RequireAuth,
authz::{AuthorizationCheck, AuthorizationService},
dto::{
common::{PaginatedResponse, PaginationParams},
rule::{CreateRuleRequest, RuleResponse, RuleSummary, UpdateRuleRequest},
@@ -50,21 +52,21 @@ pub async fn list_rules(
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
let filters = RuleSearchFilters {
pack: None,
action: None,
trigger: None,
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
let result = RuleRepository::list_search(&state.db, &filters).await?;
let paginated_rules: Vec<RuleSummary> =
result.rows.into_iter().map(RuleSummary::from).collect();
let response = PaginatedResponse::new(paginated_rules, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
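The same filter/search pattern repeats in the handlers below. A minimal sketch of the contract the repository is assumed to satisfy; the field names mirror what the handlers use, but the shape is illustrative:
// Illustrative shape only: the real RuleSearchFilters and list_search live in
// attune_common::repositories::rule and push LIMIT/OFFSET into the query,
// so handlers no longer page an in-memory Vec of every rule.
struct SearchResult<T> {
rows: Vec<T>, // the requested page
total: u64,   // total matching rows, fed to PaginatedResponse
}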
@@ -85,21 +87,21 @@ pub async fn list_enabled_rules(
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
let filters = RuleSearchFilters {
pack: None,
action: None,
trigger: None,
enabled: Some(true),
limit: pagination.limit(),
offset: pagination.offset(),
};
let result = RuleRepository::list_search(&state.db, &filters).await?;
let paginated_rules: Vec<RuleSummary> =
result.rows.into_iter().map(RuleSummary::from).collect();
let response = PaginatedResponse::new(paginated_rules, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -130,21 +132,21 @@ pub async fn list_rules_by_pack(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
let filters = RuleSearchFilters {
pack: Some(pack.id),
action: None,
trigger: None,
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
let result = RuleRepository::list_search(&state.db, &filters).await?;
let paginated_rules: Vec<RuleSummary> =
result.rows.into_iter().map(RuleSummary::from).collect();
let response = PaginatedResponse::new(paginated_rules, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -175,21 +177,21 @@ pub async fn list_rules_by_action(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Action '{}' not found", action_ref)))?;
let filters = RuleSearchFilters {
pack: None,
action: Some(action.id),
trigger: None,
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
let result = RuleRepository::list_search(&state.db, &filters).await?;
let paginated_rules: Vec<RuleSummary> =
result.rows.into_iter().map(RuleSummary::from).collect();
let response = PaginatedResponse::new(paginated_rules, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -220,21 +222,21 @@ pub async fn list_rules_by_trigger(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Trigger '{}' not found", trigger_ref)))?;
let filters = RuleSearchFilters {
pack: None,
action: None,
trigger: Some(trigger.id),
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
let result = RuleRepository::list_search(&state.db, &filters).await?;
let paginated_rules: Vec<RuleSummary> =
result.rows.into_iter().map(RuleSummary::from).collect();
let response = PaginatedResponse::new(paginated_rules, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -283,14 +285,17 @@ pub async fn get_rule(
)]
pub async fn create_rule(
State(state): State<Arc<AppState>>,
RequireAuth(user): RequireAuth,
Json(request): Json<CreateRuleRequest>,
) -> ApiResult<impl IntoResponse> {
// Validate request
request.validate()?;
// Check if rule with same ref already exists
if RuleRepository::find_by_ref(&state.db, &request.r#ref)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Rule with ref '{}' already exists",
request.r#ref
@@ -314,6 +319,26 @@ pub async fn create_rule(
ApiError::NotFound(format!("Trigger '{}' not found", request.trigger_ref))
})?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.pack_ref = Some(pack.r#ref.clone());
ctx.target_ref = Some(request.r#ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Rules,
action: Action::Create,
context: ctx,
},
)
.await?;
}
// Validate trigger parameters against schema
validate_trigger_params(&trigger, &request.trigger_params)?;
@@ -341,7 +366,7 @@ pub async fn create_rule(
let rule = RuleRepository::create(&state.db, rule_input).await?;
// Publish RuleCreated message to notify sensor service
if let Some(ref publisher) = state.publisher {
if let Some(publisher) = state.get_publisher().await {
let payload = RuleCreatedPayload {
rule_id: rule.id,
rule_ref: rule.r#ref.clone(),
@@ -389,7 +414,7 @@ pub async fn create_rule(
)]
pub async fn update_rule(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Path(rule_ref): Path<String>,
Json(request): Json<UpdateRuleRequest>,
) -> ApiResult<impl IntoResponse> {
@@ -401,6 +426,27 @@ pub async fn update_rule(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Rule '{}' not found", rule_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(existing_rule.id);
ctx.target_ref = Some(existing_rule.r#ref.clone());
ctx.pack_ref = Some(existing_rule.pack_ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Rules,
action: Action::Update,
context: ctx,
},
)
.await?;
}
// If action parameters are being updated, validate against the action's schema
if let Some(ref action_params) = request.action_params {
let action = ActionRepository::find_by_ref(&state.db, &existing_rule.action_ref)
@@ -428,7 +474,7 @@ pub async fn update_rule(
// Create update input
let update_input = UpdateRuleInput {
label: request.label,
description: request.description,
description: request.description.map(Patch::Set),
conditions: request.conditions,
action_params: request.action_params,
trigger_params: request.trigger_params,
@@ -440,7 +486,7 @@ pub async fn update_rule(
// If the rule is enabled and trigger params changed, publish RuleEnabled message
// to notify sensors to restart with new parameters
if rule.enabled && trigger_params_changed {
if let Some(ref publisher) = state.publisher {
if let Some(publisher) = state.get_publisher().await {
let payload = RuleEnabledPayload {
rule_id: rule.id,
rule_ref: rule.r#ref.clone(),
@@ -486,7 +532,7 @@ pub async fn update_rule(
)]
pub async fn delete_rule(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Path(rule_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
// Check if rule exists
@@ -494,6 +540,27 @@ pub async fn delete_rule(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Rule '{}' not found", rule_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_id = Some(rule.id);
ctx.target_ref = Some(rule.r#ref.clone());
ctx.pack_ref = Some(rule.pack_ref.clone());
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Rules,
action: Action::Delete,
context: ctx,
},
)
.await?;
}
// Delete the rule
let deleted = RuleRepository::delete(&state.db, rule.id).await?;
@@ -543,7 +610,7 @@ pub async fn enable_rule(
let rule = RuleRepository::update(&state.db, existing_rule.id, update_input).await?;
// Publish RuleEnabled message to notify sensor service
if let Some(ref publisher) = state.publisher {
if let Some(publisher) = state.get_publisher().await {
let payload = RuleEnabledPayload {
rule_id: rule.id,
rule_ref: rule.r#ref.clone(),
@@ -606,7 +673,7 @@ pub async fn disable_rule(
let rule = RuleRepository::update(&state.db, existing_rule.id, update_input).await?;
// Publish RuleDisabled message to notify sensor service
if let Some(ref publisher) = state.publisher {
if let Some(publisher) = state.get_publisher().await {
let payload = RuleDisabledPayload {
rule_id: rule.id,
rule_ref: rule.r#ref.clone(),

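The hunks above make three changes to the rule handlers: pagination moves into the repository (`RuleSearchFilters` plus `RuleRepository::list_search`) instead of loading every row and slicing in memory, mutating endpoints gate access tokens through `AuthorizationService`, and the publisher is now fetched with `state.get_publisher().await` rather than read from a fixed field. A minimal sketch of what a single-query `list_search` can look like, under assumed table and column names (this is not the actual repository code):

use sqlx::{PgPool, Row};

pub struct RuleSearchFilters {
    pub pack: Option<i64>,
    pub action: Option<i64>,
    pub trigger: Option<i64>,
    pub enabled: Option<bool>,
    pub limit: u64,
    pub offset: u64,
}

pub struct Page {
    pub ids: Vec<i64>, // stand-in for full rule rows
    pub total: u64,
}

// COUNT(*) OVER () repeats the unpaginated total on every returned row,
// so one round-trip yields both the page and the total.
pub async fn list_search(db: &PgPool, f: &RuleSearchFilters) -> sqlx::Result<Page> {
    let rows = sqlx::query(
        "SELECT id, COUNT(*) OVER () AS total FROM rule \
         WHERE ($1::BIGINT IS NULL OR pack = $1) \
           AND ($2::BIGINT IS NULL OR action = $2) \
           AND ($3::BIGINT IS NULL OR trigger = $3) \
           AND ($4::BOOLEAN IS NULL OR enabled = $4) \
         ORDER BY id LIMIT $5 OFFSET $6",
    )
    .bind(f.pack)
    .bind(f.action)
    .bind(f.trigger)
    .bind(f.enabled)
    .bind(f.limit as i64)
    .bind(f.offset as i64)
    .fetch_all(db)
    .await?;

    // Caveat: a page past the end returns no rows, so total falls back to 0;
    // a real implementation may need a separate COUNT(*) for that case.
    let total = rows
        .first()
        .map(|r| r.get::<i64, _>("total") as u64)
        .unwrap_or(0);
    let ids = rows.iter().map(|r| r.get::<i64, _>("id")).collect();
    Ok(Page { ids, total })
}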
View File

@@ -0,0 +1,307 @@
//! Runtime management API routes
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::IntoResponse,
routing::get,
Json, Router,
};
use std::sync::Arc;
use validator::Validate;
use attune_common::repositories::{
pack::PackRepository,
runtime::{CreateRuntimeInput, RuntimeRepository, UpdateRuntimeInput},
Create, Delete, FindByRef, List, Patch, Update,
};
use crate::{
auth::middleware::RequireAuth,
dto::{
common::{PaginatedResponse, PaginationParams},
runtime::{
CreateRuntimeRequest, NullableJsonPatch, NullableStringPatch, RuntimeResponse,
RuntimeSummary, UpdateRuntimeRequest,
},
ApiResponse, SuccessResponse,
},
middleware::{ApiError, ApiResult},
state::AppState,
};
#[utoipa::path(
get,
path = "/api/v1/runtimes",
tag = "runtimes",
params(PaginationParams),
responses(
(status = 200, description = "List of runtimes", body = PaginatedResponse<RuntimeSummary>)
),
security(("bearer_auth" = []))
)]
pub async fn list_runtimes(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
let all_runtimes = RuntimeRepository::list(&state.db).await?;
let total = all_runtimes.len() as u64;
let rows: Vec<_> = all_runtimes
.into_iter()
.skip(pagination.offset() as usize)
.take(pagination.limit() as usize)
.collect();
let response = PaginatedResponse::new(
rows.into_iter().map(RuntimeSummary::from).collect(),
&pagination,
total,
);
Ok((StatusCode::OK, Json(response)))
}
#[utoipa::path(
get,
path = "/api/v1/packs/{pack_ref}/runtimes",
tag = "runtimes",
params(
("pack_ref" = String, Path, description = "Pack reference identifier"),
PaginationParams
),
responses(
(status = 200, description = "List of runtimes for a pack", body = PaginatedResponse<RuntimeSummary>),
(status = 404, description = "Pack not found")
),
security(("bearer_auth" = []))
)]
pub async fn list_runtimes_by_pack(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(pack_ref): Path<String>,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
let pack = PackRepository::find_by_ref(&state.db, &pack_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
let all_runtimes = RuntimeRepository::find_by_pack(&state.db, pack.id).await?;
let total = all_runtimes.len() as u64;
let rows: Vec<_> = all_runtimes
.into_iter()
.skip(pagination.offset() as usize)
.take(pagination.limit() as usize)
.collect();
let response = PaginatedResponse::new(
rows.into_iter().map(RuntimeSummary::from).collect(),
&pagination,
total,
);
Ok((StatusCode::OK, Json(response)))
}
#[utoipa::path(
get,
path = "/api/v1/runtimes/{ref}",
tag = "runtimes",
params(("ref" = String, Path, description = "Runtime reference identifier")),
responses(
(status = 200, description = "Runtime details", body = ApiResponse<RuntimeResponse>),
(status = 404, description = "Runtime not found")
),
security(("bearer_auth" = []))
)]
pub async fn get_runtime(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(runtime_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
let runtime = RuntimeRepository::find_by_ref(&state.db, &runtime_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Runtime '{}' not found", runtime_ref)))?;
Ok((
StatusCode::OK,
Json(ApiResponse::new(RuntimeResponse::from(runtime))),
))
}
#[utoipa::path(
post,
path = "/api/v1/runtimes",
tag = "runtimes",
request_body = CreateRuntimeRequest,
responses(
(status = 201, description = "Runtime created successfully", body = ApiResponse<RuntimeResponse>),
(status = 400, description = "Validation error"),
(status = 404, description = "Pack not found"),
(status = 409, description = "Runtime with same ref already exists")
),
security(("bearer_auth" = []))
)]
pub async fn create_runtime(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Json(request): Json<CreateRuntimeRequest>,
) -> ApiResult<impl IntoResponse> {
request.validate()?;
if RuntimeRepository::find_by_ref(&state.db, &request.r#ref)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Runtime with ref '{}' already exists",
request.r#ref
)));
}
let (pack_id, pack_ref) = if let Some(ref pack_ref_str) = request.pack_ref {
let pack = PackRepository::find_by_ref(&state.db, pack_ref_str)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref_str)))?;
(Some(pack.id), Some(pack.r#ref))
} else {
(None, None)
};
let runtime = RuntimeRepository::create(
&state.db,
CreateRuntimeInput {
r#ref: request.r#ref,
pack: pack_id,
pack_ref,
description: request.description,
name: request.name,
aliases: vec![],
distributions: request.distributions,
installation: request.installation,
execution_config: request.execution_config,
auto_detected: false,
detection_config: serde_json::json!({}),
},
)
.await?;
Ok((
StatusCode::CREATED,
Json(ApiResponse::with_message(
RuntimeResponse::from(runtime),
"Runtime created successfully",
)),
))
}
#[utoipa::path(
put,
path = "/api/v1/runtimes/{ref}",
tag = "runtimes",
params(("ref" = String, Path, description = "Runtime reference identifier")),
request_body = UpdateRuntimeRequest,
responses(
(status = 200, description = "Runtime updated successfully", body = ApiResponse<RuntimeResponse>),
(status = 400, description = "Validation error"),
(status = 404, description = "Runtime not found")
),
security(("bearer_auth" = []))
)]
pub async fn update_runtime(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(runtime_ref): Path<String>,
Json(request): Json<UpdateRuntimeRequest>,
) -> ApiResult<impl IntoResponse> {
request.validate()?;
let existing_runtime = RuntimeRepository::find_by_ref(&state.db, &runtime_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Runtime '{}' not found", runtime_ref)))?;
let runtime = RuntimeRepository::update(
&state.db,
existing_runtime.id,
UpdateRuntimeInput {
description: request.description.map(|patch| match patch {
NullableStringPatch::Set(value) => Patch::Set(value),
NullableStringPatch::Clear => Patch::Clear,
}),
name: request.name,
distributions: request.distributions,
installation: request.installation.map(|patch| match patch {
NullableJsonPatch::Set(value) => Patch::Set(value),
NullableJsonPatch::Clear => Patch::Clear,
}),
execution_config: request.execution_config,
..Default::default()
},
)
.await?;
Ok((
StatusCode::OK,
Json(ApiResponse::with_message(
RuntimeResponse::from(runtime),
"Runtime updated successfully",
)),
))
}
#[utoipa::path(
delete,
path = "/api/v1/runtimes/{ref}",
tag = "runtimes",
params(("ref" = String, Path, description = "Runtime reference identifier")),
responses(
(status = 200, description = "Runtime deleted successfully", body = SuccessResponse),
(status = 404, description = "Runtime not found")
),
security(("bearer_auth" = []))
)]
pub async fn delete_runtime(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(runtime_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
let runtime = RuntimeRepository::find_by_ref(&state.db, &runtime_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Runtime '{}' not found", runtime_ref)))?;
let deleted = RuntimeRepository::delete(&state.db, runtime.id).await?;
if !deleted {
return Err(ApiError::NotFound(format!(
"Runtime '{}' not found",
runtime_ref
)));
}
Ok((
StatusCode::OK,
Json(SuccessResponse::new(format!(
"Runtime '{}' deleted successfully",
runtime_ref
))),
))
}
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
.route("/runtimes", get(list_runtimes).post(create_runtime))
.route(
"/runtimes/{ref}",
get(get_runtime).put(update_runtime).delete(delete_runtime),
)
.route("/packs/{pack_ref}/runtimes", get(list_runtimes_by_pack))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_runtime_routes_structure() {
let _router = routes();
}
}

View File

@@ -14,10 +14,10 @@ use attune_common::repositories::{
pack::PackRepository,
runtime::RuntimeRepository,
trigger::{
CreateSensorInput, CreateTriggerInput, SensorRepository, TriggerRepository,
UpdateSensorInput, UpdateTriggerInput,
CreateSensorInput, CreateTriggerInput, SensorRepository, SensorSearchFilters,
TriggerRepository, TriggerSearchFilters, UpdateSensorInput, UpdateTriggerInput,
},
Create, Delete, FindByRef, List, Update,
Create, Delete, FindByRef, Patch, Update,
};
use crate::{
@@ -25,8 +25,9 @@ use crate::{
dto::{
common::{PaginatedResponse, PaginationParams},
trigger::{
CreateSensorRequest, CreateTriggerRequest, SensorResponse, SensorSummary,
TriggerResponse, TriggerSummary, UpdateSensorRequest, UpdateTriggerRequest,
CreateSensorRequest, CreateTriggerRequest, SensorJsonPatch, SensorResponse,
SensorSummary, TriggerJsonPatch, TriggerResponse, TriggerStringPatch, TriggerSummary,
UpdateSensorRequest, UpdateTriggerRequest,
},
ApiResponse, SuccessResponse,
},
@@ -54,21 +55,19 @@ pub async fn list_triggers(
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
// Get all triggers
let triggers = TriggerRepository::list(&state.db).await?;
let filters = TriggerSearchFilters {
pack: None,
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = triggers.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(triggers.len());
let result = TriggerRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_triggers: Vec<TriggerSummary> = triggers[start..end]
.iter()
.map(|t| TriggerSummary::from(t.clone()))
.collect();
let paginated_triggers: Vec<TriggerSummary> =
result.rows.into_iter().map(TriggerSummary::from).collect();
let response = PaginatedResponse::new(paginated_triggers, &pagination, total);
let response = PaginatedResponse::new(paginated_triggers, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -89,21 +88,19 @@ pub async fn list_enabled_triggers(
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
// Get enabled triggers
let triggers = TriggerRepository::find_enabled(&state.db).await?;
let filters = TriggerSearchFilters {
pack: None,
enabled: Some(true),
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = triggers.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(triggers.len());
let result = TriggerRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_triggers: Vec<TriggerSummary> = triggers[start..end]
.iter()
.map(|t| TriggerSummary::from(t.clone()))
.collect();
let paginated_triggers: Vec<TriggerSummary> =
result.rows.into_iter().map(TriggerSummary::from).collect();
let response = PaginatedResponse::new(paginated_triggers, &pagination, total);
let response = PaginatedResponse::new(paginated_triggers, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -134,21 +131,19 @@ pub async fn list_triggers_by_pack(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
// Get triggers for this pack
let triggers = TriggerRepository::find_by_pack(&state.db, pack.id).await?;
let filters = TriggerSearchFilters {
pack: Some(pack.id),
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = triggers.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(triggers.len());
let result = TriggerRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_triggers: Vec<TriggerSummary> = triggers[start..end]
.iter()
.map(|t| TriggerSummary::from(t.clone()))
.collect();
let paginated_triggers: Vec<TriggerSummary> =
result.rows.into_iter().map(TriggerSummary::from).collect();
let response = PaginatedResponse::new(paginated_triggers, &pagination, total);
let response = PaginatedResponse::new(paginated_triggers, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -204,7 +199,10 @@ pub async fn create_trigger(
request.validate()?;
// Check if trigger with same ref already exists
if let Some(_) = TriggerRepository::find_by_ref(&state.db, &request.r#ref).await? {
if TriggerRepository::find_by_ref(&state.db, &request.r#ref)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Trigger with ref '{}' already exists",
request.r#ref
@@ -277,10 +275,19 @@ pub async fn update_trigger(
// Create update input
let update_input = UpdateTriggerInput {
label: request.label,
description: request.description,
description: request.description.map(|patch| match patch {
TriggerStringPatch::Set(value) => Patch::Set(value),
TriggerStringPatch::Clear => Patch::Clear,
}),
enabled: request.enabled,
param_schema: request.param_schema,
out_schema: request.out_schema,
param_schema: request.param_schema.map(|patch| match patch {
TriggerJsonPatch::Set(value) => Patch::Set(value),
TriggerJsonPatch::Clear => Patch::Clear,
}),
out_schema: request.out_schema.map(|patch| match patch {
TriggerJsonPatch::Set(value) => Patch::Set(value),
TriggerJsonPatch::Clear => Patch::Clear,
}),
};
let trigger = TriggerRepository::update(&state.db, existing_trigger.id, update_input).await?;
@@ -438,21 +445,20 @@ pub async fn list_sensors(
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
// Get all sensors
let sensors = SensorRepository::list(&state.db).await?;
let filters = SensorSearchFilters {
pack: None,
trigger: None,
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = sensors.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(sensors.len());
let result = SensorRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_sensors: Vec<SensorSummary> = sensors[start..end]
.iter()
.map(|s| SensorSummary::from(s.clone()))
.collect();
let paginated_sensors: Vec<SensorSummary> =
result.rows.into_iter().map(SensorSummary::from).collect();
let response = PaginatedResponse::new(paginated_sensors, &pagination, total);
let response = PaginatedResponse::new(paginated_sensors, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -473,21 +479,20 @@ pub async fn list_enabled_sensors(
RequireAuth(_user): RequireAuth,
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
// Get enabled sensors
let sensors = SensorRepository::find_enabled(&state.db).await?;
let filters = SensorSearchFilters {
pack: None,
trigger: None,
enabled: Some(true),
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = sensors.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(sensors.len());
let result = SensorRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_sensors: Vec<SensorSummary> = sensors[start..end]
.iter()
.map(|s| SensorSummary::from(s.clone()))
.collect();
let paginated_sensors: Vec<SensorSummary> =
result.rows.into_iter().map(SensorSummary::from).collect();
let response = PaginatedResponse::new(paginated_sensors, &pagination, total);
let response = PaginatedResponse::new(paginated_sensors, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -518,21 +523,20 @@ pub async fn list_sensors_by_pack(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
// Get sensors for this pack
let sensors = SensorRepository::find_by_pack(&state.db, pack.id).await?;
let filters = SensorSearchFilters {
pack: Some(pack.id),
trigger: None,
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = sensors.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(sensors.len());
let result = SensorRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_sensors: Vec<SensorSummary> = sensors[start..end]
.iter()
.map(|s| SensorSummary::from(s.clone()))
.collect();
let paginated_sensors: Vec<SensorSummary> =
result.rows.into_iter().map(SensorSummary::from).collect();
let response = PaginatedResponse::new(paginated_sensors, &pagination, total);
let response = PaginatedResponse::new(paginated_sensors, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -563,21 +567,20 @@ pub async fn list_sensors_by_trigger(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Trigger '{}' not found", trigger_ref)))?;
// Get sensors for this trigger
let sensors = SensorRepository::find_by_trigger(&state.db, trigger.id).await?;
let filters = SensorSearchFilters {
pack: None,
trigger: Some(trigger.id),
enabled: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = sensors.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(sensors.len());
let result = SensorRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_sensors: Vec<SensorSummary> = sensors[start..end]
.iter()
.map(|s| SensorSummary::from(s.clone()))
.collect();
let paginated_sensors: Vec<SensorSummary> =
result.rows.into_iter().map(SensorSummary::from).collect();
let response = PaginatedResponse::new(paginated_sensors, &pagination, total);
let response = PaginatedResponse::new(paginated_sensors, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -633,7 +636,10 @@ pub async fn create_sensor(
request.validate()?;
// Check if sensor with same ref already exists
if let Some(_) = SensorRepository::find_by_ref(&state.db, &request.r#ref).await? {
if SensorRepository::find_by_ref(&state.db, &request.r#ref)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Sensor with ref '{}' already exists",
request.r#ref
@@ -669,6 +675,7 @@ pub async fn create_sensor(
entrypoint: request.entrypoint,
runtime: runtime.id,
runtime_ref: runtime.r#ref.clone(),
runtime_version_constraint: None,
trigger: trigger.id,
trigger_ref: trigger.r#ref.clone(),
enabled: request.enabled,
@@ -717,10 +724,19 @@ pub async fn update_sensor(
// Create update input
let update_input = UpdateSensorInput {
label: request.label,
description: request.description,
description: request.description.map(Patch::Set),
entrypoint: request.entrypoint,
runtime: None,
runtime_ref: None,
runtime_version_constraint: None,
trigger: None,
trigger_ref: None,
enabled: request.enabled,
param_schema: request.param_schema,
param_schema: request.param_schema.map(|patch| match patch {
SensorJsonPatch::Set(value) => Patch::Set(value),
SensorJsonPatch::Clear => Patch::Clear,
}),
config: None,
};
let sensor = SensorRepository::update(&state.db, existing_sensor.id, update_input).await?;
@@ -799,8 +815,14 @@ pub async fn enable_sensor(
label: None,
description: None,
entrypoint: None,
runtime: None,
runtime_ref: None,
runtime_version_constraint: None,
trigger: None,
trigger_ref: None,
enabled: Some(true),
param_schema: None,
config: None,
};
let sensor = SensorRepository::update(&state.db, existing_sensor.id, update_input).await?;
@@ -840,8 +862,14 @@ pub async fn disable_sensor(
label: None,
description: None,
entrypoint: None,
runtime: None,
runtime_ref: None,
runtime_version_constraint: None,
trigger: None,
trigger_ref: None,
enabled: Some(false),
param_schema: None,
config: None,
};
let sensor = SensorRepository::update(&state.db, existing_sensor.id, update_input).await?;

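The trigger and sensor update paths now translate DTO-level patch enums (`TriggerStringPatch`, `TriggerJsonPatch`, `SensorJsonPatch`) into the repository's `Patch::Set` / `Patch::Clear`, letting a request distinguish "set this field", "clear it to NULL", and "leave it unchanged". A hedged sketch of how such a tri-state field can be modelled with serde; the real DTO derives may differ:

use serde::Deserialize;
use serde_json::json;

// Absent field = keep the current value, explicit null = clear, string = set.
// `untagged` tries variants in order, so null matches `Clear` (the first unit
// variant) and a string matches `Set`; `Keep` is only produced by the default.
#[derive(Debug, Default, Deserialize, PartialEq)]
#[serde(untagged)]
enum StringPatch {
    Set(String),
    Clear,
    #[default]
    Keep,
}

#[derive(Debug, Deserialize)]
struct UpdateRequest {
    #[serde(default)]
    description: StringPatch,
}

fn main() {
    let keep: UpdateRequest = serde_json::from_value(json!({})).unwrap();
    assert_eq!(keep.description, StringPatch::Keep);

    let clear: UpdateRequest =
        serde_json::from_value(json!({ "description": null })).unwrap();
    assert_eq!(clear.description, StringPatch::Clear);

    let set: UpdateRequest =
        serde_json::from_value(json!({ "description": "hi" })).unwrap();
    assert_eq!(set.description, StringPatch::Set("hi".into()));
}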
View File

@@ -20,8 +20,11 @@ use attune_common::{
},
};
use attune_common::rbac::{Action, AuthorizationContext, Resource};
use crate::{
auth::middleware::RequireAuth,
authz::{AuthorizationCheck, AuthorizationService},
dto::{
trigger::TriggerResponse,
webhook::{WebhookReceiverRequest, WebhookReceiverResponse},
@@ -170,7 +173,7 @@ fn get_webhook_config_array(
)]
pub async fn enable_webhook(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Path(trigger_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
// First, find the trigger by ref to get its ID
@@ -179,6 +182,26 @@ pub async fn enable_webhook(
.map_err(|e| ApiError::InternalServerError(e.to_string()))?
.ok_or_else(|| ApiError::NotFound(format!("Trigger '{}' not found", trigger_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_ref = Some(trigger.r#ref.clone());
ctx.pack_ref = trigger.pack_ref.clone();
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Triggers,
action: Action::Update,
context: ctx,
},
)
.await?;
}
// Enable webhooks for this trigger
let _webhook_info = TriggerRepository::enable_webhook(&state.db, trigger.id)
.await
@@ -213,7 +236,7 @@ pub async fn enable_webhook(
)]
pub async fn disable_webhook(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Path(trigger_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
// First, find the trigger by ref to get its ID
@@ -222,6 +245,26 @@ pub async fn disable_webhook(
.map_err(|e| ApiError::InternalServerError(e.to_string()))?
.ok_or_else(|| ApiError::NotFound(format!("Trigger '{}' not found", trigger_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_ref = Some(trigger.r#ref.clone());
ctx.pack_ref = trigger.pack_ref.clone();
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Triggers,
action: Action::Update,
context: ctx,
},
)
.await?;
}
// Disable webhooks for this trigger
TriggerRepository::disable_webhook(&state.db, trigger.id)
.await
@@ -257,7 +300,7 @@ pub async fn disable_webhook(
)]
pub async fn regenerate_webhook_key(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
RequireAuth(user): RequireAuth,
Path(trigger_ref): Path<String>,
) -> ApiResult<impl IntoResponse> {
// First, find the trigger by ref to get its ID
@@ -266,6 +309,26 @@ pub async fn regenerate_webhook_key(
.map_err(|e| ApiError::InternalServerError(e.to_string()))?
.ok_or_else(|| ApiError::NotFound(format!("Trigger '{}' not found", trigger_ref)))?;
if user.claims.token_type == crate::auth::jwt::TokenType::Access {
let identity_id = user
.identity_id()
.map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
let authz = AuthorizationService::new(state.db.clone());
let mut ctx = AuthorizationContext::new(identity_id);
ctx.target_ref = Some(trigger.r#ref.clone());
ctx.pack_ref = trigger.pack_ref.clone();
authz
.authorize(
&user,
AuthorizationCheck {
resource: Resource::Triggers,
action: Action::Update,
context: ctx,
},
)
.await?;
}
// Check if webhooks are enabled
if !trigger.webhook_enabled {
return Err(ApiError::BadRequest(
@@ -650,7 +713,7 @@ pub async fn receive_webhook(
"Webhook event {} created, attempting to publish EventCreated message",
event.id
);
if let Some(ref publisher) = state.publisher {
if let Some(publisher) = state.get_publisher().await {
let message_payload = EventCreatedPayload {
event_id: event.id,
trigger_id: event.trigger,
@@ -714,6 +777,7 @@ pub async fn receive_webhook(
}
// Helper function to log webhook events
#[allow(clippy::too_many_arguments)]
async fn log_webhook_event(
state: &AppState,
trigger: &attune_common::models::trigger::Trigger,
@@ -753,6 +817,7 @@ async fn log_webhook_event(
}
// Helper function to log failures when trigger is not found
#[allow(clippy::too_many_arguments)]
async fn log_webhook_failure(
_state: &AppState,
webhook_key: String,

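The same access-token authorization block now appears verbatim in `enable_webhook`, `disable_webhook`, and `regenerate_webhook_key` (and, with different resources, in the rule handlers earlier). One way the repetition could be factored out, sketched here with an assumed name for the extractor's payload type and relying on the module's existing imports:

// Hypothetical helper mirroring the inline blocks above; `AuthenticatedUser`
// is an assumed name for the value carried by RequireAuth.
async fn authorize_trigger_update(
    state: &AppState,
    user: &AuthenticatedUser,
    trigger: &attune_common::models::trigger::Trigger,
) -> Result<(), ApiError> {
    if user.claims.token_type != crate::auth::jwt::TokenType::Access {
        return Ok(()); // non-access tokens bypass this RBAC check, as above
    }
    let identity_id = user
        .identity_id()
        .map_err(|_| ApiError::Unauthorized("Invalid user identity".to_string()))?;
    let authz = AuthorizationService::new(state.db.clone());
    let mut ctx = AuthorizationContext::new(identity_id);
    ctx.target_ref = Some(trigger.r#ref.clone());
    ctx.pack_ref = trigger.pack_ref.clone();
    authz
        .authorize(
            user,
            AuthorizationCheck {
                resource: Resource::Triggers,
                action: Action::Update,
                context: ctx,
            },
        )
        .await?;
    Ok(())
}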
View File

@@ -4,18 +4,21 @@ use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::IntoResponse,
routing::get,
routing::{get, post, put},
Json, Router,
};
use std::path::PathBuf;
use std::sync::Arc;
use validator::Validate;
use attune_common::repositories::{
action::{ActionRepository, CreateActionInput, UpdateActionInput},
pack::PackRepository,
workflow::{
CreateWorkflowDefinitionInput, UpdateWorkflowDefinitionInput, WorkflowDefinitionRepository,
WorkflowSearchFilters,
},
Create, Delete, FindByRef, List, Update,
Create, Delete, FindByRef, Patch, Update,
};
use crate::{
@@ -23,8 +26,8 @@ use crate::{
dto::{
common::{PaginatedResponse, PaginationParams},
workflow::{
CreateWorkflowRequest, UpdateWorkflowRequest, WorkflowResponse, WorkflowSearchParams,
WorkflowSummary,
CreateWorkflowRequest, SaveWorkflowFileRequest, UpdateWorkflowRequest,
WorkflowResponse, WorkflowSearchParams, WorkflowSummary,
},
ApiResponse, SuccessResponse,
},
@@ -52,64 +55,29 @@ pub async fn list_workflows(
// Validate search params
search_params.validate()?;
// Get workflows based on filters
let mut workflows = if let Some(tags_str) = &search_params.tags {
// Filter by tags
let tags: Vec<&str> = tags_str.split(',').map(|s| s.trim()).collect();
let mut results = Vec::new();
for tag in tags {
let mut tag_results = WorkflowDefinitionRepository::find_by_tag(&state.db, tag).await?;
results.append(&mut tag_results);
}
// Remove duplicates by ID
results.sort_by_key(|w| w.id);
results.dedup_by_key(|w| w.id);
results
} else if search_params.enabled == Some(true) {
// Filter by enabled status (only return enabled workflows)
WorkflowDefinitionRepository::find_enabled(&state.db).await?
} else {
// Get all workflows
WorkflowDefinitionRepository::list(&state.db).await?
// Parse comma-separated tags into a Vec if provided
let tags = search_params.tags.as_ref().map(|t| {
t.split(',')
.map(|s| s.trim().to_string())
.collect::<Vec<_>>()
});
// All filtering and pagination happen in a single SQL query.
let filters = WorkflowSearchFilters {
pack: None,
pack_ref: search_params.pack_ref.clone(),
tags,
search: search_params.search.clone(),
limit: pagination.limit(),
offset: pagination.offset(),
};
// Apply enabled filter if specified and not already filtered by it
if let Some(enabled) = search_params.enabled {
if search_params.tags.is_some() {
// If we filtered by tags, also apply enabled filter
workflows.retain(|w| w.enabled == enabled);
}
}
let result = WorkflowDefinitionRepository::list_search(&state.db, &filters).await?;
// Apply search filter if provided
if let Some(search_term) = &search_params.search {
let search_lower = search_term.to_lowercase();
workflows.retain(|w| {
w.label.to_lowercase().contains(&search_lower)
|| w.description
.as_ref()
.map(|d| d.to_lowercase().contains(&search_lower))
.unwrap_or(false)
});
}
let paginated_workflows: Vec<WorkflowSummary> =
result.rows.into_iter().map(WorkflowSummary::from).collect();
// Apply pack_ref filter if provided
if let Some(pack_ref) = &search_params.pack_ref {
workflows.retain(|w| w.pack_ref == *pack_ref);
}
// Calculate pagination
let total = workflows.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(workflows.len());
// Get paginated slice
let paginated_workflows: Vec<WorkflowSummary> = workflows[start..end]
.iter()
.map(|w| WorkflowSummary::from(w.clone()))
.collect();
let response = PaginatedResponse::new(paginated_workflows, &pagination, total);
let response = PaginatedResponse::new(paginated_workflows, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -136,25 +104,26 @@ pub async fn list_workflows_by_pack(
Query(pagination): Query<PaginationParams>,
) -> ApiResult<impl IntoResponse> {
// Verify pack exists
let pack = PackRepository::find_by_ref(&state.db, &pack_ref)
let _pack = PackRepository::find_by_ref(&state.db, &pack_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
// Get workflows for this pack
let workflows = WorkflowDefinitionRepository::find_by_pack(&state.db, pack.id).await?;
// All filtering and pagination happen in a single SQL query.
let filters = WorkflowSearchFilters {
pack: None,
pack_ref: Some(pack_ref),
tags: None,
search: None,
limit: pagination.limit(),
offset: pagination.offset(),
};
// Calculate pagination
let total = workflows.len() as u64;
let start = ((pagination.page - 1) * pagination.limit()) as usize;
let end = (start + pagination.limit() as usize).min(workflows.len());
let result = WorkflowDefinitionRepository::list_search(&state.db, &filters).await?;
// Get paginated slice
let paginated_workflows: Vec<WorkflowSummary> = workflows[start..end]
.iter()
.map(|w| WorkflowSummary::from(w.clone()))
.collect();
let paginated_workflows: Vec<WorkflowSummary> =
result.rows.into_iter().map(WorkflowSummary::from).collect();
let response = PaginatedResponse::new(paginated_workflows, &pagination, total);
let response = PaginatedResponse::new(paginated_workflows, &pagination, result.total);
Ok((StatusCode::OK, Json(response)))
}
@@ -210,7 +179,10 @@ pub async fn create_workflow(
request.validate()?;
// Check if workflow with same ref already exists
if let Some(_) = WorkflowDefinitionRepository::find_by_ref(&state.db, &request.r#ref).await? {
if WorkflowDefinitionRepository::find_by_ref(&state.db, &request.r#ref)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Workflow with ref '{}' already exists",
request.r#ref
@@ -224,21 +196,35 @@ pub async fn create_workflow(
// Create workflow input
let workflow_input = CreateWorkflowDefinitionInput {
r#ref: request.r#ref,
r#ref: request.r#ref.clone(),
pack: pack.id,
pack_ref: pack.r#ref.clone(),
label: request.label,
description: request.description,
version: request.version,
param_schema: request.param_schema,
out_schema: request.out_schema,
label: request.label.clone(),
description: request.description.clone(),
version: request.version.clone(),
param_schema: request.param_schema.clone(),
out_schema: request.out_schema.clone(),
definition: request.definition,
tags: request.tags.unwrap_or_default(),
enabled: request.enabled.unwrap_or(true),
tags: request.tags.clone().unwrap_or_default(),
};
let workflow = WorkflowDefinitionRepository::create(&state.db, workflow_input).await?;
// Create a companion action record so the workflow appears in action lists
create_companion_action(
&state.db,
&workflow.r#ref,
pack.id,
&pack.r#ref,
&request.label,
request.description.as_deref(),
"workflow",
request.param_schema.as_ref(),
request.out_schema.as_ref(),
workflow.id,
)
.await?;
let response = ApiResponse::with_message(
WorkflowResponse::from(workflow),
"Workflow created successfully",
@@ -279,19 +265,29 @@ pub async fn update_workflow(
// Create update input
let update_input = UpdateWorkflowDefinitionInput {
label: request.label,
description: request.description,
version: request.version,
param_schema: request.param_schema,
out_schema: request.out_schema,
label: request.label.clone(),
description: request.description.clone(),
version: request.version.clone(),
param_schema: request.param_schema.clone(),
out_schema: request.out_schema.clone(),
definition: request.definition,
tags: request.tags,
enabled: request.enabled,
};
let workflow =
WorkflowDefinitionRepository::update(&state.db, existing_workflow.id, update_input).await?;
// Update the companion action record if it exists
update_companion_action(
&state.db,
existing_workflow.id,
request.label.as_deref(),
request.description.as_deref(),
request.param_schema.as_ref(),
request.out_schema.as_ref(),
)
.await?;
let response = ApiResponse::with_message(
WorkflowResponse::from(workflow),
"Workflow updated successfully",
@@ -324,7 +320,7 @@ pub async fn delete_workflow(
.await?
.ok_or_else(|| ApiError::NotFound(format!("Workflow '{}' not found", workflow_ref)))?;
// Delete the workflow
// Delete the workflow (companion action is cascade-deleted via FK on action.workflow_def)
let deleted = WorkflowDefinitionRepository::delete(&state.db, workflow.id).await?;
if !deleted {
@@ -340,6 +336,569 @@ pub async fn delete_workflow(
Ok((StatusCode::OK, Json(response)))
}
/// Save a workflow file to disk and sync it to the database
///
/// Writes a `{name}.workflow.yaml` file to `{packs_base_dir}/{pack_ref}/actions/workflows/`
/// and creates or updates the corresponding workflow_definition record in the database.
/// Also creates a companion action record so the workflow appears in action lists and palettes.
#[utoipa::path(
post,
path = "/api/v1/packs/{pack_ref}/workflow-files",
tag = "workflows",
params(
("pack_ref" = String, Path, description = "Pack reference identifier")
),
request_body = SaveWorkflowFileRequest,
responses(
(status = 201, description = "Workflow file saved and synced", body = inline(ApiResponse<WorkflowResponse>)),
(status = 400, description = "Validation error"),
(status = 404, description = "Pack not found"),
(status = 409, description = "Workflow with same ref already exists"),
(status = 500, description = "Failed to write workflow file")
),
security(("bearer_auth" = []))
)]
pub async fn save_workflow_file(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(pack_ref): Path<String>,
Json(request): Json<SaveWorkflowFileRequest>,
) -> ApiResult<impl IntoResponse> {
request.validate()?;
// Verify pack exists
let pack = PackRepository::find_by_ref(&state.db, &pack_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", pack_ref)))?;
let workflow_ref = format!("{}.{}", pack_ref, request.name);
// Check if workflow already exists
if WorkflowDefinitionRepository::find_by_ref(&state.db, &workflow_ref)
.await?
.is_some()
{
return Err(ApiError::Conflict(format!(
"Workflow with ref '{}' already exists",
workflow_ref
)));
}
// Write YAML file to disk
let packs_base_dir = PathBuf::from(&state.config.packs_base_dir);
write_workflow_yaml(&packs_base_dir, &pack_ref, &request).await?;
// Create workflow in database
let definition_json = serde_json::to_value(&request.definition).map_err(|e| {
ApiError::BadRequest(format!("Failed to serialize workflow definition: {}", e))
})?;
let workflow_input = CreateWorkflowDefinitionInput {
r#ref: workflow_ref.clone(),
pack: pack.id,
pack_ref: pack.r#ref.clone(),
label: request.label.clone(),
description: request.description.clone(),
version: request.version.clone(),
param_schema: request.param_schema.clone(),
out_schema: request.out_schema.clone(),
definition: definition_json,
tags: request.tags.clone().unwrap_or_default(),
};
let workflow = WorkflowDefinitionRepository::create(&state.db, workflow_input).await?;
// Create a companion action record so the workflow appears in action lists and palettes
let entrypoint = format!("workflows/{}.workflow.yaml", request.name);
create_companion_action(
&state.db,
&workflow_ref,
pack.id,
&pack.r#ref,
&request.label,
request.description.as_deref(),
&entrypoint,
request.param_schema.as_ref(),
request.out_schema.as_ref(),
workflow.id,
)
.await?;
let response = ApiResponse::with_message(
WorkflowResponse::from(workflow),
"Workflow file saved and synced successfully",
);
Ok((StatusCode::CREATED, Json(response)))
}
/// Update a workflow file on disk and sync changes to the database
#[utoipa::path(
put,
path = "/api/v1/workflows/{ref}/file",
tag = "workflows",
params(
("ref" = String, Path, description = "Workflow reference identifier")
),
request_body = SaveWorkflowFileRequest,
responses(
(status = 200, description = "Workflow file updated and synced", body = inline(ApiResponse<WorkflowResponse>)),
(status = 400, description = "Validation error"),
(status = 404, description = "Workflow not found"),
(status = 500, description = "Failed to write workflow file")
),
security(("bearer_auth" = []))
)]
pub async fn update_workflow_file(
State(state): State<Arc<AppState>>,
RequireAuth(_user): RequireAuth,
Path(workflow_ref): Path<String>,
Json(request): Json<SaveWorkflowFileRequest>,
) -> ApiResult<impl IntoResponse> {
request.validate()?;
// Check if workflow exists
let existing_workflow = WorkflowDefinitionRepository::find_by_ref(&state.db, &workflow_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Workflow '{}' not found", workflow_ref)))?;
// Verify pack exists
let pack = PackRepository::find_by_ref(&state.db, &request.pack_ref)
.await?
.ok_or_else(|| ApiError::NotFound(format!("Pack '{}' not found", request.pack_ref)))?;
// Write updated YAML file to disk
let packs_base_dir = PathBuf::from(&state.config.packs_base_dir);
write_workflow_yaml(&packs_base_dir, &request.pack_ref, &request).await?;
// Update workflow in database
let definition_json = serde_json::to_value(&request.definition).map_err(|e| {
ApiError::BadRequest(format!("Failed to serialize workflow definition: {}", e))
})?;
let update_input = UpdateWorkflowDefinitionInput {
label: Some(request.label.clone()),
description: request.description.clone(),
version: Some(request.version),
param_schema: request.param_schema.clone(),
out_schema: request.out_schema.clone(),
definition: Some(definition_json),
tags: request.tags,
};
let workflow =
WorkflowDefinitionRepository::update(&state.db, existing_workflow.id, update_input).await?;
// Update the companion action record, or create it if it doesn't exist yet
// (handles workflows that were created before this fix was deployed)
let entrypoint = format!("workflows/{}.workflow.yaml", request.name);
ensure_companion_action(
&state.db,
existing_workflow.id,
&workflow_ref,
pack.id,
&pack.r#ref,
&request.label,
request.description.as_deref(),
&entrypoint,
request.param_schema.as_ref(),
request.out_schema.as_ref(),
)
.await?;
let response = ApiResponse::with_message(
WorkflowResponse::from(workflow),
"Workflow file updated and synced successfully",
);
Ok((StatusCode::OK, Json(response)))
}
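For orientation, the request body for both workflow-file endpoints carries the file name, display metadata, optional schemas, and the execution graph. A hypothetical payload (field names taken from the handlers above; the values are invented): the save handler derives the ref as `{pack_ref}.{name}` and writes `actions/workflows/{name}.workflow.yaml` plus a companion `actions/{name}.yaml`.

use serde_json::json;

fn main() {
    // Hypothetical body for POST /api/v1/packs/demo/workflow-files
    // (or PUT /api/v1/workflows/demo.greet/file).
    let body = json!({
        "name": "greet",
        "pack_ref": "demo",          // read by update_workflow_file
        "label": "Greet someone",
        "description": "Says hello via one echo task",
        "version": "1.0",
        "param_schema": { "who": { "type": "string", "required": true } },
        "out_schema": { "message": { "type": "string" } },
        "tags": ["example"],
        "definition": {
            "version": "1.0",
            "vars": {},
            "tasks": { "say": { "action": "core.echo" } },
            "output_map": { "message": "{{ tasks.say.result }}" }
        }
    });
    println!("{body}");
}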
/// Write a workflow definition to disk as YAML
async fn write_workflow_yaml(
packs_base_dir: &std::path::Path,
pack_ref: &str,
request: &SaveWorkflowFileRequest,
) -> Result<(), ApiError> {
let pack_dir = packs_base_dir.join(pack_ref);
let actions_dir = pack_dir.join("actions");
let workflows_dir = actions_dir.join("workflows");
// Ensure both directories exist
tokio::fs::create_dir_all(&workflows_dir)
.await
.map_err(|e| {
ApiError::InternalServerError(format!(
"Failed to create workflows directory '{}': {}",
workflows_dir.display(),
e
))
})?;
// ── 1. Write the workflow file (graph-only: version, vars, tasks, output_map) ──
let workflow_filename = format!("{}.workflow.yaml", request.name);
let workflow_filepath = workflows_dir.join(&workflow_filename);
// Strip action-level fields from the definition — the workflow file should
// contain only the execution graph. The action YAML is authoritative for
// ref, label, description, parameters, output, and tags.
let graph_only = strip_action_level_fields(&request.definition);
let workflow_yaml = serde_yaml_ng::to_string(&graph_only).map_err(|e| {
ApiError::BadRequest(format!("Failed to serialize workflow to YAML: {}", e))
})?;
let workflow_yaml_with_header = format!(
"# Workflow execution graph for {}.{}\n\
# Action-level metadata (ref, label, parameters, output, tags) is defined\n\
# in the companion action YAML: actions/{}.yaml\n\n{}",
pack_ref, request.name, request.name, workflow_yaml
);
tokio::fs::write(&workflow_filepath, &workflow_yaml_with_header)
.await
.map_err(|e| {
ApiError::InternalServerError(format!(
"Failed to write workflow file '{}': {}",
workflow_filepath.display(),
e
))
})?;
tracing::info!(
"Wrote workflow file: {} ({} bytes)",
workflow_filepath.display(),
workflow_yaml_with_header.len()
);
// ── 2. Write the companion action YAML ──
let action_filename = format!("{}.yaml", request.name);
let action_filepath = actions_dir.join(&action_filename);
let action_yaml = build_action_yaml(pack_ref, request);
tokio::fs::write(&action_filepath, &action_yaml)
.await
.map_err(|e| {
ApiError::InternalServerError(format!(
"Failed to write action YAML '{}': {}",
action_filepath.display(),
e
))
})?;
tracing::info!(
"Wrote action YAML: {} ({} bytes)",
action_filepath.display(),
action_yaml.len()
);
Ok(())
}
/// Strip action-level fields from a workflow definition JSON, keeping only
/// the execution graph: `version`, `vars`, `tasks`, `output_map`.
///
/// Fields removed: `ref`, `label`, `description`, `parameters`, `output`, `tags`.
fn strip_action_level_fields(definition: &serde_json::Value) -> serde_json::Value {
if let Some(obj) = definition.as_object() {
let mut graph = serde_json::Map::new();
// Keep only graph-level fields
for key in &["version", "vars", "tasks", "output_map"] {
if let Some(val) = obj.get(*key) {
graph.insert((*key).to_string(), val.clone());
}
}
serde_json::Value::Object(graph)
} else {
// Shouldn't happen, but pass through if not an object
definition.clone()
}
}
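A quick illustration of the stripping, on a hypothetical input (note that `serde_json` object equality ignores key order, and `strip_action_level_fields` is the function defined just above):

use serde_json::json;

fn main() {
    let full = json!({
        "ref": "demo.greet",
        "label": "Greet",
        "tags": ["example"],
        "version": "1.0",
        "tasks": { "say": { "action": "core.echo" } },
        "output_map": { "message": "{{ tasks.say.result }}" }
    });

    // Only version, vars, tasks, and output_map survive; `vars` is absent
    // here, so it is simply omitted from the result.
    assert_eq!(
        strip_action_level_fields(&full),
        json!({
            "version": "1.0",
            "tasks": { "say": { "action": "core.echo" } },
            "output_map": { "message": "{{ tasks.say.result }}" }
        })
    );
}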
/// Build the companion action YAML content for a workflow action.
///
/// This file defines the action-level metadata (ref, label, parameters, etc.)
/// and references the workflow file via `workflow_file`.
fn build_action_yaml(pack_ref: &str, request: &SaveWorkflowFileRequest) -> String {
let mut lines = Vec::new();
lines.push(format!(
"# Action definition for workflow {}.{}",
pack_ref, request.name
));
lines.push("# The workflow graph (tasks, transitions, variables) is in:".to_string());
lines.push(format!(
"# actions/workflows/{}.workflow.yaml",
request.name
));
lines.push(String::new());
lines.push(format!("ref: {}.{}", pack_ref, request.name));
lines.push(format!("label: \"{}\"", request.label.replace('"', "\\\"")));
if let Some(ref desc) = request.description {
if !desc.is_empty() {
lines.push(format!("description: \"{}\"", desc.replace('"', "\\\"")));
}
}
lines.push(format!(
"workflow_file: workflows/{}.workflow.yaml",
request.name
));
// Parameters
if let Some(ref params) = request.param_schema {
if let Some(obj) = params.as_object() {
if !obj.is_empty() {
lines.push(String::new());
let params_yaml = serde_yaml_ng::to_string(params).unwrap_or_default();
lines.push("parameters:".to_string());
// Indent the YAML output under `parameters:`
for line in params_yaml.lines() {
lines.push(format!(" {}", line));
}
}
}
}
// Output schema
if let Some(ref output) = request.out_schema {
if let Some(obj) = output.as_object() {
if !obj.is_empty() {
lines.push(String::new());
let output_yaml = serde_yaml_ng::to_string(output).unwrap_or_default();
lines.push("output:".to_string());
for line in output_yaml.lines() {
lines.push(format!(" {}", line));
}
}
}
}
// Tags
if let Some(ref tags) = request.tags {
if !tags.is_empty() {
lines.push(String::new());
lines.push("tags:".to_string());
for tag in tags {
lines.push(format!(" - {}", tag));
}
}
}
lines.push(String::new()); // trailing newline
lines.join("\n")
}
/// Create a companion action record for a workflow definition.
///
/// This ensures the workflow appears in action lists and the action palette in the
/// workflow builder. The action is linked to the workflow definition via the
/// `workflow_def` FK.
#[allow(clippy::too_many_arguments)]
async fn create_companion_action(
db: &sqlx::PgPool,
workflow_ref: &str,
pack_id: i64,
pack_ref: &str,
label: &str,
description: Option<&str>,
entrypoint: &str,
param_schema: Option<&serde_json::Value>,
out_schema: Option<&serde_json::Value>,
workflow_def_id: i64,
) -> Result<(), ApiError> {
let action_input = CreateActionInput {
r#ref: workflow_ref.to_string(),
pack: pack_id,
pack_ref: pack_ref.to_string(),
label: label.to_string(),
description: description.map(|s| s.to_string()),
entrypoint: entrypoint.to_string(),
runtime: None,
runtime_version_constraint: None,
param_schema: param_schema.cloned(),
out_schema: out_schema.cloned(),
is_adhoc: false,
};
let action = ActionRepository::create(db, action_input)
.await
.map_err(|e| {
tracing::error!(
"Failed to create companion action for workflow '{}': {}",
workflow_ref,
e
);
ApiError::InternalServerError(format!(
"Failed to create companion action for workflow: {}",
e
))
})?;
// Link the action to the workflow definition (sets workflow_def FK)
ActionRepository::link_workflow_def(db, action.id, workflow_def_id)
.await
.map_err(|e| {
tracing::error!(
"Failed to link action to workflow definition '{}': {}",
workflow_ref,
e
);
ApiError::InternalServerError(format!(
"Failed to link action to workflow definition: {}",
e
))
})?;
tracing::info!(
"Created companion action '{}' (ID: {}) for workflow definition (ID: {})",
workflow_ref,
action.id,
workflow_def_id
);
Ok(())
}
/// Update the companion action record for a workflow definition.
///
/// Finds the action linked to the workflow definition and updates its metadata
/// to stay in sync with the workflow definition.
async fn update_companion_action(
db: &sqlx::PgPool,
workflow_def_id: i64,
label: Option<&str>,
description: Option<&str>,
param_schema: Option<&serde_json::Value>,
out_schema: Option<&serde_json::Value>,
) -> Result<(), ApiError> {
let existing_action = ActionRepository::find_by_workflow_def(db, workflow_def_id)
.await
.map_err(|e| {
tracing::warn!(
"Failed to look up companion action for workflow_def {}: {}",
workflow_def_id,
e
);
ApiError::InternalServerError(format!("Failed to look up companion action: {}", e))
})?;
if let Some(action) = existing_action {
let update_input = UpdateActionInput {
label: label.map(|s| s.to_string()),
description: description.map(|s| Patch::Set(s.to_string())),
entrypoint: None,
runtime: None,
runtime_version_constraint: None,
param_schema: param_schema.cloned(),
out_schema: out_schema.cloned(),
parameter_delivery: None,
parameter_format: None,
output_format: None,
};
ActionRepository::update(db, action.id, update_input)
.await
.map_err(|e| {
tracing::warn!(
"Failed to update companion action (ID: {}) for workflow_def {}: {}",
action.id,
workflow_def_id,
e
);
ApiError::InternalServerError(format!("Failed to update companion action: {}", e))
})?;
tracing::debug!(
"Updated companion action '{}' (ID: {}) for workflow definition (ID: {})",
action.r#ref,
action.id,
workflow_def_id
);
} else {
tracing::debug!(
"No companion action found for workflow_def {}; skipping update",
workflow_def_id
);
}
Ok(())
}
/// Ensure a companion action record exists for a workflow definition.
///
/// If the action already exists, update it. If it doesn't exist (e.g., for workflows
/// created before the companion-action fix), create it.
#[allow(clippy::too_many_arguments)]
async fn ensure_companion_action(
db: &sqlx::PgPool,
workflow_def_id: i64,
workflow_ref: &str,
pack_id: i64,
pack_ref: &str,
label: &str,
description: Option<&str>,
entrypoint: &str,
param_schema: Option<&serde_json::Value>,
out_schema: Option<&serde_json::Value>,
) -> Result<(), ApiError> {
let existing_action = ActionRepository::find_by_workflow_def(db, workflow_def_id)
.await
.map_err(|e| {
ApiError::InternalServerError(format!("Failed to look up companion action: {}", e))
})?;
if let Some(action) = existing_action {
// Update existing companion action
let update_input = UpdateActionInput {
label: Some(label.to_string()),
description: Some(match description {
Some(description) => Patch::Set(description.to_string()),
None => Patch::Clear,
}),
entrypoint: Some(entrypoint.to_string()),
runtime: None,
runtime_version_constraint: None,
param_schema: param_schema.cloned(),
out_schema: out_schema.cloned(),
parameter_delivery: None,
parameter_format: None,
output_format: None,
};
ActionRepository::update(db, action.id, update_input)
.await
.map_err(|e| {
ApiError::InternalServerError(format!("Failed to update companion action: {}", e))
})?;
tracing::debug!(
"Updated companion action '{}' (ID: {}) for workflow definition (ID: {})",
action.r#ref,
action.id,
workflow_def_id
);
} else {
// Create new companion action (backfill for pre-fix workflows)
create_companion_action(
db,
workflow_ref,
pack_id,
pack_ref,
label,
description,
entrypoint,
param_schema,
out_schema,
workflow_def_id,
)
.await?;
}
Ok(())
}
/// Create workflow routes
pub fn routes() -> Router<Arc<AppState>> {
Router::new()
@@ -350,16 +909,7 @@ pub fn routes() -> Router<Arc<AppState>> {
.put(update_workflow)
.delete(delete_workflow),
)
.route("/workflows/{ref}/file", put(update_workflow_file))
.route("/packs/{pack_ref}/workflows", get(list_workflows_by_pack))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workflow_routes_structure() {
// Just verify the router can be constructed
let _router = routes();
}
.route("/packs/{pack_ref}/workflow-files", post(save_workflow_file))
}

View File

@@ -47,16 +47,20 @@ impl Server {
let api_v1 = Router::new()
.merge(routes::pack_routes())
.merge(routes::action_routes())
.merge(routes::runtime_routes())
.merge(routes::rule_routes())
.merge(routes::execution_routes())
.merge(routes::trigger_routes())
.merge(routes::inquiry_routes())
.merge(routes::event_routes())
.merge(routes::key_routes())
.merge(routes::permission_routes())
.merge(routes::workflow_routes())
.merge(routes::webhook_routes())
// TODO: Add more route modules here
// etc.
.merge(routes::history_routes())
.merge(routes::analytics_routes())
.merge(routes::artifact_routes())
.merge(routes::agent_routes())
.with_state(self.state.clone());
// Auth routes at root level (not versioned for frontend compatibility)

View File

@@ -2,7 +2,7 @@
use sqlx::PgPool;
use std::sync::Arc;
use tokio::sync::broadcast;
use tokio::sync::{broadcast, RwLock};
use crate::auth::jwt::JwtConfig;
use attune_common::{config::Config, mq::Publisher};
@@ -18,8 +18,8 @@ pub struct AppState {
pub cors_origins: Vec<String>,
/// Application configuration
pub config: Arc<Config>,
/// Optional message queue publisher
pub publisher: Option<Arc<Publisher>>,
/// Optional message queue publisher (shared, swappable after reconnection)
pub publisher: Arc<RwLock<Option<Arc<Publisher>>>>,
/// Broadcast channel for SSE notifications
pub broadcast_tx: broadcast::Sender<String>,
}
@@ -50,15 +50,20 @@ impl AppState {
jwt_config: Arc::new(jwt_config),
cors_origins,
config: Arc::new(config),
publisher: None,
publisher: Arc::new(RwLock::new(None)),
broadcast_tx,
}
}
/// Set the message queue publisher
pub fn with_publisher(mut self, publisher: Arc<Publisher>) -> Self {
self.publisher = Some(publisher);
self
/// Set the message queue publisher (called once at startup or after reconnection)
pub async fn set_publisher(&self, publisher: Arc<Publisher>) {
let mut guard = self.publisher.write().await;
*guard = Some(publisher);
}
/// Get a clone of the current publisher, if available
pub async fn get_publisher(&self) -> Option<Arc<Publisher>> {
self.publisher.read().await.clone()
}
}
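With the publisher behind `Arc<RwLock<Option<...>>>`, a background task can install or replace the connection without rebuilding `AppState`; handlers calling `get_publisher()` see the new handle on their next use. A hedged sketch of such a supervisor (`connect_publisher` is a hypothetical constructor, not part of this change):

use std::{sync::Arc, time::Duration};

// Hypothetical supervisor task: retry until the MQ is reachable, then
// install the publisher into the shared slot read by request handlers.
async fn supervise_publisher(state: Arc<AppState>) {
    loop {
        match connect_publisher(&state.config).await {
            Ok(publisher) => {
                state.set_publisher(Arc::new(publisher)).await;
                break; // a fuller version would also watch for disconnects
            }
            Err(err) => {
                tracing::warn!("MQ connect failed, retrying in 5s: {err}");
                tokio::time::sleep(Duration::from_secs(5)).await;
            }
        }
    }
}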

View File

@@ -1,9 +1,14 @@
//! Parameter validation module
//!
//! Validates trigger and action parameters against their declared JSON schemas.
//! Template-aware: values containing `{{ }}` template expressions are replaced
//! with schema-appropriate placeholders before validation, so template expressions
//! pass type checks while literal values are still validated normally.
//! Validates trigger and action parameters against their declared schemas.
//! Schemas use the flat StackStorm-style format:
//! { "param_name": { "type": "string", "required": true, "secret": true, ... }, ... }
//!
//! Before validation, flat schemas are converted to standard JSON Schema so we
//! can reuse the `jsonschema` crate. Template-aware: values containing `{{ }}`
//! template expressions are replaced with schema-appropriate placeholders before
//! validation, so template expressions pass type checks while literal values are
//! still validated normally.
use attune_common::models::{action::Action, trigger::Trigger};
use jsonschema::Validator;
@@ -11,6 +16,68 @@ use serde_json::Value;
use crate::middleware::ApiError;
/// Convert a flat StackStorm-style parameter schema into a standard JSON Schema
/// object suitable for `jsonschema::Validator`.
///
/// Input (flat):
/// ```json
/// { "url": { "type": "string", "required": true }, "timeout": { "type": "integer", "default": 30 } }
/// ```
///
/// Output (JSON Schema):
/// ```json
/// { "type": "object", "properties": { "url": { "type": "string" }, "timeout": { "type": "integer", "default": 30 } }, "required": ["url"] }
/// ```
fn flat_to_json_schema(flat: &Value) -> Value {
let Some(map) = flat.as_object() else {
// Not an object — return a permissive schema
return serde_json::json!({});
};
// If it already looks like a JSON Schema (has "type": "object" + "properties"),
// pass it through unchanged for backward compatibility.
if map.get("type").and_then(|v| v.as_str()) == Some("object") && map.contains_key("properties")
{
return flat.clone();
}
let mut properties = serde_json::Map::new();
let mut required: Vec<Value> = Vec::new();
for (key, prop_def) in map {
let Some(prop_obj) = prop_def.as_object() else {
// Skip non-object entries (shouldn't happen in valid schemas)
continue;
};
// Clone the property definition, stripping `required` and `secret`
// (they are not valid JSON Schema keywords).
let mut clean = prop_obj.clone();
let is_required = clean
.remove("required")
.and_then(|v| v.as_bool())
.unwrap_or(false);
clean.remove("secret");
// `position` is also an Attune extension, not JSON Schema
clean.remove("position");
if is_required {
required.push(Value::String(key.clone()));
}
properties.insert(key.clone(), Value::Object(clean));
}
let mut schema = serde_json::Map::new();
schema.insert("type".to_string(), Value::String("object".to_string()));
schema.insert("properties".to_string(), Value::Object(properties));
if !required.is_empty() {
schema.insert("required".to_string(), Value::Array(required));
}
Value::Object(schema)
}
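A quick standalone sketch of the conversion feeding the same Validator API used by the validators below (not part of this diff):

#[test]
fn flat_to_json_schema_usage_sketch() {
    use jsonschema::Validator;
    use serde_json::json;

    let flat = json!({
        "url": { "type": "string", "required": true },
        "timeout": { "type": "integer", "default": 30 }
    });
    let schema = flat_to_json_schema(&flat);
    let compiled = Validator::new(&schema).expect("converted schema should compile");
    // The flat per-property `required` flag becomes a top-level `required` array.
    assert!(compiled.is_valid(&json!({ "url": "https://example.com" })));
    assert!(!compiled.is_valid(&json!({ "timeout": 30 }))); // missing required "url"
}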
/// Check if a JSON value is (or contains) a template expression.
fn is_template_expression(value: &Value) -> bool {
match value {
@@ -100,7 +167,8 @@ fn placeholder_for_schema(property_schema: &Value) -> Value {
/// schema-appropriate placeholders. Only replaces leaf values that match
/// `{{ ... }}`; non-template values are left untouched for normal validation.
///
/// `schema` should be the full JSON Schema object (with `properties`, `type`, etc).
/// `schema` must be a standard JSON Schema object (with `properties`, `type`, etc).
/// Call `flat_to_json_schema` first if starting from flat format.
fn replace_templates_with_placeholders(params: &Value, schema: &Value) -> Value {
match params {
Value::Object(map) => {
@@ -164,17 +232,23 @@ fn replace_templates_with_placeholders(params: &Value, schema: &Value) -> Value
/// Validate trigger parameters against the trigger's parameter schema.
/// Template expressions (`{{ ... }}`) are accepted for any field type.
///
/// The schema is expected in flat StackStorm format and is converted to
/// JSON Schema internally for validation.
pub fn validate_trigger_params(trigger: &Trigger, params: &Value) -> Result<(), ApiError> {
// If no schema is defined, accept any parameters
let Some(schema) = &trigger.param_schema else {
let Some(flat_schema) = &trigger.param_schema else {
return Ok(());
};
// Convert flat format to JSON Schema for validation
let schema = flat_to_json_schema(flat_schema);
// Replace template expressions with schema-appropriate placeholders
let sanitized = replace_templates_with_placeholders(params, schema);
let sanitized = replace_templates_with_placeholders(params, &schema);
// Compile the JSON schema
let compiled_schema = Validator::new(schema).map_err(|e| {
let compiled_schema = Validator::new(&schema).map_err(|e| {
ApiError::InternalServerError(format!(
"Invalid parameter schema for trigger '{}': {}",
trigger.r#ref, e
@@ -207,17 +281,23 @@ pub fn validate_trigger_params(trigger: &Trigger, params: &Value) -> Result<(),
/// Validate action parameters against the action's parameter schema.
/// Template expressions (`{{ ... }}`) are accepted for any field type.
///
/// The schema is expected in flat StackStorm format and is converted to
/// JSON Schema internally for validation.
pub fn validate_action_params(action: &Action, params: &Value) -> Result<(), ApiError> {
// If no schema is defined, accept any parameters
let Some(schema) = &action.param_schema else {
let Some(flat_schema) = &action.param_schema else {
return Ok(());
};
// Convert flat format to JSON Schema for validation
let schema = flat_to_json_schema(flat_schema);
// Replace template expressions with schema-appropriate placeholders
let sanitized = replace_templates_with_placeholders(params, schema);
let sanitized = replace_templates_with_placeholders(params, &schema);
// Compile the JSON schema
let compiled_schema = Validator::new(schema).map_err(|e| {
let compiled_schema = Validator::new(&schema).map_err(|e| {
ApiError::InternalServerError(format!(
"Invalid parameter schema for action '{}': {}",
action.r#ref, e
@@ -282,12 +362,12 @@ mod tests {
pack: 1,
pack_ref: "test".to_string(),
label: "Test Action".to_string(),
description: "Test action".to_string(),
description: Some("Test action".to_string()),
entrypoint: "test.sh".to_string(),
runtime: Some(1),
runtime_version_constraint: None,
param_schema: schema,
out_schema: None,
is_workflow: false,
workflow_def: None,
is_adhoc: false,
parameter_delivery: attune_common::models::ParameterDelivery::default(),
@@ -309,15 +389,65 @@ mod tests {
// ── Basic trigger validation (no templates) ──────────────────────
// ── flat_to_json_schema unit tests ───────────────────────────────
#[test]
fn test_flat_to_json_schema_basic() {
let flat = json!({
"url": { "type": "string", "required": true },
"timeout": { "type": "integer", "default": 30 }
});
let result = flat_to_json_schema(&flat);
assert_eq!(result["type"], "object");
assert_eq!(result["properties"]["url"]["type"], "string");
// `required` should be stripped from individual properties
assert!(result["properties"]["url"].get("required").is_none());
assert_eq!(result["properties"]["timeout"]["default"], 30);
// Top-level required array should contain "url"
let req = result["required"].as_array().unwrap();
assert!(req.contains(&json!("url")));
assert!(!req.contains(&json!("timeout")));
}
#[test]
fn test_flat_to_json_schema_strips_secret_and_position() {
let flat = json!({
"token": { "type": "string", "secret": true, "position": 0, "required": true }
});
let result = flat_to_json_schema(&flat);
let token = &result["properties"]["token"];
assert!(token.get("secret").is_none());
assert!(token.get("position").is_none());
assert!(token.get("required").is_none());
}
#[test]
fn test_flat_to_json_schema_empty() {
let flat = json!({});
let result = flat_to_json_schema(&flat);
assert_eq!(result["type"], "object");
assert!(result.get("required").is_none());
}
#[test]
fn test_flat_to_json_schema_passthrough_json_schema() {
// If already JSON Schema format, pass through unchanged
let js = json!({
"type": "object",
"properties": { "x": { "type": "string" } },
"required": ["x"]
});
let result = flat_to_json_schema(&js);
assert_eq!(result, js);
}
// ── Basic trigger validation (flat format) ──────────────────────
#[test]
fn test_validate_trigger_params_with_valid_params() {
let schema = json!({
"type": "object",
"properties": {
"unit": { "type": "string", "enum": ["seconds", "minutes", "hours"] },
"delta": { "type": "integer", "minimum": 1 }
},
"required": ["unit", "delta"]
"unit": { "type": "string", "enum": ["seconds", "minutes", "hours"], "required": true },
"delta": { "type": "integer", "minimum": 1, "required": true }
});
let trigger = make_trigger(Some(schema));
@@ -328,12 +458,8 @@ mod tests {
#[test]
fn test_validate_trigger_params_with_invalid_params() {
let schema = json!({
"type": "object",
"properties": {
"unit": { "type": "string", "enum": ["seconds", "minutes", "hours"] },
"delta": { "type": "integer", "minimum": 1 }
},
"required": ["unit", "delta"]
"unit": { "type": "string", "enum": ["seconds", "minutes", "hours"], "required": true },
"delta": { "type": "integer", "minimum": 1, "required": true }
});
let trigger = make_trigger(Some(schema));
@@ -351,16 +477,12 @@ mod tests {
assert!(validate_trigger_params(&trigger, &params).is_err());
}
// ── Basic action validation (no templates) ───────────────────────
// ── Basic action validation (flat format) ───────────────────────
#[test]
fn test_validate_action_params_with_valid_params() {
let schema = json!({
"type": "object",
"properties": {
"message": { "type": "string" }
},
"required": ["message"]
"message": { "type": "string", "required": true }
});
let action = make_action(Some(schema));
@@ -371,11 +493,7 @@ mod tests {
#[test]
fn test_validate_action_params_with_empty_params_but_required_fields() {
let schema = json!({
"type": "object",
"properties": {
"message": { "type": "string" }
},
"required": ["message"]
"message": { "type": "string", "required": true }
});
let action = make_action(Some(schema));
@@ -383,16 +501,12 @@ mod tests {
assert!(validate_action_params(&action, &params).is_err());
}
// ── Template-aware validation ────────────────────────────────────
// ── Template-aware validation (flat format) ──────────────────────
#[test]
fn test_template_in_integer_field_passes() {
let schema = json!({
"type": "object",
"properties": {
"counter": { "type": "integer" }
},
"required": ["counter"]
"counter": { "type": "integer", "required": true }
});
let action = make_action(Some(schema));
@@ -403,11 +517,7 @@ mod tests {
#[test]
fn test_template_in_boolean_field_passes() {
let schema = json!({
"type": "object",
"properties": {
"verbose": { "type": "boolean" }
},
"required": ["verbose"]
"verbose": { "type": "boolean", "required": true }
});
let action = make_action(Some(schema));
@@ -418,11 +528,7 @@ mod tests {
#[test]
fn test_template_in_number_field_passes() {
let schema = json!({
"type": "object",
"properties": {
"threshold": { "type": "number", "minimum": 0.0 }
},
"required": ["threshold"]
"threshold": { "type": "number", "minimum": 0.0, "required": true }
});
let action = make_action(Some(schema));
@@ -433,11 +539,7 @@ mod tests {
#[test]
fn test_template_in_enum_field_passes() {
let schema = json!({
"type": "object",
"properties": {
"level": { "type": "string", "enum": ["info", "warn", "error"] }
},
"required": ["level"]
"level": { "type": "string", "enum": ["info", "warn", "error"], "required": true }
});
let action = make_action(Some(schema));
@@ -448,11 +550,7 @@ mod tests {
#[test]
fn test_template_in_array_field_passes() {
let schema = json!({
"type": "object",
"properties": {
"recipients": { "type": "array", "items": { "type": "string" } }
},
"required": ["recipients"]
"recipients": { "type": "array", "items": { "type": "string" }, "required": true }
});
let action = make_action(Some(schema));
@@ -463,11 +561,7 @@ mod tests {
#[test]
fn test_template_in_object_field_passes() {
let schema = json!({
"type": "object",
"properties": {
"metadata": { "type": "object" }
},
"required": ["metadata"]
"metadata": { "type": "object", "required": true }
});
let action = make_action(Some(schema));
@@ -478,13 +572,9 @@ mod tests {
#[test]
fn test_mixed_template_and_literal_values() {
let schema = json!({
"type": "object",
"properties": {
"message": { "type": "string" },
"count": { "type": "integer" },
"verbose": { "type": "boolean" }
},
"required": ["message", "count", "verbose"]
"message": { "type": "string", "required": true },
"count": { "type": "integer", "required": true },
"verbose": { "type": "boolean", "required": true }
});
let action = make_action(Some(schema));
@@ -498,6 +588,26 @@ mod tests {
assert!(validate_action_params(&action, &params).is_ok());
}
// ── Secret marker is stripped; fields still validated ────────────
#[test]
fn test_secret_field_validated_normally() {
let schema = json!({
"api_key": { "type": "string", "required": true, "secret": true },
"endpoint": { "type": "string" }
});
let action = make_action(Some(schema));
// Valid: secret field provided
let params = json!({ "api_key": "sk-1234", "endpoint": "https://api.example.com" });
assert!(validate_action_params(&action, &params).is_ok());
// Invalid: secret field missing but required
let params = json!({ "endpoint": "https://api.example.com" });
assert!(validate_action_params(&action, &params).is_err());
}
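Condensing the template-aware behaviour the surrounding tests exercise, using the same helpers (the template variable name is illustrative):

#[test]
fn template_vs_literal_sketch() {
    let schema = json!({ "count": { "type": "integer", "required": true } });
    let action = make_action(Some(schema));
    // A template expression passes the integer check via placeholder substitution...
    assert!(validate_action_params(&action, &json!({ "count": "{{ trigger.total }}" })).is_ok());
    // ...while a literal of the wrong type is still rejected by normal validation.
    assert!(validate_action_params(&action, &json!({ "count": "five" })).is_err());
}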
#[test]
fn test_literal_values_still_validated() {
let schema = json!({

View File

@@ -1,8 +1,8 @@
//! Webhook security helpers for HMAC verification and validation
use hmac::{Hmac, Mac};
use sha2::{Sha256, Sha512};
use sha1::Sha1;
use sha2::{Sha256, Sha512};
/// Verify HMAC signature for webhook payload
pub fn verify_hmac_signature(
@@ -33,8 +33,8 @@ pub fn verify_hmac_signature(
}
// Decode hex signature
let expected_signature = hex::decode(hex_signature)
.map_err(|e| format!("Invalid hex signature: {}", e))?;
let expected_signature =
hex::decode(hex_signature).map_err(|e| format!("Invalid hex signature: {}", e))?;
// Compute HMAC based on algorithm
let is_valid = match algorithm {
@@ -91,7 +91,11 @@ fn verify_hmac_sha1(payload: &[u8], expected: &[u8], secret: &str) -> bool {
}
/// Generate HMAC signature for testing
pub fn generate_hmac_signature(payload: &[u8], secret: &str, algorithm: &str) -> Result<String, String> {
pub fn generate_hmac_signature(
payload: &[u8],
secret: &str,
algorithm: &str,
) -> Result<String, String> {
let signature = match algorithm {
"sha256" => {
type HmacSha256 = Hmac<Sha256>;
@@ -127,12 +131,14 @@ pub fn generate_hmac_signature(payload: &[u8], secret: &str, algorithm: &str) ->
pub fn check_ip_in_cidr(ip: &str, cidr: &str) -> Result<bool, String> {
use std::net::IpAddr;
let ip_addr: IpAddr = ip.parse()
let ip_addr: IpAddr = ip
.parse()
.map_err(|e| format!("Invalid IP address: {}", e))?;
// If CIDR doesn't contain '/', treat it as a single IP
if !cidr.contains('/') {
let cidr_addr: IpAddr = cidr.parse()
let cidr_addr: IpAddr = cidr
.parse()
.map_err(|e| format!("Invalid CIDR notation: {}", e))?;
return Ok(ip_addr == cidr_addr);
}
@@ -143,9 +149,11 @@ pub fn check_ip_in_cidr(ip: &str, cidr: &str) -> Result<bool, String> {
return Err("Invalid CIDR format".to_string());
}
let network_addr: IpAddr = parts[0].parse()
let network_addr: IpAddr = parts[0]
.parse()
.map_err(|e| format!("Invalid network address: {}", e))?;
let prefix_len: u8 = parts[1].parse()
let prefix_len: u8 = parts[1]
.parse()
.map_err(|e| format!("Invalid prefix length: {}", e))?;
// Convert to bytes for comparison
@@ -156,7 +164,11 @@ pub fn check_ip_in_cidr(ip: &str, cidr: &str) -> Result<bool, String> {
}
let ip_bits = u32::from(ip);
let network_bits = u32::from(network);
let mask = if prefix_len == 0 { 0 } else { !0u32 << (32 - prefix_len) };
let mask = if prefix_len == 0 {
0
} else {
!0u32 << (32 - prefix_len)
};
Ok((ip_bits & mask) == (network_bits & mask))
}
(IpAddr::V6(ip), IpAddr::V6(network)) => {
@@ -165,7 +177,11 @@ pub fn check_ip_in_cidr(ip: &str, cidr: &str) -> Result<bool, String> {
}
let ip_bits = u128::from(ip);
let network_bits = u128::from(network);
let mask = if prefix_len == 0 { 0 } else { !0u128 << (128 - prefix_len) };
let mask = if prefix_len == 0 {
0
} else {
!0u128 << (128 - prefix_len)
};
Ok((ip_bits & mask) == (network_bits & mask))
}
_ => Err("IP address and CIDR must be same version (IPv4 or IPv6)".to_string()),
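The prefix_len == 0 branches above exist because shifting a u32 or u128 by its full bit width panics in debug builds (and produces a masked, wrong shift in release), so the all-zero mask is special-cased. Expected behaviour of these helpers as a standalone sketch; verify_hmac_signature's argument order is not visible in this diff, so only the other two are exercised:

fn webhook_helpers_sketch() -> Result<(), String> {
    // HMAC round-trip: a hex-encoded SHA-256 MAC is 32 bytes, i.e. 64 hex chars.
    let sig = generate_hmac_signature(b"{\"event\":\"push\"}", "topsecret", "sha256")?;
    assert_eq!(sig.len(), 64);

    // CIDR matching, including the single-IP and /0 edge cases.
    assert_eq!(check_ip_in_cidr("10.1.2.3", "10.0.0.0/8"), Ok(true));
    assert_eq!(check_ip_in_cidr("10.1.2.3", "11.0.0.0/8"), Ok(false));
    assert_eq!(check_ip_in_cidr("192.168.1.5", "192.168.1.5"), Ok(true));
    assert_eq!(check_ip_in_cidr("10.1.2.3", "0.0.0.0/0"), Ok(true));
    // Mixed IPv4/IPv6 comparisons are rejected outright.
    assert!(check_ip_in_cidr("10.1.2.3", "2001:db8::/32").is_err());
    Ok(())
}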

View File

@@ -0,0 +1,138 @@
//! Integration tests for agent binary distribution endpoints
//!
//! The agent endpoints (`/api/v1/agent/binary` and `/api/v1/agent/info`) are
//! intentionally unauthenticated — the agent needs to download its binary
//! before it has JWT credentials. An optional `bootstrap_token` can restrict
//! access, but that is validated inside the handler, not via RequireAuth
//! middleware.
//!
//! The test configuration (`config.test.yaml`) does NOT include an `agent`
//! section, so both endpoints return 503 Service Unavailable. This is the
//! correct behaviour: the endpoints are reachable (no 401/404 from middleware)
//! but the feature is not configured.
use axum::http::StatusCode;
#[allow(dead_code)]
mod helpers;
use helpers::TestContext;
// ── /api/v1/agent/info ──────────────────────────────────────────────
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_agent_info_not_configured() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.get("/api/v1/agent/info", None)
.await
.expect("Failed to make request");
// Agent config is not set in config.test.yaml, so the handler returns 503.
assert_eq!(response.status(), StatusCode::SERVICE_UNAVAILABLE);
let body: serde_json::Value = response.json().await.expect("Failed to parse JSON");
assert_eq!(body["error"], "Not configured");
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_agent_info_no_auth_required() {
// Verify that the endpoint is reachable WITHOUT any JWT token.
// If RequireAuth middleware were applied, this would return 401.
// Instead we expect 503 (not configured) — proving the endpoint
// is publicly accessible.
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.get("/api/v1/agent/info", None)
.await
.expect("Failed to make request");
// Must NOT be 401 Unauthorized — the endpoint has no auth middleware.
assert_ne!(
response.status(),
StatusCode::UNAUTHORIZED,
"agent/info should not require authentication"
);
// Should be 503 because agent config is absent.
assert_eq!(response.status(), StatusCode::SERVICE_UNAVAILABLE);
}
// ── /api/v1/agent/binary ────────────────────────────────────────────
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_agent_binary_not_configured() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.get("/api/v1/agent/binary", None)
.await
.expect("Failed to make request");
// Agent config is not set in config.test.yaml, so the handler returns 503.
assert_eq!(response.status(), StatusCode::SERVICE_UNAVAILABLE);
let body: serde_json::Value = response.json().await.expect("Failed to parse JSON");
assert_eq!(body["error"], "Not configured");
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_agent_binary_no_auth_required() {
// Same reasoning as test_agent_info_no_auth_required: the binary
// download endpoint must be publicly accessible (no RequireAuth).
// When no bootstrap_token is configured, any caller can reach the
// handler. We still get 503 because the agent feature itself is
// not configured in the test environment.
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.get("/api/v1/agent/binary", None)
.await
.expect("Failed to make request");
// Must NOT be 401 Unauthorized — the endpoint has no auth middleware.
assert_ne!(
response.status(),
StatusCode::UNAUTHORIZED,
"agent/binary should not require authentication when no bootstrap_token is configured"
);
// Should be 503 because agent config is absent.
assert_eq!(response.status(), StatusCode::SERVICE_UNAVAILABLE);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_agent_binary_invalid_arch() {
// Architecture validation (`validate_arch`) rejects unsupported values
// with 400 Bad Request. However, in the handler the execution order is:
// 1. validate_token (passes — no bootstrap_token configured)
// 2. check agent config (fails with 503 — not configured)
// 3. validate_arch (never reached)
//
// So even with an invalid arch like "mips", we get 503 from the config
// check before the arch is ever validated. The arch validation is covered
// by unit tests in routes/agent.rs instead.
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.get("/api/v1/agent/binary?arch=mips", None)
.await
.expect("Failed to make request");
// 503 from the agent-config-not-set check, NOT 400 from arch validation.
assert_eq!(response.status(), StatusCode::SERVICE_UNAVAILABLE);
}
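The ordering described in the last test, restated as a compact sketch; the function name, the supported architecture values, and the config shape are assumptions, not taken from this diff:

use axum::http::StatusCode;

fn agent_binary_gate(
    expected_bootstrap_token: Option<&str>,
    provided_token: Option<&str>,
    agent_configured: bool,
    arch: &str,
) -> Result<(), StatusCode> {
    // 1. Bootstrap token: only enforced when one is configured.
    if let Some(expected) = expected_bootstrap_token {
        if provided_token != Some(expected) {
            return Err(StatusCode::UNAUTHORIZED);
        }
    }
    // 2. Feature gate: the 503 every test above observes.
    if !agent_configured {
        return Err(StatusCode::SERVICE_UNAVAILABLE);
    }
    // 3. Arch validation is unreachable while the feature is unconfigured.
    if !matches!(arch, "amd64" | "arm64") {
        return Err(StatusCode::BAD_REQUEST);
    }
    Ok(())
}

These suites (and the other ignored tests in this changeset) run against a live database with cargo test -- --ignored.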

View File

@@ -7,6 +7,7 @@ use serde_json::json;
mod helpers;
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_register_debug() {
let ctx = TestContext::new()
.await
@@ -36,6 +37,7 @@ async fn test_register_debug() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_health_check() {
let ctx = TestContext::new()
.await
@@ -54,6 +56,7 @@ async fn test_health_check() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_health_detailed() {
let ctx = TestContext::new()
.await
@@ -74,6 +77,7 @@ async fn test_health_detailed() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_health_ready() {
let ctx = TestContext::new()
.await
@@ -90,6 +94,7 @@ async fn test_health_ready() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_health_live() {
let ctx = TestContext::new()
.await
@@ -106,6 +111,7 @@ async fn test_health_live() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_register_user() {
let ctx = TestContext::new()
.await
@@ -137,6 +143,7 @@ async fn test_register_user() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_register_duplicate_user() {
let ctx = TestContext::new()
.await
@@ -174,6 +181,7 @@ async fn test_register_duplicate_user() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_register_invalid_password() {
let ctx = TestContext::new()
.await
@@ -196,6 +204,7 @@ async fn test_register_invalid_password() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_login_success() {
let ctx = TestContext::new()
.await
@@ -238,6 +247,7 @@ async fn test_login_success() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_login_wrong_password() {
let ctx = TestContext::new()
.await
@@ -274,6 +284,7 @@ async fn test_login_wrong_password() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_login_nonexistent_user() {
let ctx = TestContext::new()
.await
@@ -294,7 +305,128 @@ async fn test_login_nonexistent_user() {
assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
}
// ── LDAP auth tests ──────────────────────────────────────────────────
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_ldap_login_returns_501_when_not_configured() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.post(
"/auth/ldap/login",
json!({
"login": "jdoe",
"password": "secret"
}),
None,
)
.await
.expect("Failed to make request");
// LDAP is not configured in config.test.yaml, so the endpoint
// should return 501 Not Implemented.
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_ldap_login_validates_empty_login() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.post(
"/auth/ldap/login",
json!({
"login": "",
"password": "secret"
}),
None,
)
.await
.expect("Failed to make request");
// Validation should fail before we even check LDAP config
assert_eq!(response.status(), StatusCode::UNPROCESSABLE_ENTITY);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_ldap_login_validates_empty_password() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.post(
"/auth/ldap/login",
json!({
"login": "jdoe",
"password": ""
}),
None,
)
.await
.expect("Failed to make request");
assert_eq!(response.status(), StatusCode::UNPROCESSABLE_ENTITY);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_ldap_login_validates_missing_fields() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.post("/auth/ldap/login", json!({}), None)
.await
.expect("Failed to make request");
// Missing required fields should return 422
assert_eq!(response.status(), StatusCode::UNPROCESSABLE_ENTITY);
}
// ── auth/settings LDAP field tests ──────────────────────────────────
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_auth_settings_includes_ldap_fields_disabled() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let response = ctx
.get("/auth/settings", None)
.await
.expect("Failed to make request");
assert_eq!(response.status(), StatusCode::OK);
let body: serde_json::Value = response.json().await.expect("Failed to parse JSON");
// LDAP is not configured in config.test.yaml, so these should all
// reflect the disabled state.
assert_eq!(body["data"]["ldap_enabled"], false);
assert_eq!(body["data"]["ldap_visible_by_default"], false);
assert!(body["data"]["ldap_provider_name"].is_null());
assert!(body["data"]["ldap_provider_label"].is_null());
assert!(body["data"]["ldap_provider_icon_url"].is_null());
// Existing fields should still be present
assert!(body["data"]["authentication_enabled"].is_boolean());
assert!(body["data"]["local_password_enabled"].is_boolean());
assert!(body["data"]["oidc_enabled"].is_boolean());
assert!(body["data"]["self_registration_enabled"].is_boolean());
}
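Reconstructed from the assertions above, the disabled-state settings payload has roughly this shape; only the LDAP values are pinned by the test, the other booleans are illustrative:

serde_json::json!({
    "data": {
        "authentication_enabled": true,      // asserted boolean; value illustrative
        "local_password_enabled": true,      // asserted boolean; value illustrative
        "oidc_enabled": false,               // asserted boolean; value illustrative
        "self_registration_enabled": true,   // asserted boolean; value illustrative
        "ldap_enabled": false,
        "ldap_visible_by_default": false,
        "ldap_provider_name": null,
        "ldap_provider_label": null,
        "ldap_provider_icon_url": null
    }
});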
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_get_current_user() {
let ctx = TestContext::new()
.await
@@ -318,6 +450,7 @@ async fn test_get_current_user() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_get_current_user_unauthorized() {
let ctx = TestContext::new()
.await
@@ -332,6 +465,7 @@ async fn test_get_current_user_unauthorized() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_get_current_user_invalid_token() {
let ctx = TestContext::new()
.await
@@ -346,6 +480,7 @@ async fn test_get_current_user_invalid_token() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_refresh_token() {
let ctx = TestContext::new()
.await
@@ -396,6 +531,7 @@ async fn test_refresh_token() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_refresh_with_invalid_token() {
let ctx = TestContext::new()
.await

View File

@@ -9,6 +9,10 @@ use attune_common::{
models::*,
repositories::{
action::{ActionRepository, CreateActionInput},
identity::{
CreatePermissionAssignmentInput, CreatePermissionSetInput,
PermissionAssignmentRepository, PermissionSetRepository,
},
pack::{CreatePackInput, PackRepository},
trigger::{CreateTriggerInput, TriggerRepository},
workflow::{CreateWorkflowDefinitionInput, WorkflowDefinitionRepository},
@@ -237,6 +241,7 @@ impl TestContext {
}
/// Create and authenticate a test user
#[allow(dead_code)]
pub async fn with_auth(mut self) -> Result<Self> {
// Generate unique username to avoid conflicts in parallel tests
let unique_id = uuid::Uuid::new_v4().to_string().replace("-", "")[..8].to_string();
@@ -246,6 +251,48 @@ impl TestContext {
Ok(self)
}
/// Create and authenticate a test user with identity + permission admin grants.
#[allow(dead_code)]
pub async fn with_admin_auth(mut self) -> Result<Self> {
let unique_id = uuid::Uuid::new_v4().to_string().replace("-", "")[..8].to_string();
let login = format!("adminuser_{}", unique_id);
let token = self.create_test_user(&login).await?;
let identity = attune_common::repositories::identity::IdentityRepository::find_by_login(
&self.pool, &login,
)
.await?
.ok_or_else(|| format!("Failed to find newly created identity '{}'", login))?;
let permset = PermissionSetRepository::create(
&self.pool,
CreatePermissionSetInput {
r#ref: "core.admin".to_string(),
pack: None,
pack_ref: None,
label: Some("Admin".to_string()),
description: Some("Test admin permission set".to_string()),
grants: json!([
{"resource": "identities", "actions": ["read", "create", "update", "delete"]},
{"resource": "permissions", "actions": ["read", "create", "update", "delete", "manage"]}
]),
},
)
.await?;
PermissionAssignmentRepository::create(
&self.pool,
CreatePermissionAssignmentInput {
identity: identity.id,
permset: permset.id,
},
)
.await?;
self.token = Some(token);
Ok(self)
}
/// Create a test user and return access token
async fn create_test_user(&self, login: &str) -> Result<String> {
// Register via API to get real token
@@ -348,6 +395,7 @@ impl TestContext {
}
/// Get authenticated token
#[allow(dead_code)]
pub fn token(&self) -> Option<&str> {
self.token.as_deref()
}
@@ -362,11 +410,11 @@ impl Drop for TestContext {
let test_packs_dir = self.test_packs_dir.clone();
// Spawn cleanup task in background
let _ = tokio::spawn(async move {
drop(tokio::spawn(async move {
if let Err(e) = cleanup_test_schema(&schema).await {
eprintln!("Failed to cleanup test schema {}: {}", schema, e);
}
});
}));
// Cleanup the test packs directory synchronously
let _ = std::fs::remove_dir_all(&test_packs_dir);
@@ -449,9 +497,10 @@ pub async fn create_test_action(pool: &PgPool, pack_id: i64, ref_name: &str) ->
pack: pack_id,
pack_ref: format!("pack_{}", pack_id),
label: format!("Test Action {}", ref_name),
description: format!("Test action for {}", ref_name),
description: Some(format!("Test action for {}", ref_name)),
entrypoint: "main.py".to_string(),
runtime: None,
runtime_version_constraint: None,
param_schema: None,
out_schema: None,
is_adhoc: false,
@@ -505,7 +554,6 @@ pub async fn create_test_workflow(
]
}),
tags: vec!["test".to_string()],
enabled: true,
};
Ok(WorkflowDefinitionRepository::create(pool, input).await?)
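Tests opt into the new admin grants through the builder; the sketch below matches how the identity integration tests later in this changeset use it:

async fn admin_ctx_sketch() -> Result<()> {
    // Registers a user, creates the core.admin permission set, and assigns it.
    let ctx = TestContext::new().await?.with_admin_auth().await?;
    let response = ctx.get("/api/v1/identities", ctx.token()).await?;
    assert_eq!(response.status(), axum::http::StatusCode::OK);
    Ok(())
}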

View File

@@ -127,6 +127,7 @@ actions:
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_from_local_directory() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -166,6 +167,7 @@ async fn test_install_pack_from_local_directory() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_with_dependency_validation_success() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -216,6 +218,7 @@ async fn test_install_pack_with_dependency_validation_success() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_with_missing_dependency_fails() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -255,6 +258,7 @@ async fn test_install_pack_with_missing_dependency_fails() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_skip_deps_bypasses_validation() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -290,6 +294,7 @@ async fn test_install_pack_skip_deps_bypasses_validation() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_with_runtime_validation() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -323,6 +328,7 @@ async fn test_install_pack_with_runtime_validation() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_metadata_tracking() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -372,6 +378,7 @@ async fn test_install_pack_metadata_tracking() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_force_reinstall() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -424,6 +431,7 @@ async fn test_install_pack_force_reinstall() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_storage_path_created() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -474,6 +482,7 @@ async fn test_install_pack_storage_path_created() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_invalid_source() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -504,6 +513,7 @@ async fn test_install_pack_invalid_source() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_missing_pack_yaml() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -538,6 +548,7 @@ async fn test_install_pack_missing_pack_yaml() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_invalid_pack_yaml() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -566,6 +577,7 @@ async fn test_install_pack_invalid_pack_yaml() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_without_auth_fails() -> Result<()> {
let ctx = TestContext::new().await?; // No auth
@@ -591,6 +603,7 @@ async fn test_install_pack_without_auth_fails() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_multiple_pack_installations() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();
@@ -638,6 +651,7 @@ async fn test_multiple_pack_installations() -> Result<()> {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_install_pack_version_upgrade() -> Result<()> {
let ctx = TestContext::new().await?.with_auth().await?;
let token = ctx.token().unwrap();

View File

@@ -22,7 +22,6 @@ ref: {}.example_workflow
label: Example Workflow
description: A test workflow for integration testing
version: "1.0.0"
enabled: true
parameters:
message:
type: string
@@ -46,7 +45,6 @@ ref: {}.another_workflow
label: Another Workflow
description: Second test workflow
version: "1.0.0"
enabled: false
tasks:
- name: task1
action: core.noop
@@ -58,13 +56,14 @@ tasks:
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_sync_pack_workflows_endpoint() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
// Use unique pack name to avoid conflicts in parallel tests
let pack_name = format!(
"test_pack_{}",
uuid::Uuid::new_v4().to_string().replace("-", "")[..8].to_string()
&uuid::Uuid::new_v4().to_string().replace("-", "")[..8]
);
// Create temporary directory for pack workflows
@@ -94,13 +93,14 @@ async fn test_sync_pack_workflows_endpoint() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_validate_pack_workflows_endpoint() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
// Use unique pack name to avoid conflicts in parallel tests
let pack_name = format!(
"test_pack_{}",
uuid::Uuid::new_v4().to_string().replace("-", "")[..8].to_string()
&uuid::Uuid::new_v4().to_string().replace("-", "")[..8]
);
// Create pack in database
@@ -120,6 +120,7 @@ async fn test_validate_pack_workflows_endpoint() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_sync_nonexistent_pack_returns_404() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -136,6 +137,7 @@ async fn test_sync_nonexistent_pack_returns_404() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_validate_nonexistent_pack_returns_404() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -152,13 +154,14 @@ async fn test_validate_nonexistent_pack_returns_404() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_sync_workflows_requires_authentication() {
let ctx = TestContext::new().await.unwrap();
// Use unique pack name to avoid conflicts in parallel tests
let pack_name = format!(
"test_pack_{}",
uuid::Uuid::new_v4().to_string().replace("-", "")[..8].to_string()
&uuid::Uuid::new_v4().to_string().replace("-", "")[..8]
);
// Create pack in database
@@ -179,13 +182,14 @@ async fn test_sync_workflows_requires_authentication() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_validate_workflows_requires_authentication() {
let ctx = TestContext::new().await.unwrap();
// Use unique pack name to avoid conflicts in parallel tests
let pack_name = format!(
"test_pack_{}",
uuid::Uuid::new_v4().to_string().replace("-", "")[..8].to_string()
&uuid::Uuid::new_v4().to_string().replace("-", "")[..8]
);
// Create pack in database
@@ -206,6 +210,7 @@ async fn test_validate_workflows_requires_authentication() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_pack_creation_with_auto_sync() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -236,6 +241,7 @@ async fn test_pack_creation_with_auto_sync() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_pack_update_with_auto_resync() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();

View File

@@ -0,0 +1,178 @@
use axum::http::StatusCode;
use helpers::*;
use serde_json::json;
mod helpers;
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_identity_crud_and_permission_assignment_flow() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context")
.with_admin_auth()
.await
.expect("Failed to create admin-authenticated test user");
let create_identity_response = ctx
.post(
"/api/v1/identities",
json!({
"login": "managed_user",
"display_name": "Managed User",
"password": "ManagedPass123!",
"attributes": {
"department": "platform"
}
}),
ctx.token(),
)
.await
.expect("Failed to create identity");
assert_eq!(create_identity_response.status(), StatusCode::CREATED);
let created_identity: serde_json::Value = create_identity_response
.json()
.await
.expect("Failed to parse identity create response");
let identity_id = created_identity["data"]["id"]
.as_i64()
.expect("Missing identity id");
let list_identities_response = ctx
.get("/api/v1/identities", ctx.token())
.await
.expect("Failed to list identities");
assert_eq!(list_identities_response.status(), StatusCode::OK);
let identities_body: serde_json::Value = list_identities_response
.json()
.await
.expect("Failed to parse identities response");
assert!(identities_body["data"]
.as_array()
.expect("Expected data array")
.iter()
.any(|item| item["login"] == "managed_user"));
let update_identity_response = ctx
.put(
&format!("/api/v1/identities/{}", identity_id),
json!({
"display_name": "Managed User Updated",
"attributes": {
"department": "security"
}
}),
ctx.token(),
)
.await
.expect("Failed to update identity");
assert_eq!(update_identity_response.status(), StatusCode::OK);
let get_identity_response = ctx
.get(&format!("/api/v1/identities/{}", identity_id), ctx.token())
.await
.expect("Failed to get identity");
assert_eq!(get_identity_response.status(), StatusCode::OK);
let identity_body: serde_json::Value = get_identity_response
.json()
.await
.expect("Failed to parse get identity response");
assert_eq!(
identity_body["data"]["display_name"],
"Managed User Updated"
);
assert_eq!(
identity_body["data"]["attributes"]["department"],
"security"
);
let permission_sets_response = ctx
.get("/api/v1/permissions/sets", ctx.token())
.await
.expect("Failed to list permission sets");
assert_eq!(permission_sets_response.status(), StatusCode::OK);
let assignment_response = ctx
.post(
"/api/v1/permissions/assignments",
json!({
"identity_id": identity_id,
"permission_set_ref": "core.admin"
}),
ctx.token(),
)
.await
.expect("Failed to create permission assignment");
assert_eq!(assignment_response.status(), StatusCode::CREATED);
let assignment_body: serde_json::Value = assignment_response
.json()
.await
.expect("Failed to parse permission assignment response");
let assignment_id = assignment_body["data"]["id"]
.as_i64()
.expect("Missing assignment id");
assert_eq!(assignment_body["data"]["permission_set_ref"], "core.admin");
let list_assignments_response = ctx
.get(
&format!("/api/v1/identities/{}/permissions", identity_id),
ctx.token(),
)
.await
.expect("Failed to list identity permissions");
assert_eq!(list_assignments_response.status(), StatusCode::OK);
let assignments_body: serde_json::Value = list_assignments_response
.json()
.await
.expect("Failed to parse identity permissions response");
assert!(assignments_body
.as_array()
.expect("Expected array response")
.iter()
.any(|item| item["permission_set_ref"] == "core.admin"));
let delete_assignment_response = ctx
.delete(
&format!("/api/v1/permissions/assignments/{}", assignment_id),
ctx.token(),
)
.await
.expect("Failed to delete assignment");
assert_eq!(delete_assignment_response.status(), StatusCode::OK);
let delete_identity_response = ctx
.delete(&format!("/api/v1/identities/{}", identity_id), ctx.token())
.await
.expect("Failed to delete identity");
assert_eq!(delete_identity_response.status(), StatusCode::OK);
let missing_identity_response = ctx
.get(&format!("/api/v1/identities/{}", identity_id), ctx.token())
.await
.expect("Failed to fetch deleted identity");
assert_eq!(missing_identity_response.status(), StatusCode::NOT_FOUND);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_plain_authenticated_user_cannot_manage_identities() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context")
.with_auth()
.await
.expect("Failed to authenticate plain test user");
let response = ctx
.get("/api/v1/identities", ctx.token())
.await
.expect("Failed to call identities endpoint");
assert_eq!(response.status(), StatusCode::FORBIDDEN);
}

View File

@@ -0,0 +1,276 @@
use axum::http::StatusCode;
use helpers::*;
use serde_json::json;
use attune_common::{
models::enums::{ArtifactType, ArtifactVisibility, OwnerType, RetentionPolicyType},
repositories::{
artifact::{ArtifactRepository, CreateArtifactInput},
identity::{
CreatePermissionAssignmentInput, CreatePermissionSetInput, IdentityRepository,
PermissionAssignmentRepository, PermissionSetRepository,
},
key::{CreateKeyInput, KeyRepository},
Create,
},
};
mod helpers;
async fn register_scoped_user(
ctx: &TestContext,
login: &str,
grants: serde_json::Value,
) -> Result<String> {
let response = ctx
.post(
"/auth/register",
json!({
"login": login,
"password": "TestPassword123!",
"display_name": format!("Scoped User {}", login),
}),
None,
)
.await?;
assert_eq!(response.status(), StatusCode::CREATED);
let body: serde_json::Value = response.json().await?;
let token = body["data"]["access_token"]
.as_str()
.expect("missing access token")
.to_string();
let identity = IdentityRepository::find_by_login(&ctx.pool, login)
.await?
.expect("registered identity should exist");
let permset = PermissionSetRepository::create(
&ctx.pool,
CreatePermissionSetInput {
r#ref: format!("test.scoped_{}", uuid::Uuid::new_v4().simple()),
pack: None,
pack_ref: None,
label: Some("Scoped Test Permission Set".to_string()),
description: Some("Scoped test grants".to_string()),
grants,
},
)
.await?;
PermissionAssignmentRepository::create(
&ctx.pool,
CreatePermissionAssignmentInput {
identity: identity.id,
permset: permset.id,
},
)
.await?;
Ok(token)
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_pack_scoped_key_permissions_enforce_owner_refs() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let token = register_scoped_user(
&ctx,
&format!("scoped_keys_{}", uuid::Uuid::new_v4().simple()),
json!([
{
"resource": "keys",
"actions": ["read"],
"constraints": {
"owner_types": ["pack"],
"owner_refs": ["python_example"]
}
}
]),
)
.await
.expect("Failed to register scoped user");
KeyRepository::create(
&ctx.pool,
CreateKeyInput {
r#ref: format!("python_example_key_{}", uuid::Uuid::new_v4().simple()),
owner_type: OwnerType::Pack,
owner: Some("python_example".to_string()),
owner_identity: None,
owner_pack: None,
owner_pack_ref: Some("python_example".to_string()),
owner_action: None,
owner_action_ref: None,
owner_sensor: None,
owner_sensor_ref: None,
name: "Python Example Key".to_string(),
encrypted: false,
encryption_key_hash: None,
value: json!("allowed"),
},
)
.await
.expect("Failed to create scoped key");
let blocked_key = KeyRepository::create(
&ctx.pool,
CreateKeyInput {
r#ref: format!("other_pack_key_{}", uuid::Uuid::new_v4().simple()),
owner_type: OwnerType::Pack,
owner: Some("other_pack".to_string()),
owner_identity: None,
owner_pack: None,
owner_pack_ref: Some("other_pack".to_string()),
owner_action: None,
owner_action_ref: None,
owner_sensor: None,
owner_sensor_ref: None,
name: "Other Pack Key".to_string(),
encrypted: false,
encryption_key_hash: None,
value: json!("blocked"),
},
)
.await
.expect("Failed to create blocked key");
let allowed_list = ctx
.get("/api/v1/keys", Some(&token))
.await
.expect("Failed to list keys");
assert_eq!(allowed_list.status(), StatusCode::OK);
let allowed_body: serde_json::Value = allowed_list.json().await.expect("Invalid key list");
assert_eq!(
allowed_body["data"]
.as_array()
.expect("expected list")
.len(),
1
);
assert_eq!(allowed_body["data"][0]["owner"], "python_example");
let blocked_get = ctx
.get(&format!("/api/v1/keys/{}", blocked_key.r#ref), Some(&token))
.await
.expect("Failed to fetch blocked key");
assert_eq!(blocked_get.status(), StatusCode::NOT_FOUND);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_pack_scoped_artifact_permissions_enforce_owner_refs() {
let ctx = TestContext::new()
.await
.expect("Failed to create test context");
let token = register_scoped_user(
&ctx,
&format!("scoped_artifacts_{}", uuid::Uuid::new_v4().simple()),
json!([
{
"resource": "artifacts",
"actions": ["read", "create"],
"constraints": {
"owner_types": ["pack"],
"owner_refs": ["python_example"]
}
}
]),
)
.await
.expect("Failed to register scoped user");
let allowed_artifact = ArtifactRepository::create(
&ctx.pool,
CreateArtifactInput {
r#ref: format!("python_example.allowed_{}", uuid::Uuid::new_v4().simple()),
scope: OwnerType::Pack,
owner: "python_example".to_string(),
r#type: ArtifactType::FileText,
visibility: ArtifactVisibility::Private,
retention_policy: RetentionPolicyType::Versions,
retention_limit: 5,
name: Some("Allowed Artifact".to_string()),
description: None,
content_type: Some("text/plain".to_string()),
execution: None,
data: None,
},
)
.await
.expect("Failed to create allowed artifact");
let blocked_artifact = ArtifactRepository::create(
&ctx.pool,
CreateArtifactInput {
r#ref: format!("other_pack.blocked_{}", uuid::Uuid::new_v4().simple()),
scope: OwnerType::Pack,
owner: "other_pack".to_string(),
r#type: ArtifactType::FileText,
visibility: ArtifactVisibility::Private,
retention_policy: RetentionPolicyType::Versions,
retention_limit: 5,
name: Some("Blocked Artifact".to_string()),
description: None,
content_type: Some("text/plain".to_string()),
execution: None,
data: None,
},
)
.await
.expect("Failed to create blocked artifact");
let allowed_get = ctx
.get(
&format!("/api/v1/artifacts/{}", allowed_artifact.id),
Some(&token),
)
.await
.expect("Failed to fetch allowed artifact");
assert_eq!(allowed_get.status(), StatusCode::OK);
let blocked_get = ctx
.get(
&format!("/api/v1/artifacts/{}", blocked_artifact.id),
Some(&token),
)
.await
.expect("Failed to fetch blocked artifact");
assert_eq!(blocked_get.status(), StatusCode::NOT_FOUND);
let create_allowed = ctx
.post(
"/api/v1/artifacts",
json!({
"ref": format!("python_example.created_{}", uuid::Uuid::new_v4().simple()),
"scope": "pack",
"owner": "python_example",
"type": "file_text",
"name": "Created Artifact"
}),
Some(&token),
)
.await
.expect("Failed to create allowed artifact");
assert_eq!(create_allowed.status(), StatusCode::CREATED);
let create_blocked = ctx
.post(
"/api/v1/artifacts",
json!({
"ref": format!("other_pack.created_{}", uuid::Uuid::new_v4().simple()),
"scope": "pack",
"owner": "other_pack",
"type": "file_text",
"name": "Blocked Artifact"
}),
Some(&token),
)
.await
.expect("Failed to create blocked artifact");
assert_eq!(create_blocked.status(), StatusCode::FORBIDDEN);
}

View File

@@ -52,9 +52,10 @@ async fn setup_test_pack_and_action(pool: &PgPool) -> Result<(Pack, Action)> {
pack: pack.id,
pack_ref: pack.r#ref.clone(),
label: "Test Action".to_string(),
description: "Test action for SSE tests".to_string(),
description: Some("Test action for SSE tests".to_string()),
entrypoint: "test.sh".to_string(),
runtime: None,
runtime_version_constraint: None,
param_schema: None,
out_schema: None,
is_adhoc: false,
@@ -74,6 +75,7 @@ async fn create_test_execution(pool: &PgPool, action_id: i64) -> Result<Executio
parent: None,
enforcement: None,
executor: None,
worker: None,
status: ExecutionStatus::Scheduled,
result: None,
workflow_task: None,
@@ -85,7 +87,7 @@ async fn create_test_execution(pool: &PgPool, action_id: i64) -> Result<Executio
/// Run with: cargo test test_sse_stream_receives_execution_updates -- --ignored --nocapture
/// After starting: cargo run -p attune-api -- -c config.test.yaml
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_sse_stream_receives_execution_updates() -> Result<()> {
// Set up test context with auth
let ctx = TestContext::new().await?.with_auth().await?;
@@ -119,23 +121,21 @@ async fn test_sse_stream_receives_execution_updates() -> Result<()> {
println!("Updating execution {} to 'running' status", execution_id);
// Update execution status - this should trigger PostgreSQL NOTIFY
let _ = sqlx::query(
"UPDATE execution SET status = 'running', start_time = NOW() WHERE id = $1",
)
.bind(execution_id)
.execute(&pool_clone)
.await;
let _ =
sqlx::query("UPDATE execution SET status = 'running', updated = NOW() WHERE id = $1")
.bind(execution_id)
.execute(&pool_clone)
.await;
println!("Update executed, waiting before setting to succeeded");
tokio::time::sleep(Duration::from_millis(500)).await;
// Update to succeeded
let _ = sqlx::query(
"UPDATE execution SET status = 'succeeded', end_time = NOW() WHERE id = $1",
)
.bind(execution_id)
.execute(&pool_clone)
.await;
let _ =
sqlx::query("UPDATE execution SET status = 'succeeded', updated = NOW() WHERE id = $1")
.bind(execution_id)
.execute(&pool_clone)
.await;
println!("Execution {} updated to 'succeeded'", execution_id);
});
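The updates above rely on a PostgreSQL trigger firing NOTIFY; a minimal sketch of the listening side with sqlx, where the channel name and connection string are assumptions:

use sqlx::postgres::PgListener;

async fn listen_for_execution_updates(database_url: &str) -> Result<(), sqlx::Error> {
    let mut listener = PgListener::connect(database_url).await?;
    // Channel name is hypothetical; the real trigger's channel isn't shown here.
    listener.listen("execution_updates").await?;
    loop {
        let notification = listener.recv().await?;
        println!("execution update payload: {}", notification.payload());
    }
}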
@@ -226,7 +226,7 @@ async fn test_sse_stream_receives_execution_updates() -> Result<()> {
/// Test that SSE stream correctly filters by execution_id
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_sse_stream_filters_by_execution_id() -> Result<()> {
// Set up test context with auth
let ctx = TestContext::new().await?.with_auth().await?;
@@ -328,7 +328,7 @@ async fn test_sse_stream_filters_by_execution_id() -> Result<()> {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_sse_stream_requires_authentication() -> Result<()> {
// Try to connect without token
let sse_url = "http://localhost:8080/api/v1/executions/stream";
@@ -374,7 +374,7 @@ async fn test_sse_stream_requires_authentication() -> Result<()> {
/// Test streaming all executions (no filter)
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_sse_stream_all_executions() -> Result<()> {
// Set up test context with auth
let ctx = TestContext::new().await?.with_auth().await?;
@@ -467,7 +467,7 @@ async fn test_sse_stream_all_executions() -> Result<()> {
/// Test that PostgreSQL NOTIFY triggers actually fire
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_postgresql_notify_trigger_fires() -> Result<()> {
let ctx = TestContext::new().await?;

View File

@@ -108,7 +108,7 @@ async fn get_auth_token(app: &axum::Router, username: &str, password: &str) -> S
}
#[tokio::test]
#[ignore] // Run with --ignored flag when database is available
#[ignore = "integration test — requires database"]
async fn test_enable_webhook() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -151,7 +151,7 @@ async fn test_enable_webhook() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_disable_webhook() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -202,7 +202,7 @@ async fn test_disable_webhook() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_regenerate_webhook_key() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -254,7 +254,7 @@ async fn test_regenerate_webhook_key() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_regenerate_webhook_key_not_enabled() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -291,7 +291,7 @@ async fn test_regenerate_webhook_key_not_enabled() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_receive_webhook() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -362,7 +362,7 @@ async fn test_receive_webhook() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_receive_webhook_invalid_key() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state));
@@ -392,7 +392,7 @@ async fn test_receive_webhook_invalid_key() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_receive_webhook_disabled() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -442,7 +442,7 @@ async fn test_receive_webhook_disabled() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_requires_auth_for_management() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -475,7 +475,7 @@ async fn test_webhook_requires_auth_for_management() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_receive_webhook_minimal_payload() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));

View File

@@ -122,7 +122,7 @@ fn generate_hmac_signature(payload: &[u8], secret: &str, algorithm: &str) -> Str
// ============================================================================
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_hmac_sha256_valid() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -189,7 +189,7 @@ async fn test_webhook_hmac_sha256_valid() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_hmac_sha512_valid() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -246,7 +246,7 @@ async fn test_webhook_hmac_sha512_valid() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_hmac_invalid_signature() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -302,7 +302,7 @@ async fn test_webhook_hmac_invalid_signature() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_hmac_missing_signature() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -355,7 +355,7 @@ async fn test_webhook_hmac_missing_signature() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_hmac_wrong_secret() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -418,7 +418,7 @@ async fn test_webhook_hmac_wrong_secret() {
// ============================================================================
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_rate_limit_enforced() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -494,7 +494,7 @@ async fn test_webhook_rate_limit_enforced() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_rate_limit_disabled() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -541,7 +541,7 @@ async fn test_webhook_rate_limit_disabled() {
// ============================================================================
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_ip_whitelist_allowed() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -612,7 +612,7 @@ async fn test_webhook_ip_whitelist_allowed() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_ip_whitelist_blocked() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -669,7 +669,7 @@ async fn test_webhook_ip_whitelist_blocked() {
// ============================================================================
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_payload_size_limit_enforced() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
@@ -720,7 +720,7 @@ async fn test_webhook_payload_size_limit_enforced() {
}
#[tokio::test]
#[ignore]
#[ignore = "integration test — requires database"]
async fn test_webhook_payload_size_within_limit() {
let state = setup_test_state().await;
let server = Server::new(std::sync::Arc::new(state.clone()));
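// The recurring change in these test files replaces the bare #[ignore]
// attribute with the reasoned form, which the test harness reports when the
// test is skipped. A minimal sketch (the test name and body are
// illustrative):
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_example_requires_db_sketch() {
    // Skipped by a plain `cargo test`; run explicitly with
    // `cargo test -- --ignored`.
}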

View File

@@ -14,11 +14,12 @@ use helpers::*;
fn unique_pack_name() -> String {
format!(
"test_pack_{}",
uuid::Uuid::new_v4().to_string().replace("-", "")[..8].to_string()
&uuid::Uuid::new_v4().to_string().replace("-", "")[..8]
)
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_create_workflow_success() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -45,8 +46,7 @@ async fn test_create_workflow_success() {
}
]
},
"tags": ["test", "automation"],
"enabled": true
"tags": ["test", "automation"]
}),
ctx.token(),
)
@@ -59,11 +59,11 @@ async fn test_create_workflow_success() {
assert_eq!(body["data"]["ref"], "test-pack.test_workflow");
assert_eq!(body["data"]["label"], "Test Workflow");
assert_eq!(body["data"]["version"], "1.0.0");
assert_eq!(body["data"]["enabled"], true);
assert!(body["data"]["tags"].as_array().unwrap().len() == 2);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_create_workflow_duplicate_ref() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -83,7 +83,6 @@ async fn test_create_workflow_duplicate_ref() {
out_schema: None,
definition: json!({"tasks": []}),
tags: vec![],
enabled: true,
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
@@ -109,6 +108,7 @@ async fn test_create_workflow_duplicate_ref() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_create_workflow_pack_not_found() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -131,6 +131,7 @@ async fn test_create_workflow_pack_not_found() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_get_workflow_by_ref() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -148,7 +149,6 @@ async fn test_get_workflow_by_ref() {
out_schema: None,
definition: json!({"tasks": [{"name": "task1"}]}),
tags: vec!["test".to_string()],
enabled: true,
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
@@ -169,6 +169,7 @@ async fn test_get_workflow_by_ref() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_get_workflow_not_found() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -181,6 +182,7 @@ async fn test_get_workflow_not_found() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_list_workflows() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -200,7 +202,6 @@ async fn test_list_workflows() {
out_schema: None,
definition: json!({"tasks": []}),
tags: vec!["test".to_string()],
enabled: i % 2 == 1, // Odd ones enabled
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
@@ -227,6 +228,7 @@ async fn test_list_workflows() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_list_workflows_by_pack() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -249,7 +251,6 @@ async fn test_list_workflows_by_pack() {
out_schema: None,
definition: json!({"tasks": []}),
tags: vec![],
enabled: true,
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
@@ -268,7 +269,6 @@ async fn test_list_workflows_by_pack() {
out_schema: None,
definition: json!({"tasks": []}),
tags: vec![],
enabled: true,
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
@@ -294,20 +294,21 @@ async fn test_list_workflows_by_pack() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_list_workflows_with_filters() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
let pack_name = unique_pack_name();
let pack = create_test_pack(&ctx.pool, &pack_name).await.unwrap();
// Create workflows with different tags and enabled status
// Create workflows with different tags
let workflows = vec![
("workflow1", vec!["incident", "approval"], true),
("workflow2", vec!["incident"], false),
("workflow3", vec!["automation"], true),
("workflow1", vec!["incident", "approval"]),
("workflow2", vec!["incident"]),
("workflow3", vec!["automation"]),
];
for (ref_name, tags, enabled) in workflows {
for (ref_name, tags) in workflows {
let input = CreateWorkflowDefinitionInput {
r#ref: format!("test-pack.{}", ref_name),
pack: pack.id,
@@ -319,24 +320,12 @@ async fn test_list_workflows_with_filters() {
out_schema: None,
definition: json!({"tasks": []}),
tags: tags.iter().map(|s| s.to_string()).collect(),
enabled,
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
.unwrap();
}
// Filter by enabled (and pack_ref for isolation)
let response = ctx
.get(
&format!("/api/v1/workflows?enabled=true&pack_ref={}", pack_name),
ctx.token(),
)
.await
.unwrap();
let body: Value = response.json().await.unwrap();
assert_eq!(body["data"].as_array().unwrap().len(), 2);
// Filter by tag (and pack_ref for isolation)
let response = ctx
.get(
@@ -361,6 +350,7 @@ async fn test_list_workflows_with_filters() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_update_workflow() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -378,7 +368,6 @@ async fn test_update_workflow() {
out_schema: None,
definition: json!({"tasks": []}),
tags: vec!["test".to_string()],
enabled: true,
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
@@ -391,8 +380,7 @@ async fn test_update_workflow() {
json!({
"label": "Updated Label",
"description": "Updated description",
"version": "1.1.0",
"enabled": false
"version": "1.1.0"
}),
ctx.token(),
)
@@ -405,10 +393,10 @@ async fn test_update_workflow() {
assert_eq!(body["data"]["label"], "Updated Label");
assert_eq!(body["data"]["description"], "Updated description");
assert_eq!(body["data"]["version"], "1.1.0");
assert_eq!(body["data"]["enabled"], false);
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_update_workflow_not_found() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -427,6 +415,7 @@ async fn test_update_workflow_not_found() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_delete_workflow() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -444,7 +433,6 @@ async fn test_delete_workflow() {
out_schema: None,
definition: json!({"tasks": []}),
tags: vec![],
enabled: true,
};
WorkflowDefinitionRepository::create(&ctx.pool, input)
.await
@@ -468,6 +456,7 @@ async fn test_delete_workflow() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_delete_workflow_not_found() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();
@@ -480,6 +469,7 @@ async fn test_delete_workflow_not_found() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_create_workflow_requires_auth() {
let ctx = TestContext::new().await.unwrap();
@@ -504,6 +494,7 @@ async fn test_create_workflow_requires_auth() {
}
#[tokio::test]
#[ignore = "integration test — requires database"]
async fn test_workflow_validation() {
let ctx = TestContext::new().await.unwrap().with_auth().await.unwrap();

View File

@@ -16,12 +16,13 @@ attune-common = { path = "../common" }
# Async runtime
tokio = { workspace = true }
futures = { workspace = true }
# CLI framework
clap = { workspace = true, features = ["derive", "env", "string"] }
# HTTP client
reqwest = { workspace = true }
reqwest = { workspace = true, features = ["multipart", "stream"] }
# Serialization
serde = { workspace = true }
@@ -37,19 +38,29 @@ chrono = { workspace = true }
# Configuration
config = { workspace = true }
dirs = "5.0"
dirs = "6.0"
# URL encoding
urlencoding = "2.1"
url = { workspace = true }
# Archive/compression
tar = { workspace = true }
flate2 = { workspace = true }
# WebSocket client (for notifier integration)
tokio-tungstenite = { workspace = true }
# Hashing
sha2 = { workspace = true }
# Terminal UI
colored = "2.1"
comfy-table = "7.1"
indicatif = "0.17"
dialoguer = "0.11"
colored = "3.1"
comfy-table = { version = "7.2", features = ["custom_styling"] }
dialoguer = "0.12"
# Authentication
jsonwebtoken = { version = "10.2", features = ["rust_crypto"] }
jsonwebtoken = { workspace = true }
# Logging
tracing = { workspace = true }
@@ -58,7 +69,7 @@ tracing-subscriber = { workspace = true }
[dev-dependencies]
tempfile = { workspace = true }
wiremock = "0.6"
assert_cmd = "2.0"
predicates = "3.0"
mockito = "1.2"
assert_cmd = "2.2"
predicates = "3.1"
mockito = "1.7"
tokio-test = "0.4"

View File

@@ -1,5 +1,5 @@
use anyhow::{Context, Result};
use reqwest::{Client as HttpClient, Method, RequestBuilder, Response, StatusCode};
use reqwest::{header, multipart, Client as HttpClient, Method, RequestBuilder, StatusCode};
use serde::{de::DeserializeOwned, Serialize};
use std::path::PathBuf;
use std::time::Duration;
@@ -39,7 +39,7 @@ impl ApiClient {
Self {
client: HttpClient::builder()
.timeout(Duration::from_secs(30))
.timeout(Duration::from_secs(300)) // longer timeout for uploads
.build()
.expect("Failed to build HTTP client"),
base_url,
@@ -50,10 +50,15 @@ impl ApiClient {
}
/// Create a new API client
/// Return the base URL this client is configured to talk to.
pub fn base_url(&self) -> &str {
&self.base_url
}
#[cfg(test)]
pub fn new(base_url: String, auth_token: Option<String>) -> Self {
let client = HttpClient::builder()
.timeout(Duration::from_secs(30))
.timeout(Duration::from_secs(300))
.build()
.expect("Failed to build HTTP client");
@@ -78,13 +83,14 @@ impl ApiClient {
self.auth_token = None;
}
/// Refresh the authentication token using the refresh token
/// Refresh the authentication token using the refresh token.
///
/// Returns Ok(true) if refresh succeeded, Ok(false) if no refresh token available
/// Returns `Ok(true)` if refresh succeeded, `Ok(false)` if no refresh token
/// is available or the server rejected it.
async fn refresh_auth_token(&mut self) -> Result<bool> {
let refresh_token = match &self.refresh_token {
Some(token) => token.clone(),
None => return Ok(false), // No refresh token available
None => return Ok(false),
};
#[derive(Serialize)]
@@ -98,7 +104,6 @@ impl ApiClient {
refresh_token: String,
}
// Build refresh request without auth token
let url = format!("{}/auth/refresh", self.base_url);
let req = self
.client
@@ -108,7 +113,7 @@ impl ApiClient {
let response = req.send().await.context("Failed to refresh token")?;
if !response.status().is_success() {
// Refresh failed - clear tokens
// Refresh failed: clear tokens so we don't keep retrying
self.auth_token = None;
self.refresh_token = None;
return Ok(false);
@@ -123,7 +128,7 @@ impl ApiClient {
self.auth_token = Some(api_response.data.access_token.clone());
self.refresh_token = Some(api_response.data.refresh_token.clone());
// Persist to config file if we have the path
// Persist to config file
if self.config_path.is_some() {
if let Ok(mut config) = CliConfig::load() {
let _ = config.set_auth(
@@ -136,45 +141,98 @@ impl ApiClient {
Ok(true)
}
/// Build a request with common headers
fn build_request(&self, method: Method, path: &str) -> RequestBuilder {
// Auth endpoints live at /auth, not under /api/v1
let url = if path.starts_with("/auth") {
// ── Request building helpers ────────────────────────────────────────
/// Build a full URL from a path.
fn url_for(&self, path: &str) -> String {
if path.starts_with("/auth") {
format!("{}{}", self.base_url, path)
} else {
format!("{}/api/v1{}", self.base_url, path)
};
let mut req = self.client.request(method, &url);
}
}
/// Build a `RequestBuilder` with auth header applied.
fn build_request(&self, method: Method, path: &str) -> RequestBuilder {
let url = self.url_for(path);
let mut req = self.client.request(method, &url);
if let Some(token) = &self.auth_token {
req = req.bearer_auth(token);
}
req
}
/// Execute a request and handle the response with automatic token refresh
async fn execute<T: DeserializeOwned>(&mut self, req: RequestBuilder) -> Result<T> {
// ── Core execute-with-retry machinery ──────────────────────────────
/// Send a request that carries a JSON body. On a 401 response the token
/// is refreshed and the request is rebuilt & retried exactly once.
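/// (The rebuild is necessary because `RequestBuilder::send` consumes the
/// builder, so the first attempt's request cannot simply be resent.)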
async fn execute_json<T, B>(
&mut self,
method: Method,
path: &str,
body: Option<&B>,
) -> Result<T>
where
T: DeserializeOwned,
B: Serialize,
{
// First attempt
let req = self.attach_body(self.build_request(method.clone(), path), body);
let response = req.send().await.context("Failed to send request to API")?;
// If 401 and we have a refresh token, try to refresh once
if response.status() == StatusCode::UNAUTHORIZED && self.refresh_token.is_some() {
// Try to refresh the token
if self.refresh_auth_token().await? {
// Rebuild and retry the original request with new token
// Note: This is a simplified retry - the original request body is already consumed
// For a production implementation, we'd need to clone the request or store the body
return Err(anyhow::anyhow!(
"Token expired and was refreshed. Please retry your command."
));
}
if response.status() == StatusCode::UNAUTHORIZED
&& self.refresh_token.is_some()
&& self.refresh_auth_token().await?
{
// Retry with new token
let req = self.attach_body(self.build_request(method, path), body);
let response = req
.send()
.await
.context("Failed to send request to API (retry)")?;
return self.handle_response(response).await;
}
self.handle_response(response).await
}
/// Handle API response and extract data
async fn handle_response<T: DeserializeOwned>(&self, response: Response) -> Result<T> {
/// Send a request that carries a JSON body and expects no response body.
async fn execute_json_no_response<B: Serialize>(
&mut self,
method: Method,
path: &str,
body: Option<&B>,
) -> Result<()> {
let req = self.attach_body(self.build_request(method.clone(), path), body);
let response = req.send().await.context("Failed to send request to API")?;
if response.status() == StatusCode::UNAUTHORIZED
&& self.refresh_token.is_some()
&& self.refresh_auth_token().await?
{
let req = self.attach_body(self.build_request(method, path), body);
let response = req
.send()
.await
.context("Failed to send request to API (retry)")?;
return self.handle_empty_response(response).await;
}
self.handle_empty_response(response).await
}
/// Optionally attach a JSON body to a request builder.
fn attach_body<B: Serialize>(&self, req: RequestBuilder, body: Option<&B>) -> RequestBuilder {
match body {
Some(b) => req.json(b),
None => req,
}
}
// ── Response handling ──────────────────────────────────────────────
/// Parse a successful API response or return a descriptive error.
async fn handle_response<T: DeserializeOwned>(&self, response: reqwest::Response) -> Result<T> {
let status = response.status();
if status.is_success() {
@@ -189,7 +247,6 @@ impl ApiClient {
.await
.unwrap_or_else(|_| "Unknown error".to_string());
// Try to parse as API error
if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
anyhow::bail!("API error ({}): {}", status, api_error.error);
} else {
@@ -198,10 +255,30 @@ impl ApiClient {
}
}
/// Handle a response where we only care about success/failure, not a body.
async fn handle_empty_response(&self, response: reqwest::Response) -> Result<()> {
let status = response.status();
if status.is_success() {
Ok(())
} else {
let error_text = response
.text()
.await
.unwrap_or_else(|_| "Unknown error".to_string());
if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
anyhow::bail!("API error ({}): {}", status, api_error.error);
} else {
anyhow::bail!("API error ({}): {}", status, error_text);
}
}
}
// ── Public convenience methods ─────────────────────────────────────
/// GET request
pub async fn get<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
let req = self.build_request(Method::GET, path);
self.execute(req).await
self.execute_json::<T, ()>(Method::GET, path, None).await
}
/// GET request with query parameters (query string must be in path)
@@ -210,8 +287,7 @@ impl ApiClient {
/// Example: `client.get_with_query("/actions?enabled=true&pack=core").await`
#[allow(dead_code)]
pub async fn get_with_query<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
let req = self.build_request(Method::GET, path);
self.execute(req).await
self.execute_json::<T, ()>(Method::GET, path, None).await
}
/// POST request with JSON body
@@ -220,8 +296,7 @@ impl ApiClient {
path: &str,
body: &B,
) -> Result<T> {
let req = self.build_request(Method::POST, path).json(body);
self.execute(req).await
self.execute_json(Method::POST, path, Some(body)).await
}
/// PUT request with JSON body
@@ -232,8 +307,7 @@ impl ApiClient {
path: &str,
body: &B,
) -> Result<T> {
let req = self.build_request(Method::PUT, path).json(body);
self.execute(req).await
self.execute_json(Method::PUT, path, Some(body)).await
}
/// PATCH request with JSON body
@@ -242,8 +316,7 @@ impl ApiClient {
path: &str,
body: &B,
) -> Result<T> {
let req = self.build_request(Method::PATCH, path).json(body);
self.execute(req).await
self.execute_json(Method::PATCH, path, Some(body)).await
}
/// DELETE request with response parsing
@@ -254,8 +327,7 @@ impl ApiClient {
/// delete operations return metadata (e.g., cascade deletion summaries).
#[allow(dead_code)]
pub async fn delete<T: DeserializeOwned>(&mut self, path: &str) -> Result<T> {
let req = self.build_request(Method::DELETE, path);
self.execute(req).await
self.execute_json::<T, ()>(Method::DELETE, path, None).await
}
/// POST request without expecting response body
@@ -265,37 +337,153 @@ impl ApiClient {
/// Kept for API completeness even though not currently used.
#[allow(dead_code)]
pub async fn post_no_response<B: Serialize>(&mut self, path: &str, body: &B) -> Result<()> {
let req = self.build_request(Method::POST, path).json(body);
let response = req.send().await.context("Failed to send request to API")?;
let status = response.status();
if status.is_success() {
Ok(())
} else {
let error_text = response
.text()
.await
.unwrap_or_else(|_| "Unknown error".to_string());
anyhow::bail!("API error ({}): {}", status, error_text);
}
self.execute_json_no_response(Method::POST, path, Some(body))
.await
}
/// DELETE request without expecting response body
pub async fn delete_no_response(&mut self, path: &str) -> Result<()> {
let req = self.build_request(Method::DELETE, path);
self.execute_json_no_response::<()>(Method::DELETE, path, None)
.await
}
/// GET request that returns raw bytes and optional filename from Content-Disposition.
///
/// Used for downloading binary content (e.g., artifact files).
/// Returns `(bytes, content_type, optional_filename)`.
pub async fn download_bytes(
&mut self,
path: &str,
) -> Result<(Vec<u8>, String, Option<String>)> {
// First attempt
let req = self.build_request(Method::GET, path);
let response = req.send().await.context("Failed to send request to API")?;
if response.status() == StatusCode::UNAUTHORIZED
&& self.refresh_token.is_some()
&& self.refresh_auth_token().await?
{
// Retry with new token
let req = self.build_request(Method::GET, path);
let response = req
.send()
.await
.context("Failed to send request to API (retry)")?;
return self.handle_bytes_response(response).await;
}
self.handle_bytes_response(response).await
}
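// A hypothetical call site for download_bytes (the path and the fallback
// filename are illustrative, not part of this change):
//
// let (bytes, _mime, name) = client.download_bytes("/artifacts/42/file").await?;
// std::fs::write(name.unwrap_or_else(|| "artifact.bin".into()), &bytes)?;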
/// Parse a binary response, extracting content type and optional filename.
async fn handle_bytes_response(
&self,
response: reqwest::Response,
) -> Result<(Vec<u8>, String, Option<String>)> {
let status = response.status();
if status.is_success() {
Ok(())
let content_type = response
.headers()
.get(header::CONTENT_TYPE)
.and_then(|v| v.to_str().ok())
.unwrap_or("application/octet-stream")
.to_string();
let filename = response
.headers()
.get(header::CONTENT_DISPOSITION)
.and_then(|v| v.to_str().ok())
.and_then(|v| {
// Parse filename from Content-Disposition: attachment; filename="name.ext"
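// Note: this simple split does not handle the RFC 5987 `filename*=`
// form, so encoded filenames from that parameter are not extracted.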
v.split("filename=")
.nth(1)
.map(|f| f.trim_matches('"').trim_matches('\'').to_string())
});
let bytes = response
.bytes()
.await
.context("Failed to read response bytes")?;
Ok((bytes.to_vec(), content_type, filename))
} else {
let error_text = response
.text()
.await
.unwrap_or_else(|_| "Unknown error".to_string());
anyhow::bail!("API error ({}): {}", status, error_text);
if let Ok(api_error) = serde_json::from_str::<ApiError>(&error_text) {
anyhow::bail!("API error ({}): {}", status, api_error.error);
} else {
anyhow::bail!("API error ({}): {}", status, error_text);
}
}
}
/// POST a multipart/form-data request with a file field and optional text fields.
///
/// - `file_field_name`: the multipart field name for the file
/// - `file_bytes`: raw bytes of the file content
/// - `file_name`: filename hint sent in the Content-Disposition header
/// - `mime_type`: MIME type of the file (e.g. `"application/gzip"`)
/// - `extra_fields`: additional text key/value fields to include in the form
pub async fn multipart_post<T: DeserializeOwned>(
&mut self,
path: &str,
file_field_name: &str,
file_bytes: Vec<u8>,
file_name: &str,
mime_type: &str,
extra_fields: Vec<(&str, String)>,
) -> Result<T> {
// Helper closure that builds the multipart request from scratch.
// We need this because reqwest::multipart::Form is not Clone, so we
// must rebuild it for the retry attempt.
let build_multipart_request =
|client: &ApiClient, bytes: &[u8]| -> Result<reqwest::RequestBuilder> {
let url = format!("{}/api/v1{}", client.base_url, path);
let file_part = multipart::Part::bytes(bytes.to_vec())
.file_name(file_name.to_string())
.mime_str(mime_type)
.context("Invalid MIME type")?;
let mut form = multipart::Form::new().part(file_field_name.to_string(), file_part);
for (key, value) in &extra_fields {
form = form.text(key.to_string(), value.clone());
}
let mut req = client.client.post(&url).multipart(form);
if let Some(token) = &client.auth_token {
req = req.bearer_auth(token);
}
Ok(req)
};
// First attempt
let req = build_multipart_request(self, &file_bytes)?;
let response = req
.send()
.await
.context("Failed to send multipart request to API")?;
if response.status() == StatusCode::UNAUTHORIZED
&& self.refresh_token.is_some()
&& self.refresh_auth_token().await?
{
// Retry with new token
let req = build_multipart_request(self, &file_bytes)?;
let response = req
.send()
.await
.context("Failed to send multipart request to API (retry)")?;
return self.handle_response(response).await;
}
self.handle_response(response).await
}
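// A hypothetical call site (the endpoint, field names, and response type
// are illustrative):
//
// let resp: UploadPackResponse = client
//     .multipart_post(
//         "/packs/upload",
//         "archive",
//         tar_gz_bytes,
//         "pack.tar.gz",
//         "application/gzip",
//         vec![("force", "true".to_string())],
//     )
//     .await?;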
}
#[cfg(test)]
@@ -320,4 +508,22 @@ mod tests {
client.clear_auth_token();
assert!(client.auth_token.is_none());
}
#[test]
fn test_url_for_api_path() {
let client = ApiClient::new("http://localhost:8080".to_string(), None);
assert_eq!(
client.url_for("/actions"),
"http://localhost:8080/api/v1/actions"
);
}
#[test]
fn test_url_for_auth_path() {
let client = ApiClient::new("http://localhost:8080".to_string(), None);
assert_eq!(
client.url_for("/auth/login"),
"http://localhost:8080/auth/login"
);
}
}

View File

@@ -6,6 +6,7 @@ use std::collections::HashMap;
use crate::client::ApiClient;
use crate::config::CliConfig;
use crate::output::{self, OutputFormat};
use crate::wait::{wait_for_execution, WaitOptions};
#[derive(Subcommand)]
pub enum ActionCommands {
@@ -51,7 +52,7 @@ pub enum ActionCommands {
action_ref: String,
/// Skip confirmation prompt
#[arg(short, long)]
#[arg(long)]
yes: bool,
},
/// Execute an action
@@ -74,6 +75,11 @@ pub enum ActionCommands {
/// Timeout in seconds when waiting (default: 300)
#[arg(long, default_value = "300", requires = "wait")]
timeout: u64,
/// Notifier WebSocket base URL (e.g. ws://localhost:8081).
/// Derived from --api-url automatically when not set.
#[arg(long, requires = "wait")]
notifier_url: Option<String>,
},
}
@@ -84,7 +90,7 @@ struct Action {
action_ref: String,
pack_ref: String,
label: String,
description: String,
description: Option<String>,
entrypoint: String,
runtime: Option<i64>,
created: String,
@@ -99,7 +105,7 @@ struct ActionDetail {
pack: i64,
pack_ref: String,
label: String,
description: String,
description: Option<String>,
entrypoint: String,
runtime: Option<i64>,
param_schema: Option<serde_json::Value>,
@@ -182,6 +188,7 @@ pub async fn handle_action_command(
params_json,
wait,
timeout,
notifier_url,
} => {
handle_execute(
action_ref,
@@ -191,6 +198,7 @@ pub async fn handle_action_command(
api_url,
wait,
timeout,
notifier_url,
output_format,
)
.await
@@ -233,7 +241,7 @@ async fn handle_list(
let mut table = output::create_table();
output::add_header(
&mut table,
vec!["ID", "Pack", "Name", "Runner", "Enabled", "Description"],
vec!["ID", "Pack", "Name", "Runner", "Description"],
);
for action in actions {
@@ -245,8 +253,7 @@ async fn handle_list(
.runtime
.map(|r| r.to_string())
.unwrap_or_else(|| "none".to_string()),
"".to_string(),
output::truncate(&action.description, 40),
output::truncate(&action.description.unwrap_or_default(), 40),
]);
}
@@ -281,7 +288,10 @@ async fn handle_show(
("Reference", action.action_ref.clone()),
("Pack", action.pack_ref.clone()),
("Label", action.label.clone()),
("Description", action.description.clone()),
(
"Description",
action.description.unwrap_or_else(|| "None".to_string()),
),
("Entry Point", action.entrypoint.clone()),
(
"Runtime",
@@ -306,6 +316,7 @@ async fn handle_show(
Ok(())
}
#[allow(clippy::too_many_arguments)]
async fn handle_update(
action_ref: String,
label: Option<String>,
@@ -348,7 +359,10 @@ async fn handle_update(
("Ref", action.action_ref.clone()),
("Pack", action.pack_ref.clone()),
("Label", action.label.clone()),
("Description", action.description.clone()),
(
"Description",
action.description.unwrap_or_else(|| "None".to_string()),
),
("Entrypoint", action.entrypoint.clone()),
(
"Runtime",
@@ -407,6 +421,7 @@ async fn handle_delete(
Ok(())
}
#[allow(clippy::too_many_arguments)]
async fn handle_execute(
action_ref: String,
params: Vec<String>,
@@ -415,6 +430,7 @@ async fn handle_execute(
api_url: &Option<String>,
wait: bool,
timeout: u64,
notifier_url: Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
@@ -445,70 +461,63 @@ async fn handle_execute(
parameters,
};
match output_format {
OutputFormat::Table => {
output::print_info(&format!("Executing action: {}", action_ref));
}
_ => {}
if output_format == OutputFormat::Table {
output::print_info(&format!("Executing action: {}", action_ref));
}
let path = "/executions/execute".to_string();
let mut execution: Execution = client.post(&path, &request).await?;
let execution: Execution = client.post(&path, &request).await?;
if wait {
if !wait {
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&execution, output_format)?;
}
OutputFormat::Table => {
output::print_info(&format!(
"Waiting for execution {} to complete...",
execution.id
));
output::print_success(&format!("Execution {} started", execution.id));
output::print_key_value_table(vec![
("Execution ID", execution.id.to_string()),
("Action", execution.action_ref.clone()),
("Status", output::format_status(&execution.status)),
]);
}
_ => {}
}
// Poll for completion
let start = std::time::Instant::now();
let timeout_duration = std::time::Duration::from_secs(timeout);
loop {
if start.elapsed() > timeout_duration {
anyhow::bail!("Execution timed out after {} seconds", timeout);
}
let exec_path = format!("/executions/{}", execution.id);
execution = client.get(&exec_path).await?;
if execution.status == "succeeded"
|| execution.status == "failed"
|| execution.status == "canceled"
{
break;
}
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
}
return Ok(());
}
if output_format == OutputFormat::Table {
output::print_info(&format!(
"Waiting for execution {} to complete...",
execution.id
));
}
let verbose = matches!(output_format, OutputFormat::Table);
let summary = wait_for_execution(WaitOptions {
execution_id: execution.id,
timeout_secs: timeout,
api_client: &mut client,
notifier_ws_url: notifier_url,
verbose,
})
.await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&execution, output_format)?;
output::print_output(&summary, output_format)?;
}
OutputFormat::Table => {
output::print_success(&format!(
"Execution {} {}",
execution.id,
if wait { "completed" } else { "started" }
));
output::print_success(&format!("Execution {} completed", summary.id));
output::print_section("Execution Details");
output::print_key_value_table(vec![
("Execution ID", execution.id.to_string()),
("Action", execution.action_ref.clone()),
("Status", output::format_status(&execution.status)),
("Created", output::format_timestamp(&execution.created)),
("Updated", output::format_timestamp(&execution.updated)),
("Execution ID", summary.id.to_string()),
("Action", summary.action_ref.clone()),
("Status", output::format_status(&summary.status)),
("Created", output::format_timestamp(&summary.created)),
("Updated", output::format_timestamp(&summary.updated)),
]);
if let Some(result) = execution.result {
if let Some(result) = summary.result {
if !result.is_null() {
output::print_section("Result");
println!("{}", serde_json::to_string_pretty(&result)?);

File diff suppressed because it is too large

View File

@@ -17,6 +17,14 @@ pub enum AuthCommands {
/// Password (will prompt if not provided)
#[arg(long)]
password: Option<String>,
/// API URL to log in to (saved into the profile for future use)
#[arg(long)]
url: Option<String>,
/// Save credentials into a named profile (creates it if it doesn't exist)
#[arg(long)]
save_profile: Option<String>,
},
/// Log out and clear authentication tokens
Logout,
@@ -53,8 +61,22 @@ pub async fn handle_auth_command(
output_format: OutputFormat,
) -> Result<()> {
match command {
AuthCommands::Login { username, password } => {
handle_login(username, password, profile, api_url, output_format).await
AuthCommands::Login {
username,
password,
url,
save_profile,
} => {
// --url is a convenient alias for --api-url at login time
let effective_api_url = url.or_else(|| api_url.clone());
handle_login(
username,
password,
save_profile.as_ref().or(profile.as_ref()),
&effective_api_url,
output_format,
)
.await
}
AuthCommands::Logout => handle_logout(profile, output_format).await,
AuthCommands::Whoami => handle_whoami(profile, api_url, output_format).await,
@@ -65,11 +87,46 @@ pub async fn handle_auth_command(
async fn handle_login(
username: String,
password: Option<String>,
profile: &Option<String>,
profile: Option<&String>,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
// Determine which profile name will own these credentials.
// If --save-profile / --profile was given, use that; otherwise use the
// currently-active profile.
let mut config = CliConfig::load()?;
let target_profile_name = profile
.cloned()
.unwrap_or_else(|| config.current_profile.clone());
// If a URL was provided and the target profile doesn't exist yet, create it.
if !config.profiles.contains_key(&target_profile_name) {
let url = api_url
.clone()
.unwrap_or_else(|| "http://localhost:8080".to_string());
use crate::config::Profile;
config.set_profile(
target_profile_name.clone(),
Profile {
api_url: url,
auth_token: None,
refresh_token: None,
output_format: None,
description: None,
},
)?;
} else if let Some(url) = api_url {
// Profile exists — update its api_url if an explicit URL was provided.
if let Some(p) = config.profiles.get_mut(&target_profile_name) {
p.api_url = url.clone();
}
config.save()?;
}
// Build a temporary config view that points at the target profile so
// ApiClient uses the right base URL.
let mut login_config = CliConfig::load()?;
login_config.current_profile = target_profile_name.clone();
// Prompt for password if not provided
let password = match password {
@@ -82,7 +139,7 @@ async fn handle_login(
}
};
let mut client = ApiClient::from_config(&config, api_url);
let mut client = ApiClient::from_config(&login_config, api_url);
let login_req = LoginRequest {
login: username,
@@ -91,12 +148,20 @@ async fn handle_login(
let response: LoginResponse = client.post("/auth/login", &login_req).await?;
// Save tokens to config
// Persist tokens into the target profile.
let mut config = CliConfig::load()?;
config.set_auth(
response.access_token.clone(),
response.refresh_token.clone(),
)?;
// Ensure the profile exists (it may have just been created above and saved).
if let Some(p) = config.profiles.get_mut(&target_profile_name) {
p.auth_token = Some(response.access_token.clone());
p.refresh_token = Some(response.refresh_token.clone());
config.save()?;
} else {
// Fallback: set_auth writes to the current profile.
config.set_auth(
response.access_token.clone(),
response.refresh_token.clone(),
)?;
}
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
@@ -105,6 +170,12 @@ async fn handle_login(
OutputFormat::Table => {
output::print_success("Successfully logged in");
output::print_info(&format!("Token expires in {} seconds", response.expires_in));
if target_profile_name != config.current_profile {
output::print_info(&format!(
"Credentials saved to profile '{}'",
target_profile_name
));
}
}
}
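// With the new flags, a first-time login that creates and targets a fresh
// profile could look like this (binary name and values are hypothetical):
//
// attune auth login --username admin --url http://localhost:8080 --save-profile staging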

View File

@@ -79,7 +79,7 @@ pub async fn handle_config_command(
}
async fn handle_list(output_format: OutputFormat) -> Result<()> {
let config = CliConfig::load()?; // Config commands always use default profile
let config = CliConfig::load()?; // Config commands always use default profile
let all_config = config.list_all();
match output_format {
@@ -105,7 +105,7 @@ async fn handle_list(output_format: OutputFormat) -> Result<()> {
}
async fn handle_get(key: String, output_format: OutputFormat) -> Result<()> {
let config = CliConfig::load()?; // Config commands always use default profile
let config = CliConfig::load()?; // Config commands always use default profile
let value = config.get_value(&key)?;
match output_format {
@@ -125,7 +125,7 @@ async fn handle_get(key: String, output_format: OutputFormat) -> Result<()> {
}
async fn handle_profiles(output_format: OutputFormat) -> Result<()> {
let config = CliConfig::load()?; // Config commands always use default profile
let config = CliConfig::load()?; // Config commands always use default profile
let profiles = config.list_profiles();
let current = &config.current_profile;
@@ -170,12 +170,12 @@ async fn handle_profiles(output_format: OutputFormat) -> Result<()> {
}
async fn handle_current(output_format: OutputFormat) -> Result<()> {
let config = CliConfig::load()?; // Config commands always use default profile
let config = CliConfig::load()?; // Config commands always use default profile
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
let result = serde_json::json!({
"current_profile": config.current_profile
"profile": config.current_profile
});
output::print_output(&result, output_format)?;
}
@@ -194,7 +194,7 @@ async fn handle_use(name: String, output_format: OutputFormat) -> Result<()> {
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
let result = serde_json::json!({
"current_profile": name,
"profile": name,
"message": "Switched profile"
});
output::print_output(&result, output_format)?;
@@ -266,7 +266,7 @@ async fn handle_remove_profile(name: String, output_format: OutputFormat) -> Res
}
async fn handle_show_profile(name: String, output_format: OutputFormat) -> Result<()> {
let config = CliConfig::load()?; // Config commands always use default profile
let config = CliConfig::load()?; // Config commands always use default profile
let profile = config
.get_profile(&name)
.context(format!("Profile '{}' not found", name))?;
@@ -299,10 +299,6 @@ async fn handle_show_profile(name: String, output_format: OutputFormat) -> Resul
),
];
if let Some(output_format) = &profile.output_format {
pairs.push(("Output Format", output_format.clone()));
}
if let Some(description) = &profile.description {
pairs.push(("Description", description.clone()));
}

View File

@@ -50,7 +50,7 @@ pub enum ExecutionCommands {
execution_id: i64,
/// Skip confirmation prompt
#[arg(short = 'y', long)]
#[arg(long)]
yes: bool,
},
/// Get raw execution result
@@ -163,6 +163,7 @@ pub async fn handle_execution_command(
}
}
#[allow(clippy::too_many_arguments)]
async fn handle_list(
profile: &Option<String>,
pack: Option<String>,

View File

@@ -0,0 +1,605 @@
use anyhow::Result;
use clap::Subcommand;
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use sha2::{Digest, Sha256};
use crate::client::ApiClient;
use crate::config::CliConfig;
use crate::output::{self, OutputFormat};
#[derive(Subcommand)]
pub enum KeyCommands {
/// List all keys (values redacted)
List {
/// Filter by owner type (system, identity, pack, action, sensor)
#[arg(long)]
owner_type: Option<String>,
/// Filter by owner string
#[arg(long)]
owner: Option<String>,
/// Page number
#[arg(long, default_value = "1")]
page: u32,
/// Items per page
#[arg(long, default_value = "50")]
per_page: u32,
},
/// Show details of a specific key
Show {
/// Key reference identifier
key_ref: String,
/// Decrypt and display the actual value (otherwise a SHA-256 hash is shown)
#[arg(short = 'd', long)]
decrypt: bool,
},
/// Create a new key/secret
Create {
/// Unique reference for the key (e.g., "github_token")
#[arg(long)]
r#ref: String,
/// Human-readable name for the key
#[arg(long)]
name: String,
/// The secret value to store. Plain strings are stored as JSON strings.
/// Use JSON syntax for structured values (e.g., '{"user":"admin","pass":"s3cret"}').
#[arg(long)]
value: String,
/// Owner type (system, identity, pack, action, sensor)
#[arg(long, default_value = "system")]
owner_type: String,
/// Owner string identifier
#[arg(long)]
owner: Option<String>,
/// Owner pack reference (auto-resolves pack ID)
#[arg(long)]
owner_pack_ref: Option<String>,
/// Owner action reference (auto-resolves action ID)
#[arg(long)]
owner_action_ref: Option<String>,
/// Owner sensor reference (auto-resolves sensor ID)
#[arg(long)]
owner_sensor_ref: Option<String>,
/// Encrypt the value before storing (default: unencrypted)
#[arg(short = 'e', long)]
encrypt: bool,
},
/// Update an existing key/secret
Update {
/// Key reference identifier
key_ref: String,
/// Update the human-readable name
#[arg(long)]
name: Option<String>,
/// Update the secret value. Plain strings are stored as JSON strings.
/// Use JSON syntax for structured values (e.g., '{"user":"admin","pass":"s3cret"}').
#[arg(long)]
value: Option<String>,
/// Update encryption status
#[arg(long)]
encrypted: Option<bool>,
},
/// Delete a key/secret
Delete {
/// Key reference identifier
key_ref: String,
/// Skip confirmation prompt
#[arg(long)]
yes: bool,
},
}
// ── Response / request types used for (de)serialization against the API ────
#[derive(Debug, Serialize, Deserialize)]
struct KeyResponse {
id: i64,
#[serde(rename = "ref")]
key_ref: String,
owner_type: String,
#[serde(default)]
owner: Option<String>,
#[serde(default)]
owner_identity: Option<i64>,
#[serde(default)]
owner_pack: Option<i64>,
#[serde(default)]
owner_pack_ref: Option<String>,
#[serde(default)]
owner_action: Option<i64>,
#[serde(default)]
owner_action_ref: Option<String>,
#[serde(default)]
owner_sensor: Option<i64>,
#[serde(default)]
owner_sensor_ref: Option<String>,
name: String,
encrypted: bool,
#[serde(default)]
value: JsonValue,
created: String,
updated: String,
}
#[derive(Debug, Serialize, Deserialize)]
struct KeySummary {
id: i64,
#[serde(rename = "ref")]
key_ref: String,
owner_type: String,
#[serde(default)]
owner: Option<String>,
name: String,
encrypted: bool,
created: String,
}
#[derive(Debug, Serialize)]
struct CreateKeyRequestBody {
r#ref: String,
owner_type: String,
#[serde(skip_serializing_if = "Option::is_none")]
owner: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
owner_pack_ref: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
owner_action_ref: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
owner_sensor_ref: Option<String>,
name: String,
value: JsonValue,
encrypted: bool,
}
#[derive(Debug, Serialize)]
struct UpdateKeyRequestBody {
#[serde(skip_serializing_if = "Option::is_none")]
name: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
value: Option<JsonValue>,
#[serde(skip_serializing_if = "Option::is_none")]
encrypted: Option<bool>,
}
// ── Command dispatch ───────────────────────────────────────────────────────
pub async fn handle_key_command(
profile: &Option<String>,
command: KeyCommands,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
match command {
KeyCommands::List {
owner_type,
owner,
page,
per_page,
} => {
handle_list(
profile,
owner_type,
owner,
page,
per_page,
api_url,
output_format,
)
.await
}
KeyCommands::Show { key_ref, decrypt } => {
handle_show(profile, key_ref, decrypt, api_url, output_format).await
}
KeyCommands::Create {
r#ref,
name,
value,
owner_type,
owner,
owner_pack_ref,
owner_action_ref,
owner_sensor_ref,
encrypt,
} => {
handle_create(
profile,
r#ref,
name,
value,
owner_type,
owner,
owner_pack_ref,
owner_action_ref,
owner_sensor_ref,
encrypt,
api_url,
output_format,
)
.await
}
KeyCommands::Update {
key_ref,
name,
value,
encrypted,
} => {
handle_update(
profile,
key_ref,
name,
value,
encrypted,
api_url,
output_format,
)
.await
}
KeyCommands::Delete { key_ref, yes } => {
handle_delete(profile, key_ref, yes, api_url, output_format).await
}
}
}
// ── Handlers ───────────────────────────────────────────────────────────────
#[allow(clippy::too_many_arguments)]
async fn handle_list(
profile: &Option<String>,
owner_type: Option<String>,
owner: Option<String>,
page: u32,
per_page: u32,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let mut query_params = vec![format!("page={}", page), format!("per_page={}", per_page)];
if let Some(ot) = owner_type {
query_params.push(format!("owner_type={}", ot));
}
if let Some(o) = owner {
query_params.push(format!("owner={}", o));
}
let path = format!("/keys?{}", query_params.join("&"));
let keys: Vec<KeySummary> = client.get(&path).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&keys, output_format)?;
}
OutputFormat::Table => {
if keys.is_empty() {
output::print_info("No keys found");
} else {
let mut table = output::create_table();
output::add_header(
&mut table,
vec![
"ID",
"Ref",
"Name",
"Owner Type",
"Owner",
"Encrypted",
"Created",
],
);
for key in keys {
table.add_row(vec![
key.id.to_string(),
key.key_ref.clone(),
key.name.clone(),
key.owner_type.clone(),
key.owner.clone().unwrap_or_else(|| "-".to_string()),
output::format_bool(key.encrypted),
output::format_timestamp(&key.created),
]);
}
println!("{}", table);
}
}
}
Ok(())
}
async fn handle_show(
profile: &Option<String>,
key_ref: String,
decrypt: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let path = format!("/keys/{}", urlencoding::encode(&key_ref));
let key: KeyResponse = client.get(&path).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
if decrypt {
output::print_output(&key, output_format)?;
} else {
// Redact value — replace with hash
let mut redacted = serde_json::to_value(&key)?;
if let Some(obj) = redacted.as_object_mut() {
obj.insert(
"value".to_string(),
JsonValue::String(hash_value_for_display(&key.value)),
);
}
output::print_output(&redacted, output_format)?;
}
}
OutputFormat::Table => {
output::print_section(&format!("Key: {}", key.key_ref));
let mut pairs = vec![
("ID", key.id.to_string()),
("Reference", key.key_ref.clone()),
("Name", key.name.clone()),
("Owner Type", key.owner_type.clone()),
(
"Owner",
key.owner.clone().unwrap_or_else(|| "-".to_string()),
),
];
if let Some(ref pack_ref) = key.owner_pack_ref {
pairs.push(("Owner Pack", pack_ref.clone()));
}
if let Some(ref action_ref) = key.owner_action_ref {
pairs.push(("Owner Action", action_ref.clone()));
}
if let Some(ref sensor_ref) = key.owner_sensor_ref {
pairs.push(("Owner Sensor", sensor_ref.clone()));
}
pairs.push(("Encrypted", output::format_bool(key.encrypted)));
if decrypt {
pairs.push(("Value", format_value_for_display(&key.value)));
} else {
pairs.push(("Value (SHA-256)", hash_value_for_display(&key.value)));
pairs.push((
"",
"(use --decrypt / -d to reveal the actual value)".to_string(),
));
}
pairs.push(("Created", output::format_timestamp(&key.created)));
pairs.push(("Updated", output::format_timestamp(&key.updated)));
output::print_key_value_table(pairs);
}
}
Ok(())
}
#[allow(clippy::too_many_arguments)]
async fn handle_create(
profile: &Option<String>,
key_ref: String,
name: String,
value: String,
owner_type: String,
owner: Option<String>,
owner_pack_ref: Option<String>,
owner_action_ref: Option<String>,
owner_sensor_ref: Option<String>,
encrypted: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
// Validate owner_type before sending
validate_owner_type(&owner_type)?;
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let json_value = parse_value_as_json(&value);
let request = CreateKeyRequestBody {
r#ref: key_ref,
owner_type,
owner,
owner_pack_ref,
owner_action_ref,
owner_sensor_ref,
name,
value: json_value,
encrypted,
};
let key: KeyResponse = client.post("/keys", &request).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&key, output_format)?;
}
OutputFormat::Table => {
output::print_success(&format!("Key '{}' created successfully", key.key_ref));
output::print_key_value_table(vec![
("ID", key.id.to_string()),
("Reference", key.key_ref.clone()),
("Name", key.name.clone()),
("Owner Type", key.owner_type.clone()),
(
"Owner",
key.owner.clone().unwrap_or_else(|| "-".to_string()),
),
("Encrypted", output::format_bool(key.encrypted)),
("Created", output::format_timestamp(&key.created)),
]);
}
}
Ok(())
}
async fn handle_update(
profile: &Option<String>,
key_ref: String,
name: Option<String>,
value: Option<String>,
encrypted: Option<bool>,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
if name.is_none() && value.is_none() && encrypted.is_none() {
anyhow::bail!(
"At least one field must be provided to update (--name, --value, or --encrypted)"
);
}
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let json_value = value.map(|v| parse_value_as_json(&v));
let request = UpdateKeyRequestBody {
name,
value: json_value,
encrypted,
};
let path = format!("/keys/{}", urlencoding::encode(&key_ref));
let key: KeyResponse = client.put(&path, &request).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&key, output_format)?;
}
OutputFormat::Table => {
output::print_success(&format!("Key '{}' updated successfully", key.key_ref));
output::print_key_value_table(vec![
("ID", key.id.to_string()),
("Reference", key.key_ref.clone()),
("Name", key.name.clone()),
("Owner Type", key.owner_type.clone()),
(
"Owner",
key.owner.clone().unwrap_or_else(|| "-".to_string()),
),
("Encrypted", output::format_bool(key.encrypted)),
("Updated", output::format_timestamp(&key.updated)),
]);
}
}
Ok(())
}
async fn handle_delete(
profile: &Option<String>,
key_ref: String,
yes: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
// Confirm deletion unless --yes is provided
if !yes && matches!(output_format, OutputFormat::Table) {
let confirm = dialoguer::Confirm::new()
.with_prompt(format!(
"Are you sure you want to delete key '{}'?",
key_ref
))
.default(false)
.interact()?;
if !confirm {
output::print_info("Deletion cancelled");
return Ok(());
}
}
let path = format!("/keys/{}", urlencoding::encode(&key_ref));
client.delete_no_response(&path).await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
let msg =
serde_json::json!({"message": format!("Key '{}' deleted successfully", key_ref)});
output::print_output(&msg, output_format)?;
}
OutputFormat::Table => {
output::print_success(&format!("Key '{}' deleted successfully", key_ref));
}
}
Ok(())
}
// ── Helpers ────────────────────────────────────────────────────────────────
/// Validate that the owner_type string is one of the accepted values.
fn validate_owner_type(owner_type: &str) -> Result<()> {
const VALID: &[&str] = &["system", "identity", "pack", "action", "sensor"];
if !VALID.contains(&owner_type) {
anyhow::bail!(
"Invalid owner type '{}'. Must be one of: {}",
owner_type,
VALID.join(", ")
);
}
Ok(())
}
/// Parse a CLI string value into a [`JsonValue`].
///
/// If the input is valid JSON (object, array, number, boolean, null, or
/// quoted string), it is used as-is. Otherwise, it is treated as a plain
/// string and wrapped in a JSON string value.
fn parse_value_as_json(input: &str) -> JsonValue {
match serde_json::from_str::<JsonValue>(input) {
Ok(v) => v,
Err(_) => JsonValue::String(input.to_string()),
}
}
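// Behavior follows directly from the match above, e.g.:
//   parse_value_as_json("hello")           -> JsonValue::String("hello")
//   parse_value_as_json("42")              -> a JSON number
//   parse_value_as_json(r#"{"user":"x"}"#) -> a JSON object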
/// Format a [`JsonValue`] for table display.
fn format_value_for_display(value: &JsonValue) -> String {
match value {
JsonValue::String(s) => s.clone(),
other => serde_json::to_string_pretty(other).unwrap_or_else(|_| other.to_string()),
}
}
/// Compute a SHA-256 hash of the JSON value for display purposes.
///
/// This lets users verify a value matches expectations without revealing
/// the actual content (e.g., to confirm it hasn't changed).
fn hash_value_for_display(value: &JsonValue) -> String {
let serialized = serde_json::to_string(value).unwrap_or_default();
let mut hasher = Sha256::new();
hasher.update(serialized.as_bytes());
let result = hasher.finalize();
format!("sha256:{:x}", result)
}
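// A displayed value therefore looks like "sha256:<64 hex digits>";
// comparing hashes across `show` invocations confirms the stored value is
// unchanged without ever printing it.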

View File

@@ -1,9 +1,12 @@
pub mod action;
pub mod artifact;
pub mod auth;
pub mod config;
pub mod execution;
pub mod key;
pub mod pack;
pub mod pack_index;
pub mod rule;
pub mod sensor;
pub mod trigger;
pub mod workflow;

View File

@@ -1,5 +1,6 @@
use anyhow::Result;
use anyhow::{Context, Result};
use clap::Subcommand;
use flate2::{write::GzEncoder, Compression};
use serde::{Deserialize, Serialize};
use std::path::Path;
@@ -10,6 +11,37 @@ use crate::output::{self, OutputFormat};
#[derive(Subcommand)]
pub enum PackCommands {
/// Create an empty pack
///
/// Creates a new pack with no actions, triggers, rules, or sensors.
/// Use --interactive (-i) to be prompted for each field, or provide
/// fields via flags. Only --ref is required in non-interactive mode
/// (--label defaults to a title-cased ref, version defaults to 0.1.0).
Create {
/// Unique reference identifier (e.g., "my_pack", "slack")
#[arg(long, short = 'r')]
r#ref: Option<String>,
/// Human-readable label (defaults to title-cased ref)
#[arg(long, short)]
label: Option<String>,
/// Pack description
#[arg(long, short)]
description: Option<String>,
/// Pack version (semver format recommended)
#[arg(long = "pack-version", default_value = "0.1.0")]
pack_version: String,
/// Tags for categorization (comma-separated)
#[arg(long, value_delimiter = ',')]
tags: Vec<String>,
/// Interactive mode — prompt for each field
#[arg(long, short)]
interactive: bool,
},
/// List all installed packs
List {
/// Filter by pack name
@@ -63,10 +95,6 @@ pub enum PackCommands {
/// Update version
#[arg(long)]
version: Option<String>,
/// Update enabled status
#[arg(long)]
enabled: Option<bool>,
},
/// Uninstall a pack
Uninstall {
@@ -74,12 +102,12 @@ pub enum PackCommands {
pack_ref: String,
/// Skip confirmation prompt
#[arg(short = 'y', long)]
#[arg(long)]
yes: bool,
},
/// Register a pack from a local directory
/// Register a pack from a local directory (path must be accessible by the API server)
Register {
/// Path to pack directory
/// Path to pack directory (must be a path the API server can access)
path: String,
/// Force re-registration if pack already exists
@@ -90,6 +118,22 @@ pub enum PackCommands {
#[arg(long)]
skip_tests: bool,
},
/// Upload a local pack directory to the API server and register it
///
/// This command tarballs the local directory and streams it to the API,
/// so it works regardless of whether the API is local or running in Docker.
Upload {
/// Path to the local pack directory (must contain pack.yaml)
path: String,
/// Force re-registration if a pack with the same ref already exists
#[arg(short, long)]
force: bool,
/// Skip running pack tests after upload
#[arg(long)]
skip_tests: bool,
},
/// Test a pack's test suite
Test {
/// Pack reference (name) or path to pack directory
@@ -198,8 +242,6 @@ struct Pack {
#[serde(default)]
keywords: Option<Vec<String>>,
#[serde(default)]
enabled: Option<bool>,
#[serde(default)]
metadata: Option<serde_json::Value>,
created: String,
updated: String,
@@ -225,8 +267,6 @@ struct PackDetail {
#[serde(default)]
keywords: Option<Vec<String>>,
#[serde(default)]
enabled: Option<bool>,
#[serde(default)]
metadata: Option<serde_json::Value>,
created: String,
updated: String,
@@ -256,6 +296,26 @@ struct RegisterPackRequest {
skip_tests: bool,
}
#[derive(Debug, Serialize, Deserialize)]
struct UploadPackResponse {
pack: Pack,
#[serde(default)]
test_result: Option<serde_json::Value>,
#[serde(default)]
tests_skipped: bool,
}
#[derive(Debug, Serialize)]
struct CreatePackBody {
r#ref: String,
label: String,
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<String>,
version: String,
#[serde(default)]
tags: Vec<String>,
}
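// The Upload command above tarballs a local pack directory and streams it
// to the API. A minimal sketch of producing such a .tar.gz in memory with
// the tar and flate2 crates this file imports (the function name and
// archive layout are illustrative, not the actual implementation):
fn tarball_dir_sketch(dir: &Path) -> Result<Vec<u8>> {
    let encoder = GzEncoder::new(Vec::new(), Compression::default());
    let mut builder = tar::Builder::new(encoder);
    // Add the directory's contents at the root of the archive.
    builder.append_dir_all(".", dir)?;
    // Close the tar stream, then the gzip stream, yielding the final bytes.
    let encoder = builder.into_inner()?;
    Ok(encoder.finish()?)
}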
pub async fn handle_pack_command(
profile: &Option<String>,
command: PackCommands,
@@ -263,6 +323,27 @@ pub async fn handle_pack_command(
output_format: OutputFormat,
) -> Result<()> {
match command {
PackCommands::Create {
r#ref,
label,
description,
pack_version,
tags,
interactive,
} => {
handle_create(
profile,
r#ref,
label,
description,
pack_version,
tags,
interactive,
api_url,
output_format,
)
.await
}
PackCommands::List { name } => handle_list(profile, name, api_url, output_format).await,
PackCommands::Show { pack_ref } => {
handle_show(profile, pack_ref, api_url, output_format).await
@@ -296,6 +377,11 @@ pub async fn handle_pack_command(
force,
skip_tests,
} => handle_register(profile, path, force, skip_tests, api_url, output_format).await,
PackCommands::Upload {
path,
force,
skip_tests,
} => handle_upload(profile, path, force, skip_tests, api_url, output_format).await,
PackCommands::Test {
pack,
verbose,
@@ -310,7 +396,6 @@ pub async fn handle_pack_command(
label,
description,
version,
enabled,
} => {
handle_update(
profile,
@@ -318,7 +403,6 @@ pub async fn handle_pack_command(
label,
description,
version,
enabled,
api_url,
output_format,
)
@@ -370,6 +454,168 @@ pub async fn handle_pack_command(
}
}
/// Derive a human-readable label from a pack ref.
///
/// Splits on `_`, `-`, or `.` and title-cases each word.
fn label_from_ref(r: &str) -> String {
r.split(['_', '-', '.'])
.filter(|s| !s.is_empty())
.map(|word| {
let mut chars = word.chars();
match chars.next() {
Some(first) => {
let upper: String = first.to_uppercase().collect();
format!("{}{}", upper, chars.as_str())
}
None => String::new(),
}
})
.collect::<Vec<_>>()
.join(" ")
}
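// e.g. label_from_ref("my_pack") == "My Pack"
//      label_from_ref("slack.alerts") == "Slack Alerts"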
#[allow(clippy::too_many_arguments)]
async fn handle_create(
profile: &Option<String>,
ref_flag: Option<String>,
label_flag: Option<String>,
description_flag: Option<String>,
version_flag: String,
tags_flag: Vec<String>,
interactive: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
// ── Collect field values ────────────────────────────────────────
let (pack_ref, label, description, version, tags) = if interactive {
// Interactive prompts
let pack_ref: String = match ref_flag {
Some(r) => r,
None => dialoguer::Input::new()
.with_prompt("Pack ref (unique identifier, e.g. \"my_pack\")")
.interact_text()?,
};
let default_label = label_flag
.clone()
.unwrap_or_else(|| label_from_ref(&pack_ref));
let label: String = dialoguer::Input::new()
.with_prompt("Label")
.default(default_label)
.interact_text()?;
let default_desc = description_flag.clone().unwrap_or_default();
let description: String = dialoguer::Input::new()
.with_prompt("Description (optional, Enter to skip)")
.default(default_desc)
.allow_empty(true)
.interact_text()?;
let description = if description.is_empty() {
None
} else {
Some(description)
};
let version: String = dialoguer::Input::new()
.with_prompt("Version")
.default(version_flag)
.interact_text()?;
let default_tags = if tags_flag.is_empty() {
String::new()
} else {
tags_flag.join(", ")
};
let tags_input: String = dialoguer::Input::new()
.with_prompt("Tags (comma-separated, optional)")
.default(default_tags)
.allow_empty(true)
.interact_text()?;
let tags: Vec<String> = tags_input
.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect();
// Show summary and confirm
println!();
output::print_section("New Pack Summary");
output::print_key_value_table(vec![
("Ref", pack_ref.clone()),
("Label", label.clone()),
(
"Description",
description.clone().unwrap_or_else(|| "(none)".to_string()),
),
("Version", version.clone()),
(
"Tags",
if tags.is_empty() {
"(none)".to_string()
} else {
tags.join(", ")
},
),
]);
println!();
let confirm = dialoguer::Confirm::new()
.with_prompt("Create this pack?")
.default(true)
.interact()?;
if !confirm {
output::print_info("Pack creation cancelled");
return Ok(());
}
(pack_ref, label, description, version, tags)
} else {
// Non-interactive: ref is required
let pack_ref = ref_flag.ok_or_else(|| {
anyhow::anyhow!(
"Pack ref is required. Provide --ref <value> or use --interactive mode."
)
})?;
let label = label_flag.unwrap_or_else(|| label_from_ref(&pack_ref));
let description = description_flag;
let version = version_flag;
let tags = tags_flag;
(pack_ref, label, description, version, tags)
};
// ── Send request ────────────────────────────────────────────────
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let body = CreatePackBody {
r#ref: pack_ref,
label,
description,
version,
tags,
};
let pack: Pack = client.post("/packs", &body).await?;
// ── Output ──────────────────────────────────────────────────────
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&pack, output_format)?;
}
OutputFormat::Table => {
output::print_success(&format!(
"Pack '{}' created successfully (id: {})",
pack.pack_ref, pack.id
));
}
}
Ok(())
}
async fn handle_list(
profile: &Option<String>,
name: Option<String>,
@@ -395,17 +641,13 @@ async fn handle_list(
output::print_info("No packs found");
} else {
let mut table = output::create_table();
output::add_header(&mut table, vec!["ID", "Name", "Version", "Description"]);
for pack in packs {
table.add_row(vec![
pack.id.to_string(),
pack.pack_ref,
pack.version,
output::truncate(&pack.description.unwrap_or_default(), 50),
]);
}
@@ -449,7 +691,6 @@ async fn handle_show(
"Description",
pack.description.unwrap_or_else(|| "None".to_string()),
),
("Enabled", output::format_bool(pack.enabled.unwrap_or(true))),
("Actions", pack.action_count.unwrap_or(0).to_string()),
("Triggers", pack.trigger_count.unwrap_or(0).to_string()),
("Rules", pack.rule_count.unwrap_or(0).to_string()),
@@ -470,6 +711,7 @@ async fn handle_show(
Ok(())
}
#[allow(clippy::too_many_arguments)]
async fn handle_install(
profile: &Option<String>,
source: String,
@@ -487,18 +729,15 @@ async fn handle_install(
// Detect source type
let source_type = detect_source_type(&source, ref_spec.as_deref(), no_registry);
if output_format == OutputFormat::Table {
output::print_info(&format!(
"Installing pack from: {} ({})",
source, source_type
));
output::print_info("Starting installation...");
if skip_deps {
output::print_info("⚠ Dependency validation will be skipped");
}
}
let request = InstallPackRequest {
@@ -593,6 +832,149 @@ async fn handle_uninstall(
Ok(())
}
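/// Handles `attune pack upload <path>`: archives a local pack directory
/// in memory and uploads it to the API's `/packs/upload` endpoint.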
async fn handle_upload(
profile: &Option<String>,
path: String,
force: bool,
skip_tests: bool,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
let pack_dir = Path::new(&path);
// Validate the directory exists and contains pack.yaml
if !pack_dir.exists() {
anyhow::bail!("Path does not exist: {}", path);
}
if !pack_dir.is_dir() {
anyhow::bail!("Path is not a directory: {}", path);
}
let pack_yaml_path = pack_dir.join("pack.yaml");
if !pack_yaml_path.exists() {
anyhow::bail!("No pack.yaml found in: {}", path);
}
// Read pack ref from pack.yaml so we can display it
let pack_yaml_content =
std::fs::read_to_string(&pack_yaml_path).context("Failed to read pack.yaml")?;
let pack_yaml: serde_yaml_ng::Value =
serde_yaml_ng::from_str(&pack_yaml_content).context("Failed to parse pack.yaml")?;
let pack_ref = pack_yaml
.get("ref")
.and_then(|v| v.as_str())
.unwrap_or("unknown");
if output_format == OutputFormat::Table {
output::print_info(&format!("Uploading pack '{}' from: {}", pack_ref, path));
output::print_info("Creating archive...");
}
// Build an in-memory tar.gz of the pack directory
let tar_gz_bytes = {
let buf = Vec::new();
let enc = GzEncoder::new(buf, Compression::default());
let mut tar = tar::Builder::new(enc);
// Walk the directory and add files to the archive
// We strip the leading path so the archive root is the pack directory contents
let abs_pack_dir = pack_dir
.canonicalize()
.context("Failed to resolve pack directory path")?;
append_dir_to_tar(&mut tar, &abs_pack_dir, &abs_pack_dir)?;
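// `into_inner` finishes the tar stream (writing the end-of-archive
// trailer) and returns the gzip encoder; `finish` then flushes the
// compressed bytes.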
let encoder = tar.into_inner().context("Failed to finalise tar archive")?;
encoder.finish().context("Failed to flush gzip stream")?
};
let archive_size_kb = tar_gz_bytes.len() / 1024;
if output_format == OutputFormat::Table {
output::print_info(&format!(
"Archive ready ({} KB), uploading...",
archive_size_kb
));
}
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
let mut extra_fields = Vec::new();
if force {
extra_fields.push(("force", "true".to_string()));
}
if skip_tests {
extra_fields.push(("skip_tests", "true".to_string()));
}
let archive_name = format!("{}.tar.gz", pack_ref);
let response: UploadPackResponse = client
.multipart_post(
"/packs/upload",
"pack",
tar_gz_bytes,
&archive_name,
"application/gzip",
extra_fields,
)
.await?;
match output_format {
OutputFormat::Json | OutputFormat::Yaml => {
output::print_output(&response, output_format)?;
}
OutputFormat::Table => {
println!();
output::print_success(&format!(
"✓ Pack '{}' uploaded and registered successfully",
response.pack.pack_ref
));
output::print_info(&format!(" Version: {}", response.pack.version));
output::print_info(&format!(" ID: {}", response.pack.id));
if response.tests_skipped {
output::print_info(" ⚠ Tests were skipped");
} else if let Some(test_result) = &response.test_result {
if let Some(status) = test_result.get("status").and_then(|s| s.as_str()) {
if status == "passed" {
output::print_success(" ✓ All tests passed");
} else if status == "failed" {
output::print_error(" ✗ Some tests failed");
}
}
}
}
}
Ok(())
}
/// Recursively append a directory's contents to a tar archive.
/// `base` is the root directory being archived; `dir` is the current directory
/// being walked. Files are stored with paths relative to `base`.
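///
/// A minimal usage sketch (illustrative path, mirroring `handle_upload`):
///
/// ```ignore
/// let mut tar = tar::Builder::new(Vec::new());
/// let root = Path::new("./my_pack").canonicalize()?;
/// append_dir_to_tar(&mut tar, &root, &root)?;
/// let tar_bytes = tar.into_inner()?;
/// ```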
fn append_dir_to_tar<W: std::io::Write>(
tar: &mut tar::Builder<W>,
base: &Path,
dir: &Path,
) -> Result<()> {
for entry in std::fs::read_dir(dir).with_context(|| format!("Failed to read directory {}", dir.display()))? {
let entry = entry.context("Failed to read directory entry")?;
let entry_path = entry.path();
let relative_path = entry_path
.strip_prefix(base)
.context("Failed to compute relative path")?;
if entry_path.is_dir() {
append_dir_to_tar(tar, base, &entry_path)?;
} else if entry_path.is_file() {
tar.append_path_with_name(&entry_path, relative_path)
.with_context(|| format!("Failed to add {} to archive", entry_path.display()))?;
}
// Note: `is_dir`/`is_file` follow symlinks, so live symlinks are archived
// as their targets; anything else (broken links, special files) is skipped.
}
Ok(())
}
async fn handle_register(
profile: &Option<String>,
path: String,
@@ -604,19 +986,31 @@ async fn handle_register(
let config = CliConfig::load_with_profile(profile.as_deref())?;
let mut client = ApiClient::from_config(&config, api_url);
// Warn if the path looks like a local filesystem path that the API server
// probably can't see (i.e. not a known container mount point).
let looks_local = !path.starts_with("/opt/attune/")
&& !path.starts_with("/app/")
&& !path.starts_with("/packs");
if looks_local {
if output_format == OutputFormat::Table {
output::print_info(&format!("Registering pack from: {}", path));
eprintln!(
"⚠ Warning: '{}' looks like a local path. If the API is running in \
Docker it may not be able to access this path.\n \
Use `attune pack upload {}` instead to upload the pack directly.",
path, path
);
}
} else if output_format == OutputFormat::Table {
output::print_info(&format!("Registering pack from: {}", path));
}
let request = RegisterPackRequest {
path: path.clone(),
force,
skip_tests,
};
let response: PackInstallResponse = client.post("/packs/register", &request).await?;
match output_format {
@@ -749,13 +1143,10 @@ async fn handle_test(
let executor = TestExecutor::new(pack_base_dir);
// Print test start message
if output_format == OutputFormat::Table {
println!();
output::print_section(&format!("🧪 Testing Pack: {} v{}", pack_ref, pack_version));
println!();
}
// Execute tests
@@ -1264,7 +1655,7 @@ async fn handle_index_entry(
if let Some(ref git) = git_url {
let default_ref = format!("v{}", version);
let ref_value = git_ref.as_deref().unwrap_or(&default_ref);
let git_source = serde_json::json!({
"type": "git",
"url": git,
@@ -1366,13 +1757,13 @@ async fn handle_index_entry(
Ok(())
}
#[allow(clippy::too_many_arguments)]
async fn handle_update(
profile: &Option<String>,
pack_ref: String,
label: Option<String>,
description: Option<String>,
version: Option<String>,
api_url: &Option<String>,
output_format: OutputFormat,
) -> Result<()> {
@@ -1380,27 +1771,30 @@ async fn handle_update(
let mut client = ApiClient::from_config(&config, api_url);
// Check that at least one field is provided
if label.is_none() && description.is_none() && version.is_none() {
anyhow::bail!("At least one field must be provided to update");
}
#[derive(Serialize)]
#[serde(tag = "op", content = "value", rename_all = "snake_case")]
enum PackDescriptionPatch {
Set(String),
}
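// With the adjacently tagged repr above, `PackDescriptionPatch::Set("text")`
// serializes as `{"op":"set","value":"text"}`.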
#[derive(Serialize)]
struct UpdatePackRequest {
#[serde(skip_serializing_if = "Option::is_none")]
label: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
description: Option<PackDescriptionPatch>,
#[serde(skip_serializing_if = "Option::is_none")]
version: Option<String>,
}
let request = UpdatePackRequest {
label,
description: description.map(PackDescriptionPatch::Set),
version,
};
let path = format!("/packs/{}", pack_ref);
@@ -1417,7 +1811,6 @@ async fn handle_update(
("Ref", pack.pack_ref.clone()),
("Label", pack.label.clone()),
("Version", pack.version.clone()),
("Enabled", output::format_bool(pack.enabled.unwrap_or(true))),
("Updated", output::format_timestamp(&pack.updated)),
]);
}
@@ -1425,3 +1818,48 @@ async fn handle_update(
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_label_from_ref_underscores() {
assert_eq!(label_from_ref("my_cool_pack"), "My Cool Pack");
}
#[test]
fn test_label_from_ref_hyphens() {
assert_eq!(label_from_ref("my-cool-pack"), "My Cool Pack");
}
#[test]
fn test_label_from_ref_dots() {
assert_eq!(label_from_ref("my.cool.pack"), "My Cool Pack");
}
#[test]
fn test_label_from_ref_mixed_separators() {
assert_eq!(label_from_ref("my_cool-pack.v2"), "My Cool Pack V2");
}
#[test]
fn test_label_from_ref_single_word() {
assert_eq!(label_from_ref("slack"), "Slack");
}
#[test]
fn test_label_from_ref_already_capitalized() {
assert_eq!(label_from_ref("AWS"), "AWS");
}
#[test]
fn test_label_from_ref_empty() {
assert_eq!(label_from_ref(""), "");
}
#[test]
fn test_label_from_ref_consecutive_separators() {
assert_eq!(label_from_ref("my__pack"), "My Pack");
}
}

Some files were not shown because too many files have changed in this diff.