Compare commits

2 Commits: 495b81236a ... b43495b26d

| Author | SHA1 | Date |
|---|---|---|
| | b43495b26d | |
| | 7ee3604eb1 | |

17  AGENTS.md
@@ -19,7 +19,7 @@ When this project reaches v1.0 or gets its first production deployment, this sec

 ## Languages & Core Technologies
 - **Primary Language**: Rust 2021 edition
-- **Database**: PostgreSQL 14+ (primary data store + LISTEN/NOTIFY pub/sub)
+- **Database**: PostgreSQL 16+ with TimescaleDB 2.17+ (primary data store + LISTEN/NOTIFY pub/sub + time-series history)
 - **Message Queue**: RabbitMQ 3.12+ (via lapin)
 - **Cache**: Redis 7.0+ (optional)
 - **Web UI**: TypeScript + React 19 + Vite
@@ -70,7 +70,7 @@ attune/
 - **Default user**: `test@attune.local` / `TestPass123!` (auto-created)

 **Services**:
-- **Infrastructure**: postgres, rabbitmq, redis
+- **Infrastructure**: postgres (TimescaleDB), rabbitmq, redis
 - **Init** (run-once): migrations, init-user, init-packs
 - **Application**: api (8080), executor, worker-{shell,python,node,full}, sensor, notifier (8081), web (3000)

@@ -211,8 +211,11 @@ Enforcement created → Execution scheduled → Worker executes Action
 - **Enums**: PostgreSQL enum types mapped with `#[sqlx(type_name = "...")]`
 - **Workflow Tasks**: Stored as JSONB in `execution.workflow_task` (consolidated from separate table 2026-01-27)
 - **FK ON DELETE Policy**: Historical records (executions, events, enforcements) use `ON DELETE SET NULL` so they survive entity deletion while preserving text ref fields (`action_ref`, `trigger_ref`, etc.) for auditing. Pack-owned entities (actions, triggers, sensors, rules, runtimes) use `ON DELETE CASCADE` from pack. Workflow executions cascade-delete with their workflow definition.
+- **Entity History Tracking (TimescaleDB)**: Append-only `<table>_history` hypertables track field-level changes to `execution`, `worker`, `enforcement`, and `event` tables. Populated by PostgreSQL `AFTER INSERT OR UPDATE OR DELETE` triggers — no Rust code changes needed for recording. Uses JSONB diff format (`old_values`/`new_values`) with a `changed_fields TEXT[]` column for efficient filtering. Worker heartbeat-only updates are excluded. See `docs/plans/timescaledb-entity-history.md` for full design.
+- **History Large-Field Guardrails**: The `execution` history trigger stores a compact **digest summary** instead of the full value for the `result` column (which can be arbitrarily large). The digest is produced by the `_jsonb_digest_summary(JSONB)` helper function and has the shape `{"digest": "md5:<hex>", "size": <bytes>, "type": "<jsonb_typeof>"}`. This preserves change-detection semantics while avoiding history table bloat. The full result is always available on the live `execution` row. When adding new large JSONB columns to history triggers, use `_jsonb_digest_summary()` instead of storing the raw value.
 - **Nullable FK Fields**: `rule.action` and `rule.trigger` are nullable (`Option<Id>` in Rust) — a rule with NULL action/trigger is non-functional but preserved for traceability. `execution.action`, `execution.parent`, `execution.enforcement`, and `event.source` are also nullable.
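As an aside on the guardrail bullet: the digest-summary record shape can be sketched in stdlib-only Rust. The real helper is a PostgreSQL function that uses `md5()`; the hasher below is only a stand-in so the sketch runs without external crates, and the function name is illustrative, not part of the codebase.

```rust
// Sketch of the record shape emitted by `_jsonb_digest_summary(JSONB)`:
// {"digest": "md5:<hex>", "size": <bytes>, "type": "<jsonb_typeof>"}.
// NOTE: the real digest is md5() computed in PostgreSQL; DefaultHasher here
// is a stdlib placeholder purely to illustrate the layout.
fn digest_summary(raw_json: &str, jsonb_type: &str) -> String {
    use std::hash::{Hash, Hasher};
    let mut hasher = std::collections::hash_map::DefaultHasher::new();
    raw_json.hash(&mut hasher);
    format!(
        "{{\"digest\": \"md5:{:016x}\", \"size\": {}, \"type\": \"{}\"}}",
        hasher.finish(),
        raw_json.len(), // byte length of the raw value
        jsonb_type      // result of jsonb_typeof() on the value
    )
}
```

The point of the shape is that equality of digests still detects changes, while the history row stays a fixed small size regardless of how large `result` grows.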
-**Table Count**: 18 tables total in the schema (including `runtime_version`)
+**Table Count**: 22 tables total in the schema (including `runtime_version` and 4 `*_history` hypertables)
 **Migration Count**: 9 consolidated migrations (`000001` through `000009`) — see `migrations/` directory
 - **Pack Component Loading Order**: Runtimes → Triggers → Actions → Sensors (dependency order). Both `PackComponentLoader` (Rust) and `load_core_pack.py` (Python) follow this order.

 ### Pack File Loading & Action Execution
@@ -480,6 +483,8 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
 14. **REMEMBER** to regenerate SQLx metadata after schema-related changes: `cargo sqlx prepare`
 15. **REMEMBER** packs are volumes - update with restart, not rebuild
 16. **REMEMBER** to build pack binaries separately: `./scripts/build-pack-binaries.sh`
+17. **REMEMBER** when adding mutable columns to `execution`, `worker`, `enforcement`, or `event`, add a corresponding `IS DISTINCT FROM` check to the entity's history trigger function in the TimescaleDB migration
+18. **REMEMBER** for large JSONB columns in history triggers (like `execution.result`), use `_jsonb_digest_summary()` instead of storing the raw value — see migration `000009_timescaledb_history`

 ## Deployment
 - **Target**: Distributed deployment with separate service instances
@@ -490,9 +495,9 @@ When reporting, ask: "Should I fix this first or continue with [original task]?"
 - **Web UI**: Static files served separately or via API service

 ## Current Development Status
-- ✅ **Complete**: Database migrations (18 tables), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation
+- ✅ **Complete**: Database migrations (22 tables, 9 consolidated migration files), API service (most endpoints), common library, message queue infrastructure, repository layer, JWT auth, CLI tool, Web UI (basic + workflow builder), Executor service (core functionality), Worker service (shell/Python execution), Runtime version data model, constraint matching, worker version selection pipeline, version verification at startup, per-version environment isolation, TimescaleDB entity history tracking (execution, worker, enforcement, event), History API endpoints (generic + entity-specific with pagination & filtering), History UI panels on entity detail pages (execution, enforcement, event), TimescaleDB continuous aggregates (5 hourly rollup views with auto-refresh policies), Analytics API endpoints (7 endpoints under `/api/v1/analytics/` — dashboard, execution status/throughput/failure-rate, event volume, worker status, enforcement volume), Analytics dashboard widgets (bar charts, stacked status charts, failure rate ring gauge, time range selector)
 - 🔄 **In Progress**: Sensor service, advanced workflow features, Python runtime dependency management, API/UI endpoints for runtime version management
-- 📋 **Planned**: Notifier service, execution policies, monitoring, pack registry system
+- 📋 **Planned**: Notifier service, execution policies, monitoring, pack registry system, configurable retention periods via admin settings, export/archival to external storage

 ## Quick Reference

@@ -605,7 +610,7 @@ When updating, be surgical - modify only the affected sections rather than rewri
 |docs/migrations:{workflow-task-execution-consolidation.md}
 |docs/packs:{PACK_TESTING.md,QUICKREF-git-installation.md,core-pack-integration.md,pack-install-testing.md,pack-installation-git.md,pack-registry-cicd.md,pack-registry-spec.md,pack-structure.md,pack-testing-framework.md}
 |docs/performance:{QUICKREF-performance-optimization.md,log-size-limits.md,performance-analysis-workflow-lists.md,performance-before-after-results.md,performance-context-cloning-diagram.md}
-|docs/plans:{schema-per-test-refactor.md}
+|docs/plans:{schema-per-test-refactor.md,timescaledb-entity-history.md}
 |docs/sensors:{CHECKLIST-sensor-worker-registration.md,COMPLETION-sensor-worker-registration.md,SUMMARY-database-driven-detection.md,database-driven-runtime-detection.md,native-runtime.md,sensor-authentication-overview.md,sensor-interface.md,sensor-lifecycle-management.md,sensor-runtime.md,sensor-service-setup.md,sensor-worker-registration.md}
 |docs/testing:{e2e-test-plan.md,running-tests.md,schema-per-test.md,test-user-setup.md,testing-authentication.md,testing-dashboard-rules.md,testing-status.md}
 |docs/web-ui:{web-ui-pack-testing.md,websocket-usage.md}

358  crates/api/src/dto/analytics.rs  Normal file
@@ -0,0 +1,358 @@
//! Analytics DTOs for API requests and responses
//!
//! These types represent the API-facing view of analytics data derived from
//! TimescaleDB continuous aggregates over entity history hypertables.

use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use utoipa::{IntoParams, ToSchema};

use attune_common::repositories::analytics::{
    AnalyticsTimeRange, EnforcementVolumeBucket, EventVolumeBucket, ExecutionStatusBucket,
    ExecutionThroughputBucket, FailureRateSummary, WorkerStatusBucket,
};

// ---------------------------------------------------------------------------
// Query parameters
// ---------------------------------------------------------------------------

/// Common query parameters for analytics endpoints.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct AnalyticsQueryParams {
    /// Start of time range (ISO 8601). Defaults to 24 hours ago.
    #[param(example = "2026-02-25T00:00:00Z")]
    pub since: Option<DateTime<Utc>>,

    /// End of time range (ISO 8601). Defaults to now.
    #[param(example = "2026-02-26T00:00:00Z")]
    pub until: Option<DateTime<Utc>>,

    /// Number of hours to look back from now (alternative to since/until).
    /// Ignored if `since` is provided.
    #[param(example = 24, minimum = 1, maximum = 8760)]
    pub hours: Option<i64>,
}

impl AnalyticsQueryParams {
    /// Convert to the repository-level time range.
    pub fn to_time_range(&self) -> AnalyticsTimeRange {
        match (&self.since, &self.until) {
            (Some(since), Some(until)) => AnalyticsTimeRange {
                since: *since,
                until: *until,
            },
            (Some(since), None) => AnalyticsTimeRange {
                since: *since,
                until: Utc::now(),
            },
            (None, Some(until)) => {
                let hours = self.hours.unwrap_or(24).clamp(1, 8760);
                AnalyticsTimeRange {
                    since: *until - chrono::Duration::hours(hours),
                    until: *until,
                }
            }
            (None, None) => {
                let hours = self.hours.unwrap_or(24).clamp(1, 8760);
                AnalyticsTimeRange::last_hours(hours)
            }
        }
    }
}
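The precedence rules in `to_time_range` above condense to a small decision table: an explicit `since`/`until` always wins, and otherwise `hours` (default 24, clamped to 1..=8760) is counted back from `until` or from now. A stdlib-only sketch over unix-second timestamps (plain `i64` standing in for the chrono types so it runs standalone):

```rust
// Mirrors AnalyticsQueryParams::to_time_range with i64 unix seconds in
// place of DateTime<Utc>. `now` is passed in explicitly for testability.
fn resolve_range(now: i64, since: Option<i64>, until: Option<i64>, hours: Option<i64>) -> (i64, i64) {
    // Default 24h window, clamped to at most one year, converted to seconds.
    let h = hours.unwrap_or(24).clamp(1, 8760) * 3600;
    match (since, until) {
        (Some(s), Some(u)) => (s, u), // explicit range wins; `hours` ignored
        (Some(s), None) => (s, now),  // open-ended range runs to "now"
        (None, Some(u)) => (u - h, u), // look back `hours` from `until`
        (None, None) => (now - h, now), // look back `hours` from "now"
    }
}
```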
/// Path parameter for filtering analytics by a specific entity ref.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct AnalyticsRefParam {
    /// Optional entity ref filter (action_ref, trigger_ref, rule_ref, or worker name)
    #[param(example = "core.http_request")]
    pub entity_ref: Option<String>,
}

// ---------------------------------------------------------------------------
// Response types
// ---------------------------------------------------------------------------

/// A single data point in an hourly time series.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct TimeSeriesPoint {
    /// Start of the 1-hour bucket (ISO 8601)
    #[schema(example = "2026-02-26T10:00:00Z")]
    pub bucket: DateTime<Utc>,

    /// The series label (e.g., status name, action ref). Null for aggregate totals.
    #[schema(example = "completed")]
    pub label: Option<String>,

    /// The count value for this bucket
    #[schema(example = 42)]
    pub value: i64,
}

/// Response for execution status transitions over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ExecutionStatusTimeSeriesResponse {
    /// Time range start
    pub since: DateTime<Utc>,
    /// Time range end
    pub until: DateTime<Utc>,
    /// Data points: one per (bucket, status) pair
    pub data: Vec<TimeSeriesPoint>,
}

/// Response for execution throughput over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct ExecutionThroughputResponse {
    /// Time range start
    pub since: DateTime<Utc>,
    /// Time range end
    pub until: DateTime<Utc>,
    /// Data points: one per bucket (total executions created)
    pub data: Vec<TimeSeriesPoint>,
}

/// Response for event volume over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct EventVolumeResponse {
    /// Time range start
    pub since: DateTime<Utc>,
    /// Time range end
    pub until: DateTime<Utc>,
    /// Data points: one per bucket (total events created)
    pub data: Vec<TimeSeriesPoint>,
}

/// Response for worker status transitions over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct WorkerStatusTimeSeriesResponse {
    /// Time range start
    pub since: DateTime<Utc>,
    /// Time range end
    pub until: DateTime<Utc>,
    /// Data points: one per (bucket, status) pair
    pub data: Vec<TimeSeriesPoint>,
}

/// Response for enforcement volume over time.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct EnforcementVolumeResponse {
    /// Time range start
    pub since: DateTime<Utc>,
    /// Time range end
    pub until: DateTime<Utc>,
    /// Data points: one per bucket (total enforcements created)
    pub data: Vec<TimeSeriesPoint>,
}

/// Response for the execution failure rate summary.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct FailureRateResponse {
    /// Time range start
    pub since: DateTime<Utc>,
    /// Time range end
    pub until: DateTime<Utc>,
    /// Total executions reaching a terminal state in the window
    #[schema(example = 100)]
    pub total_terminal: i64,
    /// Number of failed executions
    #[schema(example = 12)]
    pub failed_count: i64,
    /// Number of timed-out executions
    #[schema(example = 3)]
    pub timeout_count: i64,
    /// Number of completed executions
    #[schema(example = 85)]
    pub completed_count: i64,
    /// Failure rate as a percentage (0.0 – 100.0)
    #[schema(example = 15.0)]
    pub failure_rate_pct: f64,
}
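The schema examples on `FailureRateResponse` imply `failure_rate_pct = (failed + timeout) / total_terminal * 100` (12 failed + 3 timed out of 100 terminal gives 15.0). The authoritative computation lives in the analytics repository, which is not part of this diff, so the following is an assumed formula consistent with those example values:

```rust
// Assumed failure-rate formula, matching the schema examples above.
// The real value is computed in AnalyticsRepository, not in this DTO.
fn failure_rate_pct(total_terminal: i64, failed: i64, timeout: i64) -> f64 {
    if total_terminal == 0 {
        // Avoid division by zero for empty windows.
        return 0.0;
    }
    (failed + timeout) as f64 / total_terminal as f64 * 100.0
}
```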
/// Combined dashboard analytics response.
///
/// Returns all key metrics in a single response for the dashboard page,
/// avoiding multiple round-trips.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct DashboardAnalyticsResponse {
    /// Time range start
    pub since: DateTime<Utc>,
    /// Time range end
    pub until: DateTime<Utc>,
    /// Execution throughput per hour
    pub execution_throughput: Vec<TimeSeriesPoint>,
    /// Execution status transitions per hour
    pub execution_status: Vec<TimeSeriesPoint>,
    /// Event volume per hour
    pub event_volume: Vec<TimeSeriesPoint>,
    /// Enforcement volume per hour
    pub enforcement_volume: Vec<TimeSeriesPoint>,
    /// Worker status transitions per hour
    pub worker_status: Vec<TimeSeriesPoint>,
    /// Execution failure rate summary
    pub failure_rate: FailureRateResponse,
}

// ---------------------------------------------------------------------------
// Conversion helpers
// ---------------------------------------------------------------------------

impl From<ExecutionStatusBucket> for TimeSeriesPoint {
    fn from(b: ExecutionStatusBucket) -> Self {
        Self {
            bucket: b.bucket,
            label: b.new_status,
            value: b.transition_count,
        }
    }
}

impl From<ExecutionThroughputBucket> for TimeSeriesPoint {
    fn from(b: ExecutionThroughputBucket) -> Self {
        Self {
            bucket: b.bucket,
            label: b.action_ref,
            value: b.execution_count,
        }
    }
}

impl From<EventVolumeBucket> for TimeSeriesPoint {
    fn from(b: EventVolumeBucket) -> Self {
        Self {
            bucket: b.bucket,
            label: b.trigger_ref,
            value: b.event_count,
        }
    }
}

impl From<WorkerStatusBucket> for TimeSeriesPoint {
    fn from(b: WorkerStatusBucket) -> Self {
        Self {
            bucket: b.bucket,
            label: b.new_status,
            value: b.transition_count,
        }
    }
}

impl From<EnforcementVolumeBucket> for TimeSeriesPoint {
    fn from(b: EnforcementVolumeBucket) -> Self {
        Self {
            bucket: b.bucket,
            label: b.rule_ref,
            value: b.enforcement_count,
        }
    }
}

impl FailureRateResponse {
    /// Create from the repository summary plus the query time range.
    pub fn from_summary(summary: FailureRateSummary, range: &AnalyticsTimeRange) -> Self {
        Self {
            since: range.since,
            until: range.until,
            total_terminal: summary.total_terminal,
            failed_count: summary.failed_count,
            timeout_count: summary.timeout_count,
            completed_count: summary.completed_count,
            failure_rate_pct: summary.failure_rate_pct,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_query_params_defaults() {
        let params = AnalyticsQueryParams {
            since: None,
            until: None,
            hours: None,
        };
        let range = params.to_time_range();
        let diff = range.until - range.since;
        assert!((diff.num_hours() - 24).abs() <= 1);
    }

    #[test]
    fn test_query_params_custom_hours() {
        let params = AnalyticsQueryParams {
            since: None,
            until: None,
            hours: Some(6),
        };
        let range = params.to_time_range();
        let diff = range.until - range.since;
        assert!((diff.num_hours() - 6).abs() <= 1);
    }

    #[test]
    fn test_query_params_hours_clamped() {
        let params = AnalyticsQueryParams {
            since: None,
            until: None,
            hours: Some(99999),
        };
        let range = params.to_time_range();
        let diff = range.until - range.since;
        // Clamped to 8760 hours (1 year)
        assert!((diff.num_hours() - 8760).abs() <= 1);
    }

    #[test]
    fn test_query_params_explicit_range() {
        let since = Utc::now() - chrono::Duration::hours(48);
        let until = Utc::now();
        let params = AnalyticsQueryParams {
            since: Some(since),
            until: Some(until),
            hours: Some(6), // ignored when since is provided
        };
        let range = params.to_time_range();
        assert_eq!(range.since, since);
        assert_eq!(range.until, until);
    }

    #[test]
    fn test_failure_rate_response_from_summary() {
        let summary = FailureRateSummary {
            total_terminal: 100,
            failed_count: 12,
            timeout_count: 3,
            completed_count: 85,
            failure_rate_pct: 15.0,
        };
        let range = AnalyticsTimeRange::last_hours(24);
        let response = FailureRateResponse::from_summary(summary, &range);
        assert_eq!(response.total_terminal, 100);
        assert_eq!(response.failed_count, 12);
        assert_eq!(response.failure_rate_pct, 15.0);
    }

    #[test]
    fn test_time_series_point_from_execution_status_bucket() {
        let bucket = ExecutionStatusBucket {
            bucket: Utc::now(),
            action_ref: Some("core.http".into()),
            new_status: Some("completed".into()),
            transition_count: 10,
        };
        let point: TimeSeriesPoint = bucket.into();
        assert_eq!(point.label.as_deref(), Some("completed"));
        assert_eq!(point.value, 10);
    }

    #[test]
    fn test_time_series_point_from_event_volume_bucket() {
        let bucket = EventVolumeBucket {
            bucket: Utc::now(),
            trigger_ref: Some("core.timer".into()),
            event_count: 25,
        };
        let point: TimeSeriesPoint = bucket.into();
        assert_eq!(point.label.as_deref(), Some("core.timer"));
        assert_eq!(point.value, 25);
    }
}
211  crates/api/src/dto/history.rs  Normal file
@@ -0,0 +1,211 @@
//! History DTOs for API requests and responses
//!
//! These types represent the API-facing view of entity history records
//! stored in TimescaleDB hypertables.

use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use utoipa::{IntoParams, ToSchema};

use attune_common::models::entity_history::HistoryEntityType;

/// Response DTO for a single entity history record.
#[derive(Debug, Clone, Serialize, ToSchema)]
pub struct HistoryRecordResponse {
    /// When the change occurred
    #[schema(example = "2026-02-26T10:30:00Z")]
    pub time: DateTime<Utc>,

    /// The operation: `INSERT`, `UPDATE`, or `DELETE`
    #[schema(example = "UPDATE")]
    pub operation: String,

    /// The primary key of the changed entity
    #[schema(example = 42)]
    pub entity_id: i64,

    /// Denormalized human-readable identifier (e.g., action_ref, worker name)
    #[schema(example = "core.http_request")]
    pub entity_ref: Option<String>,

    /// Names of fields that changed (empty for INSERT/DELETE)
    #[schema(example = json!(["status", "result"]))]
    pub changed_fields: Vec<String>,

    /// Previous values of changed fields (null for INSERT)
    #[schema(value_type = Object, example = json!({"status": "requested"}))]
    pub old_values: Option<JsonValue>,

    /// New values of changed fields (null for DELETE)
    #[schema(value_type = Object, example = json!({"status": "running"}))]
    pub new_values: Option<JsonValue>,
}
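The diff semantics behind `changed_fields` / `old_values` / `new_values` for an UPDATE record can be sketched in stdlib-only Rust, with `BTreeMap<String, String>` standing in for the JSONB row images. Note the real diff is computed inside the PostgreSQL trigger functions, not in Rust; this is purely an illustration of the record's contents.

```rust
// For an UPDATE history record: `changed` lists keys whose value differs
// between the old and new row images, and old_values/new_values keep only
// those keys (not the whole row). Keys present in only one image are
// ignored here for brevity; the trigger compares a fixed column set.
fn diff_row(
    old: &std::collections::BTreeMap<String, String>,
    new: &std::collections::BTreeMap<String, String>,
) -> (
    Vec<String>,
    std::collections::BTreeMap<String, String>,
    std::collections::BTreeMap<String, String>,
) {
    let mut changed = Vec::new();
    let mut old_values = std::collections::BTreeMap::new();
    let mut new_values = std::collections::BTreeMap::new();
    for (key, old_val) in old {
        if let Some(new_val) = new.get(key) {
            if new_val != old_val {
                changed.push(key.clone());
                old_values.insert(key.clone(), old_val.clone());
                new_values.insert(key.clone(), new_val.clone());
            }
        }
    }
    (changed, old_values, new_values)
}
```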
impl From<attune_common::models::entity_history::EntityHistoryRecord> for HistoryRecordResponse {
    fn from(record: attune_common::models::entity_history::EntityHistoryRecord) -> Self {
        Self {
            time: record.time,
            operation: record.operation,
            entity_id: record.entity_id,
            entity_ref: record.entity_ref,
            changed_fields: record.changed_fields,
            old_values: record.old_values,
            new_values: record.new_values,
        }
    }
}

/// Query parameters for filtering history records.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct HistoryQueryParams {
    /// Filter by entity ID
    #[param(example = 42)]
    pub entity_id: Option<i64>,

    /// Filter by entity ref (e.g., action_ref, worker name)
    #[param(example = "core.http_request")]
    pub entity_ref: Option<String>,

    /// Filter by operation type: `INSERT`, `UPDATE`, or `DELETE`
    #[param(example = "UPDATE")]
    pub operation: Option<String>,

    /// Only include records where this field was changed
    #[param(example = "status")]
    pub changed_field: Option<String>,

    /// Only include records at or after this time (ISO 8601)
    #[param(example = "2026-02-01T00:00:00Z")]
    pub since: Option<DateTime<Utc>>,

    /// Only include records at or before this time (ISO 8601)
    #[param(example = "2026-02-28T23:59:59Z")]
    pub until: Option<DateTime<Utc>>,

    /// Page number (1-based)
    #[serde(default = "default_page")]
    #[param(example = 1, minimum = 1)]
    pub page: u32,

    /// Number of items per page
    #[serde(default = "default_page_size")]
    #[param(example = 50, minimum = 1, maximum = 1000)]
    pub page_size: u32,
}

fn default_page() -> u32 {
    1
}

fn default_page_size() -> u32 {
    50
}

impl HistoryQueryParams {
    /// Convert to the repository-level query params.
    pub fn to_repo_params(
        &self,
    ) -> attune_common::repositories::entity_history::HistoryQueryParams {
        let limit = (self.page_size.min(1000).max(1)) as i64;
        let offset = ((self.page.saturating_sub(1)) as i64) * limit;

        attune_common::repositories::entity_history::HistoryQueryParams {
            entity_id: self.entity_id,
            entity_ref: self.entity_ref.clone(),
            operation: self.operation.clone(),
            changed_field: self.changed_field.clone(),
            since: self.since,
            until: self.until,
            limit: Some(limit),
            offset: Some(offset),
        }
    }
}
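The pagination conversion in `to_repo_params` above reduces to a small piece of arithmetic: cap `page_size` into 1..=1000 to get `LIMIT`, then multiply the zero-based page index by that limit to get `OFFSET`. A standalone sketch of just that arithmetic:

```rust
// Same math as HistoryQueryParams::to_repo_params: page_size clamped to
// 1..=1000 becomes LIMIT, and (page - 1) * limit becomes OFFSET. Page 0
// is treated like page 1 via saturating subtraction.
fn page_to_limit_offset(page: u32, page_size: u32) -> (i64, i64) {
    let limit = page_size.clamp(1, 1000) as i64;
    let offset = (page.saturating_sub(1)) as i64 * limit;
    (limit, offset)
}
```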
/// Path parameter for the entity type segment.
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct HistoryEntityTypePath {
    /// Entity type: `execution`, `worker`, `enforcement`, or `event`
    pub entity_type: String,
}

impl HistoryEntityTypePath {
    /// Parse the entity type string, returning a typed enum or an error message.
    pub fn parse(&self) -> Result<HistoryEntityType, String> {
        self.entity_type.parse::<HistoryEntityType>()
    }
}

/// Path parameters for entity-specific history (e.g., `/executions/42/history`).
#[derive(Debug, Clone, Deserialize, IntoParams)]
pub struct EntityIdPath {
    /// The entity's primary key
    pub id: i64,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_query_params_defaults() {
        let json = r#"{}"#;
        let params: HistoryQueryParams = serde_json::from_str(json).unwrap();
        assert_eq!(params.page, 1);
        assert_eq!(params.page_size, 50);
        assert!(params.entity_id.is_none());
        assert!(params.operation.is_none());
    }

    #[test]
    fn test_query_params_to_repo_params() {
        let params = HistoryQueryParams {
            entity_id: Some(42),
            entity_ref: None,
            operation: Some("UPDATE".to_string()),
            changed_field: Some("status".to_string()),
            since: None,
            until: None,
            page: 3,
            page_size: 20,
        };

        let repo = params.to_repo_params();
        assert_eq!(repo.entity_id, Some(42));
        assert_eq!(repo.operation, Some("UPDATE".to_string()));
        assert_eq!(repo.changed_field, Some("status".to_string()));
        assert_eq!(repo.limit, Some(20));
        assert_eq!(repo.offset, Some(40)); // (3-1) * 20
    }

    #[test]
    fn test_query_params_page_size_cap() {
        let params = HistoryQueryParams {
            entity_id: None,
            entity_ref: None,
            operation: None,
            changed_field: None,
            since: None,
            until: None,
            page: 1,
            page_size: 5000,
        };

        let repo = params.to_repo_params();
        assert_eq!(repo.limit, Some(1000));
    }

    #[test]
    fn test_entity_type_path_parse() {
        let path = HistoryEntityTypePath {
            entity_type: "execution".to_string(),
        };
        assert_eq!(path.parse().unwrap(), HistoryEntityType::Execution);

        let path = HistoryEntityTypePath {
            entity_type: "unknown".to_string(),
        };
        assert!(path.parse().is_err());
    }
}
@@ -1,10 +1,12 @@
 //! Data Transfer Objects (DTOs) for API requests and responses

 pub mod action;
+pub mod analytics;
 pub mod auth;
 pub mod common;
 pub mod event;
 pub mod execution;
+pub mod history;
 pub mod inquiry;
 pub mod key;
 pub mod pack;
@@ -14,6 +16,11 @@ pub mod webhook;
 pub mod workflow;

 pub use action::{ActionResponse, ActionSummary, CreateActionRequest, UpdateActionRequest};
+pub use analytics::{
+    AnalyticsQueryParams, DashboardAnalyticsResponse, EventVolumeResponse,
+    ExecutionStatusTimeSeriesResponse, ExecutionThroughputResponse, FailureRateResponse,
+    TimeSeriesPoint,
+};
 pub use auth::{
     ChangePasswordRequest, CurrentUserResponse, LoginRequest, RefreshTokenRequest, RegisterRequest,
     TokenResponse,
@@ -25,7 +32,10 @@ pub use event::{
     EnforcementQueryParams, EnforcementResponse, EnforcementSummary, EventQueryParams,
     EventResponse, EventSummary,
 };
-pub use execution::{CreateExecutionRequest, ExecutionQueryParams, ExecutionResponse, ExecutionSummary};
+pub use execution::{
+    CreateExecutionRequest, ExecutionQueryParams, ExecutionResponse, ExecutionSummary,
+};
+pub use history::{HistoryEntityTypePath, HistoryQueryParams, HistoryRecordResponse};
 pub use inquiry::{
     CreateInquiryRequest, InquiryQueryParams, InquiryRespondRequest, InquiryResponse,
     InquirySummary, UpdateInquiryRequest,

304
crates/api/src/routes/analytics.rs
Normal file
304
crates/api/src/routes/analytics.rs
Normal file
@@ -0,0 +1,304 @@
|
||||
//! Analytics API routes
|
||||
//!
|
||||
//! Provides read-only access to TimescaleDB continuous aggregates for dashboard
|
||||
//! widgets and time-series analytics. All data is pre-computed by TimescaleDB
|
||||
//! continuous aggregate policies — these endpoints simply query the materialized views.
|
||||
|
||||
use axum::{
|
||||
extract::{Query, State},
|
||||
http::StatusCode,
|
||||
response::IntoResponse,
|
||||
routing::get,
|
||||
Json, Router,
|
||||
};
|
||||
use std::sync::Arc;
|
||||
|
||||
use attune_common::repositories::analytics::AnalyticsRepository;

use crate::{
    auth::middleware::RequireAuth,
    dto::{
        analytics::{
            AnalyticsQueryParams, DashboardAnalyticsResponse, EnforcementVolumeResponse,
            EventVolumeResponse, ExecutionStatusTimeSeriesResponse, ExecutionThroughputResponse,
            FailureRateResponse, TimeSeriesPoint, WorkerStatusTimeSeriesResponse,
        },
        common::ApiResponse,
    },
    middleware::ApiResult,
    state::AppState,
};

/// Get a combined dashboard analytics payload.
///
/// Returns all key metrics in a single response to avoid multiple round-trips
/// from the dashboard page. Includes execution throughput, status transitions,
/// event volume, enforcement volume, worker status, and failure rate.
#[utoipa::path(
    get,
    path = "/api/v1/analytics/dashboard",
    tag = "analytics",
    params(AnalyticsQueryParams),
    responses(
        (status = 200, description = "Dashboard analytics", body = inline(ApiResponse<DashboardAnalyticsResponse>)),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_dashboard_analytics(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let range = query.to_time_range();

    // Run all aggregate queries concurrently
    let (throughput, status, events, enforcements, workers, failure_rate) = tokio::try_join!(
        AnalyticsRepository::execution_throughput_hourly(&state.db, &range),
        AnalyticsRepository::execution_status_hourly(&state.db, &range),
        AnalyticsRepository::event_volume_hourly(&state.db, &range),
        AnalyticsRepository::enforcement_volume_hourly(&state.db, &range),
        AnalyticsRepository::worker_status_hourly(&state.db, &range),
        AnalyticsRepository::execution_failure_rate(&state.db, &range),
    )?;

    let response = DashboardAnalyticsResponse {
        since: range.since,
        until: range.until,
        execution_throughput: throughput.into_iter().map(Into::into).collect(),
        execution_status: status.into_iter().map(Into::into).collect(),
        event_volume: events.into_iter().map(Into::into).collect(),
        enforcement_volume: enforcements.into_iter().map(Into::into).collect(),
        worker_status: workers.into_iter().map(Into::into).collect(),
        failure_rate: FailureRateResponse::from_summary(failure_rate, &range),
    };

    Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}

/// Get execution status transitions over time.
///
/// Returns hourly buckets of execution status transitions (e.g., how many
/// executions moved to "completed", "failed", "running" per hour).
#[utoipa::path(
    get,
    path = "/api/v1/analytics/executions/status",
    tag = "analytics",
    params(AnalyticsQueryParams),
    responses(
        (status = 200, description = "Execution status transitions", body = inline(ApiResponse<ExecutionStatusTimeSeriesResponse>)),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_execution_status_analytics(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let range = query.to_time_range();
    let rows = AnalyticsRepository::execution_status_hourly(&state.db, &range).await?;

    let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();

    let response = ExecutionStatusTimeSeriesResponse {
        since: range.since,
        until: range.until,
        data,
    };

    Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}

/// Get execution throughput over time.
///
/// Returns hourly buckets of execution creation counts.
#[utoipa::path(
    get,
    path = "/api/v1/analytics/executions/throughput",
    tag = "analytics",
    params(AnalyticsQueryParams),
    responses(
        (status = 200, description = "Execution throughput", body = inline(ApiResponse<ExecutionThroughputResponse>)),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_execution_throughput_analytics(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let range = query.to_time_range();
    let rows = AnalyticsRepository::execution_throughput_hourly(&state.db, &range).await?;

    let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();

    let response = ExecutionThroughputResponse {
        since: range.since,
        until: range.until,
        data,
    };

    Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}

/// Get the execution failure rate summary.
///
/// Returns aggregate failure/timeout/completion counts and the failure rate
/// percentage over the requested time range.
#[utoipa::path(
    get,
    path = "/api/v1/analytics/executions/failure-rate",
    tag = "analytics",
    params(AnalyticsQueryParams),
    responses(
        (status = 200, description = "Failure rate summary", body = inline(ApiResponse<FailureRateResponse>)),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_failure_rate_analytics(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let range = query.to_time_range();
    let summary = AnalyticsRepository::execution_failure_rate(&state.db, &range).await?;

    let response = FailureRateResponse::from_summary(summary, &range);

    Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}

/// Get event volume over time.
///
/// Returns hourly buckets of event creation counts, aggregated across all triggers.
#[utoipa::path(
    get,
    path = "/api/v1/analytics/events/volume",
    tag = "analytics",
    params(AnalyticsQueryParams),
    responses(
        (status = 200, description = "Event volume", body = inline(ApiResponse<EventVolumeResponse>)),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_event_volume_analytics(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let range = query.to_time_range();
    let rows = AnalyticsRepository::event_volume_hourly(&state.db, &range).await?;

    let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();

    let response = EventVolumeResponse {
        since: range.since,
        until: range.until,
        data,
    };

    Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}

/// Get worker status transitions over time.
///
/// Returns hourly buckets of worker status changes (online/offline/draining).
#[utoipa::path(
    get,
    path = "/api/v1/analytics/workers/status",
    tag = "analytics",
    params(AnalyticsQueryParams),
    responses(
        (status = 200, description = "Worker status transitions", body = inline(ApiResponse<WorkerStatusTimeSeriesResponse>)),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_worker_status_analytics(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let range = query.to_time_range();
    let rows = AnalyticsRepository::worker_status_hourly(&state.db, &range).await?;

    let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();

    let response = WorkerStatusTimeSeriesResponse {
        since: range.since,
        until: range.until,
        data,
    };

    Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}

/// Get enforcement volume over time.
///
/// Returns hourly buckets of enforcement creation counts, aggregated across all rules.
#[utoipa::path(
    get,
    path = "/api/v1/analytics/enforcements/volume",
    tag = "analytics",
    params(AnalyticsQueryParams),
    responses(
        (status = 200, description = "Enforcement volume", body = inline(ApiResponse<EnforcementVolumeResponse>)),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_enforcement_volume_analytics(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Query(query): Query<AnalyticsQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let range = query.to_time_range();
    let rows = AnalyticsRepository::enforcement_volume_hourly(&state.db, &range).await?;

    let data: Vec<TimeSeriesPoint> = rows.into_iter().map(Into::into).collect();

    let response = EnforcementVolumeResponse {
        since: range.since,
        until: range.until,
        data,
    };

    Ok((StatusCode::OK, Json(ApiResponse::new(response))))
}

// ---------------------------------------------------------------------------
// Router
// ---------------------------------------------------------------------------

/// Build the analytics routes.
///
/// Mounts:
/// - `GET /analytics/dashboard` — combined dashboard payload
/// - `GET /analytics/executions/status` — execution status transitions
/// - `GET /analytics/executions/throughput` — execution creation throughput
/// - `GET /analytics/executions/failure-rate` — failure rate summary
/// - `GET /analytics/events/volume` — event creation volume
/// - `GET /analytics/workers/status` — worker status transitions
/// - `GET /analytics/enforcements/volume` — enforcement creation volume
pub fn routes() -> Router<Arc<AppState>> {
    Router::new()
        .route("/analytics/dashboard", get(get_dashboard_analytics))
        .route(
            "/analytics/executions/status",
            get(get_execution_status_analytics),
        )
        .route(
            "/analytics/executions/throughput",
            get(get_execution_throughput_analytics),
        )
        .route(
            "/analytics/executions/failure-rate",
            get(get_failure_rate_analytics),
        )
        .route("/analytics/events/volume", get(get_event_volume_analytics))
        .route(
            "/analytics/workers/status",
            get(get_worker_status_analytics),
        )
        .route(
            "/analytics/enforcements/volume",
            get(get_enforcement_volume_analytics),
        )
}

245 crates/api/src/routes/history.rs (new file)
@@ -0,0 +1,245 @@
//! Entity history API routes
//!
//! Provides read-only access to the TimescaleDB entity history hypertables.
//! History records are written by PostgreSQL triggers — these endpoints only query them.

use axum::{
    extract::{Path, Query, State},
    http::StatusCode,
    response::IntoResponse,
    routing::get,
    Json, Router,
};
use std::sync::Arc;

use attune_common::models::entity_history::HistoryEntityType;
use attune_common::repositories::entity_history::EntityHistoryRepository;

use crate::{
    auth::middleware::RequireAuth,
    dto::{
        common::{PaginatedResponse, PaginationMeta, PaginationParams},
        history::{HistoryQueryParams, HistoryRecordResponse},
    },
    middleware::{ApiError, ApiResult},
    state::AppState,
};

/// List history records for a given entity type.
///
/// Supported entity types: `execution`, `worker`, `enforcement`, `event`.
/// Returns a paginated list of change records ordered by time descending.
#[utoipa::path(
    get,
    path = "/api/v1/history/{entity_type}",
    tag = "history",
    params(
        ("entity_type" = String, Path, description = "Entity type: execution, worker, enforcement, or event"),
        HistoryQueryParams,
    ),
    responses(
        (status = 200, description = "Paginated list of history records", body = PaginatedResponse<HistoryRecordResponse>),
        (status = 400, description = "Invalid entity type"),
    ),
    security(("bearer_auth" = []))
)]
pub async fn list_entity_history(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Path(entity_type_str): Path<String>,
    Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
    let entity_type = parse_entity_type(&entity_type_str)?;

    let repo_params = query.to_repo_params();

    let (records, total) = tokio::try_join!(
        EntityHistoryRepository::query(&state.db, entity_type, &repo_params),
        EntityHistoryRepository::count(&state.db, entity_type, &repo_params),
    )?;

    let data: Vec<HistoryRecordResponse> = records.into_iter().map(Into::into).collect();

    let pagination_params = PaginationParams {
        page: query.page,
        page_size: query.page_size,
    };

    let response = PaginatedResponse {
        data,
        pagination: PaginationMeta::new(
            pagination_params.page,
            pagination_params.page_size,
            total as u64,
        ),
    };

    Ok((StatusCode::OK, Json(response)))
}

/// Get history for a specific execution by ID.
///
/// Returns all change records for the given execution, ordered by time descending.
#[utoipa::path(
    get,
    path = "/api/v1/executions/{id}/history",
    tag = "history",
    params(
        ("id" = i64, Path, description = "Execution ID"),
        HistoryQueryParams,
    ),
    responses(
        (status = 200, description = "History records for the execution", body = PaginatedResponse<HistoryRecordResponse>),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_execution_history(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Path(id): Path<i64>,
    Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
    get_entity_history_by_id(&state, HistoryEntityType::Execution, id, query).await
}

/// Get history for a specific worker by ID.
///
/// Returns all change records for the given worker, ordered by time descending.
#[utoipa::path(
    get,
    path = "/api/v1/workers/{id}/history",
    tag = "history",
    params(
        ("id" = i64, Path, description = "Worker ID"),
        HistoryQueryParams,
    ),
    responses(
        (status = 200, description = "History records for the worker", body = PaginatedResponse<HistoryRecordResponse>),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_worker_history(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Path(id): Path<i64>,
    Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
    get_entity_history_by_id(&state, HistoryEntityType::Worker, id, query).await
}

/// Get history for a specific enforcement by ID.
///
/// Returns all change records for the given enforcement, ordered by time descending.
#[utoipa::path(
    get,
    path = "/api/v1/enforcements/{id}/history",
    tag = "history",
    params(
        ("id" = i64, Path, description = "Enforcement ID"),
        HistoryQueryParams,
    ),
    responses(
        (status = 200, description = "History records for the enforcement", body = PaginatedResponse<HistoryRecordResponse>),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_enforcement_history(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Path(id): Path<i64>,
    Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
    get_entity_history_by_id(&state, HistoryEntityType::Enforcement, id, query).await
}

/// Get history for a specific event by ID.
///
/// Returns all change records for the given event, ordered by time descending.
#[utoipa::path(
    get,
    path = "/api/v1/events/{id}/history",
    tag = "history",
    params(
        ("id" = i64, Path, description = "Event ID"),
        HistoryQueryParams,
    ),
    responses(
        (status = 200, description = "History records for the event", body = PaginatedResponse<HistoryRecordResponse>),
    ),
    security(("bearer_auth" = []))
)]
pub async fn get_event_history(
    State(state): State<Arc<AppState>>,
    RequireAuth(_user): RequireAuth,
    Path(id): Path<i64>,
    Query(query): Query<HistoryQueryParams>,
) -> ApiResult<impl IntoResponse> {
    get_entity_history_by_id(&state, HistoryEntityType::Event, id, query).await
}

// ---------------------------------------------------------------------------
// Shared helpers
// ---------------------------------------------------------------------------

/// Parse and validate the entity type path parameter.
fn parse_entity_type(s: &str) -> Result<HistoryEntityType, ApiError> {
    s.parse::<HistoryEntityType>().map_err(ApiError::BadRequest)
}

/// Shared implementation for `GET /<entities>/:id/history` endpoints.
async fn get_entity_history_by_id(
    state: &AppState,
    entity_type: HistoryEntityType,
    entity_id: i64,
    query: HistoryQueryParams,
) -> ApiResult<impl IntoResponse> {
    // Override entity_id from the path — ignore any entity_id in query params
    let mut repo_params = query.to_repo_params();
    repo_params.entity_id = Some(entity_id);

    let (records, total) = tokio::try_join!(
        EntityHistoryRepository::query(&state.db, entity_type, &repo_params),
        EntityHistoryRepository::count(&state.db, entity_type, &repo_params),
    )?;

    let data: Vec<HistoryRecordResponse> = records.into_iter().map(Into::into).collect();

    let pagination_params = PaginationParams {
        page: query.page,
        page_size: query.page_size,
    };

    let response = PaginatedResponse {
        data,
        pagination: PaginationMeta::new(
            pagination_params.page,
            pagination_params.page_size,
            total as u64,
        ),
    };

    Ok((StatusCode::OK, Json(response)))
}
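The pagination metadata above is built from `page`, `page_size`, and the total row count. A minimal stdlib-only sketch of the underlying arithmetic (the function name and tuple shape are illustrative, not the crate's actual `PaginationMeta` API):

```rust
/// Illustrative pagination math: ceiling division for the page count and the
/// row offset a repository query would use. Names here are hypothetical.
fn pagination_meta(page: u64, page_size: u64, total: u64) -> (u64, u64) {
    // Ceiling division: 45 rows at page_size 20 -> 3 pages
    let total_pages = total.div_ceil(page_size.max(1));
    // 1-based page numbering: page 1 starts at offset 0
    let offset = page.saturating_sub(1) * page_size;
    (total_pages, offset)
}

fn main() {
    assert_eq!(pagination_meta(1, 20, 45), (3, 0));
    assert_eq!(pagination_meta(3, 20, 45), (3, 40));
    // An empty result set reports zero pages
    assert_eq!(pagination_meta(1, 20, 0), (0, 0));
    println!("ok");
}
```

The `max(1)` and `saturating_sub(1)` guards keep the math safe for degenerate inputs such as `page_size = 0` or `page = 0`.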

// ---------------------------------------------------------------------------
// Router
// ---------------------------------------------------------------------------

/// Build the history routes.
///
/// Mounts:
/// - `GET /history/:entity_type` — generic history query
/// - `GET /executions/:id/history` — execution-specific history
/// - `GET /workers/:id/history` — worker-specific history (note: currently no /workers base route exists)
/// - `GET /enforcements/:id/history` — enforcement-specific history
/// - `GET /events/:id/history` — event-specific history
pub fn routes() -> Router<Arc<AppState>> {
    Router::new()
        // Generic history endpoint
        .route("/history/{entity_type}", get(list_entity_history))
        // Entity-specific convenience endpoints
        .route("/executions/{id}/history", get(get_execution_history))
        .route("/workers/{id}/history", get(get_worker_history))
        .route("/enforcements/{id}/history", get(get_enforcement_history))
        .route("/events/{id}/history", get(get_event_history))
}

@@ -1,10 +1,12 @@
//! API route modules

pub mod actions;
pub mod analytics;
pub mod auth;
pub mod events;
pub mod executions;
pub mod health;
pub mod history;
pub mod inquiries;
pub mod keys;
pub mod packs;
@@ -14,10 +16,12 @@ pub mod webhooks;
pub mod workflows;

pub use actions::routes as action_routes;
pub use analytics::routes as analytics_routes;
pub use auth::routes as auth_routes;
pub use events::routes as event_routes;
pub use executions::routes as execution_routes;
pub use health::routes as health_routes;
pub use history::routes as history_routes;
pub use inquiries::routes as inquiry_routes;
pub use keys::routes as key_routes;
pub use packs::routes as pack_routes;
@@ -55,6 +55,8 @@ impl Server {
            .merge(routes::key_routes())
            .merge(routes::workflow_routes())
            .merge(routes::webhook_routes())
            .merge(routes::history_routes())
            .merge(routes::analytics_routes())
            // TODO: Add more route modules here
            // etc.
            .with_state(self.state.clone());
@@ -10,6 +10,7 @@ use sqlx::FromRow;

// Re-export common types
pub use action::*;
pub use entity_history::*;
pub use enums::*;
pub use event::*;
pub use execution::*;
@@ -1439,3 +1440,91 @@ pub mod pack_test {
        pub last_test_passed: Option<bool>,
    }
}

/// Entity history tracking models (TimescaleDB hypertables)
///
/// These models represent rows in the `<entity>_history` append-only hypertables
/// that track field-level changes to operational tables via PostgreSQL triggers.
pub mod entity_history {
    use super::*;

    /// A single history record capturing a field-level change to an entity.
    ///
    /// History records are append-only and populated by PostgreSQL triggers —
    /// they are never created or modified by application code.
    #[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
    pub struct EntityHistoryRecord {
        /// When the change occurred (hypertable partitioning dimension)
        pub time: DateTime<Utc>,

        /// The operation that produced this record: `INSERT`, `UPDATE`, or `DELETE`
        pub operation: String,

        /// The primary key of the changed row in the source table
        pub entity_id: Id,

        /// Denormalized human-readable identifier (e.g., `action_ref`, `worker.name`, `rule_ref`, `trigger_ref`)
        pub entity_ref: Option<String>,

        /// Names of fields that changed in this operation (empty for INSERT/DELETE)
        pub changed_fields: Vec<String>,

        /// Previous values of the changed fields (NULL for INSERT)
        pub old_values: Option<JsonValue>,

        /// New values of the changed fields (NULL for DELETE)
        pub new_values: Option<JsonValue>,
    }

    /// Supported entity types that have history tracking.
    ///
    /// Each variant maps to a `<name>_history` hypertable in the database.
    #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
    #[serde(rename_all = "lowercase")]
    pub enum HistoryEntityType {
        Execution,
        Worker,
        Enforcement,
        Event,
    }

    impl HistoryEntityType {
        /// Returns the history table name for this entity type.
        pub fn table_name(&self) -> &'static str {
            match self {
                Self::Execution => "execution_history",
                Self::Worker => "worker_history",
                Self::Enforcement => "enforcement_history",
                Self::Event => "event_history",
            }
        }
    }

    impl std::fmt::Display for HistoryEntityType {
        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
            match self {
                Self::Execution => write!(f, "execution"),
                Self::Worker => write!(f, "worker"),
                Self::Enforcement => write!(f, "enforcement"),
                Self::Event => write!(f, "event"),
            }
        }
    }

    impl std::str::FromStr for HistoryEntityType {
        type Err = String;

        fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
            match s.to_lowercase().as_str() {
                "execution" => Ok(Self::Execution),
                "worker" => Ok(Self::Worker),
                "enforcement" => Ok(Self::Enforcement),
                "event" => Ok(Self::Event),
                other => Err(format!(
                    "unknown history entity type '{}'; expected one of: execution, worker, enforcement, event",
                    other
                )),
            }
        }
    }
}
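The enum above round-trips between its variants and lowercase strings via `Display` and `FromStr`. A standalone replica of just that mapping (duplicated here so it runs without the crate; the real type lives in `attune_common::models::entity_history`) demonstrates the expected behavior, including the case-insensitive parse the API's `parse_entity_type` helper relies on:

```rust
// Standalone replica of the HistoryEntityType string mapping, for illustration only.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum HistoryEntityType {
    Execution,
    Worker,
    Enforcement,
    Event,
}

impl std::str::FromStr for HistoryEntityType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Lowercasing first makes the path parameter case-insensitive
        match s.to_lowercase().as_str() {
            "execution" => Ok(Self::Execution),
            "worker" => Ok(Self::Worker),
            "enforcement" => Ok(Self::Enforcement),
            "event" => Ok(Self::Event),
            other => Err(format!("unknown history entity type '{other}'")),
        }
    }
}

fn main() {
    // Mixed case parses fine, matching the crate's to_lowercase() behavior
    assert_eq!("Worker".parse::<HistoryEntityType>(), Ok(HistoryEntityType::Worker));
    // Unknown types fail; the API surfaces this as a 400 Bad Request
    assert!("pack".parse::<HistoryEntityType>().is_err());
    println!("ok");
}
```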

@@ -191,9 +191,13 @@ impl RabbitMqConfig {
/// Queue configurations
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct QueuesConfig {
    /// Events queue configuration (sensor catch-all, bound with `#`)
    pub events: QueueConfig,

    /// Executor events queue configuration (bound only to `event.created`)
    #[serde(default = "default_executor_events_queue")]
    pub executor_events: QueueConfig,

    /// Executions queue configuration (legacy - to be deprecated)
    pub executions: QueueConfig,

@@ -216,6 +220,15 @@ pub struct QueuesConfig {
    pub notifications: QueueConfig,
}

fn default_executor_events_queue() -> QueueConfig {
    QueueConfig {
        name: "attune.executor.events.queue".to_string(),
        durable: true,
        exclusive: false,
        auto_delete: false,
    }
}

impl Default for QueuesConfig {
    fn default() -> Self {
        Self {
@@ -225,6 +238,12 @@ impl Default for QueuesConfig {
                exclusive: false,
                auto_delete: false,
            },
            executor_events: QueueConfig {
                name: "attune.executor.events.queue".to_string(),
                durable: true,
                exclusive: false,
                auto_delete: false,
            },
            executions: QueueConfig {
                name: "attune.executions.queue".to_string(),
                durable: true,
@@ -567,6 +586,7 @@ mod tests {
    fn test_default_queues() {
        let queues = QueuesConfig::default();
        assert_eq!(queues.events.name, "attune.events.queue");
        assert_eq!(queues.executor_events.name, "attune.executor.events.queue");
        assert_eq!(queues.executions.name, "attune.executions.queue");
        assert_eq!(
            queues.execution_completed.name,
@@ -396,6 +396,11 @@ impl Connection {
            None
        };

        // Declare executor-specific events queue (only receives event.created messages,
        // unlike the sensor's catch-all events queue which is bound with `#`)
        self.declare_queue_with_optional_dlx(&config.rabbitmq.queues.executor_events, dlx)
            .await?;

        // Declare executor queues
        self.declare_queue_with_optional_dlx(&config.rabbitmq.queues.enforcements, dlx)
            .await?;
@@ -444,6 +449,15 @@ impl Connection {
        )
        .await?;

        // Bind executor events queue to only the `event.created` routing key
        // (the sensor's attune.events.queue uses `#` and gets all message types)
        self.bind_queue(
            &config.rabbitmq.queues.executor_events.name,
            &config.rabbitmq.exchanges.events.name,
            "event.created",
        )
        .await?;

        info!("Executor infrastructure setup complete");
        Ok(())
    }
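The comments above hinge on AMQP topic-exchange semantics: the sensor's queue is bound with `#`, which matches any routing key, while the executor queue is bound to the literal key `event.created`. A deliberately simplified stdlib sketch of that binding behavior (real brokers also support `*` for exactly one dot-separated word; this illustration handles only the two binding shapes used here):

```rust
// Simplified topic-binding match: `#` is a catch-all, anything else is a
// literal routing key. An illustration of the semantics, not RabbitMQ's code.
fn binding_matches(binding: &str, routing_key: &str) -> bool {
    if binding == "#" {
        return true; // catch-all, like the sensor's attune.events.queue binding
    }
    binding == routing_key // literal bindings, like "event.created"
}

fn main() {
    // The sensor's `#` binding receives every message type on the exchange
    assert!(binding_matches("#", "event.created"));
    assert!(binding_matches("#", "execution.completed"));
    // The executor queue receives only event.created
    assert!(binding_matches("event.created", "event.created"));
    assert!(!binding_matches("event.created", "execution.completed"));
    println!("ok");
}
```

This is why adding `executor_events` required both a queue declaration and an explicit narrow binding: without it, the executor would either see nothing or, bound with `#`, duplicate the sensor's full firehose.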

@@ -190,8 +190,10 @@ pub mod exchanges {

/// Well-known queue names
pub mod queues {
    /// Event processing queue (sensor catch-all, bound with `#`)
    pub const EVENTS: &str = "attune.events.queue";
    /// Executor event processing queue (bound only to `event.created`)
    pub const EXECUTOR_EVENTS: &str = "attune.executor.events.queue";
    /// Execution request queue
    pub const EXECUTIONS: &str = "attune.executions.queue";
    /// Notification delivery queue
565 crates/common/src/repositories/analytics.rs (new file)
@@ -0,0 +1,565 @@
//! Analytics repository for querying TimescaleDB continuous aggregates
//!
//! This module provides read-only query methods for the continuous aggregate
//! materialized views created in migration 000009_timescaledb_history. These views are
//! auto-refreshed by TimescaleDB policies and provide pre-computed hourly
//! rollups for dashboard widgets.

use chrono::{DateTime, Utc};
use serde::Serialize;
use sqlx::{Executor, FromRow, Postgres};

use crate::Result;

/// Repository for querying analytics continuous aggregates.
///
/// All methods are read-only. The underlying materialized views are
/// auto-refreshed by TimescaleDB continuous aggregate policies.
pub struct AnalyticsRepository;

// ---------------------------------------------------------------------------
// Row types returned by aggregate queries
// ---------------------------------------------------------------------------

/// A single hourly bucket of execution status transitions.
#[derive(Debug, Clone, Serialize, FromRow)]
pub struct ExecutionStatusBucket {
    /// Start of the 1-hour bucket
    pub bucket: DateTime<Utc>,
    /// Action ref (e.g., "core.http_request"); NULL when grouped across all actions
    pub action_ref: Option<String>,
    /// The status that was transitioned to (e.g., "completed", "failed")
    pub new_status: Option<String>,
    /// Number of transitions in this bucket
    pub transition_count: i64,
}

/// A single hourly bucket of execution throughput (creations).
#[derive(Debug, Clone, Serialize, FromRow)]
pub struct ExecutionThroughputBucket {
    /// Start of the 1-hour bucket
    pub bucket: DateTime<Utc>,
    /// Action ref; NULL when grouped across all actions
    pub action_ref: Option<String>,
    /// Number of executions created in this bucket
    pub execution_count: i64,
}

/// A single hourly bucket of event volume.
#[derive(Debug, Clone, Serialize, FromRow)]
pub struct EventVolumeBucket {
    /// Start of the 1-hour bucket
    pub bucket: DateTime<Utc>,
    /// Trigger ref; NULL when grouped across all triggers
    pub trigger_ref: Option<String>,
    /// Number of events created in this bucket
    pub event_count: i64,
}

/// A single hourly bucket of worker status transitions.
#[derive(Debug, Clone, Serialize, FromRow)]
pub struct WorkerStatusBucket {
    /// Start of the 1-hour bucket
    pub bucket: DateTime<Utc>,
    /// Worker name; NULL when grouped across all workers
    pub worker_name: Option<String>,
    /// The status transitioned to (e.g., "online", "offline")
    pub new_status: Option<String>,
    /// Number of transitions in this bucket
    pub transition_count: i64,
}

/// A single hourly bucket of enforcement volume.
#[derive(Debug, Clone, Serialize, FromRow)]
pub struct EnforcementVolumeBucket {
    /// Start of the 1-hour bucket
    pub bucket: DateTime<Utc>,
    /// Rule ref; NULL when grouped across all rules
    pub rule_ref: Option<String>,
    /// Number of enforcements created in this bucket
    pub enforcement_count: i64,
}

/// Aggregated failure rate over a time range.
#[derive(Debug, Clone, Serialize)]
pub struct FailureRateSummary {
    /// Total status transitions to terminal states in the window
    pub total_terminal: i64,
    /// Number of transitions to "failed" status
    pub failed_count: i64,
    /// Number of transitions to "timeout" status
    pub timeout_count: i64,
    /// Number of transitions to "completed" status
    pub completed_count: i64,
    /// Failure rate as a percentage (0.0 – 100.0)
    pub failure_rate_pct: f64,
}
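`failure_rate_pct` can be derived from the terminal-state counts above. A hedged sketch of the likely computation (the repository's actual SQL may differ; counting both "failed" and "timeout" transitions as failures is an assumption, and division by zero is guarded when no terminal transitions fall in the window):

```rust
// Illustrative derivation of failure_rate_pct from terminal-state counts.
// Treating both "failed" and "timeout" as failures is an assumption here.
fn failure_rate_pct(failed: i64, timeout: i64, completed: i64) -> f64 {
    let total = failed + timeout + completed;
    if total == 0 {
        return 0.0; // no terminal transitions in the window
    }
    (failed + timeout) as f64 / total as f64 * 100.0
}

fn main() {
    // 25 failed out of 100 terminal transitions -> 25%
    assert_eq!(failure_rate_pct(25, 0, 75), 25.0);
    // Timeouts count toward the failure rate: (1 + 1) / 4 -> 50%
    assert_eq!(failure_rate_pct(1, 1, 2), 50.0);
    // Empty window reports 0% rather than dividing by zero
    assert_eq!(failure_rate_pct(0, 0, 0), 0.0);
    println!("ok");
}
```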

// ---------------------------------------------------------------------------
// Query parameters
// ---------------------------------------------------------------------------

/// Common time-range parameters for analytics queries.
#[derive(Debug, Clone)]
pub struct AnalyticsTimeRange {
    /// Start of the query window (inclusive). Defaults to 24 hours ago.
    pub since: DateTime<Utc>,
    /// End of the query window (inclusive). Defaults to now.
    pub until: DateTime<Utc>,
}

impl Default for AnalyticsTimeRange {
    fn default() -> Self {
        let now = Utc::now();
        Self {
            since: now - chrono::Duration::hours(24),
            until: now,
        }
    }
}

impl AnalyticsTimeRange {
    /// Create a range covering the last N hours from now.
    pub fn last_hours(hours: i64) -> Self {
        let now = Utc::now();
        Self {
            since: now - chrono::Duration::hours(hours),
            until: now,
        }
    }

    /// Create a range covering the last N days from now.
    pub fn last_days(days: i64) -> Self {
        let now = Utc::now();
        Self {
            since: now - chrono::Duration::days(days),
            until: now,
        }
    }
}

// ---------------------------------------------------------------------------
// Repository implementation
// ---------------------------------------------------------------------------

impl AnalyticsRepository {
    // =======================================================================
    // Execution status transitions
    // =======================================================================

    /// Get execution status transitions per hour, aggregated across all actions.
    ///
    /// Returns one row per (bucket, new_status) pair, ordered by bucket ascending.
    pub async fn execution_status_hourly<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
    ) -> Result<Vec<ExecutionStatusBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, ExecutionStatusBucket>(
            r#"
            SELECT
                bucket,
                NULL::text AS action_ref,
                new_status,
                SUM(transition_count)::bigint AS transition_count
            FROM execution_status_hourly
            WHERE bucket >= $1 AND bucket <= $2
            GROUP BY bucket, new_status
            ORDER BY bucket ASC, new_status
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    /// Get execution status transitions per hour for a specific action.
    pub async fn execution_status_hourly_by_action<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
        action_ref: &str,
    ) -> Result<Vec<ExecutionStatusBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, ExecutionStatusBucket>(
            r#"
            SELECT
                bucket,
                action_ref,
                new_status,
                transition_count
            FROM execution_status_hourly
            WHERE bucket >= $1 AND bucket <= $2 AND action_ref = $3
            ORDER BY bucket ASC, new_status
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .bind(action_ref)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    // =======================================================================
    // Execution throughput
    // =======================================================================

    /// Get execution creation throughput per hour, aggregated across all actions.
    pub async fn execution_throughput_hourly<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
    ) -> Result<Vec<ExecutionThroughputBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, ExecutionThroughputBucket>(
            r#"
            SELECT
                bucket,
                NULL::text AS action_ref,
                SUM(execution_count)::bigint AS execution_count
            FROM execution_throughput_hourly
            WHERE bucket >= $1 AND bucket <= $2
            GROUP BY bucket
            ORDER BY bucket ASC
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    /// Get execution creation throughput per hour for a specific action.
    pub async fn execution_throughput_hourly_by_action<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
        action_ref: &str,
    ) -> Result<Vec<ExecutionThroughputBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, ExecutionThroughputBucket>(
            r#"
            SELECT
                bucket,
                action_ref,
                execution_count
            FROM execution_throughput_hourly
            WHERE bucket >= $1 AND bucket <= $2 AND action_ref = $3
            ORDER BY bucket ASC
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .bind(action_ref)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    // =======================================================================
    // Event volume
    // =======================================================================

    /// Get event creation volume per hour, aggregated across all triggers.
    pub async fn event_volume_hourly<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
    ) -> Result<Vec<EventVolumeBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, EventVolumeBucket>(
            r#"
            SELECT
                bucket,
                NULL::text AS trigger_ref,
                SUM(event_count)::bigint AS event_count
            FROM event_volume_hourly
            WHERE bucket >= $1 AND bucket <= $2
            GROUP BY bucket
            ORDER BY bucket ASC
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    /// Get event creation volume per hour for a specific trigger.
    pub async fn event_volume_hourly_by_trigger<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
        trigger_ref: &str,
    ) -> Result<Vec<EventVolumeBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, EventVolumeBucket>(
            r#"
            SELECT
                bucket,
                trigger_ref,
                event_count
            FROM event_volume_hourly
            WHERE bucket >= $1 AND bucket <= $2 AND trigger_ref = $3
            ORDER BY bucket ASC
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .bind(trigger_ref)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    // =======================================================================
    // Worker health
    // =======================================================================

    /// Get worker status transitions per hour, aggregated across all workers.
    pub async fn worker_status_hourly<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
    ) -> Result<Vec<WorkerStatusBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, WorkerStatusBucket>(
            r#"
            SELECT
                bucket,
                NULL::text AS worker_name,
                new_status,
                SUM(transition_count)::bigint AS transition_count
            FROM worker_status_hourly
            WHERE bucket >= $1 AND bucket <= $2
            GROUP BY bucket, new_status
            ORDER BY bucket ASC, new_status
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    /// Get worker status transitions per hour for a specific worker.
    pub async fn worker_status_hourly_by_name<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
        worker_name: &str,
    ) -> Result<Vec<WorkerStatusBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, WorkerStatusBucket>(
            r#"
            SELECT
                bucket,
                worker_name,
                new_status,
                transition_count
            FROM worker_status_hourly
            WHERE bucket >= $1 AND bucket <= $2 AND worker_name = $3
            ORDER BY bucket ASC, new_status
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .bind(worker_name)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    // =======================================================================
    // Enforcement volume
    // =======================================================================

    /// Get enforcement creation volume per hour, aggregated across all rules.
    pub async fn enforcement_volume_hourly<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
    ) -> Result<Vec<EnforcementVolumeBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, EnforcementVolumeBucket>(
            r#"
            SELECT
                bucket,
                NULL::text AS rule_ref,
                SUM(enforcement_count)::bigint AS enforcement_count
            FROM enforcement_volume_hourly
            WHERE bucket >= $1 AND bucket <= $2
            GROUP BY bucket
            ORDER BY bucket ASC
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    /// Get enforcement creation volume per hour for a specific rule.
    pub async fn enforcement_volume_hourly_by_rule<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
        rule_ref: &str,
    ) -> Result<Vec<EnforcementVolumeBucket>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let rows = sqlx::query_as::<_, EnforcementVolumeBucket>(
            r#"
            SELECT
                bucket,
                rule_ref,
                enforcement_count
            FROM enforcement_volume_hourly
            WHERE bucket >= $1 AND bucket <= $2 AND rule_ref = $3
            ORDER BY bucket ASC
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .bind(rule_ref)
        .fetch_all(executor)
        .await?;

        Ok(rows)
    }

    // =======================================================================
    // Derived analytics
    // =======================================================================

    /// Compute the execution failure rate over a time range.
    ///
    /// Uses the `execution_status_hourly` aggregate to count terminal-state
    /// transitions (completed, failed, timeout) and derive the failure
    /// percentage.
    pub async fn execution_failure_rate<'e, E>(
        executor: E,
        range: &AnalyticsTimeRange,
    ) -> Result<FailureRateSummary>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        // Query terminal-state transitions from the aggregate
        let rows = sqlx::query_as::<_, (Option<String>, i64)>(
            r#"
            SELECT
                new_status,
                SUM(transition_count)::bigint AS cnt
            FROM execution_status_hourly
            WHERE bucket >= $1 AND bucket <= $2
              AND new_status IN ('completed', 'failed', 'timeout')
            GROUP BY new_status
            "#,
        )
        .bind(range.since)
        .bind(range.until)
        .fetch_all(executor)
        .await?;

        let mut completed: i64 = 0;
        let mut failed: i64 = 0;
        let mut timeout: i64 = 0;

        for (status, count) in &rows {
            match status.as_deref() {
                Some("completed") => completed = *count,
                Some("failed") => failed = *count,
                Some("timeout") => timeout = *count,
                _ => {}
            }
        }

        let total_terminal = completed + failed + timeout;
        let failure_rate_pct = if total_terminal > 0 {
            ((failed + timeout) as f64 / total_terminal as f64) * 100.0
        } else {
            0.0
        };

        Ok(FailureRateSummary {
            total_terminal,
            failed_count: failed,
            timeout_count: timeout,
            completed_count: completed,
            failure_rate_pct,
        })
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_analytics_time_range_default() {
        let range = AnalyticsTimeRange::default();
        let diff = range.until - range.since;
        // Should be approximately 24 hours
        assert!((diff.num_hours() - 24).abs() <= 1);
    }

    #[test]
    fn test_analytics_time_range_last_hours() {
        let range = AnalyticsTimeRange::last_hours(6);
        let diff = range.until - range.since;
        assert!((diff.num_hours() - 6).abs() <= 1);
    }

    #[test]
    fn test_analytics_time_range_last_days() {
        let range = AnalyticsTimeRange::last_days(7);
        let diff = range.until - range.since;
        assert!((diff.num_days() - 7).abs() <= 1);
    }

    #[test]
    fn test_failure_rate_summary_zero_total() {
        let summary = FailureRateSummary {
            total_terminal: 0,
            failed_count: 0,
            timeout_count: 0,
            completed_count: 0,
            failure_rate_pct: 0.0,
        };
        assert_eq!(summary.failure_rate_pct, 0.0);
    }

    #[test]
    fn test_failure_rate_calculation() {
        // 80 completed, 15 failed, 5 timeout → 20% failure rate
        let total = 80 + 15 + 5;
        let rate = ((15 + 5) as f64 / total as f64) * 100.0;
        assert!((rate - 20.0).abs() < 0.01);
    }
}

crates/common/src/repositories/entity_history.rs (new file, 301 lines)

@@ -0,0 +1,301 @@
//! Entity history repository for querying TimescaleDB history hypertables
//!
//! This module provides read-only query methods for the `<entity>_history` tables.
//! History records are written exclusively by PostgreSQL triggers — this repository
//! only reads them.

use chrono::{DateTime, Utc};
use sqlx::{Executor, Postgres, QueryBuilder};

use crate::models::entity_history::{EntityHistoryRecord, HistoryEntityType};
use crate::Result;

/// Repository for querying entity history hypertables.
///
/// All methods are read-only. History records are populated by PostgreSQL
/// `AFTER INSERT OR UPDATE OR DELETE` triggers on the operational tables.
pub struct EntityHistoryRepository;

/// Query parameters for filtering history records.
#[derive(Debug, Clone, Default)]
pub struct HistoryQueryParams {
    /// Filter by entity ID (e.g., execution.id)
    pub entity_id: Option<i64>,

    /// Filter by entity ref (e.g., action_ref, worker name)
    pub entity_ref: Option<String>,

    /// Filter by operation type: `INSERT`, `UPDATE`, or `DELETE`
    pub operation: Option<String>,

    /// Only include records where this field was changed
    pub changed_field: Option<String>,

    /// Only include records at or after this time
    pub since: Option<DateTime<Utc>>,

    /// Only include records at or before this time
    pub until: Option<DateTime<Utc>>,

    /// Maximum number of records to return (default: 100, max: 1000)
    pub limit: Option<i64>,

    /// Offset for pagination
    pub offset: Option<i64>,
}

impl HistoryQueryParams {
    /// Returns the effective limit, clamped to the range 1..=1000.
    pub fn effective_limit(&self) -> i64 {
        self.limit.unwrap_or(100).clamp(1, 1000)
    }

    /// Returns the effective offset (never negative).
    pub fn effective_offset(&self) -> i64 {
        self.offset.unwrap_or(0).max(0)
    }
}

impl EntityHistoryRepository {
    /// Query history records for a given entity type with optional filters.
    ///
    /// Results are ordered by `time DESC` (most recent first).
    pub async fn query<'e, E>(
        executor: E,
        entity_type: HistoryEntityType,
        params: &HistoryQueryParams,
    ) -> Result<Vec<EntityHistoryRecord>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        // We must use format! for the table name since it can't be a bind parameter,
        // but HistoryEntityType::table_name() returns a known static str so this is safe.
        let table = entity_type.table_name();

        let mut qb: QueryBuilder<Postgres> = QueryBuilder::new(format!(
            "SELECT time, operation, entity_id, entity_ref, changed_fields, old_values, new_values FROM {table} WHERE 1=1"
        ));

        if let Some(entity_id) = params.entity_id {
            qb.push(" AND entity_id = ").push_bind(entity_id);
        }

        if let Some(ref entity_ref) = params.entity_ref {
            qb.push(" AND entity_ref = ").push_bind(entity_ref.clone());
        }

        if let Some(ref operation) = params.operation {
            qb.push(" AND operation = ")
                .push_bind(operation.to_uppercase());
        }

        if let Some(ref changed_field) = params.changed_field {
            qb.push(" AND ")
                .push_bind(changed_field.clone())
                .push(" = ANY(changed_fields)");
        }

        if let Some(since) = params.since {
            qb.push(" AND time >= ").push_bind(since);
        }

        if let Some(until) = params.until {
            qb.push(" AND time <= ").push_bind(until);
        }

        qb.push(" ORDER BY time DESC");
        qb.push(" LIMIT ").push_bind(params.effective_limit());
        qb.push(" OFFSET ").push_bind(params.effective_offset());

        let records = qb
            .build_query_as::<EntityHistoryRecord>()
            .fetch_all(executor)
            .await?;

        Ok(records)
    }

    /// Count history records for a given entity type with optional filters.
    ///
    /// Useful for pagination metadata.
    pub async fn count<'e, E>(
        executor: E,
        entity_type: HistoryEntityType,
        params: &HistoryQueryParams,
    ) -> Result<i64>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let table = entity_type.table_name();

        let mut qb: QueryBuilder<Postgres> =
            QueryBuilder::new(format!("SELECT COUNT(*) FROM {table} WHERE 1=1"));

        if let Some(entity_id) = params.entity_id {
            qb.push(" AND entity_id = ").push_bind(entity_id);
        }

        if let Some(ref entity_ref) = params.entity_ref {
            qb.push(" AND entity_ref = ").push_bind(entity_ref.clone());
        }

        if let Some(ref operation) = params.operation {
            qb.push(" AND operation = ")
                .push_bind(operation.to_uppercase());
        }

        if let Some(ref changed_field) = params.changed_field {
            qb.push(" AND ")
                .push_bind(changed_field.clone())
                .push(" = ANY(changed_fields)");
        }

        if let Some(since) = params.since {
            qb.push(" AND time >= ").push_bind(since);
        }

        if let Some(until) = params.until {
            qb.push(" AND time <= ").push_bind(until);
        }

        let row: (i64,) = qb.build_query_as().fetch_one(executor).await?;

        Ok(row.0)
    }

    /// Get history records for a specific entity by ID.
    ///
    /// Convenience method equivalent to `query()` with `entity_id` set.
    pub async fn find_by_entity_id<'e, E>(
        executor: E,
        entity_type: HistoryEntityType,
        entity_id: i64,
        limit: Option<i64>,
    ) -> Result<Vec<EntityHistoryRecord>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let params = HistoryQueryParams {
            entity_id: Some(entity_id),
            limit,
            ..Default::default()
        };
        Self::query(executor, entity_type, &params).await
    }

    /// Get only status-change history records for a specific entity.
    ///
    /// Filters to UPDATE operations where `changed_fields` includes `"status"`.
    pub async fn find_status_changes<'e, E>(
        executor: E,
        entity_type: HistoryEntityType,
        entity_id: i64,
        limit: Option<i64>,
    ) -> Result<Vec<EntityHistoryRecord>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let params = HistoryQueryParams {
            entity_id: Some(entity_id),
            operation: Some("UPDATE".to_string()),
            changed_field: Some("status".to_string()),
            limit,
            ..Default::default()
        };
        Self::query(executor, entity_type, &params).await
    }

    /// Get the most recent history record for a specific entity.
    pub async fn find_latest<'e, E>(
        executor: E,
        entity_type: HistoryEntityType,
        entity_id: i64,
    ) -> Result<Option<EntityHistoryRecord>>
    where
        E: Executor<'e, Database = Postgres> + 'e,
    {
        let records = Self::find_by_entity_id(executor, entity_type, entity_id, Some(1)).await?;
        Ok(records.into_iter().next())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_history_query_params_defaults() {
        let params = HistoryQueryParams::default();
        assert_eq!(params.effective_limit(), 100);
        assert_eq!(params.effective_offset(), 0);
    }

    #[test]
    fn test_history_query_params_limit_cap() {
        let params = HistoryQueryParams {
            limit: Some(5000),
            ..Default::default()
        };
        assert_eq!(params.effective_limit(), 1000);
    }

    #[test]
    fn test_history_query_params_limit_min() {
        let params = HistoryQueryParams {
            limit: Some(-10),
            ..Default::default()
        };
        assert_eq!(params.effective_limit(), 1);
    }

    #[test]
    fn test_history_query_params_offset_min() {
        let params = HistoryQueryParams {
            offset: Some(-5),
            ..Default::default()
        };
        assert_eq!(params.effective_offset(), 0);
    }

    #[test]
    fn test_history_entity_type_table_name() {
        assert_eq!(
            HistoryEntityType::Execution.table_name(),
            "execution_history"
        );
        assert_eq!(HistoryEntityType::Worker.table_name(), "worker_history");
        assert_eq!(
            HistoryEntityType::Enforcement.table_name(),
            "enforcement_history"
        );
        assert_eq!(HistoryEntityType::Event.table_name(), "event_history");
    }

    #[test]
    fn test_history_entity_type_from_str() {
        assert_eq!(
            "execution".parse::<HistoryEntityType>().unwrap(),
            HistoryEntityType::Execution
        );
        assert_eq!(
            "Worker".parse::<HistoryEntityType>().unwrap(),
            HistoryEntityType::Worker
        );
        assert_eq!(
            "ENFORCEMENT".parse::<HistoryEntityType>().unwrap(),
            HistoryEntityType::Enforcement
        );
        assert_eq!(
            "event".parse::<HistoryEntityType>().unwrap(),
            HistoryEntityType::Event
        );
        assert!("unknown".parse::<HistoryEntityType>().is_err());
    }

    #[test]
    fn test_history_entity_type_display() {
        assert_eq!(HistoryEntityType::Execution.to_string(), "execution");
        assert_eq!(HistoryEntityType::Worker.to_string(), "worker");
        assert_eq!(HistoryEntityType::Enforcement.to_string(), "enforcement");
        assert_eq!(HistoryEntityType::Event.to_string(), "event");
    }
}

@@ -28,7 +28,9 @@
 use sqlx::{Executor, Postgres, Transaction};

 pub mod action;
+pub mod analytics;
 pub mod artifact;
+pub mod entity_history;
 pub mod event;
 pub mod execution;
 pub mod identity;
@@ -46,7 +48,9 @@ pub mod workflow;

 // Re-export repository types
 pub use action::{ActionRepository, PolicyRepository};
+pub use analytics::AnalyticsRepository;
 pub use artifact::ArtifactRepository;
+pub use entity_history::EntityHistoryRepository;
 pub use event::{EnforcementRepository, EventRepository};
 pub use execution::ExecutionRepository;
 pub use identity::{IdentityRepository, PermissionAssignmentRepository, PermissionSetRepository};

@@ -183,7 +183,14 @@ impl ExecutorService {

         // Start event processor with its own consumer
         info!("Starting event processor...");
-        let events_queue = self.inner.mq_config.rabbitmq.queues.events.name.clone();
+        let events_queue = self
+            .inner
+            .mq_config
+            .rabbitmq
+            .queues
+            .executor_events
+            .name
+            .clone();
         let event_consumer = Consumer::new(
             &self.inner.mq_connection,
             attune_common::mq::ConsumerConfig {

@@ -541,6 +541,7 @@ impl SensorManager {
            entrypoint,
            runtime,
            runtime_ref,
            runtime_version_constraint,
            trigger,
            trigger_ref,
            enabled,

@@ -16,7 +16,7 @@ services:
 # ============================================================================

   postgres:
-    image: postgres:16-alpine
+    image: timescale/timescaledb:2.17.2-pg16
     container_name: attune-postgres
     environment:
       POSTGRES_USER: attune

@@ -1,7 +1,7 @@
 # RabbitMQ Queue Bindings - Quick Reference

-**Last Updated:** 2026-02-03
-**Related Fix:** Queue Separation for InquiryHandler, CompletionListener, and ExecutionManager
+**Last Updated:** 2026-02-26
+**Related Fix:** Executor events queue separation (event.created only)

 ## Overview

@@ -21,7 +21,14 @@ Attune uses three main exchanges:

 | Queue | Routing Key | Message Type | Consumer |
 |-------|-------------|--------------|----------|
-| `attune.events.queue` | `#` (all) | `EventCreatedPayload` | EventProcessor (executor) |
+| `attune.events.queue` | `#` (all) | All event types | Sensor service (rule lifecycle) |
+| `attune.executor.events.queue` | `event.created` | `EventCreatedPayload` | EventProcessor (executor) |
 | `attune.rules.lifecycle.queue` | `rule.created`, `rule.enabled`, `rule.disabled` | `RuleCreated/Enabled/DisabledPayload` | RuleLifecycleListener (sensor) |
 | `worker.{id}.packs` | `pack.registered` | `PackRegisteredPayload` | Worker (per-instance) |

+> **Note:** The sensor's `attune.events.queue` is bound with `#` (all routing keys) for catch-all
+> event monitoring. The executor uses a dedicated `attune.executor.events.queue` bound only to
+> `event.created` to avoid deserializing unrelated message types (rule lifecycle, pack registration).

 ### Executions Exchange (`attune.executions`)

@@ -46,19 +46,30 @@ Each service declares only the queues it consumes:

 **Role:** Orchestrates execution lifecycle, enforces rules, manages inquiries

 **Queues Owned:**
+- `attune.executor.events.queue`
+  - Exchange: `attune.events`
+  - Routing: `event.created`
+  - Purpose: Sensor-generated events for rule evaluation
+  - Note: Dedicated queue so the executor only receives `EventCreatedPayload` messages,
+    not rule lifecycle or pack registration messages that also flow through `attune.events`
 - `attune.enforcements.queue`
   - Exchange: `attune.executions`
   - Routing: `enforcement.#`
   - Purpose: Rule enforcement requests
 - `attune.execution.requests.queue`
   - Exchange: `attune.executions`
   - Routing: `execution.requested`
   - Purpose: New execution requests
 - `attune.execution.status.queue`
   - Exchange: `attune.executions`
   - Routing: `execution.status.changed`
   - Purpose: Execution status updates from workers
 - `attune.execution.completed.queue`
   - Exchange: `attune.executions`
   - Routing: `execution.completed`
   - Purpose: Completed execution results
 - `attune.inquiry.responses.queue`
   - Exchange: `attune.executions`
   - Routing: `inquiry.responded`
   - Purpose: Human-in-the-loop responses

@@ -92,8 +103,16 @@ Each service declares only the queues it consumes:

 **Queues Owned:**
 - `attune.events.queue`
   - Exchange: `attune.events`
   - Routing: `#` (all events)
-  - Purpose: Events generated by sensors and triggers
+  - Purpose: Catch-all queue for sensor event monitoring
+  - Note: Bound with `#` to receive all message types on the events exchange.
+    The sensor service itself uses `attune.rules.lifecycle.queue` for rule changes
+    (see RuleLifecycleListener). This queue exists for general event monitoring.
+- `attune.rules.lifecycle.queue`
+  - Exchange: `attune.events`
+  - Routing: `rule.created`, `rule.enabled`, `rule.disabled`
+  - Purpose: Rule lifecycle events for starting/stopping sensors

 **Setup Method:** `Connection::setup_sensor_infrastructure()`

@@ -147,11 +166,11 @@ Exception:

 ### Rule Enforcement Flow
 ```
 Event Created
-  → `attune.events` exchange
-  → `attune.events.queue` (consumed by Executor)
+  → `attune.events` exchange (routing: event.created)
+  → `attune.executor.events.queue` (consumed by Executor EventProcessor)
   → Rule evaluation
   → `enforcement.created` published to `attune.executions`
-  → `attune.enforcements.queue` (consumed by Executor)
+  → `attune.enforcements.queue` (consumed by Executor EnforcementProcessor)
 ```

 ### Execution Flow

@@ -241,7 +260,8 @@ Access at `http://localhost:15672` (credentials: `guest`/`guest`)

 **Expected Queues:**
 - `attune.dlx.queue` - Dead letter queue
-- `attune.events.queue` - Events (Sensor)
+- `attune.events.queue` - Events catch-all (Sensor)
+- `attune.executor.events.queue` - Event created only (Executor)
 - `attune.enforcements.queue` - Enforcements (Executor)
 - `attune.execution.requests.queue` - Execution requests (Executor)
 - `attune.execution.status.queue` - Status updates (Executor)

docs/plans/timescaledb-entity-history.md (new file, 270 lines)

@@ -0,0 +1,270 @@
# TimescaleDB Entity History Tracking

## Overview

This plan describes the addition of **TimescaleDB-backed history tables** to track field-level changes on key operational entities in Attune. The goal is to provide an immutable audit log and time-series analytics for status transitions and other field changes, without modifying existing operational tables or application code.

## Motivation

Currently, when a field changes on an operational table (e.g., `execution.status` moves from `requested` → `running`), the row is updated in place and only the current state is retained. The `updated` timestamp is bumped, but there is no record of:

- What the previous value was
- When each transition occurred
- How long an entity spent in each state
- Historical trends (e.g., failure rate over time, execution throughput per hour)

This data is essential for operational dashboards, debugging, SLA tracking, and capacity planning.

## Technology Choice: TimescaleDB

[TimescaleDB](https://www.timescale.com/) is a PostgreSQL extension that adds time-series capabilities:

- **Hypertables**: Automatic time-based partitioning (chunks by hour/day/week)
- **Compression**: 10-20x storage reduction on aged-out chunks
- **Retention policies**: Automatic data expiry
- **Continuous aggregates**: Auto-refreshing materialized views for dashboard rollups
- **`time_bucket()` function**: Efficient time-series grouping

It runs as an extension inside the existing PostgreSQL instance — no additional infrastructure.
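
The setup for one history table can be sketched as follows. This is only a sketch: the chunk, compression, and retention intervals here are placeholder values, not decisions made by this plan.

```sql
-- Illustrative setup for one history table (intervals are placeholders).
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Turn the append-only history table into a hypertable partitioned by time.
SELECT create_hypertable('execution_history', 'time',
                         chunk_time_interval => INTERVAL '1 day');

-- Compress chunks older than 7 days and expire data after 90 days.
ALTER TABLE execution_history SET (timescaledb.compress);
SELECT add_compression_policy('execution_history', INTERVAL '7 days');
SELECT add_retention_policy('execution_history', INTERVAL '90 days');
```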
|
||||
|
||||
## Design Decisions

### Separate history tables, not hypertable conversions

The operational tables (`execution`, `worker`, `enforcement`, `event`) will **NOT** be converted to hypertables. Reasons:

1. **UNIQUE constraints on hypertables must include the time partitioning column** — this would break `worker.name UNIQUE`, PK references, etc.
2. **Foreign keys INTO hypertables are not supported** — `execution.parent` self-references `execution(id)`, `enforcement` references `rule`, etc.
3. **UPDATE-heavy tables are a poor fit for hypertables** — hypertables are optimized for append-only INSERT workloads.

Instead, each tracked entity gets a companion `<table>_history` hypertable that receives append-only change records.

### JSONB diff format (not full row snapshots)

Each history row captures only the fields that changed, stored as JSONB:

- **Compact**: A status change is `{"status": "running"}`, not a copy of the entire row including large `result`/`config` JSONB blobs.
- **Schema-decoupled**: Adding a column to the source table requires no changes to the history table structure — only a new `IS DISTINCT FROM` check in the trigger function.
- **Answering "what changed?"**: Directly readable without diffing two full snapshots.

A `changed_fields TEXT[]` column enables efficient partial indexes and GIN-indexed queries for filtering by field name.
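
For illustration, a single status transition might land in the history table like this (the row values below are invented, not from a real deployment):

```sql
-- Hypothetical: the most recent change recorded for execution id 42.
SELECT operation, entity_id, entity_ref, changed_fields, old_values, new_values
FROM execution_history
WHERE entity_id = 42
ORDER BY time DESC
LIMIT 1;
-- operation | entity_id | entity_ref        | changed_fields | old_values            | new_values
-- UPDATE    | 42        | core.http_request | {status}       | {"status": "running"} | {"status": "succeeded"}
```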

### PostgreSQL triggers for population

History rows are written by `AFTER INSERT OR UPDATE OR DELETE` triggers on the operational tables. This ensures:

- Every change is captured regardless of which service (API, executor, worker) made it.
- No Rust application code changes are needed for recording.
- It's impossible to miss a change path.

### Worker heartbeats excluded

`worker.last_heartbeat` is updated frequently by the heartbeat loop and is high-volume/low-value for history purposes. The trigger function explicitly excludes pure heartbeat-only updates. If heartbeat analytics are needed later, a dedicated lightweight table can be added.

## Tracked Entities

| Entity | History Table | `entity_ref` Source | Excluded Fields |
|--------|--------------|---------------------|-----------------|
| `execution` | `execution_history` | `action_ref` | *(none)* |
| `worker` | `worker_history` | `name` | `last_heartbeat` (when sole change) |
| `enforcement` | `enforcement_history` | `rule_ref` | *(none)* |
| `event` | `event_history` | `trigger_ref` | *(none)* |

## Table Schema

All four history tables share the same structure:

```sql
CREATE TABLE <entity>_history (
    time            TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    operation       TEXT NOT NULL,       -- 'INSERT', 'UPDATE', 'DELETE'
    entity_id       BIGINT NOT NULL,     -- PK of the source row
    entity_ref      TEXT,                -- denormalized ref/name for JOIN-free queries
    changed_fields  TEXT[] NOT NULL DEFAULT '{}',
    old_values      JSONB,               -- previous values of changed fields
    new_values      JSONB                -- new values of changed fields
);
```

Column details:

| Column | Purpose |
|--------|---------|
| `time` | Hypertable partitioning dimension; when the change occurred |
| `operation` | `INSERT`, `UPDATE`, or `DELETE` |
| `entity_id` | The source row's `id` (conceptual FK, not enforced on hypertable) |
| `entity_ref` | Denormalized human-readable identifier for efficient filtering |
| `changed_fields` | Array of field names that changed — enables partial indexes and GIN queries |
| `old_values` | JSONB of previous field values (NULL for INSERT) |
| `new_values` | JSONB of new field values (NULL for DELETE) |

## Hypertable Configuration

| History Table | Chunk Interval | Rationale |
|---------------|---------------|-----------|
| `execution_history` | 1 day | Highest expected volume |
| `enforcement_history` | 1 day | Correlated with execution volume |
| `event_history` | 1 day | Can be high volume from active sensors |
| `worker_history` | 7 days | Low volume (status changes are infrequent) |
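
As a sketch, the conversion maps onto the standard `create_hypertable()` call with the intervals above (the exact migration wording may differ):

```sql
SELECT create_hypertable('execution_history',   'time', chunk_time_interval => INTERVAL '1 day');
SELECT create_hypertable('enforcement_history', 'time', chunk_time_interval => INTERVAL '1 day');
SELECT create_hypertable('event_history',       'time', chunk_time_interval => INTERVAL '1 day');
SELECT create_hypertable('worker_history',      'time', chunk_time_interval => INTERVAL '7 days');
```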

## Indexes

Each history table gets:

1. **Entity lookup**: `(entity_id, time DESC)` — "show me history for entity X"
2. **Status change filter**: Partial index on `time DESC` where `'status' = ANY(changed_fields)` — "show me all status changes"
3. **Field filter**: GIN index on `changed_fields` — flexible field-based queries
4. **Ref-based lookup**: `(entity_ref, time DESC)` — "show me all execution history for action `core.http_request`"
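
For `execution_history`, those four indexes could look like this (index names are illustrative; the other three history tables follow the same pattern):

```sql
-- 1. Entity lookup
CREATE INDEX idx_execution_history_entity ON execution_history (entity_id, time DESC);
-- 2. Status change filter (partial index)
CREATE INDEX idx_execution_history_status_changes ON execution_history (time DESC)
    WHERE 'status' = ANY(changed_fields);
-- 3. Field filter
CREATE INDEX idx_execution_history_changed_fields ON execution_history USING GIN (changed_fields);
-- 4. Ref-based lookup
CREATE INDEX idx_execution_history_ref ON execution_history (entity_ref, time DESC);
```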

## Trigger Functions

Each tracked table gets a dedicated trigger function that:

1. On `INSERT`: Records the operation with key initial field values in `new_values`.
2. On `DELETE`: Records the operation with entity identifiers.
3. On `UPDATE`: Checks each mutable field with `IS DISTINCT FROM`. If any fields changed, records the old and new values. If nothing changed, no history row is written.

### Fields tracked per entity

**execution**: `status`, `result`, `executor`, `workflow_task`, `env_vars`

**worker**: `name`, `status`, `capabilities`, `meta`, `host`, `port` (excludes `last_heartbeat` when it's the only change)

**enforcement**: `status`, `payload`

**event**: `config`, `payload`
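
A sketch of what the `execution` trigger function could look like — function and trigger names are assumptions, and the real migration may track the full field list rather than the two fields shown:

```sql
CREATE OR REPLACE FUNCTION record_execution_history() RETURNS trigger AS $$
DECLARE
    fields   TEXT[] := '{}';
    old_vals JSONB  := '{}'::jsonb;
    new_vals JSONB  := '{}'::jsonb;
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO execution_history (operation, entity_id, entity_ref, changed_fields, new_values)
        VALUES ('INSERT', NEW.id, NEW.action_ref, ARRAY['status'],
                jsonb_build_object('status', NEW.status));
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO execution_history (operation, entity_id, entity_ref)
        VALUES ('DELETE', OLD.id, OLD.action_ref);
        RETURN OLD;
    END IF;

    -- UPDATE: collect changed fields via IS DISTINCT FROM, one block per field.
    IF NEW.status IS DISTINCT FROM OLD.status THEN
        fields   := fields || 'status';
        old_vals := old_vals || jsonb_build_object('status', OLD.status);
        new_vals := new_vals || jsonb_build_object('status', NEW.status);
    END IF;
    IF NEW.result IS DISTINCT FROM OLD.result THEN
        fields   := fields || 'result';
        old_vals := old_vals || jsonb_build_object('result', OLD.result);
        new_vals := new_vals || jsonb_build_object('result', NEW.result);
    END IF;
    -- ... same pattern for executor, workflow_task, env_vars ...

    -- Only write a history row if something actually changed.
    IF array_length(fields, 1) IS NOT NULL THEN
        INSERT INTO execution_history (operation, entity_id, entity_ref, changed_fields, old_values, new_values)
        VALUES ('UPDATE', NEW.id, NEW.action_ref, fields, old_vals, new_vals);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER execution_history_trigger
    AFTER INSERT OR UPDATE OR DELETE ON execution
    FOR EACH ROW EXECUTE FUNCTION record_execution_history();
```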

## Compression Policies

Applied after data leaves the "hot" query window:

| History Table | Compress After | `segmentby` | `orderby` |
|---------------|---------------|-------------|-----------|
| `execution_history` | 7 days | `entity_id` | `time DESC` |
| `worker_history` | 7 days | `entity_id` | `time DESC` |
| `enforcement_history` | 7 days | `entity_id` | `time DESC` |
| `event_history` | 7 days | `entity_id` | `time DESC` |

`segmentby = entity_id` ensures that "show me history for entity X" queries are fast even on compressed chunks.
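
In TimescaleDB 2.x this maps onto per-table compression settings plus a policy, e.g. for `execution_history` (sketch; repeated for the other three tables):

```sql
ALTER TABLE execution_history SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'entity_id',
    timescaledb.compress_orderby   = 'time DESC'
);
SELECT add_compression_policy('execution_history', INTERVAL '7 days');
```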

## Retention Policies

| History Table | Retain For | Rationale |
|---------------|-----------|-----------|
| `execution_history` | 90 days | Primary operational data |
| `enforcement_history` | 90 days | Tied to execution lifecycle |
| `event_history` | 30 days | High volume, less long-term value |
| `worker_history` | 180 days | Low volume, useful for capacity trends |
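
These windows map directly onto `add_retention_policy()` calls (sketch):

```sql
SELECT add_retention_policy('execution_history',   INTERVAL '90 days');
SELECT add_retention_policy('enforcement_history', INTERVAL '90 days');
SELECT add_retention_policy('event_history',       INTERVAL '30 days');
SELECT add_retention_policy('worker_history',      INTERVAL '180 days');
```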

## Continuous Aggregates (Future)

These are not part of the initial migration but are natural follow-ons:

```sql
-- Execution status transitions per hour (for dashboards)
CREATE MATERIALIZED VIEW execution_status_transitions_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    entity_ref AS action_ref,
    new_values->>'status' AS new_status,
    COUNT(*) AS transition_count
FROM execution_history
WHERE 'status' = ANY(changed_fields)
GROUP BY bucket, entity_ref, new_values->>'status'
WITH NO DATA;

-- Event volume per hour by trigger (for throughput monitoring)
CREATE MATERIALIZED VIEW event_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    entity_ref AS trigger_ref,
    COUNT(*) AS event_count
FROM event_history
WHERE operation = 'INSERT'
GROUP BY bucket, entity_ref
WITH NO DATA;
```

## Infrastructure Changes

### Docker Compose

Change the PostgreSQL image from `postgres:16-alpine` to `timescale/timescaledb:latest-pg16` (or a pinned version like `timescale/timescaledb:2.17.2-pg16`).

No other infrastructure changes are needed — TimescaleDB is a drop-in extension.

### Local Development

For local development (non-Docker), TimescaleDB must be installed as a PostgreSQL extension. On macOS: `brew install timescaledb`. On Linux: follow the [TimescaleDB install docs](https://docs.timescale.com/self-hosted/latest/install/).

### Testing

The schema-per-test isolation pattern works with TimescaleDB. The `timescaledb` extension is database-level (created once via `CREATE EXTENSION`), and hypertables in different schemas are independent. The test schema setup requires no changes — `create_hypertable()` operates within the active `search_path`.

### SQLx Compatibility

No special SQLx support is needed. History tables are standard PostgreSQL tables from SQLx's perspective. `INSERT`, `SELECT`, `time_bucket()`, and array operators all work as regular SQL. TimescaleDB-specific DDL (`create_hypertable`, `add_compression_policy`, etc.) runs in migrations only.
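
For example, a history lookup issued through SQLx is ordinary SQL; nothing TimescaleDB-specific leaks into the Rust side (query shapes are illustrative):

```sql
-- Last 24h of status changes for one execution ($1 bound by SQLx).
SELECT time, old_values->>'status' AS from_status, new_values->>'status' AS to_status
FROM execution_history
WHERE entity_id = $1
  AND 'status' = ANY(changed_fields)
  AND time > NOW() - INTERVAL '24 hours'
ORDER BY time DESC;

-- Hourly change counts over the same window, using time_bucket().
SELECT time_bucket('1 hour', time) AS bucket, COUNT(*) AS changes
FROM execution_history
WHERE time > NOW() - INTERVAL '24 hours'
GROUP BY bucket
ORDER BY bucket;
```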

## Implementation Scope

### Phase 1 (migration) ✅

- [x] `CREATE EXTENSION IF NOT EXISTS timescaledb`
- [x] Create four `<entity>_history` tables
- [x] Convert to hypertables with `create_hypertable()`
- [x] Create indexes (entity lookup, status change filter, GIN on changed_fields, ref lookup)
- [x] Create trigger functions for `execution`, `worker`, `enforcement`, `event`
- [x] Attach triggers to operational tables
- [x] Configure compression policies
- [x] Configure retention policies

### Phase 2 (API & UI) ✅

- [x] History model in `crates/common/src/models.rs` (`EntityHistoryRecord`, `HistoryEntityType`)
- [x] History repository in `crates/common/src/repositories/entity_history.rs` (`query`, `count`, `find_by_entity_id`, `find_status_changes`, `find_latest`)
- [x] History DTOs in `crates/api/src/dto/history.rs` (`HistoryRecordResponse`, `HistoryQueryParams`)
- [x] API endpoints in `crates/api/src/routes/history.rs`:
  - `GET /api/v1/history/{entity_type}` — generic history query with filters & pagination
  - `GET /api/v1/executions/{id}/history` — execution-specific history
  - `GET /api/v1/workers/{id}/history` — worker-specific history
  - `GET /api/v1/enforcements/{id}/history` — enforcement-specific history
  - `GET /api/v1/events/{id}/history` — event-specific history
- [x] Web UI history panel on entity detail pages
  - `web/src/hooks/useHistory.ts` — React Query hooks (`useEntityHistory`, `useExecutionHistory`, `useWorkerHistory`, `useEnforcementHistory`, `useEventHistory`)
  - `web/src/components/common/EntityHistoryPanel.tsx` — Reusable collapsible panel with timeline, field-level diffs, filters (operation, changed_field), and pagination
  - Integrated into `ExecutionDetailPage`, `EnforcementDetailPage`, `EventDetailPage` (worker detail page does not exist yet)
- [x] Continuous aggregates for dashboards
  - Migration `20260226200000_continuous_aggregates.sql` creates 5 continuous aggregates: `execution_status_hourly`, `execution_throughput_hourly`, `event_volume_hourly`, `worker_status_hourly`, `enforcement_volume_hourly`
  - Auto-refresh policies (30 min for most, 1 hour for worker) with 7-day lookback

### Phase 3 (analytics) ✅

- [x] Dashboard widgets showing execution throughput, failure rates, worker health trends
  - `crates/common/src/repositories/analytics.rs` — repository querying continuous aggregates (execution status/throughput, event volume, worker status, enforcement volume, failure rate)
  - `crates/api/src/dto/analytics.rs` — DTOs (`DashboardAnalyticsResponse`, `TimeSeriesPoint`, `FailureRateResponse`, `AnalyticsQueryParams`, etc.)
  - `crates/api/src/routes/analytics.rs` — 7 API endpoints under `/api/v1/analytics/` (dashboard, executions/status, executions/throughput, executions/failure-rate, events/volume, workers/status, enforcements/volume)
  - `web/src/hooks/useAnalytics.ts` — React Query hooks (`useDashboardAnalytics`, `useExecutionStatusAnalytics`, `useFailureRateAnalytics`, etc.)
  - `web/src/components/common/AnalyticsWidgets.tsx` — Dashboard visualization components (MiniBarChart, StackedBarChart, FailureRateCard with SVG ring gauge, StatCard, TimeRangeSelector with 6h/12h/24h/2d/7d presets)
  - Integrated into `DashboardPage.tsx` below existing metrics and activity sections
- [ ] Configurable retention periods via admin settings
- [ ] Export/archival to external storage before retention expiry

## Risks & Mitigations

| Risk | Mitigation |
|------|-----------|
| Trigger overhead on hot paths | Triggers are lightweight (JSONB build + single INSERT into an append-optimized hypertable). Benchmark if execution throughput exceeds 1K/sec. |
| Storage growth | Compression (7-day delay) + retention policies bound storage automatically. |
| JSONB query performance | Partial indexes on `changed_fields` avoid full scans. Continuous aggregates pre-compute hot queries. |
| Schema drift (new columns not tracked) | When adding mutable columns to tracked tables, add a corresponding `IS DISTINCT FROM` check to the trigger function. Document this in the pitfalls section of AGENTS.md. |
| Test compatibility | TimescaleDB extension is database-level; schema-per-test isolation is unaffected. Verify in CI. |

## Docker Image Pinning

For reproducibility, pin the TimescaleDB image version rather than using `latest`:

```yaml
postgres:
  image: timescale/timescaledb:2.17.2-pg16
```

Update the pin periodically as new stable versions are released.

@@ -1,5 +1,5 @@
-- Migration: Pack System
-- Description: Creates pack and runtime tables
-- Description: Creates pack, runtime, and runtime_version tables
-- Version: 20250101000002

-- ============================================================================

@@ -160,3 +160,85 @@ COMMENT ON COLUMN runtime.distributions IS 'Runtime distribution metadata includ
COMMENT ON COLUMN runtime.installation IS 'Installation requirements and instructions including package managers and setup steps';
COMMENT ON COLUMN runtime.installers IS 'Array of installer actions to create pack-specific runtime environments. Each installer defines commands to set up isolated environments (e.g., Python venv, npm install).';
COMMENT ON COLUMN runtime.execution_config IS 'Execution configuration: interpreter, environment setup, and dependency management. Drives how the worker executes actions and how pack install sets up environments.';

-- ============================================================================
-- RUNTIME VERSION TABLE
-- ============================================================================

CREATE TABLE runtime_version (
    id BIGSERIAL PRIMARY KEY,
    runtime BIGINT NOT NULL REFERENCES runtime(id) ON DELETE CASCADE,
    runtime_ref TEXT NOT NULL,

    -- Semantic version string (e.g., "3.12.1", "20.11.0")
    version TEXT NOT NULL,

    -- Individual version components for efficient range queries.
    -- Nullable because some runtimes may use non-numeric versioning.
    version_major INT,
    version_minor INT,
    version_patch INT,

    -- Complete execution configuration for this specific version.
    -- This is NOT a diff/override — it is a full standalone config that can
    -- replace the parent runtime's execution_config when this version is selected.
    -- Structure is identical to runtime.execution_config (RuntimeExecutionConfig).
    execution_config JSONB NOT NULL DEFAULT '{}'::jsonb,

    -- Version-specific distribution/verification metadata.
    -- Structure mirrors runtime.distributions but with version-specific commands.
    -- Example: verification commands that check for a specific binary like python3.12.
    distributions JSONB NOT NULL DEFAULT '{}'::jsonb,

    -- Whether this version is the default for the parent runtime.
    -- At most one version per runtime should be marked as default.
    is_default BOOLEAN NOT NULL DEFAULT FALSE,

    -- Whether this version has been verified as available on the current system.
    available BOOLEAN NOT NULL DEFAULT TRUE,

    -- When this version was last verified (via running verification commands).
    verified_at TIMESTAMPTZ,

    -- Arbitrary version-specific metadata (e.g., EOL date, release notes URL,
    -- feature flags, platform-specific notes).
    meta JSONB NOT NULL DEFAULT '{}'::jsonb,

    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT runtime_version_unique UNIQUE(runtime, version)
);

-- Indexes
CREATE INDEX idx_runtime_version_runtime ON runtime_version(runtime);
CREATE INDEX idx_runtime_version_runtime_ref ON runtime_version(runtime_ref);
CREATE INDEX idx_runtime_version_version ON runtime_version(version);
CREATE INDEX idx_runtime_version_available ON runtime_version(available) WHERE available = TRUE;
CREATE INDEX idx_runtime_version_is_default ON runtime_version(is_default) WHERE is_default = TRUE;
CREATE INDEX idx_runtime_version_components ON runtime_version(runtime, version_major, version_minor, version_patch);
CREATE INDEX idx_runtime_version_created ON runtime_version(created DESC);
CREATE INDEX idx_runtime_version_execution_config ON runtime_version USING GIN (execution_config);
CREATE INDEX idx_runtime_version_meta ON runtime_version USING GIN (meta);

-- Trigger
CREATE TRIGGER update_runtime_version_updated
    BEFORE UPDATE ON runtime_version
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE runtime_version IS 'Specific versions of a runtime (e.g., Python 3.11, 3.12) with version-specific execution configuration';
COMMENT ON COLUMN runtime_version.runtime IS 'Parent runtime this version belongs to';
COMMENT ON COLUMN runtime_version.runtime_ref IS 'Parent runtime ref (e.g., core.python) for display/filtering';
COMMENT ON COLUMN runtime_version.version IS 'Semantic version string (e.g., "3.12.1", "20.11.0")';
COMMENT ON COLUMN runtime_version.version_major IS 'Major version component for efficient range queries';
COMMENT ON COLUMN runtime_version.version_minor IS 'Minor version component for efficient range queries';
COMMENT ON COLUMN runtime_version.version_patch IS 'Patch version component for efficient range queries';
COMMENT ON COLUMN runtime_version.execution_config IS 'Complete execution configuration for this version (same structure as runtime.execution_config)';
COMMENT ON COLUMN runtime_version.distributions IS 'Version-specific distribution/verification metadata';
COMMENT ON COLUMN runtime_version.is_default IS 'Whether this is the default version for the parent runtime (at most one per runtime)';
COMMENT ON COLUMN runtime_version.available IS 'Whether this version has been verified as available on the system';
COMMENT ON COLUMN runtime_version.verified_at IS 'Timestamp of last availability verification';
COMMENT ON COLUMN runtime_version.meta IS 'Arbitrary version-specific metadata';

@@ -1,6 +1,23 @@
-- Migration: Event System
-- Description: Creates trigger, sensor, event, and enforcement tables (with webhook_config, is_adhoc from start)
-- Version: 20250101000003
-- Migration: Event System and Actions
-- Description: Creates trigger, sensor, event, enforcement, and action tables
-- with runtime version constraint support. Includes webhook key
-- generation function used by webhook management functions in 000007.
-- Version: 20250101000004

-- ============================================================================
-- WEBHOOK KEY GENERATION
-- ============================================================================

-- Generates a unique webhook key in the format: wh_<32 random hex chars>
-- Used by enable_trigger_webhook() and regenerate_trigger_webhook_key() in 000007.
CREATE OR REPLACE FUNCTION generate_webhook_key()
RETURNS VARCHAR(64) AS $$
BEGIN
    RETURN 'wh_' || encode(gen_random_bytes(16), 'hex');
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION generate_webhook_key() IS 'Generates a unique webhook key (format: wh_<32 hex chars>) for trigger webhook authentication';

-- ============================================================================
-- TRIGGER TABLE

@@ -74,6 +91,7 @@ CREATE TABLE sensor (
    is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
    param_schema JSONB,
    config JSONB,
    runtime_version_constraint TEXT,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

@@ -106,6 +124,7 @@ COMMENT ON COLUMN sensor.runtime IS 'Runtime environment for execution';
COMMENT ON COLUMN sensor.trigger IS 'Trigger type this sensor creates events for';
COMMENT ON COLUMN sensor.enabled IS 'Whether this sensor is active';
COMMENT ON COLUMN sensor.is_adhoc IS 'True if sensor was manually created (ad-hoc), false if installed from pack';
COMMENT ON COLUMN sensor.runtime_version_constraint IS 'Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0"). NULL means any version.';

-- ============================================================================
-- EVENT TABLE

@@ -155,7 +174,7 @@ COMMENT ON COLUMN event.source IS 'Sensor that generated this event';

CREATE TABLE enforcement (
    id BIGSERIAL PRIMARY KEY,
    rule BIGINT, -- Forward reference to rule table, will add constraint in next migration
    rule BIGINT, -- Forward reference to rule table, will add constraint after rule is created
    rule_ref TEXT NOT NULL,
    trigger_ref TEXT NOT NULL,
    config JSONB,

@@ -200,5 +219,78 @@ COMMENT ON COLUMN enforcement.payload IS 'Event payload for rule evaluation';
COMMENT ON COLUMN enforcement.condition IS 'Logical operator for conditions (any=OR, all=AND)';
COMMENT ON COLUMN enforcement.conditions IS 'Condition expressions to evaluate';

-- Note: Rule table will be created in migration 20250101000006 after action table exists
-- Note: Foreign key constraints for enforcement.rule and event.rule will be added in that migration
-- ============================================================================
-- ACTION TABLE
-- ============================================================================

CREATE TABLE action (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL UNIQUE,
    pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
    pack_ref TEXT NOT NULL,
    label TEXT NOT NULL,
    description TEXT NOT NULL,
    entrypoint TEXT NOT NULL,
    runtime BIGINT REFERENCES runtime(id),
    param_schema JSONB,
    out_schema JSONB,
    parameter_delivery TEXT NOT NULL DEFAULT 'stdin' CHECK (parameter_delivery IN ('stdin', 'file')),
    parameter_format TEXT NOT NULL DEFAULT 'json' CHECK (parameter_format IN ('dotenv', 'json', 'yaml')),
    output_format TEXT NOT NULL DEFAULT 'text' CHECK (output_format IN ('text', 'json', 'yaml', 'jsonl')),
    is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
    timeout_seconds INTEGER,
    max_retries INTEGER DEFAULT 0,
    runtime_version_constraint TEXT,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT action_ref_lowercase CHECK (ref = LOWER(ref)),
    CONSTRAINT action_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);

-- Indexes
CREATE INDEX idx_action_ref ON action(ref);
CREATE INDEX idx_action_pack ON action(pack);
CREATE INDEX idx_action_runtime ON action(runtime);
CREATE INDEX idx_action_parameter_delivery ON action(parameter_delivery);
CREATE INDEX idx_action_parameter_format ON action(parameter_format);
CREATE INDEX idx_action_output_format ON action(output_format);
CREATE INDEX idx_action_is_adhoc ON action(is_adhoc) WHERE is_adhoc = true;
CREATE INDEX idx_action_created ON action(created DESC);

-- Trigger
CREATE TRIGGER update_action_updated
    BEFORE UPDATE ON action
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE action IS 'Actions are executable tasks that can be triggered';
COMMENT ON COLUMN action.ref IS 'Unique action reference (format: pack.name)';
COMMENT ON COLUMN action.pack IS 'Pack this action belongs to';
COMMENT ON COLUMN action.label IS 'Human-readable action name';
COMMENT ON COLUMN action.entrypoint IS 'Script or command to execute';
COMMENT ON COLUMN action.runtime IS 'Runtime environment for execution';
COMMENT ON COLUMN action.param_schema IS 'JSON schema for action parameters';
COMMENT ON COLUMN action.out_schema IS 'JSON schema for action output';
COMMENT ON COLUMN action.parameter_delivery IS 'How parameters are delivered: stdin (standard input - secure), file (temporary file - secure for large payloads). Environment variables are set separately via execution.env_vars.';
COMMENT ON COLUMN action.parameter_format IS 'Parameter serialization format: json (JSON object - default), dotenv (KEY=''VALUE''), yaml (YAML format)';
COMMENT ON COLUMN action.output_format IS 'Output parsing format: text (no parsing - raw stdout), json (parse stdout as JSON), yaml (parse stdout as YAML), jsonl (parse each line as JSON, collect into array)';
COMMENT ON COLUMN action.is_adhoc IS 'True if action was manually created (ad-hoc), false if installed from pack';
COMMENT ON COLUMN action.timeout_seconds IS 'Worker queue TTL override in seconds. If NULL, uses global worker_queue_ttl_ms config. Allows per-action timeout tuning.';
COMMENT ON COLUMN action.max_retries IS 'Maximum number of automatic retry attempts for failed executions. 0 = no retries (default).';
COMMENT ON COLUMN action.runtime_version_constraint IS 'Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0"). NULL means any version.';

-- ============================================================================

-- Add foreign key constraint for policy table
ALTER TABLE policy
    ADD CONSTRAINT policy_action_fkey
    FOREIGN KEY (action) REFERENCES action(id) ON DELETE CASCADE;

-- Note: Foreign key constraints for key table (key_owner_action_fkey, key_owner_sensor_fkey)
-- will be added in migration 000007_supporting_systems.sql after the key table is created

-- Note: Rule table will be created in migration 000005 after execution table exists
-- Note: Foreign key constraints for enforcement.rule and event.rule will be added there

@@ -1,70 +0,0 @@
-- Migration: Action
-- Description: Creates action table (with is_adhoc from start)
-- Version: 20250101000005

-- ============================================================================
-- ACTION TABLE
-- ============================================================================

CREATE TABLE action (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL UNIQUE,
    pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
    pack_ref TEXT NOT NULL,
    label TEXT NOT NULL,
    description TEXT NOT NULL,
    entrypoint TEXT NOT NULL,
    runtime BIGINT REFERENCES runtime(id),
    param_schema JSONB,
    out_schema JSONB,
    parameter_delivery TEXT NOT NULL DEFAULT 'stdin' CHECK (parameter_delivery IN ('stdin', 'file')),
    parameter_format TEXT NOT NULL DEFAULT 'json' CHECK (parameter_format IN ('dotenv', 'json', 'yaml')),
    output_format TEXT NOT NULL DEFAULT 'text' CHECK (output_format IN ('text', 'json', 'yaml', 'jsonl')),
    is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT action_ref_lowercase CHECK (ref = LOWER(ref)),
    CONSTRAINT action_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);

-- Indexes
CREATE INDEX idx_action_ref ON action(ref);
CREATE INDEX idx_action_pack ON action(pack);
CREATE INDEX idx_action_runtime ON action(runtime);
CREATE INDEX idx_action_parameter_delivery ON action(parameter_delivery);
CREATE INDEX idx_action_parameter_format ON action(parameter_format);
CREATE INDEX idx_action_output_format ON action(output_format);
CREATE INDEX idx_action_is_adhoc ON action(is_adhoc) WHERE is_adhoc = true;
CREATE INDEX idx_action_created ON action(created DESC);

-- Trigger
CREATE TRIGGER update_action_updated
    BEFORE UPDATE ON action
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE action IS 'Actions are executable tasks that can be triggered';
COMMENT ON COLUMN action.ref IS 'Unique action reference (format: pack.name)';
COMMENT ON COLUMN action.pack IS 'Pack this action belongs to';
COMMENT ON COLUMN action.label IS 'Human-readable action name';
COMMENT ON COLUMN action.entrypoint IS 'Script or command to execute';
COMMENT ON COLUMN action.runtime IS 'Runtime environment for execution';
COMMENT ON COLUMN action.param_schema IS 'JSON schema for action parameters';
COMMENT ON COLUMN action.out_schema IS 'JSON schema for action output';
COMMENT ON COLUMN action.parameter_delivery IS 'How parameters are delivered: stdin (standard input - secure), file (temporary file - secure for large payloads). Environment variables are set separately via execution.env_vars.';
COMMENT ON COLUMN action.parameter_format IS 'Parameter serialization format: json (JSON object - default), dotenv (KEY=''VALUE''), yaml (YAML format)';
COMMENT ON COLUMN action.output_format IS 'Output parsing format: text (no parsing - raw stdout), json (parse stdout as JSON), yaml (parse stdout as YAML), jsonl (parse each line as JSON, collect into array)';
COMMENT ON COLUMN action.is_adhoc IS 'True if action was manually created (ad-hoc), false if installed from pack';

-- ============================================================================

-- Add foreign key constraint for policy table
ALTER TABLE policy
    ADD CONSTRAINT policy_action_fkey
    FOREIGN KEY (action) REFERENCES action(id) ON DELETE CASCADE;

-- Note: Foreign key constraints for key table (key_owner_action_fkey, key_owner_sensor_fkey)
-- will be added in migration 20250101000009_keys_artifacts.sql after the key table is created

migrations/20250101000005_execution_and_operations.sql (new file)
@@ -0,0 +1,397 @@
-- Migration: Execution and Operations
-- Description: Creates execution, inquiry, rule, worker, and notification tables.
-- Includes retry tracking, worker health views, and helper functions.
-- Consolidates former migrations: 000006 (execution_system), 000008
-- (worker_notification), 000014 (worker_table), and 20260209 (phase3).
-- Version: 20250101000005

-- ============================================================================
-- EXECUTION TABLE
-- ============================================================================

CREATE TABLE execution (
    id BIGSERIAL PRIMARY KEY,
    action BIGINT REFERENCES action(id) ON DELETE SET NULL,
    action_ref TEXT NOT NULL,
    config JSONB,
    env_vars JSONB,
    parent BIGINT REFERENCES execution(id) ON DELETE SET NULL,
    enforcement BIGINT REFERENCES enforcement(id) ON DELETE SET NULL,
    executor BIGINT REFERENCES identity(id) ON DELETE SET NULL,
    status execution_status_enum NOT NULL DEFAULT 'requested',
    result JSONB,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    is_workflow BOOLEAN DEFAULT false NOT NULL,
    workflow_def BIGINT,
    workflow_task JSONB,

    -- Retry tracking (baked in from phase 3)
    retry_count INTEGER NOT NULL DEFAULT 0,
    max_retries INTEGER,
    retry_reason TEXT,
    original_execution BIGINT REFERENCES execution(id) ON DELETE SET NULL,

    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_execution_action ON execution(action);
CREATE INDEX idx_execution_action_ref ON execution(action_ref);
CREATE INDEX idx_execution_parent ON execution(parent);
CREATE INDEX idx_execution_enforcement ON execution(enforcement);
CREATE INDEX idx_execution_executor ON execution(executor);
CREATE INDEX idx_execution_status ON execution(status);
CREATE INDEX idx_execution_created ON execution(created DESC);
CREATE INDEX idx_execution_updated ON execution(updated DESC);
CREATE INDEX idx_execution_status_created ON execution(status, created DESC);
CREATE INDEX idx_execution_status_updated ON execution(status, updated DESC);
CREATE INDEX idx_execution_action_status ON execution(action, status);
CREATE INDEX idx_execution_executor_created ON execution(executor, created DESC);
CREATE INDEX idx_execution_parent_created ON execution(parent, created DESC);
CREATE INDEX idx_execution_result_gin ON execution USING GIN (result);
CREATE INDEX idx_execution_env_vars_gin ON execution USING GIN (env_vars);
CREATE INDEX idx_execution_original_execution ON execution(original_execution) WHERE original_execution IS NOT NULL;
CREATE INDEX idx_execution_status_retry ON execution(status, retry_count) WHERE status = 'failed' AND retry_count < COALESCE(max_retries, 0);

-- Trigger
CREATE TRIGGER update_execution_updated
    BEFORE UPDATE ON execution
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE execution IS 'Executions represent action runs, supports nested workflows';
COMMENT ON COLUMN execution.action IS 'Action being executed (may be null if action deleted)';
COMMENT ON COLUMN execution.action_ref IS 'Action reference (preserved even if action deleted)';
COMMENT ON COLUMN execution.config IS 'Snapshot of action configuration at execution time';
COMMENT ON COLUMN execution.env_vars IS 'Environment variables for this execution as key-value pairs (string -> string). These are set in the execution environment and are separate from action parameters. Used for execution context, configuration, and non-sensitive metadata.';
COMMENT ON COLUMN execution.parent IS 'Parent execution ID for workflow hierarchies';
COMMENT ON COLUMN execution.enforcement IS 'Enforcement that triggered this execution (if rule-driven)';
COMMENT ON COLUMN execution.executor IS 'Identity that initiated the execution';
COMMENT ON COLUMN execution.status IS 'Current execution lifecycle status';
COMMENT ON COLUMN execution.result IS 'Execution output/results';
COMMENT ON COLUMN execution.retry_count IS 'Current retry attempt number (0 = first attempt, 1 = first retry, etc.)';
COMMENT ON COLUMN execution.max_retries IS 'Maximum retries for this execution. Copied from action.max_retries at creation time.';
COMMENT ON COLUMN execution.retry_reason IS 'Reason for retry (e.g., "worker_unavailable", "transient_error", "manual_retry")';
COMMENT ON COLUMN execution.original_execution IS 'ID of the original execution if this is a retry. Forms a retry chain.';

-- ============================================================================

-- ============================================================================
-- INQUIRY TABLE
-- ============================================================================

CREATE TABLE inquiry (
    id BIGSERIAL PRIMARY KEY,
    execution BIGINT NOT NULL REFERENCES execution(id) ON DELETE CASCADE,
    prompt TEXT NOT NULL,
    response_schema JSONB,
    assigned_to BIGINT REFERENCES identity(id) ON DELETE SET NULL,
    status inquiry_status_enum NOT NULL DEFAULT 'pending',
    response JSONB,
    timeout_at TIMESTAMPTZ,
    responded_at TIMESTAMPTZ,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_inquiry_execution ON inquiry(execution);
CREATE INDEX idx_inquiry_assigned_to ON inquiry(assigned_to);
CREATE INDEX idx_inquiry_status ON inquiry(status);
CREATE INDEX idx_inquiry_timeout_at ON inquiry(timeout_at) WHERE timeout_at IS NOT NULL;
CREATE INDEX idx_inquiry_created ON inquiry(created DESC);
CREATE INDEX idx_inquiry_status_created ON inquiry(status, created DESC);
CREATE INDEX idx_inquiry_assigned_status ON inquiry(assigned_to, status);
CREATE INDEX idx_inquiry_execution_status ON inquiry(execution, status);
CREATE INDEX idx_inquiry_response_gin ON inquiry USING GIN (response);

-- Trigger
CREATE TRIGGER update_inquiry_updated
    BEFORE UPDATE ON inquiry
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE inquiry IS 'Inquiries enable human-in-the-loop workflows with async user interactions';
COMMENT ON COLUMN inquiry.execution IS 'Execution that is waiting on this inquiry';
COMMENT ON COLUMN inquiry.prompt IS 'Question or prompt text for the user';
COMMENT ON COLUMN inquiry.response_schema IS 'JSON schema defining expected response format';
COMMENT ON COLUMN inquiry.assigned_to IS 'Identity who should respond to this inquiry';
COMMENT ON COLUMN inquiry.status IS 'Current inquiry lifecycle status';
COMMENT ON COLUMN inquiry.response IS 'User response data';
COMMENT ON COLUMN inquiry.timeout_at IS 'When this inquiry expires';
COMMENT ON COLUMN inquiry.responded_at IS 'When the response was received';

-- ============================================================================

-- ============================================================================
-- RULE TABLE
-- ============================================================================

CREATE TABLE rule (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL UNIQUE,
    pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
    pack_ref TEXT NOT NULL,
    label TEXT NOT NULL,
    description TEXT NOT NULL,
    action BIGINT REFERENCES action(id) ON DELETE SET NULL,
    action_ref TEXT NOT NULL,
    trigger BIGINT REFERENCES trigger(id) ON DELETE SET NULL,
    trigger_ref TEXT NOT NULL,
    conditions JSONB NOT NULL DEFAULT '[]'::jsonb,
    action_params JSONB DEFAULT '{}'::jsonb,
    trigger_params JSONB DEFAULT '{}'::jsonb,
    enabled BOOLEAN NOT NULL,
    is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT rule_ref_lowercase CHECK (ref = LOWER(ref)),
    CONSTRAINT rule_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);

-- Indexes
CREATE INDEX idx_rule_ref ON rule(ref);
CREATE INDEX idx_rule_pack ON rule(pack);
CREATE INDEX idx_rule_action ON rule(action);
CREATE INDEX idx_rule_trigger ON rule(trigger);
CREATE INDEX idx_rule_enabled ON rule(enabled) WHERE enabled = TRUE;
CREATE INDEX idx_rule_is_adhoc ON rule(is_adhoc) WHERE is_adhoc = true;
CREATE INDEX idx_rule_created ON rule(created DESC);
CREATE INDEX idx_rule_trigger_enabled ON rule(trigger, enabled);
CREATE INDEX idx_rule_action_enabled ON rule(action, enabled);
CREATE INDEX idx_rule_pack_enabled ON rule(pack, enabled);
CREATE INDEX idx_rule_action_params_gin ON rule USING GIN (action_params);
CREATE INDEX idx_rule_trigger_params_gin ON rule USING GIN (trigger_params);

-- Trigger
CREATE TRIGGER update_rule_updated
    BEFORE UPDATE ON rule
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE rule IS 'Rules link triggers to actions with conditions';
COMMENT ON COLUMN rule.ref IS 'Unique rule reference (format: pack.name)';
COMMENT ON COLUMN rule.label IS 'Human-readable rule name';
COMMENT ON COLUMN rule.action IS 'Action to execute when rule triggers (null if action deleted)';
COMMENT ON COLUMN rule.trigger IS 'Trigger that activates this rule (null if trigger deleted)';
COMMENT ON COLUMN rule.conditions IS 'Condition expressions to evaluate before executing action';
COMMENT ON COLUMN rule.action_params IS 'Parameter overrides for the action';
COMMENT ON COLUMN rule.trigger_params IS 'Parameter overrides for the trigger';
COMMENT ON COLUMN rule.enabled IS 'Whether this rule is active';
COMMENT ON COLUMN rule.is_adhoc IS 'True if rule was manually created (ad-hoc), false if installed from pack';

-- ============================================================================

-- Add foreign key constraints now that rule table exists
ALTER TABLE enforcement
    ADD CONSTRAINT enforcement_rule_fkey
    FOREIGN KEY (rule) REFERENCES rule(id) ON DELETE SET NULL;

ALTER TABLE event
    ADD CONSTRAINT event_rule_fkey
    FOREIGN KEY (rule) REFERENCES rule(id) ON DELETE SET NULL;

-- ============================================================================
-- WORKER TABLE
-- ============================================================================

CREATE TABLE worker (
    id BIGSERIAL PRIMARY KEY,
    name TEXT NOT NULL UNIQUE,
    worker_type worker_type_enum NOT NULL,
    worker_role worker_role_enum NOT NULL,
    runtime BIGINT REFERENCES runtime(id) ON DELETE SET NULL,
    host TEXT,
    port INTEGER,
    status worker_status_enum NOT NULL DEFAULT 'active',
    capabilities JSONB,
    meta JSONB,
    last_heartbeat TIMESTAMPTZ,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_worker_name ON worker(name);
CREATE INDEX idx_worker_type ON worker(worker_type);
CREATE INDEX idx_worker_role ON worker(worker_role);
CREATE INDEX idx_worker_runtime ON worker(runtime);
CREATE INDEX idx_worker_status ON worker(status);
CREATE INDEX idx_worker_last_heartbeat ON worker(last_heartbeat DESC) WHERE last_heartbeat IS NOT NULL;
CREATE INDEX idx_worker_created ON worker(created DESC);
CREATE INDEX idx_worker_status_role ON worker(status, worker_role);
CREATE INDEX idx_worker_capabilities_gin ON worker USING GIN (capabilities);
CREATE INDEX idx_worker_meta_gin ON worker USING GIN (meta);
CREATE INDEX idx_worker_capabilities_health_status ON worker USING GIN ((capabilities -> 'health' -> 'status'));

-- Trigger
CREATE TRIGGER update_worker_updated
    BEFORE UPDATE ON worker
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE worker IS 'Worker registration and tracking table for action and sensor workers';
COMMENT ON COLUMN worker.name IS 'Unique worker identifier (typically hostname-based)';
COMMENT ON COLUMN worker.worker_type IS 'Worker deployment type (local or remote)';
COMMENT ON COLUMN worker.worker_role IS 'Worker role (action or sensor)';
COMMENT ON COLUMN worker.runtime IS 'Runtime environment this worker supports (optional)';
COMMENT ON COLUMN worker.host IS 'Worker host address';
COMMENT ON COLUMN worker.port IS 'Worker port number';
COMMENT ON COLUMN worker.status IS 'Worker operational status';
COMMENT ON COLUMN worker.capabilities IS 'Worker capabilities (e.g., max_concurrent_executions, supported runtimes)';
COMMENT ON COLUMN worker.meta IS 'Additional worker metadata';
COMMENT ON COLUMN worker.last_heartbeat IS 'Timestamp of last heartbeat from worker';

-- ============================================================================
-- NOTIFICATION TABLE
-- ============================================================================

CREATE TABLE notification (
    id BIGSERIAL PRIMARY KEY,
    channel TEXT NOT NULL,
    entity_type TEXT NOT NULL,
    entity TEXT NOT NULL,
    activity TEXT NOT NULL,
    state notification_status_enum NOT NULL DEFAULT 'created',
    content JSONB,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_notification_channel ON notification(channel);
CREATE INDEX idx_notification_entity_type ON notification(entity_type);
CREATE INDEX idx_notification_entity ON notification(entity);
CREATE INDEX idx_notification_state ON notification(state);
CREATE INDEX idx_notification_created ON notification(created DESC);
CREATE INDEX idx_notification_channel_state ON notification(channel, state);
CREATE INDEX idx_notification_entity_type_entity ON notification(entity_type, entity);
CREATE INDEX idx_notification_state_created ON notification(state, created DESC);
CREATE INDEX idx_notification_content_gin ON notification USING GIN (content);

-- Trigger
CREATE TRIGGER update_notification_updated
    BEFORE UPDATE ON notification
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Function for pg_notify on notification insert
CREATE OR REPLACE FUNCTION notify_on_insert()
RETURNS TRIGGER AS $$
DECLARE
    payload TEXT;
BEGIN
    -- Build JSON payload with id, entity, and activity
    payload := json_build_object(
        'id', NEW.id,
        'entity_type', NEW.entity_type,
        'entity', NEW.entity,
        'activity', NEW.activity
    )::text;

    -- Send notification to the specified channel
    PERFORM pg_notify(NEW.channel, payload);

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Trigger to send pg_notify on notification insert
CREATE TRIGGER notify_on_notification_insert
    AFTER INSERT ON notification
    FOR EACH ROW
    EXECUTE FUNCTION notify_on_insert();

-- Comments
COMMENT ON TABLE notification IS 'System notifications about entity changes for real-time updates';
COMMENT ON COLUMN notification.channel IS 'Notification channel (typically table name)';
COMMENT ON COLUMN notification.entity_type IS 'Type of entity (table name)';
COMMENT ON COLUMN notification.entity IS 'Entity identifier (typically ID or ref)';
COMMENT ON COLUMN notification.activity IS 'Activity type (e.g., "created", "updated", "completed")';
COMMENT ON COLUMN notification.state IS 'Processing state of notification';
COMMENT ON COLUMN notification.content IS 'Optional notification payload data';

-- ============================================================================
-- WORKER HEALTH VIEWS AND FUNCTIONS
-- ============================================================================

-- View for healthy workers (convenience for queries)
CREATE OR REPLACE VIEW healthy_workers AS
SELECT
    w.id,
    w.name,
    w.worker_type,
    w.worker_role,
    w.runtime,
    w.status,
    w.capabilities,
    w.last_heartbeat,
    (w.capabilities -> 'health' ->> 'status')::TEXT as health_status,
    (w.capabilities -> 'health' ->> 'queue_depth')::INTEGER as queue_depth,
    (w.capabilities -> 'health' ->> 'consecutive_failures')::INTEGER as consecutive_failures
FROM worker w
WHERE
    w.status = 'active'
    AND w.last_heartbeat > NOW() - INTERVAL '30 seconds'
    AND (
        -- Healthy if no health info (backward compatible)
        w.capabilities -> 'health' IS NULL
        OR
        -- Or explicitly marked healthy
        w.capabilities -> 'health' ->> 'status' IN ('healthy', 'degraded')
    );

COMMENT ON VIEW healthy_workers IS 'Workers that are active, have fresh heartbeat, and are healthy or degraded (not unhealthy)';

-- Function to get worker queue depth estimate
CREATE OR REPLACE FUNCTION get_worker_queue_depth(worker_id_param BIGINT)
RETURNS INTEGER AS $$
BEGIN
    RETURN (
        SELECT (capabilities -> 'health' ->> 'queue_depth')::INTEGER
        FROM worker
        WHERE id = worker_id_param
    );
END;
$$ LANGUAGE plpgsql STABLE;

COMMENT ON FUNCTION get_worker_queue_depth IS 'Extract current queue depth from worker health metadata';

-- Function to check if execution is retriable
CREATE OR REPLACE FUNCTION is_execution_retriable(execution_id_param BIGINT)
RETURNS BOOLEAN AS $$
DECLARE
    exec_record RECORD;
BEGIN
    SELECT
        e.retry_count,
        e.max_retries,
        e.status
    INTO exec_record
    FROM execution e
    WHERE e.id = execution_id_param;

    IF NOT FOUND THEN
        RETURN FALSE;
    END IF;

    -- Can retry if:
    -- 1. Status is failed
    -- 2. max_retries is set and > 0
    -- 3. retry_count < max_retries
    RETURN (
        exec_record.status = 'failed'
        AND exec_record.max_retries IS NOT NULL
        AND exec_record.max_retries > 0
        AND exec_record.retry_count < exec_record.max_retries
    );
END;
$$ LANGUAGE plpgsql STABLE;

COMMENT ON FUNCTION is_execution_retriable IS 'Check if a failed execution can be automatically retried based on retry limits';
@@ -1,183 +0,0 @@
-- Migration: Execution System
-- Description: Creates execution (with workflow columns), inquiry, and rule tables
-- Version: 20250101000006

-- ============================================================================
-- EXECUTION TABLE
-- ============================================================================

CREATE TABLE execution (
    id BIGSERIAL PRIMARY KEY,
    action BIGINT REFERENCES action(id) ON DELETE SET NULL,
    action_ref TEXT NOT NULL,
    config JSONB,
    env_vars JSONB,
    parent BIGINT REFERENCES execution(id) ON DELETE SET NULL,
    enforcement BIGINT REFERENCES enforcement(id) ON DELETE SET NULL,
    executor BIGINT REFERENCES identity(id) ON DELETE SET NULL,
    status execution_status_enum NOT NULL DEFAULT 'requested',
    result JSONB,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    is_workflow BOOLEAN DEFAULT false NOT NULL,
    workflow_def BIGINT,
    workflow_task JSONB,
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_execution_action ON execution(action);
CREATE INDEX idx_execution_action_ref ON execution(action_ref);
CREATE INDEX idx_execution_parent ON execution(parent);
CREATE INDEX idx_execution_enforcement ON execution(enforcement);
CREATE INDEX idx_execution_executor ON execution(executor);
CREATE INDEX idx_execution_status ON execution(status);
CREATE INDEX idx_execution_created ON execution(created DESC);
CREATE INDEX idx_execution_updated ON execution(updated DESC);
CREATE INDEX idx_execution_status_created ON execution(status, created DESC);
CREATE INDEX idx_execution_status_updated ON execution(status, updated DESC);
CREATE INDEX idx_execution_action_status ON execution(action, status);
CREATE INDEX idx_execution_executor_created ON execution(executor, created DESC);
CREATE INDEX idx_execution_parent_created ON execution(parent, created DESC);
CREATE INDEX idx_execution_result_gin ON execution USING GIN (result);
CREATE INDEX idx_execution_env_vars_gin ON execution USING GIN (env_vars);

-- Trigger
CREATE TRIGGER update_execution_updated
    BEFORE UPDATE ON execution
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE execution IS 'Executions represent action runs, supports nested workflows';
COMMENT ON COLUMN execution.action IS 'Action being executed (may be null if action deleted)';
COMMENT ON COLUMN execution.action_ref IS 'Action reference (preserved even if action deleted)';
COMMENT ON COLUMN execution.config IS 'Snapshot of action configuration at execution time';
COMMENT ON COLUMN execution.env_vars IS 'Environment variables for this execution as key-value pairs (string -> string). These are set in the execution environment and are separate from action parameters. Used for execution context, configuration, and non-sensitive metadata.';
COMMENT ON COLUMN execution.parent IS 'Parent execution ID for workflow hierarchies';
COMMENT ON COLUMN execution.enforcement IS 'Enforcement that triggered this execution (if rule-driven)';
COMMENT ON COLUMN execution.executor IS 'Identity that initiated the execution';
COMMENT ON COLUMN execution.status IS 'Current execution lifecycle status';
COMMENT ON COLUMN execution.result IS 'Execution output/results';

-- ============================================================================

-- ============================================================================
-- INQUIRY TABLE
-- ============================================================================

CREATE TABLE inquiry (
    id BIGSERIAL PRIMARY KEY,
    execution BIGINT NOT NULL REFERENCES execution(id) ON DELETE CASCADE,
    prompt TEXT NOT NULL,
    response_schema JSONB,
    assigned_to BIGINT REFERENCES identity(id) ON DELETE SET NULL,
    status inquiry_status_enum NOT NULL DEFAULT 'pending',
    response JSONB,
    timeout_at TIMESTAMPTZ,
    responded_at TIMESTAMPTZ,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_inquiry_execution ON inquiry(execution);
CREATE INDEX idx_inquiry_assigned_to ON inquiry(assigned_to);
CREATE INDEX idx_inquiry_status ON inquiry(status);
CREATE INDEX idx_inquiry_timeout_at ON inquiry(timeout_at) WHERE timeout_at IS NOT NULL;
CREATE INDEX idx_inquiry_created ON inquiry(created DESC);
CREATE INDEX idx_inquiry_status_created ON inquiry(status, created DESC);
CREATE INDEX idx_inquiry_assigned_status ON inquiry(assigned_to, status);
CREATE INDEX idx_inquiry_execution_status ON inquiry(execution, status);
CREATE INDEX idx_inquiry_response_gin ON inquiry USING GIN (response);

-- Trigger
CREATE TRIGGER update_inquiry_updated
    BEFORE UPDATE ON inquiry
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE inquiry IS 'Inquiries enable human-in-the-loop workflows with async user interactions';
COMMENT ON COLUMN inquiry.execution IS 'Execution that is waiting on this inquiry';
COMMENT ON COLUMN inquiry.prompt IS 'Question or prompt text for the user';
COMMENT ON COLUMN inquiry.response_schema IS 'JSON schema defining expected response format';
COMMENT ON COLUMN inquiry.assigned_to IS 'Identity who should respond to this inquiry';
COMMENT ON COLUMN inquiry.status IS 'Current inquiry lifecycle status';
COMMENT ON COLUMN inquiry.response IS 'User response data';
COMMENT ON COLUMN inquiry.timeout_at IS 'When this inquiry expires';
COMMENT ON COLUMN inquiry.responded_at IS 'When the response was received';

-- ============================================================================

-- ============================================================================
-- RULE TABLE
-- ============================================================================

CREATE TABLE rule (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL UNIQUE,
    pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
    pack_ref TEXT NOT NULL,
    label TEXT NOT NULL,
    description TEXT NOT NULL,
    action BIGINT REFERENCES action(id) ON DELETE SET NULL,
    action_ref TEXT NOT NULL,
    trigger BIGINT REFERENCES trigger(id) ON DELETE SET NULL,
    trigger_ref TEXT NOT NULL,
    conditions JSONB NOT NULL DEFAULT '[]'::jsonb,
    action_params JSONB DEFAULT '{}'::jsonb,
    trigger_params JSONB DEFAULT '{}'::jsonb,
    enabled BOOLEAN NOT NULL,
    is_adhoc BOOLEAN NOT NULL DEFAULT FALSE,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT rule_ref_lowercase CHECK (ref = LOWER(ref)),
    CONSTRAINT rule_ref_format CHECK (ref ~ '^[^.]+\.[^.]+$')
);

-- Indexes
CREATE INDEX idx_rule_ref ON rule(ref);
CREATE INDEX idx_rule_pack ON rule(pack);
CREATE INDEX idx_rule_action ON rule(action);
CREATE INDEX idx_rule_trigger ON rule(trigger);
CREATE INDEX idx_rule_enabled ON rule(enabled) WHERE enabled = TRUE;
CREATE INDEX idx_rule_is_adhoc ON rule(is_adhoc) WHERE is_adhoc = true;
CREATE INDEX idx_rule_created ON rule(created DESC);
CREATE INDEX idx_rule_trigger_enabled ON rule(trigger, enabled);
CREATE INDEX idx_rule_action_enabled ON rule(action, enabled);
CREATE INDEX idx_rule_pack_enabled ON rule(pack, enabled);
CREATE INDEX idx_rule_action_params_gin ON rule USING GIN (action_params);
CREATE INDEX idx_rule_trigger_params_gin ON rule USING GIN (trigger_params);

-- Trigger
CREATE TRIGGER update_rule_updated
    BEFORE UPDATE ON rule
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE rule IS 'Rules link triggers to actions with conditions';
COMMENT ON COLUMN rule.ref IS 'Unique rule reference (format: pack.name)';
COMMENT ON COLUMN rule.label IS 'Human-readable rule name';
COMMENT ON COLUMN rule.action IS 'Action to execute when rule triggers (null if action deleted)';
COMMENT ON COLUMN rule.trigger IS 'Trigger that activates this rule (null if trigger deleted)';
COMMENT ON COLUMN rule.conditions IS 'Condition expressions to evaluate before executing action';
COMMENT ON COLUMN rule.action_params IS 'Parameter overrides for the action';
COMMENT ON COLUMN rule.trigger_params IS 'Parameter overrides for the trigger';
COMMENT ON COLUMN rule.enabled IS 'Whether this rule is active';
COMMENT ON COLUMN rule.is_adhoc IS 'True if rule was manually created (ad-hoc), false if installed from pack';

-- ============================================================================

-- Add foreign key constraints now that rule table exists
ALTER TABLE enforcement
    ADD CONSTRAINT enforcement_rule_fkey
    FOREIGN KEY (rule) REFERENCES rule(id) ON DELETE SET NULL;

ALTER TABLE event
    ADD CONSTRAINT event_rule_fkey
    FOREIGN KEY (rule) REFERENCES rule(id) ON DELETE SET NULL;

-- ============================================================================
@@ -1,6 +1,7 @@
 -- Migration: Workflow System
--- Description: Creates workflow_definition and workflow_execution tables (workflow_task_execution consolidated into execution.workflow_task JSONB)
--- Version: 20250101000007
+-- Description: Creates workflow_definition and workflow_execution tables
+-- (workflow_task_execution consolidated into execution.workflow_task JSONB)
+-- Version: 20250101000006

 -- ============================================================================
 -- WORKFLOW DEFINITION TABLE
775
migrations/20250101000007_supporting_systems.sql
Normal file
@@ -0,0 +1,775 @@
-- Migration: Supporting Systems
-- Description: Creates keys, artifacts, queue_stats, pack_environment, pack_testing,
-- and webhook function tables.
-- Consolidates former migrations: 000009 (keys_artifacts), 000010 (webhook_system),
-- 000011 (pack_environments), and 000012 (pack_testing).
-- Version: 20250101000007

-- ============================================================================
-- KEY TABLE
-- ============================================================================

CREATE TABLE key (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL UNIQUE,
    owner_type owner_type_enum NOT NULL,
    owner TEXT,
    owner_identity BIGINT REFERENCES identity(id),
    owner_pack BIGINT REFERENCES pack(id),
    owner_pack_ref TEXT,
    owner_action BIGINT, -- Forward reference to action table
    owner_action_ref TEXT,
    owner_sensor BIGINT, -- Forward reference to sensor table
    owner_sensor_ref TEXT,
    name TEXT NOT NULL,
    encrypted BOOLEAN NOT NULL,
    encryption_key_hash TEXT,
    value TEXT NOT NULL,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT key_ref_lowercase CHECK (ref = LOWER(ref)),
    CONSTRAINT key_ref_format CHECK (ref ~ '^[^.]+(\.[^.]+)*$')
);

-- Unique index on owner_type, owner, name
CREATE UNIQUE INDEX idx_key_unique ON key(owner_type, owner, name);

-- Indexes
CREATE INDEX idx_key_ref ON key(ref);
CREATE INDEX idx_key_owner_type ON key(owner_type);
CREATE INDEX idx_key_owner_identity ON key(owner_identity);
CREATE INDEX idx_key_owner_pack ON key(owner_pack);
CREATE INDEX idx_key_owner_action ON key(owner_action);
CREATE INDEX idx_key_owner_sensor ON key(owner_sensor);
CREATE INDEX idx_key_created ON key(created DESC);
CREATE INDEX idx_key_owner_type_owner ON key(owner_type, owner);
CREATE INDEX idx_key_owner_identity_name ON key(owner_identity, name);
CREATE INDEX idx_key_owner_pack_name ON key(owner_pack, name);

-- Function to validate and set owner fields
CREATE OR REPLACE FUNCTION validate_key_owner()
RETURNS TRIGGER AS $$
DECLARE
    owner_count INTEGER := 0;
BEGIN
    -- Count how many owner fields are set
    IF NEW.owner_identity IS NOT NULL THEN owner_count := owner_count + 1; END IF;
    IF NEW.owner_pack IS NOT NULL THEN owner_count := owner_count + 1; END IF;
    IF NEW.owner_action IS NOT NULL THEN owner_count := owner_count + 1; END IF;
    IF NEW.owner_sensor IS NOT NULL THEN owner_count := owner_count + 1; END IF;

    -- System owner should have no owner fields set
    IF NEW.owner_type = 'system' THEN
        IF owner_count > 0 THEN
            RAISE EXCEPTION 'System owner cannot have specific owner fields set';
        END IF;
        NEW.owner := 'system';
    -- All other types must have exactly one owner field set
    ELSIF owner_count != 1 THEN
        RAISE EXCEPTION 'Exactly one owner field must be set for owner_type %', NEW.owner_type;
    -- Validate owner_type matches the populated field and set owner
    ELSIF NEW.owner_type = 'identity' THEN
        IF NEW.owner_identity IS NULL THEN
            RAISE EXCEPTION 'owner_identity must be set for owner_type identity';
        END IF;
        NEW.owner := NEW.owner_identity::TEXT;
    ELSIF NEW.owner_type = 'pack' THEN
        IF NEW.owner_pack IS NULL THEN
            RAISE EXCEPTION 'owner_pack must be set for owner_type pack';
        END IF;
        NEW.owner := NEW.owner_pack::TEXT;
    ELSIF NEW.owner_type = 'action' THEN
        IF NEW.owner_action IS NULL THEN
            RAISE EXCEPTION 'owner_action must be set for owner_type action';
        END IF;
        NEW.owner := NEW.owner_action::TEXT;
    ELSIF NEW.owner_type = 'sensor' THEN
        IF NEW.owner_sensor IS NULL THEN
            RAISE EXCEPTION 'owner_sensor must be set for owner_type sensor';
        END IF;
        NEW.owner := NEW.owner_sensor::TEXT;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Trigger to validate owner fields
CREATE TRIGGER validate_key_owner_trigger
    BEFORE INSERT OR UPDATE ON key
    FOR EACH ROW
|
||||
EXECUTE FUNCTION validate_key_owner();
|
||||
|
||||
-- Trigger for updated timestamp
|
||||
CREATE TRIGGER update_key_updated
|
||||
BEFORE UPDATE ON key
|
||||
FOR EACH ROW
|
||||
EXECUTE FUNCTION update_updated_column();
|
||||
|
||||
-- Comments
|
||||
COMMENT ON TABLE key IS 'Keys store configuration values and secrets with ownership scoping';
|
||||
COMMENT ON COLUMN key.ref IS 'Unique key reference (format: [owner.]name)';
|
||||
COMMENT ON COLUMN key.owner_type IS 'Type of owner (system, identity, pack, action, sensor)';
|
||||
COMMENT ON COLUMN key.owner IS 'Owner identifier (auto-populated by trigger)';
|
||||
COMMENT ON COLUMN key.owner_identity IS 'Identity owner (if owner_type=identity)';
|
||||
COMMENT ON COLUMN key.owner_pack IS 'Pack owner (if owner_type=pack)';
|
||||
COMMENT ON COLUMN key.owner_pack_ref IS 'Pack reference for owner_pack';
|
||||
COMMENT ON COLUMN key.owner_action IS 'Action owner (if owner_type=action)';
|
||||
COMMENT ON COLUMN key.owner_sensor IS 'Sensor owner (if owner_type=sensor)';
|
||||
COMMENT ON COLUMN key.name IS 'Key name within owner scope';
|
||||
COMMENT ON COLUMN key.encrypted IS 'Whether the value is encrypted';
|
||||
COMMENT ON COLUMN key.encryption_key_hash IS 'Hash of encryption key used';
|
||||
COMMENT ON COLUMN key.value IS 'The actual value (encrypted if encrypted=true)';
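-- Example (illustrative; the ids and values below are hypothetical): the
-- BEFORE INSERT trigger validates that exactly one owner field matches
-- owner_type and auto-populates key.owner:
--
--   INSERT INTO key (ref, owner_type, owner_pack, owner_pack_ref, name, encrypted, value)
--   VALUES ('mypack.api_token', 'pack', 42, 'mypack', 'api_token', FALSE, 'token-value');
--   -- validate_key_owner() sets owner = '42' before the row is stored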


-- Add foreign key constraints for action and sensor references
ALTER TABLE key
    ADD CONSTRAINT key_owner_action_fkey
    FOREIGN KEY (owner_action) REFERENCES action(id) ON DELETE CASCADE;

ALTER TABLE key
    ADD CONSTRAINT key_owner_sensor_fkey
    FOREIGN KEY (owner_sensor) REFERENCES sensor(id) ON DELETE CASCADE;

-- ============================================================================
-- ARTIFACT TABLE
-- ============================================================================

CREATE TABLE artifact (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL,
    scope owner_type_enum NOT NULL DEFAULT 'system',
    owner TEXT NOT NULL DEFAULT '',
    type artifact_type_enum NOT NULL,
    retention_policy artifact_retention_enum NOT NULL DEFAULT 'versions',
    retention_limit INTEGER NOT NULL DEFAULT 1,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_artifact_ref ON artifact(ref);
CREATE INDEX idx_artifact_scope ON artifact(scope);
CREATE INDEX idx_artifact_owner ON artifact(owner);
CREATE INDEX idx_artifact_type ON artifact(type);
CREATE INDEX idx_artifact_created ON artifact(created DESC);
CREATE INDEX idx_artifact_scope_owner ON artifact(scope, owner);
CREATE INDEX idx_artifact_type_created ON artifact(type, created DESC);

-- Trigger
CREATE TRIGGER update_artifact_updated
    BEFORE UPDATE ON artifact
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE artifact IS 'Artifacts track files, logs, and outputs from executions';
COMMENT ON COLUMN artifact.ref IS 'Artifact reference/path';
COMMENT ON COLUMN artifact.scope IS 'Owner type (system, identity, pack, action, sensor)';
COMMENT ON COLUMN artifact.owner IS 'Owner identifier';
COMMENT ON COLUMN artifact.type IS 'Artifact type (file, url, progress, etc.)';
COMMENT ON COLUMN artifact.retention_policy IS 'How to retain artifacts (versions, days, hours, minutes)';
COMMENT ON COLUMN artifact.retention_limit IS 'Numeric limit for retention policy';

-- ============================================================================
-- QUEUE_STATS TABLE
-- ============================================================================

CREATE TABLE queue_stats (
    action_id BIGINT PRIMARY KEY REFERENCES action(id) ON DELETE CASCADE,
    queue_length INTEGER NOT NULL DEFAULT 0,
    active_count INTEGER NOT NULL DEFAULT 0,
    max_concurrent INTEGER NOT NULL DEFAULT 1,
    oldest_enqueued_at TIMESTAMPTZ,
    total_enqueued BIGINT NOT NULL DEFAULT 0,
    total_completed BIGINT NOT NULL DEFAULT 0,
    last_updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_queue_stats_last_updated ON queue_stats(last_updated);

-- Comments
COMMENT ON TABLE queue_stats IS 'Real-time queue statistics for action execution ordering';
COMMENT ON COLUMN queue_stats.action_id IS 'Foreign key to action table';
COMMENT ON COLUMN queue_stats.queue_length IS 'Number of executions waiting in queue';
COMMENT ON COLUMN queue_stats.active_count IS 'Number of currently running executions';
COMMENT ON COLUMN queue_stats.max_concurrent IS 'Maximum concurrent executions allowed';
COMMENT ON COLUMN queue_stats.oldest_enqueued_at IS 'Timestamp of oldest queued execution (NULL if queue empty)';
COMMENT ON COLUMN queue_stats.total_enqueued IS 'Total executions enqueued since queue creation';
COMMENT ON COLUMN queue_stats.total_completed IS 'Total executions completed since queue creation';
COMMENT ON COLUMN queue_stats.last_updated IS 'Timestamp of last statistics update';

-- ============================================================================
-- PACK ENVIRONMENT TABLE
-- ============================================================================

CREATE TABLE IF NOT EXISTS pack_environment (
    id BIGSERIAL PRIMARY KEY,
    pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
    pack_ref TEXT NOT NULL,
    runtime BIGINT NOT NULL REFERENCES runtime(id) ON DELETE CASCADE,
    runtime_ref TEXT NOT NULL,
    env_path TEXT NOT NULL,
    status pack_environment_status_enum NOT NULL DEFAULT 'pending',
    installed_at TIMESTAMPTZ,
    last_verified TIMESTAMPTZ,
    install_log TEXT,
    install_error TEXT,
    metadata JSONB DEFAULT '{}'::jsonb,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UNIQUE(pack, runtime)
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack ON pack_environment(pack);
CREATE INDEX IF NOT EXISTS idx_pack_environment_runtime ON pack_environment(runtime);
CREATE INDEX IF NOT EXISTS idx_pack_environment_status ON pack_environment(status);
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack_ref ON pack_environment(pack_ref);
CREATE INDEX IF NOT EXISTS idx_pack_environment_runtime_ref ON pack_environment(runtime_ref);
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack_runtime ON pack_environment(pack, runtime);

-- Trigger for updated timestamp
CREATE TRIGGER update_pack_environment_updated
    BEFORE UPDATE ON pack_environment
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE pack_environment IS 'Tracks pack-specific runtime environments for dependency isolation';
COMMENT ON COLUMN pack_environment.pack IS 'Pack that owns this environment';
COMMENT ON COLUMN pack_environment.pack_ref IS 'Pack reference for quick lookup';
COMMENT ON COLUMN pack_environment.runtime IS 'Runtime used for this environment';
COMMENT ON COLUMN pack_environment.runtime_ref IS 'Runtime reference for quick lookup';
COMMENT ON COLUMN pack_environment.env_path IS 'Filesystem path to the environment directory (e.g., /opt/attune/packenvs/mypack/python)';
COMMENT ON COLUMN pack_environment.status IS 'Current installation status';
COMMENT ON COLUMN pack_environment.installed_at IS 'When the environment was successfully installed';
COMMENT ON COLUMN pack_environment.last_verified IS 'Last time the environment was verified as working';
COMMENT ON COLUMN pack_environment.install_log IS 'Installation output logs';
COMMENT ON COLUMN pack_environment.install_error IS 'Error message if installation failed';
COMMENT ON COLUMN pack_environment.metadata IS 'Additional metadata (installed packages, versions, etc.)';

-- ============================================================================
-- PACK ENVIRONMENT: Update existing runtimes with installer metadata
-- ============================================================================

-- Python runtime installers
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(
        jsonb_build_object(
            'name', 'create_venv',
            'description', 'Create Python virtual environment',
            'command', 'python3',
            'args', jsonb_build_array('-m', 'venv', '{env_path}'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(),
            'order', 1,
            'optional', false
        ),
        jsonb_build_object(
            'name', 'upgrade_pip',
            'description', 'Upgrade pip to latest version',
            'command', '{env_path}/bin/pip',
            'args', jsonb_build_array('install', '--upgrade', 'pip'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(),
            'order', 2,
            'optional', true
        ),
        jsonb_build_object(
            'name', 'install_requirements',
            'description', 'Install pack Python dependencies',
            'command', '{env_path}/bin/pip',
            'args', jsonb_build_array('install', '-r', '{pack_path}/requirements.txt'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(),
            'order', 3,
            'optional', false,
            'condition', jsonb_build_object(
                'file_exists', '{pack_path}/requirements.txt'
            )
        )
    ),
    'executable_templates', jsonb_build_object(
        'python', '{env_path}/bin/python',
        'pip', '{env_path}/bin/pip'
    )
)
WHERE ref = 'core.python';

-- Node.js runtime installers
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(
        jsonb_build_object(
            'name', 'npm_install',
            'description', 'Install Node.js dependencies',
            'command', 'npm',
            'args', jsonb_build_array('install', '--prefix', '{env_path}'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(
                'NODE_PATH', '{env_path}/node_modules'
            ),
            'order', 1,
            'optional', false,
            'condition', jsonb_build_object(
                'file_exists', '{pack_path}/package.json'
            )
        )
    ),
    'executable_templates', jsonb_build_object(
        'node', 'node',
        'npm', 'npm'
    ),
    'env_vars', jsonb_build_object(
        'NODE_PATH', '{env_path}/node_modules'
    )
)
WHERE ref = 'core.nodejs';

-- Shell runtime (no environment needed, uses system shell)
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(),
    'executable_templates', jsonb_build_object(
        'sh', 'sh',
        'bash', 'bash'
    ),
    'requires_environment', false
)
WHERE ref = 'core.shell';

-- Native runtime (no environment needed, binaries are standalone)
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(),
    'executable_templates', jsonb_build_object(),
    'requires_environment', false
)
WHERE ref = 'core.native';

-- Built-in sensor runtime (internal, no environment)
UPDATE runtime
SET installers = jsonb_build_object(
    'installers', jsonb_build_array(),
    'requires_environment', false
)
WHERE ref = 'core.sensor.builtin';

-- ============================================================================
-- PACK ENVIRONMENT: Helper functions
-- ============================================================================

-- Function to get environment path for a pack/runtime combination
CREATE OR REPLACE FUNCTION get_pack_environment_path(p_pack_ref TEXT, p_runtime_ref TEXT)
RETURNS TEXT AS $$
DECLARE
    v_runtime_name TEXT;
    v_base_template TEXT;
    v_result TEXT;
BEGIN
    -- Get runtime name and base path template
    SELECT
        LOWER(name),
        installers->>'base_path_template'
    INTO v_runtime_name, v_base_template
    FROM runtime
    WHERE ref = p_runtime_ref;

    IF v_base_template IS NULL THEN
        v_base_template := '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}';
    END IF;

    -- Replace template variables
    v_result := v_base_template;
    v_result := REPLACE(v_result, '{pack_ref}', p_pack_ref);
    v_result := REPLACE(v_result, '{runtime_ref}', p_runtime_ref);
    v_result := REPLACE(v_result, '{runtime_name_lower}', v_runtime_name);

    RETURN v_result;
END;
-- STABLE (not IMMUTABLE): the function reads the runtime table, so its result
-- can change between statements
$$ LANGUAGE plpgsql STABLE;

COMMENT ON FUNCTION get_pack_environment_path IS 'Calculate the filesystem path for a pack runtime environment';
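-- Example (illustrative; assumes a pack 'mypack' and that the core.python
-- runtime's name lowercases to 'python'):
--
--   SELECT get_pack_environment_path('mypack', 'core.python');
--   -- → '/opt/attune/packenvs/mypack/python'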

-- Function to check if a runtime requires an environment
CREATE OR REPLACE FUNCTION runtime_requires_environment(p_runtime_ref TEXT)
RETURNS BOOLEAN AS $$
DECLARE
    v_requires BOOLEAN;
BEGIN
    SELECT COALESCE((installers->>'requires_environment')::boolean, true)
    INTO v_requires
    FROM runtime
    WHERE ref = p_runtime_ref;

    RETURN COALESCE(v_requires, false);
END;
$$ LANGUAGE plpgsql STABLE;

COMMENT ON FUNCTION runtime_requires_environment IS 'Check if a runtime needs a pack-specific environment';
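-- Example (illustrative): runtimes that set requires_environment = false in
-- their installers JSONB (shell, native, built-in sensor) return false;
-- runtimes without the key default to true, and unknown refs return false:
--
--   SELECT runtime_requires_environment('core.shell');   -- false
--   SELECT runtime_requires_environment('core.python');  -- true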

-- ============================================================================
-- PACK ENVIRONMENT: Status view
-- ============================================================================

CREATE OR REPLACE VIEW v_pack_environment_status AS
SELECT
    pe.id,
    pe.pack,
    p.ref AS pack_ref,
    p.label AS pack_name,
    pe.runtime,
    r.ref AS runtime_ref,
    r.name AS runtime_name,
    pe.env_path,
    pe.status,
    pe.installed_at,
    pe.last_verified,
    CASE
        WHEN pe.status = 'ready' AND pe.last_verified < NOW() - INTERVAL '7 days' THEN true
        ELSE false
    END AS needs_verification,
    CASE
        WHEN pe.status = 'ready' THEN 'healthy'
        WHEN pe.status = 'failed' THEN 'unhealthy'
        WHEN pe.status IN ('pending', 'installing') THEN 'provisioning'
        WHEN pe.status = 'outdated' THEN 'needs_update'
        ELSE 'unknown'
    END AS health_status,
    pe.install_error,
    pe.created,
    pe.updated
FROM pack_environment pe
JOIN pack p ON pe.pack = p.id
JOIN runtime r ON pe.runtime = r.id;

COMMENT ON VIEW v_pack_environment_status IS 'Consolidated view of pack environment status with health indicators';

-- ============================================================================
-- PACK TEST EXECUTION TABLE
-- ============================================================================

CREATE TABLE IF NOT EXISTS pack_test_execution (
    id BIGSERIAL PRIMARY KEY,
    pack_id BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
    pack_version VARCHAR(50) NOT NULL,
    execution_time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    trigger_reason VARCHAR(50) NOT NULL, -- 'install', 'update', 'manual', 'validation'
    total_tests INT NOT NULL,
    passed INT NOT NULL,
    failed INT NOT NULL,
    skipped INT NOT NULL,
    pass_rate DECIMAL(5,4) NOT NULL, -- 0.0000 to 1.0000
    duration_ms BIGINT NOT NULL,
    result JSONB NOT NULL, -- Full test result structure
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    CONSTRAINT valid_test_counts CHECK (total_tests >= 0 AND passed >= 0 AND failed >= 0 AND skipped >= 0),
    CONSTRAINT valid_pass_rate CHECK (pass_rate >= 0.0 AND pass_rate <= 1.0),
    CONSTRAINT valid_trigger_reason CHECK (trigger_reason IN ('install', 'update', 'manual', 'validation'))
);

-- Indexes for efficient queries
CREATE INDEX idx_pack_test_execution_pack_id ON pack_test_execution(pack_id);
CREATE INDEX idx_pack_test_execution_time ON pack_test_execution(execution_time DESC);
CREATE INDEX idx_pack_test_execution_pass_rate ON pack_test_execution(pass_rate);
CREATE INDEX idx_pack_test_execution_trigger ON pack_test_execution(trigger_reason);

-- Comments for documentation
COMMENT ON TABLE pack_test_execution IS 'Tracks pack test execution results for validation and auditing';
COMMENT ON COLUMN pack_test_execution.pack_id IS 'Reference to the pack being tested';
COMMENT ON COLUMN pack_test_execution.pack_version IS 'Version of the pack at test time';
COMMENT ON COLUMN pack_test_execution.trigger_reason IS 'What triggered the test: install, update, manual, validation';
COMMENT ON COLUMN pack_test_execution.pass_rate IS 'Fraction of tests passed (0.0 to 1.0)';
COMMENT ON COLUMN pack_test_execution.result IS 'Full JSON structure with detailed test results';

-- Pack test result summary view (all test executions with pack info)
CREATE OR REPLACE VIEW pack_test_summary AS
SELECT
    p.id AS pack_id,
    p.ref AS pack_ref,
    p.label AS pack_label,
    pte.id AS test_execution_id,
    pte.pack_version,
    pte.execution_time AS test_time,
    pte.trigger_reason,
    pte.total_tests,
    pte.passed,
    pte.failed,
    pte.skipped,
    pte.pass_rate,
    pte.duration_ms,
    ROW_NUMBER() OVER (PARTITION BY p.id ORDER BY pte.execution_time DESC) AS rn
FROM pack p
LEFT JOIN pack_test_execution pte ON p.id = pte.pack_id
WHERE pte.id IS NOT NULL;

COMMENT ON VIEW pack_test_summary IS 'Summary of all pack test executions with pack details';

-- Latest test results per pack view
CREATE OR REPLACE VIEW pack_latest_test AS
SELECT
    pack_id,
    pack_ref,
    pack_label,
    test_execution_id,
    pack_version,
    test_time,
    trigger_reason,
    total_tests,
    passed,
    failed,
    skipped,
    pass_rate,
    duration_ms
FROM pack_test_summary
WHERE rn = 1;

COMMENT ON VIEW pack_latest_test IS 'Latest test results for each pack';

-- Function to get pack test statistics
CREATE OR REPLACE FUNCTION get_pack_test_stats(p_pack_id BIGINT)
RETURNS TABLE (
    total_executions BIGINT,
    successful_executions BIGINT,
    failed_executions BIGINT,
    avg_pass_rate DECIMAL,
    avg_duration_ms BIGINT,
    last_test_time TIMESTAMPTZ,
    last_test_passed BOOLEAN
) AS $$
BEGIN
    RETURN QUERY
    SELECT
        COUNT(*)::BIGINT AS total_executions,
        COUNT(*) FILTER (WHERE passed = total_tests)::BIGINT AS successful_executions,
        COUNT(*) FILTER (WHERE failed > 0)::BIGINT AS failed_executions,
        AVG(pass_rate) AS avg_pass_rate,
        AVG(duration_ms)::BIGINT AS avg_duration_ms,
        MAX(execution_time) AS last_test_time,
        (SELECT failed = 0 FROM pack_test_execution
         WHERE pack_id = p_pack_id
         ORDER BY execution_time DESC
         LIMIT 1) AS last_test_passed
    FROM pack_test_execution
    WHERE pack_id = p_pack_id;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION get_pack_test_stats IS 'Get statistical summary of test executions for a pack';

-- Function to check if pack has recent passing tests
CREATE OR REPLACE FUNCTION pack_has_passing_tests(
    p_pack_id BIGINT,
    p_hours_ago INT DEFAULT 24
)
RETURNS BOOLEAN AS $$
DECLARE
    v_has_passing_tests BOOLEAN;
BEGIN
    SELECT EXISTS(
        SELECT 1
        FROM pack_test_execution
        WHERE pack_id = p_pack_id
          AND execution_time > NOW() - (p_hours_ago || ' hours')::INTERVAL
          AND failed = 0
          AND total_tests > 0
    ) INTO v_has_passing_tests;

    RETURN v_has_passing_tests;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION pack_has_passing_tests IS 'Check if pack has recent passing test executions';
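-- Example (illustrative; pack id 42 is hypothetical): gate an operation on a
-- recent green test run:
--
--   SELECT pack_has_passing_tests(42);       -- default window: last 24 hours
--   SELECT pack_has_passing_tests(42, 168);  -- widen to the last 7 days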

-- Add trigger to update pack metadata on test execution
CREATE OR REPLACE FUNCTION update_pack_test_metadata()
RETURNS TRIGGER AS $$
BEGIN
    -- Could update pack table with last_tested timestamp if we add that column
    -- For now, just a placeholder for future functionality
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_update_pack_test_metadata
    AFTER INSERT ON pack_test_execution
    FOR EACH ROW
    EXECUTE FUNCTION update_pack_test_metadata();

COMMENT ON TRIGGER trigger_update_pack_test_metadata ON pack_test_execution IS 'Updates pack metadata when tests are executed';

-- ============================================================================
-- WEBHOOK FUNCTIONS
-- ============================================================================

-- Drop existing functions to avoid signature conflicts
DROP FUNCTION IF EXISTS enable_trigger_webhook(BIGINT, JSONB);
DROP FUNCTION IF EXISTS enable_trigger_webhook(BIGINT);
DROP FUNCTION IF EXISTS disable_trigger_webhook(BIGINT);
DROP FUNCTION IF EXISTS regenerate_trigger_webhook_key(BIGINT);

-- Function to enable webhooks for a trigger
CREATE OR REPLACE FUNCTION enable_trigger_webhook(
    p_trigger_id BIGINT,
    p_config JSONB DEFAULT '{}'::jsonb
)
RETURNS TABLE(
    webhook_enabled BOOLEAN,
    webhook_key VARCHAR(255),
    webhook_url TEXT
) AS $$
DECLARE
    v_webhook_key VARCHAR(255);
    v_api_base_url TEXT := 'http://localhost:8080'; -- Default, should be configured
BEGIN
    -- Check if trigger exists
    IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
        RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
    END IF;

    -- Generate webhook key if one doesn't exist
    SELECT t.webhook_key INTO v_webhook_key
    FROM trigger t
    WHERE t.id = p_trigger_id;

    IF v_webhook_key IS NULL THEN
        v_webhook_key := generate_webhook_key();
    END IF;

    -- Update trigger to enable webhooks
    UPDATE trigger
    SET
        webhook_enabled = TRUE,
        webhook_key = v_webhook_key,
        webhook_config = p_config,
        updated = NOW()
    WHERE id = p_trigger_id;

    -- Return webhook details
    RETURN QUERY SELECT
        TRUE,
        v_webhook_key,
        v_api_base_url || '/api/v1/webhooks/' || v_webhook_key;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION enable_trigger_webhook(BIGINT, JSONB) IS
    'Enables webhooks for a trigger with optional configuration. Generates a new webhook key if one does not exist. Returns webhook details.';
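-- Example (illustrative; the trigger id and config payload are hypothetical):
--
--   SELECT * FROM enable_trigger_webhook(7, '{"note": "example"}'::jsonb);
--   -- returns webhook_enabled = TRUE, the webhook_key, and a URL of the form
--   -- http://localhost:8080/api/v1/webhooks/<webhook_key>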

-- Function to disable webhooks for a trigger
CREATE OR REPLACE FUNCTION disable_trigger_webhook(
    p_trigger_id BIGINT
)
RETURNS BOOLEAN AS $$
BEGIN
    -- Check if trigger exists
    IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
        RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
    END IF;

    -- Update trigger to disable webhooks
    -- Set webhook_key to NULL when disabling to remove it from API responses
    UPDATE trigger
    SET
        webhook_enabled = FALSE,
        webhook_key = NULL,
        updated = NOW()
    WHERE id = p_trigger_id;

    RETURN TRUE;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION disable_trigger_webhook(BIGINT) IS
    'Disables webhooks for a trigger. Webhook key is removed when disabled.';

-- Function to regenerate webhook key for a trigger
CREATE OR REPLACE FUNCTION regenerate_trigger_webhook_key(
    p_trigger_id BIGINT
)
RETURNS TABLE(
    webhook_key VARCHAR(255),
    previous_key_revoked BOOLEAN
) AS $$
DECLARE
    v_new_key VARCHAR(255);
    v_old_key VARCHAR(255);
    v_webhook_enabled BOOLEAN;
BEGIN
    -- Check if trigger exists
    IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
        RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
    END IF;

    -- Get current webhook state
    SELECT t.webhook_key, t.webhook_enabled INTO v_old_key, v_webhook_enabled
    FROM trigger t
    WHERE t.id = p_trigger_id;

    -- Check if webhooks are enabled
    IF NOT v_webhook_enabled THEN
        RAISE EXCEPTION 'Webhooks are not enabled for trigger %', p_trigger_id;
    END IF;

    -- Generate new key
    v_new_key := generate_webhook_key();

    -- Update trigger with new key
    UPDATE trigger
    SET
        webhook_key = v_new_key,
        updated = NOW()
    WHERE id = p_trigger_id;

    -- Return new key and whether old key was present
    RETURN QUERY SELECT
        v_new_key,
        (v_old_key IS NOT NULL);
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION regenerate_trigger_webhook_key(BIGINT) IS
    'Regenerates webhook key for a trigger. Returns new key and whether a previous key was revoked.';

-- Verify all webhook functions exist
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_proc p
        JOIN pg_namespace n ON p.pronamespace = n.oid
        WHERE n.nspname = current_schema()
          AND p.proname = 'enable_trigger_webhook'
    ) THEN
        RAISE EXCEPTION 'enable_trigger_webhook function not found after migration';
    END IF;

    IF NOT EXISTS (
        SELECT 1 FROM pg_proc p
        JOIN pg_namespace n ON p.pronamespace = n.oid
        WHERE n.nspname = current_schema()
          AND p.proname = 'disable_trigger_webhook'
    ) THEN
        RAISE EXCEPTION 'disable_trigger_webhook function not found after migration';
    END IF;

    IF NOT EXISTS (
        SELECT 1 FROM pg_proc p
        JOIN pg_namespace n ON p.pronamespace = n.oid
        WHERE n.nspname = current_schema()
          AND p.proname = 'regenerate_trigger_webhook_key'
    ) THEN
        RAISE EXCEPTION 'regenerate_trigger_webhook_key function not found after migration';
    END IF;

    RAISE NOTICE 'All webhook functions successfully created';
END $$;

@@ -1,6 +1,6 @@
 -- Migration: LISTEN/NOTIFY Triggers
 -- Description: Consolidated PostgreSQL LISTEN/NOTIFY triggers for real-time event notifications
--- Version: 20250101000013
+-- Version: 20250101000008

 -- ============================================================================
 -- EXECUTION CHANGE NOTIFICATION
@@ -1,75 +0,0 @@
|
||||
-- Migration: Supporting Tables and Indexes
|
||||
-- Description: Creates notification and artifact tables plus performance optimization indexes
-- Version: 20250101000005

-- ============================================================================
-- NOTIFICATION TABLE
-- ============================================================================

CREATE TABLE notification (
    id BIGSERIAL PRIMARY KEY,
    channel TEXT NOT NULL,
    entity_type TEXT NOT NULL,
    entity TEXT NOT NULL,
    activity TEXT NOT NULL,
    state notification_status_enum NOT NULL DEFAULT 'created',
    content JSONB,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_notification_channel ON notification(channel);
CREATE INDEX idx_notification_entity_type ON notification(entity_type);
CREATE INDEX idx_notification_entity ON notification(entity);
CREATE INDEX idx_notification_state ON notification(state);
CREATE INDEX idx_notification_created ON notification(created DESC);
CREATE INDEX idx_notification_channel_state ON notification(channel, state);
CREATE INDEX idx_notification_entity_type_entity ON notification(entity_type, entity);
CREATE INDEX idx_notification_state_created ON notification(state, created DESC);
CREATE INDEX idx_notification_content_gin ON notification USING GIN (content);

-- Trigger
CREATE TRIGGER update_notification_updated
    BEFORE UPDATE ON notification
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Function for pg_notify on notification insert
CREATE OR REPLACE FUNCTION notify_on_insert()
RETURNS TRIGGER AS $$
DECLARE
    payload TEXT;
BEGIN
    -- Build JSON payload with id, entity_type, entity, and activity
    payload := json_build_object(
        'id', NEW.id,
        'entity_type', NEW.entity_type,
        'entity', NEW.entity,
        'activity', NEW.activity
    )::text;

    -- Send notification to the specified channel
    PERFORM pg_notify(NEW.channel, payload);

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Trigger to send pg_notify on notification insert
CREATE TRIGGER notify_on_notification_insert
    AFTER INSERT ON notification
    FOR EACH ROW
    EXECUTE FUNCTION notify_on_insert();
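-- Usage sketch (illustrative only; the channel name and values below are
-- assumptions, not part of this migration): a listener session subscribes
-- with LISTEN on the channel stored in notification.channel, and each INSERT
-- then delivers the JSON payload built by notify_on_insert():
--
--     LISTEN execution;
--     INSERT INTO notification (channel, entity_type, entity, activity)
--     VALUES ('execution', 'execution', '42', 'created');
--     -- The listening session receives a payload of the form:
--     -- {"id": <new id>, "entity_type": "execution", "entity": "42", "activity": "created"}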
-- Comments
COMMENT ON TABLE notification IS 'System notifications about entity changes for real-time updates';
COMMENT ON COLUMN notification.channel IS 'Notification channel (typically table name)';
COMMENT ON COLUMN notification.entity_type IS 'Type of entity (table name)';
COMMENT ON COLUMN notification.entity IS 'Entity identifier (typically ID or ref)';
COMMENT ON COLUMN notification.activity IS 'Activity type (e.g., "created", "updated", "completed")';
COMMENT ON COLUMN notification.state IS 'Processing state of notification';
COMMENT ON COLUMN notification.content IS 'Optional notification payload data';

-- ============================================================================
@@ -1,200 +0,0 @@
-- Migration: Keys and Artifacts
-- Description: Creates key table for secrets management and artifact table for execution outputs
-- Version: 20250101000009

-- ============================================================================
-- KEY TABLE
-- ============================================================================

CREATE TABLE key (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL UNIQUE,
    owner_type owner_type_enum NOT NULL,
    owner TEXT,
    owner_identity BIGINT REFERENCES identity(id),
    owner_pack BIGINT REFERENCES pack(id),
    owner_pack_ref TEXT,
    owner_action BIGINT,  -- Forward reference to action table
    owner_action_ref TEXT,
    owner_sensor BIGINT,  -- Forward reference to sensor table
    owner_sensor_ref TEXT,
    name TEXT NOT NULL,
    encrypted BOOLEAN NOT NULL,
    encryption_key_hash TEXT,
    value TEXT NOT NULL,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT key_ref_lowercase CHECK (ref = LOWER(ref)),
    CONSTRAINT key_ref_format CHECK (ref ~ '^[^.]+(\.[^.]+)*$')
);

-- Unique index on owner_type, owner, name
CREATE UNIQUE INDEX idx_key_unique ON key(owner_type, owner, name);

-- Indexes
CREATE INDEX idx_key_ref ON key(ref);
CREATE INDEX idx_key_owner_type ON key(owner_type);
CREATE INDEX idx_key_owner_identity ON key(owner_identity);
CREATE INDEX idx_key_owner_pack ON key(owner_pack);
CREATE INDEX idx_key_owner_action ON key(owner_action);
CREATE INDEX idx_key_owner_sensor ON key(owner_sensor);
CREATE INDEX idx_key_created ON key(created DESC);
CREATE INDEX idx_key_owner_type_owner ON key(owner_type, owner);
CREATE INDEX idx_key_owner_identity_name ON key(owner_identity, name);
CREATE INDEX idx_key_owner_pack_name ON key(owner_pack, name);

-- Function to validate and set owner fields
CREATE OR REPLACE FUNCTION validate_key_owner()
RETURNS TRIGGER AS $$
DECLARE
    owner_count INTEGER := 0;
BEGIN
    -- Count how many owner fields are set
    IF NEW.owner_identity IS NOT NULL THEN owner_count := owner_count + 1; END IF;
    IF NEW.owner_pack IS NOT NULL THEN owner_count := owner_count + 1; END IF;
    IF NEW.owner_action IS NOT NULL THEN owner_count := owner_count + 1; END IF;
    IF NEW.owner_sensor IS NOT NULL THEN owner_count := owner_count + 1; END IF;

    -- System owner must have no owner fields set
    IF NEW.owner_type = 'system' THEN
        IF owner_count > 0 THEN
            RAISE EXCEPTION 'System owner cannot have specific owner fields set';
        END IF;
        NEW.owner := 'system';
    -- All other types must have exactly one owner field set
    ELSIF owner_count != 1 THEN
        RAISE EXCEPTION 'Exactly one owner field must be set for owner_type %', NEW.owner_type;
    -- Validate owner_type matches the populated field and set owner
    ELSIF NEW.owner_type = 'identity' THEN
        IF NEW.owner_identity IS NULL THEN
            RAISE EXCEPTION 'owner_identity must be set for owner_type identity';
        END IF;
        NEW.owner := NEW.owner_identity::TEXT;
    ELSIF NEW.owner_type = 'pack' THEN
        IF NEW.owner_pack IS NULL THEN
            RAISE EXCEPTION 'owner_pack must be set for owner_type pack';
        END IF;
        NEW.owner := NEW.owner_pack::TEXT;
    ELSIF NEW.owner_type = 'action' THEN
        IF NEW.owner_action IS NULL THEN
            RAISE EXCEPTION 'owner_action must be set for owner_type action';
        END IF;
        NEW.owner := NEW.owner_action::TEXT;
    ELSIF NEW.owner_type = 'sensor' THEN
        IF NEW.owner_sensor IS NULL THEN
            RAISE EXCEPTION 'owner_sensor must be set for owner_type sensor';
        END IF;
        NEW.owner := NEW.owner_sensor::TEXT;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Trigger to validate owner fields
CREATE TRIGGER validate_key_owner_trigger
    BEFORE INSERT OR UPDATE ON key
    FOR EACH ROW
    EXECUTE FUNCTION validate_key_owner();
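-- Example (illustrative; the ref and ids below are made up): validate_key_owner()
-- both enforces the exactly-one-owner rule and derives key.owner, so
--
--     INSERT INTO key (ref, owner_type, owner_pack, name, encrypted, value)
--     VALUES ('mypack.api_token', 'pack', 7, 'api_token', FALSE, 's3cr3t');
--
-- stores owner = '7', while setting two owner_* columns (or none at all for a
-- non-system owner_type) raises an exception before the row is written.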
-- Trigger for updated timestamp
CREATE TRIGGER update_key_updated
    BEFORE UPDATE ON key
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE key IS 'Keys store configuration values and secrets with ownership scoping';
COMMENT ON COLUMN key.ref IS 'Unique key reference (format: [owner.]name)';
COMMENT ON COLUMN key.owner_type IS 'Type of owner (system, identity, pack, action, sensor)';
COMMENT ON COLUMN key.owner IS 'Owner identifier (auto-populated by trigger)';
COMMENT ON COLUMN key.owner_identity IS 'Identity owner (if owner_type=identity)';
COMMENT ON COLUMN key.owner_pack IS 'Pack owner (if owner_type=pack)';
COMMENT ON COLUMN key.owner_pack_ref IS 'Pack reference for owner_pack';
COMMENT ON COLUMN key.owner_action IS 'Action owner (if owner_type=action)';
COMMENT ON COLUMN key.owner_sensor IS 'Sensor owner (if owner_type=sensor)';
COMMENT ON COLUMN key.name IS 'Key name within owner scope';
COMMENT ON COLUMN key.encrypted IS 'Whether the value is encrypted';
COMMENT ON COLUMN key.encryption_key_hash IS 'Hash of encryption key used';
COMMENT ON COLUMN key.value IS 'The actual value (encrypted if encrypted=true)';

-- Add foreign key constraints for action and sensor references
ALTER TABLE key
    ADD CONSTRAINT key_owner_action_fkey
    FOREIGN KEY (owner_action) REFERENCES action(id) ON DELETE CASCADE;

ALTER TABLE key
    ADD CONSTRAINT key_owner_sensor_fkey
    FOREIGN KEY (owner_sensor) REFERENCES sensor(id) ON DELETE CASCADE;

-- ============================================================================
-- ARTIFACT TABLE
-- ============================================================================

CREATE TABLE artifact (
    id BIGSERIAL PRIMARY KEY,
    ref TEXT NOT NULL,
    scope owner_type_enum NOT NULL DEFAULT 'system',
    owner TEXT NOT NULL DEFAULT '',
    type artifact_type_enum NOT NULL,
    retention_policy artifact_retention_enum NOT NULL DEFAULT 'versions',
    retention_limit INTEGER NOT NULL DEFAULT 1,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_artifact_ref ON artifact(ref);
CREATE INDEX idx_artifact_scope ON artifact(scope);
CREATE INDEX idx_artifact_owner ON artifact(owner);
CREATE INDEX idx_artifact_type ON artifact(type);
CREATE INDEX idx_artifact_created ON artifact(created DESC);
CREATE INDEX idx_artifact_scope_owner ON artifact(scope, owner);
CREATE INDEX idx_artifact_type_created ON artifact(type, created DESC);

-- Trigger
CREATE TRIGGER update_artifact_updated
    BEFORE UPDATE ON artifact
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE artifact IS 'Artifacts track files, logs, and outputs from executions';
COMMENT ON COLUMN artifact.ref IS 'Artifact reference/path';
COMMENT ON COLUMN artifact.scope IS 'Owner type (system, identity, pack, action, sensor)';
COMMENT ON COLUMN artifact.owner IS 'Owner identifier';
COMMENT ON COLUMN artifact.type IS 'Artifact type (file, url, progress, etc.)';
COMMENT ON COLUMN artifact.retention_policy IS 'How to retain artifacts (versions, days, hours, minutes)';
COMMENT ON COLUMN artifact.retention_limit IS 'Numeric limit for retention policy';

-- ============================================================================
-- QUEUE_STATS TABLE
-- ============================================================================

CREATE TABLE queue_stats (
    action_id BIGINT PRIMARY KEY REFERENCES action(id) ON DELETE CASCADE,
    queue_length INTEGER NOT NULL DEFAULT 0,
    active_count INTEGER NOT NULL DEFAULT 0,
    max_concurrent INTEGER NOT NULL DEFAULT 1,
    oldest_enqueued_at TIMESTAMPTZ,
    total_enqueued BIGINT NOT NULL DEFAULT 0,
    total_completed BIGINT NOT NULL DEFAULT 0,
    last_updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_queue_stats_last_updated ON queue_stats(last_updated);

-- Comments
COMMENT ON TABLE queue_stats IS 'Real-time queue statistics for action execution ordering';
COMMENT ON COLUMN queue_stats.action_id IS 'Foreign key to action table';
COMMENT ON COLUMN queue_stats.queue_length IS 'Number of executions waiting in queue';
COMMENT ON COLUMN queue_stats.active_count IS 'Number of currently running executions';
COMMENT ON COLUMN queue_stats.max_concurrent IS 'Maximum concurrent executions allowed';
COMMENT ON COLUMN queue_stats.oldest_enqueued_at IS 'Timestamp of oldest queued execution (NULL if queue empty)';
COMMENT ON COLUMN queue_stats.total_enqueued IS 'Total executions enqueued since queue creation';
COMMENT ON COLUMN queue_stats.total_completed IS 'Total executions completed since queue creation';
COMMENT ON COLUMN queue_stats.last_updated IS 'Timestamp of last statistics update';
666  migrations/20250101000009_timescaledb_history.sql  Normal file
@@ -0,0 +1,666 @@
-- Migration: TimescaleDB Entity History and Analytics
-- Description: Creates append-only history hypertables for execution, worker, enforcement,
--              and event tables. Uses JSONB diff format to track field-level changes via
--              PostgreSQL triggers. Includes continuous aggregates for dashboard analytics.
--              Consolidates former migrations: 20260226100000 (entity_history_timescaledb),
--              20260226200000 (continuous_aggregates), and 20260226300000 (fix + result digest).
--              See docs/plans/timescaledb-entity-history.md for full design.
-- Version: 20250101000009

-- ============================================================================
-- EXTENSION
-- ============================================================================

CREATE EXTENSION IF NOT EXISTS timescaledb;

-- ============================================================================
-- HELPER FUNCTIONS
-- ============================================================================

-- Returns a small {digest, size, type} object instead of the full JSONB value.
-- Used in history triggers for columns that can be arbitrarily large (e.g. result).
-- The full value is always available on the live row.
CREATE OR REPLACE FUNCTION _jsonb_digest_summary(val JSONB)
RETURNS JSONB AS $$
BEGIN
    IF val IS NULL THEN
        RETURN NULL;
    END IF;
    RETURN jsonb_build_object(
        'digest', 'md5:' || md5(val::text),
        'size', octet_length(val::text),
        'type', jsonb_typeof(val)
    );
END;
$$ LANGUAGE plpgsql IMMUTABLE;

COMMENT ON FUNCTION _jsonb_digest_summary(JSONB) IS
    'Returns a compact {digest, size, type} summary of a JSONB value for use in history tables. '
    'The digest is md5 of the text representation; sufficient for change detection, not for security.';
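-- Example (illustrative): the summary keeps history rows small regardless of
-- payload size. A call such as
--
--     SELECT _jsonb_digest_summary('{"stdout": "ok"}'::jsonb);
--
-- returns a JSONB object containing a 'md5:...' digest, the byte size of the
-- text representation, and the JSON type ('object' here) instead of the payload.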
-- ============================================================================
-- HISTORY TABLES
-- ============================================================================

-- ----------------------------------------------------------------------------
-- execution_history
-- ----------------------------------------------------------------------------

CREATE TABLE execution_history (
    time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    operation TEXT NOT NULL,
    entity_id BIGINT NOT NULL,
    entity_ref TEXT,
    changed_fields TEXT[] NOT NULL DEFAULT '{}',
    old_values JSONB,
    new_values JSONB
);

SELECT create_hypertable('execution_history', 'time',
    chunk_time_interval => INTERVAL '1 day');

CREATE INDEX idx_execution_history_entity
    ON execution_history (entity_id, time DESC);

CREATE INDEX idx_execution_history_entity_ref
    ON execution_history (entity_ref, time DESC);

CREATE INDEX idx_execution_history_status_changes
    ON execution_history (time DESC)
    WHERE 'status' = ANY(changed_fields);

CREATE INDEX idx_execution_history_changed_fields
    ON execution_history USING GIN (changed_fields);

COMMENT ON TABLE execution_history IS 'Append-only history of field-level changes to the execution table (TimescaleDB hypertable)';
COMMENT ON COLUMN execution_history.time IS 'When the change occurred (hypertable partitioning dimension)';
COMMENT ON COLUMN execution_history.operation IS 'INSERT, UPDATE, or DELETE';
COMMENT ON COLUMN execution_history.entity_id IS 'execution.id of the changed row';
COMMENT ON COLUMN execution_history.entity_ref IS 'Denormalized action_ref for JOIN-free queries';
COMMENT ON COLUMN execution_history.changed_fields IS 'Array of field names that changed (empty for INSERT/DELETE)';
COMMENT ON COLUMN execution_history.old_values IS 'Previous values of changed fields (NULL for INSERT)';
COMMENT ON COLUMN execution_history.new_values IS 'New values of changed fields (NULL for DELETE)';

-- ----------------------------------------------------------------------------
-- worker_history
-- ----------------------------------------------------------------------------

CREATE TABLE worker_history (
    time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    operation TEXT NOT NULL,
    entity_id BIGINT NOT NULL,
    entity_ref TEXT,
    changed_fields TEXT[] NOT NULL DEFAULT '{}',
    old_values JSONB,
    new_values JSONB
);

SELECT create_hypertable('worker_history', 'time',
    chunk_time_interval => INTERVAL '7 days');

CREATE INDEX idx_worker_history_entity
    ON worker_history (entity_id, time DESC);

CREATE INDEX idx_worker_history_entity_ref
    ON worker_history (entity_ref, time DESC);

CREATE INDEX idx_worker_history_status_changes
    ON worker_history (time DESC)
    WHERE 'status' = ANY(changed_fields);

CREATE INDEX idx_worker_history_changed_fields
    ON worker_history USING GIN (changed_fields);

COMMENT ON TABLE worker_history IS 'Append-only history of field-level changes to the worker table (TimescaleDB hypertable)';
COMMENT ON COLUMN worker_history.entity_ref IS 'Denormalized worker name for JOIN-free queries';

-- ----------------------------------------------------------------------------
-- enforcement_history
-- ----------------------------------------------------------------------------

CREATE TABLE enforcement_history (
    time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    operation TEXT NOT NULL,
    entity_id BIGINT NOT NULL,
    entity_ref TEXT,
    changed_fields TEXT[] NOT NULL DEFAULT '{}',
    old_values JSONB,
    new_values JSONB
);

SELECT create_hypertable('enforcement_history', 'time',
    chunk_time_interval => INTERVAL '1 day');

CREATE INDEX idx_enforcement_history_entity
    ON enforcement_history (entity_id, time DESC);

CREATE INDEX idx_enforcement_history_entity_ref
    ON enforcement_history (entity_ref, time DESC);

CREATE INDEX idx_enforcement_history_status_changes
    ON enforcement_history (time DESC)
    WHERE 'status' = ANY(changed_fields);

CREATE INDEX idx_enforcement_history_changed_fields
    ON enforcement_history USING GIN (changed_fields);

COMMENT ON TABLE enforcement_history IS 'Append-only history of field-level changes to the enforcement table (TimescaleDB hypertable)';
COMMENT ON COLUMN enforcement_history.entity_ref IS 'Denormalized rule_ref for JOIN-free queries';

-- ----------------------------------------------------------------------------
-- event_history
-- ----------------------------------------------------------------------------

CREATE TABLE event_history (
    time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    operation TEXT NOT NULL,
    entity_id BIGINT NOT NULL,
    entity_ref TEXT,
    changed_fields TEXT[] NOT NULL DEFAULT '{}',
    old_values JSONB,
    new_values JSONB
);

SELECT create_hypertable('event_history', 'time',
    chunk_time_interval => INTERVAL '1 day');

CREATE INDEX idx_event_history_entity
    ON event_history (entity_id, time DESC);

CREATE INDEX idx_event_history_entity_ref
    ON event_history (entity_ref, time DESC);

CREATE INDEX idx_event_history_changed_fields
    ON event_history USING GIN (changed_fields);

COMMENT ON TABLE event_history IS 'Append-only history of field-level changes to the event table (TimescaleDB hypertable)';
COMMENT ON COLUMN event_history.entity_ref IS 'Denormalized trigger_ref for JOIN-free queries';

-- ============================================================================
-- TRIGGER FUNCTIONS
-- ============================================================================

-- ----------------------------------------------------------------------------
-- execution history trigger
-- Tracked fields: status, result, executor, workflow_task, env_vars
-- Note: result uses _jsonb_digest_summary() to avoid storing large payloads
-- ----------------------------------------------------------------------------

CREATE OR REPLACE FUNCTION record_execution_history()
RETURNS TRIGGER AS $$
DECLARE
    changed TEXT[] := '{}';
    old_vals JSONB := '{}';
    new_vals JSONB := '{}';
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO execution_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
        VALUES (NOW(), 'INSERT', NEW.id, NEW.action_ref, '{}', NULL,
            jsonb_build_object(
                'status', NEW.status,
                'action_ref', NEW.action_ref,
                'executor', NEW.executor,
                'parent', NEW.parent,
                'enforcement', NEW.enforcement
            ));
        RETURN NEW;
    END IF;

    IF TG_OP = 'DELETE' THEN
        INSERT INTO execution_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
        VALUES (NOW(), 'DELETE', OLD.id, OLD.action_ref, '{}', NULL, NULL);
        RETURN OLD;
    END IF;

    -- UPDATE: detect which fields changed
    IF OLD.status IS DISTINCT FROM NEW.status THEN
        changed := array_append(changed, 'status');
        old_vals := old_vals || jsonb_build_object('status', OLD.status);
        new_vals := new_vals || jsonb_build_object('status', NEW.status);
    END IF;

    -- Result: store a compact digest instead of the full JSONB to avoid bloat.
    -- The live execution row always has the complete result.
    IF OLD.result IS DISTINCT FROM NEW.result THEN
        changed := array_append(changed, 'result');
        old_vals := old_vals || jsonb_build_object('result', _jsonb_digest_summary(OLD.result));
        new_vals := new_vals || jsonb_build_object('result', _jsonb_digest_summary(NEW.result));
    END IF;

    IF OLD.executor IS DISTINCT FROM NEW.executor THEN
        changed := array_append(changed, 'executor');
        old_vals := old_vals || jsonb_build_object('executor', OLD.executor);
        new_vals := new_vals || jsonb_build_object('executor', NEW.executor);
    END IF;

    IF OLD.workflow_task IS DISTINCT FROM NEW.workflow_task THEN
        changed := array_append(changed, 'workflow_task');
        old_vals := old_vals || jsonb_build_object('workflow_task', OLD.workflow_task);
        new_vals := new_vals || jsonb_build_object('workflow_task', NEW.workflow_task);
    END IF;

    IF OLD.env_vars IS DISTINCT FROM NEW.env_vars THEN
        changed := array_append(changed, 'env_vars');
        old_vals := old_vals || jsonb_build_object('env_vars', OLD.env_vars);
        new_vals := new_vals || jsonb_build_object('env_vars', NEW.env_vars);
    END IF;

    -- Only record if something actually changed
    IF array_length(changed, 1) > 0 THEN
        INSERT INTO execution_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
        VALUES (NOW(), 'UPDATE', NEW.id, NEW.action_ref, changed, old_vals, new_vals);
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION record_execution_history() IS 'Records field-level changes to execution table in execution_history hypertable';
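-- Example queries (illustrative; entity_id 42 is made up) served by the
-- indexes above: the partial index covers status-transition timelines and the
-- GIN index covers arbitrary changed-field filters.
--
--     -- Status transitions for one execution, newest first:
--     SELECT time,
--            old_values->>'status' AS from_status,
--            new_values->>'status' AS to_status
--     FROM execution_history
--     WHERE entity_id = 42
--       AND 'status' = ANY(changed_fields)
--     ORDER BY time DESC;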
-- ----------------------------------------------------------------------------
|
||||
-- worker history trigger
|
||||
-- Tracked fields: name, status, capabilities, meta, host, port
|
||||
-- Excludes: last_heartbeat when it is the only field that changed
|
||||
-- ----------------------------------------------------------------------------
|
||||
|
||||
CREATE OR REPLACE FUNCTION record_worker_history()
|
||||
RETURNS TRIGGER AS $$
|
||||
DECLARE
|
||||
changed TEXT[] := '{}';
|
||||
old_vals JSONB := '{}';
|
||||
new_vals JSONB := '{}';
|
||||
BEGIN
|
||||
IF TG_OP = 'INSERT' THEN
|
||||
INSERT INTO worker_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'INSERT', NEW.id, NEW.name, '{}', NULL,
|
||||
jsonb_build_object(
|
||||
'name', NEW.name,
|
||||
'worker_type', NEW.worker_type,
|
||||
'worker_role', NEW.worker_role,
|
||||
'status', NEW.status,
|
||||
'host', NEW.host,
|
||||
'port', NEW.port
|
||||
));
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
IF TG_OP = 'DELETE' THEN
|
||||
INSERT INTO worker_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'DELETE', OLD.id, OLD.name, '{}', NULL, NULL);
|
||||
RETURN OLD;
|
||||
END IF;
|
||||
|
||||
-- UPDATE: detect which fields changed
|
||||
IF OLD.name IS DISTINCT FROM NEW.name THEN
|
||||
changed := array_append(changed, 'name');
|
||||
old_vals := old_vals || jsonb_build_object('name', OLD.name);
|
||||
new_vals := new_vals || jsonb_build_object('name', NEW.name);
|
||||
END IF;
|
||||
|
||||
IF OLD.status IS DISTINCT FROM NEW.status THEN
|
||||
changed := array_append(changed, 'status');
|
||||
old_vals := old_vals || jsonb_build_object('status', OLD.status);
|
||||
new_vals := new_vals || jsonb_build_object('status', NEW.status);
|
||||
END IF;
|
||||
|
||||
IF OLD.capabilities IS DISTINCT FROM NEW.capabilities THEN
|
||||
changed := array_append(changed, 'capabilities');
|
||||
old_vals := old_vals || jsonb_build_object('capabilities', OLD.capabilities);
|
||||
new_vals := new_vals || jsonb_build_object('capabilities', NEW.capabilities);
|
||||
END IF;
|
||||
|
||||
IF OLD.meta IS DISTINCT FROM NEW.meta THEN
|
||||
changed := array_append(changed, 'meta');
|
||||
old_vals := old_vals || jsonb_build_object('meta', OLD.meta);
|
||||
new_vals := new_vals || jsonb_build_object('meta', NEW.meta);
|
||||
END IF;
|
||||
|
||||
IF OLD.host IS DISTINCT FROM NEW.host THEN
|
||||
changed := array_append(changed, 'host');
|
||||
old_vals := old_vals || jsonb_build_object('host', OLD.host);
|
||||
new_vals := new_vals || jsonb_build_object('host', NEW.host);
|
||||
END IF;
|
||||
|
||||
IF OLD.port IS DISTINCT FROM NEW.port THEN
|
||||
changed := array_append(changed, 'port');
|
||||
old_vals := old_vals || jsonb_build_object('port', OLD.port);
|
||||
new_vals := new_vals || jsonb_build_object('port', NEW.port);
|
||||
END IF;
|
||||
|
||||
-- Only record if something besides last_heartbeat changed.
|
||||
-- Pure heartbeat-only updates are excluded to avoid high-volume noise.
|
||||
IF array_length(changed, 1) > 0 THEN
|
||||
INSERT INTO worker_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'UPDATE', NEW.id, NEW.name, changed, old_vals, new_vals);
|
||||
END IF;
|
||||
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
COMMENT ON FUNCTION record_worker_history() IS 'Records field-level changes to worker table in worker_history hypertable. Excludes heartbeat-only updates.';
|
||||
|
||||
-- ----------------------------------------------------------------------------
|
||||
-- enforcement history trigger
|
||||
-- Tracked fields: status, payload
|
||||
-- ----------------------------------------------------------------------------
|
||||
|
||||
CREATE OR REPLACE FUNCTION record_enforcement_history()
|
||||
RETURNS TRIGGER AS $$
|
||||
DECLARE
|
||||
changed TEXT[] := '{}';
|
||||
old_vals JSONB := '{}';
|
||||
new_vals JSONB := '{}';
|
||||
BEGIN
|
||||
IF TG_OP = 'INSERT' THEN
|
||||
INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'INSERT', NEW.id, NEW.rule_ref, '{}', NULL,
|
||||
jsonb_build_object(
|
||||
'rule_ref', NEW.rule_ref,
|
||||
'trigger_ref', NEW.trigger_ref,
|
||||
'status', NEW.status,
|
||||
'condition', NEW.condition,
|
||||
'event', NEW.event
|
||||
));
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
IF TG_OP = 'DELETE' THEN
|
||||
INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'DELETE', OLD.id, OLD.rule_ref, '{}', NULL, NULL);
|
||||
RETURN OLD;
|
||||
END IF;
|
||||
|
||||
-- UPDATE: detect which fields changed
|
||||
IF OLD.status IS DISTINCT FROM NEW.status THEN
|
||||
changed := array_append(changed, 'status');
|
||||
old_vals := old_vals || jsonb_build_object('status', OLD.status);
|
||||
new_vals := new_vals || jsonb_build_object('status', NEW.status);
|
||||
END IF;
|
||||
|
||||
IF OLD.payload IS DISTINCT FROM NEW.payload THEN
|
||||
changed := array_append(changed, 'payload');
|
||||
old_vals := old_vals || jsonb_build_object('payload', OLD.payload);
|
||||
new_vals := new_vals || jsonb_build_object('payload', NEW.payload);
|
||||
END IF;
|
||||
|
||||
-- Only record if something actually changed
|
||||
IF array_length(changed, 1) > 0 THEN
|
||||
INSERT INTO enforcement_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'UPDATE', NEW.id, NEW.rule_ref, changed, old_vals, new_vals);
|
||||
END IF;
|
||||
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
COMMENT ON FUNCTION record_enforcement_history() IS 'Records field-level changes to enforcement table in enforcement_history hypertable';
|
||||
|
||||
-- ----------------------------------------------------------------------------
|
||||
-- event history trigger
|
||||
-- Tracked fields: config, payload
|
||||
-- ----------------------------------------------------------------------------
|
||||
|
||||
CREATE OR REPLACE FUNCTION record_event_history()
|
||||
RETURNS TRIGGER AS $$
|
||||
DECLARE
|
||||
changed TEXT[] := '{}';
|
||||
old_vals JSONB := '{}';
|
||||
new_vals JSONB := '{}';
|
||||
BEGIN
|
||||
IF TG_OP = 'INSERT' THEN
|
||||
INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'INSERT', NEW.id, NEW.trigger_ref, '{}', NULL,
|
||||
jsonb_build_object(
|
||||
'trigger_ref', NEW.trigger_ref,
|
||||
'source', NEW.source,
|
||||
'source_ref', NEW.source_ref,
|
||||
'rule', NEW.rule,
|
||||
'rule_ref', NEW.rule_ref
|
||||
));
|
||||
RETURN NEW;
|
||||
END IF;
|
||||
|
||||
IF TG_OP = 'DELETE' THEN
|
||||
INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
|
||||
VALUES (NOW(), 'DELETE', OLD.id, OLD.trigger_ref, '{}', NULL, NULL);
|
||||
RETURN OLD;
|
||||
END IF;
|
    -- UPDATE: detect which fields changed
    IF OLD.config IS DISTINCT FROM NEW.config THEN
        changed := array_append(changed, 'config');
        old_vals := old_vals || jsonb_build_object('config', OLD.config);
        new_vals := new_vals || jsonb_build_object('config', NEW.config);
    END IF;

    IF OLD.payload IS DISTINCT FROM NEW.payload THEN
        changed := array_append(changed, 'payload');
        old_vals := old_vals || jsonb_build_object('payload', OLD.payload);
        new_vals := new_vals || jsonb_build_object('payload', NEW.payload);
    END IF;

    -- Only record if something actually changed
    IF array_length(changed, 1) > 0 THEN
        INSERT INTO event_history (time, operation, entity_id, entity_ref, changed_fields, old_values, new_values)
        VALUES (NOW(), 'UPDATE', NEW.id, NEW.trigger_ref, changed, old_vals, new_vals);
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION record_event_history() IS 'Records field-level changes to event table in event_history hypertable';

-- ============================================================================
-- ATTACH TRIGGERS TO OPERATIONAL TABLES
-- ============================================================================

CREATE TRIGGER execution_history_trigger
    AFTER INSERT OR UPDATE OR DELETE ON execution
    FOR EACH ROW
    EXECUTE FUNCTION record_execution_history();

CREATE TRIGGER worker_history_trigger
    AFTER INSERT OR UPDATE OR DELETE ON worker
    FOR EACH ROW
    EXECUTE FUNCTION record_worker_history();

CREATE TRIGGER enforcement_history_trigger
    AFTER INSERT OR UPDATE OR DELETE ON enforcement
    FOR EACH ROW
    EXECUTE FUNCTION record_enforcement_history();

CREATE TRIGGER event_history_trigger
    AFTER INSERT OR UPDATE OR DELETE ON event
    FOR EACH ROW
    EXECUTE FUNCTION record_event_history();

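-- Illustrative example (not part of this migration): once the triggers above
-- are attached, every tracked row change yields one history row. An audit
-- query for a single execution might look like this (the entity_id value is
-- hypothetical):
--
--   SELECT time, operation, changed_fields, old_values, new_values
--   FROM execution_history
--   WHERE entity_id = 123
--   ORDER BY time DESC
--   LIMIT 20;
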
-- ============================================================================
-- COMPRESSION POLICIES
-- ============================================================================

ALTER TABLE execution_history SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'entity_id',
    timescaledb.compress_orderby = 'time DESC'
);
SELECT add_compression_policy('execution_history', INTERVAL '7 days');

ALTER TABLE worker_history SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'entity_id',
    timescaledb.compress_orderby = 'time DESC'
);
SELECT add_compression_policy('worker_history', INTERVAL '7 days');

ALTER TABLE enforcement_history SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'entity_id',
    timescaledb.compress_orderby = 'time DESC'
);
SELECT add_compression_policy('enforcement_history', INTERVAL '7 days');

ALTER TABLE event_history SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'entity_id',
    timescaledb.compress_orderby = 'time DESC'
);
SELECT add_compression_policy('event_history', INTERVAL '7 days');

-- ============================================================================
-- RETENTION POLICIES
-- ============================================================================

SELECT add_retention_policy('execution_history', INTERVAL '90 days');
SELECT add_retention_policy('enforcement_history', INTERVAL '90 days');
SELECT add_retention_policy('event_history', INTERVAL '30 days');
SELECT add_retention_policy('worker_history', INTERVAL '180 days');

-- ============================================================================
-- CONTINUOUS AGGREGATES
-- ============================================================================

-- Drop existing continuous aggregates if they exist, so this migration can be
-- re-run safely after a partial failure. (TimescaleDB continuous aggregates
-- must be dropped with CASCADE to remove their associated policies.)
DROP MATERIALIZED VIEW IF EXISTS execution_status_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS execution_throughput_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS event_volume_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS worker_status_hourly CASCADE;
DROP MATERIALIZED VIEW IF EXISTS enforcement_volume_hourly CASCADE;

-- ----------------------------------------------------------------------------
-- execution_status_hourly
-- Tracks execution status transitions per hour, grouped by action_ref and new status.
-- Powers: execution throughput chart, failure rate widget, status breakdown over time.
-- ----------------------------------------------------------------------------

CREATE MATERIALIZED VIEW execution_status_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    entity_ref AS action_ref,
    new_values->>'status' AS new_status,
    COUNT(*) AS transition_count
FROM execution_history
WHERE 'status' = ANY(changed_fields)
GROUP BY bucket, entity_ref, new_values->>'status'
WITH NO DATA;

SELECT add_continuous_aggregate_policy('execution_status_hourly',
    start_offset => INTERVAL '7 days',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes'
);
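
-- Illustrative example (not part of this migration): a failure-rate widget
-- could read this aggregate directly. This sketch assumes 'failed' is one of
-- the status values recorded in new_status:
--
--   SELECT bucket,
--          SUM(transition_count) FILTER (WHERE new_status = 'failed')::float
--            / NULLIF(SUM(transition_count), 0) AS failure_rate
--   FROM execution_status_hourly
--   WHERE bucket > NOW() - INTERVAL '24 hours'
--   GROUP BY bucket
--   ORDER BY bucket;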

-- ----------------------------------------------------------------------------
-- execution_throughput_hourly
-- Tracks total execution creation volume per hour, regardless of status.
-- Powers: execution throughput sparkline on the dashboard.
-- ----------------------------------------------------------------------------

CREATE MATERIALIZED VIEW execution_throughput_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    entity_ref AS action_ref,
    COUNT(*) AS execution_count
FROM execution_history
WHERE operation = 'INSERT'
GROUP BY bucket, entity_ref
WITH NO DATA;

SELECT add_continuous_aggregate_policy('execution_throughput_hourly',
    start_offset => INTERVAL '7 days',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes'
);

-- ----------------------------------------------------------------------------
-- event_volume_hourly
-- Tracks event creation volume per hour by trigger ref.
-- Powers: event throughput monitoring widget.
-- ----------------------------------------------------------------------------

CREATE MATERIALIZED VIEW event_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    entity_ref AS trigger_ref,
    COUNT(*) AS event_count
FROM event_history
WHERE operation = 'INSERT'
GROUP BY bucket, entity_ref
WITH NO DATA;

SELECT add_continuous_aggregate_policy('event_volume_hourly',
    start_offset => INTERVAL '7 days',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes'
);

-- ----------------------------------------------------------------------------
-- worker_status_hourly
-- Tracks worker status changes per hour (online/offline/draining transitions).
-- Powers: worker health trends widget.
-- ----------------------------------------------------------------------------

CREATE MATERIALIZED VIEW worker_status_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    entity_ref AS worker_name,
    new_values->>'status' AS new_status,
    COUNT(*) AS transition_count
FROM worker_history
WHERE 'status' = ANY(changed_fields)
GROUP BY bucket, entity_ref, new_values->>'status'
WITH NO DATA;

SELECT add_continuous_aggregate_policy('worker_status_hourly',
    start_offset => INTERVAL '30 days',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour'
);

-- ----------------------------------------------------------------------------
-- enforcement_volume_hourly
-- Tracks enforcement creation volume per hour by rule ref.
-- Powers: rule activation rate monitoring.
-- ----------------------------------------------------------------------------

CREATE MATERIALIZED VIEW enforcement_volume_hourly
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', time) AS bucket,
    entity_ref AS rule_ref,
    COUNT(*) AS enforcement_count
FROM enforcement_history
WHERE operation = 'INSERT'
GROUP BY bucket, entity_ref
WITH NO DATA;

SELECT add_continuous_aggregate_policy('enforcement_volume_hourly',
    start_offset => INTERVAL '7 days',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '30 minutes'
);

-- ============================================================================
-- INITIAL REFRESH NOTE
-- ============================================================================
-- NOTE: refresh_continuous_aggregate() cannot run inside a transaction block,
-- and the migration runner wraps each file in BEGIN/COMMIT. The continuous
-- aggregate policies configured above will automatically backfill data within
-- their first scheduled interval (30 min – 1 hour). On a fresh database there
-- is no history data to backfill anyway.
--
-- If you need an immediate manual refresh after migration, run outside a
-- transaction:
--   CALL refresh_continuous_aggregate('execution_status_hourly', NULL, NOW());
--   CALL refresh_continuous_aggregate('execution_throughput_hourly', NULL, NOW());
--   CALL refresh_continuous_aggregate('event_volume_hourly', NULL, NOW());
--   CALL refresh_continuous_aggregate('worker_status_hourly', NULL, NOW());
--   CALL refresh_continuous_aggregate('enforcement_volume_hourly', NULL, NOW());
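-- Illustrative example (not part of this migration): psql executes each -c
-- statement in autocommit mode, i.e. outside an explicit transaction block,
-- so the manual refresh can be driven from a shell (DATABASE_URL is a
-- placeholder):
--
--   psql "$DATABASE_URL" -c \
--     "CALL refresh_continuous_aggregate('execution_status_hourly', NULL, NOW());"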
@@ -1,168 +0,0 @@
-- Migration: Restore webhook functions
-- Description: Recreate webhook functions that were accidentally dropped in 20260129000001
-- Date: 2026-02-04

-- Drop existing functions to avoid signature conflicts
DROP FUNCTION IF EXISTS enable_trigger_webhook(BIGINT, JSONB);
DROP FUNCTION IF EXISTS enable_trigger_webhook(BIGINT);
DROP FUNCTION IF EXISTS disable_trigger_webhook(BIGINT);
DROP FUNCTION IF EXISTS regenerate_trigger_webhook_key(BIGINT);

-- Function to enable webhooks for a trigger
CREATE OR REPLACE FUNCTION enable_trigger_webhook(
    p_trigger_id BIGINT,
    p_config JSONB DEFAULT '{}'::jsonb
)
RETURNS TABLE(
    webhook_enabled BOOLEAN,
    webhook_key VARCHAR(255),
    webhook_url TEXT
) AS $$
DECLARE
    v_webhook_key VARCHAR(255);
    v_api_base_url TEXT := 'http://localhost:8080'; -- Default, should be configured
BEGIN
    -- Check if trigger exists
    IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
        RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
    END IF;

    -- Generate webhook key if one doesn't exist
    SELECT t.webhook_key INTO v_webhook_key
    FROM trigger t
    WHERE t.id = p_trigger_id;

    IF v_webhook_key IS NULL THEN
        v_webhook_key := generate_webhook_key();
    END IF;

    -- Update trigger to enable webhooks
    UPDATE trigger
    SET
        webhook_enabled = TRUE,
        webhook_key = v_webhook_key,
        webhook_config = p_config,
        updated = NOW()
    WHERE id = p_trigger_id;

    -- Return webhook details
    RETURN QUERY SELECT
        TRUE,
        v_webhook_key,
        v_api_base_url || '/api/v1/webhooks/' || v_webhook_key;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION enable_trigger_webhook(BIGINT, JSONB) IS
    'Enables webhooks for a trigger with optional configuration. Generates a new webhook key if one does not exist. Returns webhook details.';

-- Function to disable webhooks for a trigger
CREATE OR REPLACE FUNCTION disable_trigger_webhook(
    p_trigger_id BIGINT
)
RETURNS BOOLEAN AS $$
BEGIN
    -- Check if trigger exists
    IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
        RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
    END IF;

    -- Update trigger to disable webhooks
    -- Set webhook_key to NULL when disabling to remove it from API responses
    UPDATE trigger
    SET
        webhook_enabled = FALSE,
        webhook_key = NULL,
        updated = NOW()
    WHERE id = p_trigger_id;

    RETURN TRUE;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION disable_trigger_webhook(BIGINT) IS
    'Disables webhooks for a trigger. Webhook key is removed when disabled.';

-- Function to regenerate webhook key for a trigger
CREATE OR REPLACE FUNCTION regenerate_trigger_webhook_key(
    p_trigger_id BIGINT
)
RETURNS TABLE(
    webhook_key VARCHAR(255),
    previous_key_revoked BOOLEAN
) AS $$
DECLARE
    v_new_key VARCHAR(255);
    v_old_key VARCHAR(255);
    v_webhook_enabled BOOLEAN;
BEGIN
    -- Check if trigger exists
    IF NOT EXISTS (SELECT 1 FROM trigger WHERE id = p_trigger_id) THEN
        RAISE EXCEPTION 'Trigger with id % does not exist', p_trigger_id;
    END IF;

    -- Get current webhook state
    SELECT t.webhook_key, t.webhook_enabled INTO v_old_key, v_webhook_enabled
    FROM trigger t
    WHERE t.id = p_trigger_id;

    -- Check if webhooks are enabled
    IF NOT v_webhook_enabled THEN
        RAISE EXCEPTION 'Webhooks are not enabled for trigger %', p_trigger_id;
    END IF;

    -- Generate new key
    v_new_key := generate_webhook_key();

    -- Update trigger with new key
    UPDATE trigger
    SET
        webhook_key = v_new_key,
        updated = NOW()
    WHERE id = p_trigger_id;

    -- Return new key and whether old key was present
    RETURN QUERY SELECT
        v_new_key,
        (v_old_key IS NOT NULL);
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION regenerate_trigger_webhook_key(BIGINT) IS
    'Regenerates webhook key for a trigger. Returns new key and whether a previous key was revoked.';

-- Verify all functions exist
DO $$
BEGIN
    -- Check enable_trigger_webhook exists
    IF NOT EXISTS (
        SELECT 1 FROM pg_proc p
        JOIN pg_namespace n ON p.pronamespace = n.oid
        WHERE n.nspname = current_schema()
        AND p.proname = 'enable_trigger_webhook'
    ) THEN
        RAISE EXCEPTION 'enable_trigger_webhook function not found after migration';
    END IF;

    -- Check disable_trigger_webhook exists
    IF NOT EXISTS (
        SELECT 1 FROM pg_proc p
        JOIN pg_namespace n ON p.pronamespace = n.oid
        WHERE n.nspname = current_schema()
        AND p.proname = 'disable_trigger_webhook'
    ) THEN
        RAISE EXCEPTION 'disable_trigger_webhook function not found after migration';
    END IF;

    -- Check regenerate_trigger_webhook_key exists
    IF NOT EXISTS (
        SELECT 1 FROM pg_proc p
        JOIN pg_namespace n ON p.pronamespace = n.oid
        WHERE n.nspname = current_schema()
        AND p.proname = 'regenerate_trigger_webhook_key'
    ) THEN
        RAISE EXCEPTION 'regenerate_trigger_webhook_key function not found after migration';
    END IF;

    RAISE NOTICE 'All webhook functions successfully restored';
END $$;
@@ -1,274 +0,0 @@
-- Migration: Add Pack Runtime Environments
-- Description: Adds support for per-pack isolated runtime environments with installer metadata
-- Version: 20260203000002
-- Note: runtime.installers column is defined in migration 20250101000002_pack_system.sql

-- ============================================================================
-- PART 1: Create pack_environment table
-- ============================================================================

-- Pack environment table
CREATE TABLE IF NOT EXISTS pack_environment (
    id BIGSERIAL PRIMARY KEY,
    pack BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
    pack_ref TEXT NOT NULL,
    runtime BIGINT NOT NULL REFERENCES runtime(id) ON DELETE CASCADE,
    runtime_ref TEXT NOT NULL,
    env_path TEXT NOT NULL,
    status pack_environment_status_enum NOT NULL DEFAULT 'pending',
    installed_at TIMESTAMPTZ,
    last_verified TIMESTAMPTZ,
    install_log TEXT,
    install_error TEXT,
    metadata JSONB DEFAULT '{}'::jsonb,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    UNIQUE(pack, runtime)
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack ON pack_environment(pack);
CREATE INDEX IF NOT EXISTS idx_pack_environment_runtime ON pack_environment(runtime);
CREATE INDEX IF NOT EXISTS idx_pack_environment_status ON pack_environment(status);
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack_ref ON pack_environment(pack_ref);
CREATE INDEX IF NOT EXISTS idx_pack_environment_runtime_ref ON pack_environment(runtime_ref);
CREATE INDEX IF NOT EXISTS idx_pack_environment_pack_runtime ON pack_environment(pack, runtime);

-- Trigger for updated timestamp
CREATE TRIGGER update_pack_environment_updated
    BEFORE UPDATE ON pack_environment
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE pack_environment IS 'Tracks pack-specific runtime environments for dependency isolation';
COMMENT ON COLUMN pack_environment.pack IS 'Pack that owns this environment';
COMMENT ON COLUMN pack_environment.pack_ref IS 'Pack reference for quick lookup';
COMMENT ON COLUMN pack_environment.runtime IS 'Runtime used for this environment';
COMMENT ON COLUMN pack_environment.runtime_ref IS 'Runtime reference for quick lookup';
COMMENT ON COLUMN pack_environment.env_path IS 'Filesystem path to the environment directory (e.g., /opt/attune/packenvs/mypack/python)';
COMMENT ON COLUMN pack_environment.status IS 'Current installation status';
COMMENT ON COLUMN pack_environment.installed_at IS 'When the environment was successfully installed';
COMMENT ON COLUMN pack_environment.last_verified IS 'Last time the environment was verified as working';
COMMENT ON COLUMN pack_environment.install_log IS 'Installation output logs';
COMMENT ON COLUMN pack_environment.install_error IS 'Error message if installation failed';
COMMENT ON COLUMN pack_environment.metadata IS 'Additional metadata (installed packages, versions, etc.)';

-- ============================================================================
-- PART 2: Update existing runtimes with installer metadata
-- ============================================================================

-- Python runtime installers
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(
        jsonb_build_object(
            'name', 'create_venv',
            'description', 'Create Python virtual environment',
            'command', 'python3',
            'args', jsonb_build_array('-m', 'venv', '{env_path}'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(),
            'order', 1,
            'optional', false
        ),
        jsonb_build_object(
            'name', 'upgrade_pip',
            'description', 'Upgrade pip to latest version',
            'command', '{env_path}/bin/pip',
            'args', jsonb_build_array('install', '--upgrade', 'pip'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(),
            'order', 2,
            'optional', true
        ),
        jsonb_build_object(
            'name', 'install_requirements',
            'description', 'Install pack Python dependencies',
            'command', '{env_path}/bin/pip',
            'args', jsonb_build_array('install', '-r', '{pack_path}/requirements.txt'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(),
            'order', 3,
            'optional', false,
            'condition', jsonb_build_object(
                'file_exists', '{pack_path}/requirements.txt'
            )
        )
    ),
    'executable_templates', jsonb_build_object(
        'python', '{env_path}/bin/python',
        'pip', '{env_path}/bin/pip'
    )
)
WHERE ref = 'core.python';

-- Node.js runtime installers
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(
        jsonb_build_object(
            'name', 'npm_install',
            'description', 'Install Node.js dependencies',
            'command', 'npm',
            'args', jsonb_build_array('install', '--prefix', '{env_path}'),
            'cwd', '{pack_path}',
            'env', jsonb_build_object(
                'NODE_PATH', '{env_path}/node_modules'
            ),
            'order', 1,
            'optional', false,
            'condition', jsonb_build_object(
                'file_exists', '{pack_path}/package.json'
            )
        )
    ),
    'executable_templates', jsonb_build_object(
        'node', 'node',
        'npm', 'npm'
    ),
    'env_vars', jsonb_build_object(
        'NODE_PATH', '{env_path}/node_modules'
    )
)
WHERE ref = 'core.nodejs';

-- Shell runtime (no environment needed, uses system shell)
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(),
    'executable_templates', jsonb_build_object(
        'sh', 'sh',
        'bash', 'bash'
    ),
    'requires_environment', false
)
WHERE ref = 'core.shell';

-- Native runtime (no environment needed, binaries are standalone)
UPDATE runtime
SET installers = jsonb_build_object(
    'base_path_template', '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}',
    'installers', jsonb_build_array(),
    'executable_templates', jsonb_build_object(),
    'requires_environment', false
)
WHERE ref = 'core.native';

-- Built-in sensor runtime (internal, no environment)
UPDATE runtime
SET installers = jsonb_build_object(
    'installers', jsonb_build_array(),
    'requires_environment', false
)
WHERE ref = 'core.sensor.builtin';

-- ============================================================================
-- PART 3: Add helper functions
-- ============================================================================

-- Function to get environment path for a pack/runtime combination
CREATE OR REPLACE FUNCTION get_pack_environment_path(p_pack_ref TEXT, p_runtime_ref TEXT)
RETURNS TEXT AS $$
DECLARE
    v_runtime_name TEXT;
    v_base_template TEXT;
    v_result TEXT;
BEGIN
    -- Get runtime name and base path template
    SELECT
        LOWER(name),
        installers->>'base_path_template'
    INTO v_runtime_name, v_base_template
    FROM runtime
    WHERE ref = p_runtime_ref;

    IF v_base_template IS NULL THEN
        v_base_template := '/opt/attune/packenvs/{pack_ref}/{runtime_name_lower}';
    END IF;

    -- Replace template variables
    v_result := v_base_template;
    v_result := REPLACE(v_result, '{pack_ref}', p_pack_ref);
    v_result := REPLACE(v_result, '{runtime_ref}', p_runtime_ref);
    v_result := REPLACE(v_result, '{runtime_name_lower}', v_runtime_name);

    RETURN v_result;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

COMMENT ON FUNCTION get_pack_environment_path IS 'Calculate the filesystem path for a pack runtime environment';

-- Function to check if a runtime requires an environment
CREATE OR REPLACE FUNCTION runtime_requires_environment(p_runtime_ref TEXT)
RETURNS BOOLEAN AS $$
DECLARE
    v_requires BOOLEAN;
BEGIN
    SELECT COALESCE((installers->>'requires_environment')::boolean, true)
    INTO v_requires
    FROM runtime
    WHERE ref = p_runtime_ref;

    RETURN COALESCE(v_requires, false);
END;
$$ LANGUAGE plpgsql STABLE;

COMMENT ON FUNCTION runtime_requires_environment IS 'Check if a runtime needs a pack-specific environment';

-- ============================================================================
-- PART 4: Create view for environment status
-- ============================================================================

CREATE OR REPLACE VIEW v_pack_environment_status AS
SELECT
    pe.id,
    pe.pack,
    p.ref AS pack_ref,
    p.label AS pack_name,
    pe.runtime,
    r.ref AS runtime_ref,
    r.name AS runtime_name,
    pe.env_path,
    pe.status,
    pe.installed_at,
    pe.last_verified,
    CASE
        WHEN pe.status = 'ready' AND pe.last_verified < NOW() - INTERVAL '7 days' THEN true
        ELSE false
    END AS needs_verification,
    CASE
        WHEN pe.status = 'ready' THEN 'healthy'
        WHEN pe.status = 'failed' THEN 'unhealthy'
        WHEN pe.status IN ('pending', 'installing') THEN 'provisioning'
        WHEN pe.status = 'outdated' THEN 'needs_update'
        ELSE 'unknown'
    END AS health_status,
    pe.install_error,
    pe.created,
    pe.updated
FROM pack_environment pe
JOIN pack p ON pe.pack = p.id
JOIN runtime r ON pe.runtime = r.id;

COMMENT ON VIEW v_pack_environment_status IS 'Consolidated view of pack environment status with health indicators';

-- ============================================================================
-- SUMMARY
-- ============================================================================

-- Display summary of changes
DO $$
BEGIN
    RAISE NOTICE 'Pack environment system migration complete.';
    RAISE NOTICE '';
    RAISE NOTICE 'New table: pack_environment (tracks installed environments)';
    RAISE NOTICE 'New column: runtime.installers (environment setup instructions)';
    RAISE NOTICE 'New functions: get_pack_environment_path, runtime_requires_environment';
    RAISE NOTICE 'New view: v_pack_environment_status';
    RAISE NOTICE '';
    RAISE NOTICE 'Environment paths will be: /opt/attune/packenvs/{pack_ref}/{runtime}';
END $$;
@@ -1,154 +0,0 @@
|
||||
-- Migration: Add Pack Test Results Tracking
|
||||
-- Created: 2026-01-20
|
||||
-- Description: Add tables and views for tracking pack test execution results
|
||||
|
||||
-- Pack test execution tracking table
|
||||
CREATE TABLE IF NOT EXISTS pack_test_execution (
|
||||
id BIGSERIAL PRIMARY KEY,
|
||||
pack_id BIGINT NOT NULL REFERENCES pack(id) ON DELETE CASCADE,
|
||||
pack_version VARCHAR(50) NOT NULL,
|
||||
execution_time TIMESTAMPTZ NOT NULL DEFAULT NOW(),
|
||||
trigger_reason VARCHAR(50) NOT NULL, -- 'install', 'update', 'manual', 'validation'
|
||||
total_tests INT NOT NULL,
|
||||
passed INT NOT NULL,
|
||||
failed INT NOT NULL,
|
||||
skipped INT NOT NULL,
|
||||
pass_rate DECIMAL(5,4) NOT NULL, -- 0.0000 to 1.0000
|
||||
duration_ms BIGINT NOT NULL,
|
||||
result JSONB NOT NULL, -- Full test result structure
|
||||
created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
|
||||
|
||||
CONSTRAINT valid_test_counts CHECK (total_tests >= 0 AND passed >= 0 AND failed >= 0 AND skipped >= 0),
|
||||
CONSTRAINT valid_pass_rate CHECK (pass_rate >= 0.0 AND pass_rate <= 1.0),
|
||||
CONSTRAINT valid_trigger_reason CHECK (trigger_reason IN ('install', 'update', 'manual', 'validation'))
|
||||
);
|
||||
|
||||
-- Indexes for efficient queries
|
||||
CREATE INDEX idx_pack_test_execution_pack_id ON pack_test_execution(pack_id);
|
||||
CREATE INDEX idx_pack_test_execution_time ON pack_test_execution(execution_time DESC);
|
||||
CREATE INDEX idx_pack_test_execution_pass_rate ON pack_test_execution(pass_rate);
|
||||
CREATE INDEX idx_pack_test_execution_trigger ON pack_test_execution(trigger_reason);
|
||||
|
||||
-- Comments for documentation
|
||||
COMMENT ON TABLE pack_test_execution IS 'Tracks pack test execution results for validation and auditing';
|
||||
COMMENT ON COLUMN pack_test_execution.pack_id IS 'Reference to the pack being tested';
|
||||
COMMENT ON COLUMN pack_test_execution.pack_version IS 'Version of the pack at test time';
|
||||
COMMENT ON COLUMN pack_test_execution.trigger_reason IS 'What triggered the test: install, update, manual, validation';
|
||||
COMMENT ON COLUMN pack_test_execution.pass_rate IS 'Percentage of tests passed (0.0 to 1.0)';
|
||||
COMMENT ON COLUMN pack_test_execution.result IS 'Full JSON structure with detailed test results';
|
||||
|
||||
-- Pack test result summary view (all test executions with pack info)
|
||||
CREATE OR REPLACE VIEW pack_test_summary AS
|
||||
SELECT
|
||||
p.id AS pack_id,
|
||||
p.ref AS pack_ref,
|
||||
p.label AS pack_label,
|
||||
pte.id AS test_execution_id,
|
||||
pte.pack_version,
|
||||
pte.execution_time AS test_time,
|
||||
pte.trigger_reason,
|
||||
pte.total_tests,
|
||||
pte.passed,
|
||||
pte.failed,
|
||||
pte.skipped,
|
||||
pte.pass_rate,
|
||||
pte.duration_ms,
|
||||
ROW_NUMBER() OVER (PARTITION BY p.id ORDER BY pte.execution_time DESC) AS rn
|
||||
FROM pack p
|
||||
LEFT JOIN pack_test_execution pte ON p.id = pte.pack_id
|
||||
WHERE pte.id IS NOT NULL;
|
||||
|
||||
COMMENT ON VIEW pack_test_summary IS 'Summary of all pack test executions with pack details';
|
||||
|
||||
-- Latest test results per pack view
|
||||
CREATE OR REPLACE VIEW pack_latest_test AS
|
||||
SELECT
|
||||
pack_id,
|
||||
pack_ref,
|
||||
pack_label,
|
||||
test_execution_id,
|
||||
pack_version,
|
||||
test_time,
|
||||
trigger_reason,
|
||||
total_tests,
|
||||
passed,
|
||||
failed,
|
||||
skipped,
|
||||
pass_rate,
|
||||
duration_ms
|
||||
FROM pack_test_summary
|
||||
WHERE rn = 1;
|
||||
|
||||
COMMENT ON VIEW pack_latest_test IS 'Latest test results for each pack';
|
||||
|
||||
-- Function to get pack test statistics
|
||||
CREATE OR REPLACE FUNCTION get_pack_test_stats(p_pack_id BIGINT)
|
||||
RETURNS TABLE (
|
||||
total_executions BIGINT,
|
||||
successful_executions BIGINT,
|
||||
failed_executions BIGINT,
|
||||
avg_pass_rate DECIMAL,
|
||||
avg_duration_ms BIGINT,
|
||||
last_test_time TIMESTAMPTZ,
|
||||
last_test_passed BOOLEAN
|
||||
) AS $$
|
||||
BEGIN
|
||||
RETURN QUERY
|
||||
SELECT
|
||||
COUNT(*)::BIGINT AS total_executions,
|
||||
COUNT(*) FILTER (WHERE passed = total_tests)::BIGINT AS successful_executions,
|
||||
COUNT(*) FILTER (WHERE failed > 0)::BIGINT AS failed_executions,
|
||||
AVG(pass_rate) AS avg_pass_rate,
|
||||
AVG(duration_ms)::BIGINT AS avg_duration_ms,
|
||||
MAX(execution_time) AS last_test_time,
|
||||
(SELECT failed = 0 FROM pack_test_execution
|
||||
WHERE pack_id = p_pack_id
|
||||
ORDER BY execution_time DESC
|
||||
LIMIT 1) AS last_test_passed
|
||||
FROM pack_test_execution
|
||||
WHERE pack_id = p_pack_id;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
COMMENT ON FUNCTION get_pack_test_stats IS 'Get statistical summary of test executions for a pack';
|
||||
|
||||
-- Function to check if pack has recent passing tests
|
||||
CREATE OR REPLACE FUNCTION pack_has_passing_tests(
|
||||
p_pack_id BIGINT,
|
||||
p_hours_ago INT DEFAULT 24
|
||||
)
|
||||
RETURNS BOOLEAN AS $$
|
||||
DECLARE
|
||||
v_has_passing_tests BOOLEAN;
|
||||
BEGIN
|
||||
SELECT EXISTS(
|
||||
SELECT 1
|
||||
FROM pack_test_execution
|
||||
WHERE pack_id = p_pack_id
|
||||
AND execution_time > NOW() - (p_hours_ago || ' hours')::INTERVAL
|
||||
AND failed = 0
|
||||
AND total_tests > 0
|
||||
) INTO v_has_passing_tests;
|
||||
|
||||
RETURN v_has_passing_tests;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
COMMENT ON FUNCTION pack_has_passing_tests IS 'Check if pack has recent passing test executions';
|
||||
|
||||
-- Add trigger to update pack metadata on test execution
|
||||
CREATE OR REPLACE FUNCTION update_pack_test_metadata()
|
||||
RETURNS TRIGGER AS $$
|
||||
BEGIN
|
||||
-- Could update pack table with last_tested timestamp if we add that column
|
||||
-- For now, just a placeholder for future functionality
|
||||
RETURN NEW;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
CREATE TRIGGER trigger_update_pack_test_metadata
|
||||
AFTER INSERT ON pack_test_execution
|
||||
FOR EACH ROW
|
||||
EXECUTE FUNCTION update_pack_test_metadata();
|
||||
|
||||
COMMENT ON TRIGGER trigger_update_pack_test_metadata ON pack_test_execution IS 'Updates pack metadata when tests are executed';
@@ -1,56 +0,0 @@
-- Migration: Worker Table
-- Description: Creates worker table for tracking worker registration and heartbeat
-- Version: 20250101000014

-- ============================================================================
-- WORKER TABLE
-- ============================================================================

CREATE TABLE worker (
    id BIGSERIAL PRIMARY KEY,
    name TEXT NOT NULL UNIQUE,
    worker_type worker_type_enum NOT NULL,
    worker_role worker_role_enum NOT NULL,
    runtime BIGINT REFERENCES runtime(id) ON DELETE SET NULL,
    host TEXT,
    port INTEGER,
    status worker_status_enum NOT NULL DEFAULT 'active',
    capabilities JSONB,
    meta JSONB,
    last_heartbeat TIMESTAMPTZ,
    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Indexes
CREATE INDEX idx_worker_name ON worker(name);
CREATE INDEX idx_worker_type ON worker(worker_type);
CREATE INDEX idx_worker_role ON worker(worker_role);
CREATE INDEX idx_worker_runtime ON worker(runtime);
CREATE INDEX idx_worker_status ON worker(status);
CREATE INDEX idx_worker_last_heartbeat ON worker(last_heartbeat DESC) WHERE last_heartbeat IS NOT NULL;
CREATE INDEX idx_worker_created ON worker(created DESC);
CREATE INDEX idx_worker_status_role ON worker(status, worker_role);
CREATE INDEX idx_worker_capabilities_gin ON worker USING GIN (capabilities);
CREATE INDEX idx_worker_meta_gin ON worker USING GIN (meta);

-- Trigger
CREATE TRIGGER update_worker_updated
    BEFORE UPDATE ON worker
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE worker IS 'Worker registration and tracking table for action and sensor workers';
COMMENT ON COLUMN worker.name IS 'Unique worker identifier (typically hostname-based)';
COMMENT ON COLUMN worker.worker_type IS 'Worker deployment type (local or remote)';
COMMENT ON COLUMN worker.worker_role IS 'Worker role (action or sensor)';
COMMENT ON COLUMN worker.runtime IS 'Runtime environment this worker supports (optional)';
COMMENT ON COLUMN worker.host IS 'Worker host address';
COMMENT ON COLUMN worker.port IS 'Worker port number';
COMMENT ON COLUMN worker.status IS 'Worker operational status';
COMMENT ON COLUMN worker.capabilities IS 'Worker capabilities (e.g., max_concurrent_executions, supported runtimes)';
COMMENT ON COLUMN worker.meta IS 'Additional worker metadata';
COMMENT ON COLUMN worker.last_heartbeat IS 'Timestamp of last heartbeat from worker';

-- ============================================================================
@@ -1,127 +0,0 @@
-- Phase 3: Retry Tracking and Action Timeout Configuration
-- This migration adds support for:
-- 1. Retry tracking on executions (attempt count, max attempts, retry reason)
-- 2. Action-level timeout configuration
-- 3. Worker health metrics

-- Add retry tracking fields to execution table
ALTER TABLE execution
    ADD COLUMN retry_count INTEGER NOT NULL DEFAULT 0,
    ADD COLUMN max_retries INTEGER,
    ADD COLUMN retry_reason TEXT,
    ADD COLUMN original_execution BIGINT REFERENCES execution(id) ON DELETE SET NULL;

-- Add index for finding retry chains
CREATE INDEX idx_execution_original_execution ON execution(original_execution) WHERE original_execution IS NOT NULL;

-- Add timeout configuration to action table
ALTER TABLE action
    ADD COLUMN timeout_seconds INTEGER,
    ADD COLUMN max_retries INTEGER DEFAULT 0;

-- Add comment explaining timeout behavior
COMMENT ON COLUMN action.timeout_seconds IS 'Worker queue TTL override in seconds. If NULL, uses global worker_queue_ttl_ms config. Allows per-action timeout tuning.';
COMMENT ON COLUMN action.max_retries IS 'Maximum number of automatic retry attempts for failed executions. 0 = no retries (default).';
COMMENT ON COLUMN execution.retry_count IS 'Current retry attempt number (0 = first attempt, 1 = first retry, etc.)';
COMMENT ON COLUMN execution.max_retries IS 'Maximum retries for this execution. Copied from action.max_retries at creation time.';
COMMENT ON COLUMN execution.retry_reason IS 'Reason for retry (e.g., "worker_unavailable", "transient_error", "manual_retry")';
COMMENT ON COLUMN execution.original_execution IS 'ID of the original execution if this is a retry. Forms a retry chain.';

-- Add worker health tracking fields
-- These are stored in the capabilities JSONB field as a "health" object:
-- {
--   "runtimes": [...],
--   "health": {
--     "status": "healthy|degraded|unhealthy",
--     "last_check": "2026-02-09T12:00:00Z",
--     "consecutive_failures": 0,
--     "total_executions": 100,
--     "failed_executions": 2,
--     "average_execution_time_ms": 1500,
--     "queue_depth": 5
--   }
-- }

-- Add index for health-based queries (using JSONB path operators)
CREATE INDEX idx_worker_capabilities_health_status ON worker
    USING GIN ((capabilities -> 'health' -> 'status'));

-- Add view for healthy workers (convenience for queries)
CREATE OR REPLACE VIEW healthy_workers AS
SELECT
    w.id,
    w.name,
    w.worker_type,
    w.worker_role,
    w.runtime,
    w.status,
    w.capabilities,
    w.last_heartbeat,
    (w.capabilities -> 'health' ->> 'status')::TEXT as health_status,
    (w.capabilities -> 'health' ->> 'queue_depth')::INTEGER as queue_depth,
    (w.capabilities -> 'health' ->> 'consecutive_failures')::INTEGER as consecutive_failures
FROM worker w
WHERE
    w.status = 'active'
    AND w.last_heartbeat > NOW() - INTERVAL '30 seconds'
    AND (
        -- Healthy if no health info (backward compatible)
        w.capabilities -> 'health' IS NULL
        OR
        -- Or explicitly marked healthy
        w.capabilities -> 'health' ->> 'status' IN ('healthy', 'degraded')
    );

COMMENT ON VIEW healthy_workers IS 'Workers that are active, have fresh heartbeat, and are healthy or degraded (not unhealthy)';

-- Add function to get worker queue depth estimate
CREATE OR REPLACE FUNCTION get_worker_queue_depth(worker_id_param BIGINT)
RETURNS INTEGER AS $$
BEGIN
    -- Extract queue depth from capabilities.health.queue_depth
    -- Returns NULL if not available
    RETURN (
        SELECT (capabilities -> 'health' ->> 'queue_depth')::INTEGER
        FROM worker
        WHERE id = worker_id_param
    );
END;
$$ LANGUAGE plpgsql STABLE;

COMMENT ON FUNCTION get_worker_queue_depth IS 'Extract current queue depth from worker health metadata';

-- Add function to check if execution is retriable
CREATE OR REPLACE FUNCTION is_execution_retriable(execution_id_param BIGINT)
RETURNS BOOLEAN AS $$
DECLARE
    exec_record RECORD;
BEGIN
    SELECT
        e.retry_count,
        e.max_retries,
        e.status
    INTO exec_record
    FROM execution e
    WHERE e.id = execution_id_param;

    IF NOT FOUND THEN
        RETURN FALSE;
    END IF;

    -- Can retry if:
    -- 1. Status is failed
    -- 2. max_retries is set and > 0
    -- 3. retry_count < max_retries
    RETURN (
        exec_record.status = 'failed'
        AND exec_record.max_retries IS NOT NULL
        AND exec_record.max_retries > 0
        AND exec_record.retry_count < exec_record.max_retries
    );
END;
$$ LANGUAGE plpgsql STABLE;

COMMENT ON FUNCTION is_execution_retriable IS 'Check if a failed execution can be automatically retried based on retry limits';

-- Add indexes for retry queries
CREATE INDEX idx_execution_status_retry ON execution(status, retry_count) WHERE status = 'failed' AND retry_count < COALESCE(max_retries, 0);
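The retry-eligibility rule encoded in `is_execution_retriable` (failed status, a positive retry budget, attempts remaining) can be mirrored in application code. This is a minimal TypeScript sketch under assumed field names; the `ExecutionRow` shape and function name are illustrative, not from the codebase:

```typescript
// Hypothetical execution shape (field names assumed from the migration columns).
interface ExecutionRow {
  status: string;
  retryCount: number;
  maxRetries: number | null;
}

// Same three conditions as the SQL function: failed status, max_retries set
// and > 0, and retry_count still below the budget. Missing rows are not
// retriable, matching the IF NOT FOUND branch.
function isExecutionRetriable(e: ExecutionRow | undefined): boolean {
  if (!e) return false;
  return (
    e.status === "failed" &&
    e.maxRetries !== null &&
    e.maxRetries > 0 &&
    e.retryCount < e.maxRetries
  );
}
```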
@@ -1,105 +0,0 @@
-- Migration: Runtime Versions
-- Description: Adds support for multiple versions of the same runtime (e.g., Python 3.11, 3.12, 3.14).
-- - New `runtime_version` table to store version-specific execution configurations
-- - New `runtime_version_constraint` columns on action and sensor tables
-- Version: 20260226000000

-- ============================================================================
-- RUNTIME VERSION TABLE
-- ============================================================================

CREATE TABLE runtime_version (
    id BIGSERIAL PRIMARY KEY,
    runtime BIGINT NOT NULL REFERENCES runtime(id) ON DELETE CASCADE,
    runtime_ref TEXT NOT NULL,

    -- Semantic version string (e.g., "3.12.1", "20.11.0")
    version TEXT NOT NULL,

    -- Individual version components for efficient range queries.
    -- Nullable because some runtimes may use non-numeric versioning.
    version_major INT,
    version_minor INT,
    version_patch INT,

    -- Complete execution configuration for this specific version.
    -- This is NOT a diff/override — it is a full standalone config that can
    -- replace the parent runtime's execution_config when this version is selected.
    -- Structure is identical to runtime.execution_config (RuntimeExecutionConfig).
    execution_config JSONB NOT NULL DEFAULT '{}'::jsonb,

    -- Version-specific distribution/verification metadata.
    -- Structure mirrors runtime.distributions but with version-specific commands.
    -- Example: verification commands that check for a specific binary like python3.12.
    distributions JSONB NOT NULL DEFAULT '{}'::jsonb,

    -- Whether this version is the default for the parent runtime.
    -- At most one version per runtime should be marked as default.
    is_default BOOLEAN NOT NULL DEFAULT FALSE,

    -- Whether this version has been verified as available on the current system.
    available BOOLEAN NOT NULL DEFAULT TRUE,

    -- When this version was last verified (via running verification commands).
    verified_at TIMESTAMPTZ,

    -- Arbitrary version-specific metadata (e.g., EOL date, release notes URL,
    -- feature flags, platform-specific notes).
    meta JSONB NOT NULL DEFAULT '{}'::jsonb,

    created TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    -- Constraints
    CONSTRAINT runtime_version_unique UNIQUE(runtime, version)
);

-- Indexes
CREATE INDEX idx_runtime_version_runtime ON runtime_version(runtime);
CREATE INDEX idx_runtime_version_runtime_ref ON runtime_version(runtime_ref);
CREATE INDEX idx_runtime_version_version ON runtime_version(version);
CREATE INDEX idx_runtime_version_available ON runtime_version(available) WHERE available = TRUE;
CREATE INDEX idx_runtime_version_is_default ON runtime_version(is_default) WHERE is_default = TRUE;
CREATE INDEX idx_runtime_version_components ON runtime_version(runtime, version_major, version_minor, version_patch);
CREATE INDEX idx_runtime_version_created ON runtime_version(created DESC);
CREATE INDEX idx_runtime_version_execution_config ON runtime_version USING GIN (execution_config);
CREATE INDEX idx_runtime_version_meta ON runtime_version USING GIN (meta);

-- Trigger
CREATE TRIGGER update_runtime_version_updated
    BEFORE UPDATE ON runtime_version
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_column();

-- Comments
COMMENT ON TABLE runtime_version IS 'Specific versions of a runtime (e.g., Python 3.11, 3.12) with version-specific execution configuration';
COMMENT ON COLUMN runtime_version.runtime IS 'Parent runtime this version belongs to';
COMMENT ON COLUMN runtime_version.runtime_ref IS 'Parent runtime ref (e.g., core.python) for display/filtering';
COMMENT ON COLUMN runtime_version.version IS 'Semantic version string (e.g., "3.12.1", "20.11.0")';
COMMENT ON COLUMN runtime_version.version_major IS 'Major version component for efficient range queries';
COMMENT ON COLUMN runtime_version.version_minor IS 'Minor version component for efficient range queries';
COMMENT ON COLUMN runtime_version.version_patch IS 'Patch version component for efficient range queries';
COMMENT ON COLUMN runtime_version.execution_config IS 'Complete execution configuration for this version (same structure as runtime.execution_config)';
COMMENT ON COLUMN runtime_version.distributions IS 'Version-specific distribution/verification metadata';
COMMENT ON COLUMN runtime_version.is_default IS 'Whether this is the default version for the parent runtime (at most one per runtime)';
COMMENT ON COLUMN runtime_version.available IS 'Whether this version has been verified as available on the system';
COMMENT ON COLUMN runtime_version.verified_at IS 'Timestamp of last availability verification';
COMMENT ON COLUMN runtime_version.meta IS 'Arbitrary version-specific metadata';

-- ============================================================================
-- ACTION TABLE: ADD RUNTIME VERSION CONSTRAINT
-- ============================================================================

ALTER TABLE action
    ADD COLUMN runtime_version_constraint TEXT;

COMMENT ON COLUMN action.runtime_version_constraint IS 'Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0"). NULL means any version.';

-- ============================================================================
-- SENSOR TABLE: ADD RUNTIME VERSION CONSTRAINT
-- ============================================================================

ALTER TABLE sensor
    ADD COLUMN runtime_version_constraint TEXT;

COMMENT ON COLUMN sensor.runtime_version_constraint IS 'Semver version constraint for the runtime (e.g., ">=3.12", ">=3.12,<4.0", "~18.0"). NULL means any version.';
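Resolving a `runtime_version_constraint` like `">=3.12,<4.0"` against the stored version components is a small comparison problem. This is a minimal TypeScript sketch under stated assumptions: it handles only comma-separated `>=`, `<=`, `>`, `<`, and exact clauses, ignores tilde ranges and prerelease tags, and all names are illustrative rather than from the codebase:

```typescript
// Split "3.12.1" into numeric components, padding missing parts with 0.
function parseVersion(v: string): number[] {
  return v.split(".").map((p) => parseInt(p, 10) || 0);
}

// Component-wise comparison: -1, 0, or 1.
function compareVersions(a: string, b: string): number {
  const pa = parseVersion(a);
  const pb = parseVersion(b);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) return d < 0 ? -1 : 1;
  }
  return 0;
}

// Check a version against a constraint string such as ">=3.12,<4.0".
// NULL means any version, matching the column comment above.
function satisfiesConstraint(version: string, constraint: string | null): boolean {
  if (!constraint) return true;
  return constraint.split(",").every((clause) => {
    const m = clause.trim().match(/^(>=|<=|>|<|=)?\s*(.+)$/);
    if (!m) return false;
    const op = m[1] ?? "=";
    const target = m[2];
    const c = compareVersions(version, target);
    switch (op) {
      case ">=": return c >= 0;
      case "<=": return c <= 0;
      case ">": return c > 0;
      case "<": return c < 0;
      default: return c === 0;
    }
  });
}
```

In practice the `version_major`/`version_minor`/`version_patch` columns let the same range checks happen in SQL; the sketch only shows the clause semantics.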
772
web/src/components/common/AnalyticsWidgets.tsx
Normal file
@@ -0,0 +1,772 @@
import { useMemo, useState } from "react";
import {
  Activity,
  AlertTriangle,
  BarChart3,
  CheckCircle,
  Server,
  Zap,
} from "lucide-react";
import type {
  DashboardAnalytics,
  TimeSeriesPoint,
  FailureRateSummary,
} from "@/hooks/useAnalytics";

// ---------------------------------------------------------------------------
// Shared types & helpers
// ---------------------------------------------------------------------------

type TimeRangeHours = 6 | 12 | 24 | 48 | 168;

const TIME_RANGE_OPTIONS: { label: string; value: TimeRangeHours }[] = [
  { label: "6h", value: 6 },
  { label: "12h", value: 12 },
  { label: "24h", value: 24 },
  { label: "2d", value: 48 },
  { label: "7d", value: 168 },
];

function formatBucketLabel(iso: string, rangeHours: number): string {
  const d = new Date(iso);
  if (rangeHours <= 24) {
    return d.toLocaleTimeString([], { hour: "2-digit", minute: "2-digit" });
  }
  if (rangeHours <= 48) {
    return d.toLocaleDateString([], { weekday: "short", hour: "2-digit" });
  }
  return d.toLocaleDateString([], { month: "short", day: "numeric" });
}

function formatBucketTooltip(iso: string): string {
  const d = new Date(iso);
  return d.toLocaleString();
}

/**
 * Aggregate TimeSeriesPoints into per-bucket totals or per-bucket-per-label groups.
 */
function aggregateByBucket(
  points: TimeSeriesPoint[],
): Map<string, { total: number; byLabel: Map<string, number> }> {
  const map = new Map<
    string,
    { total: number; byLabel: Map<string, number> }
  >();
  for (const p of points) {
    let entry = map.get(p.bucket);
    if (!entry) {
      entry = { total: 0, byLabel: new Map() };
      map.set(p.bucket, entry);
    }
    entry.total += p.value;
    if (p.label) {
      entry.byLabel.set(p.label, (entry.byLabel.get(p.label) || 0) + p.value);
    }
  }
  return map;
}

// ---------------------------------------------------------------------------
// TimeRangeSelector
// ---------------------------------------------------------------------------

interface TimeRangeSelectorProps {
  value: TimeRangeHours;
  onChange: (v: TimeRangeHours) => void;
}

function TimeRangeSelector({ value, onChange }: TimeRangeSelectorProps) {
  return (
    <div className="inline-flex items-center bg-gray-100 rounded-md p-0.5 text-xs">
      {TIME_RANGE_OPTIONS.map((opt) => (
        <button
          key={opt.value}
          onClick={() => onChange(opt.value)}
          className={`px-2 py-1 rounded transition-colors ${
            value === opt.value
              ? "bg-white shadow text-gray-900 font-medium"
              : "text-gray-500 hover:text-gray-700"
          }`}
        >
          {opt.label}
        </button>
      ))}
    </div>
  );
}

// ---------------------------------------------------------------------------
// MiniBarChart — pure-CSS bar chart for time-series data
// ---------------------------------------------------------------------------

interface MiniBarChartProps {
  /** Ordered time buckets with totals */
  buckets: { bucket: string; value: number }[];
  /** Current time range in hours (affects label formatting) */
  rangeHours: number;
  /** Bar color class (Tailwind bg-* class) */
  barColor?: string;
  /** Height of the chart in pixels */
  height?: number;
  /** Show zero line */
  showZeroLine?: boolean;
}

function MiniBarChart({
  buckets,
  rangeHours,
  barColor = "bg-blue-500",
  height = 120,
  showZeroLine = true,
}: MiniBarChartProps) {
  const [hoveredIdx, setHoveredIdx] = useState<number | null>(null);

  const maxValue = useMemo(
    () => Math.max(1, ...buckets.map((b) => b.value)),
    [buckets],
  );

  if (buckets.length === 0) {
    return (
      <div
        className="flex items-center justify-center text-gray-400 text-xs"
        style={{ height }}
      >
        No data in this time range
      </div>
    );
  }

  // For large ranges, show fewer labels to avoid clutter
  const labelEvery =
    buckets.length > 24
      ? Math.ceil(buckets.length / 8)
      : buckets.length > 12
        ? 2
        : 1;

  return (
    <div className="relative" style={{ height: height + 24 }}>
      {/* Tooltip */}
      {hoveredIdx !== null && buckets[hoveredIdx] && (
        <div className="absolute -top-1 left-1/2 -translate-x-1/2 z-10 bg-gray-800 text-white text-xs rounded px-2 py-1 whitespace-nowrap pointer-events-none shadow-lg">
          {formatBucketTooltip(buckets[hoveredIdx].bucket)}:{" "}
          <span className="font-semibold">{buckets[hoveredIdx].value}</span>
        </div>
      )}

      {/* Bars */}
      <div className="flex items-end gap-px w-full" style={{ height }}>
        {buckets.map((b, i) => {
          const pct = (b.value / maxValue) * 100;
          return (
            <div
              key={b.bucket}
              className="flex-1 min-w-0 relative group"
              style={{ height: "100%" }}
              onMouseEnter={() => setHoveredIdx(i)}
              onMouseLeave={() => setHoveredIdx(null)}
            >
              <div className="absolute bottom-0 inset-x-0 flex justify-center">
                <div
                  className={`w-full rounded-t-sm transition-all duration-150 ${
                    hoveredIdx === i ? barColor.replace("500", "600") : barColor
                  } ${hoveredIdx === i ? "opacity-100" : "opacity-80"}`}
                  style={{
                    height: `${Math.max(pct, b.value > 0 ? 2 : 0)}%`,
                    minHeight: b.value > 0 ? "2px" : "0",
                  }}
                />
              </div>
            </div>
          );
        })}
      </div>

      {/* Zero line */}
      {showZeroLine && (
        <div className="absolute bottom-6 left-0 right-0 border-t border-gray-200" />
      )}

      {/* X-axis labels */}
      <div className="flex items-start mt-1 h-5">
        {buckets.map((b, i) =>
          i % labelEvery === 0 ? (
            <div
              key={b.bucket}
              className="flex-1 text-center text-[9px] text-gray-400 truncate"
              style={{ minWidth: 0 }}
            >
              {formatBucketLabel(b.bucket, rangeHours)}
            </div>
          ) : (
            <div key={b.bucket} className="flex-1" />
          ),
        )}
      </div>
    </div>
  );
}

// ---------------------------------------------------------------------------
// StackedBarChart — stacked bar chart for status breakdowns
// ---------------------------------------------------------------------------

const STATUS_COLORS: Record<string, { bg: string; legend: string }> = {
  completed: { bg: "bg-green-500", legend: "bg-green-500" },
  failed: { bg: "bg-red-500", legend: "bg-red-500" },
  timeout: { bg: "bg-orange-500", legend: "bg-orange-500" },
  running: { bg: "bg-blue-500", legend: "bg-blue-500" },
  requested: { bg: "bg-yellow-400", legend: "bg-yellow-400" },
  scheduled: { bg: "bg-yellow-500", legend: "bg-yellow-500" },
  scheduling: { bg: "bg-yellow-300", legend: "bg-yellow-300" },
  cancelled: { bg: "bg-gray-400", legend: "bg-gray-400" },
  canceling: { bg: "bg-gray-300", legend: "bg-gray-300" },
  abandoned: { bg: "bg-purple-400", legend: "bg-purple-400" },
  online: { bg: "bg-green-500", legend: "bg-green-500" },
  offline: { bg: "bg-red-400", legend: "bg-red-400" },
  draining: { bg: "bg-yellow-500", legend: "bg-yellow-500" },
};

function getStatusColor(status: string): string {
  return STATUS_COLORS[status]?.bg || "bg-gray-400";
}

interface StackedBarChartProps {
  points: TimeSeriesPoint[];
  rangeHours: number;
  height?: number;
}

function StackedBarChart({
  points,
  rangeHours,
  height = 120,
}: StackedBarChartProps) {
  const [hoveredIdx, setHoveredIdx] = useState<number | null>(null);

  const { buckets, allLabels, maxTotal } = useMemo(() => {
    const agg = aggregateByBucket(points);
    const sorted = Array.from(agg.entries()).sort(([a], [b]) =>
      a.localeCompare(b),
    );

    const labels = new Set<string>();
    sorted.forEach(([, v]) => v.byLabel.forEach((_, k) => labels.add(k)));

    const mx = Math.max(1, ...sorted.map(([, v]) => v.total));

    return {
      buckets: sorted.map(([bucket, v]) => ({
        bucket,
        total: v.total,
        byLabel: v.byLabel,
      })),
      allLabels: Array.from(labels).sort(),
      maxTotal: mx,
    };
  }, [points]);

  if (buckets.length === 0) {
    return (
      <div
        className="flex items-center justify-center text-gray-400 text-xs"
        style={{ height }}
      >
        No data in this time range
      </div>
    );
  }

  const labelEvery =
    buckets.length > 24
      ? Math.ceil(buckets.length / 8)
      : buckets.length > 12
        ? 2
        : 1;

  return (
    <div>
      {/* Legend */}
      <div className="flex flex-wrap gap-x-3 gap-y-1 mb-2">
        {allLabels.map((label) => (
          <div
            key={label}
            className="flex items-center gap-1 text-[10px] text-gray-600"
          >
            <div
              className={`w-2 h-2 rounded-sm ${STATUS_COLORS[label]?.legend || "bg-gray-400"}`}
            />
            {label}
          </div>
        ))}
      </div>

      <div className="relative" style={{ height: height + 24 }}>
        {/* Tooltip */}
        {hoveredIdx !== null && buckets[hoveredIdx] && (
          <div className="absolute -top-1 left-1/2 -translate-x-1/2 z-10 bg-gray-800 text-white text-xs rounded px-2 py-1 whitespace-nowrap pointer-events-none shadow-lg">
            <div className="font-medium mb-0.5">
              {formatBucketTooltip(buckets[hoveredIdx].bucket)}
            </div>
            {Array.from(buckets[hoveredIdx].byLabel.entries()).map(
              ([label, count]) => (
                <div key={label}>
                  {label}: {count}
                </div>
              ),
            )}
          </div>
        )}

        {/* Bars */}
        <div className="flex items-end gap-px w-full" style={{ height }}>
          {buckets.map((b, i) => {
            const totalPct = (b.total / maxTotal) * 100;
            return (
              <div
                key={b.bucket}
                className="flex-1 min-w-0 relative"
                style={{ height: "100%" }}
                onMouseEnter={() => setHoveredIdx(i)}
                onMouseLeave={() => setHoveredIdx(null)}
              >
                <div
                  className="absolute bottom-0 inset-x-0 flex flex-col-reverse"
                  style={{
                    height: `${Math.max(totalPct, b.total > 0 ? 2 : 0)}%`,
                    minHeight: b.total > 0 ? "2px" : "0",
                  }}
                >
                  {allLabels.map((label) => {
                    const count = b.byLabel.get(label) || 0;
                    if (count === 0) return null;
                    const segmentPct = (count / b.total) * 100;
                    return (
                      <div
                        key={label}
                        className={`w-full ${getStatusColor(label)} ${
                          hoveredIdx === i ? "opacity-100" : "opacity-80"
                        } transition-opacity`}
                        style={{
                          height: `${segmentPct}%`,
                          minHeight: "1px",
                        }}
                      />
                    );
                  })}
                </div>
              </div>
            );
          })}
        </div>

        {/* X-axis labels */}
        <div className="flex items-start mt-1 h-5">
          {buckets.map((b, i) =>
            i % labelEvery === 0 ? (
              <div
                key={b.bucket}
                className="flex-1 text-center text-[9px] text-gray-400 truncate"
                style={{ minWidth: 0 }}
              >
                {formatBucketLabel(b.bucket, rangeHours)}
              </div>
            ) : (
              <div key={b.bucket} className="flex-1" />
            ),
          )}
        </div>
      </div>
    </div>
  );
}

// ---------------------------------------------------------------------------
// FailureRateCard
// ---------------------------------------------------------------------------

interface FailureRateCardProps {
  summary: FailureRateSummary;
}

function FailureRateCard({ summary }: FailureRateCardProps) {
  const rate = summary.failure_rate_pct;
  const rateColor =
    rate === 0
      ? "text-green-600"
      : rate < 5
        ? "text-yellow-600"
        : rate < 20
          ? "text-orange-600"
          : "text-red-600";

  const ringColor =
    rate === 0
      ? "stroke-green-500"
      : rate < 5
        ? "stroke-yellow-500"
        : rate < 20
          ? "stroke-orange-500"
          : "stroke-red-500";

  // SVG ring gauge
  const radius = 40;
  const circumference = 2 * Math.PI * radius;
  const failureArc = (rate / 100) * circumference;
  const successArc = circumference - failureArc;

  return (
    <div className="flex items-center gap-6">
      {/* Ring gauge */}
      <div className="relative flex-shrink-0">
        <svg width="100" height="100" className="-rotate-90">
          {/* Background ring */}
          <circle
            cx="50"
            cy="50"
            r={radius}
            fill="none"
            strokeWidth="8"
            className="stroke-gray-200"
          />
          {/* Success arc */}
          <circle
            cx="50"
            cy="50"
            r={radius}
            fill="none"
            strokeWidth="8"
            className="stroke-green-400"
            strokeDasharray={`${successArc} ${circumference}`}
            strokeLinecap="round"
          />
          {/* Failure arc */}
          {rate > 0 && (
            <circle
              cx="50"
              cy="50"
              r={radius}
              fill="none"
              strokeWidth="8"
              className={ringColor}
              strokeDasharray={`${failureArc} ${circumference}`}
              strokeDashoffset={`${-successArc}`}
              strokeLinecap="round"
            />
          )}
        </svg>
        <div className="absolute inset-0 flex items-center justify-center">
          <span className={`text-lg font-bold ${rateColor}`}>
            {rate.toFixed(1)}%
          </span>
        </div>
      </div>

      {/* Breakdown */}
      <div className="space-y-1.5 text-sm">
        <div className="flex items-center gap-2">
          <CheckCircle className="h-4 w-4 text-green-500" />
          <span className="text-gray-600">Completed:</span>
          <span className="font-medium text-gray-900">
            {summary.completed_count}
          </span>
        </div>
        <div className="flex items-center gap-2">
          <AlertTriangle className="h-4 w-4 text-red-500" />
          <span className="text-gray-600">Failed:</span>
          <span className="font-medium text-gray-900">
            {summary.failed_count}
          </span>
        </div>
        <div className="flex items-center gap-2">
          <AlertTriangle className="h-4 w-4 text-orange-500" />
          <span className="text-gray-600">Timeout:</span>
          <span className="font-medium text-gray-900">
            {summary.timeout_count}
          </span>
        </div>
        <div className="text-xs text-gray-400 mt-1">
          {summary.total_terminal} total terminal executions
        </div>
      </div>
    </div>
  );
}

// ---------------------------------------------------------------------------
// StatCard — simple metric card with icon and value
// ---------------------------------------------------------------------------

interface StatCardProps {
  icon: React.ReactNode;
  label: string;
  value: number | string;
  subtext?: string;
  color?: string;
}

function StatCard({
  icon,
  label,
  value,
  subtext,
  color = "text-blue-600",
}: StatCardProps) {
  return (
    <div className="flex items-center gap-3">
      <div className={`${color} opacity-70`}>{icon}</div>
      <div>
        <p className="text-xs text-gray-500">{label}</p>
        <p className={`text-2xl font-bold ${color}`}>{value}</p>
        {subtext && <p className="text-[10px] text-gray-400">{subtext}</p>}
      </div>
    </div>
  );
}

// ---------------------------------------------------------------------------
// AnalyticsDashboard — main composite widget
// ---------------------------------------------------------------------------

interface AnalyticsDashboardProps {
  /** The analytics data (from useDashboardAnalytics hook) */
  data: DashboardAnalytics | undefined;
  /** Whether the data is loading */
  isLoading: boolean;
  /** Error object if the fetch failed */
  error: Error | null;
  /** Current time range in hours */
  hours: TimeRangeHours;
  /** Callback to change the time range */
  onHoursChange: (h: TimeRangeHours) => void;
}

export default function AnalyticsDashboard({
  data,
  isLoading,
  error,
  hours,
  onHoursChange,
}: AnalyticsDashboardProps) {
  const executionBuckets = useMemo(() => {
    if (!data?.execution_throughput) return [];
    const agg = aggregateByBucket(data.execution_throughput);
    return Array.from(agg.entries())
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([bucket, v]) => ({ bucket, value: v.total }));
  }, [data?.execution_throughput]);

  const eventBuckets = useMemo(() => {
    if (!data?.event_volume) return [];
    const agg = aggregateByBucket(data.event_volume);
    return Array.from(agg.entries())
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([bucket, v]) => ({ bucket, value: v.total }));
  }, [data?.event_volume]);

  const enforcementBuckets = useMemo(() => {
    if (!data?.enforcement_volume) return [];
    const agg = aggregateByBucket(data.enforcement_volume);
    return Array.from(agg.entries())
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([bucket, v]) => ({ bucket, value: v.total }));
  }, [data?.enforcement_volume]);

  const totalExecutions = useMemo(
    () => executionBuckets.reduce((s, b) => s + b.value, 0),
    [executionBuckets],
  );

  const totalEvents = useMemo(
    () => eventBuckets.reduce((s, b) => s + b.value, 0),
    [eventBuckets],
  );

  const totalEnforcements = useMemo(
    () => enforcementBuckets.reduce((s, b) => s + b.value, 0),
    [enforcementBuckets],
  );

  // Loading state
  if (isLoading && !data) {
    return (
      <div className="bg-white rounded-lg shadow p-6">
        <div className="flex items-center justify-between mb-4">
          <div className="flex items-center gap-2">
            <BarChart3 className="h-5 w-5 text-gray-500" />
            <h2 className="text-lg font-semibold text-gray-900">Analytics</h2>
          </div>
          <TimeRangeSelector value={hours} onChange={onHoursChange} />
|
||||
</div>
|
||||
<div className="flex items-center justify-center py-12">
|
||||
<div className="inline-block animate-spin rounded-full h-8 w-8 border-b-2 border-blue-600" />
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
// Error state
|
||||
if (error) {
|
||||
return (
|
||||
<div className="bg-white rounded-lg shadow p-6">
|
||||
<div className="flex items-center justify-between mb-4">
|
||||
<div className="flex items-center gap-2">
|
||||
<BarChart3 className="h-5 w-5 text-gray-500" />
|
||||
<h2 className="text-lg font-semibold text-gray-900">Analytics</h2>
|
||||
</div>
|
||||
<TimeRangeSelector value={hours} onChange={onHoursChange} />
|
||||
</div>
|
||||
<div className="bg-red-50 border border-red-200 text-red-700 rounded p-3 text-sm">
|
||||
Failed to load analytics data.{" "}
|
||||
{error.message && (
|
||||
<span className="text-red-500">{error.message}</span>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
if (!data) return null;
|
||||
|
||||
return (
|
||||
<div className="space-y-6">
|
||||
{/* Header */}
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="flex items-center gap-2">
|
||||
<BarChart3 className="h-5 w-5 text-gray-500" />
|
||||
<h2 className="text-lg font-semibold text-gray-900">Analytics</h2>
|
||||
{isLoading && (
|
||||
<div className="inline-block animate-spin rounded-full h-4 w-4 border-b-2 border-blue-400" />
|
||||
)}
|
||||
</div>
|
||||
<TimeRangeSelector value={hours} onChange={onHoursChange} />
|
||||
</div>
|
||||
|
||||
{/* Summary stat cards */}
|
||||
<div className="grid grid-cols-1 sm:grid-cols-3 gap-4">
|
||||
<div className="bg-white rounded-lg shadow p-4">
|
||||
<StatCard
|
||||
icon={<Activity className="h-5 w-5" />}
|
||||
label={`Executions (${hours}h)`}
|
||||
value={totalExecutions}
|
||||
color="text-blue-600"
|
||||
/>
|
||||
</div>
|
||||
<div className="bg-white rounded-lg shadow p-4">
|
||||
<StatCard
|
||||
icon={<Zap className="h-5 w-5" />}
|
||||
label={`Events (${hours}h)`}
|
||||
value={totalEvents}
|
||||
color="text-indigo-600"
|
||||
/>
|
||||
</div>
|
||||
<div className="bg-white rounded-lg shadow p-4">
|
||||
<StatCard
|
||||
icon={<CheckCircle className="h-5 w-5" />}
|
||||
label={`Enforcements (${hours}h)`}
|
||||
value={totalEnforcements}
|
||||
color="text-purple-600"
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Charts row 1: throughput + failure rate */}
|
||||
<div className="grid grid-cols-1 lg:grid-cols-3 gap-6">
|
||||
{/* Execution throughput */}
|
||||
<div className="bg-white rounded-lg shadow p-6 lg:col-span-2">
|
||||
<h3 className="text-sm font-semibold text-gray-700 mb-3 flex items-center gap-1.5">
|
||||
<Activity className="h-4 w-4 text-blue-500" />
|
||||
Execution Throughput
|
||||
</h3>
|
||||
<MiniBarChart
|
||||
buckets={executionBuckets}
|
||||
rangeHours={hours}
|
||||
barColor="bg-blue-500"
|
||||
height={140}
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Failure rate */}
|
||||
<div className="bg-white rounded-lg shadow p-6">
|
||||
<h3 className="text-sm font-semibold text-gray-700 mb-3 flex items-center gap-1.5">
|
||||
<AlertTriangle className="h-4 w-4 text-red-500" />
|
||||
Failure Rate
|
||||
</h3>
|
||||
<FailureRateCard summary={data.failure_rate} />
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Charts row 2: status breakdown + event volume */}
|
||||
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
||||
{/* Execution status breakdown */}
|
||||
<div className="bg-white rounded-lg shadow p-6">
|
||||
<h3 className="text-sm font-semibold text-gray-700 mb-3 flex items-center gap-1.5">
|
||||
<BarChart3 className="h-4 w-4 text-green-500" />
|
||||
Execution Status Over Time
|
||||
</h3>
|
||||
<StackedBarChart
|
||||
points={data.execution_status}
|
||||
rangeHours={hours}
|
||||
height={140}
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Event volume */}
|
||||
<div className="bg-white rounded-lg shadow p-6">
|
||||
<h3 className="text-sm font-semibold text-gray-700 mb-3 flex items-center gap-1.5">
|
||||
<Zap className="h-4 w-4 text-indigo-500" />
|
||||
Event Volume
|
||||
</h3>
|
||||
<MiniBarChart
|
||||
buckets={eventBuckets}
|
||||
rangeHours={hours}
|
||||
barColor="bg-indigo-500"
|
||||
height={140}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Charts row 3: enforcements + worker health */}
|
||||
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
||||
{/* Enforcement volume */}
|
||||
<div className="bg-white rounded-lg shadow p-6">
|
||||
<h3 className="text-sm font-semibold text-gray-700 mb-3 flex items-center gap-1.5">
|
||||
<CheckCircle className="h-4 w-4 text-purple-500" />
|
||||
Enforcement Volume
|
||||
</h3>
|
||||
<MiniBarChart
|
||||
buckets={enforcementBuckets}
|
||||
rangeHours={hours}
|
||||
barColor="bg-purple-500"
|
||||
height={120}
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Worker status */}
|
||||
<div className="bg-white rounded-lg shadow p-6">
|
||||
<h3 className="text-sm font-semibold text-gray-700 mb-3 flex items-center gap-1.5">
|
||||
<Server className="h-4 w-4 text-teal-500" />
|
||||
Worker Status Transitions
|
||||
</h3>
|
||||
<StackedBarChart
|
||||
points={data.worker_status}
|
||||
rangeHours={hours}
|
||||
height={120}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
// Re-export sub-components and types for standalone use
|
||||
export {
|
||||
MiniBarChart,
|
||||
StackedBarChart,
|
||||
FailureRateCard,
|
||||
StatCard,
|
||||
TimeRangeSelector,
|
||||
};
|
||||
export type { TimeRangeHours };
|
||||
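The three `useMemo` blocks above all apply the same reduction: group a time series by its ISO 8601 `bucket`, sum the values, and sort the buckets lexicographically (which is chronological for ISO 8601 strings). `aggregateByBucket` itself is defined earlier in the file and is not shown in this diff; the following is a minimal standalone sketch of the same reduction, with the names `Point` and `aggregate` as hypothetical stand-ins:

```typescript
// Sketch of the per-bucket aggregation pattern used by the dashboard memos.
// `Point` mirrors the TimeSeriesPoint shape; `aggregate` is a hypothetical
// stand-in for the file's own aggregateByBucket helper.
interface Point {
  bucket: string; // ISO 8601 start of the 1-hour bucket
  label: string | null;
  value: number;
}

function aggregate(points: Point[]): { bucket: string; value: number }[] {
  const totals = new Map<string, number>();
  for (const p of points) {
    totals.set(p.bucket, (totals.get(p.bucket) ?? 0) + p.value);
  }
  // ISO 8601 timestamps sort chronologically as plain strings.
  return Array.from(totals.entries())
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([bucket, value]) => ({ bucket, value }));
}
```

Sorting by string comparison is safe only because the server emits UTC ISO 8601 buckets; mixed timezone offsets would break the ordering.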
463
web/src/components/common/EntityHistoryPanel.tsx
Normal file
@@ -0,0 +1,463 @@
import { useState } from "react";
import { formatDistanceToNow } from "date-fns";
import {
  ChevronDown,
  ChevronRight,
  History,
  Filter,
  ChevronLeft,
  ChevronsLeft,
  ChevronsRight,
} from "lucide-react";
import {
  useEntityHistory,
  type HistoryEntityType,
  type HistoryRecord,
  type HistoryQueryParams,
} from "@/hooks/useHistory";

interface EntityHistoryPanelProps {
  /** The type of entity whose history to display */
  entityType: HistoryEntityType;
  /** The entity's primary key */
  entityId: number;
  /** Optional title override (default: "Change History") */
  title?: string;
  /** Whether the panel starts collapsed (default: true) */
  defaultCollapsed?: boolean;
  /** Number of items per page (default: 10) */
  pageSize?: number;
}

/**
 * A reusable panel that displays the change history for an entity.
 *
 * Queries the TimescaleDB history hypertables via the API and renders
 * a timeline of changes with expandable details showing old/new values.
 */
export default function EntityHistoryPanel({
  entityType,
  entityId,
  title = "Change History",
  defaultCollapsed = true,
  pageSize = 10,
}: EntityHistoryPanelProps) {
  const [isCollapsed, setIsCollapsed] = useState(defaultCollapsed);
  const [page, setPage] = useState(1);
  const [operationFilter, setOperationFilter] = useState<string>("");
  const [fieldFilter, setFieldFilter] = useState<string>("");
  const [showFilters, setShowFilters] = useState(false);

  const params: HistoryQueryParams = {
    page,
    page_size: pageSize,
    ...(operationFilter ? { operation: operationFilter } : {}),
    ...(fieldFilter ? { changed_field: fieldFilter } : {}),
  };

  const { data, isLoading, error } = useEntityHistory(
    entityType,
    entityId,
    params,
    !isCollapsed && !!entityId,
  );

  const records = data?.data ?? [];
  const pagination = data?.pagination;
  const totalPages = pagination?.total_pages ?? 1;
  const totalItems = pagination?.total_items ?? 0;

  const handleClearFilters = () => {
    setOperationFilter("");
    setFieldFilter("");
    setPage(1);
  };

  const hasActiveFilters = !!operationFilter || !!fieldFilter;

  return (
    <div className="bg-white rounded-lg shadow">
      {/* Header — always visible */}
      <button
        onClick={() => setIsCollapsed(!isCollapsed)}
        className="w-full px-6 py-4 flex items-center justify-between border-b border-gray-200 hover:bg-gray-50 transition-colors"
      >
        <div className="flex items-center gap-2">
          <History className="h-5 w-5 text-gray-500" />
          <h2 className="text-lg font-semibold text-gray-900">{title}</h2>
          {totalItems > 0 && !isCollapsed && (
            <span className="ml-2 px-2 py-0.5 text-xs font-medium bg-gray-100 text-gray-600 rounded-full">
              {totalItems}
            </span>
          )}
        </div>
        {isCollapsed ? (
          <ChevronRight className="h-5 w-5 text-gray-400" />
        ) : (
          <ChevronDown className="h-5 w-5 text-gray-400" />
        )}
      </button>

      {/* Body — only when expanded */}
      {!isCollapsed && (
        <div className="px-6 py-4">
          {/* Filter bar */}
          <div className="mb-4">
            <button
              onClick={() => setShowFilters(!showFilters)}
              className="flex items-center gap-1 text-sm text-gray-500 hover:text-gray-700"
            >
              <Filter className="h-3.5 w-3.5" />
              <span>Filters</span>
              {hasActiveFilters && (
                <span className="ml-1 h-2 w-2 rounded-full bg-blue-500" />
              )}
            </button>

            {showFilters && (
              <div className="mt-2 flex flex-wrap gap-3 items-end">
                <div>
                  <label className="block text-xs text-gray-500 mb-1">
                    Operation
                  </label>
                  <select
                    value={operationFilter}
                    onChange={(e) => {
                      setOperationFilter(e.target.value);
                      setPage(1);
                    }}
                    className="text-sm border border-gray-300 rounded px-2 py-1.5 bg-white"
                  >
                    <option value="">All</option>
                    <option value="INSERT">INSERT</option>
                    <option value="UPDATE">UPDATE</option>
                    <option value="DELETE">DELETE</option>
                  </select>
                </div>
                <div>
                  <label className="block text-xs text-gray-500 mb-1">
                    Changed Field
                  </label>
                  <input
                    type="text"
                    value={fieldFilter}
                    onChange={(e) => {
                      setFieldFilter(e.target.value);
                      setPage(1);
                    }}
                    placeholder="e.g. status"
                    className="text-sm border border-gray-300 rounded px-2 py-1.5 w-36"
                  />
                </div>
                {hasActiveFilters && (
                  <button
                    onClick={handleClearFilters}
                    className="text-xs text-blue-600 hover:text-blue-800 pb-1"
                  >
                    Clear filters
                  </button>
                )}
              </div>
            )}
          </div>

          {/* Loading state */}
          {isLoading && (
            <div className="flex items-center justify-center py-8">
              <div className="inline-block animate-spin rounded-full h-6 w-6 border-b-2 border-blue-600" />
            </div>
          )}

          {/* Error state */}
          {error && (
            <div className="bg-red-50 border border-red-200 text-red-700 rounded p-3 text-sm">
              Failed to load history:{" "}
              {error instanceof Error ? error.message : "Unknown error"}
            </div>
          )}

          {/* Empty state */}
          {!isLoading && !error && records.length === 0 && (
            <p className="text-sm text-gray-500 py-4 text-center">
              {hasActiveFilters
                ? "No history records match the current filters."
                : "No change history recorded yet."}
            </p>
          )}

          {/* Records list */}
          {!isLoading && !error && records.length > 0 && (
            <div className="space-y-1">
              {records.map((record, idx) => (
                <HistoryRecordRow key={`${record.time}-${idx}`} record={record} />
              ))}
            </div>
          )}

          {/* Pagination */}
          {!isLoading && totalPages > 1 && (
            <div className="mt-4 flex items-center justify-between text-sm">
              <span className="text-gray-500">
                Page {page} of {totalPages} ({totalItems} records)
              </span>
              <div className="flex items-center gap-1">
                <PaginationButton
                  onClick={() => setPage(1)}
                  disabled={page <= 1}
                  title="First page"
                >
                  <ChevronsLeft className="h-4 w-4" />
                </PaginationButton>
                <PaginationButton
                  onClick={() => setPage(page - 1)}
                  disabled={page <= 1}
                  title="Previous page"
                >
                  <ChevronLeft className="h-4 w-4" />
                </PaginationButton>
                <PaginationButton
                  onClick={() => setPage(page + 1)}
                  disabled={page >= totalPages}
                  title="Next page"
                >
                  <ChevronRight className="h-4 w-4" />
                </PaginationButton>
                <PaginationButton
                  onClick={() => setPage(totalPages)}
                  disabled={page >= totalPages}
                  title="Last page"
                >
                  <ChevronsRight className="h-4 w-4" />
                </PaginationButton>
              </div>
            </div>
          )}
        </div>
      )}
    </div>
  );
}

// ---------------------------------------------------------------------------
// Sub-components
// ---------------------------------------------------------------------------

function PaginationButton({
  onClick,
  disabled,
  title,
  children,
}: {
  onClick: () => void;
  disabled: boolean;
  title: string;
  children: React.ReactNode;
}) {
  return (
    <button
      onClick={onClick}
      disabled={disabled}
      title={title}
      className="p-1 rounded hover:bg-gray-100 disabled:opacity-30 disabled:cursor-not-allowed"
    >
      {children}
    </button>
  );
}

/**
 * A single history record displayed as a collapsible row.
 */
function HistoryRecordRow({ record }: { record: HistoryRecord }) {
  const [expanded, setExpanded] = useState(false);

  const time = new Date(record.time);
  const relativeTime = formatDistanceToNow(time, { addSuffix: true });

  return (
    <div className="border border-gray-100 rounded">
      <button
        onClick={() => setExpanded(!expanded)}
        className="w-full flex items-center gap-3 px-3 py-2 text-left hover:bg-gray-50 transition-colors text-sm"
      >
        {/* Expand/collapse indicator */}
        {expanded ? (
          <ChevronDown className="h-3.5 w-3.5 text-gray-400 flex-shrink-0" />
        ) : (
          <ChevronRight className="h-3.5 w-3.5 text-gray-400 flex-shrink-0" />
        )}

        {/* Operation badge */}
        <OperationBadge operation={record.operation} />

        {/* Changed fields summary */}
        <span className="text-gray-700 truncate flex-1">
          {record.operation === "INSERT" && "Entity created"}
          {record.operation === "DELETE" && "Entity deleted"}
          {record.operation === "UPDATE" && record.changed_fields.length > 0 && (
            <>
              Changed{" "}
              <span className="font-medium">
                {record.changed_fields.join(", ")}
              </span>
            </>
          )}
          {record.operation === "UPDATE" &&
            record.changed_fields.length === 0 &&
            "Updated"}
        </span>

        {/* Timestamp */}
        <span
          className="text-xs text-gray-400 flex-shrink-0"
          title={time.toISOString()}
        >
          {relativeTime}
        </span>
      </button>

      {/* Expanded detail */}
      {expanded && (
        <div className="px-3 pb-3 pt-1 border-t border-gray-100">
          {/* Timestamp detail */}
          <p className="text-xs text-gray-400 mb-2">
            {time.toLocaleString()} (UTC: {time.toISOString()})
          </p>

          {/* Field-level diffs */}
          {record.operation === "UPDATE" && record.changed_fields.length > 0 && (
            <div className="space-y-2">
              {record.changed_fields.map((field) => (
                <FieldDiff
                  key={field}
                  field={field}
                  oldValue={record.old_values?.[field]}
                  newValue={record.new_values?.[field]}
                />
              ))}
            </div>
          )}

          {/* INSERT — show new_values */}
          {record.operation === "INSERT" && record.new_values && (
            <div>
              <p className="text-xs font-medium text-gray-500 mb-1">
                Initial values
              </p>
              <JsonBlock value={record.new_values} />
            </div>
          )}

          {/* DELETE — show old_values if available */}
          {record.operation === "DELETE" && record.old_values && (
            <div>
              <p className="text-xs font-medium text-gray-500 mb-1">
                Values at deletion
              </p>
              <JsonBlock value={record.old_values} />
            </div>
          )}

          {/* Fallback when there's nothing to show */}
          {!record.old_values && !record.new_values && (
            <p className="text-xs text-gray-400 italic">
              No field-level details recorded.
            </p>
          )}
        </div>
      )}
    </div>
  );
}

/**
 * Colored badge for the operation type.
 */
function OperationBadge({ operation }: { operation: string }) {
  const colors: Record<string, string> = {
    INSERT: "bg-green-100 text-green-700",
    UPDATE: "bg-blue-100 text-blue-700",
    DELETE: "bg-red-100 text-red-700",
  };

  return (
    <span
      className={`px-1.5 py-0.5 text-[10px] font-semibold rounded flex-shrink-0 ${colors[operation] ?? "bg-gray-100 text-gray-700"}`}
    >
      {operation}
    </span>
  );
}

/**
 * Renders a single field's old → new diff.
 */
function FieldDiff({
  field,
  oldValue,
  newValue,
}: {
  field: string;
  oldValue: unknown;
  newValue: unknown;
}) {
  const isSimple =
    typeof oldValue !== "object" && typeof newValue !== "object";

  return (
    <div className="text-xs">
      <p className="font-medium text-gray-600 mb-0.5">{field}</p>
      {isSimple ? (
        <div className="flex items-center gap-2 flex-wrap">
          <span className="bg-red-50 text-red-700 px-1.5 py-0.5 rounded line-through">
            {formatValue(oldValue)}
          </span>
          <span className="text-gray-400">→</span>
          <span className="bg-green-50 text-green-700 px-1.5 py-0.5 rounded">
            {formatValue(newValue)}
          </span>
        </div>
      ) : (
        <div className="grid grid-cols-2 gap-2">
          <div>
            <p className="text-[10px] text-gray-400 mb-0.5">Before</p>
            <JsonBlock value={oldValue} />
          </div>
          <div>
            <p className="text-[10px] text-gray-400 mb-0.5">After</p>
            <JsonBlock value={newValue} />
          </div>
        </div>
      )}
    </div>
  );
}

/**
 * Format a scalar value for display.
 */
function formatValue(value: unknown): string {
  if (value === null || value === undefined) return "null";
  if (typeof value === "string") return value;
  return JSON.stringify(value);
}

/**
 * Renders a JSONB value in a code block.
 */
function JsonBlock({ value }: { value: unknown }) {
  if (value === null || value === undefined) {
    return <span className="text-gray-400 text-xs italic">null</span>;
  }

  const formatted =
    typeof value === "object"
      ? JSON.stringify(value, null, 2)
      : String(value);

  return (
    <pre className="bg-gray-50 rounded p-2 text-[11px] text-gray-700 overflow-x-auto max-h-48 whitespace-pre-wrap break-all">
      {formatted}
    </pre>
  );
}
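`FieldDiff` above switches between an inline `old → new` view for scalars and a side-by-side JSON view for objects, and `formatValue` normalizes scalars for display. The decision logic can be sketched standalone (the names `isScalarDiff` and `fmt` are hypothetical; the bodies mirror the component's checks):

```typescript
// Sketch of FieldDiff's display decision and scalar formatting.
// `isScalarDiff` and `fmt` are hypothetical standalone names.
function isScalarDiff(oldValue: unknown, newValue: unknown): boolean {
  // Note: typeof null === "object" in JavaScript, so a null on either
  // side falls through to the side-by-side JSON view, as in the component.
  return typeof oldValue !== "object" && typeof newValue !== "object";
}

function fmt(value: unknown): string {
  if (value === null || value === undefined) return "null";
  if (typeof value === "string") return value;
  return JSON.stringify(value);
}
```

The `typeof null === "object"` quirk means a field that transitions from `null` to a scalar is shown as a before/after JSON pair rather than inline, which keeps the null rendering consistent with `JsonBlock`.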
217
web/src/hooks/useAnalytics.ts
Normal file
@@ -0,0 +1,217 @@
import { useQuery, keepPreviousData } from "@tanstack/react-query";
import { apiClient } from "@/lib/api-client";

// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------

/**
 * A single data point in an hourly time series.
 */
export interface TimeSeriesPoint {
  /** Start of the 1-hour bucket (ISO 8601) */
  bucket: string;
  /** Series label (e.g., status name, action ref). Null for aggregate totals. */
  label: string | null;
  /** The count value for this bucket */
  value: number;
}

/**
 * Failure rate summary over a time range.
 */
export interface FailureRateSummary {
  since: string;
  until: string;
  total_terminal: number;
  failed_count: number;
  timeout_count: number;
  completed_count: number;
  failure_rate_pct: number;
}

/**
 * Combined dashboard analytics payload returned by GET /api/v1/analytics/dashboard.
 */
export interface DashboardAnalytics {
  since: string;
  until: string;
  execution_throughput: TimeSeriesPoint[];
  execution_status: TimeSeriesPoint[];
  event_volume: TimeSeriesPoint[];
  enforcement_volume: TimeSeriesPoint[];
  worker_status: TimeSeriesPoint[];
  failure_rate: FailureRateSummary;
}

/**
 * A generic time-series response (used by the individual endpoints).
 */
export interface TimeSeriesResponse {
  since: string;
  until: string;
  data: TimeSeriesPoint[];
}

/**
 * Query parameters for analytics requests.
 */
export interface AnalyticsQueryParams {
  /** Start of time range (ISO 8601). Defaults to 24 hours ago on the server. */
  since?: string;
  /** End of time range (ISO 8601). Defaults to now on the server. */
  until?: string;
  /** Number of hours to look back from now (alternative to since/until). */
  hours?: number;
}

// ---------------------------------------------------------------------------
// Fetch helpers
// ---------------------------------------------------------------------------

async function fetchDashboardAnalytics(
  params: AnalyticsQueryParams,
): Promise<DashboardAnalytics> {
  const queryParams: Record<string, string | number> = {};
  if (params.since) queryParams.since = params.since;
  if (params.until) queryParams.until = params.until;
  if (params.hours) queryParams.hours = params.hours;

  const response = await apiClient.get<{ data: DashboardAnalytics }>(
    "/api/v1/analytics/dashboard",
    { params: queryParams },
  );

  return response.data.data;
}

async function fetchTimeSeries(
  path: string,
  params: AnalyticsQueryParams,
): Promise<TimeSeriesResponse> {
  const queryParams: Record<string, string | number> = {};
  if (params.since) queryParams.since = params.since;
  if (params.until) queryParams.until = params.until;
  if (params.hours) queryParams.hours = params.hours;

  const response = await apiClient.get<{ data: TimeSeriesResponse }>(
    `/api/v1/analytics/${path}`,
    { params: queryParams },
  );

  return response.data.data;
}

async function fetchFailureRate(
  params: AnalyticsQueryParams,
): Promise<FailureRateSummary> {
  const queryParams: Record<string, string | number> = {};
  if (params.since) queryParams.since = params.since;
  if (params.until) queryParams.until = params.until;
  if (params.hours) queryParams.hours = params.hours;

  const response = await apiClient.get<{ data: FailureRateSummary }>(
    "/api/v1/analytics/executions/failure-rate",
    { params: queryParams },
  );

  return response.data.data;
}

// ---------------------------------------------------------------------------
// Hooks
// ---------------------------------------------------------------------------

/**
 * Fetch the combined dashboard analytics payload.
 *
 * This is the recommended hook for the dashboard page — it returns all
 * key metrics in a single API call to avoid multiple round-trips.
 */
export function useDashboardAnalytics(params: AnalyticsQueryParams = {}) {
  return useQuery({
    queryKey: ["analytics", "dashboard", params],
    queryFn: () => fetchDashboardAnalytics(params),
    staleTime: 60000, // 1 minute — aggregates don't change frequently
    refetchInterval: 120000, // auto-refresh every 2 minutes
    placeholderData: keepPreviousData,
  });
}

/**
 * Fetch execution status transitions over time.
 */
export function useExecutionStatusAnalytics(
  params: AnalyticsQueryParams = {},
) {
  return useQuery({
    queryKey: ["analytics", "executions", "status", params],
    queryFn: () => fetchTimeSeries("executions/status", params),
    staleTime: 60000,
    placeholderData: keepPreviousData,
  });
}

/**
 * Fetch execution throughput over time.
 */
export function useExecutionThroughputAnalytics(
  params: AnalyticsQueryParams = {},
) {
  return useQuery({
    queryKey: ["analytics", "executions", "throughput", params],
    queryFn: () => fetchTimeSeries("executions/throughput", params),
    staleTime: 60000,
    placeholderData: keepPreviousData,
  });
}

/**
 * Fetch execution failure rate summary.
 */
export function useFailureRateAnalytics(params: AnalyticsQueryParams = {}) {
  return useQuery({
    queryKey: ["analytics", "executions", "failure-rate", params],
    queryFn: () => fetchFailureRate(params),
    staleTime: 60000,
    placeholderData: keepPreviousData,
  });
}

/**
 * Fetch event volume over time.
 */
export function useEventVolumeAnalytics(params: AnalyticsQueryParams = {}) {
  return useQuery({
    queryKey: ["analytics", "events", "volume", params],
    queryFn: () => fetchTimeSeries("events/volume", params),
    staleTime: 60000,
    placeholderData: keepPreviousData,
  });
}

/**
 * Fetch worker status transitions over time.
 */
export function useWorkerStatusAnalytics(params: AnalyticsQueryParams = {}) {
  return useQuery({
    queryKey: ["analytics", "workers", "status", params],
    queryFn: () => fetchTimeSeries("workers/status", params),
    staleTime: 60000,
    placeholderData: keepPreviousData,
  });
}

/**
 * Fetch enforcement volume over time.
 */
export function useEnforcementVolumeAnalytics(
  params: AnalyticsQueryParams = {},
) {
  return useQuery({
    queryKey: ["analytics", "enforcements", "volume", params],
    queryFn: () => fetchTimeSeries("enforcements/volume", params),
    staleTime: 60000,
    placeholderData: keepPreviousData,
  });
}
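The three fetch helpers above each repeat the same since/until/hours marshalling into a query-param record, skipping unset fields. That pattern could be factored into a single helper; a sketch (`buildParams` is hypothetical, not part of the file):

```typescript
// Hypothetical helper factoring out the query-param marshalling repeated
// by fetchDashboardAnalytics, fetchTimeSeries, and fetchFailureRate.
interface AnalyticsParams {
  since?: string;
  until?: string;
  hours?: number;
}

function buildParams(params: AnalyticsParams): Record<string, string | number> {
  const out: Record<string, string | number> = {};
  // Only include keys that were actually provided, so server-side
  // defaults (last 24h, until=now) apply when they are absent.
  if (params.since) out.since = params.since;
  if (params.until) out.until = params.until;
  if (params.hours) out.hours = params.hours;
  return out;
}
```

One caveat of the truthiness checks (here and in the original helpers): `hours: 0` would be dropped, which is harmless given the server default but worth knowing.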
165
web/src/hooks/useHistory.ts
Normal file
@@ -0,0 +1,165 @@
import { useQuery, keepPreviousData } from "@tanstack/react-query";
import { apiClient } from "@/lib/api-client";

/**
 * Supported entity types for history queries.
 * Maps to the TimescaleDB history hypertables.
 */
export type HistoryEntityType =
  | "execution"
  | "worker"
  | "enforcement"
  | "event";

/**
 * A single history record from the API.
 */
export interface HistoryRecord {
  /** When the change occurred */
  time: string;
  /** The operation: INSERT, UPDATE, or DELETE */
  operation: string;
  /** The primary key of the changed entity */
  entity_id: number;
  /** Denormalized human-readable identifier (e.g., action_ref, worker name) */
  entity_ref: string | null;
  /** Names of fields that changed */
  changed_fields: string[];
  /** Previous values of changed fields (null for INSERT) */
  old_values: Record<string, unknown> | null;
  /** New values of changed fields (null for DELETE) */
  new_values: Record<string, unknown> | null;
}

/**
 * Paginated history response from the API.
 */
export interface PaginatedHistoryResponse {
  data: HistoryRecord[];
  pagination: {
    page: number;
    page_size: number;
    total_items: number;
    total_pages: number;
  };
}

/**
 * Query parameters for history requests.
 */
export interface HistoryQueryParams {
  /** Filter by operation type */
  operation?: string;
  /** Only include records where this field was changed */
  changed_field?: string;
  /** Only include records at or after this time (ISO 8601) */
  since?: string;
  /** Only include records at or before this time (ISO 8601) */
  until?: string;
  /** Page number (1-based) */
  page?: number;
  /** Number of items per page */
  page_size?: number;
}

/**
 * Fetch history for a specific entity by its type and ID.
 *
 * Uses the entity-specific endpoints:
 * - GET /api/v1/executions/:id/history
 * - GET /api/v1/workers/:id/history
 * - GET /api/v1/enforcements/:id/history
 * - GET /api/v1/events/:id/history
 */
async function fetchEntityHistory(
  entityType: HistoryEntityType,
  entityId: number,
  params: HistoryQueryParams,
): Promise<PaginatedHistoryResponse> {
  const pluralMap: Record<HistoryEntityType, string> = {
    execution: "executions",
    worker: "workers",
    enforcement: "enforcements",
    event: "events",
  };

  const queryParams: Record<string, string | number> = {};
  if (params.operation) queryParams.operation = params.operation;
  if (params.changed_field) queryParams.changed_field = params.changed_field;
  if (params.since) queryParams.since = params.since;
  if (params.until) queryParams.until = params.until;
  if (params.page) queryParams.page = params.page;
  if (params.page_size) queryParams.page_size = params.page_size;

  const response = await apiClient.get<PaginatedHistoryResponse>(
    `/api/v1/${pluralMap[entityType]}/${entityId}/history`,
    { params: queryParams },
  );

  return response.data;
}

/**
 * React Query hook for fetching entity history.
 *
 * @param entityType - The type of entity (execution, worker, enforcement, event)
 * @param entityId - The entity's primary key
 * @param params - Optional query parameters for filtering and pagination
 * @param enabled - Whether the query should execute (default: true when entityId is truthy)
 */
export function useEntityHistory(
  entityType: HistoryEntityType,
  entityId: number,
  params: HistoryQueryParams = {},
  enabled?: boolean,
) {
  const isEnabled = enabled ?? !!entityId;

  return useQuery({
    queryKey: ["history", entityType, entityId, params],
    queryFn: () => fetchEntityHistory(entityType, entityId, params),
    enabled: isEnabled,
    staleTime: 30000,
    placeholderData: keepPreviousData,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience hook for execution history.
|
||||
*/
|
||||
export function useExecutionHistory(
|
||||
executionId: number,
|
||||
params: HistoryQueryParams = {},
|
||||
) {
|
||||
return useEntityHistory("execution", executionId, params);
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience hook for worker history.
|
||||
*/
|
||||
export function useWorkerHistory(
|
||||
workerId: number,
|
||||
params: HistoryQueryParams = {},
|
||||
) {
|
||||
return useEntityHistory("worker", workerId, params);
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience hook for enforcement history.
|
||||
*/
|
||||
export function useEnforcementHistory(
|
||||
enforcementId: number,
|
||||
params: HistoryQueryParams = {},
|
||||
) {
|
||||
return useEntityHistory("enforcement", enforcementId, params);
|
||||
}
|
||||
|
||||
/**
|
||||
* Convenience hook for event history.
|
||||
*/
|
||||
export function useEventHistory(
|
||||
eventId: number,
|
||||
params: HistoryQueryParams = {},
|
||||
) {
|
||||
return useEntityHistory("event", eventId, params);
|
||||
}
|
||||
@@ -4,9 +4,12 @@ import { useActions } from "@/hooks/useActions";
import { useRules } from "@/hooks/useRules";
import { useExecutions } from "@/hooks/useExecutions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import { useDashboardAnalytics } from "@/hooks/useAnalytics";
import { Link } from "react-router-dom";
import { ExecutionStatus } from "@/api";
import { useMemo } from "react";
import { useMemo, useState } from "react";
import AnalyticsDashboard from "@/components/common/AnalyticsWidgets";
import type { TimeRangeHours } from "@/components/common/AnalyticsWidgets";

export default function DashboardPage() {
  const { user } = useAuth();
@@ -39,6 +42,14 @@ export default function DashboardPage() {
  // The hook automatically invalidates queries when updates arrive
  const { isConnected } = useExecutionStream();

  // Analytics time range state and data
  const [analyticsHours, setAnalyticsHours] = useState<TimeRangeHours>(24);
  const {
    data: analyticsData,
    isLoading: analyticsLoading,
    error: analyticsError,
  } = useDashboardAnalytics({ hours: analyticsHours });

  // Calculate metrics
  const totalPacks = packsData?.pagination?.total_items || 0;
  const totalActions = actionsData?.pagination?.total_items || 0;
@@ -311,6 +322,17 @@ export default function DashboardPage() {
        )}
      </div>
    </div>

      {/* Analytics Section */}
      <div className="mt-8">
        <AnalyticsDashboard
          data={analyticsData}
          isLoading={analyticsLoading}
          error={analyticsError as Error | null}
          hours={analyticsHours}
          onHoursChange={setAnalyticsHours}
        />
      </div>
    </div>
  );
}
@@ -1,6 +1,7 @@
import { useParams, Link } from "react-router-dom";
import { useEnforcement } from "@/hooks/useEvents";
import { EnforcementStatus, EnforcementCondition } from "@/api";
import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";

export default function EnforcementDetailPage() {
  const { id } = useParams<{ id: string }>();
@@ -376,6 +377,15 @@ export default function EnforcementDetailPage() {
          </div>
        </div>
      </div>

      {/* Change History */}
      <div className="mt-6">
        <EntityHistoryPanel
          entityType="enforcement"
          entityId={enforcement.id}
          title="Enforcement History"
        />
      </div>
    </div>
  );
}
@@ -1,5 +1,6 @@
import { useParams, Link } from "react-router-dom";
import { useEvent } from "@/hooks/useEvents";
import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";

export default function EventDetailPage() {
  const { id } = useParams<{ id: string }>();
@@ -258,6 +259,15 @@ export default function EventDetailPage() {
          </div>
        </div>
      </div>

      {/* Change History */}
      <div className="mt-6">
        <EntityHistoryPanel
          entityType="event"
          entityId={event.id}
          title="Event History"
        />
      </div>
    </div>
  );
}
@@ -1,35 +1,113 @@
import { useParams, Link } from "react-router-dom";

/** Format a duration in ms to a human-readable string. */
function formatDuration(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  const secs = ms / 1000;
  if (secs < 60) return `${secs.toFixed(1)}s`;
  const mins = Math.floor(secs / 60);
  const remainSecs = Math.round(secs % 60);
  if (mins < 60) return `${mins}m ${remainSecs}s`;
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return `${hrs}h ${remainMins}m`;
}
import { useExecution } from "@/hooks/useExecutions";
import { useAction } from "@/hooks/useActions";
import { useExecutionStream } from "@/hooks/useExecutionStream";
import { useExecutionHistory } from "@/hooks/useHistory";
import { formatDistanceToNow } from "date-fns";
import { ExecutionStatus } from "@/api";
import { useState } from "react";
import { RotateCcw } from "lucide-react";
import { useState, useMemo } from "react";
import { RotateCcw, Loader2 } from "lucide-react";
import ExecuteActionModal from "@/components/common/ExecuteActionModal";
import EntityHistoryPanel from "@/components/common/EntityHistoryPanel";

const getStatusColor = (status: string) => {
  switch (status) {
    case "succeeded":
    case "completed":
      return "bg-green-100 text-green-800";
    case "failed":
      return "bg-red-100 text-red-800";
    case "running":
      return "bg-blue-100 text-blue-800";
    case "pending":
    case "requested":
    case "scheduling":
    case "scheduled":
      return "bg-yellow-100 text-yellow-800";
    case "timeout":
      return "bg-orange-100 text-orange-800";
    case "canceled":
    case "canceling":
    case "cancelled":
      return "bg-gray-100 text-gray-800";
    case "paused":
      return "bg-purple-100 text-purple-800";
    case "abandoned":
      return "bg-red-100 text-red-600";
    default:
      return "bg-gray-100 text-gray-800";
  }
};

/** Map status to a dot color for the timeline. */
const getTimelineDotColor = (status: string) => {
  switch (status) {
    case "completed":
      return "bg-green-500";
    case "failed":
      return "bg-red-500";
    case "running":
      return "bg-blue-500";
    case "requested":
    case "scheduling":
    case "scheduled":
      return "bg-yellow-500";
    case "timeout":
      return "bg-orange-500";
    case "canceling":
    case "cancelled":
      return "bg-gray-400";
    case "abandoned":
      return "bg-red-400";
    default:
      return "bg-gray-400";
  }
};

/** Human-readable label for a status value. */
const getStatusLabel = (status: string) => {
  switch (status) {
    case "requested":
      return "Requested";
    case "scheduling":
      return "Scheduling";
    case "scheduled":
      return "Scheduled";
    case "running":
      return "Running";
    case "completed":
      return "Completed";
    case "failed":
      return "Failed";
    case "canceling":
      return "Canceling";
    case "cancelled":
      return "Cancelled";
    case "timeout":
      return "Timed Out";
    case "abandoned":
      return "Abandoned";
    default:
      return status.charAt(0).toUpperCase() + status.slice(1);
  }
};

interface TimelineEntry {
  status: string;
  time: string;
  isInitial: boolean;
}

export default function ExecutionDetailPage() {
  const { id } = useParams<{ id: string }>();
  const { data: executionData, isLoading, error } = useExecution(Number(id));
@@ -40,6 +118,42 @@ export default function ExecutionDetailPage() {

  const [showRerunModal, setShowRerunModal] = useState(false);

  // Fetch status history for the timeline
  const { data: historyData, isLoading: historyLoading } = useExecutionHistory(
    Number(id),
    { page_size: 100 },
  );

  // Build timeline entries from history records
  const timelineEntries = useMemo<TimelineEntry[]>(() => {
    const records = historyData?.data ?? [];
    const entries: TimelineEntry[] = [];

    for (const record of records) {
      if (record.operation === "INSERT" && record.new_values?.status) {
        entries.push({
          status: String(record.new_values.status),
          time: record.time,
          isInitial: true,
        });
      } else if (
        record.operation === "UPDATE" &&
        record.changed_fields.includes("status") &&
        record.new_values?.status
      ) {
        entries.push({
          status: String(record.new_values.status),
          time: record.time,
          isInitial: false,
        });
      }
    }

    // History comes newest-first; reverse to chronological order
    entries.reverse();
    return entries;
  }, [historyData]);

  // Subscribe to real-time updates for this execution
  const { isConnected } = useExecutionStream({
    executionId: Number(id),
@@ -242,59 +356,99 @@
        {/* Timeline */}
        <div className="bg-white shadow rounded-lg p-6">
          <h2 className="text-xl font-semibold mb-4">Timeline</h2>

          {historyLoading && (
            <div className="flex items-center justify-center py-6">
              <Loader2 className="h-5 w-5 animate-spin text-gray-400" />
              <span className="ml-2 text-sm text-gray-500">
                Loading timeline…
              </span>
            </div>
          )}

          {!historyLoading && timelineEntries.length === 0 && (
            /* Fallback: no history data yet — show basic created/current status */
          <div className="space-y-4">
            <div className="flex gap-4">
              <div className="flex flex-col items-center">
                <div className="w-3 h-3 rounded-full bg-blue-500" />
                {!isRunning && <div className="w-0.5 h-full bg-gray-300" />}
                <div
                  className={`w-3 h-3 rounded-full ${getTimelineDotColor(execution.status)}`}
                />
              </div>
              <div className="flex-1 pb-4">
                <p className="font-medium">Execution Created</p>
              <div className="flex-1">
                <p className="font-medium">
                  {getStatusLabel(execution.status)}
                </p>
                <p className="text-sm text-gray-500">
                  {new Date(execution.created).toLocaleString()}
                </p>
              </div>
            </div>

            {execution.status === ExecutionStatus.COMPLETED && (
              <div className="flex gap-4">
                <div className="flex flex-col items-center">
                  <div className="w-3 h-3 rounded-full bg-green-500" />
                </div>
                <div className="flex-1">
                  <p className="font-medium">Execution Completed</p>
                  <p className="text-sm text-gray-500">
                    {new Date(execution.updated).toLocaleString()}
                  </p>
                </div>
              </div>
            )}

            {execution.status === ExecutionStatus.FAILED && (
              <div className="flex gap-4">
          {!historyLoading && timelineEntries.length > 0 && (
            <div className="space-y-0">
              {timelineEntries.map((entry, idx) => {
                const isLast = idx === timelineEntries.length - 1;
                const time = new Date(entry.time);
                const prevTime =
                  idx > 0 ? new Date(timelineEntries[idx - 1].time) : null;
                const durationMs = prevTime
                  ? time.getTime() - prevTime.getTime()
                  : null;

                return (
                  <div key={`${entry.status}-${idx}`} className="flex gap-4">
                    <div className="flex flex-col items-center">
                      <div className="w-3 h-3 rounded-full bg-red-500" />
                    </div>
                    <div className="flex-1">
                      <p className="font-medium">Execution Failed</p>
                      <p className="text-sm text-gray-500">
                        {new Date(execution.updated).toLocaleString()}
                      </p>
                    </div>
                  </div>
                      <div
                        className={`w-3 h-3 rounded-full flex-shrink-0 ${getTimelineDotColor(entry.status)}${
                          isLast && isRunning ? " animate-pulse" : ""
                        }`}
                      />
                      {!isLast && (
                        <div className="w-0.5 flex-1 min-h-[24px] bg-gray-200" />
                      )}
                    </div>
                    <div className={`flex-1 ${!isLast ? "pb-4" : ""}`}>
                      <div className="flex items-center gap-2">
                        <p className="font-medium">
                          {getStatusLabel(entry.status)}
                        </p>
                        <span
                          className={`px-1.5 py-0.5 text-[10px] font-medium rounded ${getStatusColor(entry.status)}`}
                        >
                          {entry.status}
                        </span>
                      </div>
                      <p className="text-sm text-gray-500">
                        {time.toLocaleString()}
                        <span className="text-gray-400 ml-2 text-xs">
                          ({formatDistanceToNow(time, { addSuffix: true })})
                        </span>
                      </p>
                      {durationMs !== null && durationMs > 0 && (
                        <p className="text-xs text-gray-400 mt-0.5">
                          +{formatDuration(durationMs)} since previous
                        </p>
                      )}
                    </div>
                  </div>
                );
              })}

              {isRunning && (
                <div className="flex gap-4">
                <div className="flex gap-4 pt-4">
                  <div className="flex flex-col items-center">
                    <div className="w-3 h-3 rounded-full bg-blue-500 animate-pulse" />
                  </div>
                  <div className="flex-1">
                    <p className="font-medium text-blue-600">In Progress...</p>
                    <p className="font-medium text-blue-600">In Progress…</p>
                  </div>
                </div>
              )}
            </div>
          )}
        </div>
      </div>
@@ -349,6 +503,15 @@ export default function ExecutionDetailPage() {
            </div>
          </div>
        </div>

        {/* Change History */}
        <div className="mt-6">
          <EntityHistoryPanel
            entityType="execution"
            entityId={execution.id}
            title="Execution History"
          />
        </div>
      </div>
    );
}
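The status-timeline derivation in the `useMemo` above can be exercised outside React as a plain function. This is an illustrative re-statement of that logic, not code from the changeset; the record shape is trimmed to the fields it touches.

```typescript
// Standalone re-statement of the timeline-building logic shown in the
// ExecutionDetailPage diff, trimmed to the fields it uses.
interface HistoryRecordLite {
  operation: string; // "INSERT" | "UPDATE" | "DELETE"
  time: string;
  changed_fields: string[];
  new_values: Record<string, unknown> | null;
}

interface TimelineEntry {
  status: string;
  time: string;
  isInitial: boolean;
}

// Records arrive newest-first from the API; output is chronological.
function buildTimeline(records: HistoryRecordLite[]): TimelineEntry[] {
  const entries: TimelineEntry[] = [];
  for (const record of records) {
    if (record.operation === "INSERT" && record.new_values?.status) {
      // The INSERT row carries the initial status
      entries.push({
        status: String(record.new_values.status),
        time: record.time,
        isInitial: true,
      });
    } else if (
      record.operation === "UPDATE" &&
      record.changed_fields.includes("status") &&
      record.new_values?.status
    ) {
      // Only UPDATEs that actually touched the status field become entries
      entries.push({
        status: String(record.new_values.status),
        time: record.time,
        isInitial: false,
      });
    }
  }
  entries.reverse();
  return entries;
}
```

Note that heartbeat-style updates (no `status` in `changed_fields`) fall through both branches and never produce a timeline entry.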
88
work-summary/2026-02-26-entity-history-phase3-analytics.md
Normal file
@@ -0,0 +1,88 @@
# Entity History Phase 3 — Analytics Dashboard

**Date**: 2026-02-26

## Summary

Implemented the final phase of the TimescaleDB entity history plan: continuous aggregates, analytics API endpoints, and dashboard visualization widgets. This completes the full history tracking pipeline from database triggers → hypertables → continuous aggregates → API → UI.

## Changes

### New Files

1. **`migrations/20260226200000_continuous_aggregates.sql`** — TimescaleDB continuous aggregates migration:
   - `execution_status_hourly` — execution status transitions per hour by action_ref and status
   - `execution_throughput_hourly` — execution creation volume per hour by action_ref
   - `event_volume_hourly` — event creation volume per hour by trigger_ref
   - `worker_status_hourly` — worker status transitions per hour by worker name
   - `enforcement_volume_hourly` — enforcement creation volume per hour by rule_ref
   - Auto-refresh policies: 30-min interval for most, 1-hour for workers; 7-day lookback window
   - Initial `CALL refresh_continuous_aggregate()` for all five views

2. **`crates/common/src/repositories/analytics.rs`** — Analytics repository:
   - Row types: `ExecutionStatusBucket`, `ExecutionThroughputBucket`, `EventVolumeBucket`, `WorkerStatusBucket`, `EnforcementVolumeBucket`, `FailureRateSummary`
   - `AnalyticsTimeRange` with `default()` (24h), `last_hours()`, `last_days()` constructors
   - Query methods for each aggregate (global and per-entity-ref variants)
   - `execution_failure_rate()` — derives failure percentage from terminal-state transitions
   - Unit tests for time range construction and failure rate math

3. **`crates/api/src/dto/analytics.rs`** — Analytics DTOs:
   - `AnalyticsQueryParams` (since, until, hours) with `to_time_range()` conversion
   - Response types: `DashboardAnalyticsResponse`, `ExecutionStatusTimeSeriesResponse`, `ExecutionThroughputResponse`, `EventVolumeResponse`, `WorkerStatusTimeSeriesResponse`, `EnforcementVolumeResponse`, `FailureRateResponse`
   - `TimeSeriesPoint` — universal (bucket, label, value) data point
   - `From` conversions from all repository bucket types to `TimeSeriesPoint`
   - Unit tests for query param defaults, clamping, explicit ranges, and conversions

4. **`crates/api/src/routes/analytics.rs`** — 7 API endpoints:
   - `GET /api/v1/analytics/dashboard` — combined payload (all metrics in one call, concurrent queries via `tokio::try_join!`)
   - `GET /api/v1/analytics/executions/status` — status transitions over time
   - `GET /api/v1/analytics/executions/throughput` — creation throughput over time
   - `GET /api/v1/analytics/executions/failure-rate` — failure rate summary
   - `GET /api/v1/analytics/events/volume` — event creation volume
   - `GET /api/v1/analytics/workers/status` — worker status transitions
   - `GET /api/v1/analytics/enforcements/volume` — enforcement creation volume
   - All endpoints: authenticated, utoipa-documented, accept `since`/`until`/`hours` query params

5. **`web/src/hooks/useAnalytics.ts`** — React Query hooks:
   - `useDashboardAnalytics()` — fetches combined dashboard payload (1-min stale, 2-min auto-refresh)
   - Individual hooks: `useExecutionStatusAnalytics()`, `useExecutionThroughputAnalytics()`, `useFailureRateAnalytics()`, `useEventVolumeAnalytics()`, `useWorkerStatusAnalytics()`, `useEnforcementVolumeAnalytics()`
   - Types: `DashboardAnalytics`, `TimeSeriesPoint`, `FailureRateSummary`, `TimeSeriesResponse`, `AnalyticsQueryParams`

6. **`web/src/components/common/AnalyticsWidgets.tsx`** — Dashboard visualization components:
   - `AnalyticsDashboard` — composite widget rendering all charts and metrics
   - `MiniBarChart` — pure-CSS bar chart with hover tooltips and adaptive x-axis labels
   - `StackedBarChart` — stacked bar chart for status breakdowns with auto-generated legend
   - `FailureRateCard` — SVG ring gauge showing success/failure/timeout breakdown
   - `StatCard` — simple metric card with icon, label, and value
   - `TimeRangeSelector` — segmented button group (6h, 12h, 24h, 2d, 7d)
   - No external chart library dependency — all visualization is pure CSS/SVG
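The failure-rate math that `execution_failure_rate()` performs (item 2 above) can be sketched as follows. This is a TypeScript illustration of the arithmetic only; the real implementation is Rust, and the terminal-state names and summary shape here are assumptions based on the statuses used elsewhere in this changeset.

```typescript
// Hedged sketch: derive a failure percentage from counts of
// terminal-state transitions. Field names are assumptions.
interface TerminalCounts {
  completed: number;
  failed: number;
  timeout: number;
}

function failureRatePercent(c: TerminalCounts): number {
  const total = c.completed + c.failed + c.timeout;
  // Avoid division by zero when no executions reached a terminal state
  if (total === 0) return 0;
  return ((c.failed + c.timeout) / total) * 100;
}
```

Counting timeouts as failures is one reasonable policy; the actual repository may weight them differently.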

### Modified Files

7. **`crates/common/src/repositories/mod.rs`** — Registered `analytics` module, re-exported `AnalyticsRepository`

8. **`crates/api/src/dto/mod.rs`** — Registered `analytics` module, re-exported key DTO types

9. **`crates/api/src/routes/mod.rs`** — Registered `analytics` module, re-exported `analytics_routes`

10. **`crates/api/src/server.rs`** — Merged `analytics_routes()` into the API v1 router

11. **`web/src/pages/dashboard/DashboardPage.tsx`** — Added the `AnalyticsDashboard` widget below the existing metrics/activity sections, with `TimeRangeHours` state management

12. **`docs/plans/timescaledb-entity-history.md`** — Marked Phase 2 continuous aggregates and Phase 3 analytics items as ✅ complete

13. **`AGENTS.md`** — Updated development status (continuous aggregates + analytics moved to Complete, removed from Planned)

## Design Decisions

- **Combined dashboard endpoint**: `GET /analytics/dashboard` fetches all 6 aggregate queries concurrently with `tokio::try_join!`, returning everything in one response. This avoids 6+ waterfall requests from the dashboard page.
- **No chart library**: All visualization uses pure CSS (flex-based bars) and inline SVG (ring gauge). This avoids adding a heavy chart dependency for what are essentially bar charts and a donut. A dedicated charting library can be introduced later if more sophisticated visualizations are needed.
- **Time range selector**: The dashboard defaults to 24 hours and offers 6h/12h/24h/2d/7d presets. The `hours` query param provides a simpler interface than specifying ISO timestamps for the common case.
- **Auto-refresh**: The dashboard analytics hook has `refetchInterval: 120000` (2 minutes), so the dashboard stays reasonably current without hammering the API. The continuous aggregates themselves refresh every 30 minutes on the server side.
- **Stale time**: Analytics hooks use a 60-second stale time; since the underlying aggregates only refresh every 30 minutes, there's no benefit to re-fetching more often.
- **Continuous aggregate refresh policies**: a 30-minute schedule for execution/event/enforcement aggregates (higher expected volume), 1-hour for workers (low volume). All use a 1-hour `end_offset` to avoid refreshing the currently-filling bucket, and a 7-day `start_offset` to limit refresh scope.
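The `hours`-preset-to-window conversion described above can be sketched as a pure function. The function name and the clamp bound are illustrative assumptions; the real conversion is the Rust `to_time_range()` in the DTO layer.

```typescript
// Illustrative sketch: map an `hours` preset to a concrete [since, until]
// window, clamped to the 7-day aggregate lookback described above.
const MAX_HOURS = 7 * 24; // assumed clamp: aggregates keep a 7-day window

function hoursToRange(
  hours: number,
  now: Date = new Date(),
): { since: string; until: string } {
  const clamped = Math.min(Math.max(hours, 1), MAX_HOURS);
  const since = new Date(now.getTime() - clamped * 3600_000);
  return { since: since.toISOString(), until: now.toISOString() };
}
```

Clamping server-side as well keeps a hostile `hours=999999` from forcing a scan outside the aggregates' refresh window.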

## Remaining (not in scope)

- **Configurable retention periods via admin settings** — retention policies are set in the migration; admin UI for changing them is deferred
- **Export/archival to external storage** — deferred to a future phase
51
work-summary/2026-02-26-entity-history-ui-panels.md
Normal file
@@ -0,0 +1,51 @@
# Entity History UI Panels — Phase 2 Completion

**Date**: 2026-02-26

## Summary

Completed the remaining Phase 2 work for the TimescaleDB entity history feature by building the Web UI history panels and integrating them into entity detail pages.

## Changes

### New Files

1. **`web/src/hooks/useHistory.ts`** — React Query hooks for fetching entity history from the API:
   - `useEntityHistory()` — generic hook accepting entity type, ID, and query params
   - `useExecutionHistory()`, `useWorkerHistory()`, `useEnforcementHistory()`, `useEventHistory()` — convenience wrappers
   - Types: `HistoryRecord`, `PaginatedHistoryResponse`, `HistoryQueryParams`, `HistoryEntityType`
   - Uses `apiClient` (with auth interceptors) to call `GET /api/v1/{entities}/{id}/history`

2. **`web/src/components/common/EntityHistoryPanel.tsx`** — Reusable collapsible panel component:
   - Starts collapsed by default to avoid unnecessary API calls on page load
   - Fetches history only when expanded (via the `enabled` flag on React Query)
   - **Filters**: Operation type dropdown (INSERT/UPDATE/DELETE) and a changed-field text input
   - **Pagination**: First/prev/next/last page navigation with total count
   - **Timeline rows**: Each record is expandable to show field-level details
   - **Field diffs**: For UPDATE operations, shows old → new values; simple scalar values use an inline red/green format, complex objects use side-by-side JSON blocks
   - **INSERT/DELETE handling**: Shows initial values or values at deletion, respectively
   - Operation badges are color-coded: green (INSERT), blue (UPDATE), red (DELETE)
   - Relative timestamps with a full ISO 8601 tooltip
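The scalar-vs-complex split used for field diffs above can be sketched as below. The function names and the row shape are illustrative, not the component's actual internals.

```typescript
// Illustrative sketch of the diff-rendering decision: scalars get the
// inline red/green treatment, anything else falls back to side-by-side JSON.
function isScalar(value: unknown): boolean {
  return (
    value === null ||
    ["string", "number", "boolean"].includes(typeof value)
  );
}

interface DiffRow {
  field: string;
  oldValue: unknown;
  newValue: unknown;
  inline: boolean; // true → inline red/green, false → side-by-side JSON
}

function buildDiffRows(
  changedFields: string[],
  oldValues: Record<string, unknown> | null,
  newValues: Record<string, unknown> | null,
): DiffRow[] {
  return changedFields.map((field) => {
    const oldValue = oldValues?.[field];
    const newValue = newValues?.[field];
    return {
      field,
      oldValue,
      newValue,
      // Both sides must be scalar for the compact inline rendering
      inline: isScalar(oldValue) && isScalar(newValue),
    };
  });
}
```

Driving the rows off `changed_fields` (rather than diffing the JSONB blobs client-side) matches the trigger-side format, which already records exactly which fields changed.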

### Modified Files

3. **`web/src/pages/executions/ExecutionDetailPage.tsx`** — Added `EntityHistoryPanel` below the main content grid with `entityType="execution"`

4. **`web/src/pages/enforcements/EnforcementDetailPage.tsx`** — Added `EntityHistoryPanel` below the main content grid with `entityType="enforcement"`

5. **`web/src/pages/events/EventDetailPage.tsx`** — Added `EntityHistoryPanel` below the main content grid with `entityType="event"`

6. **`docs/plans/timescaledb-entity-history.md`** — Marked Phase 2 as ✅ complete with details of the UI implementation

7. **`AGENTS.md`** — Updated development status: moved "History UI panels" from Planned to Complete

### Not Modified

- The **worker detail page** does not exist yet in the web UI, so no worker history panel was added. The `useWorkerHistory()` hook and the `entityType="worker"` option are ready for when a worker detail page is created.

## Design Decisions

- **Collapsed by default**: History panels start collapsed to avoid unnecessary API requests on every page load. The query only fires when the user expands the panel.
- **Uses `apiClient` directly**: Since the history endpoints aren't part of the generated OpenAPI client (they would need a client regeneration), the hook uses `apiClient` from `lib/api-client.ts`, which already handles JWT auth and token refresh.
- **Configurable page size**: Defaults to 10 records per page (suitable for a detail-page sidebar), but can be overridden via prop.
- **Full-width placement**: The history panel is placed below the main two-column grid layout on each detail page, spanning full width for readability.