Attune

An event-driven automation and orchestration platform built in Rust.

Overview

Attune is a comprehensive automation platform similar to StackStorm or Apache Airflow, designed for building event-driven workflows with built-in multi-tenancy, RBAC (Role-Based Access Control), and human-in-the-loop capabilities.

Key Features

  • Event-Driven Architecture: Sensors watch for trigger conditions and emit events, which rules match to activate actions
  • Flexible Automation: Pack-based system for organizing and distributing automation components
  • Workflow Orchestration: Support for complex workflows with parent-child execution relationships
  • Human-in-the-Loop: Inquiry system for async user interactions and approvals
  • Multi-Runtime Support: Execute actions in different runtime environments (Python, Node.js, containers)
  • RBAC & Multi-Tenancy: Comprehensive permission system with identity-based access control
  • Real-Time Notifications: PostgreSQL-based pub/sub for real-time event streaming
  • Secure Secrets Management: Encrypted key-value storage with ownership scoping
  • Execution Policies: Rate limiting and concurrency control for action executions

Architecture

Attune is built as a distributed system with multiple specialized services:

Services

  1. API Service (attune-api): REST API gateway for all client interactions
  2. Executor Service (attune-executor): Manages action execution lifecycle and scheduling
  3. Worker Service (attune-worker): Executes actions in various runtime environments
  4. Sensor Service (attune-sensor): Monitors for trigger conditions and generates events
  5. Notifier Service (attune-notifier): Handles real-time notifications and pub/sub

Core Concepts

  • Pack: A bundle of related automation components (actions, sensors, rules, triggers)
  • Trigger: An event type that can activate rules (e.g., "webhook_received")
  • Sensor: Monitors for trigger conditions and creates events
  • Event: An instance of a trigger firing with payload data
  • Action: An executable task (e.g., "send_email", "deploy_service")
  • Rule: Connects triggers to actions with conditional logic
  • Execution: A single run of an action; nested workflows are modeled as parent-child executions
  • Inquiry: Async user interaction within a workflow (approvals, input requests)
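
The chain from trigger to action above can be sketched in a few lines of Python. This is an illustration only; the field names (`criteria`, `payload`, `action`) are assumptions for the sketch, not Attune's actual schema:

```python
# Illustrative sketch of how a rule connects a trigger to an action.
# Field names ("criteria", "payload", "action") are hypothetical.

def matches(criteria: dict, payload: dict) -> bool:
    """A rule matches when every criterion equals the event payload's value."""
    return all(payload.get(k) == v for k, v in criteria.items())

def evaluate(event: dict, rules: list[dict]) -> list[str]:
    """Return the actions enforced by rules whose trigger and criteria match."""
    return [
        r["action"]
        for r in rules
        if r["trigger"] == event["trigger"]
        and matches(r["criteria"], event["payload"])
    ]

rules = [
    {"trigger": "webhook_received", "criteria": {"path": "/deploy"}, "action": "core.deploy"},
    {"trigger": "webhook_received", "criteria": {"path": "/ping"}, "action": "core.echo"},
]
event = {"trigger": "webhook_received", "payload": {"path": "/deploy", "ref": "main"}}

print(evaluate(event, rules))  # ['core.deploy']
```

In Attune itself each match produces an Enforcement record tying the event to the resulting execution.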

Project Structure

attune/
├── Cargo.toml              # Workspace root configuration
├── crates/
│   ├── common/             # Shared library
│   │   ├── src/
│   │   │   ├── config.rs   # Configuration management
│   │   │   ├── db.rs       # Database connection pooling
│   │   │   ├── error.rs    # Error types
│   │   │   ├── models.rs   # Data models
│   │   │   ├── schema.rs   # Schema utilities
│   │   │   └── utils.rs    # Common utilities
│   │   └── Cargo.toml
│   ├── api/                # API service
│   ├── executor/           # Execution service
│   ├── worker/             # Worker service
│   ├── sensor/             # Sensor service
│   ├── notifier/           # Notification service
│   └── cli/                # CLI tool
└── reference/
    ├── models.py           # Python SQLAlchemy models (reference)
    └── models.md           # Data model documentation

Prerequisites

Local Development

  • Rust: 1.75 or later
  • PostgreSQL: 14 or later
  • RabbitMQ: 3.12 or later (for message queue)
  • Redis: 7.0 or later (optional, for caching)
  • Docker: 20.10 or later
  • Docker Compose: 2.0 or later

Getting Started

Option 1: Docker Quick Start

The fastest way to get Attune running is with Docker:

# Clone the repository
git clone https://github.com/yourusername/attune.git
cd attune

# Run the quick start script
./docker/quickstart.sh

This will:

  • Generate secure secrets
  • Build all Docker images
  • Start all services (API, Executor, Worker, Sensor, Notifier, Web UI)
  • Start infrastructure (PostgreSQL, RabbitMQ, Redis)
  • Set up the database with migrations

Access the application:

  • Web UI: http://localhost:3000
  • API: http://localhost:8080

For more details, see Docker Deployment Guide.

Option 2: Local Development Setup

1. Clone the Repository

git clone https://github.com/yourusername/attune.git
cd attune

2. Set Up Database

# Create PostgreSQL database
createdb attune

# Run migrations
sqlx migrate run

3. Load the Core Pack

The core pack provides essential built-in automation components (timers, HTTP actions, etc.):

# Install Python dependencies for the loader
pip install psycopg2-binary pyyaml

# Load the core pack into the database
./scripts/load-core-pack.sh

# Or use the Python script directly
python3 scripts/load_core_pack.py

Verify the core pack is loaded:

# Using CLI (after starting API)
attune pack show core

# Using database
psql attune -c "SELECT * FROM attune.pack WHERE ref = 'core';"

See Core Pack Setup Guide for detailed instructions.

4. Configure Application

Create a configuration file from the example:

cp config.example.yaml config.yaml

Edit config.yaml with your settings:

# Attune Configuration
service_name: attune
environment: development

database:
  url: postgresql://postgres:postgres@localhost:5432/attune

server:
  host: 0.0.0.0
  port: 8080
  cors_origins:
    - http://localhost:3000
    - http://localhost:5173

security:
  jwt_secret: your-secret-key-change-this
  jwt_access_expiration: 3600
  encryption_key: your-32-char-encryption-key-here

log:
  level: info
  format: json

Generate secure secrets:

# JWT secret
openssl rand -base64 64

# Encryption key
openssl rand -base64 32

5. Build All Services

cargo build --release

6. Run Services

Each service can be run independently:

# API Service
cargo run --bin attune-api --release

# Executor Service
cargo run --bin attune-executor --release

# Worker Service
cargo run --bin attune-worker --release

# Sensor Service
cargo run --bin attune-sensor --release

# Notifier Service
cargo run --bin attune-notifier --release

7. Using the CLI

Install and use the Attune CLI to interact with the API:

# Build and install CLI
cargo install --path crates/cli

# Login to API
attune auth login --username admin

# List packs
attune pack list

# List packs as JSON (shorthand)
attune pack list -j

# Execute an action
attune action execute core.echo --param message="Hello World"

# Monitor executions
attune execution list

# Get raw execution result for piping
attune execution result 123 | jq '.data'

See CLI Documentation for comprehensive usage guide.

Development

Web UI Development (Quick Start)

For rapid frontend development with hot-module reloading:

# Terminal 1: Start backend services in Docker
docker compose up -d postgres rabbitmq redis api executor worker-shell sensor

# Terminal 2: Start Vite dev server
cd web
npm install  # First time only
npm run dev

# Browser: Open http://localhost:3001

The Vite dev server provides:

  • Instant hot-module reloading - changes appear immediately
  • 🚀 Fast iteration - no Docker rebuild needed for frontend changes
  • 🔧 Full API access - properly configured CORS with backend services
  • 🎯 Source maps - easy debugging

Why port 3001? The Docker web container uses port 3000. Vite automatically uses 3001 to avoid conflicts.

Default test user:

  • Email: test@attune.local
  • Password: TestPass123!

Building

# Build all crates
cargo build

# Build specific service
cargo build -p attune-api

# Build with optimizations
cargo build --release

Testing

# Run all tests
cargo test

# Run tests for specific crate
cargo test -p attune-common

# Run tests with output
cargo test -- --nocapture

# Run tests in parallel (recommended - uses schema-per-test isolation)
cargo test -- --test-threads=4
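
The schema-per-test isolation mentioned above can be sketched roughly as follows. The naming scheme and SQL sequence are assumptions for illustration, not the project's actual test harness:

```python
# Rough illustration of schema-per-test isolation: each test gets its own
# PostgreSQL schema, so parallel tests never touch each other's tables.
# The naming scheme here is hypothetical, not Attune's actual harness.
import uuid

def setup_sql(test_name: str) -> list[str]:
    """SQL a test would run to get (and later drop) a private schema."""
    schema = f"test_{test_name}_{uuid.uuid4().hex[:8]}"
    return [
        f'CREATE SCHEMA "{schema}"',
        f'SET search_path TO "{schema}"',
        # ...run migrations and the test body here, then clean up:
        f'DROP SCHEMA "{schema}" CASCADE',
    ]

stmts = setup_sql("pack_install")
print(stmts[0])  # e.g. CREATE SCHEMA "test_pack_install_3f9a1c2e"
```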

SQLx Compile-Time Query Checking

Attune uses SQLx macros for type-safe database queries. The macros verify each query at compile time, either against a live database or, in offline mode, against cached metadata.

Setup for Development:

  1. Copy the example environment file:

    cp .env.example .env
    
  2. The .env file enables SQLx offline mode by default:

    SQLX_OFFLINE=true
    DATABASE_URL=postgresql://postgres:postgres@localhost:5432/attune?options=-c%20search_path%3Dattune%2Cpublic
    

Regenerating Query Metadata:

When you modify SQLx queries (in query!, query_as!, or query_scalar! macros), regenerate the cached metadata:

# Ensure database is running and up-to-date
sqlx database setup

# Regenerate offline query data
cargo sqlx prepare --workspace

This creates or updates the .sqlx/ directory with query metadata. Commit these files to version control so that other developers and CI/CD pipelines can build without a database connection.

Benefits of Offline Mode:

  • Fast compilation without database connection
  • Works in CI/CD environments
  • Type-safe queries verified at compile time
  • Consistent query validation across all environments

Code Quality

# Check code without building
cargo check

# Run linter
cargo clippy

# Format code
cargo fmt

Configuration

Attune uses YAML configuration files with environment variable overrides.

Configuration Loading Priority

  1. Base configuration file (config.yaml or path from ATTUNE_CONFIG environment variable)
  2. Environment-specific file (e.g., config.development.yaml, config.production.yaml)
  3. Environment variables (prefix: ATTUNE__, separator: __)
    • Example: ATTUNE__DATABASE__URL, ATTUNE__SERVER__PORT
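
The double-underscore separator maps an environment variable onto a path of nested YAML keys. A minimal Python illustration of that mapping (Attune's own config loader handles this; the function below is just a sketch of the behavior):

```python
# Sketch of how ATTUNE__SECTION__KEY variables map to nested config keys.
# Illustrative only; Attune's real loader also parses types, etc.

def env_overrides(environ: dict, prefix: str = "ATTUNE__") -> dict:
    """Translate prefixed env vars into a nested override dict."""
    overrides: dict = {}
    for name, value in environ.items():
        if not name.startswith(prefix):
            continue
        path = name[len(prefix):].lower().split("__")  # e.g. ['database', 'url']
        node = overrides
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return overrides

env = {"ATTUNE__DATABASE__URL": "postgresql://localhost/attune",
       "ATTUNE__SERVER__PORT": "3000"}
print(env_overrides(env))
# {'database': {'url': 'postgresql://localhost/attune'}, 'server': {'port': '3000'}}
```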

Quick Setup

# Copy example configuration
cp config.example.yaml config.yaml

# Edit configuration
nano config.yaml

# Or use environment-specific config
cp config.example.yaml config.development.yaml

Environment Variable Overrides

You can override any YAML setting with environment variables:

export ATTUNE__DATABASE__URL=postgresql://localhost/attune
export ATTUNE__SERVER__PORT=3000
export ATTUNE__LOG__LEVEL=debug
export ATTUNE__SECURITY__JWT_SECRET=$(openssl rand -base64 64)

Configuration Structure

See Configuration Guide for detailed documentation.

Main configuration sections:

  • database: PostgreSQL connection settings
  • redis: Redis connection (optional)
  • message_queue: RabbitMQ settings
  • server: HTTP server configuration
  • log: Logging settings
  • security: JWT and encryption settings
  • worker: Worker-specific settings

Data Models

See reference/models.md for comprehensive documentation of all data models.

Key models include:

  • Pack, Runtime, Worker
  • Trigger, Sensor, Event
  • Action, Rule, Enforcement
  • Execution, Inquiry
  • Identity, PermissionSet
  • Key (secrets), Notification

CLI Tool

Attune includes a comprehensive command-line interface for interacting with the platform.

Installation

cargo install --path crates/cli

Quick Start

# Login
attune auth login --username admin

# Install a pack
attune pack install https://github.com/example/attune-pack-monitoring

# List actions
attune action list --pack monitoring

# Execute an action
attune action execute monitoring.check_health --param endpoint=https://api.example.com

# Monitor executions
attune execution list --limit 20

# Search executions
attune execution list --pack monitoring --status failed
attune execution list --result "error"

# Get raw execution result
attune execution result 123 | jq '.field'

Features

  • Pack Management: Install, list, and manage automation packs
  • Action Execution: Run actions with parameters, wait for completion
  • Rule Management: Create, enable, disable, and configure rules
  • Execution Monitoring: View execution status, logs, and results with advanced filtering
  • Result Extraction: Get raw execution results for piping to other tools
  • Multiple Output Formats: Table (default), JSON (-j), and YAML (-y) output
  • Configuration Management: Persistent config with token storage

See the CLI README for detailed documentation and examples.

API Documentation

API documentation will be available at /docs when running the API service (OpenAPI/Swagger).

Deployment

🚀 New to Docker deployment? Start here: Docker Quick Start Guide

Quick Setup:

# Stop conflicting system services (if needed)
./scripts/stop-system-services.sh

# Start all services (migrations run automatically)
docker compose up -d

# Check status
docker compose ps

# Access Web UI
open http://localhost:3000

Building Images (only needed if you modify code):

# Pre-warm build cache (prevents race conditions)
make docker-cache-warm

# Build all services
make docker-build


Kubernetes

Kubernetes manifests are located in the deploy/kubernetes/ directory.

kubectl apply -f deploy/kubernetes/

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Code Style

  • Follow Rust standard conventions
  • Use cargo fmt before committing
  • Ensure cargo clippy passes without warnings
  • Write tests for new functionality

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Inspired by:

  • StackStorm
  • Apache Airflow

Roadmap

Phase 1: Core Infrastructure (Current)

  • Project structure and workspace setup
  • Common library with models and utilities
  • Database migrations
  • Service stubs and configuration

Phase 2: Basic Services

  • API service with REST endpoints
  • Executor service for managing executions
  • Worker service for running actions
  • Basic pack management

Phase 3: Event System

  • Sensor service implementation
  • Event generation and processing
  • Rule evaluation engine
  • Enforcement creation

Phase 4: Advanced Features

  • Inquiry system for human-in-the-loop
  • Workflow orchestration (parent-child executions)
  • Execution policies (rate limiting, concurrency)
  • Real-time notifications

Phase 5: Production Ready

  • Comprehensive testing
  • Performance optimization
  • Documentation and examples
  • Deployment tooling
  • Monitoring and observability

Support

For questions, issues, or contributions:

  • Open an issue on GitHub
  • Check the documentation in reference/models.md
  • Review code examples in the examples/ directory (coming soon)

Status

Current Status: Early Development

The project structure and core models are in place. Service implementation is ongoing.
