re-uploading work

---

**File: docs/guides/QUICKREF-timer-happy-path.md** (new, 289 lines)

# Quick Reference: Timer Echo Happy Path Test

This guide provides a quick reference for testing the core happy-path scenario in Attune: an interval timer running every second to execute `echo "Hello, World!"`.

## Overview

This test verifies the complete event-driven flow with unified runtime detection:

```
Timer Sensor → Event → Rule Match → Enforcement → Execution → Worker → Shell Action
```

## Prerequisites

- Docker and Docker Compose installed
- All Attune services running in containers
- Core pack loaded with timer triggers and echo action

## Quick Test (Automated)

Run the automated test script:

```bash
cd attune
./scripts/test-timer-echo-docker.sh
```

This script will:

1. ✓ Check Docker services are healthy
2. ✓ Authenticate with the API
3. ✓ Verify runtime detection (Shell runtime available)
4. ✓ Verify the core pack is loaded
5. ✓ Create a 1-second interval timer trigger instance
6. ✓ Create a rule linking the timer to the echo action
7. ✓ Wait 15 seconds and verify executions
8. ✓ Display results and clean up

**Expected output:**

```
=== HAPPY PATH TEST PASSED ===

The complete event flow is working:
Timer Sensor → Event → Rule → Enforcement → Execution → Worker → Shell Action
```

## Manual Test Steps

### 1. Start Services

```bash
docker-compose up -d
docker-compose ps  # Verify all services are running
```

### 2. Check Runtime Detection

```bash
# Get auth token
export TOKEN=$(curl -s -X POST http://localhost:8080/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"admin"}' | jq -r '.data.access_token')

# Verify runtimes detected
curl -H "Authorization: Bearer $TOKEN" \
  http://localhost:8080/api/v1/runtimes | jq '.data[] | {name, enabled}'
```
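The `jq` filter above can be tried offline against a stand-in response; the field names here are assumptions based on the expected result, so only the shape matters:

```shell
# Hypothetical /api/v1/runtimes response used as a stand-in.
RESPONSE='{"data":[{"name":"Shell","enabled":true,"version":"1.0"}]}'

# Same filter as above: project each runtime down to name and enabled.
printf '%s' "$RESPONSE" | jq -c '.data[] | {name, enabled}'
# → {"name":"Shell","enabled":true}
```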

**Expected:** The Shell runtime should be present and enabled.

### 3. Verify Core Pack

```bash
curl -H "Authorization: Bearer $TOKEN" \
  http://localhost:8080/api/v1/packs/core | jq '.data | {id, ref, name}'
```

**Expected:** Core pack with actions and triggers loaded.

### 4. Create Trigger Instance

```bash
curl -X POST http://localhost:8080/api/v1/trigger-instances \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "trigger_type_ref": "core.intervaltimer",
    "ref": "test.timer_1s",
    "description": "1-second interval timer",
    "enabled": true,
    "parameters": {
      "unit": "seconds",
      "interval": 1
    }
  }' | jq '.data | {id, ref}'
```

### 5. Create Rule

```bash
curl -X POST http://localhost:8080/api/v1/rules \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ref": "test.timer_echo",
    "pack_ref": "core",
    "name": "Timer Echo Test",
    "description": "Echoes Hello World every second",
    "enabled": true,
    "trigger_instance_ref": "test.timer_1s",
    "action_ref": "core.echo",
    "action_parameters": {
      "message": "Hello, World!"
    }
  }' | jq '.data | {id, ref}'
```

### 6. Monitor Executions

Wait 10-15 seconds, then check for executions:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/api/v1/executions?limit=10" | \
  jq '.data[] | {id, status, action_ref, created}'
```

**Expected:** Multiple executions with `status: "succeeded"` and `action_ref: "core.echo"`.

### 7. Check Service Logs

```bash
# Sensor service (timer firing)
docker logs attune-sensor --tail 50 | grep -i "timer\|interval"

# Executor service (scheduling)
docker logs attune-executor --tail 50 | grep -i "execution\|schedule"

# Worker service (runtime detection and action execution)
docker logs attune-worker --tail 50 | grep -i "runtime\|shell\|echo"
```

**Expected log entries:**

**Sensor:**
```
Timer trigger fired: core.intervaltimer
Event created: id=123
```

**Executor:**
```
Processing enforcement: id=456
Execution scheduled: id=789
```

**Worker:**
```
Runtime detected: Shell
Executing action: core.echo
Action completed successfully
```

### 8. Cleanup

```bash
# Disable the rule
curl -X PUT http://localhost:8080/api/v1/rules/test.timer_echo \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"enabled": false}'

# Delete the rule (optional)
curl -X DELETE http://localhost:8080/api/v1/rules/test.timer_echo \
  -H "Authorization: Bearer $TOKEN"

# Delete the trigger instance (optional)
curl -X DELETE http://localhost:8080/api/v1/trigger-instances/test.timer_1s \
  -H "Authorization: Bearer $TOKEN"
```

## Troubleshooting

### No Executions Created

**Check 1: Is the sensor service running?**

```bash
docker logs attune-sensor --tail 100
```

Look for: "Started monitoring trigger instances" or "Timer trigger fired"

**Check 2: Are events being created?**

```bash
curl -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/api/v1/events?limit=10" | jq '.data | length'
```

**Check 3: Are enforcements being created?**

```bash
curl -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/api/v1/enforcements?limit=10" | jq '.data | length'
```

**Check 4: Is the rule enabled?**

```bash
curl -H "Authorization: Bearer $TOKEN" \
  http://localhost:8080/api/v1/rules/test.timer_echo | jq '.data.enabled'
```

### Executions Failed

**Check worker logs for errors:**

```bash
docker logs attune-worker --tail 100 | grep -i "error\|failed"
```

**Check execution details:**

```bash
EXEC_ID=$(curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/api/v1/executions?limit=1" | jq -r '.data[0].id')

curl -H "Authorization: Bearer $TOKEN" \
  http://localhost:8080/api/v1/executions/$EXEC_ID | jq '.data'
```

**Common issues:**

- Runtime not detected: check worker startup logs for "Runtime detected: Shell"
- Action script not found: verify packs are mounted at `/opt/attune/packs` in the worker container
- Permission denied: check file permissions on `packs/core/actions/echo.sh`

### Runtime Not Detected

**Check runtime configuration in the database:**

```bash
docker exec -it postgres psql -U attune -d attune \
  -c "SELECT name, enabled, distributions FROM attune.runtime WHERE name ILIKE '%shell%';"
```

**Check worker configuration:**

```bash
docker exec -it attune-worker env | grep ATTUNE
```

**Verify the Shell runtime works inside the container:**

```bash
# This should succeed on the worker container
docker exec -it attune-worker /bin/bash -c "echo 'Runtime test'"
```

## Configuration Files

**Docker config:** `config.docker.yaml`

- Database: `postgresql://attune:attune@postgres:5432/attune`
- Message Queue: `amqp://attune:attune@rabbitmq:5672`
- Packs: `/opt/attune/packs`
- Schema: `attune`
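Under stated assumptions about the key names (the real file may differ), those settings correspond to a `config.docker.yaml` fragment along these lines:

```yaml
# Hypothetical sketch of config.docker.yaml; the key names are assumptions,
# only the values are taken from the list above.
database:
  url: postgresql://attune:attune@postgres:5432/attune
  schema: attune

message_queue:
  url: amqp://attune:attune@rabbitmq:5672

packs:
  path: /opt/attune/packs
```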

**Core pack location (in containers):**

- Actions: `/opt/attune/packs/core/actions/`
- Triggers: `/opt/attune/packs/core/triggers/`
- Sensors: `/opt/attune/packs/core/sensors/`

## Success Criteria

✅ **Shell runtime detected** by the worker service
✅ **Core pack loaded** with the echo action and timer trigger
✅ **Events generated** by the sensor every second
✅ **Enforcements created** by rule matching
✅ **Executions scheduled** by the executor service
✅ **Actions executed** by the worker service using the Shell runtime
✅ **Executions succeed** with "Hello, World!" output

## Next Steps

After verifying the happy path:

1. **Test the Python runtime**: Create a Python action and verify runtime detection
2. **Test the Node.js runtime**: Create a Node.js action and verify runtime detection
3. **Test workflows**: Chain multiple actions together
4. **Test pack environments**: Verify pack-specific dependency isolation
5. **Test error handling**: Trigger failures and verify retry logic
6. **Test concurrency**: Create multiple rules firing simultaneously

## Related Documentation

- [Unified Runtime Detection](../QUICKREF-unified-runtime-detection.md)
- [Pack Runtime Environments](../pack-runtime-environments.md)
- [Worker Service Architecture](../architecture/worker-service.md)
- [Sensor Service Architecture](../architecture/sensor-service.md)
- [Timer Sensor Quickstart](./timer-sensor-quickstart.md)

---

**File: docs/guides/quick-start.md** (new, 401 lines)

# Attune API Quick Start Guide

Get the Attune API up and running in minutes!

## Prerequisites

- Rust 1.70+ installed
- PostgreSQL 13+ installed and running
- `sqlx-cli` (will be installed if needed)

## Step 1: Database Setup

The API needs a PostgreSQL database. Run the setup script:

```bash
cd attune
./scripts/setup-db.sh
```

This will:

- Create the `attune` database
- Run all migrations
- Set up the schema

**If the script doesn't work**, do it manually:

```bash
# Connect to PostgreSQL
psql -U postgres

# Create the database
CREATE DATABASE attune;

# Exit psql
\q

# Run migrations
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/attune"
cargo install sqlx-cli --no-default-features --features postgres
sqlx migrate run
```

## Step 2: Configure the API

The API uses a YAML configuration file. Create your config from the example:

```bash
cp config.example.yaml config.yaml
```

**Edit the configuration file:**

```bash
nano config.yaml
```

Key settings to review:

```yaml
database:
  url: postgresql://postgres:postgres@localhost:5432/attune

security:
  jwt_secret: your-secret-key-change-this
  encryption_key: your-32-char-encryption-key-here

server:
  port: 8080
  cors_origins:
    - http://localhost:3000
```

**Generate secure secrets for production:**

```bash
# Generate a JWT secret
openssl rand -base64 64

# Generate an encryption key
openssl rand -base64 32
```
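If you prefer not to write secrets into `config.yaml`, they can be supplied as environment overrides instead. The variable names below are assumptions based on the `ATTUNE__` convention used elsewhere in this guide:

```shell
# Export generated secrets as environment overrides instead of editing
# config.yaml (variable names assumed from the ATTUNE__ convention).
# tr strips the line wrap openssl inserts in long base64 output.
export ATTUNE__SECURITY__JWT_SECRET=$(openssl rand -base64 64 | tr -d '\n')
export ATTUNE__SECURITY__ENCRYPTION_KEY=$(openssl rand -base64 32)

# Sanity check: 32 random bytes base64-encode to 44 characters.
printf '%s' "$(openssl rand -base64 32)" | wc -c   # → 44
```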

**If your database uses different credentials**, update the database URL in `config.yaml`:

```yaml
database:
  url: postgresql://YOUR_USER:YOUR_PASSWORD@localhost:5432/attune
```

## Step 3: Start the API

Simply run:

```bash
cargo run --bin attune-api
```

You should see:

```
INFO Starting Attune API Service
INFO Loaded configuration from config.yaml
INFO Configuration loaded successfully
INFO Environment: development
INFO Connecting to database...
INFO Database connection established
INFO JWT configuration initialized (access: 3600s, refresh: 604800s)
INFO Starting server on 0.0.0.0:8080
INFO Server listening on 0.0.0.0:8080
INFO Attune API Service is ready
```

## Step 4: Test It!

### Health Check

```bash
curl http://localhost:8080/health
```

Expected response:

```json
{
  "status": "healthy"
}
```

### Register a User

```bash
curl -X POST http://localhost:8080/auth/register \
  -H "Content-Type: application/json" \
  -d '{
    "login": "admin",
    "password": "admin123456",
    "display_name": "Administrator"
  }'
```

You'll get back access and refresh tokens:

```json
{
  "data": {
    "access_token": "eyJhbGciOiJIUzI1NiIs...",
    "refresh_token": "eyJhbGciOiJIUzI1NiIs...",
    "token_type": "Bearer",
    "expires_in": 3600
  }
}
```
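Given the response shape above, the access token can be captured directly into a shell variable with `jq`. A sketch; the sample JSON below is a stand-in for a real response:

```shell
# Stand-in response with the same shape as the registration reply above.
RESPONSE='{"data":{"access_token":"eyJhbGciOiJIUzI1NiIs...","token_type":"Bearer","expires_in":3600}}'

# Pull out the access token for use in Authorization headers.
TOKEN=$(printf '%s' "$RESPONSE" | jq -r '.data.access_token')
echo "$TOKEN"   # → eyJhbGciOiJIUzI1NiIs...
```

In practice you would pipe the `curl -s` output straight into `jq -r '.data.access_token'`.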

### Use the Token

Save your access token and use it for authenticated requests:

```bash
# Replace YOUR_TOKEN with the actual access_token from above
curl http://localhost:8080/auth/me \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Response:

```json
{
  "data": {
    "id": 1,
    "login": "admin",
    "display_name": "Administrator"
  }
}
```

## Step 5: Explore the API

### Available Endpoints

**Authentication:**

- `POST /auth/register` - Register a new user
- `POST /auth/login` - Log in
- `POST /auth/refresh` - Refresh a token
- `GET /auth/me` - Get the current user (protected)
- `POST /auth/change-password` - Change password (protected)

**Health:**

- `GET /health` - Basic health check
- `GET /health/detailed` - Detailed status with DB check
- `GET /health/ready` - Readiness probe
- `GET /health/live` - Liveness probe

**Packs:**

- `GET /api/v1/packs` - List all packs
- `POST /api/v1/packs` - Create a pack
- `GET /api/v1/packs/:ref` - Get a pack by reference
- `PUT /api/v1/packs/:ref` - Update a pack
- `DELETE /api/v1/packs/:ref` - Delete a pack

### Create a Pack

```bash
curl -X POST http://localhost:8080/api/v1/packs \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ref": "core.basics",
    "name": "Basic Operations",
    "description": "Core automation pack",
    "version": "1.0.0",
    "author": "Admin"
  }'
```

## Configuration Options

The API reads `config.yaml` and supports environment-variable overrides using the `ATTUNE__` prefix. See `config.example.yaml` for all available settings.

### Common Customizations

**Change the port:**

```bash
ATTUNE__SERVER__PORT=3000
```
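The double-underscore convention maps an environment variable onto a nested key in `config.yaml`. This throwaway snippet only illustrates the translation (the variable name is taken from the example above):

```shell
# Illustration only: show which config.yaml path an ATTUNE__ variable targets.
key="ATTUNE__SERVER__PORT"

# Drop the prefix, lower-case the rest, and turn '__' into '.'.
path=$(printf '%s' "${key#ATTUNE__}" | tr '[:upper:]' '[:lower:]' | sed 's/__/./g')
echo "$path"   # → server.port
```

So `ATTUNE__SERVER__PORT=3000` overrides `server.port` in the YAML file.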

### Debug Logging

Edit `config.yaml`:

```yaml
log:
  level: debug
  format: pretty  # Human-readable output
```

Or use environment variables:

```bash
export ATTUNE__LOG__LEVEL=debug
export ATTUNE__LOG__FORMAT=pretty
cargo run --bin attune-api
```

### Longer Token Expiration (Development)

Edit `config.yaml`:

```yaml
security:
  jwt_access_expiration: 7200  # 2 hours
  jwt_refresh_expiration: 2592000  # 30 days
```

### Database Connection Pool

Edit `config.yaml`:

```yaml
database:
  max_connections: 100
  min_connections: 10
```

## Troubleshooting

### Database Connection Failed

```
Error: error communicating with database
```

**Solution:**

1. Verify PostgreSQL is running: `pg_isready`
2. Check the database URL in `config.yaml`
3. Ensure the database exists: `psql -U postgres -l | grep attune`

### Migration Errors

```
Error: migration version not found
```

**Solution:**

```bash
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/attune"
sqlx migrate run
```

### Port Already in Use

```
Error: Address already in use
```

**Solution:** Change the port via an environment override:

```bash
ATTUNE__SERVER__PORT=8081
```

### JWT Secret Warning

```
WARN JWT_SECRET not set in config, using default (INSECURE for production!)
```

**Solution:** Make sure the secret is set, either in `config.yaml` (`security.jwt_secret`) or as an environment variable: `ATTUNE__SECURITY__JWT_SECRET=your-secret-here`

## Development Tips

### Auto-reload on Changes

Use `cargo-watch` for automatic rebuilds:

```bash
cargo install cargo-watch
cargo watch -x 'run --bin attune-api'
```

### Enable SQL Query Logging

Edit `config.yaml`:

```yaml
database:
  log_statements: true
```

### Pretty Logs

For development, use pretty formatting in `config.yaml`:

```yaml
log:
  format: pretty
```

For production, use JSON:

```yaml
log:
  format: json
```

## Next Steps

- Read the [Authentication Guide](./authentication.md)
- Learn about [Testing](./testing-authentication.md)
- See the [API Reference](./auth-quick-reference.md) for all endpoints
- Check out the [Architecture Documentation](./architecture.md)

## Production Deployment

Before deploying to production:

1. **Change the JWT secret:**

   ```bash
   ATTUNE__SECURITY__JWT_SECRET=$(openssl rand -base64 64)
   ```

2. **Use environment variables:**

   Don't commit secrets to version control. Use your platform's secrets management.

3. **Enable HTTPS:**

   Configure TLS/SSL termination at your load balancer or reverse proxy.

4. **Use the production configuration:**

   ```bash
   # Use the production config file
   export ATTUNE_CONFIG=config.production.yaml

   # Or set the environment
   export ATTUNE__ENVIRONMENT=production
   ```

5. **Adjust the connection pool** in `config.production.yaml`:

   ```yaml
   database:
     max_connections: 100
     min_connections: 10
   ```

6. **Enable JSON logging:**

   ```bash
   ATTUNE__LOG__FORMAT=json
   ```
## CORS Configuration

**Configure CORS origins:**

```bash
# Default (empty) - allows localhost:3000, localhost:8080, 127.0.0.1:3000, 127.0.0.1:8080
ATTUNE__SERVER__CORS_ORIGINS=

# Custom origins (comma-separated)
ATTUNE__SERVER__CORS_ORIGINS=http://localhost:3000,https://app.example.com
```

**Note:** CORS origins must be specified when using authentication (credentials). The API cannot use wildcard origins (`*`) with credentials enabled.

## Getting Help

- Documentation: `docs/` directory
- Issues: GitHub Issues
- API Quick Reference: `docs/auth-quick-reference.md`

Happy automating! 🚀

---

**File: docs/guides/quickstart-example.md** (new, 297 lines)

# Quick Start: Running the Example Rule

This guide walks you through running the pre-seeded example that echoes "hello, world" every 10 seconds.

---

## Prerequisites

- PostgreSQL 14+ running
- RabbitMQ 3.12+ running
- Rust toolchain installed
- Database migrations applied

---

## Step 1: Seed the Database

```bash
# Set your database URL
export DATABASE_URL="postgresql://user:pass@localhost:5432/attune"

# Run migrations (if not already done)
sqlx database create
sqlx migrate run

# Seed the core pack with example data
psql $DATABASE_URL -f scripts/seed_core_pack.sql
```

**Expected output:**

```
NOTICE: Core pack seeded successfully
NOTICE: Pack ID: 1
NOTICE: Action Runtime ID: 1
NOTICE: Sensor Runtime ID: 2
NOTICE: Trigger Types: intervaltimer=1, crontimer=2, datetimetimer=3
NOTICE: Actions: core.echo, core.sleep, core.noop
NOTICE: Sensors: core.timer_10s_sensor (id=1)
NOTICE: Rules: core.rule.timer_10s_echo
```

---

## Step 2: Configure Environment

Create a configuration file or set environment variables:

```bash
# Database
export ATTUNE__DATABASE__URL="postgresql://user:pass@localhost:5432/attune"

# Message Queue
export ATTUNE__RABBITMQ__URL="amqp://guest:guest@localhost:5672"

# JWT secret (required for the API service)
export ATTUNE__JWT_SECRET="your-secret-key-change-in-production"

# Optional: set the log level
export RUST_LOG="info,attune_sensor=debug,attune_executor=debug,attune_worker=debug"
```

---

## Step 3: Start the Services

Open **three separate terminals** and run:

### Terminal 1: Sensor Service

```bash
cd attune
cargo run --bin attune-sensor
```

**What it does:**

- Monitors the `core.timer_10s_sensor` sensor
- Fires a `core.intervaltimer` event every 10 seconds

**What to look for:**

```
[INFO] Sensor core.timer_10s_sensor started
[DEBUG] Timer fired: interval=10s
[DEBUG] Publishing event for trigger core.intervaltimer
```

### Terminal 2: Executor Service

```bash
cd attune
cargo run --bin attune-executor
```

**What it does:**

- Listens for events from sensors
- Evaluates rules against events
- Creates executions for matched rules

**What to look for:**

```
[INFO] Executor service started
[DEBUG] Event received for trigger core.intervaltimer
[DEBUG] Rule matched: core.rule.timer_10s_echo
[DEBUG] Creating enforcement for rule
[DEBUG] Scheduling execution for action core.echo
```

### Terminal 3: Worker Service

```bash
cd attune
cargo run --bin attune-worker
```

**What it does:**

- Receives execution requests
- Runs the `core.echo` action
- Returns results

**What to look for:**

```
[INFO] Worker service started
[DEBUG] Execution request received for action core.echo
[DEBUG] Running: echo "hello, world"
[INFO] Output: hello, world
[DEBUG] Execution completed successfully
```

---

## Step 4: Verify It's Working

You should see the complete flow every 10 seconds:

1. **Sensor** fires the timer event
2. **Executor** matches the rule and schedules an execution
3. **Worker** executes the action and outputs "hello, world"

---

## Understanding What You Created

### Components Seeded

| Component | Ref | Description |
|-----------|-----|-------------|
| **Trigger Type** | `core.intervaltimer` | Generic interval timer definition |
| **Sensor Instance** | `core.timer_10s_sensor` | Configured to fire every 10 seconds |
| **Action** | `core.echo` | Echoes a message to stdout |
| **Rule** | `core.rule.timer_10s_echo` | Connects the trigger to the action |

### The Flow

```
┌───────────────────────────────────────────┐
│ core.timer_10s_sensor                     │
│ Config: {"unit":"seconds","interval":10}  │
└───────────────────────────────────────────┘
                     │
                     │ every 10 seconds
                     ▼
┌───────────────────────────────────────────┐
│ Event (core.intervaltimer)                │
│ Payload: {"type":"interval",...}          │
└───────────────────────────────────────────┘
                     │
                     │ triggers
                     ▼
┌───────────────────────────────────────────┐
│ Rule: core.rule.timer_10s_echo            │
│ Params: {"message":"hello, world"}        │
└───────────────────────────────────────────┘
                     │
                     │ executes
                     ▼
┌───────────────────────────────────────────┐
│ Action: core.echo                         │
│ Command: echo "hello, world"              │
│ Output: hello, world                      │
└───────────────────────────────────────────┘
```

---

## Next Steps

### Modify the Message

Update the rule to echo a different message:

```sql
UPDATE attune.rule
SET action_params = '{"message": "Attune is running!"}'::jsonb
WHERE ref = 'core.rule.timer_10s_echo';
```

Restart the executor service to pick up the change.

### Create a Different Timer

Create a sensor that fires every 30 seconds:

```sql
INSERT INTO attune.sensor (
  ref, pack, pack_ref, label, description,
  entrypoint, runtime, runtime_ref,
  trigger, trigger_ref, enabled, config
)
VALUES (
  'mypack.timer_30s',
  (SELECT id FROM attune.pack WHERE ref = 'core'),
  'core',
  '30 Second Timer',
  'Fires every 30 seconds',
  'builtin:interval_timer',
  (SELECT id FROM attune.runtime WHERE ref = 'core.sensor.builtin'),
  'core.sensor.builtin',
  (SELECT id FROM attune.trigger WHERE ref = 'core.intervaltimer'),
  'core.intervaltimer',
  true,
  '{"unit": "seconds", "interval": 30}'::jsonb
);
```

Restart the sensor service to activate the new sensor.

### Use Dynamic Parameters

Update the rule to use event data:

```sql
UPDATE attune.rule
SET action_params = '{"message": "Timer fired at {{ trigger.payload.fired_at }}"}'::jsonb
WHERE ref = 'core.rule.timer_10s_echo';
```

The executor will resolve the template with actual event data.
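Conceptually, that resolution step is a string substitution of placeholders with values from the event payload. A minimal shell sketch under stated assumptions: the real logic lives in the executor service, and the `fired_at` value below is hypothetical:

```shell
# Minimal sketch of placeholder resolution; NOT the executor's actual code.
template='Timer fired at {{ trigger.payload.fired_at }}'
fired_at='2024-01-01T00:00:10Z'

# Replace the placeholder with the value from the (hypothetical) event payload.
printf '%s\n' "$template" | sed "s|{{ trigger.payload.fired_at }}|$fired_at|"
# → Timer fired at 2024-01-01T00:00:10Z
```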

---

## Troubleshooting

### No events firing

- Check that the sensor service is running
- Verify the sensor is enabled: `SELECT * FROM attune.sensor WHERE ref = 'core.timer_10s_sensor';`
- Check the sensor service logs for errors

### Events firing but no executions

- Check that the executor service is running
- Verify the rule is enabled: `SELECT * FROM attune.rule WHERE ref = 'core.rule.timer_10s_echo';`
- Check the executor service logs for rule matching

### Executions created but not running

- Check that the worker service is running
- Verify the action exists: `SELECT * FROM attune.action WHERE ref = 'core.echo';`
- Check the worker service logs for execution errors

### Check the database

```sql
-- View recent events
SELECT * FROM attune.event ORDER BY created DESC LIMIT 10;

-- View recent enforcements
SELECT * FROM attune.enforcement ORDER BY created DESC LIMIT 10;

-- View recent executions
SELECT * FROM attune.execution ORDER BY created DESC LIMIT 10;
```

---

## Clean Up

To remove the example data:

```sql
-- Remove the rule
DELETE FROM attune.rule WHERE ref = 'core.rule.timer_10s_echo';

-- Remove the sensor
DELETE FROM attune.sensor WHERE ref = 'core.timer_10s_sensor';

-- (Triggers and actions are part of the core pack; keep them)
```

Or drop and recreate the database:

```bash
sqlx database drop
sqlx database create
sqlx migrate run
```

---

## Learn More

- **Architecture Guide:** `docs/trigger-sensor-architecture.md`
- **Rule Parameters:** `docs/examples/rule-parameter-examples.md`
- **API Documentation:** `docs/api-*.md`
- **Service Details:** `docs/executor-service.md`, `docs/sensor-service.md`, `docs/worker-service.md`

---

**File: docs/guides/quickstart-timer-demo.md** (new, 353 lines)

# Quick Start: Timer Echo Demo

This guide will help you run a simple demonstration of Attune's timer-based automation: an "echo Hello World" action that runs every 10 seconds.

## Prerequisites

- PostgreSQL 14+ running
- RabbitMQ 3.12+ running
- Rust toolchain installed
- `jq` installed (for the setup script)

## Architecture Overview

This demo exercises the complete Attune event flow:

```
Timer Manager → Event → Rule Match → Enforcement → Execution → Worker → Action
```

**Components involved:**

1. **Sensor Service** - Timer manager fires every 10 seconds
2. **API Service** - Provides REST endpoints for rule management
3. **Executor Service** - Processes enforcements and schedules executions
4. **Worker Service** - Executes the echo action

## Step 1: Database Setup

First, ensure your database is running and create the schema:

```bash
# Set the database URL
export DATABASE_URL="postgresql://user:password@localhost:5432/attune"

# Run migrations
cd attune
sqlx database create
sqlx migrate run
```

## Step 2: Seed Core Pack Data

Load the core pack with timer triggers and basic actions:

```bash
psql $DATABASE_URL -f scripts/seed_core_pack.sql
```

This creates:

- **Core pack** with timer triggers and basic actions
- **Timer triggers**: `core.timer_10s`, `core.timer_1m`, `core.timer_hourly`
- **Actions**: `core.echo`, `core.sleep`, `core.noop`
- **Shell runtime** for executing shell commands

## Step 3: Configure Services

Create a configuration file or use environment variables:

```yaml
# config.development.yaml
environment: development

database:
  url: "postgresql://user:password@localhost:5432/attune"
  max_connections: 10

message_queue:
  url: "amqp://guest:guest@localhost:5672/%2F"

api:
  host: "0.0.0.0"
  port: 8080

jwt:
  secret: "your-secret-key-change-in-production"
  access_token_ttl: 3600
  refresh_token_ttl: 604800

worker:
  name: "worker-1"
  max_concurrent_tasks: 10
  task_timeout: 300
```

Or use environment variables:

```bash
export ATTUNE__DATABASE__URL="postgresql://user:password@localhost:5432/attune"
export ATTUNE__MESSAGE_QUEUE__URL="amqp://guest:guest@localhost:5672/%2F"
export ATTUNE__JWT__SECRET="your-secret-key"
```

## Step 4: Create Default User

Create an admin user for API access:

```sql
INSERT INTO attune.identity (username, email, password_hash, enabled)
VALUES (
  'admin',
  'admin@example.com',
  -- Password: 'admin' (hashed with Argon2id)
  '$argon2id$v=19$m=19456,t=2,p=1$...',
  true
);
```

Or use the API's registration endpoint after starting the API service.

## Step 5: Start Services

Open four terminal windows and start each service:

### Terminal 1: API Service

```bash
cd attune
cargo run --bin attune-api
```

Wait for: `Attune API Server listening on 0.0.0.0:8080`

### Terminal 2: Sensor Service

```bash
cd attune
cargo run --bin attune-sensor
```

Wait for: `Started X timer triggers`

### Terminal 3: Executor Service

```bash
cd attune
cargo run --bin attune-executor
```

Wait for: `Executor Service initialized successfully`

### Terminal 4: Worker Service

```bash
cd attune
cargo run --bin attune-worker
```

Wait for: `Attune Worker Service is ready`

## Step 6: Create the Timer Echo Rule

Run the setup script to create a rule that runs echo every 10 seconds:

```bash
cd attune
./scripts/setup_timer_echo_rule.sh
```

The script will:

1. Authenticate with the API
2. Verify core pack, trigger, and action exist
3. Create a rule: `core.timer_echo_10s`
|
||||
|
||||
## Step 7: Observe the System
|
||||
|
||||
### Watch Worker Logs
|
||||
|
||||
In the worker terminal, you should see output every 10 seconds:
|
||||
|
||||
```
|
||||
[INFO] Received execution request: ...
|
||||
[INFO] Executing action core.echo
|
||||
[INFO] Action completed successfully
|
||||
```
|
||||
|
||||
### Watch Sensor Logs
|
||||
|
||||
In the sensor terminal, you should see:
|
||||
|
||||
```
|
||||
[DEBUG] Interval timer core.timer_10s fired
|
||||
[INFO] Generated event 123 from timer trigger core.timer_10s
|
||||
```
|
||||
|
||||
### Watch Executor Logs
|
||||
|
||||
In the executor terminal, you should see:
|
||||
|
||||
```
|
||||
[INFO] Processing enforcement 456
|
||||
[INFO] Scheduling execution for action core.echo
|
||||
[INFO] Execution scheduled: 789
|
||||
```
|
||||
|
||||
### Query via API
|
||||
|
||||
Check recent executions:
|
||||
|
||||
```bash
|
||||
# Get auth token
|
||||
TOKEN=$(curl -s -X POST http://localhost:8080/auth/login \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"username":"admin","password":"admin"}' | jq -r '.data.access_token')
|
||||
|
||||
# List recent executions
|
||||
curl -H "Authorization: Bearer $TOKEN" \
|
||||
http://localhost:8080/api/v1/executions | jq '.data[0:5]'
|
||||
|
||||
# Get specific execution details
|
||||
curl -H "Authorization: Bearer $TOKEN" \
|
||||
http://localhost:8080/api/v1/executions/123 | jq
|
||||
```
|
||||
|
||||
## Step 8: Experiment
|
||||
|
||||
### Change the Timer Interval
|
||||
|
||||
Edit the trigger in the database to fire every 5 seconds:
|
||||
|
||||
```sql
|
||||
UPDATE attune.trigger
|
||||
SET param_schema = '{"type": "interval", "seconds": 5}'
|
||||
WHERE ref = 'core.timer_10s';
|
||||
```
|
||||
|
||||
Restart the sensor service to pick up the change.
|
||||
|
||||
### Change the Echo Message
|
||||
|
||||
Update the rule's action parameters:
|
||||
|
||||
```bash
|
||||
curl -X PUT http://localhost:8080/api/v1/rules/core.timer_echo_10s \
|
||||
-H "Authorization: Bearer $TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"action_params": {
|
||||
"message": "Custom message from timer!"
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
### Add Rule Conditions
|
||||
|
||||
Modify the rule to only fire during business hours (requires implementing rule conditions):
|
||||
|
||||
```json
|
||||
{
|
||||
"conditions": {
|
||||
"condition": "all",
|
||||
"rules": [
|
||||
{
|
||||
"field": "fired_at",
|
||||
"operator": "greater_than",
|
||||
"value": "09:00:00"
|
||||
},
|
||||
{
|
||||
"field": "fired_at",
|
||||
"operator": "less_than",
|
||||
"value": "17:00:00"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
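A condition like the one above can be evaluated without any time parsing, because fixed-width `HH:MM:SS` strings compare correctly under ordinary lexicographic ordering. The following sketch is a hypothetical illustration of that check (the function name and signature are this guide's assumptions, not the executor's actual API):

```rust
// Hypothetical evaluator for the business-hours condition above.
// "all" semantics: every rule in the condition group must pass.
fn within_business_hours(fired_at: &str, open: &str, close: &str) -> bool {
    // Fixed-width "HH:MM:SS" strings compare correctly lexicographically,
    // so greater_than/less_than map directly onto string comparison.
    fired_at > open && fired_at < close
}
```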

### Create a Cron-Based Rule

Create a rule that fires on a cron schedule:

```bash
curl -X POST http://localhost:8080/api/v1/rules \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ref": "core.hourly_echo",
    "pack": 1,
    "pack_ref": "core",
    "label": "Hourly Echo",
    "description": "Echoes a message every hour",
    "enabled": true,
    "trigger_ref": "core.timer_hourly",
    "action_ref": "core.echo",
    "action_params": {
      "message": "Hourly chime!"
    }
  }'
```

## Troubleshooting

### Timer Not Firing

1. **Check sensor service logs** for "Started X timer triggers"
2. **Verify the trigger is enabled**: `SELECT * FROM attune.trigger WHERE ref = 'core.timer_10s';`
3. **Check the timer configuration**: ensure `param_schema` has a valid timer config

### No Executions Created

1. **Check the rule exists and is enabled**: `SELECT * FROM attune.rule WHERE ref = 'core.timer_echo_10s';`
2. **Check sensor logs** for event generation
3. **Check executor logs** for enforcement processing

### Worker Not Executing

1. **Check the worker service is running** and connected to the message queue
2. **Check executor logs** for "Execution scheduled" messages
3. **Verify the runtime exists**: `SELECT * FROM attune.runtime WHERE ref = 'shell';`
4. **Check the worker has permission** to execute shell commands

### Database Connection Errors

1. Verify PostgreSQL is running
2. Check the connection string is correct
3. Ensure the `attune` database exists
4. Verify migrations ran successfully

### Message Queue Errors

1. Verify RabbitMQ is running
2. Check the connection string is correct
3. Ensure exchanges and queues are created

## Next Steps

- **Add more actions**: Create Python or Node.js actions
- **Create workflows**: Chain multiple actions together
- **Add policies**: Implement concurrency limits or rate limiting
- **Human-in-the-loop**: Add inquiry actions that wait for user input
- **Custom sensors**: Write sensors that monitor external systems
- **Webhooks**: Implement webhook triggers for external events

## Clean Up

To stop the demo:

1. Press Ctrl+C in each service terminal
2. Disable the rule:
   ```bash
   curl -X PUT http://localhost:8080/api/v1/rules/core.timer_echo_10s \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"enabled": false}'
   ```
3. (Optional) Clean up data:
   ```sql
   DELETE FROM attune.execution WHERE created < NOW() - INTERVAL '1 hour';
   DELETE FROM attune.event WHERE created < NOW() - INTERVAL '1 hour';
   DELETE FROM attune.enforcement WHERE created < NOW() - INTERVAL '1 hour';
   ```

## Learn More

- [Architecture Overview](architecture.md)
- [Data Model](data-model.md)
- [API Documentation](api-overview.md)
- [Creating Custom Actions](creating-actions.md)
- [Writing Sensors](writing-sensors.md)
392
docs/guides/timer-sensor-quickstart.md
Normal file
@@ -0,0 +1,392 @@
# Timer Sensor Quick Start Guide

**Last Updated:** 2025-01-27
**Audience:** Developers

## Overview

This guide will help you get the timer sensor up and running for development and testing.

## Prerequisites

- Rust 1.70+ installed
- PostgreSQL 14+ running
- RabbitMQ 3.12+ running
- Attune API service running

## Step 1: Start Dependencies

### Using Docker Compose

```bash
# From the project root
docker-compose up -d postgres rabbitmq
```

### Manual Setup

```bash
# PostgreSQL (already running on localhost:5432)
# RabbitMQ (already running on localhost:5672)
```

Verify the services are running:

```bash
# PostgreSQL
psql -h localhost -U postgres -c "SELECT version();"

# RabbitMQ
rabbitmqadmin list queues
```

## Step 2: Start the API Service

```bash
# Terminal 1
cd attune
make run-api

# Or manually:
cd crates/api
cargo run
```

Verify the API is running:

```bash
curl http://localhost:8080/health
```

## Step 3: Create a Service Account for the Sensor

**NOTE:** Service accounts are not yet implemented. This step will be available once the service account system is in place.

For now, use a user token or skip authentication during development.

### When Service Accounts Are Implemented

```bash
# Get admin token (from login or an existing session)
export ADMIN_TOKEN="your_admin_token_here"

# Create the sensor service account
curl -X POST http://localhost:8080/service-accounts \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "sensor:core.timer",
    "scope": "sensor",
    "description": "Timer sensor for development",
    "ttl_days": 90,
    "metadata": {
      "trigger_types": ["core.timer"]
    }
  }'

# Save the returned token
export SENSOR_TOKEN="eyJhbGci..."
```

## Step 4: Start the Timer Sensor

```bash
# Terminal 2
cd attune

# Set environment variables
export ATTUNE_API_URL="http://localhost:8080"
export ATTUNE_API_TOKEN="your_sensor_token_here"  # Or a user token for now
export ATTUNE_SENSOR_REF="core.timer"
export ATTUNE_MQ_URL="amqp://localhost:5672"
export ATTUNE_LOG_LEVEL="debug"

# Run the sensor
cargo run --package core-timer-sensor
```

You should see output like:

```json
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Starting Attune Timer Sensor"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Configuration loaded successfully","sensor_ref":"core.timer","api_url":"http://localhost:8080"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"API connectivity verified"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Timer manager initialized"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Connected to RabbitMQ"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Started consuming messages from queue 'sensor.core.timer'"}
```

## Step 5: Create a Timer-Based Rule

### Via API

```bash
# Create a simple timer rule that fires every 5 seconds
curl -X POST http://localhost:8080/rules \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "ref": "timer_every_5s",
    "label": "Timer Every 5 Seconds",
    "description": "Test timer that fires every 5 seconds",
    "pack": "core",
    "trigger_type": "core.timer",
    "trigger_params": {
      "type": "interval",
      "interval": 5,
      "unit": "seconds"
    },
    "action_ref": "core.echo",
    "action_params": {
      "message": "Timer fired!"
    },
    "enabled": true
  }'
```

### Via CLI

```bash
# Not yet implemented
# attune rule create timer_every_5s --trigger core.timer --action core.echo
```

## Step 6: Watch the Sensor Logs

In the sensor terminal, you should see:

```json
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Handling RuleCreated","rule_id":123,"ref":"timer_every_5s"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Starting timer for rule 123"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Timer started for rule 123"}
{"timestamp":"2025-01-27T12:34:56Z","level":"info","message":"Interval timer loop started for rule 123","interval":5}
```

After 5 seconds:

```json
{"timestamp":"2025-01-27T12:35:01Z","level":"info","message":"Timer fired for rule 123, created event 456"}
```

## Step 7: Verify Events Are Created

```bash
# List events
curl http://localhost:8080/events \
  -H "Authorization: Bearer ${TOKEN}"

# Should show events with trigger_type "core.timer"
```

## Step 8: Test Rule Disable/Enable

### Disable the rule

```bash
curl -X POST http://localhost:8080/rules/timer_every_5s/disable \
  -H "Authorization: Bearer ${TOKEN}"
```

The sensor logs should show:

```json
{"timestamp":"2025-01-27T12:35:10Z","level":"info","message":"Handling RuleDisabled","rule_id":123}
{"timestamp":"2025-01-27T12:35:10Z","level":"info","message":"Stopped timer for rule 123"}
```

### Re-enable the rule

```bash
curl -X POST http://localhost:8080/rules/timer_every_5s/enable \
  -H "Authorization: Bearer ${TOKEN}"
```

The sensor logs should show:

```json
{"timestamp":"2025-01-27T12:35:20Z","level":"info","message":"Handling RuleEnabled","rule_id":123}
{"timestamp":"2025-01-27T12:35:20Z","level":"info","message":"Starting timer for rule 123"}
```

## Step 9: Test Different Timer Types

### Every 1 minute

```json
{
  "trigger_params": {
    "type": "interval",
    "interval": 1,
    "unit": "minutes"
  }
}
```

### Every 1 hour

```json
{
  "trigger_params": {
    "type": "interval",
    "interval": 1,
    "unit": "hours"
  }
}
```

### One-time at a specific datetime

```json
{
  "trigger_params": {
    "type": "date_time",
    "fire_at": "2025-01-27T15:00:00Z"
  }
}
```
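Before arming a timer, the sensor has to normalize the `interval`/`unit` pair above into a single duration. This helper is a hypothetical sketch of that conversion (the function name is this guide's assumption; the unit strings mirror the JSON examples):

```rust
// Hypothetical normalization of trigger_params into seconds.
// Returning None rejects trigger_params with an unknown unit.
fn interval_seconds(interval: u64, unit: &str) -> Option<u64> {
    match unit {
        "seconds" => Some(interval),
        "minutes" => Some(interval * 60),
        "hours" => Some(interval * 3600),
        _ => None, // unknown unit: invalid trigger_params
    }
}
```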

## Development Workflow

### Making Changes to the Sensor

```bash
# 1. Make code changes in crates/core-timer-sensor/src/

# 2. Build and check for errors
cargo build --package core-timer-sensor

# 3. Run tests
cargo test --package core-timer-sensor

# 4. Restart the sensor
# Stop with Ctrl+C, then:
cargo run --package core-timer-sensor
```

### Testing Edge Cases

1. **Sensor restart with active rules:**
   - Create a rule
   - Stop the sensor (Ctrl+C)
   - Start the sensor again
   - Verify it loads and starts the timer for existing rules

2. **Multiple rules with different intervals:**
   - Create 3 rules with 5s, 10s, and 15s intervals
   - Verify all timers fire independently

3. **Rule updates:**
   - Update a rule's trigger_params
   - Currently requires a disable/enable cycle
   - Future: should handle updates automatically

4. **Network failures:**
   - Stop the API service
   - Observe sensor logs (should show retry attempts)
   - Restart the API
   - Verify the sensor reconnects

## Debugging

### Enable Debug Logging

```bash
export ATTUNE_LOG_LEVEL="debug"
cargo run --package core-timer-sensor
```

### Common Issues

**"Failed to connect to Attune API"**
- Verify the API is running: `curl http://localhost:8080/health`
- Check `ATTUNE_API_URL` is correct

**"Failed to connect to RabbitMQ"**
- Verify RabbitMQ is running: `rabbitmqctl status`
- Check `ATTUNE_MQ_URL` is correct
- Try: `amqp://guest:guest@localhost:5672/%2F`

**"Insufficient permissions to create event"**
- The service account system is not yet implemented
- Use a user token temporarily, or wait for the service account implementation

**"Timer not firing"**
- Check sensor logs for "Timer started for rule X"
- Verify the rule is enabled
- Check the trigger_params format is correct
- Enable debug logging to see more details

**"No timers loaded on startup"**
- The API endpoint `/rules?trigger_type=core.timer` is not yet implemented
- Create a rule after the sensor starts
- Timers will be managed via RabbitMQ messages

## Next Steps

1. **Implement the Service Account System** - See `docs/service-accounts.md`
2. **Add Cron Timer Support** - Implement cron parsing and scheduling
3. **Add Tests** - Integration tests for the full sensor workflow
4. **Add Metrics** - Prometheus metrics for monitoring
5. **Production Deployment** - systemd service, Docker image, Kubernetes deployment

## Resources

- [Sensor Interface Specification](./sensor-interface.md)
- [Service Accounts Documentation](./service-accounts.md)
- [Timer Sensor README](../../crates/core-timer-sensor/README.md)
- [Sensor Authentication Overview](./sensor-authentication-overview.md)

## Troubleshooting Tips

### View RabbitMQ Queues

```bash
# List all queues
rabbitmqadmin list queues

# Should see: sensor.core.timer

# View messages in the queue
rabbitmqadmin get queue=sensor.core.timer count=10
```

### View Sensor Queue Bindings

```bash
# List bindings for the sensor queue
rabbitmqadmin list bindings | grep sensor.core.timer

# Should see bindings for:
# - rule.created
# - rule.enabled
# - rule.disabled
# - rule.deleted
```

### Monitor API Logs

```bash
# In the API terminal, you should see:
# "Published RuleCreated message for rule timer_every_5s"
# "Published RuleEnabled message for rule timer_every_5s"
# "Published RuleDisabled message for rule timer_every_5s"
```

### Test Token Manually

```bash
# Decode the JWT payload to inspect claims
# (note: @base64d can fail if the base64url segment needs padding)
echo "eyJhbGci..." | jq -R 'split(".") | .[1] | @base64d | fromjson'

# Should show:
# {
#   "sub": "sensor:core.timer",
#   "scope": "sensor",
#   "metadata": {
#     "trigger_types": ["core.timer"]
#   }
# }
```

## Happy Hacking! 🚀
563
docs/guides/workflow-quickstart.md
Normal file
@@ -0,0 +1,563 @@
# Workflow Orchestration Quick-Start Guide

This guide helps developers get started implementing the workflow orchestration feature in Attune.

## Overview

Workflows are composable YAML-based action graphs that enable complex automation. This implementation adds workflow capabilities to Attune over 5 phases spanning 9 weeks.

## Before You Start

### Required Reading
1. `docs/workflow-orchestration.md` - Full technical design (1,063 lines)
2. `docs/workflow-implementation-plan.md` - Implementation roadmap (562 lines)
3. `docs/workflow-summary.md` - Quick reference (400 lines)

### Required Knowledge
- Rust async programming (tokio)
- PostgreSQL and SQLx
- RabbitMQ message patterns
- Graph algorithms (basic traversal)
- Template engines (Jinja2-style syntax)

### Development Environment
```bash
# Ensure you have:
# - Rust 1.70+
# - PostgreSQL 14+
# - RabbitMQ 3.12+
# - Docker (for testing)

# Clone and set up
cd attune
cargo build
```

## Implementation Phases

### Phase 1: Foundation (Weeks 1-2)

**Goal**: Database schema, models, parser, template engine

#### Step 1.1: Database Migration
```bash
# Create the migration file
cd migrations
touch 020_workflow_orchestration.sql
```

Copy content from `docs/examples/workflow-migration.sql`:
- 3 new tables: `workflow_definition`, `workflow_execution`, `workflow_task_execution`
- Modify the `action` table with `is_workflow` and `workflow_def` columns
- Add indexes, triggers, views

Run the migration:
```bash
sqlx migrate run
```

#### Step 1.2: Data Models
Add to `crates/common/src/models.rs`:

```rust
pub mod workflow {
    use super::*;

    #[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
    pub struct WorkflowDefinition {
        pub id: Id,
        pub r#ref: String,
        pub pack: Id,
        pub pack_ref: String,
        pub label: String,
        pub description: Option<String>,
        pub version: String,
        pub param_schema: Option<JsonSchema>,
        pub out_schema: Option<JsonSchema>,
        pub definition: JsonValue, // Full workflow YAML as JSON
        pub tags: Vec<String>,
        pub enabled: bool,
        pub created: DateTime<Utc>,
        pub updated: DateTime<Utc>,
    }

    #[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
    pub struct WorkflowExecution {
        pub id: Id,
        pub execution: Id,
        pub workflow_def: Id,
        pub current_tasks: Vec<String>,
        pub completed_tasks: Vec<String>,
        pub failed_tasks: Vec<String>,
        pub skipped_tasks: Vec<String>,
        pub variables: JsonValue,
        pub task_graph: JsonValue,
        pub status: ExecutionStatus,
        pub error_message: Option<String>,
        pub paused: bool,
        pub pause_reason: Option<String>,
        pub created: DateTime<Utc>,
        pub updated: DateTime<Utc>,
    }

    #[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
    pub struct WorkflowTaskExecution {
        pub id: Id,
        pub workflow_execution: Id,
        pub execution: Id,
        pub task_name: String,
        pub task_index: Option<i32>,
        pub task_batch: Option<i32>,
        pub status: ExecutionStatus,
        pub started_at: Option<DateTime<Utc>>,
        pub completed_at: Option<DateTime<Utc>>,
        pub duration_ms: Option<i64>,
        pub result: Option<JsonValue>,
        pub error: Option<JsonValue>,
        pub retry_count: i32,
        pub max_retries: i32,
        pub next_retry_at: Option<DateTime<Utc>>,
        pub timeout_seconds: Option<i32>,
        pub timed_out: bool,
        pub created: DateTime<Utc>,
        pub updated: DateTime<Utc>,
    }
}
```

#### Step 1.3: Repositories
Create `crates/common/src/repositories/workflow_definition.rs`:

```rust
use sqlx::PgPool;
use crate::models::workflow::WorkflowDefinition;
use crate::error::Result;

pub struct WorkflowDefinitionRepository;

impl WorkflowDefinitionRepository {
    pub async fn create(pool: &PgPool, def: &WorkflowDefinition) -> Result<WorkflowDefinition> {
        // INSERT implementation
        todo!()
    }

    pub async fn find_by_ref(pool: &PgPool, ref_: &str) -> Result<Option<WorkflowDefinition>> {
        // SELECT WHERE ref = ?
        todo!()
    }

    pub async fn list_by_pack(pool: &PgPool, pack_id: i64) -> Result<Vec<WorkflowDefinition>> {
        // SELECT WHERE pack = ?
        todo!()
    }

    // ... other CRUD methods
}
```

Create similar repositories for `workflow_execution` and `workflow_task_execution`.

#### Step 1.4: YAML Parser
Create `crates/executor/src/workflow/parser.rs`:

```rust
use serde::{Deserialize, Serialize};
use std::collections::HashMap;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkflowSpec {
    pub r#ref: String,
    pub label: String,
    pub description: Option<String>,
    pub version: String,
    pub parameters: Option<serde_json::Value>,
    pub output: Option<serde_json::Value>,
    pub vars: HashMap<String, serde_json::Value>,
    pub tasks: Vec<TaskSpec>,
    pub output_map: Option<HashMap<String, String>>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TaskSpec {
    pub name: String,
    #[serde(rename = "type")]
    pub task_type: Option<TaskType>,
    pub action: Option<String>,
    pub input: HashMap<String, String>,
    pub publish: Option<Vec<String>>,
    pub on_success: Option<String>,
    pub on_failure: Option<String>,
    pub on_complete: Option<String>,
    pub on_timeout: Option<String>,
    pub decision: Option<Vec<DecisionBranch>>,
    pub when: Option<String>,
    pub with_items: Option<String>,
    pub batch_size: Option<usize>,
    pub retry: Option<RetryPolicy>,
    pub timeout: Option<u64>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum TaskType {
    Action,
    Parallel,
    Workflow,
}

// `DecisionBranch` and `RetryPolicy` live in this module as well;
// they are omitted here for brevity.
pub fn parse_workflow_yaml(yaml: &str) -> Result<WorkflowSpec> {
    serde_yaml::from_str(yaml)
        .map_err(|e| Error::InvalidWorkflowDefinition(e.to_string()))
}
```

#### Step 1.5: Template Engine
Add to `crates/executor/Cargo.toml`:
```toml
[dependencies]
tera = "1.19"
```

Create `crates/executor/src/workflow/context.rs`:

```rust
use tera::{Tera, Context};
use std::collections::HashMap;
use serde_json::{json, Value as JsonValue};

pub struct WorkflowContext {
    pub execution_id: i64,
    pub parameters: JsonValue,
    pub vars: HashMap<String, JsonValue>,
    pub task_results: HashMap<String, TaskResult>,
    pub pack_config: JsonValue,
    pub system: SystemContext,
}

impl WorkflowContext {
    pub fn new(execution_id: i64, parameters: JsonValue) -> Self {
        Self {
            execution_id,
            parameters,
            vars: HashMap::new(),
            task_results: HashMap::new(),
            pack_config: JsonValue::Null,
            system: SystemContext::default(),
        }
    }

    pub fn render_template(&self, template: &str) -> Result<String> {
        let mut tera = Tera::default();
        let context = self.to_tera_context();
        tera.render_str(template, &context)
            .map_err(|e| Error::TemplateError(e.to_string()))
    }

    fn to_tera_context(&self) -> Context {
        let mut ctx = Context::new();
        ctx.insert("parameters", &self.parameters);
        ctx.insert("vars", &self.vars);
        ctx.insert("task", &self.task_results);
        ctx.insert("system", &self.system);
        ctx.insert("pack", &json!({"config": self.pack_config}));
        ctx
    }
}
```
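To make the context concrete, here is a deliberately tiny stand-in for what `render_template` does: replace `{{ key }}` placeholders with values from a map. Tera additionally handles nesting, filters, and conditionals; this flat-substitution sketch is an illustration only, not part of the design:

```rust
use std::collections::HashMap;

// Flat "{{ key }}" substitution, illustrating the template step only.
fn render_flat(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (k, v) in vars {
        // "{{{{" and "}}}}" are escaped literal braces in format!.
        out = out.replace(&format!("{{{{ {} }}}}", k), v);
    }
    out
}
```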

**Phase 1 Testing**:
```bash
cargo test -p attune-common workflow
cargo test -p attune-executor workflow::parser
```

---

### Phase 2: Execution Engine (Weeks 3-4)

**Goal**: Graph builder, workflow executor, message handlers

#### Step 2.1: Task Graph
Create `crates/executor/src/workflow/graph.rs`:

```rust
use std::collections::HashMap;

pub struct TaskGraph {
    nodes: HashMap<String, TaskNode>,
    edges: HashMap<String, Vec<Edge>>,
}

impl TaskGraph {
    pub fn from_workflow_spec(spec: &WorkflowSpec) -> Result<Self> {
        // Build the graph from task definitions:
        // - create a node for each task
        // - create edges from transitions (on_success, on_failure, etc.)
        todo!()
    }

    pub fn get_entry_tasks(&self) -> Vec<&TaskNode> {
        // Return tasks with no incoming edges
        todo!()
    }

    pub fn get_next_tasks(&self, completed_task: &str, result: &TaskResult) -> Vec<&TaskNode> {
        // Follow edges based on the result (success/failure)
        todo!()
    }
}
```
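The entry-task computation named above reduces to a simple graph fact: an entry task is any node that no edge points at. This sketch demonstrates it with plain string names instead of the `TaskNode`/`Edge` types (which this guide has not defined):

```rust
use std::collections::{HashMap, HashSet};

// Entry tasks: nodes that never appear as an edge target.
fn entry_tasks(nodes: &[&str], edges: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    // Collect every task that some edge points at.
    let targets: HashSet<&str> = edges.values().flatten().copied().collect();
    nodes
        .iter()
        .filter(|n| !targets.contains(**n))
        .map(|n| n.to_string())
        .collect()
}
```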

#### Step 2.2: Workflow Executor
Create `crates/executor/src/workflow/executor.rs`:

```rust
pub struct WorkflowExecutor {
    pool: PgPool,
    publisher: MessagePublisher,
}

impl WorkflowExecutor {
    pub async fn execute_workflow(
        &self,
        execution_id: i64,
        workflow_ref: &str,
        parameters: JsonValue,
    ) -> Result<()> {
        // 1. Load the workflow definition
        // 2. Create a workflow_execution record
        // 3. Initialize the context
        // 4. Build the task graph
        // 5. Schedule the initial tasks
        todo!()
    }

    pub async fn handle_task_completion(
        &self,
        workflow_execution_id: i64,
        task_name: String,
        result: TaskResult,
    ) -> Result<()> {
        // 1. Update workflow_task_execution
        // 2. Publish variables
        // 3. Determine the next tasks
        // 4. Schedule the next tasks
        // 5. Check whether the workflow is complete
        todo!()
    }
}
```

#### Step 2.3: Message Handlers
Integrate with the existing executor message loops:

```rust
// In executor/src/main.rs
async fn start_workflow_message_handlers(
    pool: PgPool,
    publisher: MessagePublisher,
) -> Result<()> {
    let executor = WorkflowExecutor::new(pool.clone(), publisher.clone());

    // Listen for execution.completed on workflow tasks
    let consumer = create_consumer("workflow.task.completions").await?;

    consumer.consume_with_handler(move |envelope| {
        let executor = executor.clone();
        async move {
            executor.handle_task_completion(
                envelope.payload.workflow_execution_id,
                envelope.payload.task_name,
                envelope.payload.result,
            ).await
        }
    }).await?;

    Ok(())
}
```

**Phase 2 Testing**:
```bash
cargo test -p attune-executor workflow::graph
cargo test -p attune-executor workflow::executor
```

---

### Phase 3: Advanced Features (Weeks 5-6)

**Goal**: Iteration, parallelism, retry, conditionals

#### Step 3.1: Iteration
Create `crates/executor/src/workflow/iterator.rs`:

```rust
pub struct TaskIterator {
    items: Vec<JsonValue>,
    batch_size: Option<usize>,
}

impl TaskIterator {
    pub fn from_template(
        template: &str,
        context: &WorkflowContext,
        batch_size: Option<usize>,
    ) -> Result<Self> {
        let rendered = context.render_template(template)?;
        let items: Vec<JsonValue> = serde_json::from_str(&rendered)?;
        Ok(Self { items, batch_size })
    }

    pub fn batches(&self) -> Vec<Vec<&JsonValue>> {
        if let Some(size) = self.batch_size {
            self.items.chunks(size).map(|c| c.iter().collect()).collect()
        } else {
            vec![self.items.iter().collect()]
        }
    }
}
```

#### Step 3.2: Parallel Execution

Create `crates/executor/src/workflow/parallel.rs`:

```rust
pub struct ParallelExecutor {
    // Execute multiple tasks simultaneously
    // Wait for all to complete
    // Aggregate results
}
```
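
The stub above could be filled in along these lines — a minimal fan-out/join sketch, assuming tasks can be represented as boxed closures returning a `TaskResult` (a hypothetical stand-in type here). It uses `std::thread` for illustration only; the real executor would presumably dispatch over the async runtime and message bus instead:

```rust
use std::thread;

// Hypothetical stand-in for the project's task result type.
#[derive(Debug, PartialEq)]
struct TaskResult(i64);

/// Spawn every task on its own thread, wait for all to finish,
/// and aggregate the results in spawn order.
fn run_parallel(tasks: Vec<Box<dyn FnOnce() -> TaskResult + Send>>) -> Vec<TaskResult> {
    let handles: Vec<_> = tasks.into_iter().map(thread::spawn).collect();
    handles
        .into_iter()
        .map(|h| h.join().expect("task panicked"))
        .collect()
}

fn main() {
    let tasks: Vec<Box<dyn FnOnce() -> TaskResult + Send>> = (0..4)
        .map(|i| -> Box<dyn FnOnce() -> TaskResult + Send> {
            Box::new(move || TaskResult(i * 10))
        })
        .collect();
    let results = run_parallel(tasks);
    assert_eq!(
        results,
        vec![TaskResult(0), TaskResult(10), TaskResult(20), TaskResult(30)]
    );
    println!("all tasks joined");
}
```

Joining handles in spawn order keeps the aggregated results deterministic even though the tasks themselves finish in any order.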

#### Step 3.3: Retry Logic

Create `crates/executor/src/workflow/retry.rs`:

```rust
pub struct RetryHandler {
    // Exponential/linear/constant backoff
    // Max retries
    // Condition evaluation
}
```
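
The backoff arithmetic the stub mentions can be sketched with the standard library alone. The `Backoff` variants and field names below are illustrative assumptions, not the project's actual schema:

```rust
use std::time::Duration;

/// Illustrative backoff policies (names assumed, not the project's schema).
enum Backoff {
    Constant(Duration),
    Linear(Duration),
    Exponential { base: Duration, factor: u32 },
}

/// Delay to sleep before retry number `attempt` (0-based).
fn delay_for(policy: &Backoff, attempt: u32) -> Duration {
    match policy {
        Backoff::Constant(d) => *d,
        Backoff::Linear(step) => *step * (attempt + 1),
        Backoff::Exponential { base, factor } => *base * factor.pow(attempt),
    }
}

fn main() {
    let exp = Backoff::Exponential { base: Duration::from_secs(1), factor: 2 };
    // Prints 1s, 2s, 4s, 8s for attempts 0 through 3.
    for attempt in 0..4 {
        println!("attempt {attempt}: wait {:?}", delay_for(&exp, attempt));
    }
    assert_eq!(delay_for(&exp, 3), Duration::from_secs(8));
}
```

A real handler would also cap the delay, track a max-retries budget, and evaluate the retry condition against the task result before sleeping.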

---

### Phase 4: API & Tools (Weeks 7-8)

**Goal**: REST endpoints, validation, pack integration

#### Step 4.1: API Routes

Create `crates/api/src/routes/workflows.rs`:

```rust
pub fn workflow_routes() -> Router {
    Router::new()
        .route("/packs/:pack_ref/workflows", post(create_workflow))
        .route("/packs/:pack_ref/workflows", get(list_workflows))
        .route("/workflows/:workflow_ref", get(get_workflow))
        .route("/workflows/:workflow_ref", put(update_workflow))
        .route("/workflows/:workflow_ref", delete(delete_workflow))
        .route("/workflows/:workflow_ref/execute", post(execute_workflow))
}
```

#### Step 4.2: Pack Integration

Update pack registration to scan the `workflows/` directory:

```rust
// In pack registration logic
async fn register_workflows_in_pack(pool: &PgPool, pack_id: i64, pack_path: &Path) -> Result<()> {
    let workflows_dir = pack_path.join("workflows");
    if !workflows_dir.exists() {
        return Ok(());
    }

    for entry in std::fs::read_dir(workflows_dir)? {
        let path = entry?.path();
        if path.extension() == Some("yaml".as_ref()) {
            let yaml = std::fs::read_to_string(&path)?;
            let spec = parse_workflow_yaml(&yaml)?;

            // Create workflow_definition
            // Create synthetic action with is_workflow=true
        }
    }

    Ok(())
}
```

---

### Phase 5: Testing & Documentation (Week 9)

**Goal**: Comprehensive tests and documentation

#### Integration Tests

Create `crates/executor/tests/workflow_integration.rs`:

```rust
#[tokio::test]
async fn test_simple_sequential_workflow() {
    // Test basic workflow execution
}

#[tokio::test]
async fn test_parallel_execution() {
    // Test parallel tasks
}

#[tokio::test]
async fn test_conditional_branching() {
    // Test decision trees
}

#[tokio::test]
async fn test_iteration_with_batching() {
    // Test with-items
}
```
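
One of these stubs can be exercised without the full runtime. The sketch below mirrors the batching behaviour `test_iteration_with_batching` would assert, using a stdlib-only local copy of the chunking logic from `TaskIterator::batches` in Step 3.1 (the real test would of course go through `TaskIterator` itself):

```rust
/// Stdlib-only mirror of TaskIterator::batches from Step 3.1:
/// chunk when a batch size is given, otherwise one batch with everything.
fn batches(items: &[i64], batch_size: Option<usize>) -> Vec<Vec<&i64>> {
    match batch_size {
        Some(size) => items.chunks(size).map(|c| c.iter().collect()).collect(),
        None => vec![items.iter().collect()],
    }
}

fn main() {
    let items = [1, 2, 3, 4, 5];

    // Batch size 2 over 5 items -> 3 batches, the last one partial.
    let chunked = batches(&items, Some(2));
    assert_eq!(chunked.len(), 3);
    assert_eq!(chunked[2], vec![&5]);

    // No batch size -> a single batch containing every item.
    let single = batches(&items, None);
    assert_eq!(single.len(), 1);
    assert_eq!(single[0].len(), 5);
    println!("batching checks passed");
}
```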

---

## Development Tips

### Debugging Workflows

```bash
# Enable debug logging
RUST_LOG=attune_executor::workflow=debug cargo run

# Watch workflow execution
psql -d attune -c "SELECT * FROM attune.workflow_execution_summary;"

# Check task status (substitute the execution id)
psql -d attune -c "SELECT * FROM attune.workflow_task_detail WHERE workflow_execution = <execution_id>;"
```

### Testing YAML Parsing

```bash
# Validate workflow YAML
cargo run --bin attune-cli -- workflow validate path/to/workflow.yaml
```

### Common Pitfalls

1. **Circular Dependencies**: Validate the task graph for cycles before execution
2. **Template Errors**: Always handle template rendering failures
3. **Variable Scope**: Test all 6 scopes independently
4. **Message Ordering**: Ensure task completions are processed in order
5. **Resource Limits**: Enforce max tasks, depth, and iterations
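
The first pitfall can be guarded with a standard topological sort before execution. Below is a minimal sketch using Kahn's algorithm over a plain adjacency map; the map shape is an assumption for illustration — a real validator would walk the project's task graph type:

```rust
use std::collections::HashMap;

/// Return true if the dependency graph (task -> tasks it unblocks) has a cycle.
/// Kahn's algorithm: repeatedly remove zero-indegree nodes; any leftovers form a cycle.
fn has_cycle(edges: &HashMap<&str, Vec<&str>>) -> bool {
    // Count incoming edges for every node, including pure successors.
    let mut indegree: HashMap<&str, usize> = HashMap::new();
    for (&node, succs) in edges {
        indegree.entry(node).or_insert(0);
        for &s in succs {
            *indegree.entry(s).or_insert(0) += 1;
        }
    }
    let total = indegree.len();

    // Start from every node nothing depends on.
    let mut queue: Vec<&str> = indegree
        .iter()
        .filter(|&(_, &d)| d == 0)
        .map(|(&n, _)| n)
        .collect();

    let mut visited = 0;
    while let Some(node) = queue.pop() {
        visited += 1;
        for &s in edges.get(node).into_iter().flatten() {
            let d = indegree.get_mut(s).unwrap();
            *d -= 1;
            if *d == 0 {
                queue.push(s);
            }
        }
    }
    // If some nodes were never freed, they sit on a cycle.
    visited != total
}

fn main() {
    let mut g = HashMap::new();
    g.insert("a", vec!["b"]);
    g.insert("b", vec!["c"]);
    g.insert("c", vec![]);
    assert!(!has_cycle(&g));

    g.insert("c", vec!["a"]); // introduce a -> b -> c -> a
    assert!(has_cycle(&g));
    println!("cycle checks passed");
}
```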

---

## Resources

- **Design Docs**: `docs/workflow-*.md`
- **Examples**: `docs/examples/simple-workflow.yaml`, `complete-workflow.yaml`
- **Migration**: `docs/examples/workflow-migration.sql`
- **TODO Tasks**: `work-summary/TODO.md`, Phase 8.1

---

## Getting Help

- Review the full design: `docs/workflow-orchestration.md`
- Check the implementation plan: `docs/workflow-implementation-plan.md`
- See examples: `docs/examples/`
- Ask questions in project discussions

---

**Ready to start? Begin with Phase 1, Step 1.1: Database Migration.**