docs/testing/e2e-test-plan.md (diff not shown: file too large)

docs/testing/running-tests.md

# Running Tests - Quick Reference

This guide provides quick commands for running all tests across the Attune project.

**Note:** Attune uses a **schema-per-test architecture** for true test isolation and parallel execution. See [Schema-Per-Test Architecture](./schema-per-test.md) for details.

---

## Quick Start

```bash
# Run all tests (from project root)
make test

# Or run individually by component
make test-common
make test-api
make test-executor
make test-worker
make test-sensor
make test-cli
make test-core-pack
```

---

## By Component

### 1. Common Library

```bash
cd crates/common
cargo test
```

**Coverage**: 539 tests
- Repository tests (all 15 repositories)
- Model validation
- Configuration parsing
- Error handling

**Note**: Tests run in parallel with isolated schemas (no `#[serial]` constraints)

---

### 2. API Service

```bash
cd crates/api
cargo test
```

**Coverage**: 82 tests
- Unit tests (41)
- Integration tests (41)
- Authentication flows
- CRUD operations

**Performance**: ~4-5 seconds (parallel execution with schema isolation)

---

### 3. Executor Service

```bash
cd crates/executor
cargo test
```

**Coverage**: 63 tests
- Unit tests (55)
- Integration tests (8)
- Queue management
- Workflow orchestration

---

### 4. Worker Service

```bash
cd crates/worker
cargo test
```

**Coverage**: 50 tests
- Unit tests (44)
- Security tests (6)
- Action execution
- Dependency isolation

---

### 5. Sensor Service

```bash
cd crates/sensor
cargo test
```

**Coverage**: 27 tests
- Timer sensors
- Interval timers
- Cron timers
- Event generation

---

### 6. CLI Tool

```bash
cd crates/cli
cargo test
```

**Coverage**: 60+ integration tests
- Pack management
- Action execution
- Configuration
- User workflows

---

### 7. Core Pack

```bash
# Bash test runner (fast)
cd packs/core/tests
./run_tests.sh

# Python test suite (comprehensive)
cd packs/core/tests
python3 test_actions.py

# With pytest (recommended)
cd packs/core/tests
pytest test_actions.py -v
```

**Coverage**: 76 tests
- core.echo (7 tests)
- core.noop (8 tests)
- core.sleep (8 tests)
- core.http_request (10 tests)
- File permissions (4 tests)
- YAML validation (optional)

---

## Running Specific Tests

### Rust Tests

```bash
# Run specific test by name
cargo test test_name

# Run tests matching pattern
cargo test pattern

# Run tests in specific module
cargo test module_name::

# Show test output
cargo test -- --nocapture

# Run tests serially (not parallel) - rarely needed with schema-per-test
cargo test -- --test-threads=1

# See verbose output from specific test
cargo test test_name -- --nocapture
```

### Python Tests (Core Pack)

```bash
# Run specific test class
pytest test_actions.py::TestEchoAction -v

# Run specific test method
pytest test_actions.py::TestEchoAction::test_basic_echo -v

# Show output
pytest test_actions.py -v -s
```

---

## Test Requirements

### Rust Tests

**Required**:
- Rust 1.70+
- PostgreSQL (for integration tests)
- RabbitMQ (for integration tests)

**Setup**:
```bash
# Start test dependencies (PostgreSQL)
docker-compose -f docker-compose.test.yaml up -d

# Create test database
createdb -U postgres attune_test

# Run tests (migrations run automatically per test)
cargo test

# Cleanup orphaned test schemas (optional)
./scripts/cleanup-test-schemas.sh
```

**Note**: Each test creates its own isolated schema (`test_<uuid>`), runs migrations, and cleans up automatically.
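
The per-test lifecycle can be sketched in shell. This is only an illustration of the SQL each test effectively issues; the real logic lives in the Rust test helpers, and the random hex suffix here stands in for the UUID they use:

```shell
# Illustrative sketch of the per-test schema lifecycle (not the real helper).
SUFFIX=$(tr -dc 'a-f0-9' </dev/urandom | head -c 32)
SCHEMA="test_${SUFFIX}"

# 1. create an isolated schema, 2. point the connection at it,
# 3. drop it when the test finishes (normally done by TestContext's Drop).
echo "CREATE SCHEMA ${SCHEMA};"
echo "SET search_path TO ${SCHEMA};"
echo "DROP SCHEMA IF EXISTS ${SCHEMA} CASCADE;"
```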

### Core Pack Tests

**Required**:
- bash
- python3

**Optional**:
- `pytest` - Better test output: `pip install pytest`
- `PyYAML` - YAML validation: `pip install pyyaml`
- `requests` - HTTP tests: `pip install 'requests>=2.28.0'`

---

## Continuous Integration

### GitHub Actions

Tests run automatically on:
- Push to main
- Pull requests
- Manual workflow dispatch

View results: `.github/workflows/test.yml`

---

## Test Coverage

### Current Coverage by Component

| Component | Tests | Status | Coverage |
|-----------|-------|--------|----------|
| Common | 539 | ✅ Passing | ~90% |
| API | 82 | ✅ Passing | ~70% |
| Executor | 63 | ✅ Passing | ~85% |
| Worker | 50 | ✅ Passing | ~80% |
| Sensor | 27 | ✅ Passing | ~75% |
| CLI | 60+ | ✅ Passing | ~70% |
| Core Pack | 76 | ✅ Passing | 100% |
| **Total** | **732+** | **✅ 731+ Passing** | **~40%** |

---

## Troubleshooting

### Tests Fail Due to Database

```bash
# Ensure PostgreSQL is running
docker ps | grep postgres

# Check connection
psql -U postgres -h localhost -c "SELECT 1"

# Cleanup orphaned test schemas
./scripts/cleanup-test-schemas.sh --force

# Check for accumulated schemas
psql postgresql://postgres:postgres@localhost:5432/attune_test -c \
  "SELECT COUNT(*) FROM pg_namespace WHERE nspname LIKE 'test_%';"

# If needed, recreate test database
dropdb attune_test
createdb attune_test
```

**Tip**: The schema-per-test approach means you don't need to reset the database between test runs. Each test gets its own isolated schema.

### Tests Fail Due to RabbitMQ

```bash
# Ensure RabbitMQ is running
docker ps | grep rabbitmq

# Check status
rabbitmqctl status

# Reset queues
rabbitmqadmin purge queue name=executor.enforcement
```

### Core Pack Tests Fail

```bash
# Check file permissions
ls -la packs/core/actions/

# Make scripts executable
chmod +x packs/core/actions/*.sh
chmod +x packs/core/actions/*.py

# Install Python dependencies
pip install 'requests>=2.28.0'
```

### Slow Tests

```bash
# Run only fast unit tests (skip integration)
cargo test --lib

# Run specific test suite
cargo test --test integration_test

# Parallel execution (default, recommended with schema-per-test)
cargo test

# Limit parallelism if needed
cargo test -- --test-threads=4

# Serial execution (rarely needed with schema isolation)
cargo test -- --test-threads=1

# Cleanup accumulated schemas if performance degrades
./scripts/cleanup-test-schemas.sh --force
```

**Note**: With schema-per-test isolation, parallel execution is safe and ~4-8x faster than serial execution.

---

## Best Practices

### Before Committing

```bash
# 1. Run all tests
cargo test --all

# 2. Run core pack tests
cd packs/core/tests && ./run_tests.sh

# 3. Check formatting
cargo fmt --check

# 4. Run clippy
cargo clippy -- -D warnings
```

### Writing New Tests

1. **Unit tests**: In the same file as the code

   ```rust
   #[cfg(test)]
   mod tests {
       use super::*;

       #[test]
       fn test_something() {
           // Test code
       }
   }
   ```

2. **Integration tests**: In the `tests/` directory

   ```rust
   // tests/integration_test.rs
   mod helpers;
   use helpers::TestContext;

   #[tokio::test]
   async fn test_integration() {
       // Each test gets an isolated schema automatically
       let ctx = TestContext::new().await;

       // Test code using ctx.pool, ctx.app, etc.
   }
   ```

   **Important**: No need for the `#[serial]` attribute - schema-per-test provides isolation!

3. **Core Pack tests**: Add to both test runners
   - `packs/core/tests/run_tests.sh` for quick tests
   - `packs/core/tests/test_actions.py` for comprehensive tests

---

## Performance Benchmarks

Expected test execution times:

| Component | Time | Notes |
|-----------|------|-------|
| Common | ~0.5s | Parallel execution |
| API | ~4-5s | **75% faster** with schema-per-test |
| Executor | ~6s | Parallel with isolation |
| Worker | ~5s | Parallel execution |
| Sensor | ~3s | Parallel timer tests |
| CLI | ~12s | Integration tests |
| Core Pack (bash) | ~20s | Includes HTTP tests |
| Core Pack (python) | ~12s | Unittest suite |
| **Total** | **~60s** | **4-8x speedup** with parallel execution |

**Performance Improvement**: Schema-per-test architecture enables true parallel execution without `#[serial]` constraints, resulting in 75% faster test runs for integration tests.

---

## Resources

- [Schema-Per-Test Architecture](./schema-per-test.md) - **NEW**: Detailed explanation of test isolation
- [Testing Status](./testing-status.md) - Detailed coverage analysis
- [Core Pack Tests](../../packs/core/tests/README.md) - Core pack testing guide
- [Production Deployment](./production-deployment.md) - Production schema configuration
- [Contributing](../../CONTRIBUTING.md) - Development guidelines

## Maintenance

### Cleanup Orphaned Test Schemas

If tests are interrupted (Ctrl+C, crash), schemas may accumulate:

```bash
# Manual cleanup
./scripts/cleanup-test-schemas.sh

# Force cleanup (no confirmation)
./scripts/cleanup-test-schemas.sh --force

# Check schema count
psql postgresql://postgres:postgres@localhost:5432/attune_test -c \
  "SELECT COUNT(*) FROM pg_namespace WHERE nspname LIKE 'test_%';"
```

Run cleanup periodically or if you notice performance degradation.

---

**Last Updated**: 2026-01-28
**Maintainer**: Attune Team

docs/testing/schema-per-test.md

# Schema-Per-Test Architecture

**Status:** Implemented
**Version:** 1.0
**Last Updated:** 2026-01-28

## Overview

Attune uses a **schema-per-test architecture** to achieve true test isolation and enable parallel test execution. Each test runs in its own dedicated PostgreSQL schema, eliminating shared state and data contamination between tests.

This approach provides:

- ✅ **True Isolation**: Each test has its own complete database schema with independent data
- ✅ **Parallel Execution**: Tests can run concurrently without interference (4-8x faster)
- ✅ **Simple Cleanup**: Just drop the schema instead of complex deletion logic
- ✅ **No Serial Constraints**: No need for `#[serial]` or manual locking
- ✅ **Better Reliability**: Foreign key constraints never conflict between tests

## How It Works

### 1. Schema Creation

When a test starts, a unique schema is created:

```rust
// Test helper creates a unique schema per test
let schema = format!("test_{}", uuid::Uuid::new_v4().simple());

// Create the schema in the database
sqlx::query(&format!("CREATE SCHEMA {}", schema))
    .execute(&pool)
    .await?;

// Set search_path for all connections
sqlx::query(&format!("SET search_path TO {}", schema))
    .execute(&pool)
    .await?;
```

Schema names follow the pattern `test_<uuid>` (e.g., `test_a1b2c3d4e5f6...`).

### 2. Migration Execution

Each test schema gets its own complete set of tables:

```rust
// Run migrations in the test schema
// Migrations are schema-agnostic (no hardcoded "attune." prefixes)
for migration in migrations {
    sqlx::query(&migration.sql)
        .execute(&pool)
        .await?;
}
```

All 17 Attune tables are created:
- `pack`, `action`, `trigger`, `sensor`, `rule`, `event`, `enforcement`
- `execution`, `inquiry`, `identity`, `key`, `workflow_definition`
- `workflow_execution`, `notification`, `artifact`, `queue_stats`, etc.

### 3. Search Path Mechanism

PostgreSQL's `search_path` determines which schema is used for unqualified table names:

```sql
-- Set once per connection
SET search_path TO test_a1b2c3d4;

-- Now all queries use the test schema automatically
SELECT * FROM pack;       -- Resolves to test_a1b2c3d4.pack
INSERT INTO action (...); -- Resolves to test_a1b2c3d4.action
```

This is set via the `after_connect` hook in `Database::new()`:

```rust
.after_connect(move |conn, _meta| {
    let schema = schema_for_hook.clone();
    Box::pin(async move {
        let search_path = if schema.starts_with("test_") {
            format!("SET search_path TO {}", schema)
        } else {
            format!("SET search_path TO {}, public", schema)
        };
        sqlx::query(&search_path).execute(&mut *conn).await?;
        Ok(())
    })
})
```

### 4. Test Execution

Tests run with isolated data:

```rust
#[tokio::test]
async fn test_create_pack() {
    // Each test gets its own TestContext with a unique schema
    let ctx = TestContext::new().await;

    // Create a pack in this test's schema only
    let pack = create_test_pack(&ctx.pool).await;

    // Other tests running in parallel don't see this data
    assert_eq!(pack.name, "test-pack");

    // Cleanup happens automatically when TestContext drops
}
```

### 5. Automatic Cleanup

**The schema is automatically dropped when the test completes** via Rust's `Drop` trait:

```rust
impl Drop for TestContext {
    fn drop(&mut self) {
        // Cleanup happens synchronously to ensure it completes before the test exits
        let schema = self.schema.clone();

        // Block on async cleanup using the current tokio runtime
        if let Ok(handle) = tokio::runtime::Handle::try_current() {
            handle.block_on(async move {
                if let Err(e) = cleanup_test_schema(&schema).await {
                    eprintln!("Failed to cleanup test schema {}: {}", schema, e);
                } else {
                    tracing::info!("Test context cleanup completed for schema: {}", schema);
                }
            });
        }

        // Also clean up the test packs directory
        std::fs::remove_dir_all(&self.test_packs_dir).ok();
    }
}

async fn cleanup_test_schema(schema_name: &str) -> Result<()> {
    // Drop the entire schema with CASCADE
    // This removes all tables, data, functions, types, etc.
    let base_pool = create_base_pool().await?;
    sqlx::query(&format!("DROP SCHEMA IF EXISTS {} CASCADE", schema_name))
        .execute(&base_pool)
        .await?;
    Ok(())
}
```

**Key Points:**
- Cleanup is **synchronous** (blocks until complete) to ensure the schema is dropped before the test exits
- Uses `tokio::runtime::Handle::block_on()` to run async cleanup in the current runtime
- Drops the entire schema with `CASCADE`, removing all objects in one operation
- Also cleans up the test-specific packs directory
- Logs success/failure for debugging

This means **you don't need to clean up manually** - just let `TestContext` go out of scope:

```rust
#[tokio::test]
async fn test_something() {
    let ctx = TestContext::new().await;
    // ... run your test ...
    // Schema automatically dropped here when ctx goes out of scope
}
```

## Production vs. Test Configuration

### Production Configuration

Production always uses the `attune` schema:

```yaml
# config.production.yaml
database:
  schema: "attune"  # REQUIRED: Do not change
```

The database layer validates and logs schema usage:

```rust
if schema != "attune" {
    tracing::warn!("Using non-standard schema: '{}'. Production should use 'attune'", schema);
} else {
    tracing::info!("Using production schema: {}", schema);
}
```

### Test Configuration

Tests use dynamic schemas:

```yaml
# config.test.yaml
database:
  schema: null  # Will be set per-test in TestContext
```

Each test creates its own unique schema at runtime.

## Code Structure

### Test Helper (`crates/api/tests/helpers.rs`)

```rust
pub struct TestContext {
    pub pool: PgPool,
    pub app: Router,
    pub token: Option<String>,
    pub user: Option<Identity>,
    pub schema: String, // Unique per test
}

impl TestContext {
    pub async fn new() -> Self {
        // 1. Connect to the base database
        let base_pool = create_base_pool().await;

        // 2. Create a unique test schema
        let schema = format!("test_{}", uuid::Uuid::new_v4().simple());
        sqlx::query(&format!("CREATE SCHEMA {}", schema))
            .execute(&base_pool)
            .await
            .expect("Failed to create test schema");

        // 3. Create a schema-specific pool with search_path set
        let pool = create_schema_pool(&schema).await;

        // 4. Run migrations in the test schema
        run_test_migrations(&pool, &schema).await;

        // 5. Build the test app
        let app = build_test_app(pool.clone());

        Self {
            pool,
            app,
            token: None,
            user: None,
            schema,
        }
    }
}

impl Drop for TestContext {
    fn drop(&mut self) {
        // Cleanup happens here
    }
}
```

### Database Layer (`crates/common/src/db.rs`)

```rust
impl Database {
    pub async fn new(config: &DatabaseConfig) -> Result<Self> {
        let schema = config.schema.clone().unwrap_or_else(|| "attune".to_string());

        // Validate the schema name (security)
        Self::validate_schema_name(&schema)?;

        // Log schema usage
        if schema != "attune" {
            warn!("Using non-standard schema: '{}'", schema);
        } else {
            info!("Using production schema: {}", schema);
        }

        // Create the pool with a search_path hook
        let schema_for_hook = schema.clone();
        let pool = PgPoolOptions::new()
            .after_connect(move |conn, _meta| {
                let schema = schema_for_hook.clone();
                Box::pin(async move {
                    let search_path = if schema.starts_with("test_") {
                        format!("SET search_path TO {}", schema)
                    } else {
                        format!("SET search_path TO {}, public", schema)
                    };
                    sqlx::query(&search_path).execute(&mut *conn).await?;
                    Ok(())
                })
            })
            .connect(&config.url)
            .await?;

        Ok(Self { pool, schema })
    }
}
```

### Repository Queries (Schema-Agnostic)

All repository queries use unqualified table names:

```rust
// ✅ CORRECT: Schema-agnostic
sqlx::query_as::<_, Pack>("SELECT * FROM pack WHERE id = $1")
    .bind(id)
    .fetch_one(pool)
    .await

// ❌ WRONG: Hardcoded schema
sqlx::query_as::<_, Pack>("SELECT * FROM attune.pack WHERE id = $1")
    .bind(id)
    .fetch_one(pool)
    .await
```

The `search_path` automatically resolves `pack` to the correct schema:
- Production: `attune.pack`
- Test: `test_a1b2c3d4.pack`

### Migration Files (Schema-Agnostic)

Migrations don't specify schema prefixes:

```sql
-- ✅ CORRECT: Schema-agnostic
CREATE TABLE pack (
    id BIGSERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    ...
);

-- ❌ WRONG: Hardcoded schema
CREATE TABLE attune.pack (
    id BIGSERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    ...
);
```

## Running Tests

### Run All Tests (Parallel)

```bash
cargo test
# Tests run in parallel across multiple threads
```

### Run Specific Test File

```bash
cargo test --test api_packs_test
```

### Run Single Test

```bash
cargo test test_create_pack
```

### Verbose Output

```bash
cargo test -- --nocapture --test-threads=1
```

### Using Makefile

```bash
make test             # Run all tests
make test-integration # Run integration tests only
```

## Maintenance

### Cleanup Orphaned Schemas

**Normal test execution:** Schemas are automatically cleaned up via the `Drop` implementation in `TestContext`.

**However, if tests are interrupted** (Ctrl+C, crash, panic before Drop runs, etc.), schemas may accumulate:

```bash
# Manual cleanup
./scripts/cleanup-test-schemas.sh

# With a custom database
DATABASE_URL="postgresql://user:pass@host/db" ./scripts/cleanup-test-schemas.sh

# Force mode (no confirmation)
./scripts/cleanup-test-schemas.sh --force
```

The cleanup script:
- Finds all schemas matching the `test_%` pattern
- Drops them with CASCADE (removes all objects)
- Processes them in batches to avoid shared-memory issues
- Provides progress reporting and verification
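
The batching idea can be sketched in shell. The schema names below are generated locally purely for illustration, and where this sketch echoes a summary, the real script issues the accumulated `DROP SCHEMA ... CASCADE` statements through `psql`:

```shell
# Sketch: drop schemas in batches of 50 so a single transaction never has to
# take locks on hundreds of schemas at once (the "out of shared memory" case).
BATCH_SIZE=50
batches=0
batch=""
count=0
for s in $(seq -f "test_%04g" 1 120); do   # stand-in for the pg_namespace query
  batch="$batch DROP SCHEMA IF EXISTS $s CASCADE;"
  count=$((count + 1))
  if [ "$count" -eq "$BATCH_SIZE" ]; then
    # real script: psql "$DATABASE_URL" -c "$batch"
    batches=$((batches + 1))
    batch=""
    count=0
  fi
done
if [ -n "$batch" ]; then
  batches=$((batches + 1))                 # flush the final partial batch
fi
echo "issued $batches batches"
```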

### Automated Cleanup

Add to CI/CD:

```yaml
# .github/workflows/test.yml
jobs:
  test:
    steps:
      - name: Run tests
        run: cargo test

      - name: Cleanup test schemas
        if: always()
        run: ./scripts/cleanup-test-schemas.sh --force
```

Or use a cron job:

```bash
# Cleanup every night at 3am
0 3 * * * /path/to/attune/scripts/cleanup-test-schemas.sh --force
```

### Monitoring Schema Count

Check for schema accumulation:

```bash
# Count test schemas
psql $DATABASE_URL -c "SELECT COUNT(*) FROM pg_namespace WHERE nspname LIKE 'test_%';"

# List all test schemas
psql $DATABASE_URL -c "SELECT nspname FROM pg_namespace WHERE nspname LIKE 'test_%' ORDER BY nspname;"
```

If the count grows over time, tests are not cleaning up properly. Run the cleanup script.

## Troubleshooting

### Tests Fail: "Schema does not exist"

**Cause:** Test schema creation failed or the schema was prematurely dropped

**Solution:**
1. Check the database connection: `psql $DATABASE_URL`
2. Verify the user has the CREATE privilege: `GRANT CREATE ON DATABASE attune_test TO postgres;`
3. Check disk space and PostgreSQL limits
4. Review test output for error messages
5. Check whether `TestContext` is being dropped too early (ensure it lives for the entire test duration)

### Tests Fail: "Too many connections"

**Cause:** Connection pool exhaustion from many parallel tests

**Solution:**
1. Reduce `max_connections` in `config.test.yaml`
2. Increase PostgreSQL's `max_connections` setting
3. Run tests with fewer threads: `cargo test -- --test-threads=4`

### Cleanup Script Fails: "Out of shared memory"

**Cause:** Too many schemas to drop at once (this shouldn't happen with automatic cleanup, but can occur if many tests were killed)

**Solution:** The script handles this automatically by processing in batches of 50. If you still see this error, reduce the `BATCH_SIZE` in the script.

**Prevention:** The automatic cleanup in `TestContext`'s `Drop` impl prevents schema accumulation under normal circumstances.

### Performance Degradation

**Cause:** Too many accumulated schemas (usually from interrupted tests)

**Note:** With automatic cleanup via `Drop`, schemas should not accumulate during normal test execution.

**Solution:**
```bash
# Check schema count
psql $DATABASE_URL -c "SELECT COUNT(*) FROM pg_namespace WHERE nspname LIKE 'test_%';"

# If the count is high (>100), clean up - likely from interrupted tests
./scripts/cleanup-test-schemas.sh --force
```

**Prevention:** Avoid killing tests with SIGKILL; use Ctrl+C instead so that `Drop` can run.

### SQLx Compile-Time Checks Fail

**Cause:** SQLx macros need the schema in `search_path` during compilation

**Solution:** Use offline mode (already configured):

```bash
# Generate query metadata
cargo sqlx prepare

# Compile using offline mode
cargo build
# or
cargo test
```

See the `.sqlx/` directory for cached query metadata.

## Benefits Summary

### Before Schema-Per-Test

- ❌ Serial execution with the `#[serial]` attribute
- ❌ Complex cleanup logic with careful deletion order
- ❌ Foreign key constraint conflicts between tests
- ❌ Data contamination if cleanup fails
- ❌ Slow test suite (~20 seconds per test file)

### After Schema-Per-Test

- ✅ Parallel execution (no serial constraints)
- ✅ Simple cleanup (drop the schema)
- ✅ No foreign key conflicts
- ✅ Complete isolation between tests
- ✅ Fast test suite (~4-5 seconds per test file, a 4-8x speedup)
- ✅ Better reliability and developer experience

## Migration History

This architecture was implemented in phases:

1. **Phase 1**: Updated all migrations to remove schema prefixes
2. **Phase 2**: Updated all repositories to be schema-agnostic
3. **Phase 3**: Enhanced the database layer with dynamic schema configuration
4. **Phase 4**: Overhauled the test infrastructure to create/destroy schemas
5. **Phase 5**: Removed all serial test constraints
6. **Phase 6**: Enabled SQLx offline mode for compile-time checks
7. **Phase 7**: Added production safety measures and validation
8. **Phase 8**: Created the cleanup utility script
9. **Phase 9**: Updated documentation

See `docs/plans/schema-per-test-refactor.md` for complete implementation details.

## References

- [PostgreSQL search_path Documentation](https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH)
- [SQLx Compile-Time Verification](https://github.com/launchbadge/sqlx/blob/main/sqlx-cli/README.md#enable-building-in-offline-mode-with-query)
- [Running Tests Guide](./running-tests.md)
- [Production Deployment Guide](./production-deployment.md)
- [Schema-Per-Test Refactor Plan](../plans/schema-per-test-refactor.md)

## See Also

- [Testing Status](./testing-status.md)
- [Running Tests](./running-tests.md)
- [Database Architecture](./queue-architecture.md)
- [Configuration Guide](./configuration.md)

docs/testing/test-user-setup.md

# Test User Setup Guide

This guide explains how to create and manage test users in Attune for development and testing.

## Quick Setup

### Create Default Test User

Run the script to create a test user with default credentials:

```bash
./scripts/create-test-user.sh
```

**Default Credentials:**
- **Login**: `test@attune.local`
- **Password**: `TestPass123!`
- **Display Name**: `Test User`

### Test Login

Once created, test the login via API:

```bash
curl -X POST http://localhost:8080/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"login":"test@attune.local","password":"TestPass123!"}'
```

**Successful Response:**
```json
{
  "data": {
    "access_token": "eyJ0eXAiOiJKV1QiLCJhbGc...",
    "refresh_token": "eyJ0eXAiOiJKV1QiLCJhbGc...",
    "token_type": "Bearer",
    "expires_in": 86400,
    "user": {
      "id": 2,
      "login": "test@attune.local",
      "display_name": "Test User"
    }
  }
}
```
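
For scripting follow-up requests, the access token can be pulled out of that response. A minimal `sed`-based sketch, where `RESP` holds the (truncated) sample payload from above rather than a live `curl` result, and the endpoint in the trailing comment is purely illustrative:

```shell
# Extract the access token from a login response. In a real script, RESP
# would be captured from the curl call above: RESP=$(curl -s ... /auth/login)
RESP='{"data":{"access_token":"eyJ0eXAiOiJKV1QiLCJhbGc...","token_type":"Bearer"}}'
TOKEN=$(printf '%s' "$RESP" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"

# Then pass it as a bearer token (endpoint name is hypothetical):
#   curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/packs
```

In practice `jq -r '.data.access_token'` is more robust than `sed` if `jq` is available.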
|
||||
|
||||
## Custom User Credentials
|
||||
|
||||
### Using Environment Variables
|
||||
|
||||
Create a user with custom credentials:
|
||||
|
||||
```bash
|
||||
TEST_LOGIN="myuser@example.com" \
|
||||
TEST_PASSWORD="MySecurePass123!" \
|
||||
TEST_DISPLAY_NAME="My Custom User" \
|
||||
./scripts/create-test-user.sh
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
| Variable | Description | Default |
|
||||
|----------|-------------|---------|
|
||||
| `TEST_LOGIN` | User login/email | `test@attune.local` |
|
||||
| `TEST_PASSWORD` | User password | `TestPass123!` |
|
||||
| `TEST_DISPLAY_NAME` | User display name | `Test User` |
|
||||
| `ATTUNE_DB_NAME` | Database name | `attune` |
|
||||
| `ATTUNE_DB_HOST` | Database host | `localhost` |
|
||||
| `ATTUNE_DB_PORT` | Database port | `5432` |
|
||||
| `ATTUNE_DB_USER` | Database user | `postgres` |
|
||||
| `ATTUNE_DB_PASSWORD` | Database password | `postgres` |

## Password Hashing

Attune uses **Argon2id** for password hashing, a secure, modern, memory-hard algorithm.

### Generate Password Hash Manually

To generate a password hash for manual database insertion:

```bash
cargo run --example hash_password "YourPasswordHere"
```

**Example Output:**

```
$argon2id$v=19$m=19456,t=2,p=1$F0UlGNd21LBXF7TWmpD93w$F65DKRjPU6japrzYv3ZcddnMFCtjVIBDWIkiLbkqt2I
```
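
The string above is in PHC format: the algorithm and its cost parameters sit between `$` separators. A sketch that pulls them out for a sanity check (the `phc_params` helper is hypothetical, not part of Attune):

```bash
# Print the algorithm and cost parameters from a PHC-format Argon2 hash.
# Field layout: $argon2id$v=19$m=...,t=...,p=...$<salt>$<hash>
phc_params() {
  local algo params
  algo=$(printf '%s' "$1" | cut -d'$' -f2)    # field 1 is empty (leading $)
  params=$(printf '%s' "$1" | cut -d'$' -f4)
  printf 'algo=%s %s\n' "$algo" "$params"
}

phc_params '$argon2id$v=19$m=19456,t=2,p=1$F0UlGNd21LBXF7TWmpD93w$F65DKRjPU6japrzYv3ZcddnMFCtjVIBDWIkiLbkqt2I'
# → algo=argon2id m=19456,t=2,p=1
```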

### Manual Database Insertion

Insert a user directly via SQL:

```sql
INSERT INTO identity (login, display_name, password_hash, attributes)
VALUES (
    'user@example.com',
    'User Name',
    '$argon2id$v=19$m=19456,t=2,p=1$...', -- Hash from above
    '{}'::jsonb
);
```

## Updating Existing Users

### Update Password via Script

If the user already exists, the script prompts before updating:

```bash
./scripts/create-test-user.sh
# Outputs: "User 'test@attune.local' already exists"
# Prompts: "Do you want to update the password? (y/N):"
```

**Auto-confirm update:**

```bash
echo "y" | ./scripts/create-test-user.sh
```

### Update Password via SQL

Update a user's password directly in the database:

```sql
-- Generate hash first: cargo run --example hash_password "NewPassword"

UPDATE identity
SET password_hash = '$argon2id$v=19$m=19456,t=2,p=1$...',
    updated = NOW()
WHERE login = 'test@attune.local';
```

## Multiple Database Support

Create users in different databases/schemas:

### Development Database (public schema)

```bash
ATTUNE_DB_NAME=attune \
./scripts/create-test-user.sh
```

### E2E Test Database

```bash
ATTUNE_DB_NAME=attune_e2e \
./scripts/create-test-user.sh
```

### Custom Database

```bash
ATTUNE_DB_NAME=my_custom_db \
ATTUNE_DB_HOST=db.example.com \
ATTUNE_DB_PASSWORD=secretpass \
./scripts/create-test-user.sh
```

## Verification

### Check User Exists

Query the database to verify user creation:

```bash
psql postgresql://postgres:postgres@localhost:5432/attune \
  -c "SELECT id, login, display_name FROM identity WHERE login = 'test@attune.local';"
```

**Expected Output:**

```
 id |       login       | display_name
----+-------------------+--------------
  2 | test@attune.local | Test User
```

### Test Authentication

Use curl to test login:

```bash
# Login
TOKEN=$(curl -s -X POST http://localhost:8080/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"login":"test@attune.local","password":"TestPass123!"}' \
  | jq -r '.data.access_token')

# Use the token for an authenticated request
curl -H "Authorization: Bearer $TOKEN" \
  http://localhost:8080/api/v1/packs
```
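
The `jq` extraction above assumes `jq` is installed; when it is not, `sed` can pull the token from a captured response. A sketch against a sample payload shaped like the responses shown earlier (the token value here is a placeholder):

```bash
# Extract access_token from a login response without jq.
response='{"data":{"access_token":"abc.def.ghi","token_type":"Bearer"}}'
token=$(printf '%s' "$response" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$token"
# → abc.def.ghi
```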

## Security Considerations

### Development vs Production

⚠️ **Important Security Notes:**

1. **Never use test credentials in production**
   - The default test user (`test@attune.local`) is for development only
   - Change or remove it before production deployment

2. **Password Strength**
   - The default password (`TestPass123!`) is intentionally simple for testing
   - Production passwords should be stronger and unique

3. **Database Access**
   - Test user creation requires direct database access
   - In production, create users via the API with proper authentication

4. **Password Storage**
   - Never store passwords in plain text
   - Always use the Argon2id hashing mechanism
   - Never commit password hashes to version control

### Production User Creation

In production, users should be created through:

1. **API Registration Endpoint** (if enabled)
2. **Admin Interface** (web UI)
3. **CLI Tool** with proper authentication
4. **Database Migration** for the initial admin user only

## Troubleshooting

### Script Fails to Connect

**Error:** `Cannot connect to database attune`

**Solutions:**

- Verify PostgreSQL is running: `pg_isready -h localhost -p 5432`
- Check the database exists: `psql -l | grep attune`
- Verify the credentials are correct
- Create the database: `./scripts/setup-db.sh`

### Password Hash Generation Fails

**Error:** `Failed to generate password hash`

**Solutions:**

- Ensure the Rust toolchain is installed: `cargo --version`
- Build the project first: `cargo build`
- Use the pre-generated hash for the default password (already in the script)

### Login Fails After User Creation

**Possible Causes:**

1. **Wrong Database Schema**
   - Verify the API service uses the same schema as user creation
   - Check the config: `grep schema config.development.yaml`

2. **Password Mismatch**
   - Ensure the password hash matches the password
   - Regenerate the hash: `cargo run --example hash_password "YourPassword"`

3. **User Not Found**
   - Verify the user exists in the database
   - Check that the correct database is being queried

### API Returns 401 Unauthorized

**Check:**

- The user exists in the database
- The password hash is correct
- The API service is running and connected to the correct database
- The JWT secret is configured properly

## Related Documentation

- [Configuration Guide](configuration.md) - Database and security settings
- [API Authentication](api-authentication.md) - JWT tokens and authentication flow
- [Running Tests](running-tests.md) - E2E testing with test users
- [Database Setup](../scripts/setup-db.sh) - Initial database configuration

## Script Location

The test user creation script is located at:

```
scripts/create-test-user.sh
```

**Source Code:**

- Password hashing: `crates/api/src/auth/password.rs`
- Hash example: `crates/common/examples/hash_password.rs`
- Identity model: `crates/common/src/models.rs`

## Examples

### Create Admin User

```bash
TEST_LOGIN="admin@company.com" \
TEST_PASSWORD="SuperSecure123!" \
TEST_DISPLAY_NAME="System Administrator" \
./scripts/create-test-user.sh
```

### Create Multiple Test Users

```bash
# User 1
TEST_LOGIN="alice@test.com" \
TEST_DISPLAY_NAME="Alice Test" \
./scripts/create-test-user.sh

# User 2
TEST_LOGIN="bob@test.com" \
TEST_DISPLAY_NAME="Bob Test" \
./scripts/create-test-user.sh

# User 3
TEST_LOGIN="charlie@test.com" \
TEST_DISPLAY_NAME="Charlie Test" \
./scripts/create-test-user.sh
```
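
The three invocations above can be collapsed into a loop; a sketch (the `display_name_for` helper and the derivation of display names from logins are illustrative assumptions):

```bash
# Derive "Alice Test" style display names from logins like alice@test.com.
display_name_for() {
  local user="${1%%@*}"                                     # strip the domain
  local first rest
  first=$(printf '%s' "$user" | cut -c1 | tr '[:lower:]' '[:upper:]')
  rest=$(printf '%s' "$user" | cut -c2-)
  printf '%s%s Test\n' "$first" "$rest"
}

for login in alice@test.com bob@test.com charlie@test.com; do
  display_name_for "$login"
  # In a real run, each iteration would invoke the script instead:
  #   TEST_LOGIN="$login" TEST_DISPLAY_NAME="$(display_name_for "$login")" \
  #     ./scripts/create-test-user.sh
done
```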

### Create E2E Test User

```bash
ATTUNE_DB_NAME=attune_e2e \
TEST_LOGIN="e2e@test.local" \
TEST_PASSWORD="E2ETest123!" \
./scripts/create-test-user.sh
```

## Summary

- ✅ **Simple**: One command to create test users
- ✅ **Flexible**: Customizable via environment variables
- ✅ **Secure**: Uses Argon2id password hashing
- ✅ **Safe**: Prompts before overwriting existing users
- ✅ **Verified**: Includes login test instructions

For production deployments, use the API or admin interface to create users with proper authentication and authorization checks.

466
docs/testing/testing-authentication.md
Normal file
@@ -0,0 +1,466 @@

# Testing Authentication Endpoints

This guide provides step-by-step instructions for testing the Attune authentication system.

## Prerequisites

1. **Database Running**

   ```bash
   # Start PostgreSQL (if using Docker)
   docker run -d \
     --name postgres \
     -e POSTGRES_PASSWORD=postgres \
     -p 5432:5432 \
     postgres:15
   ```

2. **Database Setup**

   ```bash
   # Create the database and user
   psql -U postgres -c "CREATE DATABASE attune;"
   psql -U postgres -c "CREATE USER svc_attune WITH PASSWORD 'attune_password';"
   psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE attune TO svc_attune;"
   ```

3. **Run Migrations**

   ```bash
   export DATABASE_URL="postgresql://svc_attune:attune_password@localhost:5432/attune"
   sqlx migrate run
   ```

4. **Set Environment Variables**

   ```bash
   export DATABASE_URL="postgresql://svc_attune:attune_password@localhost:5432/attune"
   export JWT_SECRET="my-super-secret-jwt-key-min-256-bits-please"
   export JWT_ACCESS_EXPIRATION=3600
   export JWT_REFRESH_EXPIRATION=604800
   export RUST_LOG=info
   ```
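
The sample `JWT_SECRET` advertises "min 256 bits"; a quick shell sanity check for that floor (a sketch; the 32-byte minimum is an assumption based on common HS256 guidance, not a project rule):

```bash
# check_jwt_secret: warn if the secret is shorter than 32 bytes (256 bits).
check_jwt_secret() {
  if [ "${#1}" -lt 32 ]; then
    echo "JWT_SECRET is only ${#1} bytes; use at least 32" >&2
    return 1
  fi
  echo "JWT_SECRET length OK (${#1} bytes)"
}

check_jwt_secret "my-super-secret-jwt-key-min-256-bits-please"
# → JWT_SECRET length OK (44 bytes)
```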

## Starting the API Server

```bash
cd attune
cargo run --bin attune-api
```

Expected output:

```
INFO Starting Attune API Service
INFO Configuration loaded successfully
INFO Environment: development
INFO Connecting to database...
INFO Database connection established
INFO JWT configuration initialized
INFO Starting server on 127.0.0.1:8080
INFO Server listening on 127.0.0.1:8080
INFO Attune API Service is ready
```

## Testing with cURL

### 1. Health Check (Verify Server is Running)

```bash
curl http://localhost:8080/api/v1/health
```

Expected response:

```json
{
  "status": "healthy"
}
```

### 2. Register a New User

```bash
curl -X POST http://localhost:8080/auth/register \
  -H "Content-Type: application/json" \
  -d '{
    "login": "alice",
    "password": "securepass123",
    "display_name": "Alice Smith"
  }'
```

Expected response:

```json
{
  "data": {
    "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "token_type": "Bearer",
    "expires_in": 3600
  }
}
```

**Save the `access_token` for the next steps!**

### 3. Login with Existing User

```bash
curl -X POST http://localhost:8080/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "login": "alice",
    "password": "securepass123"
  }'
```

Expected response:

```json
{
  "data": {
    "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "token_type": "Bearer",
    "expires_in": 3600
  }
}
```

### 4. Get Current User (Protected Endpoint)

Replace `YOUR_ACCESS_TOKEN` with the actual token from step 2 or 3:

```bash
curl http://localhost:8080/auth/me \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```

Expected response:

```json
{
  "data": {
    "id": 1,
    "login": "alice",
    "display_name": "Alice Smith"
  }
}
```

### 5. Change Password (Protected Endpoint)

```bash
curl -X POST http://localhost:8080/auth/change-password \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "current_password": "securepass123",
    "new_password": "newsecurepass456"
  }'
```

Expected response:

```json
{
  "data": {
    "success": true,
    "message": "Password changed successfully"
  }
}
```

### 6. Login with New Password

```bash
curl -X POST http://localhost:8080/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "login": "alice",
    "password": "newsecurepass456"
  }'
```

Should return new tokens.

### 7. Refresh Access Token

Save the `refresh_token` from a previous login, then:

```bash
curl -X POST http://localhost:8080/auth/refresh \
  -H "Content-Type: application/json" \
  -d '{
    "refresh_token": "YOUR_REFRESH_TOKEN"
  }'
```

Expected response:

```json
{
  "data": {
    "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "token_type": "Bearer",
    "expires_in": 3600
  }
}
```
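
To inspect a token's claims locally (for example, to check `exp` before deciding to refresh), the payload segment can be decoded without verifying the signature; a sketch (`decode_jwt_payload` is a hypothetical helper, not part of Attune):

```bash
# decode_jwt_payload: print the (unverified) JSON claims of a JWT.
# JWT payloads are base64url: '-'/'_' instead of '+'/'/', and no '=' padding,
# so both must be restored before base64 -d will accept the string.
decode_jwt_payload() {
  local payload="${1#*.}"     # drop the header segment
  payload="${payload%%.*}"    # drop the signature segment
  local pad=$(( (4 - ${#payload} % 4) % 4 ))
  local i=0
  while [ "$i" -lt "$pad" ]; do payload="${payload}="; i=$((i + 1)); done
  printf '%s' "$payload" | tr '_-' '/+' | base64 -d
}

# Example with a throwaway token whose payload is {"sub":"alice"}:
decode_jwt_payload "hdr.eyJzdWIiOiJhbGljZSJ9.sig"
# → {"sub":"alice"}
```

The real tokens returned above can be inspected the same way, or pasted into [JWT.io](https://jwt.io).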

## Error Cases to Test

### Invalid Credentials

```bash
curl -X POST http://localhost:8080/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "login": "alice",
    "password": "wrongpassword"
  }'
```

Expected response (401):

```json
{
  "error": "Invalid login or password",
  "code": "UNAUTHORIZED"
}
```

### Missing Authentication Token

```bash
curl http://localhost:8080/auth/me
```

Expected response (401):

```json
{
  "error": {
    "code": 401,
    "message": "Missing authentication token"
  }
}
```

### Invalid Token

```bash
curl http://localhost:8080/auth/me \
  -H "Authorization: Bearer invalid.token.here"
```

Expected response (401):

```json
{
  "error": {
    "code": 401,
    "message": "Invalid authentication token"
  }
}
```

### Duplicate Registration

Register the same user twice:

```bash
curl -X POST http://localhost:8080/auth/register \
  -H "Content-Type: application/json" \
  -d '{
    "login": "alice",
    "password": "securepass123"
  }'
```

Expected response (409):

```json
{
  "error": "Identity with login 'alice' already exists",
  "code": "CONFLICT"
}
```

### Validation Errors

Short password:

```bash
curl -X POST http://localhost:8080/auth/register \
  -H "Content-Type: application/json" \
  -d '{
    "login": "bob",
    "password": "short"
  }'
```

Expected response (422):

```json
{
  "error": "Invalid registration request: ...",
  "code": "VALIDATION_ERROR"
}
```

## Testing with HTTPie (Alternative)

If you prefer HTTPie (more readable syntax):

```bash
# Install HTTPie
pip install httpie

# Register
http POST localhost:8080/auth/register \
  login=alice password=securepass123 display_name="Alice Smith"

# Login
http POST localhost:8080/auth/login \
  login=alice password=securepass123

# Get the current user (set the TOKEN variable first)
TOKEN="your_access_token_here"
http GET localhost:8080/auth/me \
  "Authorization: Bearer $TOKEN"

# Change password
http POST localhost:8080/auth/change-password \
  "Authorization: Bearer $TOKEN" \
  current_password=securepass123 \
  new_password=newsecurepass456
```

## Testing with Postman

1. **Import Collection**
   - Create a new collection named "Attune Auth"
   - Add a base URL variable: `{{baseUrl}}` = `http://localhost:8080`

2. **Setup Environment**
   - Create an environment "Attune Local"
   - Variables:
     - `baseUrl`: `http://localhost:8080`
     - `accessToken`: (will be set by tests)
     - `refreshToken`: (will be set by tests)

3. **Add Requests**
   - POST `{{baseUrl}}/auth/register`
   - POST `{{baseUrl}}/auth/login`
   - GET `{{baseUrl}}/auth/me` with header `Authorization: Bearer {{accessToken}}`
   - POST `{{baseUrl}}/auth/change-password`
   - POST `{{baseUrl}}/auth/refresh`

4. **Test Scripts**

   Add to the login/register requests to save tokens:

   ```javascript
   pm.test("Status is 200", function () {
     pm.response.to.have.status(200);
   });

   var jsonData = pm.response.json();
   pm.environment.set("accessToken", jsonData.data.access_token);
   pm.environment.set("refreshToken", jsonData.data.refresh_token);
   ```

## Automated Testing

### Unit Tests

Run the authentication unit tests:

```bash
# Test password hashing
cargo test --package attune-api password

# Test JWT utilities
cargo test --package attune-api jwt

# Test middleware
cargo test --package attune-api middleware

# Run all API tests
cargo test --package attune-api
```

### Integration Tests (Future)

Integration tests will be added to cover the full authentication flow:

```bash
cargo test --package attune-api --test auth_integration
```

## Troubleshooting

### Server Won't Start

1. **Database Connection Error**

   ```
   Error: error communicating with database
   ```

   - Verify PostgreSQL is running
   - Check that DATABASE_URL is correct
   - Verify the database exists and the user has permissions

2. **Migration Error**

   ```
   Error: migration version not found
   ```

   - Run migrations: `sqlx migrate run`

3. **JWT_SECRET Warning**

   ```
   WARN JWT_SECRET not set, using default
   ```

   - Set the JWT_SECRET environment variable

### Authentication Fails

1. **Invalid Credentials**
   - Verify the password is correct
   - Check whether the identity exists in the database:

     ```sql
     SELECT * FROM attune.identity WHERE login = 'alice';
     ```

2. **Token Expired**
   - Use the refresh token to get a new access token
   - Check the JWT_ACCESS_EXPIRATION setting

3. **Invalid Token Format**
   - Ensure the Authorization header format is `Bearer <token>`
   - No extra spaces or quotes

### Database Issues

Check identities in the database:

```sql
-- Connect to the database
psql -U svc_attune -d attune

-- View all identities
SELECT id, login, display_name, created FROM attune.identity;

-- Check that the password hash exists
SELECT login,
       attributes->>'password_hash' IS NOT NULL as has_password
FROM attune.identity;
```

## Security Notes

- **Never use the default JWT_SECRET in production**
- Always use HTTPS in production
- Store tokens securely on the client side
- Implement rate limiting on auth endpoints (future)
- Consider adding MFA for production (future)

## Next Steps

After validating that authentication works:

1. Test the Pack Management API with authentication
2. Implement additional CRUD APIs
3. Add RBAC permission checking (Phase 2.13)
4. Add integration tests
5. Implement token revocation for logout

## Resources

- [Authentication Documentation](./authentication.md)
- [API Documentation](../README.md)
- [JWT.io](https://jwt.io) - JWT token decoder/debugger

624
docs/testing/testing-dashboard-rules.md
Normal file
@@ -0,0 +1,624 @@

# Testing Guide: Dashboard & Rules Pages

This guide covers manual testing of the newly implemented dashboard and rules management pages in the Attune Web UI.

---

## Prerequisites

### 1. Backend Services Running

You need the following services running:

```bash
# Terminal 1: PostgreSQL (if not running as a service)
docker run -d --name attune-postgres \
  -e POSTGRES_PASSWORD=attune \
  -e POSTGRES_USER=attune \
  -e POSTGRES_DB=attune \
  -p 5432:5432 postgres:14

# Terminal 2: RabbitMQ (if not running as a service)
docker run -d --name attune-rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3.12-management

# Terminal 3: API Server
cd crates/api
cargo run
# Should start on http://localhost:8080
```

### 2. Test Data

Create test data using the CLI or API:

```bash
# Using the CLI
cd crates/cli

# Create a test pack
cargo run -- pack create \
  --name test-pack \
  --version 1.0.0 \
  --description "Test pack for UI testing"

# Create a test action
cargo run -- action create \
  --pack test-pack \
  --name test-action \
  --entry-point "echo 'Hello World'" \
  --runner-type local.shell.command

# Create a test trigger
cargo run -- trigger create \
  --pack test-pack \
  --name test-trigger \
  --description "Manual test trigger"

# Create a test rule
cargo run -- rule create \
  --pack test-pack \
  --name test-rule \
  --trigger test-trigger \
  --action test-action \
  --description "Test automation rule"

# Execute the action to create execution records
cargo run -- action execute test-pack.test-action
cargo run -- action execute test-pack.test-action
cargo run -- action execute test-pack.test-action
```

### 3. Web UI Running

```bash
# Terminal 4: Web UI Dev Server
cd web
npm install  # First time only
npm run dev
# Should start on http://localhost:5173
```

### 4. Login Credentials

Default test user (if seeded):

- Username: `admin`
- Password: `admin`

Or create a user via the API/CLI if needed.

---

## Dashboard Testing

### Test 1: Initial Load

**Objective**: Verify the dashboard loads with correct metrics.

**Steps**:

1. Navigate to `http://localhost:5173`
2. Log in if not authenticated
3. You should be redirected to the dashboard automatically

**Expected Results**:

- ✅ Dashboard page loads without errors
- ✅ Four metric cards display at the top
- ✅ Each metric shows a number (not "—" or "Loading...")
- ✅ Metric counts match actual data:
  - Total Packs: should show 1+ (your test packs)
  - Active Rules: should show the count of enabled rules
  - Running Executions: likely 0 (unless something is running)
  - Total Actions: should show 1+ (your test actions)

### Test 2: Live Connection Indicator

**Objective**: Verify the SSE connection status is shown.

**Steps**:

1. On the dashboard, look for the "Welcome back" message
2. Next to it should be a "Live" indicator

**Expected Results**:

- ✅ Green pulsing dot visible
- ✅ "Live" text displayed in green
- ✅ If the API is stopped, the indicator should disappear within 30s

### Test 3: Metric Card Navigation

**Objective**: Verify clicking metrics navigates to the correct pages.

**Steps**:

1. Click the "Total Packs" card → should go to `/packs`
2. Go back, click the "Active Rules" card → should go to `/rules`
3. Go back, click the "Running Executions" card → should go to `/executions`
4. Go back, click the "Total Actions" card → should go to `/actions`

**Expected Results**:

- ✅ Each click navigates to the correct page
- ✅ Hover effect shows on cards (shadow increases)
- ✅ Cursor shows a pointer on hover

### Test 4: Status Distribution Chart

**Objective**: Verify the execution status visualization.

**Steps**:

1. Look at the "Execution Status" section (left side, below the metrics)
2. It should show a status breakdown with progress bars

**Expected Results**:

- ✅ Status categories listed (succeeded, failed, running, etc.)
- ✅ Counts displayed for each status
- ✅ Progress bars show the percentage visually
- ✅ Colors match status (green=succeeded, red=failed, blue=running)
- ✅ Success rate displayed at the bottom
- ✅ If no executions: "No executions yet" message
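
The success rate shown at the bottom can be reproduced by hand when checking the chart; a sketch (the sample counts and the succeeded/total formula are assumptions about how the dashboard computes it):

```bash
# Hypothetical counts matching a small test run.
succeeded=12
failed=3
running=1

total=$((succeeded + failed + running))
# Integer percentage, as a progress bar would typically round it.
echo "success rate: $(( 100 * succeeded / total ))%"
# → success rate: 75%
```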

### Test 5: Recent Activity Feed

**Objective**: Verify the execution activity list.

**Steps**:

1. Look at the "Recent Activity" section (right side, 2 columns wide)
2. It should show a list of recent executions

**Expected Results**:

- ✅ Up to 20 executions displayed
- ✅ Each shows: pack.action name, status badge, ID, time, elapsed time
- ✅ Clicking an item navigates to the execution detail page
- ✅ Hover effect highlights the row
- ✅ "View all →" link goes to the executions page
- ✅ If no executions: "No recent activity" message

### Test 6: Real-Time Updates

**Objective**: Verify SSE updates the dashboard in real time.

**Steps**:

1. Keep the dashboard open in the browser
2. In a terminal, execute an action:

   ```bash
   cargo run -- action execute test-pack.test-action
   ```

3. Watch the dashboard

**Expected Results**:

- ✅ Recent Activity updates within 1-2 seconds
- ✅ The new execution appears at the top of the list
- ✅ Running Executions count updates if the execution is in progress
- ✅ Status distribution updates when the execution completes
- ✅ No page reload required
- ✅ "Live" indicator stays green throughout

### Test 7: Quick Actions Section

**Objective**: Verify the navigation cards at the bottom.

**Steps**:

1. Scroll to the bottom of the dashboard
2. You should see a "Quick Actions" section with 3 cards

**Expected Results**:

- ✅ Three cards: "Manage Packs", "Browse Actions", "Configure Rules"
- ✅ Each has an icon and description
- ✅ Hover effect shows (shadow increases)
- ✅ Clicking navigates to the correct page

### Test 8: Responsive Layout

**Objective**: Verify the layout adapts to screen size.

**Steps**:

1. Resize the browser window from wide to narrow
2. Observe the metric cards layout

**Expected Results**:

- ✅ Desktop (>1024px): 4 columns of metrics
- ✅ Tablet (768-1024px): 2 columns of metrics
- ✅ Mobile (<768px): 1 column of metrics
- ✅ Status chart and activity feed stack on mobile
- ✅ No horizontal scrolling at any size

---

## Rules Pages Testing

### Test 9: Rules List - Initial Load

**Objective**: Verify the rules list page displays correctly.

**Steps**:

1. Navigate to `/rules` or click "Configure Rules" from the dashboard
2. You should see the rules list page

**Expected Results**:

- ✅ Page title "Rules" visible
- ✅ Description text visible
- ✅ Filter buttons visible (All Rules, Enabled, Disabled)
- ✅ "Create Rule" button visible (disabled/placeholder for now)
- ✅ Result count shows "Showing X of Y rules"
- ✅ Table with headers: Rule, Pack, Trigger, Action, Status, Actions
- ✅ Test rule visible in the table

### Test 10: Rules List - Filtering

**Objective**: Verify filtering works correctly.

**Steps**:

1. On the rules list page, note the initial count
2. Click the "Enabled" filter button
3. Note the filtered count
4. Click the "Disabled" filter button
5. Click the "All Rules" button

**Expected Results**:

- ✅ "Enabled" shows only enabled rules
- ✅ "Disabled" shows only disabled rules
- ✅ "All Rules" shows all rules
- ✅ The active filter button is highlighted in blue
- ✅ Inactive buttons are gray
- ✅ The count updates correctly with each filter

### Test 11: Rules List - Toggle Enable/Disable

**Objective**: Verify the inline status toggle.

**Steps**:

1. On the rules list, find a rule with "Enabled" status
2. Click the green "Enabled" badge
3. Wait for the update
4. Observe the status change

**Expected Results**:

- ✅ Badge shows a loading state briefly
- ✅ Status changes to "Disabled" (gray badge)
- ✅ Clicking again toggles back to "Enabled"
- ✅ No page reload
- ✅ If the "Enabled" filter is active, the rule disappears from the list after disabling

### Test 12: Rules List - Delete Rule

**Objective**: Verify rule deletion.

**Steps**:

1. On the rules list, click the "Delete" button for a test rule
2. A confirmation dialog appears
3. Click "Cancel" first
4. Click "Delete" again
5. Click "OK" to confirm

**Expected Results**:

- ✅ Confirmation dialog shows the rule name
- ✅ Cancel does nothing
- ✅ OK removes the rule from the list
- ✅ The count updates
- ✅ No page reload

### Test 13: Rules List - Pagination

**Objective**: Verify the pagination controls (if there are more than 20 rules).

**Steps**:

1. Create 25+ rules (if needed)
2. On the rules list, observe the pagination controls at the bottom
3. Click the "Next" button
4. Click the "Previous" button

**Expected Results**:

- ✅ Pagination only shows if there are more than 20 rules
- ✅ "Page X of Y" displayed
- ✅ "Next" disabled on the last page
- ✅ "Previous" disabled on the first page
- ✅ Navigation works correctly
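
The page count behind "Page X of Y" follows from the page size; a sketch (the page size of 20 matches the list above; the ceiling-division formula is an assumption about the UI's arithmetic):

```bash
# pages = ceil(total / page_size), done with pure shell integer arithmetic.
total_rules=25
page_size=20
pages=$(( (total_rules + page_size - 1) / page_size ))
echo "Page 1 of $pages"
# → Page 1 of 2
```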
|
||||
|
||||
### Test 14: Rule Detail - Basic Information
|
||||
|
||||
**Objective**: Verify rule detail page displays all info.
|
||||
|
||||
**Steps**:
|
||||
1. From rules list, click a rule name
|
||||
2. Should navigate to `/rules/:id`
|
||||
|
||||
**Expected Results**:
|
||||
- ✅ "← Back to Rules" link at top
|
||||
- ✅ Rule name as page title
|
||||
- ✅ Status badge (Enabled/Disabled) next to title
|
||||
- ✅ Description visible (if set)
|
||||
- ✅ Metadata: ID, created date, updated date
|
||||
- ✅ Enable/Disable button at top right
|
||||
- ✅ Delete button at top right
|
||||
|
||||
### Test 15: Rule Detail - Overview Card

**Objective**: Verify overview section content.

**Steps**:

1. On rule detail page, find "Overview" card (left side)
2. Check displayed information

**Expected Results**:

- ✅ Pack name displayed as clickable link
- ✅ Trigger name displayed
- ✅ Action name displayed as clickable link
- ✅ Clicking pack link goes to `/packs/:name`
- ✅ Clicking action link goes to `/actions/:id`
### Test 16: Rule Detail - Criteria Display

**Objective**: Verify criteria JSON display (if rule has criteria).

**Steps**:

1. On rule detail, look for "Match Criteria" card
2. Should show JSON formatted criteria

**Expected Results**:

- ✅ Card only appears if criteria exist
- ✅ JSON is formatted with indentation
- ✅ Displayed in monospace font
- ✅ Gray background for readability
- ✅ Scrollable if content is long
### Test 17: Rule Detail - Action Parameters

**Objective**: Verify action parameters display.

**Steps**:

1. On rule detail, look for "Action Parameters" card
2. Should show JSON formatted parameters

**Expected Results**:

- ✅ Card only appears if parameters exist
- ✅ JSON is formatted with indentation
- ✅ Displayed in monospace font
- ✅ Gray background for readability
- ✅ Scrollable if content is long
### Test 18: Rule Detail - Quick Links Sidebar

**Objective**: Verify quick links functionality.

**Steps**:

1. On rule detail, find "Quick Links" card (right sidebar)
2. Try clicking each link

**Expected Results**:

- ✅ "View Pack" link works
- ✅ "View Action" link works
- ✅ "View Trigger" link works (may 404 if triggers page not implemented)
- ✅ "View Enforcements" link works (may 404 if enforcements page not implemented)
### Test 19: Rule Detail - Metadata Sidebar

**Objective**: Verify metadata display.

**Steps**:

1. On rule detail, find "Metadata" card (right sidebar)
2. Check all fields

**Expected Results**:

- ✅ Rule ID in monospace font
- ✅ Pack ID in monospace font
- ✅ Trigger ID in monospace font
- ✅ Action ID in monospace font
- ✅ Created timestamp in readable format
- ✅ Last Updated timestamp in readable format
### Test 20: Rule Detail - Status Card

**Objective**: Verify status display and warnings.

**Steps**:

1. On rule detail, find "Status" card (right sidebar)
2. If rule is disabled, should show warning

**Expected Results**:

- ✅ Status badge shows "Active" or "Inactive"
- ✅ Color matches enabled state (green/gray)
- ✅ If disabled: warning message displayed
- ✅ Warning text explains rule won't trigger
### Test 21: Rule Detail - Enable/Disable Toggle

**Objective**: Verify status toggle on detail page.

**Steps**:

1. On rule detail page, click Enable/Disable button
2. Watch for status update
3. Toggle back

**Expected Results**:

- ✅ Button shows loading state ("Processing...")
- ✅ Status badge updates after success
- ✅ Button text changes (Enable ↔ Disable)
- ✅ Button color changes (green ↔ gray)
- ✅ Status card updates
- ✅ No page reload
### Test 22: Rule Detail - Delete Rule

**Objective**: Verify rule deletion from detail page.

**Steps**:

1. On rule detail page, click "Delete" button
2. Confirmation dialog appears
3. Click "OK"

**Expected Results**:

- ✅ Confirmation dialog shows rule name
- ✅ After confirmation, redirects to `/rules` list
- ✅ Rule no longer in list
- ✅ No errors
---

## Error Handling Testing
### Test 23: Network Error Handling

**Objective**: Verify graceful handling of network errors.

**Steps**:

1. Stop the API server
2. Refresh dashboard or rules page
3. Wait for timeout

**Expected Results**:

- ✅ Loading spinner shows while attempting
- ✅ Error message displayed after timeout
- ✅ "Live" indicator disappears
- ✅ Page doesn't crash
- ✅ Can navigate to other pages
### Test 24: Invalid Rule ID

**Objective**: Verify handling of non-existent rule.

**Steps**:

1. Navigate to `/rules/99999` (non-existent ID)

**Expected Results**:

- ✅ Error message displayed
- ✅ "Rule not found" or similar message
- ✅ "Back to Rules" link provided
- ✅ No page crash
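One way to reason about this test: the detail page should map the API response status to a UI state and never throw. The state shape below is a hypothetical sketch, not the actual component code:

```typescript
// Possible UI states for the rule detail page (assumed, for illustration).
type RuleDetailState =
  | { kind: "loaded"; ruleId: number }
  | { kind: "notFound" }
  | { kind: "error"; message: string };

function stateForResponse(status: number, ruleId: number): RuleDetailState {
  if (status === 200) return { kind: "loaded", ruleId };
  // A 404 should render "Rule not found" plus a "Back to Rules" link,
  // never crash the page.
  if (status === 404) return { kind: "notFound" };
  return { kind: "error", message: `Unexpected status ${status}` };
}
```

Treating "not found" as an explicit state, rather than an exception, is what makes the no-crash expectation testable.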
### Test 25: SSE Reconnection

**Objective**: Verify SSE reconnects after interruption.

**Steps**:

1. Open dashboard with "Live" indicator active
2. Stop API server
3. Wait 30 seconds (indicator should disappear)
4. Restart API server
5. Wait up to 30 seconds

**Expected Results**:

- ✅ "Live" indicator disappears when connection lost
- ✅ Dashboard still usable (cached data)
- ✅ "Live" indicator reappears after reconnection
- ✅ Updates resume automatically
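Since browsers' `EventSource` retries automatically, the "Live" indicator mainly needs to mirror its `open`/`error` events while cached data stays on screen. A hypothetical sketch of that logic (the dashboard's real wiring may differ):

```typescript
// Tracks the SSE connection state behind the "Live" indicator.
// handleOpen/handleError would be wired to an EventSource's
// onopen/onerror callbacks in the browser.
class LiveIndicator {
  private connected = false;

  handleOpen(): void {
    this.connected = true; // fires again when EventSource auto-reconnects
  }

  handleError(): void {
    this.connected = false; // connection lost; EventSource keeps retrying
  }

  label(): string {
    return this.connected ? "Live" : ""; // hide indicator while offline
  }
}
```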
---

## Performance Testing
### Test 26: Dashboard Load Time

**Objective**: Verify dashboard loads quickly.

**Steps**:

1. Open browser DevTools → Network tab
2. Clear cache and reload dashboard
3. Observe load time

**Expected Results**:

- ✅ Initial load < 2 seconds (with warm backend)
- ✅ Metrics appear < 3 seconds
- ✅ No excessive API calls (should be ~5 requests)
### Test 27: Large Rules List

**Objective**: Verify performance with many rules.

**Steps**:

1. Create 100+ rules (if feasible)
2. Navigate to rules list page
3. Scroll through list

**Expected Results**:

- ✅ Page loads in reasonable time (< 3s)
- ✅ Only 20 items per page (pagination working)
- ✅ Smooth scrolling
- ✅ No lag when changing pages
---

## Cross-Browser Testing
### Test 28: Browser Compatibility

**Objective**: Verify the dashboard works in major browsers.

**Browsers to test**: Chrome, Firefox, Safari, Edge

**Steps**:

1. Open dashboard in each browser
2. Test basic navigation
3. Test real-time updates

**Expected Results**:

- ✅ Layout looks correct in all browsers
- ✅ All functionality works
- ✅ SSE connection works (all support EventSource)
- ✅ No console errors
---

## Accessibility Testing
### Test 29: Keyboard Navigation

**Objective**: Verify keyboard accessibility.

**Steps**:

1. Navigate dashboard using only Tab key
2. Press Enter on focused elements

**Expected Results**:

- ✅ All interactive elements focusable
- ✅ Focus indicator visible
- ✅ Logical tab order
- ✅ Enter key activates buttons/links
### Test 30: Screen Reader Testing

**Objective**: Verify screen reader compatibility (basic).

**Steps**:

1. Use browser's reader mode or a screen reader
2. Navigate dashboard and rules pages

**Expected Results**:

- ✅ Headings properly announced
- ✅ Button labels descriptive
- ✅ Link text meaningful
- ✅ Form controls labeled
---

## Reporting Issues
If you find any issues during testing:

1. **Check console** (F12 → Console tab) for errors
2. **Note exact steps** to reproduce
3. **Screenshot** if it is a visual issue
4. **Record browser/OS** information
5. **Create issue** in project tracker or document in `work-summary/PROBLEMS.md`
---

## Success Criteria
All tests passing means:

- ✅ Dashboard displays live metrics correctly
- ✅ Real-time updates work via SSE
- ✅ Rules CRUD operations fully functional
- ✅ Navigation flows work seamlessly
- ✅ Error handling is graceful
- ✅ Performance is acceptable
- ✅ Cross-browser compatible
- ✅ Accessible to keyboard users
---

## Next Steps After Testing
Once all tests pass:

1. Document any bugs found in PROBLEMS.md
2. Fix critical issues
3. Consider visual enhancements (charts library, animations)
4. Move on to Events/Triggers/Sensors pages
5. Implement create/edit forms for packs, actions, rules