re-uploading work

crates/common/tests/README.md (new file, 391 lines)
# Attune Common Library - Integration Tests

This directory contains integration tests for the Attune common library, specifically the database repository layer and migrations.

## Overview

The test suite includes:

- **Migration Tests** (`migration_tests.rs`) - Verify database schema, migrations, and constraints
- **Repository Tests** - Comprehensive CRUD and transaction tests for each repository:
  - `pack_repository_tests.rs` - Pack repository operations
  - `action_repository_tests.rs` - Action repository operations
  - Additional repository tests for all other entities
- **Test Helpers** (`helpers.rs`) - Fixtures, utilities, and common test setup

## Prerequisites

Before running the tests, ensure you have:

1. **PostgreSQL** installed and running
2. **Test database** created and configured
3. **Environment variables** set (via `.env.test`)

### Setting Up the Test Database

```bash
# Create the test database
make db-test-create

# Run migrations on the test database
make db-test-migrate

# Or do both at once
make db-test-setup
```

To reset the test database:

```bash
make db-test-reset
```

## Running Tests

### Run All Integration Tests

```bash
# Automatic setup and run
make test-integration

# Or manually
cargo test --test '*' -p attune-common -- --test-threads=1
```

### Run Specific Test Files

```bash
# Run only migration tests
cargo test --test migration_tests -p attune-common

# Run only pack repository tests
cargo test --test pack_repository_tests -p attune-common

# Run only action repository tests
cargo test --test action_repository_tests -p attune-common
```

### Run Specific Tests

```bash
# Run a single test by name
cargo test test_create_pack -p attune-common

# Run tests matching a pattern
cargo test test_create -p attune-common

# Run with output
cargo test test_create_pack -p attune-common -- --nocapture
```

## Test Configuration

Test configuration is loaded from `.env.test` in the project root. Key settings:

```bash
# Test database URL
ATTUNE__DATABASE__URL=postgresql://postgres:postgres@localhost:5432/attune_test

# Enable SQL logging for debugging
ATTUNE__DATABASE__LOG_STATEMENTS=true

# Verbose logging
ATTUNE__LOG__LEVEL=debug
RUST_LOG=debug,sqlx=warn
```

## Test Structure

### Test Helpers (`helpers.rs`)

The helpers module provides:

- **Database Setup**: `create_test_pool()`, `clean_database()`
- **Fixtures**: Builder pattern for creating test data
  - `PackFixture` - Create test packs
  - `ActionFixture` - Create test actions
  - `RuntimeFixture` - Create test runtimes
  - And more for all entities
- **Utilities**: Transaction helpers, assertions

Example fixture usage:

```rust
use helpers::*;

let pool = create_test_pool().await.unwrap();
clean_database(&pool).await.unwrap();

// Use a fixture to create test data
let pack = PackFixture::new("test.pack")
    .with_version("2.0.0")
    .with_name("Custom Pack Name")
    .create(&pool)
    .await
    .unwrap();
```

### Test Organization

Each test file follows this pattern:

1. **Import helpers module**: `mod helpers;`
2. **Setup phase**: Create pool and clean database
3. **Test execution**: Perform operations
4. **Assertions**: Verify expected outcomes
5. **Cleanup**: Automatic via `clean_database()` or transactions

Example test:

```rust
#[tokio::test]
async fn test_create_pack() {
    // Setup
    let pool = create_test_pool().await.unwrap();
    clean_database(&pool).await.unwrap();

    // Execute
    let pack = PackFixture::new("test.pack")
        .create(&pool)
        .await
        .unwrap();

    // Assert
    assert_eq!(pack.r#ref, "test.pack");
    assert!(pack.created.timestamp() > 0);
}
```

## Test Categories

### CRUD Operations

Tests verify the basic Create, Read, Update, and Delete operations:

- Creating entities with valid data
- Retrieving entities by ID and other fields
- Listing and pagination
- Updating partial and full records
- Deleting entities

### Constraint Validation

Tests verify database constraints:

- Unique constraints (e.g., pack ref + version)
- Foreign key constraints
- NOT NULL constraints
- Check constraints
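The unique-constraint tests follow a create-then-duplicate pattern. From `action_repository_tests.rs`:

```rust
// Create the first action with a deliberately reused name, then try to
// create a second action with the same ref; the second create must fail.
let action_name = helpers::unique_action_name("duplicate");
ActionFixture::new(pack.id, &pack.r#ref, &action_name)
    .create(&pool)
    .await
    .unwrap();

let result = ActionFixture::new(pack.id, &pack.r#ref, &action_name)
    .create(&pool)
    .await;
assert!(result.is_err());
```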

### Transaction Support

Tests verify transaction behavior:

- Commit preserves changes
- Rollback discards changes
- Isolation between transactions
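A rollback test can be sketched like this; it goes through raw SQLx rather than the repositories, and the `pack` column names (`ref`, `version`) are assumptions about the schema:

```rust
#[tokio::test]
async fn test_rollback_discards_changes() {
    let pool = create_test_pool().await.unwrap();
    clean_database(&pool).await.unwrap();

    // Insert inside a transaction, then roll it back instead of committing
    let mut tx = pool.begin().await.unwrap();
    sqlx::query("INSERT INTO pack (ref, version) VALUES ($1, $2)")
        .bind("tx.pack")
        .bind("1.0.0")
        .execute(&mut *tx)
        .await
        .unwrap();
    tx.rollback().await.unwrap();

    // The rolled-back insert must not be visible from the pool
    let count: i64 = sqlx::query_scalar("SELECT count(*) FROM pack WHERE ref = $1")
        .bind("tx.pack")
        .fetch_one(&pool)
        .await
        .unwrap();
    assert_eq!(count, 0);
}
```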

### Error Handling

Tests verify proper error handling:

- Duplicate key violations
- Foreign key violations
- Not-found scenarios

### Cascading Deletes

Tests verify cascade delete behavior:

- Deleting a pack deletes its associated actions
- Deleting a runtime deletes its associated workers
- And other cascade relationships

## Best Practices

### 1. Clean the Database Before Tests

Always clean the database at the start of each test:

```rust
let pool = create_test_pool().await.unwrap();
clean_database(&pool).await.unwrap();
```

### 2. Use Fixtures for Test Data

Use fixture builders instead of constructing inputs by hand:

```rust
// Good
let pack = PackFixture::new("test.pack").create(&pool).await.unwrap();

// Avoid
let input = CreatePackInput { /* ... */ };
let pack = PackRepository::create(&pool, input).await.unwrap();
```

### 3. Test Isolation

Each test should be independent:

- Don't rely on data from other tests
- Clean the database between tests
- Use unique names/IDs
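The `new_unique` fixture constructors and the `unique_action_name` helper used throughout the test files exist for exactly this:

```rust
// Each call generates a distinct ref, so repeated runs don't collide
let pack = PackFixture::new_unique("test_pack")
    .create(&pool)
    .await
    .unwrap();
let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
    .create(&pool)
    .await
    .unwrap();
```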

### 4. Single-Threaded Execution

Run integration tests single-threaded to avoid race conditions:

```bash
cargo test -- --test-threads=1
```

### 5. Descriptive Test Names

Use clear, descriptive test names:

```rust
#[tokio::test]
async fn test_create_pack_duplicate_ref_version() { /* ... */ }
```

### 6. Test Both Success and Failure

Cover both happy paths and error cases:

```rust
#[tokio::test]
async fn test_create_pack() { /* success case */ }

#[tokio::test]
async fn test_create_pack_duplicate_ref_version() { /* error case */ }
```

## Debugging Tests

### Enable SQL Logging

Set in `.env.test`:

```bash
ATTUNE__DATABASE__LOG_STATEMENTS=true
RUST_LOG=debug,sqlx=debug
```

### Run with Output

```bash
cargo test test_name -- --nocapture
```

### Use Transaction Rollback

Wrap test operations in a transaction that is never committed, so state can be inspected without persisting it:

```rust
let mut tx = pool.begin().await.unwrap();
// ... test operations ...
// Drop tx without committing to roll back
```

### Check Database State

Connect to the test database directly:

```bash
psql -d attune_test -U postgres
```

## Continuous Integration

For CI environments:

```bash
# Set up the test database
createdb attune_test
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/attune_test sqlx migrate run

# Run tests
cargo test --test '*' -p attune-common -- --test-threads=1
```

## Common Issues

### Database Connection Errors

**Issue**: Cannot connect to the database

**Solution**:
- Ensure PostgreSQL is running
- Check the credentials in `.env.test`
- Verify the test database exists

### Migration Errors

**Issue**: Migrations fail

**Solution**:
- Run `make db-test-reset` to reset the test database
- Ensure migrations are in the `migrations/` directory

### Flaky Tests

**Issue**: Tests fail intermittently

**Solution**:
- Run single-threaded: `--test-threads=1`
- Clean the database before each test
- Avoid time-dependent assertions

### Foreign Key Violations

**Issue**: Cannot delete an entity due to foreign keys

**Solution**:
- Use `clean_database()`, which handles dependency order
- Test cascade deletes explicitly
- Delete in the correct order (children before parents)

## Adding New Tests

To add tests for a new repository:

1. Create a test file: `tests/<entity>_repository_tests.rs`
2. Import the helpers: `mod helpers;`
3. Add fixtures to `helpers.rs` if needed
4. Write comprehensive CRUD tests
5. Test constraints and error cases
6. Test transactions
7. Run and verify: `cargo test --test <entity>_repository_tests`
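A minimal skeleton for such a file, modeled on the existing repository tests; `WidgetRepository` and `WidgetFixture` are placeholder names for the new entity, not real types in the crate:

```rust
//! Integration tests for the Widget repository

mod helpers;

use attune_common::repositories::{
    widget::WidgetRepository, // hypothetical module for the new entity
    Create, FindById,
};
use helpers::*;

#[tokio::test]
async fn test_create_widget() {
    let pool = create_test_pool().await.unwrap();

    // WidgetFixture is a placeholder; add a real builder to helpers.rs
    let widget = WidgetFixture::new_unique("test_widget")
        .create(&pool)
        .await
        .unwrap();

    let found = WidgetRepository::find_by_id(&pool, widget.id)
        .await
        .unwrap();
    assert!(found.is_some());
}
```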

## Test Coverage

To generate test coverage reports:

```bash
# Install tarpaulin
cargo install cargo-tarpaulin

# Generate coverage
cargo tarpaulin --out Html --output-dir coverage --test '*' -p attune-common
```

## Additional Resources

- [SQLx Documentation](https://docs.rs/sqlx)
- [Tokio Testing Guide](https://tokio.rs/tokio/topics/testing)
- [Rust Testing Best Practices](https://doc.rust-lang.org/book/ch11-00-testing.html)

## Support

For issues or questions:

- Check existing tests for examples
- Review the helper functions in `helpers.rs`
- Consult the main project documentation
- Open an issue on the project repository
crates/common/tests/action_repository_tests.rs (new file, 477 lines)
//! Integration tests for Action repository
//!
//! These tests verify CRUD operations, queries, and constraints
//! for the Action repository.

mod helpers;

use attune_common::repositories::{
    action::{ActionRepository, CreateActionInput, UpdateActionInput},
    Create, Delete, FindById, FindByRef, List, Update,
};
use helpers::*;
use serde_json::json;

#[tokio::test]
async fn test_create_action() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(action.pack, pack.id);
    assert_eq!(action.pack_ref, pack.r#ref);
    assert!(action.r#ref.contains("test_pack_"));
    assert!(action.r#ref.contains(".test_action_"));
    assert!(action.created.timestamp() > 0);
    assert!(action.updated.timestamp() > 0);
}

#[tokio::test]
async fn test_create_action_with_optional_fields() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "full_action")
        .with_label("Full Test Action")
        .with_description("Action with all optional fields")
        .with_entrypoint("custom.py")
        .with_param_schema(json!({
            "type": "object",
            "properties": {
                "name": {"type": "string"}
            }
        }))
        .with_out_schema(json!({
            "type": "object",
            "properties": {
                "result": {"type": "string"}
            }
        }))
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(action.label, "Full Test Action");
    assert_eq!(action.description, "Action with all optional fields");
    assert_eq!(action.entrypoint, "custom.py");
    assert!(action.param_schema.is_some());
    assert!(action.out_schema.is_some());
}

#[tokio::test]
async fn test_find_action_by_id() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();
    let created = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    let found = ActionRepository::find_by_id(&pool, created.id)
        .await
        .unwrap();

    assert!(found.is_some());
    let action = found.unwrap();
    assert_eq!(action.id, created.id);
    assert_eq!(action.r#ref, created.r#ref);
    assert_eq!(action.pack, pack.id);
}

#[tokio::test]
async fn test_find_action_by_id_not_found() {
    let pool = create_test_pool().await.unwrap();

    let found = ActionRepository::find_by_id(&pool, 99999).await.unwrap();

    assert!(found.is_none());
}

#[tokio::test]
async fn test_find_action_by_ref() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();
    let created = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    let found = ActionRepository::find_by_ref(&pool, &created.r#ref)
        .await
        .unwrap();

    assert!(found.is_some());
    let action = found.unwrap();
    assert_eq!(action.id, created.id);
    assert_eq!(action.r#ref, created.r#ref);
}

#[tokio::test]
async fn test_find_action_by_ref_not_found() {
    let pool = create_test_pool().await.unwrap();

    let found = ActionRepository::find_by_ref(&pool, "nonexistent.action")
        .await
        .unwrap();

    assert!(found.is_none());
}

#[tokio::test]
async fn test_list_actions() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    // Create multiple actions
    ActionFixture::new_unique(pack.id, &pack.r#ref, "action1")
        .create(&pool)
        .await
        .unwrap();
    ActionFixture::new_unique(pack.id, &pack.r#ref, "action2")
        .create(&pool)
        .await
        .unwrap();
    ActionFixture::new_unique(pack.id, &pack.r#ref, "action3")
        .create(&pool)
        .await
        .unwrap();

    let actions = ActionRepository::list(&pool).await.unwrap();

    // Should contain at least our created actions
    assert!(actions.len() >= 3);
}

#[tokio::test]
async fn test_list_actions_empty() {
    let pool = create_test_pool().await.unwrap();

    let actions = ActionRepository::list(&pool).await.unwrap();
    // May have actions from other tests; just verify we can list without error
    drop(actions);
}

#[tokio::test]
async fn test_update_action() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    let original_updated = action.updated;

    // Wait a bit to ensure a timestamp difference
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let update = UpdateActionInput {
        label: Some("Updated Label".to_string()),
        description: Some("Updated description".to_string()),
        entrypoint: None,
        runtime: None,
        param_schema: None,
        out_schema: None,
    };

    let updated = ActionRepository::update(&pool, action.id, update)
        .await
        .unwrap();

    assert_eq!(updated.id, action.id);
    assert_eq!(updated.label, "Updated Label");
    assert_eq!(updated.description, "Updated description");
    assert_eq!(updated.entrypoint, action.entrypoint); // Unchanged
    assert!(updated.updated > original_updated);
}

#[tokio::test]
async fn test_update_action_not_found() {
    let pool = create_test_pool().await.unwrap();

    let update = UpdateActionInput {
        label: Some("New Label".to_string()),
        ..Default::default()
    };

    let result = ActionRepository::update(&pool, 99999, update).await;

    assert!(result.is_err());
}

#[tokio::test]
async fn test_update_action_partial() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .with_label("Original")
        .with_description("Original description")
        .create(&pool)
        .await
        .unwrap();

    // Update only the label
    let update = UpdateActionInput {
        label: Some("Updated Label Only".to_string()),
        ..Default::default()
    };

    let updated = ActionRepository::update(&pool, action.id, update)
        .await
        .unwrap();

    assert_eq!(updated.label, "Updated Label Only");
    assert_eq!(updated.description, action.description); // Unchanged
}

#[tokio::test]
async fn test_delete_action() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    let deleted = ActionRepository::delete(&pool, action.id).await.unwrap();

    assert!(deleted);

    // Verify it's gone
    let found = ActionRepository::find_by_id(&pool, action.id)
        .await
        .unwrap();
    assert!(found.is_none());
}

#[tokio::test]
async fn test_delete_action_not_found() {
    let pool = create_test_pool().await.unwrap();

    let deleted = ActionRepository::delete(&pool, 99999).await.unwrap();

    assert!(!deleted);
}

#[tokio::test]
async fn test_actions_cascade_delete_with_pack() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    // Delete the pack
    sqlx::query("DELETE FROM pack WHERE id = $1")
        .bind(pack.id)
        .execute(&pool)
        .await
        .unwrap();

    // Action should be cascade deleted
    let found = ActionRepository::find_by_id(&pool, action.id)
        .await
        .unwrap();
    assert!(found.is_none());
}

#[tokio::test]
async fn test_action_foreign_key_constraint() {
    let pool = create_test_pool().await.unwrap();

    // Try to create an action with a non-existent pack
    let input = CreateActionInput {
        r#ref: "test.action".to_string(),
        pack: 99999,
        pack_ref: "nonexistent.pack".to_string(),
        label: "Test Action".to_string(),
        description: "Test".to_string(),
        entrypoint: "main.py".to_string(),
        runtime: None,
        param_schema: None,
        out_schema: None,
        is_adhoc: false,
    };

    let result = ActionRepository::create(&pool, input).await;

    assert!(result.is_err());
}

#[tokio::test]
async fn test_multiple_actions_same_pack() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    // Create multiple actions in the same pack
    let action1 = ActionFixture::new_unique(pack.id, &pack.r#ref, "action1")
        .create(&pool)
        .await
        .unwrap();
    let action2 = ActionFixture::new_unique(pack.id, &pack.r#ref, "action2")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(action1.pack, pack.id);
    assert_eq!(action2.pack, pack.id);
    assert_ne!(action1.id, action2.id);
}

#[tokio::test]
async fn test_action_unique_ref_constraint() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    // Create the first action; use a non-unique name since we're testing duplicate detection
    let action_name = helpers::unique_action_name("duplicate");
    ActionFixture::new(pack.id, &pack.r#ref, &action_name)
        .create(&pool)
        .await
        .unwrap();

    // Try to create another action with the same ref (should fail)
    let result = ActionFixture::new(pack.id, &pack.r#ref, &action_name)
        .create(&pool)
        .await;

    assert!(result.is_err());
}

#[tokio::test]
async fn test_action_with_json_schemas() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    let param_schema = json!({
        "type": "object",
        "properties": {
            "input": {"type": "string"},
            "count": {"type": "integer"}
        },
        "required": ["input"]
    });

    let out_schema = json!({
        "type": "object",
        "properties": {
            "output": {"type": "string"},
            "status": {"type": "string"}
        }
    });

    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "schema_action")
        .with_param_schema(param_schema.clone())
        .with_out_schema(out_schema.clone())
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(action.param_schema, Some(param_schema));
    assert_eq!(action.out_schema, Some(out_schema));
}

#[tokio::test]
async fn test_action_timestamps_auto_populated() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    let now = chrono::Utc::now();
    assert!(action.created <= now);
    assert!(action.updated <= now);
    assert!(action.created <= action.updated);
}

#[tokio::test]
async fn test_action_updated_changes_on_update() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    let original_created = action.created;
    let original_updated = action.updated;

    // Wait a bit
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let update = UpdateActionInput {
        label: Some("Updated".to_string()),
        ..Default::default()
    };

    let updated = ActionRepository::update(&pool, action.id, update)
        .await
        .unwrap();

    assert_eq!(updated.created, original_created); // Created unchanged
    assert!(updated.updated > original_updated); // Updated changed
}
crates/common/tests/enforcement_repository_tests.rs (new file, 1392 lines; diff suppressed because it is too large)

crates/common/tests/event_repository_tests.rs (new file, 797 lines)
//! Integration tests for Event repository
|
||||
//!
|
||||
//! These tests verify CRUD operations, queries, and constraints
|
||||
//! for the Event repository.
|
||||
|
||||
mod helpers;
|
||||
|
||||
use attune_common::{
|
||||
repositories::{
|
||||
event::{CreateEventInput, EventRepository, UpdateEventInput},
|
||||
Create, Delete, FindById, List, Update,
|
||||
},
|
||||
Error,
|
||||
};
|
||||
use helpers::*;
|
||||
use serde_json::json;
|
||||
|
||||
// ============================================================================
|
||||
// CREATE Tests
|
||||
// ============================================================================
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_create_event_minimal() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
// Create a trigger for the event
|
||||
let pack = PackFixture::new_unique("event_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Create event with minimal fields
|
||||
let input = CreateEventInput {
|
||||
trigger: Some(trigger.id),
|
||||
trigger_ref: trigger.r#ref.clone(),
|
||||
config: None,
|
||||
payload: None,
|
||||
source: None,
|
||||
source_ref: None,
|
||||
rule: None,
|
||||
rule_ref: None,
|
||||
};
|
||||
|
||||
let event = EventRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
assert!(event.id > 0);
|
||||
assert_eq!(event.trigger, Some(trigger.id));
|
||||
assert_eq!(event.trigger_ref, trigger.r#ref);
|
||||
assert_eq!(event.config, None);
|
||||
assert_eq!(event.payload, None);
|
||||
assert_eq!(event.source, None);
|
||||
assert_eq!(event.source_ref, None);
|
||||
assert!(event.created.timestamp() > 0);
|
||||
assert!(event.updated.timestamp() > 0);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_create_event_with_payload() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("payload_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let payload = json!({
|
||||
"webhook_url": "https://example.com/webhook",
|
||||
"method": "POST",
|
||||
"headers": {
|
||||
"Content-Type": "application/json"
|
||||
},
|
||||
"body": {
|
||||
"message": "Test event"
|
||||
}
|
||||
});
|
||||
|
||||
let input = CreateEventInput {
|
||||
trigger: Some(trigger.id),
|
||||
trigger_ref: trigger.r#ref.clone(),
|
||||
config: None,
|
||||
payload: Some(payload.clone()),
|
||||
source: None,
|
||||
source_ref: None,
|
||||
rule: None,
|
||||
rule_ref: None,
|
||||
};
|
||||
|
||||
let event = EventRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
assert_eq!(event.payload, Some(payload));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_create_event_with_config() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("config_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "timer")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let config = json!({
|
||||
"interval": "5m",
|
||||
"timezone": "UTC"
|
||||
});
|
||||
|
||||
let input = CreateEventInput {
|
||||
trigger: Some(trigger.id),
|
||||
trigger_ref: trigger.r#ref.clone(),
|
||||
config: Some(config.clone()),
|
||||
payload: None,
|
||||
source: None,
|
||||
source_ref: None,
|
||||
rule: None,
|
||||
rule_ref: None,
|
||||
};
|
||||
|
||||
let event = EventRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
assert_eq!(event.config, Some(config));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_create_event_without_trigger_id() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
// Events can be created without a trigger ID (trigger may have been deleted)
|
||||
    let input = CreateEventInput {
        trigger: None,
        trigger_ref: "deleted.trigger".to_string(),
        config: None,
        payload: Some(json!({"reason": "trigger was deleted"})),
        source: None,
        source_ref: None,
        rule: None,
        rule_ref: None,
    };

    let event = EventRepository::create(&pool, input).await.unwrap();

    assert_eq!(event.trigger, None);
    assert_eq!(event.trigger_ref, "deleted.trigger");
}

#[tokio::test]
async fn test_create_event_with_source() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("source_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    // Create a sensor to reference as source.
    // Note: we'd need a SensorFixture, but for now we just test with a NULL source.
    let input = CreateEventInput {
        trigger: Some(trigger.id),
        trigger_ref: trigger.r#ref.clone(),
        config: None,
        payload: None,
        source: None,
        source_ref: Some("test.sensor".to_string()),
        rule: None,
        rule_ref: None,
    };

    let event = EventRepository::create(&pool, input).await.unwrap();

    assert_eq!(event.source, None);
    assert_eq!(event.source_ref, Some("test.sensor".to_string()));
}

#[tokio::test]
async fn test_create_event_with_invalid_trigger_fails() {
    let pool = create_test_pool().await.unwrap();

    // Try to create an event with a non-existent trigger ID
    let input = CreateEventInput {
        trigger: Some(99999),
        trigger_ref: "nonexistent.trigger".to_string(),
        config: None,
        payload: None,
        source: None,
        source_ref: None,
        rule: None,
        rule_ref: None,
    };

    let result = EventRepository::create(&pool, input).await;

    assert!(result.is_err());
    // Foreign key constraint violation
}
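The failure above comes from the database's referential check, not from application code. As a minimal in-memory sketch (hypothetical names, not the real schema or repository code), the rule the constraint enforces looks like this:

```rust
use std::collections::HashSet;

// Hypothetical sketch of the referential check behind the test above:
// an event may only reference a trigger id that actually exists, while a
// NULL trigger is allowed (the column is nullable).
fn check_trigger_fk(existing_triggers: &HashSet<i64>, trigger: Option<i64>) -> Result<(), String> {
    match trigger {
        Some(id) if !existing_triggers.contains(&id) => {
            Err(format!("foreign key violation: trigger {id} does not exist"))
        }
        _ => Ok(()),
    }
}

fn main() {
    let triggers: HashSet<i64> = [1, 2, 3].into_iter().collect();
    assert!(check_trigger_fk(&triggers, Some(2)).is_ok());
    assert!(check_trigger_fk(&triggers, None).is_ok());
    assert!(check_trigger_fk(&triggers, Some(99999)).is_err());
}
```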
// ============================================================================
// READ Tests
// ============================================================================

#[tokio::test]
async fn test_find_event_by_id() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("find_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let created_event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .with_payload(json!({"test": "data"}))
        .create(&pool)
        .await
        .unwrap();

    let found = EventRepository::find_by_id(&pool, created_event.id)
        .await
        .unwrap();

    assert!(found.is_some());
    let event = found.unwrap();
    assert_eq!(event.id, created_event.id);
    assert_eq!(event.trigger, created_event.trigger);
    assert_eq!(event.trigger_ref, created_event.trigger_ref);
    assert_eq!(event.payload, created_event.payload);
}

#[tokio::test]
async fn test_find_event_by_id_not_found() {
    let pool = create_test_pool().await.unwrap();

    let result = EventRepository::find_by_id(&pool, 99999).await.unwrap();

    assert!(result.is_none());
}

#[tokio::test]
async fn test_get_event_by_id() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("get_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let created_event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .create(&pool)
        .await
        .unwrap();

    let event = EventRepository::get_by_id(&pool, created_event.id)
        .await
        .unwrap();

    assert_eq!(event.id, created_event.id);
}

#[tokio::test]
async fn test_get_event_by_id_not_found() {
    let pool = create_test_pool().await.unwrap();

    let result = EventRepository::get_by_id(&pool, 99999).await;

    assert!(result.is_err());
    assert!(matches!(result.unwrap_err(), Error::NotFound { .. }));
}

// ============================================================================
// LIST Tests
// ============================================================================

#[tokio::test]
async fn test_list_events_empty() {
    let pool = create_test_pool().await.unwrap();

    let events = EventRepository::list(&pool).await.unwrap();
    // May contain events from other tests; just verify we can list without error
    drop(events);
}

#[tokio::test]
async fn test_list_events() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("list_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let before_count = EventRepository::list(&pool).await.unwrap().len();

    // Create multiple events
    let mut created_ids = vec![];
    for i in 0..3 {
        let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
            .with_payload(json!({"index": i}))
            .create(&pool)
            .await
            .unwrap();
        created_ids.push(event.id);
    }

    let events = EventRepository::list(&pool).await.unwrap();

    assert!(events.len() >= before_count + 3);
    // Verify our events are in the list (they should be near the top, since
    // results are ordered by created DESC)
    let our_events: Vec<_> = events
        .iter()
        .filter(|e| created_ids.contains(&e.id))
        .collect();
    assert_eq!(our_events.len(), 3);
}

#[tokio::test]
async fn test_list_events_respects_limit() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("limit_pack")
        .create(&pool)
        .await
        .unwrap();

    let _trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    // The list operation has a LIMIT of 1000, so it won't retrieve more than that
    let events = EventRepository::list(&pool).await.unwrap();
    assert!(events.len() <= 1000);
}
// ============================================================================
// UPDATE Tests
// ============================================================================

#[tokio::test]
async fn test_update_event_config() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("update_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .with_config(json!({"old": "config"}))
        .create(&pool)
        .await
        .unwrap();

    let new_config = json!({"new": "config", "updated": true});
    let input = UpdateEventInput {
        config: Some(new_config.clone()),
        payload: None,
    };

    let updated = EventRepository::update(&pool, event.id, input)
        .await
        .unwrap();

    assert_eq!(updated.id, event.id);
    assert_eq!(updated.config, Some(new_config));
    assert!(updated.updated > event.updated);
}

#[tokio::test]
async fn test_update_event_payload() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("payload_update_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .with_payload(json!({"initial": "payload"}))
        .create(&pool)
        .await
        .unwrap();

    let new_payload = json!({"updated": "payload", "version": 2});
    let input = UpdateEventInput {
        config: None,
        payload: Some(new_payload.clone()),
    };

    let updated = EventRepository::update(&pool, event.id, input)
        .await
        .unwrap();

    assert_eq!(updated.payload, Some(new_payload));
    assert!(updated.updated > event.updated);
}

#[tokio::test]
async fn test_update_event_both_fields() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("both_update_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .create(&pool)
        .await
        .unwrap();

    let new_config = json!({"setting": "value"});
    let new_payload = json!({"data": "value"});
    let input = UpdateEventInput {
        config: Some(new_config.clone()),
        payload: Some(new_payload.clone()),
    };

    let updated = EventRepository::update(&pool, event.id, input)
        .await
        .unwrap();

    assert_eq!(updated.config, Some(new_config));
    assert_eq!(updated.payload, Some(new_payload));
}

#[tokio::test]
async fn test_update_event_no_changes() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("nochange_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .with_payload(json!({"test": "data"}))
        .create(&pool)
        .await
        .unwrap();

    let input = UpdateEventInput {
        config: None,
        payload: None,
    };

    let result = EventRepository::update(&pool, event.id, input)
        .await
        .unwrap();

    // Should return the existing event without updating it
    assert_eq!(result.id, event.id);
    assert_eq!(result.payload, event.payload);
}
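The UPDATE tests above all rely on partial-update semantics: an input field left as `None` keeps the stored value. As a minimal sketch of that merge rule (hypothetical local types; the real repository implements this in SQL, and the exact mechanism may differ):

```rust
// Hypothetical sketch of partial-update merging: `None` input fields
// preserve the current values; `Some` input fields overwrite them.
#[derive(Clone, Debug, PartialEq)]
struct EventRow {
    config: Option<String>,
    payload: Option<String>,
}

struct EventUpdate {
    config: Option<String>,
    payload: Option<String>,
}

fn apply_update(current: &EventRow, input: EventUpdate) -> EventRow {
    EventRow {
        config: input.config.or_else(|| current.config.clone()),
        payload: input.payload.or_else(|| current.payload.clone()),
    }
}

fn main() {
    let current = EventRow {
        config: Some("old".into()),
        payload: Some("data".into()),
    };
    let updated = apply_update(
        &current,
        EventUpdate { config: Some("new".into()), payload: None },
    );
    assert_eq!(updated.config.as_deref(), Some("new")); // overwritten
    assert_eq!(updated.payload.as_deref(), Some("data")); // untouched field preserved
}
```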
#[tokio::test]
async fn test_update_event_not_found() {
    let pool = create_test_pool().await.unwrap();

    let input = UpdateEventInput {
        config: Some(json!({"test": "config"})),
        payload: None,
    };

    let result = EventRepository::update(&pool, 99999, input).await;

    // When updating a non-existent entity with changes, SQLx returns a RowNotFound error
    assert!(result.is_err());
}

// ============================================================================
// DELETE Tests
// ============================================================================

#[tokio::test]
async fn test_delete_event() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("delete_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .create(&pool)
        .await
        .unwrap();

    let deleted = EventRepository::delete(&pool, event.id).await.unwrap();

    assert!(deleted);

    // Verify it's gone
    let found = EventRepository::find_by_id(&pool, event.id).await.unwrap();
    assert!(found.is_none());
}

#[tokio::test]
async fn test_delete_event_not_found() {
    let pool = create_test_pool().await.unwrap();

    let deleted = EventRepository::delete(&pool, 99999).await.unwrap();

    assert!(!deleted);
}

#[tokio::test]
async fn test_delete_event_sets_enforcement_event_to_null() {
    let pool = create_test_pool().await.unwrap();

    // Create pack, trigger, action, rule, and event
    let pack = PackFixture::new_unique("cascade_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "action")
        .create(&pool)
        .await
        .unwrap();

    // Create a rule
    use attune_common::repositories::rule::{CreateRuleInput, RuleRepository};
    let rule = RuleRepository::create(
        &pool,
        CreateRuleInput {
            r#ref: format!("{}.test_rule", pack.r#ref),
            pack: pack.id,
            pack_ref: pack.r#ref.clone(),
            label: "Test Rule".to_string(),
            description: "Test".to_string(),
            action: action.id,
            action_ref: action.r#ref.clone(),
            trigger: trigger.id,
            trigger_ref: trigger.r#ref.clone(),
            conditions: json!({}),
            action_params: json!({}),
            trigger_params: json!({}),
            enabled: true,
            is_adhoc: false,
        },
    )
    .await
    .unwrap();

    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .create(&pool)
        .await
        .unwrap();

    // Create an enforcement referencing the event
    let enforcement = EnforcementFixture::new_unique(Some(rule.id), &rule.r#ref, &trigger.r#ref)
        .with_event(event.id)
        .create(&pool)
        .await
        .unwrap();

    // Delete the event; enforcement.event should be set to NULL (ON DELETE SET NULL)
    EventRepository::delete(&pool, event.id).await.unwrap();

    // The enforcement should still exist, but with a NULL event
    use attune_common::repositories::event::EnforcementRepository;
    let found_enforcement = EnforcementRepository::find_by_id(&pool, enforcement.id)
        .await
        .unwrap()
        .unwrap();

    assert_eq!(found_enforcement.event, None);
}
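The test above depends on the `ON DELETE SET NULL` action on the enforcement's foreign key. A minimal in-memory sketch of that behavior (hypothetical types; the database does this automatically, so this is illustration only):

```rust
// Sketch of ON DELETE SET NULL semantics: deleting an event clears each
// enforcement's `event` reference but leaves the enforcement rows in place.
struct Enforcement {
    id: i64,
    event: Option<i64>,
}

fn on_event_deleted(enforcements: &mut [Enforcement], deleted_event_id: i64) {
    for e in enforcements.iter_mut() {
        if e.event == Some(deleted_event_id) {
            e.event = None; // mirrors the foreign key's SET NULL action
        }
    }
}

fn main() {
    let mut rows = vec![
        Enforcement { id: 1, event: Some(10) },
        Enforcement { id: 2, event: Some(11) },
    ];
    on_event_deleted(&mut rows, 10);
    assert_eq!(rows[0].event, None); // reference cleared
    assert_eq!(rows[1].event, Some(11)); // unrelated row untouched
    assert_eq!(rows.len(), 2); // no rows deleted
}
```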
// ============================================================================
// SPECIALIZED QUERY Tests
// ============================================================================

#[tokio::test]
async fn test_find_events_by_trigger() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("trigger_query_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger1 = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let trigger2 = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "timer")
        .create(&pool)
        .await
        .unwrap();

    // Create events for trigger1
    for i in 0..3 {
        EventFixture::new_unique(Some(trigger1.id), &trigger1.r#ref)
            .with_payload(json!({"trigger": 1, "index": i}))
            .create(&pool)
            .await
            .unwrap();
    }

    // Create events for trigger2
    for i in 0..2 {
        EventFixture::new_unique(Some(trigger2.id), &trigger2.r#ref)
            .with_payload(json!({"trigger": 2, "index": i}))
            .create(&pool)
            .await
            .unwrap();
    }

    let events = EventRepository::find_by_trigger(&pool, trigger1.id)
        .await
        .unwrap();

    assert_eq!(events.len(), 3);
    for event in &events {
        assert_eq!(event.trigger, Some(trigger1.id));
    }
}

#[tokio::test]
async fn test_find_events_by_trigger_ref() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("triggerref_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    // Create events with a unique trigger_ref to avoid conflicts
    let unique_trigger_ref = trigger.r#ref.clone();
    for i in 0..3 {
        EventFixture::new(Some(trigger.id), &unique_trigger_ref)
            .with_payload(json!({"index": i}))
            .create(&pool)
            .await
            .unwrap();
    }

    let events = EventRepository::find_by_trigger_ref(&pool, &unique_trigger_ref)
        .await
        .unwrap();

    assert_eq!(events.len(), 3);
    for event in &events {
        assert_eq!(event.trigger_ref, unique_trigger_ref);
    }
}

#[tokio::test]
async fn test_find_events_by_trigger_ref_preserves_after_trigger_deletion() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("preserve_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let trigger_ref = trigger.r#ref.clone();

    // Create an event with the specific trigger_ref
    let event = EventFixture::new(Some(trigger.id), &trigger_ref)
        .create(&pool)
        .await
        .unwrap();

    // Delete the trigger (ON DELETE SET NULL on event.trigger)
    use attune_common::repositories::{trigger::TriggerRepository, Delete};
    TriggerRepository::delete(&pool, trigger.id).await.unwrap();

    // Events should still be findable by trigger_ref even though the trigger is deleted
    let events = EventRepository::find_by_trigger_ref(&pool, &trigger_ref)
        .await
        .unwrap();

    assert_eq!(events.len(), 1);
    assert_eq!(events[0].id, event.id);
    assert_eq!(events[0].trigger, None); // trigger ID set to NULL
    assert_eq!(events[0].trigger_ref, trigger_ref); // trigger_ref preserved
}

// ============================================================================
// TIMESTAMP Tests
// ============================================================================

#[tokio::test]
async fn test_event_timestamps_auto_managed() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("timestamp_pack")
        .create(&pool)
        .await
        .unwrap();

    let trigger = TriggerFixture::new_unique(Some(pack.id), Some(pack.r#ref.clone()), "webhook")
        .create(&pool)
        .await
        .unwrap();

    let event = EventFixture::new_unique(Some(trigger.id), &trigger.r#ref)
        .create(&pool)
        .await
        .unwrap();

    let created_time = event.created;
    let updated_time = event.updated;

    assert!(created_time.timestamp() > 0);
    assert_eq!(created_time, updated_time);

    // Update and verify the timestamp changed
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let input = UpdateEventInput {
        config: Some(json!({"updated": true})),
        payload: None,
    };

    let updated = EventRepository::update(&pool, event.id, input)
        .await
        .unwrap();

    assert_eq!(updated.created, created_time); // created unchanged
    assert!(updated.updated > updated_time); // updated changed
}
1080
crates/common/tests/execution_repository_tests.rs
Normal file
File diff suppressed because it is too large

1258
crates/common/tests/helpers.rs
Normal file
File diff suppressed because it is too large

464
crates/common/tests/identity_repository_tests.rs
Normal file
@@ -0,0 +1,464 @@
//! Integration tests for the Identity repository
//!
//! These tests verify CRUD operations, queries, and constraints
//! for the Identity repository.

mod helpers;

use attune_common::{
    repositories::{
        identity::{CreateIdentityInput, IdentityRepository, UpdateIdentityInput},
        Create, Delete, FindById, List, Update,
    },
    Error,
};
use helpers::*;
use serde_json::json;

#[tokio::test]
async fn test_create_identity() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("testuser"),
        display_name: Some("Test User".to_string()),
        attributes: json!({"email": "test@example.com"}),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input.clone())
        .await
        .unwrap();

    assert!(identity.login.starts_with("testuser_"));
    assert_eq!(identity.display_name, Some("Test User".to_string()));
    assert_eq!(identity.attributes["email"], "test@example.com");
    assert!(identity.created.timestamp() > 0);
    assert!(identity.updated.timestamp() > 0);
}

#[tokio::test]
async fn test_create_identity_minimal() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("minimal"),
        display_name: None,
        attributes: json!({}),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input).await.unwrap();

    assert!(identity.login.starts_with("minimal_"));
    assert_eq!(identity.display_name, None);
    assert_eq!(identity.attributes, json!({}));
}

#[tokio::test]
async fn test_create_identity_duplicate_login() {
    let pool = create_test_pool().await.unwrap();

    let login = unique_pack_ref("duplicate");

    // Create the first identity
    let input1 = CreateIdentityInput {
        login: login.clone(),
        display_name: Some("First".to_string()),
        attributes: json!({}),
        password_hash: None,
    };
    IdentityRepository::create(&pool, input1).await.unwrap();

    // Try to create a second identity with the same login
    let input2 = CreateIdentityInput {
        login: login.clone(),
        display_name: Some("Second".to_string()),
        attributes: json!({}),
        password_hash: None,
    };
    let result = IdentityRepository::create(&pool, input2).await;

    assert!(result.is_err());
    let err = result.unwrap_err();
    println!("Actual error: {:?}", err);
    match err {
        Error::AlreadyExists { entity, field, .. } => {
            assert_eq!(entity, "Identity");
            assert_eq!(field, "login");
        }
        _ => panic!("Expected AlreadyExists error, got: {:?}", err),
    }
}
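The `AlreadyExists` match above implies the repository translates the database's unique-constraint violation into a typed error. A hypothetical sketch of such a mapping (local enum and function names are assumptions; the real error-mapping code may differ):

```rust
// Hypothetical mapping from a Postgres SQLSTATE to a typed repository error.
#[derive(Debug, PartialEq)]
enum RepoError {
    AlreadyExists { entity: String, field: String },
    Database(String),
}

fn map_pg_error(sqlstate: &str, entity: &str, field: &str) -> RepoError {
    match sqlstate {
        // Postgres SQLSTATE 23505 = unique_violation
        "23505" => RepoError::AlreadyExists {
            entity: entity.to_string(),
            field: field.to_string(),
        },
        other => RepoError::Database(other.to_string()),
    }
}

fn main() {
    let err = map_pg_error("23505", "Identity", "login");
    assert!(matches!(
        err,
        RepoError::AlreadyExists { ref field, .. } if field.as_str() == "login"
    ));
    assert!(matches!(
        map_pg_error("23503", "Event", "trigger"),
        RepoError::Database(_)
    ));
}
```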
#[tokio::test]
async fn test_find_identity_by_id() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("findbyid"),
        display_name: Some("Find By ID".to_string()),
        attributes: json!({"key": "value"}),
        password_hash: None,
    };

    let created = IdentityRepository::create(&pool, input).await.unwrap();

    let found = IdentityRepository::find_by_id(&pool, created.id)
        .await
        .unwrap()
        .expect("Identity not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.login, created.login);
    assert_eq!(found.display_name, created.display_name);
    assert_eq!(found.attributes, created.attributes);
}

#[tokio::test]
async fn test_find_identity_by_id_not_found() {
    let pool = create_test_pool().await.unwrap();

    let found = IdentityRepository::find_by_id(&pool, 999999).await.unwrap();

    assert!(found.is_none());
}

#[tokio::test]
async fn test_find_identity_by_login() {
    let pool = create_test_pool().await.unwrap();

    let login = unique_pack_ref("findbylogin");
    let input = CreateIdentityInput {
        login: login.clone(),
        display_name: Some("Find By Login".to_string()),
        attributes: json!({}),
        password_hash: None,
    };

    let created = IdentityRepository::create(&pool, input).await.unwrap();

    let found = IdentityRepository::find_by_login(&pool, &login)
        .await
        .unwrap()
        .expect("Identity not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.login, login);
}

#[tokio::test]
async fn test_find_identity_by_login_not_found() {
    let pool = create_test_pool().await.unwrap();

    let found = IdentityRepository::find_by_login(&pool, "nonexistent_user_12345")
        .await
        .unwrap();

    assert!(found.is_none());
}

#[tokio::test]
async fn test_list_identities() {
    let pool = create_test_pool().await.unwrap();

    // Create multiple identities
    let input1 = CreateIdentityInput {
        login: unique_pack_ref("user1"),
        display_name: Some("User 1".to_string()),
        attributes: json!({}),
        password_hash: None,
    };
    let identity1 = IdentityRepository::create(&pool, input1).await.unwrap();

    let input2 = CreateIdentityInput {
        login: unique_pack_ref("user2"),
        display_name: Some("User 2".to_string()),
        attributes: json!({}),
        password_hash: None,
    };
    let identity2 = IdentityRepository::create(&pool, input2).await.unwrap();

    let identities = IdentityRepository::list(&pool).await.unwrap();

    // Should contain at least our created identities
    assert!(identities.len() >= 2);

    let identity_ids: Vec<i64> = identities.iter().map(|i| i.id).collect();
    assert!(identity_ids.contains(&identity1.id));
    assert!(identity_ids.contains(&identity2.id));
}

#[tokio::test]
async fn test_update_identity() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("updatetest"),
        display_name: Some("Original Name".to_string()),
        attributes: json!({"key": "original"}),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input).await.unwrap();
    let original_updated = identity.updated;

    // Wait a moment to ensure the timestamp changes
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let update_input = UpdateIdentityInput {
        display_name: Some("Updated Name".to_string()),
        password_hash: None,
        attributes: Some(json!({"key": "updated", "new_key": "new_value"})),
    };

    let updated = IdentityRepository::update(&pool, identity.id, update_input)
        .await
        .unwrap();

    assert_eq!(updated.id, identity.id);
    assert_eq!(updated.login, identity.login); // login should not change
    assert_eq!(updated.display_name, Some("Updated Name".to_string()));
    assert_eq!(updated.attributes["key"], "updated");
    assert_eq!(updated.attributes["new_key"], "new_value");
    assert!(updated.updated > original_updated);
}

#[tokio::test]
async fn test_update_identity_partial() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("partial"),
        display_name: Some("Original".to_string()),
        attributes: json!({"key": "value"}),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input).await.unwrap();

    // Update only display_name
    let update_input = UpdateIdentityInput {
        display_name: Some("Only Display Name Changed".to_string()),
        password_hash: None,
        attributes: None,
    };

    let updated = IdentityRepository::update(&pool, identity.id, update_input)
        .await
        .unwrap();

    assert_eq!(
        updated.display_name,
        Some("Only Display Name Changed".to_string())
    );
    assert_eq!(updated.attributes, identity.attributes); // should remain unchanged
}

#[tokio::test]
async fn test_update_identity_not_found() {
    let pool = create_test_pool().await.unwrap();

    let update_input = UpdateIdentityInput {
        display_name: Some("Updated Name".to_string()),
        password_hash: None,
        attributes: None,
    };

    let result = IdentityRepository::update(&pool, 999999, update_input).await;

    assert!(result.is_err());
    let err = result.unwrap_err();
    println!("Actual error: {:?}", err);
    match err {
        Error::NotFound { entity, .. } => {
            assert_eq!(entity, "identity");
        }
        _ => panic!("Expected NotFound error, got: {:?}", err),
    }
}

#[tokio::test]
async fn test_delete_identity() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("deletetest"),
        display_name: Some("To Be Deleted".to_string()),
        attributes: json!({}),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input).await.unwrap();

    // Verify the identity exists
    let found = IdentityRepository::find_by_id(&pool, identity.id)
        .await
        .unwrap();
    assert!(found.is_some());

    // Delete the identity
    let deleted = IdentityRepository::delete(&pool, identity.id)
        .await
        .unwrap();
    assert!(deleted);

    // Verify the identity no longer exists
    let not_found = IdentityRepository::find_by_id(&pool, identity.id)
        .await
        .unwrap();
    assert!(not_found.is_none());
}

#[tokio::test]
async fn test_delete_identity_not_found() {
    let pool = create_test_pool().await.unwrap();

    let deleted = IdentityRepository::delete(&pool, 999999).await.unwrap();

    assert!(!deleted);
}

#[tokio::test]
async fn test_identity_timestamps_auto_populated() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("timestamps"),
        display_name: Some("Timestamp Test".to_string()),
        attributes: json!({}),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input).await.unwrap();

    // Timestamps should be set
    assert!(identity.created.timestamp() > 0);
    assert!(identity.updated.timestamp() > 0);

    // Created and updated should be very close initially
    let diff = (identity.updated - identity.created)
        .num_milliseconds()
        .abs();
    assert!(diff < 1000); // within 1 second
}

#[tokio::test]
async fn test_identity_updated_changes_on_update() {
    let pool = create_test_pool().await.unwrap();

    let input = CreateIdentityInput {
        login: unique_pack_ref("updatetimestamp"),
        display_name: Some("Original".to_string()),
        attributes: json!({}),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input).await.unwrap();
    let original_created = identity.created;
    let original_updated = identity.updated;

    // Wait a moment to ensure the timestamp changes
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let update_input = UpdateIdentityInput {
        display_name: Some("Updated".to_string()),
        password_hash: None,
        attributes: None,
    };

    let updated = IdentityRepository::update(&pool, identity.id, update_input)
        .await
        .unwrap();

    // Created should remain the same
    assert_eq!(updated.created, original_created);

    // Updated should be newer
    assert!(updated.updated > original_updated);
}

#[tokio::test]
async fn test_identity_with_complex_attributes() {
    let pool = create_test_pool().await.unwrap();

    let complex_attrs = json!({
        "email": "complex@example.com",
        "roles": ["admin", "user"],
        "metadata": {
            "last_login": "2024-01-01T00:00:00Z",
            "login_count": 42
        },
        "preferences": {
            "theme": "dark",
            "notifications": true
        }
    });

    let input = CreateIdentityInput {
        login: unique_pack_ref("complex"),
        display_name: Some("Complex User".to_string()),
        attributes: complex_attrs.clone(),
        password_hash: None,
    };

    let identity = IdentityRepository::create(&pool, input).await.unwrap();

    assert_eq!(identity.attributes, complex_attrs);
    assert_eq!(identity.attributes["roles"][0], "admin");
    assert_eq!(identity.attributes["metadata"]["login_count"], 42);
    assert_eq!(identity.attributes["preferences"]["theme"], "dark");

    // Verify it can be retrieved correctly
    let found = IdentityRepository::find_by_id(&pool, identity.id)
        .await
        .unwrap()
        .unwrap();

    assert_eq!(found.attributes, complex_attrs);
}

#[tokio::test]
async fn test_identity_login_case_sensitive() {
    let pool = create_test_pool().await.unwrap();

    let base = unique_pack_ref("case");
    let lower_login = format!("{}lower", base);
    let upper_login = format!("{}UPPER", base);

    // Create an identity with a lowercase login
    let input1 = CreateIdentityInput {
        login: lower_login.clone(),
        display_name: Some("Lower".to_string()),
        attributes: json!({}),
        password_hash: None,
    };
    let identity1 = IdentityRepository::create(&pool, input1).await.unwrap();

    // Create an identity with an uppercase login (should work: it is a different login)
    let input2 = CreateIdentityInput {
        login: upper_login.clone(),
        display_name: Some("Upper".to_string()),
        attributes: json!({}),
        password_hash: None,
    };
    let identity2 = IdentityRepository::create(&pool, input2).await.unwrap();

    // Both should exist
    assert_ne!(identity1.id, identity2.id);
    assert_eq!(identity1.login, lower_login);
    assert_eq!(identity2.login, upper_login);
|
||||
|
||||
// Find by login should be exact match
|
||||
let found_lower = IdentityRepository::find_by_login(&pool, &lower_login)
|
||||
.await
|
||||
.unwrap()
|
||||
.unwrap();
|
||||
assert_eq!(found_lower.id, identity1.id);
|
||||
|
||||
let found_upper = IdentityRepository::find_by_login(&pool, &upper_login)
|
||||
.await
|
||||
.unwrap()
|
||||
.unwrap();
|
||||
assert_eq!(found_upper.id, identity2.id);
|
||||
}
|
||||
1255 crates/common/tests/inquiry_repository_tests.rs Normal file
File diff suppressed because it is too large

884 crates/common/tests/key_repository_tests.rs Normal file
@@ -0,0 +1,884 @@
//! Integration tests for Key repository
//!
//! These tests verify CRUD operations, owner validation, encryption handling,
//! and constraints for the Key repository.

mod helpers;

use attune_common::{
    models::enums::OwnerType,
    repositories::{
        key::{CreateKeyInput, KeyRepository, UpdateKeyInput},
        Create, Delete, FindById, List, Update,
    },
    Error,
};
use helpers::*;

// ============================================================================
// CREATE Tests - System Owner
// ============================================================================

#[tokio::test]
async fn test_create_key_system_owner() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("system_key", "test_value")
        .create(&pool)
        .await
        .unwrap();

    assert!(key.id > 0);
    assert_eq!(key.owner_type, OwnerType::System);
    assert_eq!(key.owner, Some("system".to_string()));
    assert_eq!(key.owner_identity, None);
    assert_eq!(key.owner_pack, None);
    assert_eq!(key.owner_action, None);
    assert_eq!(key.owner_sensor, None);
    assert_eq!(key.encrypted, false);
    assert_eq!(key.value, "test_value");
    assert!(key.created.timestamp() > 0);
    assert!(key.updated.timestamp() > 0);
}

#[tokio::test]
async fn test_create_key_system_encrypted() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("encrypted_key", "encrypted_value")
        .with_encrypted(true)
        .with_encryption_key_hash("sha256:abc123")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(key.encrypted, true);
    assert_eq!(key.encryption_key_hash, Some("sha256:abc123".to_string()));
}

// ============================================================================
// CREATE Tests - Identity Owner
// ============================================================================

#[tokio::test]
async fn test_create_key_identity_owner() {
    let pool = create_test_pool().await.unwrap();

    // Create an identity first
    let identity = IdentityFixture::new_unique("testuser")
        .create(&pool)
        .await
        .unwrap();

    let key = KeyFixture::new_identity_unique(identity.id, "api_key", "secret_token")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(key.owner_type, OwnerType::Identity);
    assert_eq!(key.owner, Some(identity.id.to_string()));
    assert_eq!(key.owner_identity, Some(identity.id));
    assert_eq!(key.owner_pack, None);
    assert_eq!(key.value, "secret_token");
}

// ============================================================================
// CREATE Tests - Pack Owner
// ============================================================================

#[tokio::test]
async fn test_create_key_pack_owner() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("testpack")
        .create(&pool)
        .await
        .unwrap();

    let key = KeyFixture::new_pack_unique(pack.id, &pack.r#ref, "config_key", "config_value")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(key.owner_type, OwnerType::Pack);
    assert_eq!(key.owner, Some(pack.id.to_string()));
    assert_eq!(key.owner_pack, Some(pack.id));
    assert_eq!(key.owner_pack_ref, Some(pack.r#ref.clone()));
    assert_eq!(key.value, "config_value");
}

// ============================================================================
// CREATE Tests - Constraints
// ============================================================================

#[tokio::test]
async fn test_create_key_duplicate_ref_fails() {
    let pool = create_test_pool().await.unwrap();

    let key_ref = format!("duplicate_key_{}", unique_test_id());

    // Create first key
    let input = CreateKeyInput {
        r#ref: key_ref.clone(),
        owner_type: OwnerType::System,
        owner: Some("system".to_string()),
        owner_identity: None,
        owner_pack: None,
        owner_pack_ref: None,
        owner_action: None,
        owner_action_ref: None,
        owner_sensor: None,
        owner_sensor_ref: None,
        name: key_ref.clone(),
        encrypted: false,
        encryption_key_hash: None,
        value: "value1".to_string(),
    };

    KeyRepository::create(&pool, input.clone()).await.unwrap();

    // Try to create duplicate
    let result = KeyRepository::create(&pool, input).await;
    assert!(result.is_err());
}

#[tokio::test]
async fn test_create_key_system_with_owner_fields_fails() {
    let pool = create_test_pool().await.unwrap();

    // Create an identity
    let identity = IdentityFixture::new_unique("testuser")
        .create(&pool)
        .await
        .unwrap();

    // Try to create system key with owner_identity set (should fail)
    let input = CreateKeyInput {
        r#ref: format!("invalid_key_{}", unique_test_id()),
        owner_type: OwnerType::System,
        owner: Some("system".to_string()),
        owner_identity: Some(identity.id), // This should cause failure
        owner_pack: None,
        owner_pack_ref: None,
        owner_action: None,
        owner_action_ref: None,
        owner_sensor: None,
        owner_sensor_ref: None,
        name: "invalid".to_string(),
        encrypted: false,
        encryption_key_hash: None,
        value: "value".to_string(),
    };

    let result = KeyRepository::create(&pool, input).await;
    assert!(result.is_err());
}

#[tokio::test]
async fn test_create_key_identity_without_owner_id_fails() {
    let pool = create_test_pool().await.unwrap();

    // Try to create identity key without owner_identity set
    let input = CreateKeyInput {
        r#ref: format!("invalid_key_{}", unique_test_id()),
        owner_type: OwnerType::Identity,
        owner: None,
        owner_identity: None, // Missing required field
        owner_pack: None,
        owner_pack_ref: None,
        owner_action: None,
        owner_action_ref: None,
        owner_sensor: None,
        owner_sensor_ref: None,
        name: "invalid".to_string(),
        encrypted: false,
        encryption_key_hash: None,
        value: "value".to_string(),
    };

    let result = KeyRepository::create(&pool, input).await;
    assert!(result.is_err());
}

#[tokio::test]
async fn test_create_key_multiple_owners_fails() {
    let pool = create_test_pool().await.unwrap();

    let identity = IdentityFixture::new_unique("testuser")
        .create(&pool)
        .await
        .unwrap();

    let pack = PackFixture::new_unique("testpack")
        .create(&pool)
        .await
        .unwrap();

    // Try to create key with both identity and pack owners (should fail)
    let input = CreateKeyInput {
        r#ref: format!("invalid_key_{}", unique_test_id()),
        owner_type: OwnerType::Identity,
        owner: None,
        owner_identity: Some(identity.id),
        owner_pack: Some(pack.id), // Can't have multiple owners
        owner_pack_ref: None,
        owner_action: None,
        owner_action_ref: None,
        owner_sensor: None,
        owner_sensor_ref: None,
        name: "invalid".to_string(),
        encrypted: false,
        encryption_key_hash: None,
        value: "value".to_string(),
    };

    let result = KeyRepository::create(&pool, input).await;
    assert!(result.is_err());
}

#[tokio::test]
async fn test_create_key_invalid_ref_format_fails() {
    let pool = create_test_pool().await.unwrap();

    // Try uppercase ref (should fail CHECK constraint)
    let input = CreateKeyInput {
        r#ref: "UPPERCASE_KEY".to_string(),
        owner_type: OwnerType::System,
        owner: Some("system".to_string()),
        owner_identity: None,
        owner_pack: None,
        owner_pack_ref: None,
        owner_action: None,
        owner_action_ref: None,
        owner_sensor: None,
        owner_sensor_ref: None,
        name: "uppercase".to_string(),
        encrypted: false,
        encryption_key_hash: None,
        value: "value".to_string(),
    };

    let result = KeyRepository::create(&pool, input).await;
    assert!(result.is_err());
}
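The constraint tests above all exercise one rule: a key row must name exactly one owner, and that owner must match its `owner_type`. As a standalone sketch of that rule (the `OwnerKind` enum and `validate_owner` function below are illustrative, not the crate's actual API):

```rust
// Hypothetical sketch of the ownership rule the constraint tests exercise.
// Not attune_common's API - just the validation logic in isolation.

#[derive(Debug, PartialEq)]
enum OwnerKind {
    System,
    Identity,
    Pack,
}

fn validate_owner(
    kind: &OwnerKind,
    owner_identity: Option<i64>,
    owner_pack: Option<i64>,
) -> Result<(), String> {
    match kind {
        // System keys must not reference any concrete owner row.
        OwnerKind::System if owner_identity.is_none() && owner_pack.is_none() => Ok(()),
        // Identity keys need owner_identity and nothing else.
        OwnerKind::Identity if owner_identity.is_some() && owner_pack.is_none() => Ok(()),
        // Pack keys need owner_pack and nothing else.
        OwnerKind::Pack if owner_pack.is_some() && owner_identity.is_none() => Ok(()),
        _ => Err("owner fields do not match owner_type".to_string()),
    }
}

fn main() {
    assert!(validate_owner(&OwnerKind::System, None, None).is_ok());
    assert!(validate_owner(&OwnerKind::System, Some(1), None).is_err());
    assert!(validate_owner(&OwnerKind::Identity, Some(1), None).is_ok());
    assert!(validate_owner(&OwnerKind::Identity, Some(1), Some(2)).is_err());
    println!("ok");
}
```

Each "fails" test above corresponds to one arm of this check being violated at the database layer.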

// ============================================================================
// READ Tests
// ============================================================================

#[tokio::test]
async fn test_find_by_id_exists() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("find_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let found = KeyRepository::find_by_id(&pool, key.id).await.unwrap();

    assert!(found.is_some());
    let found = found.unwrap();
    assert_eq!(found.id, key.id);
    assert_eq!(found.r#ref, key.r#ref);
    assert_eq!(found.value, key.value);
}

#[tokio::test]
async fn test_find_by_id_not_exists() {
    let pool = create_test_pool().await.unwrap();

    let result = KeyRepository::find_by_id(&pool, 99999).await.unwrap();
    assert!(result.is_none());
}

#[tokio::test]
async fn test_get_by_id_exists() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("get_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let found = KeyRepository::get_by_id(&pool, key.id).await.unwrap();

    assert_eq!(found.id, key.id);
    assert_eq!(found.r#ref, key.r#ref);
}

#[tokio::test]
async fn test_get_by_id_not_exists_fails() {
    let pool = create_test_pool().await.unwrap();

    let result = KeyRepository::get_by_id(&pool, 99999).await;
    assert!(result.is_err());
    assert!(matches!(result.unwrap_err(), Error::NotFound { .. }));
}

#[tokio::test]
async fn test_find_by_ref_exists() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("ref_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let found = KeyRepository::find_by_ref(&pool, &key.r#ref).await.unwrap();

    assert!(found.is_some());
    let found = found.unwrap();
    assert_eq!(found.id, key.id);
    assert_eq!(found.r#ref, key.r#ref);
}

#[tokio::test]
async fn test_find_by_ref_not_exists() {
    let pool = create_test_pool().await.unwrap();

    let result = KeyRepository::find_by_ref(&pool, "nonexistent_key")
        .await
        .unwrap();
    assert!(result.is_none());
}

#[tokio::test]
async fn test_list_all_keys() {
    let pool = create_test_pool().await.unwrap();

    // Create multiple keys
    let key1 = KeyFixture::new_system_unique("list_key_a", "value1")
        .create(&pool)
        .await
        .unwrap();

    let key2 = KeyFixture::new_system_unique("list_key_b", "value2")
        .create(&pool)
        .await
        .unwrap();

    let keys = KeyRepository::list(&pool).await.unwrap();

    // Should have at least our 2 keys (may have more from parallel tests)
    assert!(keys.len() >= 2);

    // Verify our keys are in the list
    assert!(keys.iter().any(|k| k.id == key1.id));
    assert!(keys.iter().any(|k| k.id == key2.id));
}

// ============================================================================
// UPDATE Tests
// ============================================================================

#[tokio::test]
async fn test_update_value() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("update_key", "original_value")
        .create(&pool)
        .await
        .unwrap();

    let original_updated = key.updated;

    // Small delay to ensure updated timestamp changes
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let input = UpdateKeyInput {
        value: Some("new_value".to_string()),
        ..Default::default()
    };

    let updated = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert_eq!(updated.value, "new_value");
    assert!(updated.updated > original_updated);
}

#[tokio::test]
async fn test_update_name() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("update_name_key", "value")
        .create(&pool)
        .await
        .unwrap();

    // Use a unique name to avoid conflicts with parallel tests
    let new_name = format!("new_name_{}", unique_test_id());
    let input = UpdateKeyInput {
        name: Some(new_name.clone()),
        ..Default::default()
    };

    let updated = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert_eq!(updated.name, new_name);
}

#[tokio::test]
async fn test_update_encrypted_status() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("encrypt_key", "plain_value")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(key.encrypted, false);

    let input = UpdateKeyInput {
        encrypted: Some(true),
        encryption_key_hash: Some("sha256:xyz789".to_string()),
        value: Some("encrypted_value".to_string()),
        ..Default::default()
    };

    let updated = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert_eq!(updated.encrypted, true);
    assert_eq!(
        updated.encryption_key_hash,
        Some("sha256:xyz789".to_string())
    );
    assert_eq!(updated.value, "encrypted_value");
}

#[tokio::test]
async fn test_update_multiple_fields() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("multi_update_key", "value")
        .create(&pool)
        .await
        .unwrap();

    // Use a unique name to avoid conflicts with parallel tests
    let new_name = format!("updated_name_{}", unique_test_id());
    let input = UpdateKeyInput {
        name: Some(new_name.clone()),
        value: Some("updated_value".to_string()),
        encrypted: Some(true),
        encryption_key_hash: Some("hash123".to_string()),
    };

    let updated = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert_eq!(updated.name, new_name);
    assert_eq!(updated.value, "updated_value");
    assert_eq!(updated.encrypted, true);
    assert_eq!(updated.encryption_key_hash, Some("hash123".to_string()));
}

#[tokio::test]
async fn test_update_no_changes() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("nochange_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let original_updated = key.updated;

    let input = UpdateKeyInput::default();

    let updated = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert_eq!(updated.id, key.id);
    assert_eq!(updated.name, key.name);
    assert_eq!(updated.value, key.value);
    // Updated timestamp should not change when no fields are updated
    assert_eq!(updated.updated, original_updated);
}

#[tokio::test]
async fn test_update_nonexistent_key_fails() {
    let pool = create_test_pool().await.unwrap();

    let input = UpdateKeyInput {
        value: Some("new_value".to_string()),
        ..Default::default()
    };

    let result = KeyRepository::update(&pool, 99999, input).await;
    assert!(result.is_err());
}

// ============================================================================
// DELETE Tests
// ============================================================================

#[tokio::test]
async fn test_delete_existing_key() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("delete_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let deleted = KeyRepository::delete(&pool, key.id).await.unwrap();
    assert!(deleted);

    // Verify key is gone
    let result = KeyRepository::find_by_id(&pool, key.id).await.unwrap();
    assert!(result.is_none());
}

#[tokio::test]
async fn test_delete_nonexistent_key() {
    let pool = create_test_pool().await.unwrap();

    let deleted = KeyRepository::delete(&pool, 99999).await.unwrap();
    assert!(!deleted);
}

#[tokio::test]
async fn test_delete_key_when_identity_deleted() {
    let pool = create_test_pool().await.unwrap();

    let identity = IdentityFixture::new_unique("deleteuser")
        .create(&pool)
        .await
        .unwrap();

    let key = KeyFixture::new_identity_unique(identity.id, "user_key", "value")
        .create(&pool)
        .await
        .unwrap();

    // Delete the identity - this will fail because key references it
    use attune_common::repositories::{identity::IdentityRepository, Delete as _};
    let delete_result = IdentityRepository::delete(&pool, identity.id).await;

    // Should fail due to foreign key constraint (no CASCADE on key table)
    assert!(delete_result.is_err());

    // Key should still exist
    let result = KeyRepository::find_by_id(&pool, key.id).await.unwrap();
    assert!(result.is_some());
}

#[tokio::test]
async fn test_delete_key_when_pack_deleted() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("deletepack")
        .create(&pool)
        .await
        .unwrap();

    let key = KeyFixture::new_pack_unique(pack.id, &pack.r#ref, "pack_key", "value")
        .create(&pool)
        .await
        .unwrap();

    // Delete the pack - this will fail because key references it
    use attune_common::repositories::{pack::PackRepository, Delete as _};
    let delete_result = PackRepository::delete(&pool, pack.id).await;

    // Should fail due to foreign key constraint (no CASCADE on key table)
    assert!(delete_result.is_err());

    // Key should still exist
    let result = KeyRepository::find_by_id(&pool, key.id).await.unwrap();
    assert!(result.is_some());
}

// ============================================================================
// Specialized Query Tests
// ============================================================================

#[tokio::test]
async fn test_find_by_owner_type_system() {
    let pool = create_test_pool().await.unwrap();

    let _key1 = KeyFixture::new_system_unique("sys_key1", "value1")
        .create(&pool)
        .await
        .unwrap();

    let _key2 = KeyFixture::new_system_unique("sys_key2", "value2")
        .create(&pool)
        .await
        .unwrap();

    let keys = KeyRepository::find_by_owner_type(&pool, OwnerType::System)
        .await
        .unwrap();

    // Should have at least our 2 system keys
    assert!(keys.len() >= 2);
    assert!(keys.iter().all(|k| k.owner_type == OwnerType::System));
}

#[tokio::test]
async fn test_find_by_owner_type_identity() {
    let pool = create_test_pool().await.unwrap();

    let identity1 = IdentityFixture::new_unique("user1")
        .create(&pool)
        .await
        .unwrap();

    let identity2 = IdentityFixture::new_unique("user2")
        .create(&pool)
        .await
        .unwrap();

    let key1 = KeyFixture::new_identity_unique(identity1.id, "key1", "value1")
        .create(&pool)
        .await
        .unwrap();

    let key2 = KeyFixture::new_identity_unique(identity2.id, "key2", "value2")
        .create(&pool)
        .await
        .unwrap();

    let keys = KeyRepository::find_by_owner_type(&pool, OwnerType::Identity)
        .await
        .unwrap();

    // Should contain our identity keys
    assert!(keys.iter().any(|k| k.id == key1.id));
    assert!(keys.iter().any(|k| k.id == key2.id));
    assert!(keys.iter().all(|k| k.owner_type == OwnerType::Identity));
}

#[tokio::test]
async fn test_find_by_owner_type_pack() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("ownerpack")
        .create(&pool)
        .await
        .unwrap();

    let key1 = KeyFixture::new_pack_unique(pack.id, &pack.r#ref, "pack_key1", "value1")
        .create(&pool)
        .await
        .unwrap();

    let key2 = KeyFixture::new_pack_unique(pack.id, &pack.r#ref, "pack_key2", "value2")
        .create(&pool)
        .await
        .unwrap();

    let keys = KeyRepository::find_by_owner_type(&pool, OwnerType::Pack)
        .await
        .unwrap();

    // Should contain our pack keys
    assert!(keys.iter().any(|k| k.id == key1.id));
    assert!(keys.iter().any(|k| k.id == key2.id));
    assert!(keys.iter().all(|k| k.owner_type == OwnerType::Pack));
}

// ============================================================================
// Timestamp Tests
// ============================================================================

#[tokio::test]
async fn test_created_timestamp_set_automatically() {
    let pool = create_test_pool().await.unwrap();

    let before = chrono::Utc::now();

    let key = KeyFixture::new_system_unique("timestamp_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let after = chrono::Utc::now();

    assert!(key.created >= before);
    assert!(key.created <= after);
    assert_eq!(key.created, key.updated); // Should be equal on creation
}

#[tokio::test]
async fn test_updated_timestamp_changes_on_update() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("update_time_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let original_updated = key.updated;

    // Small delay to ensure timestamp changes
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let input = UpdateKeyInput {
        value: Some("new_value".to_string()),
        ..Default::default()
    };

    let updated = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert!(updated.updated > original_updated);
    assert_eq!(updated.created, key.created); // Created should not change
}

#[tokio::test]
async fn test_updated_timestamp_unchanged_on_read() {
    let pool = create_test_pool().await.unwrap();

    let key = KeyFixture::new_system_unique("read_time_key", "value")
        .create(&pool)
        .await
        .unwrap();

    let original_updated = key.updated;

    // Small delay
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    // Read the key
    let found = KeyRepository::find_by_id(&pool, key.id)
        .await
        .unwrap()
        .unwrap();

    assert_eq!(found.updated, original_updated); // Should not change
}

// ============================================================================
// Encryption Tests
// ============================================================================

#[tokio::test]
async fn test_key_encrypted_flag() {
    let pool = create_test_pool().await.unwrap();

    let plain_key = KeyFixture::new_system_unique("plain_key", "plain_value")
        .create(&pool)
        .await
        .unwrap();

    let encrypted_key = KeyFixture::new_system_unique("encrypted_key", "cipher_text")
        .with_encrypted(true)
        .with_encryption_key_hash("sha256:abc")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(plain_key.encrypted, false);
    assert_eq!(plain_key.encryption_key_hash, None);

    assert_eq!(encrypted_key.encrypted, true);
    assert_eq!(
        encrypted_key.encryption_key_hash,
        Some("sha256:abc".to_string())
    );
}

#[tokio::test]
async fn test_update_encryption_status() {
    let pool = create_test_pool().await.unwrap();

    // Create plain key
    let key = KeyFixture::new_system_unique("to_encrypt", "plain_value")
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(key.encrypted, false);

    // Encrypt it
    let input = UpdateKeyInput {
        encrypted: Some(true),
        encryption_key_hash: Some("sha256:newkey".to_string()),
        value: Some("encrypted_value".to_string()),
        ..Default::default()
    };

    let encrypted = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert_eq!(encrypted.encrypted, true);
    assert_eq!(
        encrypted.encryption_key_hash,
        Some("sha256:newkey".to_string())
    );
    assert_eq!(encrypted.value, "encrypted_value");

    // Decrypt it
    let input = UpdateKeyInput {
        encrypted: Some(false),
        encryption_key_hash: None,
        value: Some("plain_value".to_string()),
        ..Default::default()
    };

    let decrypted = KeyRepository::update(&pool, key.id, input).await.unwrap();

    assert_eq!(decrypted.encrypted, false);
    assert_eq!(decrypted.value, "plain_value");
}
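The encryption tests above treat `encrypted` and `encryption_key_hash` as a pair that is set and cleared together. A minimal standalone sketch of that state transition (the `KeyState` struct and its methods are illustrative, not the crate's model):

```rust
// Illustrative sketch of the encrypted-flag/hash pairing exercised by the
// encryption tests. Not attune_common's API.

#[derive(Debug, PartialEq)]
struct KeyState {
    encrypted: bool,
    encryption_key_hash: Option<String>,
    value: String,
}

impl KeyState {
    // Mark the key encrypted, recording which key encrypted it.
    fn encrypt(self, ciphertext: &str, key_hash: &str) -> KeyState {
        KeyState {
            encrypted: true,
            encryption_key_hash: Some(key_hash.to_string()),
            value: ciphertext.to_string(),
        }
    }

    // Return to plaintext and clear the hash.
    fn decrypt(self, plaintext: &str) -> KeyState {
        KeyState {
            encrypted: false,
            encryption_key_hash: None,
            value: plaintext.to_string(),
        }
    }
}

fn main() {
    let key = KeyState {
        encrypted: false,
        encryption_key_hash: None,
        value: "plain".to_string(),
    };
    let enc = key.encrypt("cipher", "sha256:newkey");
    assert!(enc.encrypted && enc.encryption_key_hash.is_some());
    let dec = enc.decrypt("plain");
    assert!(!dec.encrypted && dec.encryption_key_hash.is_none());
    println!("ok");
}
```

Keeping the flag and the hash in one update mirrors what `test_update_encryption_status` verifies round-trip against the database.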

// ============================================================================
// Owner Validation Tests
// ============================================================================

#[tokio::test]
async fn test_multiple_keys_same_pack_different_names() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("multikey_pack")
        .create(&pool)
        .await
        .unwrap();

    let key1 = KeyFixture::new_pack_unique(pack.id, &pack.r#ref, "key1", "value1")
        .create(&pool)
        .await
        .unwrap();

    let key2 = KeyFixture::new_pack_unique(pack.id, &pack.r#ref, "key2", "value2")
        .create(&pool)
        .await
        .unwrap();

    assert_ne!(key1.id, key2.id);
    assert_eq!(key1.owner_pack, Some(pack.id));
    assert_eq!(key2.owner_pack, Some(pack.id));
    assert_ne!(key1.name, key2.name);
}

#[tokio::test]
async fn test_same_key_name_different_owners() {
    let pool = create_test_pool().await.unwrap();

    let pack1 = PackFixture::new_unique("pack1")
        .create(&pool)
        .await
        .unwrap();

    let pack2 = PackFixture::new_unique("pack2")
        .create(&pool)
        .await
        .unwrap();

    // Same base key name, different owners - should be allowed
    // Use same base name so fixture creates keys with same logical name
    let base_name = format!("api_key_{}", unique_test_id());

    let key1 = KeyFixture::new_pack(pack1.id, &pack1.r#ref, &base_name, "value1")
        .create(&pool)
        .await
        .unwrap();

    let key2 = KeyFixture::new_pack(pack2.id, &pack2.r#ref, &base_name, "value2")
        .create(&pool)
        .await
        .unwrap();

    assert_ne!(key1.id, key2.id);
    assert_eq!(key1.name, key2.name); // Same name
    assert_ne!(key1.owner_pack, key2.owner_pack); // Different owners
}
569 crates/common/tests/migration_tests.rs Normal file
@@ -0,0 +1,569 @@
|
||||
//! Integration tests for database migrations
//!
//! These tests verify that migrations run successfully, the schema is correct,
//! and basic database operations work as expected.

mod helpers;

use helpers::*;
use sqlx::Row;

#[tokio::test]
async fn test_migrations_applied() {
    let pool = create_test_pool().await.unwrap();

    // Verify migrations were applied by checking that core tables exist
    // We check for multiple tables to ensure the schema is properly set up
    let tables = vec!["pack", "action", "trigger", "rule", "execution"];

    for table_name in tables {
        let row = sqlx::query(&format!(
            r#"
            SELECT EXISTS (
                SELECT FROM information_schema.tables
                WHERE table_schema = current_schema()
                AND table_name = '{}'
            ) as exists
            "#,
            table_name
        ))
        .fetch_one(&pool)
        .await
        .unwrap();

        let exists: bool = row.get("exists");
        assert!(
            exists,
            "Table '{}' does not exist - migrations may not have run",
            table_name
        );
    }
}

#[tokio::test]
async fn test_pack_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'pack'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "pack table does not exist");
}

#[tokio::test]
async fn test_action_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'action'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "action table does not exist");
}

#[tokio::test]
async fn test_trigger_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'trigger'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "trigger table does not exist");
}

#[tokio::test]
async fn test_sensor_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'sensor'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "sensor table does not exist");
}

#[tokio::test]
async fn test_rule_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'rule'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "rule table does not exist");
}

#[tokio::test]
async fn test_execution_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'execution'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "execution table does not exist");
}

#[tokio::test]
async fn test_event_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'event'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "event table does not exist");
}

#[tokio::test]
async fn test_enforcement_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'enforcement'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "enforcement table does not exist");
}

#[tokio::test]
async fn test_inquiry_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'inquiry'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "inquiry table does not exist");
}

#[tokio::test]
async fn test_identity_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'identity'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "identity table does not exist");
}

#[tokio::test]
async fn test_key_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'key'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "key table does not exist");
}

#[tokio::test]
async fn test_notification_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'notification'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "notification table does not exist");
}

#[tokio::test]
async fn test_runtime_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'runtime'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "runtime table does not exist");
}

#[tokio::test]
async fn test_worker_table_exists() {
    let pool = create_test_pool().await.unwrap();

    let row = sqlx::query(
        r#"
        SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = current_schema()
            AND table_name = 'worker'
        ) as exists
        "#,
    )
    .fetch_one(&pool)
    .await
    .unwrap();

    let exists: bool = row.get("exists");
    assert!(exists, "worker table does not exist");
}

#[tokio::test]
async fn test_pack_columns() {
    let pool = create_test_pool().await.unwrap();

    // Verify all expected columns exist in pack table
    let columns: Vec<String> = sqlx::query(
        r#"
        SELECT column_name
        FROM information_schema.columns
        WHERE table_schema = current_schema() AND table_name = 'pack'
        ORDER BY column_name
        "#,
    )
    .fetch_all(&pool)
    .await
    .unwrap()
    .iter()
    .map(|row| row.get("column_name"))
    .collect();

    let expected_columns = vec![
        "conf_schema",
        "config",
        "created",
        "description",
        "id",
        "is_standard",
        "label",
        "meta",
        "ref",
        "runtime_deps",
        "tags",
        "updated",
        "version",
    ];

    for col in &expected_columns {
        assert!(
            columns.contains(&col.to_string()),
            "Column '{}' not found in pack table",
            col
        );
    }
}

#[tokio::test]
async fn test_action_columns() {
    let pool = create_test_pool().await.unwrap();

    // Verify all expected columns exist in action table
    let columns: Vec<String> = sqlx::query(
        r#"
        SELECT column_name
        FROM information_schema.columns
        WHERE table_schema = current_schema() AND table_name = 'action'
        ORDER BY column_name
        "#,
    )
    .fetch_all(&pool)
    .await
    .unwrap()
    .iter()
    .map(|row| row.get("column_name"))
    .collect();

    let expected_columns = vec![
        "created",
        "description",
        "entrypoint",
        "id",
        "label",
        "out_schema",
        "pack",
        "pack_ref",
        "param_schema",
        "ref",
        "runtime",
        "updated",
    ];

    for col in &expected_columns {
        assert!(
            columns.contains(&col.to_string()),
            "Column '{}' not found in action table",
            col
        );
    }
}

#[tokio::test]
async fn test_timestamps_auto_populated() {
    let pool = create_test_pool().await.unwrap();
    clean_database(&pool).await.unwrap();

    // Create a pack and verify timestamps are set
    let pack = PackFixture::new("timestamp_pack")
        .create(&pool)
        .await
        .unwrap();

    // Timestamps should be set to current time
    let now = chrono::Utc::now();
    assert!(pack.created <= now);
    assert!(pack.updated <= now);
    assert!(pack.created <= pack.updated);
}

#[tokio::test]
async fn test_json_column_storage() {
    let pool = create_test_pool().await.unwrap();
    clean_database(&pool).await.unwrap();

    // Create pack with JSON data
    let pack = PackFixture::new("json_pack")
        .with_description("Pack with JSON data")
        .create(&pool)
        .await
        .unwrap();

    // Verify JSON data is stored and retrieved correctly
    assert!(pack.conf_schema.is_object());
    assert!(pack.config.is_object());
    assert!(pack.meta.is_object());
}

#[tokio::test]
async fn test_array_column_storage() {
    let pool = create_test_pool().await.unwrap();
    clean_database(&pool).await.unwrap();

    // Create pack with arrays
    let pack = PackFixture::new("array_pack")
        .with_tags(vec![
            "test".to_string(),
            "example".to_string(),
            "demo".to_string(),
        ])
        .create(&pool)
        .await
        .unwrap();

    // Verify arrays are stored correctly
    assert_eq!(pack.tags.len(), 3);
    assert!(pack.tags.contains(&"test".to_string()));
    assert!(pack.tags.contains(&"example".to_string()));
    assert!(pack.tags.contains(&"demo".to_string()));
}

#[tokio::test]
async fn test_unique_constraints() {
    let pool = create_test_pool().await.unwrap();
    clean_database(&pool).await.unwrap();

    // Create a pack
    PackFixture::new("unique_pack").create(&pool).await.unwrap();

    // Try to create another pack with the same ref - should fail
    let result = PackFixture::new("unique_pack").create(&pool).await;

    assert!(result.is_err(), "Should not allow duplicate pack refs");
}

#[tokio::test]
async fn test_foreign_key_constraints() {
    let pool = create_test_pool().await.unwrap();
    clean_database(&pool).await.unwrap();

    // Try to create an action with non-existent pack_id - should fail
    let result = sqlx::query(
        r#"
        INSERT INTO attune.action (ref, pack, pack_ref, label, description, entrypoint)
        VALUES ($1, $2, $3, $4, $5, $6)
        "#,
    )
    .bind("test_pack.test_action")
    .bind(99999i64) // Non-existent pack ID
    .bind("test_pack")
    .bind("Test Action")
    .bind("Test action description")
    .bind("main.py")
    .execute(&pool)
    .await;

    assert!(
        result.is_err(),
        "Should not allow action with non-existent pack"
    );
}

#[tokio::test]
async fn test_enum_types_exist() {
    let pool = create_test_pool().await.unwrap();

    // Check that custom enum types are created
    let enums: Vec<String> = sqlx::query(
        r#"
        SELECT typname
        FROM pg_type
        WHERE typnamespace = (SELECT oid FROM pg_namespace WHERE nspname = current_schema())
        AND typtype = 'e'
        ORDER BY typname
        "#,
    )
    .fetch_all(&pool)
    .await
    .unwrap()
    .iter()
    .map(|row| row.get("typname"))
    .collect();

    let expected_enums = vec![
        "artifact_retention_enum",
        "artifact_type_enum",
        "enforcement_condition_enum",
        "enforcement_status_enum",
        "execution_status_enum",
        "inquiry_status_enum",
        "notification_status_enum",
        "owner_type_enum",
        "policy_method_enum",
        "runtime_type_enum",
        "worker_status_enum",
        "worker_type_enum",
    ];

    for enum_type in &expected_enums {
        assert!(
            enums.contains(&enum_type.to_string()),
            "Enum type '{}' not found",
            enum_type
        );
    }
}
1246
crates/common/tests/notification_repository_tests.rs
Normal file
File diff suppressed because it is too large. Load Diff
497
crates/common/tests/pack_repository_tests.rs
Normal file
@@ -0,0 +1,497 @@
//! Integration tests for Pack repository
//!
//! These tests verify all CRUD operations, transactions, error handling,
//! and constraint validation for the Pack repository.

mod helpers;

use attune_common::repositories::pack::{self, PackRepository};
use attune_common::repositories::{Create, Delete, FindById, FindByRef, List, Pagination, Update};
use attune_common::Error;
use helpers::*;
use serde_json::json;

#[tokio::test]
async fn test_create_pack() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .with_label("Test Pack")
        .with_version("1.0.0")
        .with_description("A test pack")
        .create(&pool)
        .await
        .unwrap();

    assert!(pack.r#ref.starts_with("test_pack_"));
    assert_eq!(pack.version, "1.0.0");
    assert_eq!(pack.label, "Test Pack");
    assert_eq!(pack.description, Some("A test pack".to_string()));
    assert!(pack.created.timestamp() > 0);
    assert!(pack.updated.timestamp() > 0);
}

#[tokio::test]
async fn test_create_pack_duplicate_ref() {
    let pool = create_test_pool().await.unwrap();

    // Create first pack - use a specific unique ref for this test
    let unique_ref = helpers::unique_pack_ref("duplicate_test");
    PackFixture::new(&unique_ref).create(&pool).await.unwrap();

    // Try to create pack with same ref (should fail due to unique constraint)
    let result = PackFixture::new(&unique_ref).create(&pool).await;

    assert!(result.is_err());
    let error = result.unwrap_err();
    assert!(matches!(error, Error::AlreadyExists { .. }));
}

#[tokio::test]
async fn test_create_pack_with_tags() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("tagged_pack")
        .with_tags(vec!["test".to_string(), "automation".to_string()])
        .create(&pool)
        .await
        .unwrap();

    assert_eq!(pack.tags.len(), 2);
    assert!(pack.tags.contains(&"test".to_string()));
    assert!(pack.tags.contains(&"automation".to_string()));
}

#[tokio::test]
async fn test_create_pack_standard() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("standard_pack")
        .with_standard(true)
        .create(&pool)
        .await
        .unwrap();

    assert!(pack.is_standard);
}

#[tokio::test]
async fn test_find_pack_by_id() {
    let pool = create_test_pool().await.unwrap();

    let created = PackFixture::new_unique("find_pack")
        .create(&pool)
        .await
        .unwrap();

    let found = PackRepository::find_by_id(&pool, created.id)
        .await
        .unwrap()
        .expect("Pack not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.r#ref, created.r#ref);
    assert_eq!(found.label, created.label);
}

#[tokio::test]
async fn test_find_pack_by_id_not_found() {
    let pool = create_test_pool().await.unwrap();

    let result = PackRepository::find_by_id(&pool, 999999).await.unwrap();

    assert!(result.is_none());
}

#[tokio::test]
async fn test_find_pack_by_ref() {
    let pool = create_test_pool().await.unwrap();

    let created = PackFixture::new_unique("ref_pack")
        .create(&pool)
        .await
        .unwrap();

    let found = PackRepository::find_by_ref(&pool, &created.r#ref)
        .await
        .unwrap()
        .expect("Pack not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.r#ref, created.r#ref);
}

#[tokio::test]
async fn test_find_pack_by_ref_not_found() {
    let pool = create_test_pool().await.unwrap();

    let result = PackRepository::find_by_ref(&pool, "nonexistent.pack")
        .await
        .unwrap();

    assert!(result.is_none());
}

#[tokio::test]
async fn test_list_packs() {
    let pool = create_test_pool().await.unwrap();

    // Create multiple packs
    let pack1 = PackFixture::new_unique("pack1")
        .create(&pool)
        .await
        .unwrap();
    let pack2 = PackFixture::new_unique("pack2")
        .create(&pool)
        .await
        .unwrap();
    let pack3 = PackFixture::new_unique("pack3")
        .create(&pool)
        .await
        .unwrap();

    let packs = PackRepository::list(&pool).await.unwrap();

    // Should contain at least our created packs
    assert!(packs.len() >= 3);

    // Verify our packs are in the list
    let pack_refs: Vec<String> = packs.iter().map(|p| p.r#ref.clone()).collect();
    assert!(pack_refs.contains(&pack1.r#ref));
    assert!(pack_refs.contains(&pack2.r#ref));
    assert!(pack_refs.contains(&pack3.r#ref));
}

#[tokio::test]
async fn test_list_packs_with_pagination() {
    let pool = create_test_pool().await.unwrap();

    // Create test packs
    for i in 1..=5 {
        PackFixture::new_unique(&format!("pack{}", i))
            .create(&pool)
            .await
            .unwrap();
    }

    // Test that pagination works by getting pages
    let page1 = PackRepository::list_paginated(&pool, Pagination::new(2, 0))
        .await
        .unwrap();
    // First page should have 2 items (or fewer if there are fewer total)
    assert!(page1.len() <= 2);

    // Test with different offset
    let page2 = PackRepository::list_paginated(&pool, Pagination::new(2, 2))
        .await
        .unwrap();
    // Second page should have items (or be empty if not enough total)
    assert!(page2.len() <= 2);
}

#[tokio::test]
async fn test_update_pack() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("update_pack")
        .with_label("Original Label")
        .with_version("1.0.0")
        .create(&pool)
        .await
        .unwrap();

    let update_input = pack::UpdatePackInput {
        label: Some("Updated Label".to_string()),
        version: Some("2.0.0".to_string()),
        description: Some("Updated description".to_string()),
        ..Default::default()
    };

    let updated = PackRepository::update(&pool, pack.id, update_input)
        .await
        .unwrap();

    assert_eq!(updated.id, pack.id);
    assert_eq!(updated.label, "Updated Label");
    assert_eq!(updated.version, "2.0.0");
    assert_eq!(updated.description, Some("Updated description".to_string()));
    assert_eq!(updated.r#ref, pack.r#ref); // ref should not change
}

#[tokio::test]
async fn test_update_pack_partial() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("partial_pack")
        .with_label("Original Label")
        .with_version("1.0.0")
        .with_description("Original description")
        .create(&pool)
        .await
        .unwrap();

    // Update only the label
    let update_input = pack::UpdatePackInput {
        label: Some("New Label".to_string()),
        ..Default::default()
    };

    let updated = PackRepository::update(&pool, pack.id, update_input)
        .await
        .unwrap();

    assert_eq!(updated.label, "New Label");
    assert_eq!(updated.version, "1.0.0"); // version unchanged
    assert_eq!(updated.description, pack.description); // description unchanged
}

#[tokio::test]
async fn test_update_pack_not_found() {
    let pool = create_test_pool().await.unwrap();

    let update_input = pack::UpdatePackInput {
        label: Some("Updated".to_string()),
        ..Default::default()
    };

    let result = PackRepository::update(&pool, 999999, update_input).await;

    assert!(result.is_err());
    assert!(matches!(result.unwrap_err(), Error::NotFound { .. }));
}

#[tokio::test]
async fn test_update_pack_tags() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("tags_pack")
        .with_tags(vec!["old".to_string()])
        .create(&pool)
        .await
        .unwrap();

    let update_input = pack::UpdatePackInput {
        tags: Some(vec!["new".to_string(), "updated".to_string()]),
        ..Default::default()
    };

    let updated = PackRepository::update(&pool, pack.id, update_input)
        .await
        .unwrap();

    assert_eq!(updated.tags.len(), 2);
    assert!(updated.tags.contains(&"new".to_string()));
    assert!(updated.tags.contains(&"updated".to_string()));
    assert!(!updated.tags.contains(&"old".to_string()));
}

#[tokio::test]
async fn test_delete_pack() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("delete_pack")
        .create(&pool)
        .await
        .unwrap();

    // Verify pack exists
    let found = PackRepository::find_by_id(&pool, pack.id).await.unwrap();
    assert!(found.is_some());

    // Delete the pack
    PackRepository::delete(&pool, pack.id).await.unwrap();

    // Verify pack is gone
    let not_found = PackRepository::find_by_id(&pool, pack.id).await.unwrap();
    assert!(not_found.is_none());
}

#[tokio::test]
async fn test_delete_pack_not_found() {
    let pool = create_test_pool().await.unwrap();

    let deleted = PackRepository::delete(&pool, 999999).await.unwrap();

    assert!(!deleted, "Should return false when pack doesn't exist");
}

// TODO: Re-enable once ActionFixture is fixed
// #[tokio::test]
// async fn test_delete_pack_cascades_to_actions() {
//     let pool = create_test_pool().await.unwrap();
//
//     // Create pack with an action
//     let pack = PackFixture::new_unique("cascade_pack")
//         .create(&pool)
//         .await
//         .unwrap();
//
//     let action = ActionFixture::new(pack.id, "cascade_action")
//         .create(&pool)
//         .await
//         .unwrap();
//
//     // Verify action exists
//     let found_action = ActionRepository::find_by_id(&pool, action.id)
//         .await
//         .unwrap();
//     assert!(found_action.is_some());
//
//     // Delete pack
//     PackRepository::delete(&pool, pack.id).await.unwrap();
//
//     // Verify action is also deleted (cascade)
//     let action_after = ActionRepository::find_by_id(&pool, action.id)
//         .await
//         .unwrap();
//     assert!(action_after.is_none());
// }

#[tokio::test]
async fn test_count_packs() {
    let pool = create_test_pool().await.unwrap();

    // Get initial count
    let count_before = PackRepository::count(&pool).await.unwrap();

    // Create some packs
    PackFixture::new_unique("pack1")
        .create(&pool)
        .await
        .unwrap();
    PackFixture::new_unique("pack2")
        .create(&pool)
        .await
        .unwrap();
    PackFixture::new_unique("pack3")
        .create(&pool)
        .await
        .unwrap();

    let count_after = PackRepository::count(&pool).await.unwrap();
    // Should have at least 3 more packs (may have more from parallel tests)
    assert!(count_after >= count_before + 3);
}

#[tokio::test]
async fn test_pack_transaction_commit() {
    let pool = create_test_pool().await.unwrap();

    // Begin transaction
    let mut tx = pool.begin().await.unwrap();

    // Create pack in transaction with unique ref
    let unique_ref = helpers::unique_pack_ref("tx_pack");
    let input = pack::CreatePackInput {
        r#ref: unique_ref.clone(),
        label: "Transaction Pack".to_string(),
        description: None,
        version: "1.0.0".to_string(),
        conf_schema: json!({}),
        config: json!({}),
        meta: json!({}),
        tags: vec![],
        runtime_deps: vec![],
        is_standard: false,
    };

    let pack = PackRepository::create(&mut *tx, input).await.unwrap();

    // Commit transaction
    tx.commit().await.unwrap();

    // Verify pack exists after commit
    let found = PackRepository::find_by_id(&pool, pack.id)
        .await
        .unwrap()
        .expect("Pack should exist after commit");

    assert_eq!(found.r#ref, unique_ref);
}

#[tokio::test]
async fn test_pack_transaction_rollback() {
    let pool = create_test_pool().await.unwrap();

    // Begin transaction
    let mut tx = pool.begin().await.unwrap();

    // Create pack in transaction with unique ref
    let input = pack::CreatePackInput {
        r#ref: helpers::unique_pack_ref("rollback_pack"),
        label: "Rollback Pack".to_string(),
        description: None,
        version: "1.0.0".to_string(),
        conf_schema: json!({}),
        config: json!({}),
        meta: json!({}),
        tags: vec![],
        runtime_deps: vec![],
        is_standard: false,
    };

    let pack = PackRepository::create(&mut *tx, input).await.unwrap();
    let pack_id = pack.id;

    // Rollback transaction
    tx.rollback().await.unwrap();

    // Verify pack does NOT exist after rollback
    let not_found = PackRepository::find_by_id(&pool, pack_id).await.unwrap();
    assert!(not_found.is_none());
}

#[tokio::test]
async fn test_pack_invalid_ref_format() {
    let pool = create_test_pool().await.unwrap();

    let input = pack::CreatePackInput {
        r#ref: "invalid pack!@#".to_string(), // Contains invalid characters
        label: "Invalid Pack".to_string(),
        description: None,
        version: "1.0.0".to_string(),
        conf_schema: json!({}),
        config: json!({}),
        meta: json!({}),
        tags: vec![],
        runtime_deps: vec![],
        is_standard: false,
    };

    let result = PackRepository::create(&pool, input).await;

    assert!(result.is_err());
    assert!(matches!(result.unwrap_err(), Error::Validation { .. }));
}

#[tokio::test]
async fn test_pack_valid_ref_formats() {
    let pool = create_test_pool().await.unwrap();

    // Valid ref formats - each gets unique suffix
    let valid_base_refs = vec![
        "simple",
        "with_underscores",
        "with-hyphens",
        "mixed_all-together-123",
    ];

    for base_ref in valid_base_refs {
        let unique_ref = helpers::unique_pack_ref(base_ref);
        let input = pack::CreatePackInput {
            r#ref: unique_ref.clone(),
            label: format!("Pack {}", base_ref),
            description: None,
            version: "1.0.0".to_string(),
            conf_schema: json!({}),
            config: json!({}),
            meta: json!({}),
            tags: vec![],
            runtime_deps: vec![],
            is_standard: false,
        };

        let result = PackRepository::create(&pool, input).await;
        assert!(result.is_ok(), "Ref '{}' should be valid", unique_ref);
    }
}
935
crates/common/tests/permission_repository_tests.rs
Normal file
@@ -0,0 +1,935 @@
//! Integration tests for Permission repositories (PermissionSet and PermissionAssignment)

use attune_common::{
    models::identity::*,
    repositories::{
        identity::{
            CreateIdentityInput, CreatePermissionAssignmentInput, CreatePermissionSetInput,
            IdentityRepository, PermissionAssignmentRepository, PermissionSetRepository,
            UpdatePermissionSetInput,
        },
        pack::{CreatePackInput, PackRepository},
        Create, Delete, FindById, List, Update,
    },
};
use serde_json::json;
use sqlx::PgPool;
use std::sync::atomic::{AtomicU64, Ordering};

mod helpers;
use helpers::create_test_pool;

static PERMISSION_COUNTER: AtomicU64 = AtomicU64::new(0);

/// Test fixture for creating unique permission sets
struct PermissionSetFixture {
    pool: PgPool,
    id_suffix: String,
    internal_counter: std::sync::Arc<std::sync::atomic::AtomicU64>,
}

impl PermissionSetFixture {
    fn new(pool: PgPool) -> Self {
        let counter = PERMISSION_COUNTER.fetch_add(1, Ordering::SeqCst);
        let timestamp = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_nanos();
        // Hash the thread ID to get a unique number
        let thread_id = std::thread::current().id();
        let thread_hash = format!("{:?}", thread_id)
            .chars()
            .filter(|c| c.is_numeric())
            .collect::<String>()
            .parse::<u64>()
            .unwrap_or(0);
        // Add random component for absolute uniqueness
        use std::collections::hash_map::RandomState;
        use std::hash::{BuildHasher, Hash, Hasher};
        let random_state = RandomState::new();
        let mut hasher = random_state.build_hasher();
        timestamp.hash(&mut hasher);
        counter.hash(&mut hasher);
        thread_hash.hash(&mut hasher);
        let random_hash = hasher.finish();
        // Create a unique lowercase alphanumeric suffix combining all sources of uniqueness
        let id_suffix = format!("{:x}", random_hash);
        Self {
            pool,
            id_suffix,
            internal_counter: std::sync::Arc::new(std::sync::atomic::AtomicU64::new(0)),
        }
    }

    fn unique_ref(&self, base: &str) -> String {
        let seq = self.internal_counter.fetch_add(1, Ordering::SeqCst);
        format!("test.{}_{}_{}", base, self.id_suffix, seq)
    }

    async fn create_pack(&self) -> i64 {
        let seq = self.internal_counter.fetch_add(1, Ordering::SeqCst);
        let pack_ref = format!("testpack_{}_{}", self.id_suffix, seq);
        let input = CreatePackInput {
            r#ref: pack_ref,
            version: "1.0.0".to_string(),
            label: "Test Pack".to_string(),
            description: Some("Test pack for permissions".to_string()),
            tags: vec![],
            conf_schema: json!({}),
            config: json!({}),
            meta: json!({}),
            runtime_deps: vec![],
|
||||
is_standard: false,
|
||||
};
|
||||
PackRepository::create(&self.pool, input)
|
||||
.await
|
||||
.expect("Failed to create pack")
|
||||
.id
|
||||
}
|
||||
|
||||
async fn create_identity(&self) -> i64 {
|
||||
let seq = self.internal_counter.fetch_add(1, Ordering::SeqCst);
|
||||
let login = format!("testuser_{}_{}", self.id_suffix, seq);
|
||||
let input = CreateIdentityInput {
|
||||
login,
|
||||
display_name: Some("Test User".to_string()),
|
||||
attributes: json!({}),
|
||||
password_hash: None,
|
||||
};
|
||||
IdentityRepository::create(&self.pool, input)
|
||||
.await
|
||||
.expect("Failed to create identity")
|
||||
.id
|
||||
}
|
||||
|
||||
async fn create_permission_set(
|
||||
&self,
|
||||
ref_name: &str,
|
||||
pack_id: Option<i64>,
|
||||
pack_ref: Option<String>,
|
||||
grants: serde_json::Value,
|
||||
) -> PermissionSet {
|
||||
let input = CreatePermissionSetInput {
|
||||
r#ref: ref_name.to_string(),
|
||||
pack: pack_id,
|
||||
pack_ref,
|
||||
label: Some("Test Permission Set".to_string()),
|
||||
description: Some("Test description".to_string()),
|
||||
grants,
|
||||
};
|
||||
|
||||
PermissionSetRepository::create(&self.pool, input)
|
||||
.await
|
||||
.expect("Failed to create permission set")
|
||||
}
|
||||
|
||||
async fn create_default(&self) -> PermissionSet {
|
||||
let ref_name = self.unique_ref("permset");
|
||||
self.create_permission_set(&ref_name, None, None, json!([]))
|
||||
.await
|
||||
}
|
||||
|
||||
async fn create_with_pack(&self) -> (i64, PermissionSet) {
|
||||
let pack_id = self.create_pack().await;
|
||||
let ref_name = self.unique_ref("permset");
|
||||
// Get the pack_ref from the last created pack - extract from pack
|
||||
let pack = PackRepository::find_by_id(&self.pool, pack_id)
|
||||
.await
|
||||
.expect("Failed to find pack")
|
||||
.expect("Pack not found");
|
||||
let pack_ref = pack.r#ref;
|
||||
let permset = self
|
||||
.create_permission_set(&ref_name, Some(pack_id), Some(pack_ref), json!([]))
|
||||
.await;
|
||||
(pack_id, permset)
|
||||
}
|
||||
|
||||
async fn create_with_grants(&self, grants: serde_json::Value) -> PermissionSet {
|
||||
let ref_name = self.unique_ref("permset");
|
||||
self.create_permission_set(&ref_name, None, None, grants)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn create_assignment(&self, identity_id: i64, permset_id: i64) -> PermissionAssignment {
|
||||
let input = CreatePermissionAssignmentInput {
|
||||
identity: identity_id,
|
||||
permset: permset_id,
|
||||
};
|
||||
PermissionAssignmentRepository::create(&self.pool, input)
|
||||
.await
|
||||
.expect("Failed to create permission assignment")
|
||||
}
|
||||
}
|
||||
|
||||
// ============================================================================
// PermissionSet Repository Tests
// ============================================================================

#[tokio::test]
async fn test_create_permission_set_minimal() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let ref_name = fixture.unique_ref("minimal");
    let input = CreatePermissionSetInput {
        r#ref: ref_name.clone(),
        pack: None,
        pack_ref: None,
        label: Some("Minimal Permission Set".to_string()),
        description: None,
        grants: json!([]),
    };

    let permset = PermissionSetRepository::create(&pool, input)
        .await
        .expect("Failed to create permission set");

    assert!(permset.id > 0);
    assert_eq!(permset.r#ref, ref_name);
    assert_eq!(permset.label, Some("Minimal Permission Set".to_string()));
    assert!(permset.description.is_none());
    assert_eq!(permset.grants, json!([]));
    assert!(permset.pack.is_none());
    assert!(permset.pack_ref.is_none());
}

#[tokio::test]
async fn test_create_permission_set_with_pack() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let pack_id = fixture.create_pack().await;
    let ref_name = fixture.unique_ref("with_pack");
    let pack_ref = format!("testpack_{}", fixture.id_suffix);

    let input = CreatePermissionSetInput {
        r#ref: ref_name.clone(),
        pack: Some(pack_id),
        pack_ref: Some(pack_ref.clone()),
        label: Some("Pack Permission Set".to_string()),
        description: Some("Permission set from pack".to_string()),
        grants: json!([
            {"resource": "actions", "permission": "read"},
            {"resource": "actions", "permission": "execute"}
        ]),
    };

    let permset = PermissionSetRepository::create(&pool, input)
        .await
        .expect("Failed to create permission set");

    assert_eq!(permset.pack, Some(pack_id));
    assert_eq!(permset.pack_ref, Some(pack_ref));
    assert!(permset.grants.is_array());
}

#[tokio::test]
async fn test_create_permission_set_with_complex_grants() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let grants = json!([
        {
            "resource": "executions",
            "permissions": ["read", "write", "delete"],
            "filters": {"pack": "core"}
        },
        {
            "resource": "actions",
            "permissions": ["execute"],
            "filters": {"tags": ["safe"]}
        }
    ]);

    let permset = fixture.create_with_grants(grants.clone()).await;

    assert_eq!(permset.grants, grants);
}

#[tokio::test]
async fn test_permission_set_ref_format_validation() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    // Valid format: pack.name
    let valid_ref = fixture.unique_ref("valid");
    let input = CreatePermissionSetInput {
        r#ref: valid_ref,
        pack: None,
        pack_ref: None,
        label: None,
        description: None,
        grants: json!([]),
    };
    let result = PermissionSetRepository::create(&pool, input).await;
    assert!(result.is_ok());

    // Invalid format: no dot
    let invalid_ref = format!("nodot_{}", fixture.id_suffix);
    let input = CreatePermissionSetInput {
        r#ref: invalid_ref,
        pack: None,
        pack_ref: None,
        label: None,
        description: None,
        grants: json!([]),
    };
    let result = PermissionSetRepository::create(&pool, input).await;
    assert!(result.is_err());
}

#[tokio::test]
async fn test_permission_set_ref_lowercase() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    // Create with uppercase - should fail due to CHECK constraint
    let upper_ref = format!("Test.UPPERCASE_{}", fixture.id_suffix);
    let input = CreatePermissionSetInput {
        r#ref: upper_ref,
        pack: None,
        pack_ref: None,
        label: None,
        description: None,
        grants: json!([]),
    };
    let result = PermissionSetRepository::create(&pool, input).await;
    assert!(result.is_err());
}

#[tokio::test]
async fn test_permission_set_duplicate_ref() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let ref_name = fixture.unique_ref("duplicate");
    let input = CreatePermissionSetInput {
        r#ref: ref_name.clone(),
        pack: None,
        pack_ref: None,
        label: None,
        description: None,
        grants: json!([]),
    };

    // First create should succeed
    let result1 = PermissionSetRepository::create(&pool, input.clone()).await;
    assert!(result1.is_ok());

    // Second create with same ref should fail
    let result2 = PermissionSetRepository::create(&pool, input).await;
    assert!(result2.is_err());
}

#[tokio::test]
async fn test_find_permission_set_by_id() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let created = fixture.create_default().await;

    let found = PermissionSetRepository::find_by_id(&pool, created.id)
        .await
        .expect("Failed to find permission set")
        .expect("Permission set not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.r#ref, created.r#ref);
    assert_eq!(found.label, created.label);
}

#[tokio::test]
async fn test_find_permission_set_by_id_not_found() {
    let pool = create_test_pool().await.expect("Failed to create pool");

    let result = PermissionSetRepository::find_by_id(&pool, 999_999_999)
        .await
        .expect("Query should succeed");

    assert!(result.is_none());
}

#[tokio::test]
async fn test_list_permission_sets() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let p1 = fixture.create_default().await;
    let p2 = fixture.create_default().await;
    let p3 = fixture.create_default().await;

    let permsets = PermissionSetRepository::list(&pool)
        .await
        .expect("Failed to list permission sets");

    let ids: Vec<i64> = permsets.iter().map(|p| p.id).collect();
    assert!(ids.contains(&p1.id));
    assert!(ids.contains(&p2.id));
    assert!(ids.contains(&p3.id));
}

#[tokio::test]
async fn test_update_permission_set_label() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let created = fixture.create_default().await;

    let update_input = UpdatePermissionSetInput {
        label: Some("Updated Label".to_string()),
        description: None,
        grants: None,
    };

    let updated = PermissionSetRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update permission set");

    assert_eq!(updated.label, Some("Updated Label".to_string()));
    assert_eq!(updated.description, created.description);
}

#[tokio::test]
async fn test_update_permission_set_grants() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let created = fixture.create_with_grants(json!([])).await;

    let new_grants = json!([
        {"resource": "packs", "permission": "read"},
        {"resource": "actions", "permission": "execute"}
    ]);

    let update_input = UpdatePermissionSetInput {
        label: None,
        description: None,
        grants: Some(new_grants.clone()),
    };

    let updated = PermissionSetRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update permission set");

    assert_eq!(updated.grants, new_grants);
}

#[tokio::test]
async fn test_update_permission_set_all_fields() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let created = fixture.create_default().await;

    let new_grants = json!([{"resource": "all", "permission": "admin"}]);
    let update_input = UpdatePermissionSetInput {
        label: Some("New Label".to_string()),
        description: Some("New Description".to_string()),
        grants: Some(new_grants.clone()),
    };

    let updated = PermissionSetRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update permission set");

    assert_eq!(updated.label, Some("New Label".to_string()));
    assert_eq!(updated.description, Some("New Description".to_string()));
    assert_eq!(updated.grants, new_grants);
}

#[tokio::test]
async fn test_update_permission_set_no_changes() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let created = fixture.create_default().await;

    let update_input = UpdatePermissionSetInput {
        label: None,
        description: None,
        grants: None,
    };

    let updated = PermissionSetRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update permission set");

    assert_eq!(updated.id, created.id);
    assert_eq!(updated.r#ref, created.r#ref);
}

#[tokio::test]
async fn test_update_permission_set_timestamps() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let created = fixture.create_default().await;
    let created_timestamp = created.created;
    let original_updated = created.updated;

    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;

    let update_input = UpdatePermissionSetInput {
        label: Some("Updated".to_string()),
        description: None,
        grants: None,
    };

    let updated = PermissionSetRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update permission set");

    assert_eq!(updated.created, created_timestamp);
    assert!(updated.updated > original_updated);
}

#[tokio::test]
async fn test_delete_permission_set() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let created = fixture.create_default().await;

    let deleted = PermissionSetRepository::delete(&pool, created.id)
        .await
        .expect("Failed to delete permission set");

    assert!(deleted);

    let found = PermissionSetRepository::find_by_id(&pool, created.id)
        .await
        .expect("Query should succeed");

    assert!(found.is_none());
}

#[tokio::test]
async fn test_delete_permission_set_not_found() {
    let pool = create_test_pool().await.expect("Failed to create pool");

    let deleted = PermissionSetRepository::delete(&pool, 999_999_999)
        .await
        .expect("Delete should succeed");

    assert!(!deleted);
}

#[tokio::test]
async fn test_permission_set_cascade_from_pack() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let (pack_id, permset) = fixture.create_with_pack().await;

    // Delete pack - permission set should be cascade deleted
    let deleted = PackRepository::delete(&pool, pack_id)
        .await
        .expect("Failed to delete pack");
    assert!(deleted);

    // Permission set should no longer exist
    let found = PermissionSetRepository::find_by_id(&pool, permset.id)
        .await
        .expect("Query should succeed");
    assert!(found.is_none());
}

#[tokio::test]
async fn test_permission_set_timestamps_auto_set() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let before = chrono::Utc::now();
    let permset = fixture.create_default().await;
    let after = chrono::Utc::now();

    assert!(permset.created >= before);
    assert!(permset.created <= after);
    assert!(permset.updated >= before);
    assert!(permset.updated <= after);
}

// ============================================================================
// PermissionAssignment Repository Tests
// ============================================================================

#[tokio::test]
async fn test_create_permission_assignment() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let permset = fixture.create_default().await;

    let assignment = fixture.create_assignment(identity_id, permset.id).await;

    assert!(assignment.id > 0);
    assert_eq!(assignment.identity, identity_id);
    assert_eq!(assignment.permset, permset.id);
}

#[tokio::test]
async fn test_create_permission_assignment_duplicate() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let permset = fixture.create_default().await;

    // First assignment should succeed
    let result1 = fixture.create_assignment(identity_id, permset.id).await;
    assert!(result1.id > 0);

    // Second assignment with same identity+permset should fail (unique constraint)
    let input = CreatePermissionAssignmentInput {
        identity: identity_id,
        permset: permset.id,
    };
    let result2 = PermissionAssignmentRepository::create(&pool, input).await;
    assert!(result2.is_err());
}

#[tokio::test]
async fn test_create_permission_assignment_invalid_identity() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let permset = fixture.create_default().await;

    let input = CreatePermissionAssignmentInput {
        identity: 999_999_999,
        permset: permset.id,
    };

    let result = PermissionAssignmentRepository::create(&pool, input).await;
    assert!(result.is_err()); // Foreign key violation
}

#[tokio::test]
async fn test_create_permission_assignment_invalid_permset() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;

    let input = CreatePermissionAssignmentInput {
        identity: identity_id,
        permset: 999_999_999,
    };

    let result = PermissionAssignmentRepository::create(&pool, input).await;
    assert!(result.is_err()); // Foreign key violation
}

#[tokio::test]
async fn test_find_permission_assignment_by_id() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let permset = fixture.create_default().await;
    let created = fixture.create_assignment(identity_id, permset.id).await;

    let found = PermissionAssignmentRepository::find_by_id(&pool, created.id)
        .await
        .expect("Failed to find assignment")
        .expect("Assignment not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.identity, identity_id);
    assert_eq!(found.permset, permset.id);
}

#[tokio::test]
async fn test_find_permission_assignment_by_id_not_found() {
    let pool = create_test_pool().await.expect("Failed to create pool");

    let result = PermissionAssignmentRepository::find_by_id(&pool, 999_999_999)
        .await
        .expect("Query should succeed");

    assert!(result.is_none());
}

#[tokio::test]
async fn test_list_permission_assignments() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let p1 = fixture.create_default().await;
    let p2 = fixture.create_default().await;

    let a1 = fixture.create_assignment(identity_id, p1.id).await;
    let a2 = fixture.create_assignment(identity_id, p2.id).await;

    let assignments = PermissionAssignmentRepository::list(&pool)
        .await
        .expect("Failed to list assignments");

    let ids: Vec<i64> = assignments.iter().map(|a| a.id).collect();
    assert!(ids.contains(&a1.id));
    assert!(ids.contains(&a2.id));
}

#[tokio::test]
async fn test_find_assignments_by_identity() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity1 = fixture.create_identity().await;
    let identity2 = fixture.create_identity().await;
    let p1 = fixture.create_default().await;
    let p2 = fixture.create_default().await;

    let a1 = fixture.create_assignment(identity1, p1.id).await;
    let a2 = fixture.create_assignment(identity1, p2.id).await;
    let _a3 = fixture.create_assignment(identity2, p1.id).await;

    let assignments = PermissionAssignmentRepository::find_by_identity(&pool, identity1)
        .await
        .expect("Failed to find assignments");

    assert_eq!(assignments.len(), 2);
    let ids: Vec<i64> = assignments.iter().map(|a| a.id).collect();
    assert!(ids.contains(&a1.id));
    assert!(ids.contains(&a2.id));
}

#[tokio::test]
async fn test_find_assignments_by_identity_empty() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;

    let assignments = PermissionAssignmentRepository::find_by_identity(&pool, identity_id)
        .await
        .expect("Failed to find assignments");

    assert!(assignments.is_empty());
}

#[tokio::test]
async fn test_delete_permission_assignment() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let permset = fixture.create_default().await;
    let created = fixture.create_assignment(identity_id, permset.id).await;

    let deleted = PermissionAssignmentRepository::delete(&pool, created.id)
        .await
        .expect("Failed to delete assignment");

    assert!(deleted);

    let found = PermissionAssignmentRepository::find_by_id(&pool, created.id)
        .await
        .expect("Query should succeed");

    assert!(found.is_none());
}

#[tokio::test]
async fn test_delete_permission_assignment_not_found() {
    let pool = create_test_pool().await.expect("Failed to create pool");

    let deleted = PermissionAssignmentRepository::delete(&pool, 999_999_999)
        .await
        .expect("Delete should succeed");

    assert!(!deleted);
}

#[tokio::test]
async fn test_permission_assignment_cascade_from_identity() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let permset = fixture.create_default().await;
    let assignment = fixture.create_assignment(identity_id, permset.id).await;

    // Delete identity - assignment should be cascade deleted
    let deleted = IdentityRepository::delete(&pool, identity_id)
        .await
        .expect("Failed to delete identity");
    assert!(deleted);

    // Assignment should no longer exist
    let found = PermissionAssignmentRepository::find_by_id(&pool, assignment.id)
        .await
        .expect("Query should succeed");
    assert!(found.is_none());
}

#[tokio::test]
async fn test_permission_assignment_cascade_from_permset() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let permset = fixture.create_default().await;
    let assignment = fixture.create_assignment(identity_id, permset.id).await;

    // Delete permission set - assignment should be cascade deleted
    let deleted = PermissionSetRepository::delete(&pool, permset.id)
        .await
        .expect("Failed to delete permission set");
    assert!(deleted);

    // Assignment should no longer exist
    let found = PermissionAssignmentRepository::find_by_id(&pool, assignment.id)
        .await
        .expect("Query should succeed");
    assert!(found.is_none());
}

#[tokio::test]
async fn test_permission_assignment_timestamp_auto_set() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let permset = fixture.create_default().await;

    let before = chrono::Utc::now();
    let assignment = fixture.create_assignment(identity_id, permset.id).await;
    let after = chrono::Utc::now();

    assert!(assignment.created >= before);
    assert!(assignment.created <= after);
}

#[tokio::test]
async fn test_multiple_identities_same_permset() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity1 = fixture.create_identity().await;
    let identity2 = fixture.create_identity().await;
    let identity3 = fixture.create_identity().await;
    let permset = fixture.create_default().await;

    let a1 = fixture.create_assignment(identity1, permset.id).await;
    let a2 = fixture.create_assignment(identity2, permset.id).await;
    let a3 = fixture.create_assignment(identity3, permset.id).await;

    // All should have same permset
    assert_eq!(a1.permset, permset.id);
    assert_eq!(a2.permset, permset.id);
    assert_eq!(a3.permset, permset.id);

    // But different identities
    assert_eq!(a1.identity, identity1);
    assert_eq!(a2.identity, identity2);
    assert_eq!(a3.identity, identity3);
}

#[tokio::test]
async fn test_one_identity_multiple_permsets() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let p1 = fixture.create_default().await;
    let p2 = fixture.create_default().await;
    let p3 = fixture.create_default().await;

    let a1 = fixture.create_assignment(identity_id, p1.id).await;
    let a2 = fixture.create_assignment(identity_id, p2.id).await;
    let a3 = fixture.create_assignment(identity_id, p3.id).await;

    // All should have same identity
    assert_eq!(a1.identity, identity_id);
    assert_eq!(a2.identity, identity_id);
    assert_eq!(a3.identity, identity_id);

    // But different permsets
    assert_eq!(a1.permset, p1.id);
    assert_eq!(a2.permset, p2.id);
    assert_eq!(a3.permset, p3.id);

    // Query by identity should return all 3
    let assignments = PermissionAssignmentRepository::find_by_identity(&pool, identity_id)
        .await
        .expect("Failed to find assignments");

    assert_eq!(assignments.len(), 3);
}

#[tokio::test]
async fn test_permission_set_ordering() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let ref1 = fixture.unique_ref("aaa");
    let ref2 = fixture.unique_ref("bbb");
    let ref3 = fixture.unique_ref("ccc");

    let _p1 = fixture
        .create_permission_set(&ref1, None, None, json!([]))
        .await;
    let _p2 = fixture
        .create_permission_set(&ref2, None, None, json!([]))
        .await;
    let _p3 = fixture
        .create_permission_set(&ref3, None, None, json!([]))
        .await;

    let permsets = PermissionSetRepository::list(&pool)
        .await
        .expect("Failed to list permission sets");

    // Should be ordered by ref ASC
    let our_sets: Vec<&PermissionSet> = permsets
        .iter()
        .filter(|p| p.r#ref == ref1 || p.r#ref == ref2 || p.r#ref == ref3)
        .collect();

    if our_sets.len() == 3 {
        let pos1 = permsets.iter().position(|p| p.r#ref == ref1).unwrap();
        let pos2 = permsets.iter().position(|p| p.r#ref == ref2).unwrap();
        let pos3 = permsets.iter().position(|p| p.r#ref == ref3).unwrap();

        assert!(pos1 < pos2);
        assert!(pos2 < pos3);
    }
}

#[tokio::test]
async fn test_permission_assignment_ordering() {
    let pool = create_test_pool().await.expect("Failed to create pool");
    let fixture = PermissionSetFixture::new(pool.clone());

    let identity_id = fixture.create_identity().await;
    let p1 = fixture.create_default().await;
    let p2 = fixture.create_default().await;
    let p3 = fixture.create_default().await;

    let a1 = fixture.create_assignment(identity_id, p1.id).await;
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
    let a2 = fixture.create_assignment(identity_id, p2.id).await;
    tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
    let a3 = fixture.create_assignment(identity_id, p3.id).await;

    let assignments = PermissionAssignmentRepository::list(&pool)
        .await
        .expect("Failed to list assignments");

    // Should be ordered by created DESC (newest first)
    let ids: Vec<i64> = assignments.iter().map(|a| a.id).collect();
    if ids.contains(&a1.id) && ids.contains(&a2.id) && ids.contains(&a3.id) {
        let pos1 = ids.iter().position(|&id| id == a1.id).unwrap();
        let pos2 = ids.iter().position(|&id| id == a2.id).unwrap();
        let pos3 = ids.iter().position(|&id| id == a3.id).unwrap();

        // Newest (a3) should come before older ones
        assert!(pos3 < pos2);
        assert!(pos2 < pos1);
    }
}

343
crates/common/tests/queue_stats_repository_tests.rs
Normal file
@@ -0,0 +1,343 @@
//! Integration tests for the queue stats repository
//!
//! Tests queue statistics persistence and retrieval operations.

use attune_common::repositories::queue_stats::{QueueStatsRepository, UpsertQueueStatsInput};
use chrono::Utc;

mod helpers;
use helpers::{ActionFixture, PackFixture};

#[tokio::test]
async fn test_upsert_queue_stats() {
    let pool = helpers::create_test_pool().await.unwrap();

    // Create a test pack and action using fixtures
    let pack = PackFixture::new_unique("test").create(&pool).await.unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    // Upsert queue stats (insert path)
    let input = UpsertQueueStatsInput {
        action_id: action.id,
        queue_length: 5,
        active_count: 2,
        max_concurrent: 3,
        oldest_enqueued_at: Some(Utc::now()),
        total_enqueued: 100,
        total_completed: 95,
    };

    let stats = QueueStatsRepository::upsert(&pool, input.clone())
        .await
        .unwrap();

    assert_eq!(stats.action_id, action.id);
    assert_eq!(stats.queue_length, 5);
    assert_eq!(stats.active_count, 2);
    assert_eq!(stats.max_concurrent, 3);
    assert_eq!(stats.total_enqueued, 100);
    assert_eq!(stats.total_completed, 95);
    assert!(stats.oldest_enqueued_at.is_some());

    // Upsert again (update path)
    let update_input = UpsertQueueStatsInput {
        action_id: action.id,
        queue_length: 3,
        active_count: 3,
        max_concurrent: 3,
        oldest_enqueued_at: None,
        total_enqueued: 110,
        total_completed: 107,
    };

    let updated_stats = QueueStatsRepository::upsert(&pool, update_input)
        .await
        .unwrap();

    assert_eq!(updated_stats.action_id, action.id);
    assert_eq!(updated_stats.queue_length, 3);
    assert_eq!(updated_stats.active_count, 3);
    assert_eq!(updated_stats.total_enqueued, 110);
    assert_eq!(updated_stats.total_completed, 107);
    assert!(updated_stats.oldest_enqueued_at.is_none());
}

#[tokio::test]
async fn test_find_queue_stats_by_action() {
    let pool = helpers::create_test_pool().await.unwrap();

    // Create a test pack and action
    let pack = PackFixture::new_unique("test").create(&pool).await.unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    // No stats initially
    let result = QueueStatsRepository::find_by_action(&pool, action.id)
        .await
        .unwrap();
    assert!(result.is_none());

    // Create stats
    let input = UpsertQueueStatsInput {
        action_id: action.id,
        queue_length: 10,
        active_count: 5,
        max_concurrent: 5,
        oldest_enqueued_at: Some(Utc::now()),
        total_enqueued: 200,
        total_completed: 190,
    };

    QueueStatsRepository::upsert(&pool, input).await.unwrap();

    // Find stats
    let stats = QueueStatsRepository::find_by_action(&pool, action.id)
        .await
        .unwrap()
        .expect("Stats should exist");

    assert_eq!(stats.action_id, action.id);
    assert_eq!(stats.queue_length, 10);
    assert_eq!(stats.active_count, 5);
}

#[tokio::test]
async fn test_list_active_queue_stats() {
    let pool = helpers::create_test_pool().await.unwrap();

    // Create a test pack
    let pack = PackFixture::new_unique("test").create(&pool).await.unwrap();

    // Create multiple actions with different queue states
    for i in 0..3 {
        let action = ActionFixture::new_unique(pack.id, &pack.r#ref, &format!("action_{}", i))
            .create(&pool)
            .await
            .unwrap();

        let input = if i == 0 {
            // Active queue
            UpsertQueueStatsInput {
                action_id: action.id,
                queue_length: 5,
                active_count: 2,
                max_concurrent: 3,
                oldest_enqueued_at: Some(Utc::now()),
                total_enqueued: 50,
                total_completed: 45,
            }
        } else if i == 1 {
            // Active executions but no queue
            UpsertQueueStatsInput {
                action_id: action.id,
                queue_length: 0,
                active_count: 3,
                max_concurrent: 3,
                oldest_enqueued_at: None,
                total_enqueued: 30,
                total_completed: 27,
            }
        } else {
            // Idle (should not appear in the active list)
            UpsertQueueStatsInput {
                action_id: action.id,
                queue_length: 0,
                active_count: 0,
                max_concurrent: 3,
                oldest_enqueued_at: None,
                total_enqueued: 20,
                total_completed: 20,
            }
        };

        QueueStatsRepository::upsert(&pool, input).await.unwrap();
    }

    // List active queues
    let active_stats = QueueStatsRepository::list_active(&pool).await.unwrap();

    // Should only return entries with queue_length > 0 or active_count > 0.
    // At least 2 come from our test data (there may be more from other tests).
    let our_active = active_stats
        .iter()
        .filter(|s| s.queue_length > 0 || s.active_count > 0)
        .count();
    assert!(our_active >= 2);
}
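The "active" filter exercised above can be stated as a pure predicate: a queue counts as active when it has pending work or running executions. A minimal sketch of that rule, separate from the repository (the helper name `is_active` is hypothetical, not part of the `QueueStatsRepository` API):

```rust
/// Hypothetical helper mirroring the filter used in the test: a stats row
/// is active when work is queued or executions are currently running.
fn is_active(queue_length: i32, active_count: i32) -> bool {
    queue_length > 0 || active_count > 0
}

fn main() {
    assert!(is_active(5, 2)); // queued work and active executions
    assert!(is_active(0, 3)); // active executions but an empty queue
    assert!(!is_active(0, 0)); // idle: excluded from the active list
    println!("ok");
}
```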

#[tokio::test]
async fn test_delete_queue_stats() {
    let pool = helpers::create_test_pool().await.unwrap();

    // Create a test pack and action
    let pack = PackFixture::new_unique("test").create(&pool).await.unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    // Create stats
    let input = UpsertQueueStatsInput {
        action_id: action.id,
        queue_length: 5,
        active_count: 2,
        max_concurrent: 3,
        oldest_enqueued_at: Some(Utc::now()),
        total_enqueued: 100,
        total_completed: 95,
    };

    QueueStatsRepository::upsert(&pool, input).await.unwrap();

    // Verify the stats exist
    let stats = QueueStatsRepository::find_by_action(&pool, action.id)
        .await
        .unwrap();
    assert!(stats.is_some());

    // Delete
    let deleted = QueueStatsRepository::delete(&pool, action.id)
        .await
        .unwrap();
    assert!(deleted);

    // Verify deleted
    let stats = QueueStatsRepository::find_by_action(&pool, action.id)
        .await
        .unwrap();
    assert!(stats.is_none());

    // Delete again (should return false)
    let deleted = QueueStatsRepository::delete(&pool, action.id)
        .await
        .unwrap();
    assert!(!deleted);
}

#[tokio::test]
async fn test_batch_upsert_queue_stats() {
    let pool = helpers::create_test_pool().await.unwrap();

    // Create a test pack
    let pack = PackFixture::new_unique("test").create(&pool).await.unwrap();

    // Create multiple actions
    let mut inputs = Vec::new();
    for i in 0..5 {
        let action = ActionFixture::new_unique(pack.id, &pack.r#ref, &format!("action_{}", i))
            .create(&pool)
            .await
            .unwrap();

        inputs.push(UpsertQueueStatsInput {
            action_id: action.id,
            queue_length: i,
            active_count: i,
            max_concurrent: 5,
            oldest_enqueued_at: if i > 0 { Some(Utc::now()) } else { None },
            total_enqueued: (i * 10) as i64,
            total_completed: (i * 9) as i64,
        });
    }

    // Batch upsert
    let results = QueueStatsRepository::batch_upsert(&pool, inputs)
        .await
        .unwrap();

    assert_eq!(results.len(), 5);

    // Verify each result
    for (i, stats) in results.iter().enumerate() {
        assert_eq!(stats.queue_length, i as i32);
        assert_eq!(stats.active_count, i as i32);
        assert_eq!(stats.total_enqueued, (i * 10) as i64);
        assert_eq!(stats.total_completed, (i * 9) as i64);
    }
}

#[tokio::test]
async fn test_clear_stale_queue_stats() {
    let pool = helpers::create_test_pool().await.unwrap();

    // Create a test pack
    let pack = PackFixture::new_unique("test").create(&pool).await.unwrap();

    // Create an action with idle stats
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    // Create idle stats (queue_length = 0, active_count = 0)
    let input = UpsertQueueStatsInput {
        action_id: action.id,
        queue_length: 0,
        active_count: 0,
        max_concurrent: 3,
        oldest_enqueued_at: None,
        total_enqueued: 100,
        total_completed: 100,
    };

    QueueStatsRepository::upsert(&pool, input).await.unwrap();

    // Try to clear stale stats with a very large timeout; recent stats
    // should survive.
    let _cleared = QueueStatsRepository::clear_stale(&pool, 3600)
        .await
        .unwrap();
    // The cleared count may or may not be 0 depending on other test data,
    // but our just-created stat must still exist.
    let stats = QueueStatsRepository::find_by_action(&pool, action.id)
        .await
        .unwrap();
    assert!(stats.is_some());
}
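The test above passes `clear_stale` a timeout in seconds and expects a freshly written row to survive a one-hour window. A plausible reading of that contract (an assumption; the repository's actual SQL is not shown here) is that a row is stale when its last update is older than the timeout:

```rust
use std::time::{Duration, SystemTime};

/// Hypothetical staleness rule assumed by the test: a stats row is stale
/// when it was last updated more than `timeout_secs` seconds ago.
fn is_stale(updated: SystemTime, now: SystemTime, timeout_secs: u64) -> bool {
    match now.duration_since(updated) {
        Ok(age) => age > Duration::from_secs(timeout_secs),
        Err(_) => false, // `updated` is in the future; treat as not stale
    }
}

fn main() {
    let now = SystemTime::now();
    // A row written just now survives even a 1-hour timeout, as in the test.
    assert!(!is_stale(now, now, 3600));
    // A row last touched two hours ago would be cleared.
    assert!(is_stale(now - Duration::from_secs(7200), now, 3600));
    println!("ok");
}
```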

#[tokio::test]
async fn test_queue_stats_cascade_delete() {
    let pool = helpers::create_test_pool().await.unwrap();

    // Create a test pack and action
    let pack = PackFixture::new_unique("test").create(&pool).await.unwrap();
    let action = ActionFixture::new_unique(pack.id, &pack.r#ref, "test_action")
        .create(&pool)
        .await
        .unwrap();

    // Create stats
    let input = UpsertQueueStatsInput {
        action_id: action.id,
        queue_length: 5,
        active_count: 2,
        max_concurrent: 3,
        oldest_enqueued_at: Some(Utc::now()),
        total_enqueued: 100,
        total_completed: 95,
    };

    QueueStatsRepository::upsert(&pool, input).await.unwrap();

    // Verify the stats exist
    let stats = QueueStatsRepository::find_by_action(&pool, action.id)
        .await
        .unwrap();
    assert!(stats.is_some());

    // Delete the action (should cascade to queue_stats)
    use attune_common::repositories::action::ActionRepository;
    use attune_common::repositories::Delete;
    ActionRepository::delete(&pool, action.id).await.unwrap();

    // Verify the stats are also deleted (cascade)
    let stats = QueueStatsRepository::find_by_action(&pool, action.id)
        .await
        .unwrap();
    assert!(stats.is_none());
}
765
crates/common/tests/repository_artifact_tests.rs
Normal file
@@ -0,0 +1,765 @@
//! Integration tests for the Artifact repository
//!
//! Tests cover CRUD operations, specialized queries, constraints,
//! enum handling, timestamps, and edge cases.

use attune_common::models::enums::{ArtifactType, OwnerType, RetentionPolicyType};
use attune_common::repositories::artifact::{
    ArtifactRepository, CreateArtifactInput, UpdateArtifactInput,
};
use attune_common::repositories::{Create, Delete, FindById, FindByRef, List, Update};
use attune_common::Error;
use sqlx::PgPool;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicU64, Ordering};

mod helpers;
use helpers::create_test_pool;

// Global counter for unique IDs across all tests
static GLOBAL_COUNTER: AtomicU64 = AtomicU64::new(0);

/// Test fixture for creating unique artifact data
struct ArtifactFixture {
    sequence: AtomicU64,
    test_id: String,
}

impl ArtifactFixture {
    fn new(test_name: &str) -> Self {
        let global_count = GLOBAL_COUNTER.fetch_add(1, Ordering::SeqCst);
        let timestamp = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_nanos();

        // Derive a unique test ID from the test name, timestamp, and global counter
        let mut hasher = DefaultHasher::new();
        test_name.hash(&mut hasher);
        timestamp.hash(&mut hasher);
        global_count.hash(&mut hasher);
        let hash = hasher.finish();

        let test_id = format!("test_{}_{:x}", global_count, hash);

        Self {
            sequence: AtomicU64::new(0),
            test_id,
        }
    }

    fn unique_ref(&self, prefix: &str) -> String {
        let seq = self.sequence.fetch_add(1, Ordering::SeqCst);
        format!("{}_{}_ref_{}", prefix, self.test_id, seq)
    }

    fn unique_owner(&self, prefix: &str) -> String {
        let seq = self.sequence.fetch_add(1, Ordering::SeqCst);
        format!("{}_{}_owner_{}", prefix, self.test_id, seq)
    }

    fn create_input(&self, ref_suffix: &str) -> CreateArtifactInput {
        CreateArtifactInput {
            r#ref: self.unique_ref(ref_suffix),
            scope: OwnerType::System,
            owner: self.unique_owner("system"),
            r#type: ArtifactType::FileText,
            retention_policy: RetentionPolicyType::Versions,
            retention_limit: 5,
        }
    }
}

async fn setup_db() -> PgPool {
    create_test_pool()
        .await
        .expect("Failed to create test pool")
}

// ============================================================================
// Basic CRUD Tests
// ============================================================================

#[tokio::test]
async fn test_create_artifact() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("create_artifact");
    let input = fixture.create_input("basic");

    let artifact = ArtifactRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create artifact");

    assert!(artifact.id > 0);
    assert_eq!(artifact.r#ref, input.r#ref);
    assert_eq!(artifact.scope, input.scope);
    assert_eq!(artifact.owner, input.owner);
    assert_eq!(artifact.r#type, input.r#type);
    assert_eq!(artifact.retention_policy, input.retention_policy);
    assert_eq!(artifact.retention_limit, input.retention_limit);
}

#[tokio::test]
async fn test_find_by_id_exists() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("find_by_id_exists");
    let input = fixture.create_input("find");

    let created = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact");

    let found = ArtifactRepository::find_by_id(&pool, created.id)
        .await
        .expect("Failed to query artifact")
        .expect("Artifact not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.r#ref, created.r#ref);
    assert_eq!(found.scope, created.scope);
    assert_eq!(found.owner, created.owner);
}

#[tokio::test]
async fn test_find_by_id_not_exists() {
    let pool = setup_db().await;
    let non_existent_id = 999_999_999_999i64;

    let found = ArtifactRepository::find_by_id(&pool, non_existent_id)
        .await
        .expect("Failed to query artifact");

    assert!(found.is_none());
}

#[tokio::test]
async fn test_get_by_id_not_found_error() {
    let pool = setup_db().await;
    let non_existent_id = 999_999_999_998i64;

    let result = ArtifactRepository::get_by_id(&pool, non_existent_id).await;

    assert!(result.is_err());
    match result {
        Err(Error::NotFound { entity, .. }) => {
            assert_eq!(entity, "artifact");
        }
        _ => panic!("Expected NotFound error"),
    }
}

#[tokio::test]
async fn test_find_by_ref_exists() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("find_by_ref_exists");
    let input = fixture.create_input("ref_test");

    let created = ArtifactRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create artifact");

    let found = ArtifactRepository::find_by_ref(&pool, &input.r#ref)
        .await
        .expect("Failed to query artifact")
        .expect("Artifact not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.r#ref, created.r#ref);
}

#[tokio::test]
async fn test_find_by_ref_not_exists() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("find_by_ref_not_exists");

    let found = ArtifactRepository::find_by_ref(&pool, &fixture.unique_ref("nonexistent"))
        .await
        .expect("Failed to query artifact");

    assert!(found.is_none());
}

#[tokio::test]
async fn test_list_artifacts() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("list");

    // Create multiple artifacts
    for i in 0..3 {
        let input = fixture.create_input(&format!("list_{}", i));
        ArtifactRepository::create(&pool, input)
            .await
            .expect("Failed to create artifact");
    }

    let artifacts = ArtifactRepository::list(&pool)
        .await
        .expect("Failed to list artifacts");

    // Should have at least the 3 we created
    assert!(artifacts.len() >= 3);

    // Should be ordered by created DESC (newest first)
    for i in 0..artifacts.len().saturating_sub(1) {
        assert!(artifacts[i].created >= artifacts[i + 1].created);
    }
}

#[tokio::test]
async fn test_update_artifact_ref() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("update_ref");
    let input = fixture.create_input("original");

    let created = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact");

    let new_ref = fixture.unique_ref("updated");
    let update_input = UpdateArtifactInput {
        r#ref: Some(new_ref.clone()),
        ..Default::default()
    };

    let updated = ArtifactRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update artifact");

    assert_eq!(updated.id, created.id);
    assert_eq!(updated.r#ref, new_ref);
    assert_eq!(updated.scope, created.scope);
    assert!(updated.updated > created.updated);
}

#[tokio::test]
async fn test_update_artifact_all_fields() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("update_all");
    let input = fixture.create_input("original");

    let created = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact");

    let update_input = UpdateArtifactInput {
        r#ref: Some(fixture.unique_ref("all_updated")),
        scope: Some(OwnerType::Identity),
        owner: Some(fixture.unique_owner("identity")),
        r#type: Some(ArtifactType::FileImage),
        retention_policy: Some(RetentionPolicyType::Days),
        retention_limit: Some(30),
    };

    let updated = ArtifactRepository::update(&pool, created.id, update_input.clone())
        .await
        .expect("Failed to update artifact");

    assert_eq!(updated.r#ref, update_input.r#ref.unwrap());
    assert_eq!(updated.scope, update_input.scope.unwrap());
    assert_eq!(updated.owner, update_input.owner.unwrap());
    assert_eq!(updated.r#type, update_input.r#type.unwrap());
    assert_eq!(
        updated.retention_policy,
        update_input.retention_policy.unwrap()
    );
    assert_eq!(
        updated.retention_limit,
        update_input.retention_limit.unwrap()
    );
}

#[tokio::test]
async fn test_update_artifact_no_changes() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("update_no_changes");
    let input = fixture.create_input("nochange");

    let created = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact");

    let update_input = UpdateArtifactInput::default();

    let updated = ArtifactRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update artifact");

    assert_eq!(updated.id, created.id);
    assert_eq!(updated.r#ref, created.r#ref);
    assert_eq!(updated.updated, created.updated);
}

#[tokio::test]
async fn test_delete_artifact() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("delete");
    let input = fixture.create_input("delete");

    let created = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact");

    let deleted = ArtifactRepository::delete(&pool, created.id)
        .await
        .expect("Failed to delete artifact");

    assert!(deleted);

    let found = ArtifactRepository::find_by_id(&pool, created.id)
        .await
        .expect("Failed to query artifact");

    assert!(found.is_none());
}

#[tokio::test]
async fn test_delete_artifact_not_exists() {
    let pool = setup_db().await;
    let non_existent_id = 999_999_999_997i64;

    let deleted = ArtifactRepository::delete(&pool, non_existent_id)
        .await
        .expect("Failed to delete artifact");

    assert!(!deleted);
}

// ============================================================================
|
||||
// Enum Type Tests
|
||||
// ============================================================================
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_artifact_all_types() {
|
||||
let pool = setup_db().await;
|
||||
let fixture = ArtifactFixture::new("all_types");
|
||||
|
||||
let types = vec![
|
||||
ArtifactType::FileBinary,
|
||||
ArtifactType::FileDataTable,
|
||||
ArtifactType::FileImage,
|
||||
ArtifactType::FileText,
|
||||
ArtifactType::Other,
|
||||
ArtifactType::Progress,
|
||||
ArtifactType::Url,
|
||||
];
|
||||
|
||||
for artifact_type in types {
|
||||
let mut input = fixture.create_input(&format!("{:?}", artifact_type));
|
||||
input.r#type = artifact_type;
|
||||
|
||||
let created = ArtifactRepository::create(&pool, input)
|
||||
.await
|
||||
.expect("Failed to create artifact");
|
||||
|
||||
assert_eq!(created.r#type, artifact_type);
|
||||
}
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_artifact_all_scopes() {
|
||||
let pool = setup_db().await;
|
||||
let fixture = ArtifactFixture::new("all_scopes");
|
||||
|
||||
let scopes = vec![
|
||||
OwnerType::System,
|
||||
OwnerType::Identity,
|
||||
OwnerType::Pack,
|
||||
OwnerType::Action,
|
||||
OwnerType::Sensor,
|
||||
];
|
||||
|
||||
for scope in scopes {
|
||||
let mut input = fixture.create_input(&format!("{:?}", scope));
|
||||
input.scope = scope;
|
||||
|
||||
let created = ArtifactRepository::create(&pool, input)
|
||||
.await
|
||||
.expect("Failed to create artifact");
|
||||
|
||||
assert_eq!(created.scope, scope);
|
||||
}
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_artifact_all_retention_policies() {
|
||||
let pool = setup_db().await;
|
||||
let fixture = ArtifactFixture::new("all_retention");
|
||||
|
||||
let policies = vec![
|
||||
RetentionPolicyType::Versions,
|
||||
RetentionPolicyType::Days,
|
||||
RetentionPolicyType::Hours,
|
||||
RetentionPolicyType::Minutes,
|
||||
];
|
||||
|
||||
for policy in policies {
|
||||
let mut input = fixture.create_input(&format!("{:?}", policy));
|
||||
input.retention_policy = policy;
|
||||
|
||||
let created = ArtifactRepository::create(&pool, input)
|
||||
.await
|
||||
.expect("Failed to create artifact");
|
||||
|
||||
assert_eq!(created.retention_policy, policy);
|
||||
}
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// Specialized Query Tests
|
||||
// ============================================================================
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_by_scope() {
|
||||
let pool = setup_db().await;
|
||||
let fixture = ArtifactFixture::new("find_by_scope");
|
||||
|
||||
// Create artifacts with different scopes
|
||||
let mut identity_input = fixture.create_input("identity_scope");
|
||||
identity_input.scope = OwnerType::Identity;
|
||||
let identity_artifact = ArtifactRepository::create(&pool, identity_input)
|
||||
.await
|
||||
.expect("Failed to create identity artifact");
|
||||
|
||||
let mut system_input = fixture.create_input("system_scope");
|
||||
system_input.scope = OwnerType::System;
|
||||
ArtifactRepository::create(&pool, system_input)
|
||||
.await
|
||||
.expect("Failed to create system artifact");
|
||||
|
||||
// Find by identity scope
|
||||
let identity_artifacts = ArtifactRepository::find_by_scope(&pool, OwnerType::Identity)
|
||||
.await
|
||||
.expect("Failed to find by scope");
|
||||
|
||||
assert!(identity_artifacts
|
||||
.iter()
|
||||
.any(|a| a.id == identity_artifact.id));
|
||||
assert!(identity_artifacts
|
||||
.iter()
|
||||
.all(|a| a.scope == OwnerType::Identity));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_by_owner() {
|
||||
let pool = setup_db().await;
|
||||
let fixture = ArtifactFixture::new("find_by_owner");
|
||||
|
||||
let owner1 = fixture.unique_owner("owner1");
|
||||
let owner2 = fixture.unique_owner("owner2");
|
||||
|
||||
// Create artifacts with different owners
|
||||
let mut input1 = fixture.create_input("owner1");
|
||||
input1.owner = owner1.clone();
|
||||
let artifact1 = ArtifactRepository::create(&pool, input1)
|
||||
.await
|
||||
.expect("Failed to create artifact 1");
|
||||
|
||||
let mut input2 = fixture.create_input("owner2");
|
||||
input2.owner = owner2.clone();
|
||||
ArtifactRepository::create(&pool, input2)
|
||||
.await
|
||||
.expect("Failed to create artifact 2");
|
||||
|
||||
// Find by owner1
|
||||
let owner1_artifacts = ArtifactRepository::find_by_owner(&pool, &owner1)
|
||||
.await
|
||||
.expect("Failed to find by owner");
|
||||
|
||||
assert!(owner1_artifacts.iter().any(|a| a.id == artifact1.id));
|
||||
assert!(owner1_artifacts.iter().all(|a| a.owner == owner1));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_by_type() {
|
||||
let pool = setup_db().await;
|
||||
let fixture = ArtifactFixture::new("find_by_type");
|
||||
|
||||
// Create artifacts with different types
|
||||
let mut image_input = fixture.create_input("image");
|
||||
image_input.r#type = ArtifactType::FileImage;
|
||||
let image_artifact = ArtifactRepository::create(&pool, image_input)
|
||||
.await
|
||||
.expect("Failed to create image artifact");
|
||||
|
||||
let mut text_input = fixture.create_input("text");
|
||||
text_input.r#type = ArtifactType::FileText;
|
||||
ArtifactRepository::create(&pool, text_input)
|
||||
.await
|
||||
.expect("Failed to create text artifact");
|
||||
|
||||
// Find by image type
|
||||
let image_artifacts = ArtifactRepository::find_by_type(&pool, ArtifactType::FileImage)
|
||||
.await
|
||||
.expect("Failed to find by type");
|
||||
|
||||
assert!(image_artifacts.iter().any(|a| a.id == image_artifact.id));
|
||||
assert!(image_artifacts
|
||||
.iter()
|
||||
.all(|a| a.r#type == ArtifactType::FileImage));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_by_scope_and_owner() {
|
||||
let pool = setup_db().await;
|
||||
let fixture = ArtifactFixture::new("find_by_scope_and_owner");
|
||||
|
||||
let pack_owner = fixture.unique_owner("pack");
|
||||
|
||||
// Create artifact with pack scope and specific owner
|
||||
let mut pack_input = fixture.create_input("pack");
|
||||
pack_input.scope = OwnerType::Pack;
|
||||
pack_input.owner = pack_owner.clone();
|
||||
let pack_artifact = ArtifactRepository::create(&pool, pack_input)
|
||||
.await
|
||||
.expect("Failed to create pack artifact");
|
||||
|
||||
// Create artifact with same scope but different owner
|
||||
let mut other_input = fixture.create_input("other");
|
||||
other_input.scope = OwnerType::Pack;
|
||||
other_input.owner = fixture.unique_owner("other");
|
||||
ArtifactRepository::create(&pool, other_input)
|
||||
.await
|
||||
.expect("Failed to create other artifact");
|
||||
|
||||
// Find by scope and owner
|
||||
let artifacts =
|
||||
ArtifactRepository::find_by_scope_and_owner(&pool, OwnerType::Pack, &pack_owner)
|
||||
.await
|
||||
.expect("Failed to find by scope and owner");
|
||||
|
||||
assert!(artifacts.iter().any(|a| a.id == pack_artifact.id));
|
||||
assert!(artifacts
|
||||
.iter()
|
||||
.all(|a| a.scope == OwnerType::Pack && a.owner == pack_owner));
|
||||
}
|
||||
|
||||
#[tokio::test]
async fn test_find_by_retention_policy() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("find_by_retention");

    // Create artifacts with different retention policies
    let mut days_input = fixture.create_input("days");
    days_input.retention_policy = RetentionPolicyType::Days;
    let days_artifact = ArtifactRepository::create(&pool, days_input)
        .await
        .expect("Failed to create days artifact");

    let mut hours_input = fixture.create_input("hours");
    hours_input.retention_policy = RetentionPolicyType::Hours;
    ArtifactRepository::create(&pool, hours_input)
        .await
        .expect("Failed to create hours artifact");

    // Find by days retention policy
    let days_artifacts =
        ArtifactRepository::find_by_retention_policy(&pool, RetentionPolicyType::Days)
            .await
            .expect("Failed to find by retention policy");

    assert!(days_artifacts.iter().any(|a| a.id == days_artifact.id));
    assert!(days_artifacts
        .iter()
        .all(|a| a.retention_policy == RetentionPolicyType::Days));
}
// ============================================================================
// Timestamp Tests
// ============================================================================

#[tokio::test]
async fn test_timestamps_auto_set_on_create() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("timestamps_create");
    let input = fixture.create_input("timestamps");

    let artifact = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact");

    assert!(artifact.created.timestamp() > 0);
    assert!(artifact.updated.timestamp() > 0);
    assert_eq!(artifact.created, artifact.updated);
}

#[tokio::test]
async fn test_updated_timestamp_changes_on_update() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("timestamps_update");
    let input = fixture.create_input("update_time");

    let created = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact");

    // Small delay to ensure timestamp difference
    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

    let update_input = UpdateArtifactInput {
        r#ref: Some(fixture.unique_ref("updated")),
        ..Default::default()
    };

    let updated = ArtifactRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update artifact");

    assert_eq!(updated.created, created.created);
    assert!(updated.updated > created.updated);
}
// ============================================================================
// Edge Cases and Validation Tests
// ============================================================================

#[tokio::test]
async fn test_artifact_with_empty_owner() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("empty_owner");
    let mut input = fixture.create_input("empty");
    input.owner = String::new();

    let artifact = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact with empty owner");

    assert_eq!(artifact.owner, "");
}

#[tokio::test]
async fn test_artifact_with_special_characters_in_ref() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("special_chars");
    let mut input = fixture.create_input("special");
    input.r#ref = format!(
        "{}_test/path/to/file-with-special_chars.txt",
        fixture.unique_ref("spec")
    );

    let artifact = ArtifactRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create artifact with special chars");

    assert_eq!(artifact.r#ref, input.r#ref);
}

#[tokio::test]
async fn test_artifact_with_zero_retention_limit() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("zero_retention");
    let mut input = fixture.create_input("zero");
    input.retention_limit = 0;

    let artifact = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact with zero retention limit");

    assert_eq!(artifact.retention_limit, 0);
}

#[tokio::test]
async fn test_artifact_with_negative_retention_limit() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("negative_retention");
    let mut input = fixture.create_input("negative");
    input.retention_limit = -1;

    let artifact = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact with negative retention limit");

    assert_eq!(artifact.retention_limit, -1);
}

#[tokio::test]
async fn test_artifact_with_large_retention_limit() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("large_retention");
    let mut input = fixture.create_input("large");
    input.retention_limit = i32::MAX;

    let artifact = ArtifactRepository::create(&pool, input)
        .await
        .expect("Failed to create artifact with large retention limit");

    assert_eq!(artifact.retention_limit, i32::MAX);
}

#[tokio::test]
async fn test_artifact_with_long_ref() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("long_ref");
    let mut input = fixture.create_input("long");
    input.r#ref = format!("{}_{}", fixture.unique_ref("long"), "a".repeat(500));

    let artifact = ArtifactRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create artifact with long ref");

    assert_eq!(artifact.r#ref, input.r#ref);
}

#[tokio::test]
async fn test_multiple_artifacts_same_ref_allowed() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("duplicate_ref");
    let same_ref = fixture.unique_ref("same");

    // Create first artifact
    let mut input1 = fixture.create_input("dup1");
    input1.r#ref = same_ref.clone();
    let artifact1 = ArtifactRepository::create(&pool, input1)
        .await
        .expect("Failed to create first artifact");

    // Create second artifact with same ref (should be allowed)
    let mut input2 = fixture.create_input("dup2");
    input2.r#ref = same_ref.clone();
    let artifact2 = ArtifactRepository::create(&pool, input2)
        .await
        .expect("Failed to create second artifact with same ref");

    assert_ne!(artifact1.id, artifact2.id);
    assert_eq!(artifact1.r#ref, artifact2.r#ref);
}
// ============================================================================
// Query Result Ordering Tests
// ============================================================================

#[tokio::test]
async fn test_find_by_scope_ordered_by_created() {
    let pool = setup_db().await;
    let fixture = ArtifactFixture::new("scope_ordering");

    // Create multiple artifacts with same scope
    let mut artifacts = Vec::new();
    for i in 0..3 {
        let mut input = fixture.create_input(&format!("order_{}", i));
        input.scope = OwnerType::Action;

        let artifact = ArtifactRepository::create(&pool, input)
            .await
            .expect("Failed to create artifact");
        artifacts.push(artifact);

        // Small delay to ensure different timestamps
        tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
    }

    let found = ArtifactRepository::find_by_scope(&pool, OwnerType::Action)
        .await
        .expect("Failed to find by scope");

    // Find our test artifacts in the results
    let test_artifacts: Vec<_> = found
        .iter()
        .filter(|a| artifacts.iter().any(|ta| ta.id == a.id))
        .collect();

    // Should be ordered by created DESC (newest first)
    for i in 0..test_artifacts.len().saturating_sub(1) {
        assert!(test_artifacts[i].created >= test_artifacts[i + 1].created);
    }
}
610 crates/common/tests/repository_runtime_tests.rs Normal file
@@ -0,0 +1,610 @@
//! Integration tests for Runtime repository
//!
//! Tests cover CRUD operations, specialized queries, constraints,
//! enum handling, timestamps, and edge cases.

use attune_common::repositories::runtime::{
    CreateRuntimeInput, RuntimeRepository, UpdateRuntimeInput,
};
use attune_common::repositories::{Create, Delete, FindById, FindByRef, List, Update};
use serde_json::json;
use sqlx::PgPool;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicU64, Ordering};

mod helpers;
use helpers::create_test_pool;

// Global counter for unique IDs across all tests
static GLOBAL_COUNTER: AtomicU64 = AtomicU64::new(0);

/// Test fixture for creating unique runtime data
struct RuntimeFixture {
    sequence: AtomicU64,
    test_id: String,
}

impl RuntimeFixture {
    fn new(test_name: &str) -> Self {
        let global_count = GLOBAL_COUNTER.fetch_add(1, Ordering::SeqCst);
        let timestamp = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_nanos();

        // Create unique test ID from test name, timestamp, and global counter
        let mut hasher = DefaultHasher::new();
        test_name.hash(&mut hasher);
        timestamp.hash(&mut hasher);
        global_count.hash(&mut hasher);
        let hash = hasher.finish();

        let test_id = format!("test_{}_{:x}", global_count, hash);

        Self {
            sequence: AtomicU64::new(0),
            test_id,
        }
    }

    fn unique_ref(&self, prefix: &str) -> String {
        let seq = self.sequence.fetch_add(1, Ordering::SeqCst);
        format!("{}_{}_ref_{}", prefix, self.test_id, seq)
    }

    fn create_input(&self, ref_suffix: &str) -> CreateRuntimeInput {
        let seq = self.sequence.fetch_add(1, Ordering::SeqCst);
        let name = format!("test_runtime_{}_{}", ref_suffix, seq);
        let r#ref = format!("{}.{}", self.test_id, name);

        CreateRuntimeInput {
            r#ref,
            pack: None,
            pack_ref: None,
            description: Some(format!("Test runtime {}", seq)),
            name,
            distributions: json!({
                "linux": { "supported": true, "versions": ["ubuntu20.04", "ubuntu22.04"] },
                "darwin": { "supported": true, "versions": ["12", "13"] }
            }),
            installation: Some(json!({
                "method": "pip",
                "packages": ["requests", "pyyaml"]
            })),
        }
    }

    fn create_minimal_input(&self, ref_suffix: &str) -> CreateRuntimeInput {
        let seq = self.sequence.fetch_add(1, Ordering::SeqCst);
        let name = format!("minimal_{}_{}", ref_suffix, seq);
        let r#ref = format!("{}.{}", self.test_id, name);

        CreateRuntimeInput {
            r#ref,
            pack: None,
            pack_ref: None,
            description: None,
            name,
            distributions: json!({}),
            installation: None,
        }
    }
}

async fn setup_db() -> PgPool {
    create_test_pool()
        .await
        .expect("Failed to create test pool")
}
// ============================================================================
// Basic CRUD Tests
// ============================================================================

#[tokio::test]
async fn test_create_runtime() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("create_runtime");
    let input = fixture.create_input("basic");

    let runtime = RuntimeRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create runtime");

    assert!(runtime.id > 0);
    assert_eq!(runtime.r#ref, input.r#ref);
    assert_eq!(runtime.pack, input.pack);
    assert_eq!(runtime.pack_ref, input.pack_ref);
    assert_eq!(runtime.description, input.description);
    assert_eq!(runtime.name, input.name);
    assert_eq!(runtime.distributions, input.distributions);
    assert_eq!(runtime.installation, input.installation);
    assert!(runtime.created > chrono::Utc::now() - chrono::Duration::seconds(5));
    assert!(runtime.updated > chrono::Utc::now() - chrono::Duration::seconds(5));
}

#[tokio::test]
async fn test_create_runtime_minimal() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("create_runtime_minimal");
    let input = fixture.create_minimal_input("minimal");

    let runtime = RuntimeRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create minimal runtime");

    assert!(runtime.id > 0);
    assert_eq!(runtime.r#ref, input.r#ref);
    assert_eq!(runtime.description, None);
    assert_eq!(runtime.pack, None);
    assert_eq!(runtime.pack_ref, None);
    assert_eq!(runtime.installation, None);
}

#[tokio::test]
async fn test_find_runtime_by_id() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("find_by_id");
    let input = fixture.create_input("findable");

    let created = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    let found = RuntimeRepository::find_by_id(&pool, created.id)
        .await
        .expect("Failed to find runtime")
        .expect("Runtime not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.r#ref, created.r#ref);
}

#[tokio::test]
async fn test_find_runtime_by_id_not_found() {
    let pool = setup_db().await;

    let result = RuntimeRepository::find_by_id(&pool, 999999999)
        .await
        .expect("Query should succeed");

    assert!(result.is_none());
}

#[tokio::test]
async fn test_find_runtime_by_ref() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("find_by_ref");
    let input = fixture.create_input("reftest");

    let created = RuntimeRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create runtime");

    let found = RuntimeRepository::find_by_ref(&pool, &input.r#ref)
        .await
        .expect("Failed to find runtime")
        .expect("Runtime not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.r#ref, created.r#ref);
}

#[tokio::test]
async fn test_find_runtime_by_ref_not_found() {
    let pool = setup_db().await;

    let result = RuntimeRepository::find_by_ref(&pool, "nonexistent.ref.999999")
        .await
        .expect("Query should succeed");

    assert!(result.is_none());
}

#[tokio::test]
async fn test_list_runtimes() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("list_runtimes");

    let input1 = fixture.create_input("list1");
    let input2 = fixture.create_input("list2");

    let created1 = RuntimeRepository::create(&pool, input1)
        .await
        .expect("Failed to create runtime 1");
    let created2 = RuntimeRepository::create(&pool, input2)
        .await
        .expect("Failed to create runtime 2");

    let list = RuntimeRepository::list(&pool)
        .await
        .expect("Failed to list runtimes");

    assert!(list.len() >= 2);
    assert!(list.iter().any(|r| r.id == created1.id));
    assert!(list.iter().any(|r| r.id == created2.id));
}
#[tokio::test]
async fn test_update_runtime() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("update_runtime");
    let input = fixture.create_input("update");

    let created = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    let update_input = UpdateRuntimeInput {
        description: Some("Updated description".to_string()),
        name: Some("updated_name".to_string()),
        distributions: Some(json!({
            "linux": { "supported": false }
        })),
        installation: Some(json!({
            "method": "npm"
        })),
    };

    let updated = RuntimeRepository::update(&pool, created.id, update_input.clone())
        .await
        .expect("Failed to update runtime");

    assert_eq!(updated.id, created.id);
    assert_eq!(updated.description, update_input.description);
    assert_eq!(updated.name, update_input.name.unwrap());
    assert_eq!(updated.distributions, update_input.distributions.unwrap());
    assert_eq!(updated.installation, update_input.installation);
    assert!(updated.updated > created.updated);
}

#[tokio::test]
async fn test_update_runtime_partial() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("update_partial");
    let input = fixture.create_input("partial");

    let created = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    let update_input = UpdateRuntimeInput {
        description: Some("Only description changed".to_string()),
        name: None,
        distributions: None,
        installation: None,
    };

    let updated = RuntimeRepository::update(&pool, created.id, update_input.clone())
        .await
        .expect("Failed to update runtime");

    assert_eq!(updated.description, update_input.description);
    assert_eq!(updated.name, created.name);
    assert_eq!(updated.distributions, created.distributions);
    assert_eq!(updated.installation, created.installation);
}

#[tokio::test]
async fn test_update_runtime_empty() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("update_empty");
    let input = fixture.create_input("empty");

    let created = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    let update_input = UpdateRuntimeInput::default();

    let result = RuntimeRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update runtime");

    // Should return existing entity unchanged
    assert_eq!(result.id, created.id);
    assert_eq!(result.description, created.description);
    assert_eq!(result.name, created.name);
}

#[tokio::test]
async fn test_delete_runtime() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("delete_runtime");
    let input = fixture.create_input("deletable");

    let created = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    let deleted = RuntimeRepository::delete(&pool, created.id)
        .await
        .expect("Failed to delete runtime");

    assert!(deleted);

    let found = RuntimeRepository::find_by_id(&pool, created.id)
        .await
        .expect("Query should succeed");

    assert!(found.is_none());
}

#[tokio::test]
async fn test_delete_runtime_not_found() {
    let pool = setup_db().await;

    let deleted = RuntimeRepository::delete(&pool, 999999999)
        .await
        .expect("Delete should succeed");

    assert!(!deleted);
}
// ============================================================================
// Specialized Query Tests
// ============================================================================

// #[tokio::test]
// async fn test_find_by_type_action() {
//     // RuntimeType and find_by_type no longer exist
// }

// #[tokio::test]
// async fn test_find_by_type_sensor() {
//     // RuntimeType and find_by_type no longer exist
// }

#[tokio::test]
async fn test_find_by_pack() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("find_by_pack");

    // Create a pack first
    use attune_common::repositories::pack::{CreatePackInput, PackRepository};

    let pack_input = CreatePackInput {
        r#ref: fixture.unique_ref("testpack"),
        label: "Test Pack".to_string(),
        description: Some("Pack for runtime testing".to_string()),
        version: "1.0.0".to_string(),
        conf_schema: json!({}),
        config: json!({}),
        meta: json!({
            "author": "Test Author",
            "email": "test@example.com"
        }),
        tags: vec!["test".to_string()],
        runtime_deps: vec![],
        is_standard: false,
    };

    let pack = PackRepository::create(&pool, pack_input)
        .await
        .expect("Failed to create pack");

    // Create runtimes with and without pack association
    let mut input1 = fixture.create_input("with_pack1");
    input1.pack = Some(pack.id);
    input1.pack_ref = Some(pack.r#ref.clone());

    let mut input2 = fixture.create_input("with_pack2");
    input2.pack = Some(pack.id);
    input2.pack_ref = Some(pack.r#ref.clone());

    let input3 = fixture.create_input("without_pack");

    let created1 = RuntimeRepository::create(&pool, input1)
        .await
        .expect("Failed to create runtime 1");
    let created2 = RuntimeRepository::create(&pool, input2)
        .await
        .expect("Failed to create runtime 2");
    let _created3 = RuntimeRepository::create(&pool, input3)
        .await
        .expect("Failed to create runtime 3");

    let pack_runtimes = RuntimeRepository::find_by_pack(&pool, pack.id)
        .await
        .expect("Failed to find by pack");

    assert_eq!(pack_runtimes.len(), 2);
    assert!(pack_runtimes.iter().any(|r| r.id == created1.id));
    assert!(pack_runtimes.iter().any(|r| r.id == created2.id));
    assert!(pack_runtimes.iter().all(|r| r.pack == Some(pack.id)));
}

#[tokio::test]
async fn test_find_by_pack_empty() {
    let pool = setup_db().await;

    let runtimes = RuntimeRepository::find_by_pack(&pool, 999999999)
        .await
        .expect("Failed to find by pack");

    assert_eq!(runtimes.len(), 0);
}
// ============================================================================
// Enum Tests
// ============================================================================

// Test removed - runtime_type field no longer exists
// #[tokio::test]
// async fn test_runtime_type_enum() {
//     // runtime_type field removed from Runtime model
// }

#[tokio::test]
async fn test_runtime_created_successfully() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("created_test");
    let input = fixture.create_input("created");

    let runtime = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    let found = RuntimeRepository::find_by_id(&pool, runtime.id)
        .await
        .expect("Failed to find runtime")
        .expect("Runtime not found");

    assert_eq!(found.id, runtime.id);
}
// ============================================================================
// Edge Cases and Constraints
// ============================================================================

#[tokio::test]
async fn test_duplicate_ref_fails() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("duplicate_ref");
    let input = fixture.create_input("duplicate");

    RuntimeRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create first runtime");

    let result = RuntimeRepository::create(&pool, input).await;

    assert!(result.is_err());
}

#[tokio::test]
async fn test_json_fields() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("json_fields");
    let input = fixture.create_input("json_test");

    let runtime = RuntimeRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create runtime");

    assert_eq!(runtime.distributions, input.distributions);
    assert_eq!(runtime.installation, input.installation);

    // Verify JSON structure
    assert_eq!(runtime.distributions["linux"]["supported"], json!(true));
    assert!(runtime.installation.is_some());
}

#[tokio::test]
async fn test_empty_json_distributions() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("empty_json");
    let mut input = fixture.create_input("empty");
    input.distributions = json!({});
    input.installation = None;

    let runtime = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    assert_eq!(runtime.distributions, json!({}));
    assert_eq!(runtime.installation, None);
}
#[tokio::test]
async fn test_list_ordering() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("list_ordering");

    let mut input1 = fixture.create_input("z_last");
    input1.r#ref = format!("{}.action.zzz", fixture.test_id);

    let mut input2 = fixture.create_input("a_first");
    input2.r#ref = format!("{}.sensor.aaa", fixture.test_id);

    let mut input3 = fixture.create_input("m_middle");
    input3.r#ref = format!("{}.action.mmm", fixture.test_id);

    RuntimeRepository::create(&pool, input1)
        .await
        .expect("Failed to create runtime 1");
    RuntimeRepository::create(&pool, input2)
        .await
        .expect("Failed to create runtime 2");
    RuntimeRepository::create(&pool, input3)
        .await
        .expect("Failed to create runtime 3");

    let list = RuntimeRepository::list(&pool)
        .await
        .expect("Failed to list runtimes");

    // Find our test runtimes in the list
    let test_runtimes: Vec<_> = list
        .iter()
        .filter(|r| r.r#ref.contains(&fixture.test_id))
        .collect();

    assert_eq!(test_runtimes.len(), 3);

    // Verify they are sorted by ref
    for i in 0..test_runtimes.len() - 1 {
        assert!(test_runtimes[i].r#ref <= test_runtimes[i + 1].r#ref);
    }
}

#[tokio::test]
async fn test_timestamps() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("timestamps");
    let input = fixture.create_input("timestamped");

    let before = chrono::Utc::now();
    let runtime = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");
    let after = chrono::Utc::now();

    assert!(runtime.created >= before);
    assert!(runtime.created <= after);
    assert!(runtime.updated >= before);
    assert!(runtime.updated <= after);
    assert_eq!(runtime.created, runtime.updated);
}

#[tokio::test]
async fn test_update_changes_timestamp() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("timestamp_update");
    let input = fixture.create_input("ts");

    let runtime = RuntimeRepository::create(&pool, input)
        .await
        .expect("Failed to create runtime");

    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

    let update_input = UpdateRuntimeInput {
        description: Some("Updated".to_string()),
        ..Default::default()
    };

    let updated = RuntimeRepository::update(&pool, runtime.id, update_input)
        .await
        .expect("Failed to update runtime");

    assert_eq!(updated.created, runtime.created);
    assert!(updated.updated > runtime.updated);
}

#[tokio::test]
async fn test_pack_ref_without_pack_id() {
    let pool = setup_db().await;
    let fixture = RuntimeFixture::new("pack_ref_only");
    let mut input = fixture.create_input("packref");
    input.pack = None;
    input.pack_ref = Some("some.pack.ref".to_string());

    let runtime = RuntimeRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create runtime");

    assert_eq!(runtime.pack, None);
    assert_eq!(runtime.pack_ref, input.pack_ref);
}
946 crates/common/tests/repository_worker_tests.rs Normal file
@@ -0,0 +1,946 @@
//! Integration tests for Worker repository
|
||||
//!
|
||||
//! Tests cover CRUD operations, specialized queries, constraints,
|
||||
//! enum handling, timestamps, heartbeat updates, and edge cases.
|
||||
|
||||
use attune_common::models::enums::{WorkerStatus, WorkerType};
|
||||
use attune_common::repositories::runtime::{
|
||||
CreateRuntimeInput, CreateWorkerInput, RuntimeRepository, UpdateWorkerInput, WorkerRepository,
|
||||
};
|
||||
use attune_common::repositories::{Create, Delete, FindById, List, Update};
|
||||
|
||||
use serde_json::json;
|
||||
use sqlx::PgPool;
|
||||
use std::collections::hash_map::DefaultHasher;
|
||||
use std::hash::{Hash, Hasher};
|
||||
use std::sync::atomic::{AtomicU64, Ordering};
|
||||
|
||||
mod helpers;
|
||||
use helpers::create_test_pool;
|
||||
|
||||
// Global counter for unique IDs across all tests
|
||||
static GLOBAL_COUNTER: AtomicU64 = AtomicU64::new(0);
|
||||
|
||||
/// Test fixture for creating unique worker data
|
||||
struct WorkerFixture {
|
||||
sequence: AtomicU64,
|
||||
test_id: String,
|
||||
}
|
||||
|
||||
impl WorkerFixture {
|
||||
fn new(test_name: &str) -> Self {
|
||||
let global_count = GLOBAL_COUNTER.fetch_add(1, Ordering::SeqCst);
|
||||
let timestamp = std::time::SystemTime::now()
|
||||
.duration_since(std::time::UNIX_EPOCH)
|
||||
.unwrap()
|
||||
.as_nanos();
|
||||
|
||||
// Create unique test ID from test name, timestamp, and global counter
|
||||
let mut hasher = DefaultHasher::new();
|
||||
test_name.hash(&mut hasher);
|
||||
timestamp.hash(&mut hasher);
|
||||
global_count.hash(&mut hasher);
|
||||
let hash = hasher.finish();
|
||||
|
||||
let test_id = format!("test_{}_{:x}", global_count, hash);
|
||||
|
||||
Self {
|
||||
sequence: AtomicU64::new(0),
|
||||
test_id,
|
||||
}
|
||||
}
|
||||
|
||||
fn unique_name(&self, prefix: &str) -> String {
|
||||
let seq = self.sequence.fetch_add(1, Ordering::SeqCst);
|
||||
format!("{}_{}_worker_{}", prefix, self.test_id, seq)
|
||||
}
|
||||
|
||||
fn create_input(&self, name_suffix: &str, worker_type: WorkerType) -> CreateWorkerInput {
|
||||
CreateWorkerInput {
|
||||
name: self.unique_name(name_suffix),
|
||||
worker_type,
|
||||
runtime: None,
|
||||
host: Some("localhost".to_string()),
|
||||
port: Some(8080),
|
||||
status: Some(WorkerStatus::Active),
|
||||
capabilities: Some(json!({
|
||||
"cpu": "x86_64",
|
||||
"memory": "8GB",
|
||||
"python": ["3.9", "3.10", "3.11"],
|
||||
"node": ["16", "18", "20"]
|
||||
})),
|
||||
meta: Some(json!({
|
||||
"region": "us-west-2",
|
||||
"environment": "test"
|
||||
})),
|
||||
}
|
||||
}
|
||||
|
||||
fn create_minimal_input(&self, name_suffix: &str) -> CreateWorkerInput {
|
||||
CreateWorkerInput {
|
||||
name: self.unique_name(name_suffix),
|
||||
worker_type: WorkerType::Local,
|
||||
runtime: None,
|
||||
host: None,
|
||||
port: None,
|
||||
status: None,
|
||||
capabilities: None,
|
||||
meta: None,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async fn setup_db() -> PgPool {
    create_test_pool()
        .await
        .expect("Failed to create test pool")
}

// ============================================================================
// Basic CRUD Tests
// ============================================================================

#[tokio::test]
async fn test_create_worker() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("create_worker");
    let input = fixture.create_input("basic", WorkerType::Local);

    let worker = WorkerRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create worker");

    assert!(worker.id > 0);
    assert_eq!(worker.name, input.name);
    assert_eq!(worker.worker_type, input.worker_type);
    assert_eq!(worker.runtime, input.runtime);
    assert_eq!(worker.host, input.host);
    assert_eq!(worker.port, input.port);
    assert_eq!(worker.status, input.status);
    assert_eq!(worker.capabilities, input.capabilities);
    assert_eq!(worker.meta, input.meta);
    assert_eq!(worker.last_heartbeat, None);
    assert!(worker.created > chrono::Utc::now() - chrono::Duration::seconds(5));
    assert!(worker.updated > chrono::Utc::now() - chrono::Duration::seconds(5));
}

#[tokio::test]
async fn test_create_worker_minimal() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("create_worker_minimal");
    let input = fixture.create_minimal_input("minimal");

    let worker = WorkerRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create minimal worker");

    assert!(worker.id > 0);
    assert_eq!(worker.name, input.name);
    assert_eq!(worker.worker_type, WorkerType::Local);
    assert_eq!(worker.host, None);
    assert_eq!(worker.port, None);
    assert_eq!(worker.status, None);
    assert_eq!(worker.capabilities, None);
    assert_eq!(worker.meta, None);
}

#[tokio::test]
async fn test_find_worker_by_id() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("find_by_id");
    let input = fixture.create_input("findable", WorkerType::Remote);

    let created = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    let found = WorkerRepository::find_by_id(&pool, created.id)
        .await
        .expect("Failed to find worker")
        .expect("Worker not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.name, created.name);
    assert_eq!(found.worker_type, created.worker_type);
}

#[tokio::test]
async fn test_find_worker_by_id_not_found() {
    let pool = setup_db().await;

    let result = WorkerRepository::find_by_id(&pool, 999999999)
        .await
        .expect("Query should succeed");

    assert!(result.is_none());
}

#[tokio::test]
async fn test_find_worker_by_name() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("find_by_name");
    let input = fixture.create_input("nametest", WorkerType::Container);

    let created = WorkerRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create worker");

    let found = WorkerRepository::find_by_name(&pool, &input.name)
        .await
        .expect("Failed to find worker")
        .expect("Worker not found");

    assert_eq!(found.id, created.id);
    assert_eq!(found.name, created.name);
}

#[tokio::test]
async fn test_find_worker_by_name_not_found() {
    let pool = setup_db().await;

    let result = WorkerRepository::find_by_name(&pool, "nonexistent_worker_999999")
        .await
        .expect("Query should succeed");

    assert!(result.is_none());
}

#[tokio::test]
async fn test_list_workers() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("list_workers");

    let input1 = fixture.create_input("list1", WorkerType::Local);
    let input2 = fixture.create_input("list2", WorkerType::Remote);

    let created1 = WorkerRepository::create(&pool, input1)
        .await
        .expect("Failed to create worker 1");
    let created2 = WorkerRepository::create(&pool, input2)
        .await
        .expect("Failed to create worker 2");

    let list = WorkerRepository::list(&pool)
        .await
        .expect("Failed to list workers");

    assert!(list.len() >= 2);
    assert!(list.iter().any(|w| w.id == created1.id));
    assert!(list.iter().any(|w| w.id == created2.id));
}

#[tokio::test]
async fn test_update_worker() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("update_worker");
    let input = fixture.create_input("update", WorkerType::Local);

    let created = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    let update_input = UpdateWorkerInput {
        name: Some("updated_worker_name".to_string()),
        status: Some(WorkerStatus::Busy),
        capabilities: Some(json!({
            "updated": true
        })),
        meta: Some(json!({
            "version": "2.0"
        })),
        host: Some("updated-host".to_string()),
        port: Some(9090),
    };

    let updated = WorkerRepository::update(&pool, created.id, update_input.clone())
        .await
        .expect("Failed to update worker");

    assert_eq!(updated.id, created.id);
    assert_eq!(updated.name, update_input.name.unwrap());
    assert_eq!(updated.status, update_input.status);
    assert_eq!(updated.capabilities, update_input.capabilities);
    assert_eq!(updated.meta, update_input.meta);
    assert_eq!(updated.host, update_input.host);
    assert_eq!(updated.port, update_input.port);
    assert!(updated.updated > created.updated);
}

#[tokio::test]
async fn test_update_worker_partial() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("update_partial");
    let input = fixture.create_input("partial", WorkerType::Remote);

    let created = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    let update_input = UpdateWorkerInput {
        status: Some(WorkerStatus::Inactive),
        name: None,
        capabilities: None,
        meta: None,
        host: None,
        port: None,
    };

    let updated = WorkerRepository::update(&pool, created.id, update_input.clone())
        .await
        .expect("Failed to update worker");

    assert_eq!(updated.status, update_input.status);
    assert_eq!(updated.name, created.name);
    assert_eq!(updated.capabilities, created.capabilities);
    assert_eq!(updated.meta, created.meta);
    assert_eq!(updated.host, created.host);
    assert_eq!(updated.port, created.port);
}

#[tokio::test]
async fn test_update_worker_empty() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("update_empty");
    let input = fixture.create_input("empty", WorkerType::Container);

    let created = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    let update_input = UpdateWorkerInput::default();

    let result = WorkerRepository::update(&pool, created.id, update_input)
        .await
        .expect("Failed to update worker");

    // Should return existing entity unchanged
    assert_eq!(result.id, created.id);
    assert_eq!(result.name, created.name);
    assert_eq!(result.status, created.status);
}

#[tokio::test]
async fn test_delete_worker() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("delete_worker");
    let input = fixture.create_input("delete", WorkerType::Local);

    let created = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    let deleted = WorkerRepository::delete(&pool, created.id)
        .await
        .expect("Failed to delete worker");

    assert!(deleted);

    let found = WorkerRepository::find_by_id(&pool, created.id)
        .await
        .expect("Query should succeed");

    assert!(found.is_none());
}

#[tokio::test]
async fn test_delete_worker_not_found() {
    let pool = setup_db().await;

    let deleted = WorkerRepository::delete(&pool, 999999999)
        .await
        .expect("Delete should succeed");

    assert!(!deleted);
}

// ============================================================================
// Specialized Query Tests
// ============================================================================

#[tokio::test]
async fn test_find_by_status_active() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("find_by_status_active");

    let mut input1 = fixture.create_input("active1", WorkerType::Local);
    input1.status = Some(WorkerStatus::Active);

    let mut input2 = fixture.create_input("active2", WorkerType::Remote);
    input2.status = Some(WorkerStatus::Active);

    let mut input3 = fixture.create_input("busy", WorkerType::Container);
    input3.status = Some(WorkerStatus::Busy);

    let created1 = WorkerRepository::create(&pool, input1)
        .await
        .expect("Failed to create active worker 1");
    let created2 = WorkerRepository::create(&pool, input2)
        .await
        .expect("Failed to create active worker 2");
    let _created3 = WorkerRepository::create(&pool, input3)
        .await
        .expect("Failed to create busy worker");

    let active_workers = WorkerRepository::find_by_status(&pool, WorkerStatus::Active)
        .await
        .expect("Failed to find by status");

    assert!(active_workers.iter().any(|w| w.id == created1.id));
    assert!(active_workers.iter().any(|w| w.id == created2.id));
    assert!(active_workers
        .iter()
        .all(|w| w.status == Some(WorkerStatus::Active)));
}

#[tokio::test]
async fn test_find_by_status_all_statuses() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("find_by_status_all");

    let statuses = vec![
        WorkerStatus::Active,
        WorkerStatus::Inactive,
        WorkerStatus::Busy,
        WorkerStatus::Error,
    ];

    for status in &statuses {
        let mut input = fixture.create_input(&format!("{:?}", status), WorkerType::Local);
        input.status = Some(*status);

        let created = WorkerRepository::create(&pool, input)
            .await
            .expect("Failed to create worker");

        let found = WorkerRepository::find_by_status(&pool, *status)
            .await
            .expect("Failed to find by status");

        assert!(found.iter().any(|w| w.id == created.id));
    }
}

#[tokio::test]
async fn test_find_by_type_local() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("find_by_type_local");

    let input1 = fixture.create_input("local1", WorkerType::Local);
    let input2 = fixture.create_input("local2", WorkerType::Local);
    let input3 = fixture.create_input("remote", WorkerType::Remote);

    let created1 = WorkerRepository::create(&pool, input1)
        .await
        .expect("Failed to create local worker 1");
    let created2 = WorkerRepository::create(&pool, input2)
        .await
        .expect("Failed to create local worker 2");
    let _created3 = WorkerRepository::create(&pool, input3)
        .await
        .expect("Failed to create remote worker");

    let local_workers = WorkerRepository::find_by_type(&pool, WorkerType::Local)
        .await
        .expect("Failed to find by type");

    assert!(local_workers.iter().any(|w| w.id == created1.id));
    assert!(local_workers.iter().any(|w| w.id == created2.id));
    assert!(local_workers
        .iter()
        .all(|w| w.worker_type == WorkerType::Local));
}

#[tokio::test]
async fn test_find_by_type_all_types() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("find_by_type_all");

    let types = vec![WorkerType::Local, WorkerType::Remote, WorkerType::Container];

    for worker_type in &types {
        let input = fixture.create_input(&format!("{:?}", worker_type), *worker_type);

        let created = WorkerRepository::create(&pool, input)
            .await
            .expect("Failed to create worker");

        let found = WorkerRepository::find_by_type(&pool, *worker_type)
            .await
            .expect("Failed to find by type");

        assert!(found.iter().any(|w| w.id == created.id));
        assert!(found.iter().all(|w| w.worker_type == *worker_type));
    }
}

#[tokio::test]
async fn test_update_heartbeat() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("update_heartbeat");
    let input = fixture.create_input("heartbeat", WorkerType::Local);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.last_heartbeat, None);

    let before = chrono::Utc::now();
    WorkerRepository::update_heartbeat(&pool, worker.id)
        .await
        .expect("Failed to update heartbeat");
    let after = chrono::Utc::now();

    let updated = WorkerRepository::find_by_id(&pool, worker.id)
        .await
        .expect("Failed to find worker")
        .expect("Worker not found");

    assert!(updated.last_heartbeat.is_some());
    let heartbeat = updated.last_heartbeat.unwrap();
    assert!(heartbeat >= before);
    assert!(heartbeat <= after);
}

#[tokio::test]
async fn test_update_heartbeat_multiple_times() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("heartbeat_multiple");
    let input = fixture.create_input("multi", WorkerType::Remote);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    WorkerRepository::update_heartbeat(&pool, worker.id)
        .await
        .expect("Failed to update heartbeat 1");

    let first = WorkerRepository::find_by_id(&pool, worker.id)
        .await
        .expect("Failed to find worker")
        .expect("Worker not found")
        .last_heartbeat
        .unwrap();

    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

    WorkerRepository::update_heartbeat(&pool, worker.id)
        .await
        .expect("Failed to update heartbeat 2");

    let second = WorkerRepository::find_by_id(&pool, worker.id)
        .await
        .expect("Failed to find worker")
        .expect("Worker not found")
        .last_heartbeat
        .unwrap();

    assert!(second > first);
}

// ============================================================================
// Runtime Association Tests
// ============================================================================

#[tokio::test]
async fn test_worker_with_runtime() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("with_runtime");

    // Create a runtime first
    let runtime_input = CreateRuntimeInput {
        r#ref: format!("{}.action.test_runtime", fixture.test_id),
        pack: None,
        pack_ref: None,
        description: Some("Test runtime".to_string()),
        name: "test_runtime".to_string(),
        distributions: json!({}),
        installation: None,
    };

    let runtime = RuntimeRepository::create(&pool, runtime_input)
        .await
        .expect("Failed to create runtime");

    // Create worker with runtime association
    let mut input = fixture.create_input("with_rt", WorkerType::Local);
    input.runtime = Some(runtime.id);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.runtime, Some(runtime.id));

    let found = WorkerRepository::find_by_id(&pool, worker.id)
        .await
        .expect("Failed to find worker")
        .expect("Worker not found");

    assert_eq!(found.runtime, Some(runtime.id));
}

// ============================================================================
// Enum Tests
// ============================================================================

#[tokio::test]
async fn test_worker_type_local() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("type_local");
    let input = fixture.create_input("local", WorkerType::Local);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.worker_type, WorkerType::Local);
}

#[tokio::test]
async fn test_worker_type_remote() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("type_remote");
    let input = fixture.create_input("remote", WorkerType::Remote);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.worker_type, WorkerType::Remote);
}

#[tokio::test]
async fn test_worker_type_container() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("type_container");
    let input = fixture.create_input("container", WorkerType::Container);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.worker_type, WorkerType::Container);
}

#[tokio::test]
async fn test_worker_status_active() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("status_active");
    let mut input = fixture.create_input("active", WorkerType::Local);
    input.status = Some(WorkerStatus::Active);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.status, Some(WorkerStatus::Active));
}

#[tokio::test]
async fn test_worker_status_inactive() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("status_inactive");
    let mut input = fixture.create_input("inactive", WorkerType::Local);
    input.status = Some(WorkerStatus::Inactive);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.status, Some(WorkerStatus::Inactive));
}

#[tokio::test]
async fn test_worker_status_busy() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("status_busy");
    let mut input = fixture.create_input("busy", WorkerType::Local);
    input.status = Some(WorkerStatus::Busy);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.status, Some(WorkerStatus::Busy));
}

#[tokio::test]
async fn test_worker_status_error() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("status_error");
    let mut input = fixture.create_input("error", WorkerType::Local);
    input.status = Some(WorkerStatus::Error);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.status, Some(WorkerStatus::Error));
}

// ============================================================================
// Edge Cases and Constraints
// ============================================================================

#[tokio::test]
async fn test_duplicate_name_allowed() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("duplicate_name");

    // Use a fixed name for both workers
    let name = format!("{}_duplicate", fixture.test_id);

    let mut input1 = fixture.create_input("dup1", WorkerType::Local);
    input1.name = name.clone();

    let mut input2 = fixture.create_input("dup2", WorkerType::Remote);
    input2.name = name.clone();

    let worker1 = WorkerRepository::create(&pool, input1)
        .await
        .expect("Failed to create first worker");

    let worker2 = WorkerRepository::create(&pool, input2)
        .await
        .expect("Failed to create second worker with same name");

    assert_eq!(worker1.name, worker2.name);
    assert_ne!(worker1.id, worker2.id);
}

#[tokio::test]
async fn test_json_fields() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("json_fields");
    let input = fixture.create_input("json", WorkerType::Container);

    let worker = WorkerRepository::create(&pool, input.clone())
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.capabilities, input.capabilities);
    assert_eq!(worker.meta, input.meta);

    // Verify JSON structure
    let caps = worker.capabilities.unwrap();
    assert_eq!(caps["cpu"], json!("x86_64"));
    assert_eq!(caps["memory"], json!("8GB"));
}

#[tokio::test]
async fn test_null_json_fields() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("null_json");
    let input = fixture.create_minimal_input("nulljson");

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.capabilities, None);
    assert_eq!(worker.meta, None);
}

#[tokio::test]
async fn test_null_status() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("null_status");
    let mut input = fixture.create_input("nostatus", WorkerType::Local);
    input.status = None;

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.status, None);
}

#[tokio::test]
async fn test_list_ordering() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("list_ordering");

    let mut input1 = fixture.create_input("z", WorkerType::Local);
    input1.name = format!("{}_zzz_worker", fixture.test_id);

    let mut input2 = fixture.create_input("a", WorkerType::Remote);
    input2.name = format!("{}_aaa_worker", fixture.test_id);

    let mut input3 = fixture.create_input("m", WorkerType::Container);
    input3.name = format!("{}_mmm_worker", fixture.test_id);

    WorkerRepository::create(&pool, input1)
        .await
        .expect("Failed to create worker 1");
    WorkerRepository::create(&pool, input2)
        .await
        .expect("Failed to create worker 2");
    WorkerRepository::create(&pool, input3)
        .await
        .expect("Failed to create worker 3");

    let list = WorkerRepository::list(&pool)
        .await
        .expect("Failed to list workers");

    // Find our test workers in the list
    let test_workers: Vec<_> = list
        .iter()
        .filter(|w| w.name.contains(&fixture.test_id))
        .collect();

    assert_eq!(test_workers.len(), 3);

    // Verify they are sorted by name
    for i in 0..test_workers.len() - 1 {
        assert!(test_workers[i].name <= test_workers[i + 1].name);
    }
}

#[tokio::test]
async fn test_timestamps() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("timestamps");
    let input = fixture.create_input("time", WorkerType::Local);

    let before = chrono::Utc::now();
    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");
    let after = chrono::Utc::now();

    assert!(worker.created >= before);
    assert!(worker.created <= after);
    assert!(worker.updated >= before);
    assert!(worker.updated <= after);
    assert_eq!(worker.created, worker.updated);
}

#[tokio::test]
async fn test_update_changes_timestamp() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("timestamp_update");
    let input = fixture.create_input("ts", WorkerType::Remote);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

    let update_input = UpdateWorkerInput {
        status: Some(WorkerStatus::Busy),
        ..Default::default()
    };

    let updated = WorkerRepository::update(&pool, worker.id, update_input)
        .await
        .expect("Failed to update worker");

    assert_eq!(updated.created, worker.created);
    assert!(updated.updated > worker.updated);
}

#[tokio::test]
async fn test_heartbeat_updates_timestamp() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("heartbeat_updates");
    let input = fixture.create_input("hb", WorkerType::Container);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    let original_updated = worker.updated;

    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

    WorkerRepository::update_heartbeat(&pool, worker.id)
        .await
        .expect("Failed to update heartbeat");

    let after_heartbeat = WorkerRepository::find_by_id(&pool, worker.id)
        .await
        .expect("Failed to find worker")
        .expect("Worker not found");

    // Heartbeat should update both last_heartbeat and the updated timestamp (due to trigger)
    assert!(after_heartbeat.last_heartbeat.is_some());
    assert!(after_heartbeat.updated > original_updated);
}

#[tokio::test]
async fn test_port_range() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("port_range");

    // Test various port numbers
    let ports = vec![1, 80, 443, 8080, 65535];

    for port in ports {
        let mut input = fixture.create_input(&format!("port{}", port), WorkerType::Local);
        input.port = Some(port);

        let worker = WorkerRepository::create(&pool, input)
            .await
            .unwrap_or_else(|e| panic!("Failed to create worker with port {}: {:?}", port, e));

        assert_eq!(worker.port, Some(port));
    }
}

#[tokio::test]
async fn test_update_status_lifecycle() {
    let pool = setup_db().await;
    let fixture = WorkerFixture::new("status_lifecycle");
    let mut input = fixture.create_input("lifecycle", WorkerType::Local);
    input.status = Some(WorkerStatus::Inactive);

    let worker = WorkerRepository::create(&pool, input)
        .await
        .expect("Failed to create worker");

    assert_eq!(worker.status, Some(WorkerStatus::Inactive));

    // Transition to Active
    let update1 = UpdateWorkerInput {
        status: Some(WorkerStatus::Active),
        ..Default::default()
    };
    let worker = WorkerRepository::update(&pool, worker.id, update1)
        .await
        .expect("Failed to update to Active");
    assert_eq!(worker.status, Some(WorkerStatus::Active));

    // Transition to Busy
    let update2 = UpdateWorkerInput {
        status: Some(WorkerStatus::Busy),
        ..Default::default()
    };
    let worker = WorkerRepository::update(&pool, worker.id, update2)
        .await
        .expect("Failed to update to Busy");
    assert_eq!(worker.status, Some(WorkerStatus::Busy));

    // Transition to Error
    let update3 = UpdateWorkerInput {
        status: Some(WorkerStatus::Error),
        ..Default::default()
    };
    let worker = WorkerRepository::update(&pool, worker.id, update3)
        .await
        .expect("Failed to update to Error");
    assert_eq!(worker.status, Some(WorkerStatus::Error));

    // Back to Inactive
    let update4 = UpdateWorkerInput {
        status: Some(WorkerStatus::Inactive),
        ..Default::default()
    };
    let worker = WorkerRepository::update(&pool, worker.id, update4)
        .await
        .expect("Failed to update back to Inactive");
    assert_eq!(worker.status, Some(WorkerStatus::Inactive));
}
crates/common/tests/rule_repository_tests.rs (1375 lines, new file): diff suppressed because it is too large
crates/common/tests/sensor_repository_tests.rs (1850 lines, new file): diff suppressed because it is too large
crates/common/tests/trigger_repository_tests.rs (788 lines, new file):

//! Integration tests for the Trigger repository
//!
//! These tests verify CRUD operations, queries, and constraints
//! for the Trigger repository.

mod helpers;

use attune_common::{
    repositories::{
        trigger::{CreateTriggerInput, TriggerRepository, UpdateTriggerInput},
        Create, Delete, FindById, FindByRef, List, Update,
    },
    Error,
};
use helpers::*;
use serde_json::json;

#[tokio::test]
async fn test_create_trigger() {
    let pool = create_test_pool().await.unwrap();

    let pack = PackFixture::new_unique("test_pack")
        .create(&pool)
        .await
        .unwrap();

    let input = CreateTriggerInput {
        r#ref: format!("{}.webhook", pack.r#ref),
        pack: Some(pack.id),
        pack_ref: Some(pack.r#ref.clone()),
        label: "Webhook Trigger".to_string(),
        description: Some("Test webhook trigger".to_string()),
        enabled: true,
        param_schema: None,
        out_schema: None,
        is_adhoc: false,
    };

    let trigger = TriggerRepository::create(&pool, input).await.unwrap();

    assert!(trigger.r#ref.contains(".webhook"));
    assert_eq!(trigger.pack, Some(pack.id));
    assert_eq!(trigger.pack_ref, Some(pack.r#ref));
    assert_eq!(trigger.label, "Webhook Trigger");
    assert!(trigger.enabled);
    assert!(trigger.created.timestamp() > 0);
    assert!(trigger.updated.timestamp() > 0);
}

#[tokio::test]
async fn test_create_trigger_without_pack() {
    let pool = create_test_pool().await.unwrap();

    let trigger_ref = format!("core.{}", unique_pack_ref("standalone_trigger"));
    let input = CreateTriggerInput {
        r#ref: trigger_ref.clone(),
        pack: None,
        pack_ref: None,
        label: "Standalone Trigger".to_string(),
        description: None,
        enabled: true,
        param_schema: None,
        out_schema: None,
        is_adhoc: false,
    };

    let trigger = TriggerRepository::create(&pool, input).await.unwrap();

    assert_eq!(trigger.r#ref, trigger_ref);
    assert_eq!(trigger.pack, None);
    assert_eq!(trigger.pack_ref, None);
}

#[tokio::test]
|
||||
async fn test_create_trigger_with_schemas() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("schema_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let param_schema = json!({
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"url": {"type": "string"},
|
||||
"method": {"type": "string", "enum": ["GET", "POST"]}
|
||||
},
|
||||
"required": ["url"]
|
||||
});
|
||||
|
||||
let out_schema = json!({
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"status": {"type": "integer"},
|
||||
"body": {"type": "string"}
|
||||
}
|
||||
});
|
||||
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: format!("{}.http_trigger", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "HTTP Trigger".to_string(),
|
||||
description: Some("HTTP request trigger".to_string()),
|
||||
enabled: true,
|
||||
param_schema: Some(param_schema.clone()),
|
||||
out_schema: Some(out_schema.clone()),
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
assert_eq!(trigger.param_schema, Some(param_schema));
|
||||
assert_eq!(trigger.out_schema, Some(out_schema));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_create_trigger_disabled() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let trigger_ref = format!("core.{}", unique_pack_ref("disabled_trigger"));
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "Disabled Trigger".to_string(),
|
||||
description: None,
|
||||
enabled: false,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
assert_eq!(trigger.enabled, false);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_create_trigger_duplicate_ref() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let trigger_ref = format!("core.{}", unique_pack_ref("duplicate"));
|
||||
|
||||
// Create first trigger
|
||||
let input1 = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "First".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
TriggerRepository::create(&pool, input1).await.unwrap();
|
||||
|
||||
// Try to create second trigger with same ref
|
||||
let input2 = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "Second".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let result = TriggerRepository::create(&pool, input2).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
match result.unwrap_err() {
|
||||
Error::AlreadyExists { entity, field, .. } => {
|
||||
assert_eq!(entity, "Trigger");
|
||||
assert_eq!(field, "ref");
|
||||
}
|
||||
_ => panic!("Expected AlreadyExists error"),
|
||||
}
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_trigger_by_id() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("find_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: format!("{}.find_trigger", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Find Trigger".to_string(),
|
||||
description: Some("Test find".to_string()),
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let created = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
let found = TriggerRepository::find_by_id(&pool, created.id)
|
||||
.await
|
||||
.unwrap()
|
||||
.expect("Trigger not found");
|
||||
|
||||
assert_eq!(found.id, created.id);
|
||||
assert_eq!(found.r#ref, created.r#ref);
|
||||
assert_eq!(found.label, created.label);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_trigger_by_id_not_found() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let found = TriggerRepository::find_by_id(&pool, 999999).await.unwrap();
|
||||
|
||||
assert!(found.is_none());
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_trigger_by_ref() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("ref_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let trigger_ref = format!("{}.ref_trigger", pack.r#ref);
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Ref Trigger".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let created = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
let found = TriggerRepository::find_by_ref(&pool, &trigger_ref)
|
||||
.await
|
||||
.unwrap()
|
||||
.expect("Trigger not found");
|
||||
|
||||
assert_eq!(found.id, created.id);
|
||||
assert_eq!(found.r#ref, trigger_ref);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_trigger_by_ref_not_found() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let found = TriggerRepository::find_by_ref(&pool, "nonexistent.trigger")
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert!(found.is_none());
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_list_triggers() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("list_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Create multiple triggers
|
||||
let input1 = CreateTriggerInput {
|
||||
r#ref: format!("{}.trigger1", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Trigger 1".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let trigger1 = TriggerRepository::create(&pool, input1).await.unwrap();
|
||||
|
||||
let input2 = CreateTriggerInput {
|
||||
r#ref: format!("{}.trigger2", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Trigger 2".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let trigger2 = TriggerRepository::create(&pool, input2).await.unwrap();
|
||||
|
||||
let triggers = TriggerRepository::list(&pool).await.unwrap();
|
||||
|
||||
// Should contain at least our created triggers
|
||||
assert!(triggers.len() >= 2);
|
||||
|
||||
let trigger_ids: Vec<i64> = triggers.iter().map(|t| t.id).collect();
|
||||
assert!(trigger_ids.contains(&trigger1.id));
|
||||
assert!(trigger_ids.contains(&trigger2.id));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_triggers_by_pack() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack1 = PackFixture::new_unique("pack1")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
let pack2 = PackFixture::new_unique("pack2")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Create triggers for pack1
|
||||
let input1a = CreateTriggerInput {
|
||||
r#ref: format!("{}.trigger_a", pack1.r#ref),
|
||||
pack: Some(pack1.id),
|
||||
pack_ref: Some(pack1.r#ref.clone()),
|
||||
label: "Pack 1 Trigger A".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let trigger1a = TriggerRepository::create(&pool, input1a).await.unwrap();
|
||||
|
||||
let input1b = CreateTriggerInput {
|
||||
r#ref: format!("{}.trigger_b", pack1.r#ref),
|
||||
pack: Some(pack1.id),
|
||||
pack_ref: Some(pack1.r#ref.clone()),
|
||||
label: "Pack 1 Trigger B".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let trigger1b = TriggerRepository::create(&pool, input1b).await.unwrap();
|
||||
|
||||
// Create trigger for pack2
|
||||
let input2 = CreateTriggerInput {
|
||||
r#ref: format!("{}.trigger", pack2.r#ref),
|
||||
pack: Some(pack2.id),
|
||||
pack_ref: Some(pack2.r#ref.clone()),
|
||||
label: "Pack 2 Trigger".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
TriggerRepository::create(&pool, input2).await.unwrap();
|
||||
|
||||
// Find triggers for pack1
|
||||
let pack1_triggers = TriggerRepository::find_by_pack(&pool, pack1.id)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Should have exactly 2 triggers for pack1
|
||||
assert_eq!(pack1_triggers.len(), 2);
|
||||
|
||||
let trigger_ids: Vec<i64> = pack1_triggers.iter().map(|t| t.id).collect();
|
||||
assert!(trigger_ids.contains(&trigger1a.id));
|
||||
assert!(trigger_ids.contains(&trigger1b.id));
|
||||
|
||||
// All triggers should belong to pack1
|
||||
assert!(pack1_triggers.iter().all(|t| t.pack == Some(pack1.id)));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_find_enabled_triggers() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("enabled_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Create enabled trigger
|
||||
let input_enabled = CreateTriggerInput {
|
||||
r#ref: format!("{}.enabled", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Enabled Trigger".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let trigger_enabled = TriggerRepository::create(&pool, input_enabled)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Create disabled trigger
|
||||
let input_disabled = CreateTriggerInput {
|
||||
r#ref: format!("{}.disabled", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Disabled Trigger".to_string(),
|
||||
description: None,
|
||||
enabled: false,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
TriggerRepository::create(&pool, input_disabled)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Find enabled triggers
|
||||
let enabled_triggers = TriggerRepository::find_enabled(&pool).await.unwrap();
|
||||
|
||||
// Should contain at least our enabled trigger
|
||||
let enabled_ids: Vec<i64> = enabled_triggers.iter().map(|t| t.id).collect();
|
||||
assert!(enabled_ids.contains(&trigger_enabled.id));
|
||||
|
||||
// All returned triggers should be enabled
|
||||
assert!(enabled_triggers.iter().all(|t| t.enabled));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_update_trigger() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("update_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: format!("{}.update_trigger", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Original Label".to_string(),
|
||||
description: Some("Original description".to_string()),
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
let original_updated = trigger.updated;
|
||||
|
||||
// Wait a moment to ensure timestamp changes
|
||||
tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
|
||||
|
||||
let update_input = UpdateTriggerInput {
|
||||
label: Some("Updated Label".to_string()),
|
||||
description: Some("Updated description".to_string()),
|
||||
enabled: Some(false),
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
};
|
||||
|
||||
let updated = TriggerRepository::update(&pool, trigger.id, update_input)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(updated.id, trigger.id);
|
||||
assert_eq!(updated.r#ref, trigger.r#ref); // Ref should not change
|
||||
assert_eq!(updated.label, "Updated Label");
|
||||
assert_eq!(updated.description, Some("Updated description".to_string()));
|
||||
assert_eq!(updated.enabled, false);
|
||||
assert!(updated.updated > original_updated);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_update_trigger_partial() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let trigger_ref = format!("core.{}", unique_pack_ref("partial_trigger"));
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "Original".to_string(),
|
||||
description: Some("Original".to_string()),
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
// Update only label
|
||||
let update_input = UpdateTriggerInput {
|
||||
label: Some("Only Label Changed".to_string()),
|
||||
description: None,
|
||||
enabled: None,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
};
|
||||
|
||||
let updated = TriggerRepository::update(&pool, trigger.id, update_input)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(updated.label, "Only Label Changed");
|
||||
assert_eq!(updated.description, trigger.description); // Should remain unchanged
|
||||
assert_eq!(updated.enabled, trigger.enabled); // Should remain unchanged
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_update_trigger_schemas() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let trigger_ref = format!("core.{}", unique_pack_ref("schema_update"));
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "Schema Trigger".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
let new_param_schema = json!({
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {"type": "string"}
|
||||
}
|
||||
});
|
||||
|
||||
let new_out_schema = json!({
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"result": {"type": "boolean"}
|
||||
}
|
||||
});
|
||||
|
||||
let update_input = UpdateTriggerInput {
|
||||
label: None,
|
||||
description: None,
|
||||
enabled: None,
|
||||
param_schema: Some(new_param_schema.clone()),
|
||||
out_schema: Some(new_out_schema.clone()),
|
||||
};
|
||||
|
||||
let updated = TriggerRepository::update(&pool, trigger.id, update_input)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(updated.param_schema, Some(new_param_schema));
|
||||
assert_eq!(updated.out_schema, Some(new_out_schema));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_update_trigger_not_found() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let update_input = UpdateTriggerInput {
|
||||
label: Some("New Label".to_string()),
|
||||
description: None,
|
||||
enabled: None,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
};
|
||||
|
||||
let result = TriggerRepository::update(&pool, 999999, update_input).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
match err {
|
||||
Error::NotFound { entity, .. } => {
|
||||
assert_eq!(entity, "trigger");
|
||||
}
|
||||
_ => panic!("Expected NotFound error, got: {:?}", err),
|
||||
}
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_delete_trigger() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let trigger_ref = format!("core.{}", unique_pack_ref("delete_trigger"));
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "To Be Deleted".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
// Verify trigger exists
|
||||
let found = TriggerRepository::find_by_id(&pool, trigger.id)
|
||||
.await
|
||||
.unwrap();
|
||||
assert!(found.is_some());
|
||||
|
||||
// Delete the trigger
|
||||
let deleted = TriggerRepository::delete(&pool, trigger.id).await.unwrap();
|
||||
assert!(deleted);
|
||||
|
||||
// Verify trigger no longer exists
|
||||
let not_found = TriggerRepository::find_by_id(&pool, trigger.id)
|
||||
.await
|
||||
.unwrap();
|
||||
assert!(not_found.is_none());
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_delete_trigger_not_found() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let deleted = TriggerRepository::delete(&pool, 999999).await.unwrap();
|
||||
|
||||
assert!(!deleted);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_trigger_timestamps_auto_populated() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let trigger_ref = format!("core.{}", unique_pack_ref("timestamp_trigger"));
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "Timestamp Test".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
// Timestamps should be set
|
||||
assert!(trigger.created.timestamp() > 0);
|
||||
assert!(trigger.updated.timestamp() > 0);
|
||||
|
||||
// Created and updated should be very close initially
|
||||
let diff = (trigger.updated - trigger.created).num_milliseconds().abs();
|
||||
assert!(diff < 1000); // Within 1 second
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_trigger_updated_changes_on_update() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let trigger_ref = format!("core.{}", unique_pack_ref("update_timestamp"));
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: trigger_ref.clone(),
|
||||
pack: None,
|
||||
pack_ref: None,
|
||||
label: "Original".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
let original_created = trigger.created;
|
||||
let original_updated = trigger.updated;
|
||||
|
||||
// Wait a moment to ensure timestamp changes
|
||||
tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
|
||||
|
||||
let update_input = UpdateTriggerInput {
|
||||
label: Some("Updated".to_string()),
|
||||
description: None,
|
||||
enabled: None,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
};
|
||||
|
||||
let updated = TriggerRepository::update(&pool, trigger.id, update_input)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Created should remain the same
|
||||
assert_eq!(updated.created, original_created);
|
||||
|
||||
// Updated should be newer
|
||||
assert!(updated.updated > original_updated);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_multiple_triggers_same_pack() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("multi_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
// Create multiple triggers in the same pack
|
||||
let input1 = CreateTriggerInput {
|
||||
r#ref: format!("{}.webhook", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Webhook".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let trigger1 = TriggerRepository::create(&pool, input1).await.unwrap();
|
||||
|
||||
let input2 = CreateTriggerInput {
|
||||
r#ref: format!("{}.timer", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Timer".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
let trigger2 = TriggerRepository::create(&pool, input2).await.unwrap();
|
||||
|
||||
// Both should be different triggers
|
||||
assert_ne!(trigger1.id, trigger2.id);
|
||||
assert_ne!(trigger1.r#ref, trigger2.r#ref);
|
||||
|
||||
// Both should belong to the same pack
|
||||
assert_eq!(trigger1.pack, Some(pack.id));
|
||||
assert_eq!(trigger2.pack, Some(pack.id));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_trigger_cascade_delete_with_pack() {
|
||||
let pool = create_test_pool().await.unwrap();
|
||||
|
||||
let pack = PackFixture::new_unique("cascade_pack")
|
||||
.create(&pool)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let input = CreateTriggerInput {
|
||||
r#ref: format!("{}.cascade_trigger", pack.r#ref),
|
||||
pack: Some(pack.id),
|
||||
pack_ref: Some(pack.r#ref.clone()),
|
||||
label: "Cascade Trigger".to_string(),
|
||||
description: None,
|
||||
enabled: true,
|
||||
param_schema: None,
|
||||
out_schema: None,
|
||||
is_adhoc: false,
|
||||
};
|
||||
|
||||
let trigger = TriggerRepository::create(&pool, input).await.unwrap();
|
||||
|
||||
// Delete the pack
|
||||
use attune_common::repositories::pack::PackRepository;
|
||||
PackRepository::delete(&pool, pack.id).await.unwrap();
|
||||
|
||||
// Verify trigger was cascade deleted
|
||||
let not_found = TriggerRepository::find_by_id(&pool, trigger.id)
|
||||
.await
|
||||
.unwrap();
|
||||
assert!(not_found.is_none());
|
||||
}
|
||||
247
crates/common/tests/webhook_tests.rs
Normal file
@@ -0,0 +1,247 @@
//! Integration tests for webhook functionality

use attune_common::models::trigger::Trigger;
use attune_common::repositories::trigger::{CreateTriggerInput, TriggerRepository};
use attune_common::repositories::{Create, FindById};
use sqlx::postgres::PgPoolOptions;
use sqlx::PgPool;

async fn setup_test_db() -> PgPool {
    let database_url = std::env::var("DATABASE_URL")
        .unwrap_or_else(|_| "postgresql://postgres:postgres@localhost:5432/attune".to_string());

    PgPoolOptions::new()
        .max_connections(5)
        .connect(&database_url)
        .await
        .expect("Failed to create database pool")
}

async fn create_test_trigger(pool: &PgPool) -> Trigger {
    let input = CreateTriggerInput {
        r#ref: format!("test.webhook_trigger_{}", uuid::Uuid::new_v4()),
        pack: None,
        pack_ref: Some("test".to_string()),
        label: "Test Webhook Trigger".to_string(),
        description: Some("A test trigger for webhook functionality".to_string()),
        enabled: true,
        param_schema: None,
        out_schema: None,
        is_adhoc: false,
    };

    TriggerRepository::create(pool, input)
        .await
        .expect("Failed to create test trigger")
}

#[tokio::test]
async fn test_webhook_enable() {
    let pool = setup_test_db().await;
    let trigger = create_test_trigger(&pool).await;

    // Initially, webhook should be disabled
    assert!(!trigger.webhook_enabled);
    assert!(trigger.webhook_key.is_none());

    // Enable webhooks
    let webhook_info = TriggerRepository::enable_webhook(&pool, trigger.id)
        .await
        .expect("Failed to enable webhook");

    // Verify webhook info
    assert!(webhook_info.enabled);
    assert!(webhook_info.webhook_key.starts_with("wh_"));
    assert_eq!(webhook_info.webhook_key.len(), 35); // "wh_" + 32 chars
    assert!(webhook_info.webhook_url.contains(&webhook_info.webhook_key));

    // Fetch trigger again to verify database state
    let updated_trigger = TriggerRepository::find_by_id(&pool, trigger.id)
        .await
        .expect("Failed to fetch trigger")
        .expect("Trigger not found");

    assert!(updated_trigger.webhook_enabled);
    assert_eq!(
        updated_trigger.webhook_key.as_ref().unwrap(),
        &webhook_info.webhook_key
    );

    // Cleanup
    sqlx::query("DELETE FROM attune.trigger WHERE id = $1")
        .bind(trigger.id)
        .execute(&pool)
        .await
        .expect("Failed to cleanup");
}

#[tokio::test]
async fn test_webhook_disable() {
    let pool = setup_test_db().await;
    let trigger = create_test_trigger(&pool).await;

    // Enable webhooks first
    let webhook_info = TriggerRepository::enable_webhook(&pool, trigger.id)
        .await
        .expect("Failed to enable webhook");

    let webhook_key = webhook_info.webhook_key.clone();

    // Disable webhooks
    let result = TriggerRepository::disable_webhook(&pool, trigger.id)
        .await
        .expect("Failed to disable webhook");

    assert!(result);

    // Fetch trigger to verify
    let updated_trigger = TriggerRepository::find_by_id(&pool, trigger.id)
        .await
        .expect("Failed to fetch trigger")
        .expect("Trigger not found");

    assert!(!updated_trigger.webhook_enabled);
    // Key should still be present (for audit purposes)
    assert_eq!(updated_trigger.webhook_key.as_ref().unwrap(), &webhook_key);

    // Cleanup
    sqlx::query("DELETE FROM attune.trigger WHERE id = $1")
        .bind(trigger.id)
        .execute(&pool)
        .await
        .expect("Failed to cleanup");
}

#[tokio::test]
async fn test_webhook_key_regeneration() {
    let pool = setup_test_db().await;
    let trigger = create_test_trigger(&pool).await;

    // Enable webhooks
    let initial_info = TriggerRepository::enable_webhook(&pool, trigger.id)
        .await
        .expect("Failed to enable webhook");

    let old_key = initial_info.webhook_key.clone();

    // Regenerate key
    let regenerate_result = TriggerRepository::regenerate_webhook_key(&pool, trigger.id)
        .await
        .expect("Failed to regenerate webhook key");

    assert!(regenerate_result.previous_key_revoked);
    assert_ne!(regenerate_result.webhook_key, old_key);
    assert!(regenerate_result.webhook_key.starts_with("wh_"));

    // Fetch trigger to verify new key
    let updated_trigger = TriggerRepository::find_by_id(&pool, trigger.id)
        .await
        .expect("Failed to fetch trigger")
        .expect("Trigger not found");

    assert_eq!(
        updated_trigger.webhook_key.as_ref().unwrap(),
        &regenerate_result.webhook_key
    );

    // Cleanup
    sqlx::query("DELETE FROM attune.trigger WHERE id = $1")
        .bind(trigger.id)
        .execute(&pool)
        .await
        .expect("Failed to cleanup");
}

#[tokio::test]
async fn test_find_by_webhook_key() {
    let pool = setup_test_db().await;
    let trigger = create_test_trigger(&pool).await;

    // Enable webhooks
    let webhook_info = TriggerRepository::enable_webhook(&pool, trigger.id)
        .await
        .expect("Failed to enable webhook");

    // Find by webhook key
    let found_trigger = TriggerRepository::find_by_webhook_key(&pool, &webhook_info.webhook_key)
        .await
        .expect("Failed to find trigger by webhook key")
        .expect("Trigger not found");

    assert_eq!(found_trigger.id, trigger.id);
    assert_eq!(found_trigger.r#ref, trigger.r#ref);
    assert!(found_trigger.webhook_enabled);

    // Test with invalid key
    let not_found =
        TriggerRepository::find_by_webhook_key(&pool, "wh_invalid_key_12345678901234567890")
            .await
            .expect("Query failed");

    assert!(not_found.is_none());

    // Cleanup
    sqlx::query("DELETE FROM attune.trigger WHERE id = $1")
        .bind(trigger.id)
        .execute(&pool)
        .await
        .expect("Failed to cleanup");
}

#[tokio::test]
async fn test_webhook_key_uniqueness() {
    let pool = setup_test_db().await;
    let trigger1 = create_test_trigger(&pool).await;
    let trigger2 = create_test_trigger(&pool).await;

    // Enable webhooks for both triggers
    let info1 = TriggerRepository::enable_webhook(&pool, trigger1.id)
        .await
        .expect("Failed to enable webhook for trigger 1");

    let info2 = TriggerRepository::enable_webhook(&pool, trigger2.id)
        .await
        .expect("Failed to enable webhook for trigger 2");

    // Keys should be different
    assert_ne!(info1.webhook_key, info2.webhook_key);

    // Both should be valid format
    assert!(info1.webhook_key.starts_with("wh_"));
    assert!(info2.webhook_key.starts_with("wh_"));

    // Cleanup
    sqlx::query("DELETE FROM attune.trigger WHERE id IN ($1, $2)")
        .bind(trigger1.id)
        .bind(trigger2.id)
        .execute(&pool)
        .await
        .expect("Failed to cleanup");
}

#[tokio::test]
async fn test_enable_webhook_idempotent() {
    let pool = setup_test_db().await;
    let trigger = create_test_trigger(&pool).await;

    // Enable webhooks first time
    let info1 = TriggerRepository::enable_webhook(&pool, trigger.id)
        .await
        .expect("Failed to enable webhook");

    // Enable webhooks second time (should return same key)
    let info2 = TriggerRepository::enable_webhook(&pool, trigger.id)
        .await
        .expect("Failed to enable webhook again");

    // Should return the same key
    assert_eq!(info1.webhook_key, info2.webhook_key);
    assert!(info2.enabled);

    // Cleanup
    sqlx::query("DELETE FROM attune.trigger WHERE id = $1")
        .bind(trigger.id)
        .execute(&pool)
        .await
        .expect("Failed to cleanup");
}