Tier 3 E2E Tests - Quick Reference Guide
Status: 🔄 IN PROGRESS (17/21 scenarios, 81%)
Focus: Advanced features, edge cases, security validation, operational scenarios
Priority: MEDIUM-LOW (after Tier 1 & 2 complete)
Overview
Tier 3 tests validate advanced Attune features, edge cases, security boundaries, and operational scenarios that go beyond core automation flows. These tests ensure the platform is robust, secure, and production-ready.
Implemented Tests (17 scenarios, 56 tests)
🔐 T3.20: Secret Injection Security (HIGH Priority)
File: test_t3_20_secret_injection.py (566 lines)
Tests: 4
Duration: ~20 seconds
Validates that secrets are passed securely via stdin (not environment variables) and never exposed in logs or to other tenants.
Test Functions:
- test_secret_injection_via_stdin - Secrets via stdin validation
- test_secret_encryption_at_rest - Encryption flag validation
- test_secret_not_in_execution_logs - Secret redaction testing
- test_secret_access_tenant_isolation - Cross-tenant isolation
Run:
pytest e2e/tier3/test_t3_20_secret_injection.py -v
pytest -m secrets -v
Key Validations:
- ✅ Secrets passed via stdin (secure)
- ✅ Secrets NOT in environment variables
- ✅ Secrets NOT exposed in logs
- ✅ Tenant isolation enforced
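The stdin-based pattern these tests validate can be sketched as follows. This is a minimal illustration, not the worker's actual protocol: the JSON envelope shape, the helper name, and the stripped-down environment are assumptions.

```python
import json
import os
import subprocess
import sys

def run_action_with_secrets(script: str, secrets: dict) -> str:
    """Run an action script, passing secrets as a JSON document on stdin
    so they never appear in the environment or on the command line."""
    proc = subprocess.run(
        [sys.executable, "-c", script],
        input=json.dumps({"secrets": secrets}),
        capture_output=True,
        text=True,
        env={"PATH": os.environ.get("PATH", "")},  # deliberately minimal env
    )
    return proc.stdout

# The child reads its secrets from stdin, never from os.environ:
child = (
    "import json, os, sys\n"
    "payload = json.load(sys.stdin)\n"
    "assert 'API_KEY' not in os.environ\n"
    "print(len(payload['secrets']['API_KEY']))\n"
)
print(run_action_with_secrets(child, {"API_KEY": "s3cr3t"}))  # prints the length (6), never the value
```

Logging only derived values (here, the length) mirrors the redaction behavior that test_secret_not_in_execution_logs checks for.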
🔒 T3.10: RBAC Permission Checks (MEDIUM Priority)
File: test_t3_10_rbac.py (524 lines)
Tests: 4
Duration: ~20 seconds
Tests role-based access control enforcement across all API endpoints.
Test Functions:
- test_viewer_role_permissions - Read-only access
- test_admin_role_permissions - Full CRUD access
- test_executor_role_permissions - Execute + read only
- test_role_permissions_summary - Permission matrix documentation
Run:
pytest e2e/tier3/test_t3_10_rbac.py -v
pytest -m rbac -v
Roles Tested:
- admin - Full access
- editor - Create/update + execute
- executor - Execute + read only
- viewer - Read-only
🌐 T3.18: HTTP Runner Execution (MEDIUM Priority)
File: test_t3_18_http_runner.py (473 lines)
Tests: 4
Duration: ~10 seconds
Validates the HTTP runner's ability to make REST API calls with authentication, custom headers, and error handling.
Test Functions:
- test_http_runner_basic_get - GET request
- test_http_runner_post_with_json - POST with JSON
- test_http_runner_authentication_header - Bearer token auth
- test_http_runner_error_handling - 4xx/5xx errors
Run:
pytest e2e/tier3/test_t3_18_http_runner.py -v
pytest -m http -v
Features Validated:
- ✅ GET and POST requests
- ✅ Custom headers
- ✅ JSON serialization
- ✅ Authentication via secrets
- ✅ Response capture
- ✅ Error handling
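A runner along these lines might assemble requests as in this stdlib-only sketch; build_http_request and its signature are hypothetical, shown only to illustrate bearer-token headers and JSON body serialization:

```python
import json
import urllib.request

def build_http_request(method: str, url: str, token: str = None, body=None):
    """Build (but do not send) an HTTP request with optional bearer-token
    auth and a JSON-serialized body, as the HTTP runner tests exercise."""
    headers = {"Accept": "application/json"}
    data = None
    if body is not None:
        data = json.dumps(body).encode("utf-8")
        headers["Content-Type"] = "application/json"
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, data=data, headers=headers, method=method)

req = build_http_request("POST", "https://httpbin.org/post", token="t0k3n", body={"ok": True})
print(req.get_method())                 # POST
print(req.get_header("Authorization"))  # Bearer t0k3n
```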
⚠️ T3.13: Invalid Action Parameters (MEDIUM Priority)
File: test_t3_13_invalid_parameters.py (559 lines)
Tests: 4
Duration: ~5 seconds
Tests parameter validation, default values, and error handling.
Test Functions:
- test_missing_required_parameter - Required param validation
- test_invalid_parameter_type - Type checking
- test_extra_parameters_ignored - Extra params handling
- test_parameter_default_values - Default values
Run:
pytest e2e/tier3/test_t3_13_invalid_parameters.py -v
pytest -m validation -v
Validations:
- ✅ Missing required parameters fail early
- ✅ Clear error messages
- ✅ Default values applied
- ✅ Extra parameters ignored gracefully
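The behavior in that list can be sketched with a toy validator; the spec format below is an illustrative assumption, not Attune's actual parameter schema:

```python
def validate_parameters(spec: dict, supplied: dict) -> dict:
    """Validate supplied parameters against a spec of the form
    {name: {"required": bool, "type": type, "default": value}}.
    Missing required parameters fail early; unknown parameters are dropped."""
    resolved = {}
    for name, rules in spec.items():
        if name not in supplied:
            if rules.get("required"):
                raise ValueError(f"missing required parameter: {name}")
            resolved[name] = rules.get("default")
            continue
        value = supplied[name]
        expected = rules.get("type")
        if expected is not None and not isinstance(value, expected):
            raise TypeError(f"parameter {name!r} expected {expected.__name__}")
        resolved[name] = value
    return resolved  # extra keys in `supplied` are silently ignored

spec = {"host": {"required": True, "type": str},
        "retries": {"required": False, "type": int, "default": 3}}
print(validate_parameters(spec, {"host": "db1", "junk": True}))
# → {'host': 'db1', 'retries': 3}
```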
⏱️ T3.1: Date Timer with Past Date (LOW Priority)
File: test_t3_01_past_date_timer.py (305 lines)
Tests: 3
Duration: ~5 seconds
Tests edge cases for date timers with past dates.
Test Functions:
- test_past_date_timer_immediate_execution - 1 hour past
- test_just_missed_date_timer - 2 seconds past
- test_far_past_date_timer - 1 year past
Run:
pytest e2e/tier3/test_t3_01_past_date_timer.py -v
pytest -m edge_case -v
Edge Cases:
- ✅ Past date behavior (execute or reject)
- ✅ Boundary conditions
- ✅ Clear error messages
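Schedulers typically make the execute-or-reject decision with a grace window, sketched below; the 30-second window and the three-way classification are illustrative, not Attune's documented policy:

```python
from datetime import datetime, timedelta, timezone

GRACE = timedelta(seconds=30)  # illustrative grace window

def classify_fire_time(fire_at: datetime, now: datetime) -> str:
    """Decide what to do with a date timer: schedule future timers,
    fire just-missed ones immediately, reject far-past ones."""
    if fire_at >= now:
        return "schedule"
    if now - fire_at <= GRACE:
        return "fire_immediately"
    return "reject"

now = datetime.now(timezone.utc)
print(classify_fire_time(now - timedelta(seconds=2), now))  # fire_immediately
print(classify_fire_time(now - timedelta(hours=1), now))    # reject
```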
🔗 T3.4: Webhook with Multiple Rules (LOW Priority)
File: test_t3_04_webhook_multiple_rules.py (343 lines)
Tests: 2
Duration: ~15 seconds
Tests single webhook triggering multiple rules simultaneously.
Test Functions:
- test_webhook_fires_multiple_rules - 1 webhook → 3 rules
- test_webhook_multiple_posts_multiple_rules - 3 posts × 2 rules
Run:
pytest e2e/tier3/test_t3_04_webhook_multiple_rules.py -v
pytest -m webhook e2e/tier3/ -v
Validations:
- ✅ Single event triggers multiple rules
- ✅ Independent rule execution
- ✅ Correct execution count (posts × rules)
⏱️ T3.2: Timer Cancellation (LOW Priority)
File: test_t3_02_timer_cancellation.py (335 lines)
Tests: 3
Duration: ~15 seconds
Tests that disabling/deleting rules stops timer executions.
Test Functions:
- test_timer_cancellation_via_rule_disable - Disable stops executions
- test_timer_resume_after_re_enable - Re-enable resumes timer
- test_timer_delete_stops_executions - Delete permanently stops
Run:
pytest e2e/tier3/test_t3_02_timer_cancellation.py -v
pytest -m timer e2e/tier3/ -v
Validations:
- ✅ Disabling rule stops future executions
- ✅ Re-enabling rule resumes timer
- ✅ Deleting rule permanently stops timer
- ✅ In-flight executions complete normally
⏱️ T3.3: Multiple Concurrent Timers (LOW Priority)
File: test_t3_03_concurrent_timers.py (438 lines)
Tests: 3
Duration: ~30 seconds
Tests that multiple timers run independently without interference.
Test Functions:
- test_multiple_concurrent_timers - 3 timers with different intervals
- test_many_concurrent_timers - 5 concurrent timers (stress test)
- test_timer_precision_under_load - Precision validation
Run:
pytest e2e/tier3/test_t3_03_concurrent_timers.py -v
pytest -m performance e2e/tier3/ -v
Validations:
- ✅ Multiple timers fire independently
- ✅ Correct execution counts per timer
- ✅ No timer interference
- ✅ System handles concurrent load
- ✅ Timing precision maintained
🎯 T3.5: Webhook with Rule Criteria Filtering (MEDIUM Priority)
File: test_t3_05_rule_criteria.py (507 lines)
Tests: 4
Duration: ~20 seconds
Tests conditional rule firing based on event payload criteria.
Test Functions:
- test_rule_criteria_basic_filtering - Equality checks
- test_rule_criteria_numeric_comparison - Numeric operators
- test_rule_criteria_complex_expressions - AND/OR logic
- test_rule_criteria_list_membership - List membership
Run:
pytest e2e/tier3/test_t3_05_rule_criteria.py -v
pytest -m criteria -v
Validations:
- ✅ Jinja2 expression evaluation
- ✅ Event filtering by criteria
- ✅ Numeric comparisons (>, <, >=, <=)
- ✅ Complex boolean logic (AND/OR)
- ✅ List membership (in operator)
- ✅ Only matching rules fire
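Conceptually, the criteria engine evaluates a boolean expression against the event payload. The sketch below imitates that with a restricted eval for illustration only — the platform uses Jinja2 expressions, and a bare eval() must never see untrusted input:

```python
def criteria_matches(expression: str, event: dict) -> bool:
    """Evaluate a boolean criteria expression against an event payload.
    Illustrative stand-in for Jinja2 expression evaluation; do not use
    eval() on untrusted expressions in real code."""
    return bool(eval(expression, {"__builtins__": {}}, {"event": event}))

event = {"severity": 7, "region": "us-east-1"}
print(criteria_matches(
    "event['severity'] >= 5 and event['region'] in ('us-east-1', 'eu-west-1')",
    event))  # True  → rule fires
print(criteria_matches("event['severity'] > 9", event))  # False → rule skipped
```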
🔒 T3.11: System vs User Packs (MEDIUM Priority)
File: test_t3_11_system_packs.py (401 lines)
Tests: 4
Duration: ~15 seconds
Tests multi-tenant pack isolation and system pack availability.
Test Functions:
- test_system_pack_visible_to_all_tenants - System packs visible to all
- test_user_pack_isolation - User packs isolated per tenant
- test_system_pack_actions_available_to_all - System actions executable
- test_system_pack_identification - Documentation reference
Run:
pytest e2e/tier3/test_t3_11_system_packs.py -v
pytest -m multi_tenant -v
Validations:
- ✅ System packs visible to all tenants
- ✅ User packs isolated per tenant
- ✅ Cross-tenant access blocked
- ✅ System actions executable by all
- ✅ Pack isolation enforced
🔔 T3.14: Execution Completion Notifications (MEDIUM Priority)
File: test_t3_14_execution_notifications.py (374 lines)
Tests: 4
Duration: ~20 seconds
Tests real-time notification system for execution lifecycle events.
Test Functions:
- test_execution_success_notification - Success completion notifications
- test_execution_failure_notification - Failure event notifications
- test_execution_timeout_notification - Timeout event notifications
- test_websocket_notification_delivery - Real-time WebSocket delivery (skipped)
Run:
pytest e2e/tier3/test_t3_14_execution_notifications.py -v
pytest -m notifications -v
Key Validations:
- ✅ Notification metadata for execution events
- ✅ Success, failure, and timeout notifications
- ✅ Execution tracking for real-time updates
- ⏭️ WebSocket delivery (infrastructure pending)
🔔 T3.15: Inquiry Creation Notifications (MEDIUM Priority)
File: test_t3_15_inquiry_notifications.py (405 lines)
Tests: 4
Duration: ~20 seconds
Tests notification system for human-in-the-loop inquiry workflows.
Test Functions:
- test_inquiry_creation_notification - Inquiry creation event
- test_inquiry_response_notification - Response submission event
- test_inquiry_timeout_notification - Inquiry timeout handling
- test_websocket_inquiry_notification_delivery - Real-time delivery (skipped)
Run:
pytest e2e/tier3/test_t3_15_inquiry_notifications.py -v
pytest -m "notifications and inquiry" -v
Key Validations:
- ✅ Inquiry lifecycle events (created, responded, timeout)
- ✅ Notification metadata for approval workflows
- ✅ Human-in-the-loop notification flow
- ⏭️ Real-time WebSocket delivery (pending)
🐳 T3.17: Container Runner Execution (MEDIUM Priority)
File: test_t3_17_container_runner.py (472 lines)
Tests: 4
Duration: ~30 seconds
Tests Docker-based container runner for isolated action execution.
Test Functions:
- test_container_runner_basic_execution - Basic Python container execution
- test_container_runner_with_parameters - Parameter injection via stdin
- test_container_runner_isolation - Container isolation validation
- test_container_runner_failure_handling - Failure capture and cleanup
Run:
pytest e2e/tier3/test_t3_17_container_runner.py -v
pytest -m container -v
Key Validations:
- ✅ Container-based execution (python:3.11-slim)
- ✅ Parameter passing via JSON stdin
- ✅ Container isolation (no state leakage)
- ✅ Failure handling and cleanup
- ✅ Docker image specification
Prerequisites: Docker daemon running
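Parameter injection via stdin for a container can be sketched by assembling the docker invocation up front; container_command is a hypothetical helper, and the argv/payload split is illustrative (nothing here actually contacts the Docker daemon):

```python
import json
import shlex

def container_command(image: str, entrypoint: list, params: dict):
    """Build a docker invocation that feeds parameters to the container
    as JSON on stdin (-i keeps stdin open); returns (argv, stdin_payload)."""
    argv = ["docker", "run", "--rm", "-i", image, *entrypoint]
    return argv, json.dumps(params)

argv, payload = container_command(
    "python:3.11-slim",
    ["python", "-c", "import json, sys; print(json.load(sys.stdin)['name'])"],
    {"name": "attune"},
)
print(shlex.join(argv))  # the command the runner would execute
print(payload)           # the JSON document written to the container's stdin
```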
📝 T3.21: Action Log Size Limits (MEDIUM Priority)
File: test_t3_21_log_size_limits.py (481 lines)
Tests: 4
Duration: ~20 seconds
Tests log capture, size limits, and handling of large outputs.
Test Functions:
- test_large_log_output_truncation - Large log truncation (~5MB output)
- test_stderr_log_capture - Separate stdout/stderr capture
- test_log_line_count_limits - High line count handling (10k lines)
- test_binary_output_handling - Binary/non-UTF8 output sanitization
Run:
pytest e2e/tier3/test_t3_21_log_size_limits.py -v
pytest -m logs -v
Key Validations:
- ✅ Log size limits enforced (max 10MB)
- ✅ Stdout and stderr captured separately
- ✅ High line count (10,000+) handled gracefully
- ✅ Binary data properly sanitized
- ✅ No crashes from large output
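The truncation and sanitization behavior can be sketched as follows; the marker text and helper name are illustrative, though the 10 MB constant matches the cap the tests expect:

```python
MAX_LOG_BYTES = 10 * 1024 * 1024  # the 10 MB cap these tests expect

def sanitize_and_truncate(raw: bytes, limit: int = MAX_LOG_BYTES) -> str:
    """Decode captured output, replacing non-UTF-8 bytes instead of
    crashing, and truncate to the size limit with an explicit marker."""
    truncated = len(raw) > limit
    text = raw[:limit].decode("utf-8", errors="replace")
    if truncated:
        text += f"\n... [log truncated at {limit} bytes]"
    return text

blob = b"line\n" * 3 + b"\xff\xfe binary tail"
print(sanitize_and_truncate(blob, limit=20))
# replacement characters instead of a crash, plus a truncation marker
```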
🔄 T3.7: Complex Workflow Orchestration (MEDIUM Priority)
File: test_t3_07_complex_workflows.py (718 lines)
Tests: 4
Duration: ~45 seconds
Tests advanced workflow features including parallel execution, branching, and data transformation.
Test Functions:
- test_parallel_workflow_execution - Parallel task execution
- test_conditional_workflow_branching - If/else conditional logic
- test_nested_workflow_with_error_handling - Nested workflows with error recovery
- test_workflow_with_data_transformation - Data pipeline with transformations
Run:
pytest e2e/tier3/test_t3_07_complex_workflows.py -v
pytest -m orchestration -v
Key Validations:
- ✅ Parallel task execution (3 tasks concurrently)
- ✅ Conditional branching (if/else based on parameters)
- ✅ Nested workflow execution with error handling
- ✅ Data transformation and passing between tasks
- ✅ Workflow orchestration patterns
🔗 T3.8: Chained Webhook Triggers (MEDIUM Priority)
File: test_t3_08_chained_webhooks.py (686 lines)
Tests: 4
Duration: ~30 seconds
Tests webhook chains where webhooks trigger workflows that trigger other webhooks.
Test Functions:
- test_webhook_triggers_workflow_triggers_webhook - A→Workflow→B chain
- test_webhook_cascade_multiple_levels - Multi-level cascade (A→B→C)
- test_webhook_chain_with_data_passing - Data transformation in chains
- test_webhook_chain_error_propagation - Error handling in chains
Run:
pytest e2e/tier3/test_t3_08_chained_webhooks.py -v
pytest -m "webhook and orchestration" -v
Key Validations:
- ✅ Webhook chaining through workflows
- ✅ Multi-level webhook cascades
- ✅ Data passing and transformation through chains
- ✅ Error propagation and isolation
- ✅ HTTP runner triggering webhooks
🔐 T3.9: Multi-Step Approval Workflow (MEDIUM Priority)
File: test_t3_09_multistep_approvals.py (788 lines)
Tests: 4
Duration: ~40 seconds
Tests complex approval workflows with multiple sequential and conditional inquiries.
Test Functions:
- test_sequential_multi_step_approvals - 3 sequential approvals (Manager→Director→VP)
- test_conditional_approval_workflow - Conditional approval based on response
- test_approval_with_timeout_and_escalation - Timeout triggers escalation
- test_approval_denial_stops_workflow - Denial stops subsequent steps
Run:
pytest e2e/tier3/test_t3_09_multistep_approvals.py -v
pytest -m "inquiry and workflow" -v
Key Validations:
- ✅ Sequential multi-step approvals
- ✅ Conditional approval logic
- ✅ Timeout and escalation handling
- ✅ Denial stops workflow execution
- ✅ Human-in-the-loop orchestration
🔔 T3.16: Rule Trigger Notifications (MEDIUM Priority)
File: test_t3_16_rule_notifications.py (464 lines)
Tests: 4
Duration: ~20 seconds
Tests real-time notifications for rule lifecycle events.
Test Functions:
- test_rule_trigger_notification - Rule trigger notification metadata
- test_rule_enable_disable_notification - State change notifications
- test_multiple_rule_triggers_notification - Multiple rules from one event
- test_rule_criteria_evaluation_notification - Criteria match/no-match
Run:
pytest e2e/tier3/test_t3_16_rule_notifications.py -v
pytest -m "notifications and rules" -v
Key Validations:
- ✅ Rule trigger notification metadata
- ✅ Rule state change notifications (enable/disable)
- ✅ Multiple rule trigger notifications from single event
- ✅ Rule criteria evaluation tracking
- ✅ Enforcement creation notification
Remaining Scenarios (4 scenarios, ~4 tests)
LOW Priority (4 remaining)
- T3.6: Sensor-generated custom events
- T3.12: Worker crash recovery
- T3.19: Dependency conflict isolation (virtualenv)
- T3.22: Additional edge cases (TBD)
Quick Commands
Run All Tier 3 Tests
cd tests
pytest e2e/tier3/ -v
Run by Category
# Security tests (secrets + RBAC)
pytest -m security e2e/tier3/ -v
# HTTP runner tests
pytest -m http -v
# Parameter validation tests
pytest -m validation -v
# Edge cases
pytest -m edge_case -v
# All webhook tests
pytest -m webhook e2e/tier3/ -v
Run Specific Test
# Secret injection (most important security test)
pytest e2e/tier3/test_t3_20_secret_injection.py::test_secret_injection_via_stdin -v
# RBAC viewer permissions
pytest e2e/tier3/test_t3_10_rbac.py::test_viewer_role_permissions -v
# HTTP GET request
pytest e2e/tier3/test_t3_18_http_runner.py::test_http_runner_basic_get -v
Run with Output
# Show print statements
pytest e2e/tier3/ -v -s
# Stop on first failure
pytest e2e/tier3/ -v -x
# Run specific marker with output
pytest -m secrets -v -s
Test Markers
Use pytest markers to run specific test categories:
- @pytest.mark.tier3 - All Tier 3 tests
- @pytest.mark.security - Security and RBAC tests
- @pytest.mark.secrets - Secret management tests
- @pytest.mark.rbac - Role-based access control
- @pytest.mark.http - HTTP runner tests
- @pytest.mark.runner - Action runner tests
- @pytest.mark.validation - Parameter validation
- @pytest.mark.parameters - Parameter handling
- @pytest.mark.edge_case - Edge cases
- @pytest.mark.webhook - Webhook tests
- @pytest.mark.rules - Rule evaluation tests
- @pytest.mark.timer - Timer tests
- @pytest.mark.criteria - Rule criteria tests
- @pytest.mark.multi_tenant - Multi-tenancy tests
- @pytest.mark.packs - Pack management tests
- @pytest.mark.notifications - Notification system tests
- @pytest.mark.websocket - WebSocket tests (skipped - pending infrastructure)
- @pytest.mark.container - Container runner tests
- @pytest.mark.logs - Log capture and size tests
- @pytest.mark.limits - Resource and size limit tests
- @pytest.mark.orchestration - Advanced workflow orchestration tests
Prerequisites
Services Required
- PostgreSQL (port 5432)
- RabbitMQ (port 5672)
- attune-api (port 8080)
- attune-executor
- attune-worker
- attune-sensor
- attune-notifier (for notification tests)
External Dependencies
- HTTP tests: Internet access (uses httpbin.org)
- Container tests: Docker daemon running
- Notification tests: Notifier service running
- Secret tests: Encryption key configured
Test Patterns
Common Test Structure
```python
def test_feature(client: AttuneClient, test_pack):
    """Test description"""
    print("\n" + "=" * 80)
    print("TEST: Feature Name")
    print("=" * 80)

    # Step 1: Setup
    print("\n[STEP 1] Setting up...")
    # Create resources

    # Step 2: Execute
    print("\n[STEP 2] Executing...")
    # Trigger action

    # Step 3: Verify
    print("\n[STEP 3] Verifying...")
    # Check results

    # Summary
    print("\n" + "=" * 80)
    print("SUMMARY")
    print("=" * 80)
    # Print results

    # Assertions
    assert condition, "Error message"
```
Polling Pattern
```python
from helpers.polling import wait_for_execution_status

final_exec = wait_for_execution_status(
    client=client,
    execution_id=execution_id,
    expected_status="succeeded",
    timeout=20,
)
```
Secret Testing Pattern
```python
# Create secret
secret_response = client.create_secret(
    key="api_key",
    value="secret_value",
    encrypted=True,
)

# Use secret in action
execution_data = {
    "action": action_ref,
    "parameters": {},
    "secrets": ["api_key"],
}
```
Troubleshooting
Test Failures
Secret injection test fails:
- Check if worker is passing secrets via stdin
- Verify encryption key is configured
- Check worker logs for secret handling
RBAC test fails:
- RBAC may not be fully implemented yet
- Tests use pytest.skip() for unavailable features
- Check if role-based registration is available
HTTP runner test fails:
- Verify internet access (uses httpbin.org)
- Check if HTTP runner is implemented
- Verify proxy settings if behind firewall
Parameter validation test fails:
- Check if parameter validation is implemented
- Verify error messages are clear
- Check executor parameter handling
Common Issues
Timeouts:
- Increase timeout values in polling functions
- Check if services are running and responsive
- Verify network connectivity
Import Errors:
- Run pip install -r requirements-test.txt
- Check Python path includes test helpers
Authentication Errors:
- Check if test user credentials are correct
- Verify JWT_SECRET is configured
- Check API service logs
Contributing
Adding New Tests
- Create test file: test_t3_XX_feature_name.py
- Add docstring with scenario number and description
- Use consistent test structure (steps, summary, assertions)
- Add appropriate pytest markers
- Update this README with test information
- Update E2E_TESTS_COMPLETE.md with completion status
Test Writing Guidelines
- ✅ Clear step-by-step output for debugging
- ✅ Comprehensive assertions with descriptive messages
- ✅ Summary section at end of each test
- ✅ Handle unimplemented features gracefully (pytest.skip)
- ✅ Use unique references to avoid conflicts
- ✅ Clean up resources when possible
- ✅ Document expected behavior in docstrings
Statistics
Completed: 17/21 scenarios (81%)
Test Functions: 56
Lines of Code: ~8,700
Total Duration: ~240 seconds
Priority Status:
- HIGH: 5/5 complete (100%) ✅
- MEDIUM: 11/11 complete (100%) ✅
- LOW: 1/5 complete (20%) 🔄
References
- Test Plan: docs/e2e-test-plan.md
- Complete Report: tests/E2E_TESTS_COMPLETE.md
- Helpers: tests/helpers/
- Tier 1 Tests: tests/e2e/tier1/
- Tier 2 Tests: tests/e2e/tier2/
Last Updated: 2026-01-21
Status: 🔄 IN PROGRESS (17/21 scenarios, 81%)
Next: T3.6 (Custom events), T3.12 (Crash recovery), T3.19 (Dependency isolation)