# Tier 1 E2E Test Status Summary

**Date:** 2026-01-23
**Status:** 🔴 BLOCKED - Tests cannot run due to API schema mismatches
**Priority:** P0 - Critical blockers prevent test execution
## Current State

**Tests Attempted:** 34 Tier 1 tests

**Results:**
- ❌ 26+ tests failing due to fixture setup errors
- ⚠️ 4 tests failing due to schema mismatches
- ✅ 0 tests passing
## Root Cause

The E2E tests were written against an expected API schema that differs from the actual implemented API. Key mismatches:
- **Pack Registration** - wrong endpoint
- **Trigger Fields** - different field names (`name` → `label`, `type` → `ref`)
- **Timer Architecture** - tests don't account for the Trigger + Sensor model
- **Field Naming** - inconsistent throughout (`name` vs `label` vs `ref`)
## Fixes Applied Today

### ✅ 1. Helper Function Imports (Phase 1)
- Added `wait_for_execution_completion()` to `helpers/polling.py`
- Added 10 missing exports to `helpers/__init__.py`
- **Result:** All 151 tests now collect successfully
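For reference, the polling helper added in Phase 1 can be sketched roughly as below. The `client.get_execution()` method name and the terminal status strings are assumptions for illustration, not the confirmed `AttuneClient` API:

```python
import time


def wait_for_execution_completion(client, execution_id, timeout=30.0, poll_interval=0.5):
    """Poll until an execution reaches a terminal state or the timeout expires.

    Assumes `client.get_execution()` returns a dict with a `status` key;
    adjust names/statuses to match the real AttuneClient.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        execution = client.get_execution(execution_id)
        if execution.get("status") in ("succeeded", "failed", "cancelled"):
            return execution
        time.sleep(poll_interval)
    raise TimeoutError(f"execution {execution_id} did not complete within {timeout}s")
```

A bounded poll with an explicit `TimeoutError` keeps flaky-test debugging tractable compared to an unbounded `while True` loop.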
### ✅ 2. Client Method Fixes (Phase 2)
- Added `create_pack()` method to `AttuneClient`
- Fixed `create_secret()` signature to match the API (`/api/v1/keys`)
- **Result:** No more `AttributeError`/`TypeError`
### ✅ 3. Pack Registration Fix (Phase 3)
- Fixed `register_pack()` to use the correct endpoint: `/api/v1/packs/register`
- Added `skip_tests` and `force` parameters
- Updated `create_test_pack()` to reuse existing packs
- **Result:** Test packs load successfully
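A minimal sketch of the corrected `register_pack()` call, assuming the client wraps a `requests.Session`-style object with a `post(url, json=...)` method; the payload field names (`path`, `skip_tests`, `force`) are assumptions to be checked against the real endpoint:

```python
def register_pack(session, base_url, pack_path, skip_tests=False, force=False):
    """Register a pack via the corrected /api/v1/packs/register endpoint.

    Payload field names are illustrative; confirm against the actual API schema.
    """
    payload = {"path": pack_path, "skip_tests": skip_tests, "force": force}
    response = session.post(f"{base_url}/api/v1/packs/register", json=payload)
    response.raise_for_status()
    return response.json()
```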
### ✅ 4. Trigger Creation Fix (Phase 4)
- Updated `create_trigger()` to use the correct API fields
- Maps legacy `name` → `label`, generates `ref` from pack + name
- **Result:** Triggers can be created
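The legacy-field mapping can be sketched like this; the exact `ref` derivation (pack plus a slugged name) is an assumption based on the mismatches described above:

```python
def build_trigger_payload(pack, name=None, label=None, ref=None, **extra):
    """Map the legacy `name` field onto the API's `label`, and derive `ref`
    from pack + name when the caller did not supply one.

    The ref format (`pack.slug`) is assumed, not confirmed against the API.
    """
    label = label or name
    if label is None:
        raise ValueError("trigger needs a name or label")
    ref = ref or f"{pack}.{label.lower().replace(' ', '_')}"
    return {"label": label, "ref": ref, **extra}
```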
## Remaining Blockers

### 🔴 1. Timer Architecture Gap (CRITICAL)

**Problem:** Tests assume creating a trigger creates a timer. In reality, a working timer needs a Trigger *plus* a Sensor.

**Current test code:**

```python
trigger = create_interval_timer(client, interval_seconds=5)
rule = create_rule(client, trigger_id=trigger["id"], action_ref="...")
# ❌ Timer never fires because no sensor exists
```

**What's needed:**

```python
trigger = create_interval_timer(client, interval_seconds=5)
sensor = create_interval_sensor(client, trigger_id=trigger["id"], interval_seconds=5)
rule = create_rule(client, trigger_id=trigger["id"], action_ref="...")
# ✅ Sensor fires events, rules execute
```
**Required changes:**
- Create helper functions: `create_interval_sensor()`, `create_cron_sensor()`, `create_date_sensor()`
- Update `create_interval_timer()` etc. to create BOTH trigger and sensor
- Add an `AttuneClient.create_sensor()` method
- Check that the sensor service is running and can handle these

**Impact:** ALL timer tests blocked (T1.1, T1.2, T1.3)
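The first two required changes could be combined into one helper along these lines. The `client.create_trigger()` / `client.create_sensor()` signatures and the `kind`/`config` payload shape are assumptions to validate against the real sensor API:

```python
def create_interval_timer(client, interval_seconds, label="interval-timer"):
    """Create BOTH the trigger and its interval sensor so the timer fires.

    Method names and payload fields on `client` are assumptions, not the
    confirmed AttuneClient API.
    """
    trigger = client.create_trigger(label=label, ref=f"timers.{label}")
    sensor = client.create_sensor(
        trigger_id=trigger["id"],
        kind="interval",
        config={"interval_seconds": interval_seconds},
    )
    return trigger, sensor
```

Returning both objects lets tests clean up the sensor as well as the trigger in teardown, which the current trigger-only helpers cannot do.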
### 🔴 2. Field Name Mismatches (HIGH)

**Problem:** Tests assert against field names that don't exist.

**Examples:**

```python
# Test expects:
trigger['name']   # ❌ Field doesn't exist
# API returns:
trigger['label']  # ✅ Correct field

# Test expects:
action['name']    # ❌ Field doesn't exist
# API returns:
action['label']   # ✅ Correct field
```
**Required changes:**
- Search/replace `trigger['name']` → `trigger['label']` across all tests
- Search/replace `action['name']` → `action['label']` across all tests
- Verify other field name assumptions (`type`, `parameters`, etc.)

**Impact:** Most tests will fail on assertions even if they execute
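The search/replace could be scripted rather than done by hand, e.g. with a small rename pass over the test tree (the `test_*.py` glob and the rename table below are assumptions; extend the table as more mismatches surface):

```python
import pathlib
import re

# Known legacy-field -> API-field renames; extend as mismatches are found.
RENAMES = {
    r"trigger\['name'\]": "trigger['label']",
    r"action\['name'\]": "action['label']",
}


def fix_field_names(text):
    """Apply every known rename to one file's source text."""
    for pattern, replacement in RENAMES.items():
        text = re.sub(pattern, replacement, text)
    return text


def fix_tree(root):
    """Rewrite all test files under `root` in place."""
    for path in pathlib.Path(root).rglob("test_*.py"):
        path.write_text(fix_field_names(path.read_text()))
```

A scripted pass is repeatable if the API schema shifts again, and the rename table doubles as documentation of the known mismatches.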
### 🔴 3. Sensor Service Requirements (UNKNOWN)

**Questions:**
- Is the sensor service running and functional?
- Does it support interval/cron/date timers?
- How does it receive sensor configurations?
- Does it auto-register sensors, or does it need manual registration?

**Need to verify:**
- Check whether the sensor service is in docker-compose
- Test manual sensor creation via the API
- Confirm timer sensors actually fire events

**Impact:** Unknown - could be a complete blocker if sensors don't work
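A first verification step could be a simple reachability probe; the `/health` path is a guess and should be confirmed against the service's docker-compose definition:

```python
import urllib.request


def check_sensor_service(base_url, timeout=5):
    """Return True if the sensor service answers its (assumed) health endpoint.

    The /health path is a placeholder; replace with the real endpoint once
    the service's docker-compose config has been checked.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```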
## Recommended Approach

### Option A: Fix E2E Tests (Estimated: 4-8 hours)
- Add sensor creation to timer helpers (1 hour)
- Fix field name mismatches across all tests (2-3 hours)
- Verify and fix sensor service integration (1-2 hours)
- Fix additional schema issues as discovered (1-2 hours)

**Pros:** Tests become useful and validate the full system
**Cons:** Time-consuming; tests may be based on incorrect assumptions

### Option B: Focus on Unit/Integration Tests (Estimated: 2-4 hours)
- Write focused API tests for individual endpoints
- Test trigger/sensor/rule creation in isolation
- Verify basic automation flow with minimal setup

**Pros:** Faster, more reliable, easier to maintain
**Cons:** Less comprehensive; doesn't test full E2E flows

### Option C: Hybrid Approach (RECOMMENDED)
- **Immediate:** Fix 1-2 simple E2E tests to validate the architecture (webhook test?)
- **Short-term:** Write API-level integration tests for core flows
- **Long-term:** Gradually fix E2E tests as features stabilize
## Next Steps

**Decision needed:** Which approach to take?

If proceeding with E2E fixes:
- First verify the sensor service is functional
- Create sensor helper functions
- Fix timer tests (T1.1, T1.2, T1.3)
- Fix field name mismatches
- Tackle webhook tests (simpler, no sensors needed)

If pivoting to integration tests:
- Create a new `tests/integration/` directory
- Write API endpoint tests
- Test basic automation flows
- Document E2E test limitations for future work
## Files Modified Today

- `tests/helpers/polling.py` - Added `wait_for_execution_completion()`
- `tests/helpers/__init__.py` - Added 10 exports
- `tests/helpers/client.py` - Fixed `register_pack()`, `create_pack()`, `create_secret()`, `create_trigger()`
- `tests/helpers/fixtures.py` - Updated `create_test_pack()`
- `work-summary/PROBLEM.md` - Documented issues
- `CHANGELOG.md` - Added fix entries
**Bottom Line:** The E2E tests need significant rework to match the actual API implementation. Recommend validating the sensor architecture before investing more time in test fixes.