working out the worker/execution interface
This commit is contained in:
270
packs/core/DEPENDENCIES.md
Normal file
@@ -0,0 +1,270 @@
# Core Pack Dependencies

**Philosophy:** The core pack has **zero runtime dependencies** beyond standard system utilities.

## Why Zero Dependencies?

1. **Portability:** Works in any environment with standard Unix utilities
2. **Reliability:** No version conflicts, no package installation failures
3. **Security:** Minimal attack surface, no third-party library vulnerabilities
4. **Performance:** Fast startup, no runtime initialization overhead
5. **Simplicity:** Easy to audit, test, and maintain

## Required System Utilities

All core pack actions rely only on utilities available in standard Linux/Unix environments:

| Utility | Purpose | Used By |
|---------|---------|---------|
| `bash` | Shell scripting | All shell actions |
| `jq` | JSON parsing/generation | All actions (parameter handling) |
| `curl` | HTTP client | `http_request.sh` |
| Standard Unix tools | Text processing, file operations | Various actions |

These utilities are:
- ✅ Pre-installed in all Attune worker containers
- ✅ Standard across Linux distributions
- ✅ Stable, well-tested, and widely used
- ✅ Available via package managers if needed
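Because the table above is the complete utility surface, an action can verify its own prerequisites up front. A minimal preflight sketch (the utility list mirrors the table; adjust per action):

```bash
#!/bin/bash
# Preflight: verify the standard utilities from the table are on PATH.
missing=0
for tool in bash jq curl; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "ERROR: required utility '$tool' not found" >&2
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all required utilities present"
exit "$missing"
```

In practice this check is rarely needed inside worker containers (the utilities are pre-installed), but it makes local test failures obvious.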
## No Runtime Dependencies

The core pack **does not require:**
- ❌ Python interpreter or packages
- ❌ Node.js runtime or npm modules
- ❌ Ruby, Perl, or other scripting languages
- ❌ Third-party libraries or frameworks
- ❌ Package installations at runtime

## Action Implementation Guidelines

### ✅ Preferred Approaches

**Use bash + standard utilities:**
```bash
#!/bin/bash
# Read params with jq
INPUT=$(cat)
PARAM=$(echo "$INPUT" | jq -r '.param // "default"')

# Process with standard tools
RESULT=$(echo "$PARAM" | tr '[:lower:]' '[:upper:]')

# Output with jq
jq -n --arg result "$RESULT" '{result: $result}'
```
**Use curl for HTTP:**
```bash
# Make HTTP requests with curl
curl -s -X POST "$URL" \
  -H "Content-Type: application/json" \
  -d '{"key": "value"}'
```

**Use jq for JSON processing:**
```bash
# Parse JSON responses
echo "$RESPONSE" | jq '.data.items[] | .name'

# Generate JSON output
jq -n \
  --arg status "success" \
  --argjson count 42 \
  '{status: $status, count: $count}'
```

### ❌ Avoid

**Don't add runtime dependencies:**
```bash
# ❌ DON'T DO THIS
pip install requests
python3 script.py

# ❌ DON'T DO THIS
npm install axios
node script.js

# ❌ DON'T DO THIS
gem install httparty
ruby script.rb
```

**Don't use language-specific features:**
```python
# ❌ DON'T DO THIS in core pack
#!/usr/bin/env python3
import requests  # External dependency!
response = requests.get(url)
```

Instead, use bash + curl:
```bash
# ✅ DO THIS in core pack
#!/bin/bash
response=$(curl -s "$url")
```
## When Runtime Dependencies Are Acceptable

For **custom packs** (not core pack), runtime dependencies are fine:
- ✅ Pack-specific Python libraries (installed in pack virtualenv)
- ✅ Pack-specific npm modules (installed in pack node_modules)
- ✅ Language runtimes (Python, Node.js) for complex logic
- ✅ Specialized tools for specific integrations

The core pack serves as a foundation with zero dependencies. Custom packs can have dependencies managed via:
- `requirements.txt` for Python packages
- `package.json` for Node.js modules
- Pack runtime environments (isolated per pack)

## Migration from Runtime Dependencies

If an action currently uses a runtime dependency, consider:

1. **Can it be done with bash + standard utilities?**
   - Yes → Rewrite in bash
   - No → Consider if it belongs in core pack

2. **Is the functionality complex?**
   - Simple HTTP/JSON → Use curl + jq
   - Complex API client → Move to custom pack

3. **Is it a specialized integration?**
   - Yes → Move to integration-specific pack
   - No → Keep in core pack with bash implementation

### Example: http_request Migration

**Before (Python with dependency):**
```python
#!/usr/bin/env python3
import requests  # ❌ External dependency

response = requests.get(url, headers=headers)
print(response.json())
```

**After (Bash with standard utilities):**
```bash
#!/bin/bash
# ✅ No dependencies beyond curl + jq

response=$(curl -s -H "Authorization: Bearer $TOKEN" "$URL")
echo "$response" | jq '.'
```
## Testing Without Dependencies

Core pack actions can be tested anywhere with standard utilities:

```bash
# Local testing (no installation needed)
echo '{"param": "value"}' | ./action.sh

# Docker testing (minimal base image, script mounted into the container)
echo '{"param": "value"}' | docker run --rm -i -v "$PWD:/work" alpine:latest sh -c '
  apk add --no-cache bash jq curl >/dev/null &&
  bash /work/action.sh
'

# CI/CD testing (standard tools available)
./action.sh < test-params.json
```
## Benefits Realized

### For Developers
- No dependency management overhead
- Immediate action execution (no runtime setup)
- Easy to test locally
- Simple to audit and debug

### For Operators
- No version conflicts between packs
- No package installation failures
- Faster container startup
- Smaller container images

### For Security
- Minimal attack surface
- No third-party library vulnerabilities
- Easier to audit (standard tools only)
- Minimal supply chain exposure (standard system packages only)

### For Performance
- Fast action startup (no runtime initialization)
- Low memory footprint
- No package loading overhead
- Efficient resource usage
## Standard Utility Reference

### jq (JSON Processing)
```bash
# Parse input
VALUE=$(echo "$JSON" | jq -r '.key')

# Generate output
jq -n --arg val "$VALUE" '{result: $val}'

# Transform data
echo "$JSON" | jq '.items[] | select(.active)'
```

### curl (HTTP Client)
```bash
# GET request
curl -s "$URL"

# POST with JSON
curl -s -X POST "$URL" \
  -H "Content-Type: application/json" \
  -d '{"key": "value"}'

# With authentication
curl -s -H "Authorization: Bearer $TOKEN" "$URL"
```

### Standard Text Tools
```bash
# grep - Pattern matching
echo "$TEXT" | grep "pattern"

# sed - Text transformation
echo "$TEXT" | sed 's/old/new/g'

# awk - Text processing
echo "$TEXT" | awk '{print $1}'

# tr - Character translation
echo "$TEXT" | tr '[:lower:]' '[:upper:]'
```
## Future Considerations

The core pack will:
- ✅ Continue to have zero runtime dependencies
- ✅ Use only standard Unix utilities
- ✅ Serve as a reference implementation
- ✅ Provide foundational actions for workflows

Custom packs may:
- ✅ Have runtime dependencies (Python, Node.js, etc.)
- ✅ Use specialized libraries for integrations
- ✅ Require specific tools or SDKs
- ✅ Manage dependencies via pack environments

## Summary

**Core Pack = Zero Dependencies + Standard Utilities**

This philosophy ensures the core pack is:
- Portable across all environments
- Reliable without version conflicts
- Secure with minimal attack surface
- Performant with fast startup
- Simple to test and maintain

For actions requiring runtime dependencies, create custom packs with proper dependency management via `requirements.txt`, `package.json`, or similar mechanisms.
321
packs/core/actions/README.md
Normal file
@@ -0,0 +1,321 @@
# Core Pack Actions

## Overview

All actions in the core pack follow Attune's secure-by-design architecture:
- **Parameter delivery:** stdin (JSON format) - never environment variables
- **Output format:** Explicitly declared (text, json, or yaml)
- **Output schema:** Describes structured data shape (json/yaml only)
- **Execution metadata:** Automatically captured (stdout/stderr/exit_code)

## Parameter Delivery Method

**All actions:**
- Read parameters from **stdin** as JSON
- Use `parameter_delivery: stdin` and `parameter_format: json` in their YAML definitions
- **DO NOT** use environment variables for parameters

## Output Format

**All actions must specify an `output_format`:**
- `text` - Plain text output (stored as-is, no parsing)
- `json` - JSON structured data (parsed into JSONB field)
- `yaml` - YAML structured data (parsed into JSONB field)

**Output schema:**
- Only applicable for `json` and `yaml` formats
- Describes the structure of data written to stdout
- **Should NOT include** stdout/stderr/exit_code (captured automatically)
## Environment Variables
|
||||
|
||||
### Standard Environment Variables (Provided by Worker)
|
||||
|
||||
The worker automatically provides these environment variables to all action executions:
|
||||
|
||||
| Variable | Description | Always Present |
|
||||
|----------|-------------|----------------|
|
||||
| `ATTUNE_ACTION` | Action ref (e.g., `core.http_request`) | ✅ Yes |
|
||||
| `ATTUNE_EXEC_ID` | Execution database ID | ✅ Yes |
|
||||
| `ATTUNE_API_TOKEN` | Execution-scoped API token | ✅ Yes |
|
||||
| `ATTUNE_RULE` | Rule ref that triggered execution | ❌ Only if from rule |
|
||||
| `ATTUNE_TRIGGER` | Trigger ref that caused enforcement | ❌ Only if from trigger |
|
||||
|
||||
**Use cases:**
|
||||
- Logging with execution context
|
||||
- Calling Attune API (using `ATTUNE_API_TOKEN`)
|
||||
- Conditional logic based on rule/trigger
|
||||
- Creating child executions
|
||||
- Accessing secrets via API
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# Log with context
|
||||
echo "[$ATTUNE_ACTION] [Exec: $ATTUNE_EXEC_ID] Processing..." >&2
|
||||
|
||||
# Call Attune API
|
||||
curl -s -H "Authorization: Bearer $ATTUNE_API_TOKEN" \
|
||||
"$ATTUNE_API_URL/api/v1/executions/$ATTUNE_EXEC_ID"
|
||||
|
||||
# Conditional behavior
|
||||
if [ -n "$ATTUNE_RULE" ]; then
|
||||
echo "Triggered by rule: $ATTUNE_RULE" >&2
|
||||
fi
|
||||
```
|
||||
|
||||
See [Execution Environment Variables](../../../docs/QUICKREF-execution-environment.md) for complete documentation.
### Custom Environment Variables (Optional)

Custom environment variables can be set via the `execution.env_vars` field for:
- **Debug/logging controls** (e.g., `DEBUG=1`, `LOG_LEVEL=debug`)
- **Runtime configuration** (e.g., custom paths, feature flags)
- **Action-specific context** (non-sensitive execution context)

Environment variables should **NEVER** be used for:
- Action parameters (use stdin instead)
- Secrets or credentials (use `ATTUNE_API_TOKEN` to fetch from key vault)
- User-provided data (use stdin parameters)
## Implementation Patterns

### Bash/Shell Actions

Shell actions read JSON from stdin using `jq`:

```bash
#!/bin/bash
set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# Parse parameters using jq
PARAM1=$(echo "$INPUT" | jq -r '.param1 // "default_value"')
PARAM2=$(echo "$INPUT" | jq -r '.param2 // ""')

# Check for null values (optional parameters)
if [ -n "$PARAM2" ] && [ "$PARAM2" != "null" ]; then
  echo "Param2 provided: $PARAM2"
fi

# Use the parameters
echo "Param1: $PARAM1"
```
### Advanced Bash Actions

For more complex bash actions (like http_request.sh), use `curl` or other standard utilities:

```bash
#!/bin/bash
set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# Parse parameters
URL=$(echo "$INPUT" | jq -r '.url // ""')
METHOD=$(echo "$INPUT" | jq -r '.method // "GET"')

# Validate required parameters
if [ -z "$URL" ]; then
  echo "ERROR: url parameter is required" >&2
  exit 1
fi

# Make HTTP request with curl
RESPONSE=$(curl -s -X "$METHOD" "$URL")

# Output result as JSON
jq -n \
  --arg body "$RESPONSE" \
  --argjson success true \
  '{body: $body, success: $success}'
```
## Core Pack Actions

### Simple Actions

1. **echo.sh** - Outputs a message
2. **sleep.sh** - Pauses execution for a specified duration
3. **noop.sh** - Does nothing (useful for testing)

### HTTP Action

4. **http_request.sh** - Makes HTTP requests with authentication support (curl-based)

### Pack Management Actions (API Wrappers)

These actions wrap API endpoints and pass parameters to the Attune API:

5. **download_packs.sh** - Downloads packs from git/HTTP/registry
6. **build_pack_envs.sh** - Builds runtime environments for packs
7. **register_packs.sh** - Registers packs in the database
8. **get_pack_dependencies.sh** - Analyzes pack dependencies
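For orientation, a minimal implementation consistent with the stdin contract above might look like this for `echo.sh` (a sketch, not necessarily the shipped script):

```bash
#!/bin/bash
set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# message is optional; default to an empty string (prints an empty line)
MESSAGE=$(echo "$INPUT" | jq -r '.message // ""')

echo "$MESSAGE"
```

Running `echo '{"message": "hi"}' | ./echo.sh` prints `hi`.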
## Testing Actions Locally

You can test actions locally by piping JSON to stdin:

```bash
# Test echo action
echo '{"message": "Hello from stdin!"}' | ./echo.sh

# Test echo with no message (outputs empty line)
echo '{}' | ./echo.sh

# Test sleep action
echo '{"seconds": 2, "message": "Sleeping..."}' | ./sleep.sh

# Test http_request action
echo '{"url": "https://api.github.com", "method": "GET"}' | ./http_request.sh

# Test with file input
cat params.json | ./echo.sh
```
## Migration Summary

**Before (using environment variables):**
```bash
MESSAGE="${ATTUNE_ACTION_MESSAGE:-}"
```

**After (using stdin JSON):**
```bash
INPUT=$(cat)
MESSAGE=$(echo "$INPUT" | jq -r '.message // ""')
```

## Security Benefits

1. **No process exposure** - Parameters never appear in `ps` output or `/proc/<pid>/environ`
2. **Secure by default** - All actions use stdin, no special configuration needed
3. **Clear separation** - Action parameters vs. environment configuration
4. **Audit friendly** - All sensitive data flows through stdin, not the environment
## YAML Configuration

All action YAML files explicitly declare parameter delivery and output format:

```yaml
name: example_action
ref: core.example_action
runner_type: shell
entry_point: example.sh

# Parameter delivery: stdin for secure parameter passing (no env vars)
parameter_delivery: stdin
parameter_format: json

# Output format: text, json, or yaml
output_format: text

parameters:
  type: object
  properties:
    message:
      type: string
      description: "Message to output (empty string if not provided)"
  required: []

# Output schema: not applicable for text output format
# For json/yaml formats, describe the structure of data your action outputs
# Do NOT include stdout/stderr/exit_code - those are captured automatically
# Do NOT include generic "status" or "result" wrappers - output your data directly
```
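For a structured action, the same file would instead declare `output_format: json` plus an `output_schema`. An illustrative sketch (the action name and fields here are hypothetical, invented for the example):

```yaml
name: lookup_user
ref: core.lookup_user        # hypothetical action, for illustration only
runner_type: shell
entry_point: lookup_user.sh

parameter_delivery: stdin
parameter_format: json

# json output is parsed into the execution's JSONB field
output_format: json

parameters:
  type: object
  properties:
    user_id:
      type: string
      description: "ID of the user to look up"
  required:
    - user_id

# Describe only the action's own stdout data - no stdout/stderr/exit_code
output_schema:
  type: object
  properties:
    user_id:
      type: string
    display_name:
      type: string
```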
## Best Practices

### Parameters
1. **Always use stdin** for action parameters
2. **Use jq for bash** scripts to parse JSON
3. **Handle null values** - Use jq's `// "default"` operator to provide defaults
4. **Provide sensible defaults** - Use empty string, 0, false, or empty array/object as appropriate
5. **Validate required params** - Exit with error if required parameters are missing (when truly required)
6. **Mark secrets** - Use `secret: true` in YAML for sensitive parameters
7. **Never use env vars for parameters** - Parameters come from stdin, not environment
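Points 3 and 4 hinge on jq's `//` alternative operator, which substitutes the right-hand side when a key is absent or explicitly null. A small illustration (the keys are made up):

```bash
#!/bin/bash
INPUT='{"present": "value", "explicit_null": null}'

# `//` falls through to the right-hand side on null or missing keys
echo "$INPUT" | jq -r '.present // "fallback"'        # value
echo "$INPUT" | jq -r '.explicit_null // "fallback"'  # fallback
echo "$INPUT" | jq -r '.missing // "fallback"'        # fallback
```

Note that `jq -r` prints the literal string `null` for a bare `.key` without `//`, which is why some scripts also defensively compare against the string `"null"`.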
### Environment Variables
|
||||
1. **Use standard ATTUNE_* variables** - Worker provides execution context
|
||||
2. **Access API with ATTUNE_API_TOKEN** - Execution-scoped authentication
|
||||
3. **Log with context** - Include `ATTUNE_ACTION` and `ATTUNE_EXEC_ID` in logs
|
||||
4. **Custom env vars via execution.env_vars** - For debug flags and configuration only
|
||||
5. **Never log ATTUNE_API_TOKEN** - Security sensitive
|
||||
6. **Check ATTUNE_RULE/ATTUNE_TRIGGER** - Conditional behavior for automated vs manual
|
||||
7. **Use env vars for runtime context** - Not for user data or parameters
|
||||
|
||||
### Output Format
|
||||
1. **Specify output_format** - Always set to "text", "json", or "yaml"
|
||||
2. **Use text for simple output** - Messages, logs, unstructured data
|
||||
3. **Use json for structured data** - API responses, complex results
|
||||
4. **Use yaml for readable config** - Human-readable structured output
|
||||
5. **Define schema for structured output** - Only for json/yaml formats
|
||||
6. **Don't include execution metadata** - No stdout/stderr/exit_code in schema
|
||||
7. **Use stderr for errors** - Diagnostic messages go to stderr, not stdout
|
||||
8. **Return proper exit codes** - 0 for success, non-zero for failure
|
||||
|
||||
## Dependencies

Core pack actions have **no runtime dependencies beyond standard system utilities**:
- **Bash actions**: Require only `jq` (for JSON parsing) and `curl` (for HTTP requests)
- Both `jq` and `curl` are standard utilities available in all Attune worker containers
- **No Python, Node.js, or other language runtimes required**

## Execution Metadata (Automatic)

The following are **automatically captured** by the worker and should **NOT** be included in output schemas:

- `stdout` - Raw standard output (captured as-is)
- `stderr` - Standard error output (written to log file)
- `exit_code` - Process exit code (0 = success)
- `duration_ms` - Execution duration in milliseconds

These are execution system concerns, not action output concerns.
## Example: Using Environment Variables and Parameters

```bash
#!/bin/bash
set -e
set -o pipefail

# Standard environment variables (provided by worker)
echo "[$ATTUNE_ACTION] [Exec: $ATTUNE_EXEC_ID] Starting execution" >&2

# Read action parameters from stdin
INPUT=$(cat)
URL=$(echo "$INPUT" | jq -r '.url // ""')

if [ -z "$URL" ]; then
  echo "ERROR: url parameter is required" >&2
  exit 1
fi

# Log execution context
if [ -n "$ATTUNE_RULE" ]; then
  echo "Triggered by rule: $ATTUNE_RULE" >&2
fi

# Make request
RESPONSE=$(curl -s "$URL")

# Output result
echo "$RESPONSE"

echo "[$ATTUNE_ACTION] [Exec: $ATTUNE_EXEC_ID] Completed successfully" >&2
exit 0
```
## Future Considerations

- Consider adding a bash library for common parameter parsing patterns
- Add parameter validation helpers
- Create templates for new actions in different languages
- Add output schema validation tooling
- Add helper functions for API interaction using ATTUNE_API_TOKEN
102
packs/core/actions/build_pack_envs.sh
Normal file
@@ -0,0 +1,102 @@
#!/bin/bash
# Build Pack Environments Action - API Wrapper
# Thin wrapper around POST /api/v1/packs/build-envs

set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# Parse parameters using jq
PACK_PATHS=$(echo "$INPUT" | jq -c '.pack_paths // []')
PACKS_BASE_DIR=$(echo "$INPUT" | jq -r '.packs_base_dir // "/opt/attune/packs"')
PYTHON_VERSION=$(echo "$INPUT" | jq -r '.python_version // "3.11"')
NODEJS_VERSION=$(echo "$INPUT" | jq -r '.nodejs_version // "20"')
SKIP_PYTHON=$(echo "$INPUT" | jq -r '.skip_python // false')
SKIP_NODEJS=$(echo "$INPUT" | jq -r '.skip_nodejs // false')
FORCE_REBUILD=$(echo "$INPUT" | jq -r '.force_rebuild // false')
TIMEOUT=$(echo "$INPUT" | jq -r '.timeout // 600')
API_URL=$(echo "$INPUT" | jq -r '.api_url // "http://localhost:8080"')
API_TOKEN=$(echo "$INPUT" | jq -r '.api_token // ""')

# Validate required parameters: pack_paths must be a non-empty array.
# On an empty list, emit an empty schema-shaped result and exit non-zero.
PACK_COUNT=$(echo "$PACK_PATHS" | jq -r 'length' 2>/dev/null || echo "0")
if [[ "$PACK_COUNT" -eq 0 ]]; then
  echo '{"built_environments":[],"failed_environments":[],"summary":{"total_packs":0,"success_count":0,"failure_count":0,"python_envs_built":0,"nodejs_envs_built":0,"total_duration_ms":0}}'
  exit 1
fi
# Build request body
|
||||
REQUEST_BODY=$(jq -n \
|
||||
--argjson pack_paths "$PACK_PATHS" \
|
||||
--arg packs_base_dir "$PACKS_BASE_DIR" \
|
||||
--arg python_version "$PYTHON_VERSION" \
|
||||
--arg nodejs_version "$NODEJS_VERSION" \
|
||||
--argjson skip_python "$([[ "$SKIP_PYTHON" == "true" ]] && echo true || echo false)" \
|
||||
--argjson skip_nodejs "$([[ "$SKIP_NODEJS" == "true" ]] && echo true || echo false)" \
|
||||
--argjson force_rebuild "$([[ "$FORCE_REBUILD" == "true" ]] && echo true || echo false)" \
|
||||
--argjson timeout "$TIMEOUT" \
|
||||
'{
|
||||
pack_paths: $pack_paths,
|
||||
packs_base_dir: $packs_base_dir,
|
||||
python_version: $python_version,
|
||||
nodejs_version: $nodejs_version,
|
||||
skip_python: $skip_python,
|
||||
skip_nodejs: $skip_nodejs,
|
||||
force_rebuild: $force_rebuild,
|
||||
timeout: $timeout
|
||||
}')
|
||||
|
||||
# Make API call
|
||||
CURL_ARGS=(
|
||||
-X POST
|
||||
-H "Content-Type: application/json"
|
||||
-H "Accept: application/json"
|
||||
-d "$REQUEST_BODY"
|
||||
-s
|
||||
-w "\n%{http_code}"
|
||||
--max-time $((TIMEOUT + 30))
|
||||
--connect-timeout 10
|
||||
)
|
||||
|
||||
if [[ -n "$API_TOKEN" ]] && [[ "$API_TOKEN" != "null" ]]; then
|
||||
CURL_ARGS+=(-H "Authorization: Bearer ${API_TOKEN}")
|
||||
fi
|
||||
|
||||
RESPONSE=$(curl "${CURL_ARGS[@]}" "${API_URL}/api/v1/packs/build-envs" 2>/dev/null || echo -e "\n000")

# Extract status code (last line); body is everything before it
HTTP_CODE=$(echo "$RESPONSE" | tail -n 1)
BODY=$(echo "$RESPONSE" | head -n -1)

# Check HTTP status
if [[ "$HTTP_CODE" -ge 200 ]] && [[ "$HTTP_CODE" -lt 300 ]]; then
  # Extract data field from API response
  echo "$BODY" | jq -r '.data // .'
  exit 0
else
  # Error response
  ERROR_MSG=$(echo "$BODY" | jq -r '.error // .message // "API request failed"' 2>/dev/null || echo "API request failed")

  cat <<EOF
{
  "built_environments": [],
  "failed_environments": [{
    "pack_ref": "api",
    "pack_path": "",
    "runtime": "unknown",
    "error": "API call failed (HTTP $HTTP_CODE): $ERROR_MSG"
  }],
  "summary": {
    "total_packs": 0,
    "success_count": 0,
    "failure_count": 1,
    "python_envs_built": 0,
    "nodejs_envs_built": 0,
    "total_duration_ms": 0
  }
}
EOF
  exit 1
fi
165
packs/core/actions/build_pack_envs.yaml
Normal file
@@ -0,0 +1,165 @@
# Build Pack Environments Action
# Creates runtime environments and installs dependencies for packs

name: build_pack_envs
ref: core.build_pack_envs
description: "Build runtime environments for packs and install declared dependencies (Python requirements.txt, Node.js package.json)"
enabled: true
runner_type: shell
entry_point: build_pack_envs.sh

# Parameter delivery: stdin for secure parameter passing (no env vars)
parameter_delivery: stdin
parameter_format: json

# Output format: json (structured data parsing enabled)
output_format: json

# Action parameters schema
parameters:
  type: object
  properties:
    pack_paths:
      type: array
      description: "List of pack directory paths to build environments for"
      items:
        type: string
      minItems: 1
    packs_base_dir:
      type: string
      description: "Base directory where packs are installed"
      default: "/opt/attune/packs"
    python_version:
      type: string
      description: "Python version to use for virtualenvs"
      default: "3.11"
    nodejs_version:
      type: string
      description: "Node.js version to use"
      default: "20"
    skip_python:
      type: boolean
      description: "Skip building Python environments"
      default: false
    skip_nodejs:
      type: boolean
      description: "Skip building Node.js environments"
      default: false
    force_rebuild:
      type: boolean
      description: "Force rebuild of existing environments"
      default: false
    timeout:
      type: integer
      description: "Timeout in seconds for building each environment"
      default: 600
      minimum: 60
      maximum: 3600
  required:
    - pack_paths

# Output schema: describes the JSON structure written to stdout
# Note: stdout/stderr/exit_code are captured automatically by the execution system
output_schema:
  type: object
  properties:
    built_environments:
      type: array
      description: "List of successfully built environments"
      items:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference"
          pack_path:
            type: string
            description: "Pack directory path"
          environments:
            type: object
            description: "Built environments for this pack"
            properties:
              python:
                type: object
                description: "Python environment details"
                properties:
                  virtualenv_path:
                    type: string
                    description: "Path to Python virtualenv"
                  requirements_installed:
                    type: boolean
                    description: "Whether requirements.txt was installed"
                  package_count:
                    type: integer
                    description: "Number of packages installed"
                  python_version:
                    type: string
                    description: "Python version used"
              nodejs:
                type: object
                description: "Node.js environment details"
                properties:
                  node_modules_path:
                    type: string
                    description: "Path to node_modules directory"
                  dependencies_installed:
                    type: boolean
                    description: "Whether package.json was installed"
                  package_count:
                    type: integer
                    description: "Number of packages installed"
                  nodejs_version:
                    type: string
                    description: "Node.js version used"
          duration_ms:
            type: integer
            description: "Time taken to build environments in milliseconds"
    failed_environments:
      type: array
      description: "List of packs where environment build failed"
      items:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference"
          pack_path:
            type: string
            description: "Pack directory path"
          runtime:
            type: string
            description: "Runtime that failed (python or nodejs)"
          error:
            type: string
            description: "Error message"
    summary:
      type: object
      description: "Summary of environment build process"
      properties:
        total_packs:
          type: integer
          description: "Total number of packs processed"
        success_count:
          type: integer
          description: "Number of packs with successful builds"
        failure_count:
          type: integer
          description: "Number of packs with failed builds"
        python_envs_built:
          type: integer
          description: "Number of Python environments built"
        nodejs_envs_built:
          type: integer
          description: "Number of Node.js environments built"
        total_duration_ms:
          type: integer
          description: "Total time taken for all builds in milliseconds"

# Tags for categorization
tags:
  - pack
  - environment
  - dependencies
  - python
  - nodejs
  - installation
86
packs/core/actions/download_packs.sh
Normal file
@@ -0,0 +1,86 @@
#!/bin/bash
# Download Packs Action - API Wrapper
# Thin wrapper around POST /api/v1/packs/download

set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# Parse parameters using jq
PACKS=$(echo "$INPUT" | jq -c '.packs // []')
DESTINATION_DIR=$(echo "$INPUT" | jq -r '.destination_dir // ""')
REGISTRY_URL=$(echo "$INPUT" | jq -r '.registry_url // "https://registry.attune.io/index.json"')
REF_SPEC=$(echo "$INPUT" | jq -r '.ref_spec // ""')
TIMEOUT=$(echo "$INPUT" | jq -r '.timeout // 300')
VERIFY_SSL=$(echo "$INPUT" | jq -r '.verify_ssl // true')
API_URL=$(echo "$INPUT" | jq -r '.api_url // "http://localhost:8080"')
API_TOKEN=$(echo "$INPUT" | jq -r '.api_token // ""')

# Validate required parameters
if [[ -z "$DESTINATION_DIR" ]] || [[ "$DESTINATION_DIR" == "null" ]]; then
    echo '{"downloaded_packs":[],"failed_packs":[{"source":"input","error":"destination_dir is required"}],"total_count":0,"success_count":0,"failure_count":1}' >&1
    exit 1
fi

# Build request body
REQUEST_BODY=$(jq -n \
    --argjson packs "$PACKS" \
    --arg destination_dir "$DESTINATION_DIR" \
    --arg registry_url "$REGISTRY_URL" \
    --argjson timeout "$TIMEOUT" \
    --argjson verify_ssl "$([[ "$VERIFY_SSL" == "true" ]] && echo true || echo false)" \
    '{
        packs: $packs,
        destination_dir: $destination_dir,
        registry_url: $registry_url,
        timeout: $timeout,
        verify_ssl: $verify_ssl
    }' | jq --arg ref_spec "$REF_SPEC" 'if $ref_spec != "" and $ref_spec != "null" then .ref_spec = $ref_spec else . end')

# Make API call
CURL_ARGS=(
    -X POST
    -H "Content-Type: application/json"
    -H "Accept: application/json"
    -d "$REQUEST_BODY"
    -s
    -w "\n%{http_code}"
    --max-time $((TIMEOUT + 30))
    --connect-timeout 10
)

if [[ -n "$API_TOKEN" ]] && [[ "$API_TOKEN" != "null" ]]; then
    CURL_ARGS+=(-H "Authorization: Bearer ${API_TOKEN}")
fi

RESPONSE=$(curl "${CURL_ARGS[@]}" "${API_URL}/api/v1/packs/download" 2>/dev/null || echo -e "\n000")

# Extract status code (last line)
HTTP_CODE=$(echo "$RESPONSE" | tail -n 1)
BODY=$(echo "$RESPONSE" | head -n -1)

# Check HTTP status
if [[ "$HTTP_CODE" -ge 200 ]] && [[ "$HTTP_CODE" -lt 300 ]]; then
    # Extract data field from API response
    echo "$BODY" | jq -r '.data // .'
    exit 0
else
    # Error response
    ERROR_MSG=$(echo "$BODY" | jq -r '.error // .message // "API request failed"' 2>/dev/null || echo "API request failed")

    cat <<EOF
{
    "downloaded_packs": [],
    "failed_packs": [{
        "source": "api",
        "error": "API call failed (HTTP $HTTP_CODE): $ERROR_MSG"
    }],
    "total_count": 0,
    "success_count": 0,
    "failure_count": 1
}
EOF
    exit 1
fi
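The `-w "\n%{http_code}"` / `tail` / `head` trick used above appends the status code as a final line after the response body. A minimal sketch of the splitting logic, with `printf` standing in for curl's output (the `RESPONSE`/`HTTP_CODE`/`BODY` names mirror the script; `sed '$d'` is a portable alternative to GNU `head -n -1`):

```shell
#!/bin/sh
# Simulate output produced by: curl -s -w "\n%{http_code}" ...
# The response body is followed by a newline and the HTTP status code.
RESPONSE=$(printf '{"data":{"ok":true}}\n200')

# The last line is the status code; everything before it is the body.
HTTP_CODE=$(echo "$RESPONSE" | tail -n 1)
BODY=$(echo "$RESPONSE" | sed '$d')

echo "code=$HTTP_CODE"
echo "body=$BODY"
```

This works because `$(...)` strips trailing newlines, so the status code is always the final line even when the body itself ends with newlines.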
120
packs/core/actions/download_packs.yaml
Normal file
@@ -0,0 +1,120 @@
# Download Packs Action
# Downloads packs from various sources (git repositories, HTTP archives, or pack registry)

name: download_packs
ref: core.download_packs
description: "Download packs from git repositories, HTTP archives, or pack registry to a temporary directory"
enabled: true
runner_type: shell
entry_point: download_packs.sh

# Parameter delivery: stdin for secure parameter passing (no env vars)
parameter_delivery: stdin
parameter_format: json

# Output format: json (structured data parsing enabled)
output_format: json

# Action parameters schema
parameters:
  type: object
  properties:
    packs:
      type: array
      description: "List of packs to download (git URLs, HTTP URLs, or pack refs)"
      items:
        type: string
      minItems: 1
    destination_dir:
      type: string
      description: "Destination directory for downloaded packs"
    registry_url:
      type: string
      description: "Pack registry URL for resolving pack refs (optional)"
      default: "https://registry.attune.io/index.json"
    ref_spec:
      type: string
      description: "Git reference to checkout (branch, tag, or commit) - applies to all git URLs"
    timeout:
      type: integer
      description: "Download timeout in seconds per pack"
      default: 300
      minimum: 10
      maximum: 3600
    verify_ssl:
      type: boolean
      description: "Verify SSL certificates for HTTPS downloads"
      default: true
    api_url:
      type: string
      description: "Attune API URL for making registry lookups"
      default: "http://localhost:8080"
  required:
    - packs
    - destination_dir

# Output schema: describes the JSON structure written to stdout
# Note: stdout/stderr/exit_code are captured automatically by the execution system
output_schema:
  type: object
  properties:
    downloaded_packs:
      type: array
      description: "List of successfully downloaded packs"
      items:
        type: object
        properties:
          source:
            type: string
            description: "Original pack source (URL or ref)"
          source_type:
            type: string
            description: "Type of source"
            enum:
              - git
              - http
              - registry
          pack_path:
            type: string
            description: "Local filesystem path to downloaded pack"
          pack_ref:
            type: string
            description: "Pack reference (from pack.yaml)"
          pack_version:
            type: string
            description: "Pack version (from pack.yaml)"
          git_commit:
            type: string
            description: "Git commit hash (for git sources)"
          checksum:
            type: string
            description: "Directory checksum"
    failed_packs:
      type: array
      description: "List of packs that failed to download"
      items:
        type: object
        properties:
          source:
            type: string
            description: "Pack source that failed"
          error:
            type: string
            description: "Error message"
    total_count:
      type: integer
      description: "Total number of packs requested"
    success_count:
      type: integer
      description: "Number of packs successfully downloaded"
    failure_count:
      type: integer
      description: "Number of packs that failed"

# Tags for categorization
tags:
  - pack
  - download
  - git
  - installation
  - registry
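Given this schema, a minimal stdin payload for the action might look like the following (pack sources, paths, and the timeout are illustrative values, not taken from the repo):

```json
{
  "packs": [
    "https://github.com/example/my-pack.git",
    "core"
  ],
  "destination_dir": "/tmp/attune-packs",
  "timeout": 120
}
```

Omitted fields such as `registry_url` and `verify_ssl` fall back to the defaults declared above.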
@@ -1,21 +1,42 @@
-#!/bin/bash
+#!/bin/sh
 # Echo Action - Core Pack
-# Outputs a message to stdout with optional uppercase conversion
+# Outputs a message to stdout
+#
+# This script uses pure POSIX shell without external dependencies like jq or yq.
+# It reads parameters in DOTENV format from stdin until the delimiter.
 
 set -e
 
-# Parse parameters from environment variables
-# Attune passes action parameters as environment variables prefixed with ATTUNE_ACTION_
-MESSAGE="${ATTUNE_ACTION_MESSAGE:-Hello, World!}"
-UPPERCASE="${ATTUNE_ACTION_UPPERCASE:-false}"
+# Initialize message variable
+message=""
 
-# Convert to uppercase if requested
-if [ "$UPPERCASE" = "true" ]; then
-    MESSAGE=$(echo "$MESSAGE" | tr '[:lower:]' '[:upper:]')
-fi
+# Read DOTENV-formatted parameters from stdin until delimiter
+while IFS= read -r line; do
+    # Check for parameter delimiter
+    case "$line" in
+        *"---ATTUNE_PARAMS_END---"*)
+            break
+            ;;
+        message=*)
+            # Extract value after message=
+            message="${line#message=}"
+            # Remove quotes if present (both single and double)
+            case "$message" in
+                \"*\")
+                    message="${message#\"}"
+                    message="${message%\"}"
+                    ;;
+                \'*\')
+                    message="${message#\'}"
+                    message="${message%\'}"
+                    ;;
+            esac
+            ;;
+    esac
+done
 
-# Echo the message
-echo "$MESSAGE"
+# Echo the message (even if empty)
+echo "$message"
 
 # Exit successfully
 exit 0
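The DOTENV-over-stdin protocol used by the POSIX rewrite can be exercised end to end with a here-document standing in for the worker's parameter stream (the delimiter string and the quote-stripping expansions are taken from the script above; the payload values are illustrative):

```shell
#!/bin/sh
# Parse a DOTENV parameter stream, stopping at the delimiter line.
message=""
while IFS= read -r line; do
    case "$line" in
        *"---ATTUNE_PARAMS_END---"*) break ;;
        message=*)
            message="${line#message=}"
            # Strip surrounding double or single quotes, if any
            case "$message" in
                \"*\") message="${message#\"}"; message="${message%\"}" ;;
                \'*\') message="${message#\'}"; message="${message%\'}" ;;
            esac
            ;;
    esac
done <<'EOF'
message="hello world"
---ATTUNE_PARAMS_END---
this line is never read
EOF

echo "$message"
```

Note that `break` stops reading at the delimiter, so anything the caller writes after it is ignored by the parameter parser.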
@@ -12,37 +12,24 @@ runner_type: shell
 # Entry point is the shell command or script to execute
 entry_point: echo.sh
 
+# Parameter delivery: stdin for secure parameter passing (no env vars)
+parameter_delivery: stdin
+parameter_format: dotenv
+
+# Output format: text (no structured data parsing)
+output_format: text
+
 # Action parameters schema (standard JSON Schema format)
 parameters:
   type: object
   properties:
     message:
       type: string
-      description: "Message to echo"
-      default: "Hello, World!"
-    uppercase:
-      type: boolean
-      description: "Convert message to uppercase before echoing"
-      default: false
-  required:
-    - message
+      description: "Message to echo (empty string if not provided)"
+  required: []
 
-# Output schema
-output_schema:
-  type: object
-  properties:
-    stdout:
-      type: string
-      description: "Standard output from the echo command"
-    stderr:
-      type: string
-      description: "Standard error output (usually empty)"
-    exit_code:
-      type: integer
-      description: "Exit code of the command (0 = success)"
-    result:
-      type: string
-      description: "The echoed message"
+# Output schema: not applicable for text output format
+# The action outputs plain text to stdout
 
 # Tags for categorization
 tags:
 
77
packs/core/actions/get_pack_dependencies.sh
Normal file
@@ -0,0 +1,77 @@
#!/bin/bash
# Get Pack Dependencies Action - API Wrapper
# Thin wrapper around POST /api/v1/packs/dependencies

set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# Parse parameters using jq
PACK_PATHS=$(echo "$INPUT" | jq -c '.pack_paths // []')
SKIP_VALIDATION=$(echo "$INPUT" | jq -r '.skip_validation // false')
API_URL=$(echo "$INPUT" | jq -r '.api_url // "http://localhost:8080"')
API_TOKEN=$(echo "$INPUT" | jq -r '.api_token // ""')

# Validate required parameters
PACK_COUNT=$(echo "$PACK_PATHS" | jq -r 'length' 2>/dev/null || echo "0")
if [[ "$PACK_COUNT" -eq 0 ]]; then
    echo '{"dependencies":[],"runtime_requirements":{},"missing_dependencies":[],"analyzed_packs":[],"errors":[{"pack_path":"input","error":"No pack paths provided"}]}' >&1
    exit 1
fi

# Build request body
REQUEST_BODY=$(jq -n \
    --argjson pack_paths "$PACK_PATHS" \
    --argjson skip_validation "$([[ "$SKIP_VALIDATION" == "true" ]] && echo true || echo false)" \
    '{
        pack_paths: $pack_paths,
        skip_validation: $skip_validation
    }')

# Make API call
CURL_ARGS=(
    -X POST
    -H "Content-Type: application/json"
    -H "Accept: application/json"
    -d "$REQUEST_BODY"
    -s
    -w "\n%{http_code}"
    --max-time 60
    --connect-timeout 10
)

if [[ -n "$API_TOKEN" ]] && [[ "$API_TOKEN" != "null" ]]; then
    CURL_ARGS+=(-H "Authorization: Bearer ${API_TOKEN}")
fi

RESPONSE=$(curl "${CURL_ARGS[@]}" "${API_URL}/api/v1/packs/dependencies" 2>/dev/null || echo -e "\n000")

# Extract status code (last line)
HTTP_CODE=$(echo "$RESPONSE" | tail -n 1)
BODY=$(echo "$RESPONSE" | head -n -1)

# Check HTTP status
if [[ "$HTTP_CODE" -ge 200 ]] && [[ "$HTTP_CODE" -lt 300 ]]; then
    # Extract data field from API response
    echo "$BODY" | jq -r '.data // .'
    exit 0
else
    # Error response
    ERROR_MSG=$(echo "$BODY" | jq -r '.error // .message // "API request failed"' 2>/dev/null || echo "API request failed")

    cat <<EOF
{
    "dependencies": [],
    "runtime_requirements": {},
    "missing_dependencies": [],
    "analyzed_packs": [],
    "errors": [{
        "pack_path": "api",
        "error": "API call failed (HTTP $HTTP_CODE): $ERROR_MSG"
    }]
}
EOF
    exit 1
fi
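The conditional `Authorization` header above relies on bash arrays (`CURL_ARGS+=(...)`), which keep multi-word arguments like `Authorization: Bearer x` intact through expansion. The same pattern can be sketched in pure POSIX shell using the positional parameters (the `API_TOKEN` value is hypothetical, for illustration only):

```shell
#!/bin/sh
# POSIX equivalent of the bash-array pattern: accumulate arguments
# in the positional parameters via `set --`.
set -- -X POST -H "Content-Type: application/json"

API_TOKEN="example-token"   # hypothetical value for illustration
if [ -n "$API_TOKEN" ] && [ "$API_TOKEN" != "null" ]; then
    set -- "$@" -H "Authorization: Bearer ${API_TOKEN}"
fi

# Record the count and the last argument so they can be inspected.
NARGS=$#
eval "LAST=\${$#}"
echo "$NARGS arguments; last: $LAST"
```

In both forms, `"$@"` (or `"${CURL_ARGS[@]}"` in bash) expands each element as a single word, which is what keeps header values with spaces from being split.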
142
packs/core/actions/get_pack_dependencies.yaml
Normal file
@@ -0,0 +1,142 @@
# Get Pack Dependencies Action
# Parses pack.yaml files to identify pack and runtime dependencies

name: get_pack_dependencies
ref: core.get_pack_dependencies
description: "Parse pack.yaml files to extract pack dependencies and runtime requirements"
enabled: true
runner_type: shell
entry_point: get_pack_dependencies.sh

# Parameter delivery: stdin for secure parameter passing (no env vars)
parameter_delivery: stdin
parameter_format: json

# Output format: json (structured data parsing enabled)
output_format: json

# Action parameters schema
parameters:
  type: object
  properties:
    pack_paths:
      type: array
      description: "List of pack directory paths to analyze"
      items:
        type: string
      minItems: 1
    skip_validation:
      type: boolean
      description: "Skip validation of pack.yaml schema"
      default: false
    api_url:
      type: string
      description: "Attune API URL for checking installed packs"
      default: "http://localhost:8080"
  required:
    - pack_paths

# Output schema: describes the JSON structure written to stdout
# Note: stdout/stderr/exit_code are captured automatically by the execution system
output_schema:
  type: object
  properties:
    dependencies:
      type: array
      description: "List of pack dependencies that need to be installed"
      items:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference (e.g., 'core', 'slack')"
          version_spec:
            type: string
            description: "Version specification (e.g., '>=1.0.0', '^2.1.0')"
          required_by:
            type: string
            description: "Pack that requires this dependency"
          already_installed:
            type: boolean
            description: "Whether this dependency is already installed"
    runtime_requirements:
      type: object
      description: "Runtime environment requirements by pack"
      additionalProperties:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference"
          python:
            type: object
            description: "Python runtime requirements"
            properties:
              version:
                type: string
                description: "Python version requirement"
              requirements_file:
                type: string
                description: "Path to requirements.txt"
          nodejs:
            type: object
            description: "Node.js runtime requirements"
            properties:
              version:
                type: string
                description: "Node.js version requirement"
              package_file:
                type: string
                description: "Path to package.json"
    missing_dependencies:
      type: array
      description: "Pack dependencies that are not yet installed"
      items:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference"
          version_spec:
            type: string
            description: "Version specification"
          required_by:
            type: string
            description: "Pack that requires this dependency"
    analyzed_packs:
      type: array
      description: "List of packs that were analyzed"
      items:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference"
          pack_path:
            type: string
            description: "Path to pack directory"
          has_dependencies:
            type: boolean
            description: "Whether pack has dependencies"
          dependency_count:
            type: integer
            description: "Number of dependencies"
    errors:
      type: array
      description: "Errors encountered during analysis"
      items:
        type: object
        properties:
          pack_path:
            type: string
            description: "Pack path where error occurred"
          error:
            type: string
            description: "Error message"

# Tags for categorization
tags:
  - pack
  - dependencies
  - validation
  - installation
@@ -1,206 +0,0 @@
#!/usr/bin/env python3
"""
HTTP Request Action - Core Pack
Make HTTP requests to external APIs with support for various methods, headers, and authentication
"""

import json
import os
import sys
import time
from typing import Any, Dict, Optional

try:
    import requests
    from requests.auth import HTTPBasicAuth
except ImportError:
    print(
        "ERROR: requests library not installed. Run: pip install requests>=2.28.0",
        file=sys.stderr,
    )
    sys.exit(1)


def get_env_param(name: str, default: Any = None, required: bool = False) -> Any:
    """Get action parameter from environment variable."""
    env_key = f"ATTUNE_ACTION_{name.upper()}"
    value = os.environ.get(env_key, default)

    if required and value is None:
        raise ValueError(f"Required parameter '{name}' not provided")

    return value


def parse_json_param(name: str, default: Any = None) -> Any:
    """Parse JSON parameter from environment variable."""
    value = get_env_param(name)
    if value is None:
        return default

    try:
        return json.loads(value)
    except json.JSONDecodeError as e:
        raise ValueError(f"Invalid JSON for parameter '{name}': {e}")


def parse_bool_param(name: str, default: bool = False) -> bool:
    """Parse boolean parameter from environment variable."""
    value = get_env_param(name)
    if value is None:
        return default

    if isinstance(value, bool):
        return value

    return str(value).lower() in ("true", "1", "yes", "on")


def parse_int_param(name: str, default: int = 0) -> int:
    """Parse integer parameter from environment variable."""
    value = get_env_param(name)
    if value is None:
        return default

    try:
        return int(value)
    except (ValueError, TypeError):
        raise ValueError(f"Invalid integer for parameter '{name}': {value}")


def make_http_request() -> Dict[str, Any]:
    """Execute HTTP request with provided parameters."""

    # Parse required parameters
    url = get_env_param("url", required=True)

    # Parse optional parameters
    method = get_env_param("method", "GET").upper()
    headers = parse_json_param("headers", {})
    body = get_env_param("body")
    json_body = parse_json_param("json_body")
    query_params = parse_json_param("query_params", {})
    timeout = parse_int_param("timeout", 30)
    verify_ssl = parse_bool_param("verify_ssl", True)
    auth_type = get_env_param("auth_type", "none")
    follow_redirects = parse_bool_param("follow_redirects", True)
    max_redirects = parse_int_param("max_redirects", 10)

    # Prepare request kwargs
    request_kwargs = {
        "method": method,
        "url": url,
        "headers": headers,
        "params": query_params,
        "timeout": timeout,
        "verify": verify_ssl,
        "allow_redirects": follow_redirects,
    }

    # Handle authentication
    if auth_type == "basic":
        username = get_env_param("auth_username")
        password = get_env_param("auth_password")
        if username and password:
            request_kwargs["auth"] = HTTPBasicAuth(username, password)
    elif auth_type == "bearer":
        token = get_env_param("auth_token")
        if token:
            request_kwargs["headers"]["Authorization"] = f"Bearer {token}"

    # Handle request body
    if json_body is not None:
        request_kwargs["json"] = json_body
    elif body is not None:
        request_kwargs["data"] = body

    # Make the request
    start_time = time.time()

    try:
        response = requests.request(**request_kwargs)
        elapsed_ms = int((time.time() - start_time) * 1000)

        # Parse response
        result = {
            "status_code": response.status_code,
            "headers": dict(response.headers),
            "body": response.text,
            "elapsed_ms": elapsed_ms,
            "url": response.url,
            "success": 200 <= response.status_code < 300,
        }

        # Try to parse JSON response
        try:
            result["json"] = response.json()
        except (json.JSONDecodeError, ValueError):
            result["json"] = None

        return result

    except requests.exceptions.Timeout:
        return {
            "status_code": 0,
            "headers": {},
            "body": "",
            "json": None,
            "elapsed_ms": int((time.time() - start_time) * 1000),
            "url": url,
            "success": False,
            "error": "Request timeout",
        }
    except requests.exceptions.ConnectionError as e:
        return {
            "status_code": 0,
            "headers": {},
            "body": "",
            "json": None,
            "elapsed_ms": int((time.time() - start_time) * 1000),
            "url": url,
            "success": False,
            "error": f"Connection error: {str(e)}",
        }
    except requests.exceptions.RequestException as e:
        return {
            "status_code": 0,
            "headers": {},
            "body": "",
            "json": None,
            "elapsed_ms": int((time.time() - start_time) * 1000),
            "url": url,
            "success": False,
            "error": f"Request error: {str(e)}",
        }


def main():
    """Main entry point for the action."""
    try:
        result = make_http_request()

        # Output result as JSON
        print(json.dumps(result, indent=2))

        # Exit with success/failure based on HTTP status
        if result.get("success", False):
            sys.exit(0)
        else:
            # Non-2xx status code or error
            error = result.get("error")
            if error:
                print(f"ERROR: {error}", file=sys.stderr)
            else:
                print(
                    f"ERROR: HTTP request failed with status {result.get('status_code')}",
                    file=sys.stderr,
                )
            sys.exit(1)

    except Exception as e:
        print(f"ERROR: {str(e)}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
209
packs/core/actions/http_request.sh
Executable file
@@ -0,0 +1,209 @@
#!/bin/bash
# HTTP Request Action - Core Pack
# Make HTTP requests to external APIs using curl

set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# Parse required parameters
URL=$(echo "$INPUT" | jq -r '.url // ""')

if [ -z "$URL" ] || [ "$URL" = "null" ]; then
    echo "ERROR: 'url' parameter is required" >&2
    exit 1
fi

# Parse optional parameters
METHOD=$(echo "$INPUT" | jq -r '.method // "GET"' | tr '[:lower:]' '[:upper:]')
HEADERS=$(echo "$INPUT" | jq -r '.headers // {}')
BODY=$(echo "$INPUT" | jq -r '.body // ""')
JSON_BODY=$(echo "$INPUT" | jq -c '.json_body // null')
QUERY_PARAMS=$(echo "$INPUT" | jq -r '.query_params // {}')
TIMEOUT=$(echo "$INPUT" | jq -r '.timeout // 30')
VERIFY_SSL=$(echo "$INPUT" | jq -r '.verify_ssl // true')
AUTH_TYPE=$(echo "$INPUT" | jq -r '.auth_type // "none"')
FOLLOW_REDIRECTS=$(echo "$INPUT" | jq -r '.follow_redirects // true')
MAX_REDIRECTS=$(echo "$INPUT" | jq -r '.max_redirects // 10')

# Build URL with query parameters
FINAL_URL="$URL"
if [ "$QUERY_PARAMS" != "{}" ] && [ "$QUERY_PARAMS" != "null" ]; then
    QUERY_STRING=$(echo "$QUERY_PARAMS" | jq -r 'to_entries | map("\(.key)=\(.value | @uri)") | join("&")')
    if [[ "$FINAL_URL" == *"?"* ]]; then
        FINAL_URL="${FINAL_URL}&${QUERY_STRING}"
    else
        FINAL_URL="${FINAL_URL}?${QUERY_STRING}"
    fi
fi

# Build curl arguments array
CURL_ARGS=(
    -X "$METHOD"
    -s  # Silent mode
    -w "\n%{http_code}\n%{time_total}\n%{url_effective}\n"  # Write out metadata
    --max-time "$TIMEOUT"
    --connect-timeout 10
)

# Handle SSL verification
if [ "$VERIFY_SSL" = "false" ]; then
    CURL_ARGS+=(-k)
fi

# Handle redirects
if [ "$FOLLOW_REDIRECTS" = "true" ]; then
    CURL_ARGS+=(-L --max-redirs "$MAX_REDIRECTS")
fi

# Add headers
if [ "$HEADERS" != "{}" ] && [ "$HEADERS" != "null" ]; then
    while IFS= read -r header; do
        if [ -n "$header" ]; then
            CURL_ARGS+=(-H "$header")
        fi
    done < <(echo "$HEADERS" | jq -r 'to_entries | map("\(.key): \(.value)") | .[]')
fi

# Handle authentication
case "$AUTH_TYPE" in
    basic)
        AUTH_USERNAME=$(echo "$INPUT" | jq -r '.auth_username // ""')
        AUTH_PASSWORD=$(echo "$INPUT" | jq -r '.auth_password // ""')
        if [ -n "$AUTH_USERNAME" ] && [ "$AUTH_USERNAME" != "null" ]; then
            CURL_ARGS+=(-u "${AUTH_USERNAME}:${AUTH_PASSWORD}")
        fi
        ;;
    bearer)
        AUTH_TOKEN=$(echo "$INPUT" | jq -r '.auth_token // ""')
        if [ -n "$AUTH_TOKEN" ] && [ "$AUTH_TOKEN" != "null" ]; then
            CURL_ARGS+=(-H "Authorization: Bearer ${AUTH_TOKEN}")
        fi
        ;;
esac

# Handle request body
if [ "$JSON_BODY" != "null" ] && [ "$JSON_BODY" != "" ]; then
    CURL_ARGS+=(-H "Content-Type: application/json")
    CURL_ARGS+=(-d "$JSON_BODY")
elif [ -n "$BODY" ] && [ "$BODY" != "null" ]; then
    CURL_ARGS+=(-d "$BODY")
fi

# Capture start time
START_TIME=$(date +%s%3N)

# Make the request and capture response headers
TEMP_HEADERS=$(mktemp)
CURL_ARGS+=(--dump-header "$TEMP_HEADERS")

# Execute curl and capture output
set +e
RESPONSE=$(curl "${CURL_ARGS[@]}" "$FINAL_URL" 2>&1)
CURL_EXIT_CODE=$?
set -e

# Calculate elapsed time
END_TIME=$(date +%s%3N)
ELAPSED_MS=$((END_TIME - START_TIME))

# Parse curl output (last 3 lines are: http_code, time_total, url_effective)
BODY_OUTPUT=$(echo "$RESPONSE" | head -n -3)
HTTP_CODE=$(echo "$RESPONSE" | tail -n 3 | head -n 1 | tr -d '\r\n')
CURL_TIME=$(echo "$RESPONSE" | tail -n 2 | head -n 1 | tr -d '\r\n')
EFFECTIVE_URL=$(echo "$RESPONSE" | tail -n 1 | tr -d '\r\n')

# Ensure HTTP_CODE is numeric, default to 0 if not
if ! [[ "$HTTP_CODE" =~ ^[0-9]+$ ]]; then
    HTTP_CODE=0
fi

# If curl failed, handle error
if [ "$CURL_EXIT_CODE" -ne 0 ]; then
    ERROR_MSG="curl failed with exit code $CURL_EXIT_CODE"

    # Determine specific error
    case $CURL_EXIT_CODE in
        6) ERROR_MSG="Could not resolve host" ;;
        7) ERROR_MSG="Failed to connect to host" ;;
        28) ERROR_MSG="Request timeout" ;;
        35) ERROR_MSG="SSL/TLS connection error" ;;
        52) ERROR_MSG="Empty reply from server" ;;
        56) ERROR_MSG="Failure receiving network data" ;;
        *) ERROR_MSG="curl error code $CURL_EXIT_CODE" ;;
    esac

    # Output error result as JSON
    jq -n \
        --arg error "$ERROR_MSG" \
        --argjson elapsed "$ELAPSED_MS" \
        --arg url "$FINAL_URL" \
        '{
            status_code: 0,
            headers: {},
            body: "",
            json: null,
            elapsed_ms: $elapsed,
            url: $url,
            success: false,
            error: $error
        }'

    rm -f "$TEMP_HEADERS"
    exit 1
fi

# Parse response headers into JSON
HEADERS_JSON="{}"
if [ -f "$TEMP_HEADERS" ]; then
    # Skip the status line and parse headers
    HEADERS_JSON=$(grep -v "^HTTP/" "$TEMP_HEADERS" | grep ":" | sed 's/\r$//' | jq -R -s -c '
        split("\n") |
        map(select(length > 0)) |
        map(split(": "; "") | select(length > 1) | {key: .[0], value: (.[1:] | join(": "))}) |
        map({(.key): .value}) |
        add // {}
    ' || echo '{}')
    rm -f "$TEMP_HEADERS"
fi

# Ensure HEADERS_JSON is valid JSON
if ! echo "$HEADERS_JSON" | jq empty 2>/dev/null; then
    HEADERS_JSON="{}"
fi

# Determine if successful (2xx status code)
SUCCESS=false
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
    SUCCESS=true
fi

# Try to parse body as JSON
JSON_PARSED="null"
if [ -n "$BODY_OUTPUT" ] && echo "$BODY_OUTPUT" | jq empty 2>/dev/null; then
    JSON_PARSED=$(echo "$BODY_OUTPUT" | jq -c '.' || echo 'null')
fi

# Output result as JSON
jq -n \
    --argjson status_code "$HTTP_CODE" \
    --argjson headers "$HEADERS_JSON" \
    --arg body "$BODY_OUTPUT" \
    --argjson json "$JSON_PARSED" \
    --argjson elapsed "$ELAPSED_MS" \
    --arg url "$EFFECTIVE_URL" \
    --argjson success "$SUCCESS" \
    '{
        status_code: $status_code,
        headers: $headers,
        body: $body,
        json: $json,
        elapsed_ms: $elapsed,
        url: $url,
        success: $success
    }'

# Exit with success
exit 0
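The `?`-vs-`&` branching used when appending the query string (the script uses bash's `[[ "$FINAL_URL" == *"?"* ]]`) reduces to a small, self-contained check; a POSIX `case` sketch of the same logic, with example URLs that are purely illustrative:

```shell
#!/bin/sh
# Append a query string, choosing "?" or "&" depending on whether
# the URL already carries query parameters.
append_query() {
    url="$1"
    qs="$2"
    case "$url" in
        *\?*) echo "${url}&${qs}" ;;
        *)    echo "${url}?${qs}" ;;
    esac
}

append_query "https://api.example.com/items" "page=2"
append_query "https://api.example.com/items?sort=asc" "page=2"
```

Note this check only decides the separator; it does not URL-encode values, which the script delegates to jq's `@uri` filter.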
@@ -7,10 +7,18 @@ description: "Make HTTP requests to external APIs with support for various metho
 enabled: true
 
 # Runner type determines how the action is executed
-runner_type: python
+runner_type: shell
 
-# Entry point is the Python script to execute
-entry_point: http_request.py
+# Entry point is the bash script to execute
+entry_point: http_request.sh
 
+# Parameter delivery configuration (for security)
+# Use stdin + JSON for secure parameter passing (credentials won't appear in process list)
+parameter_delivery: stdin
+parameter_format: json
+
+# Output format: json (structured data parsing enabled)
+output_format: json
+
 # Action parameters schema (standard JSON Schema format)
 parameters:
@@ -84,7 +92,8 @@ parameters:
   required:
     - url
 
-# Output schema
+# Output schema: describes the JSON structure written to stdout
+# Note: stdout/stderr/exit_code are captured automatically by the execution system
 output_schema:
   type: object
   properties:
@@ -99,7 +108,7 @@ output_schema:
       description: "Response body as text"
     json:
       type: object
-      description: "Parsed JSON response (if applicable)"
+      description: "Parsed JSON response (if applicable, null otherwise)"
     elapsed_ms:
       type: integer
       description: "Request duration in milliseconds"
@@ -109,6 +118,9 @@ output_schema:
     success:
       type: boolean
       description: "Whether the request was successful (2xx status code)"
+    error:
+      type: string
+      description: "Error message if request failed (only present on failure)"
 
 # Tags for categorization
 tags:
 
@@ -1,31 +1,77 @@
-#!/bin/bash
+#!/bin/sh
 # No Operation Action - Core Pack
 # Does nothing - useful for testing and placeholder workflows
+#
+# This script uses pure POSIX shell without external dependencies like jq or yq.
+# It reads parameters in DOTENV format from stdin until the delimiter.
 
 set -e
 
-# Parse parameters from environment variables
-MESSAGE="${ATTUNE_ACTION_MESSAGE:-}"
-EXIT_CODE="${ATTUNE_ACTION_EXIT_CODE:-0}"
+# Initialize variables
+message=""
+exit_code="0"
 
-# Validate exit code parameter
-if ! [[ "$EXIT_CODE" =~ ^[0-9]+$ ]]; then
-    echo "ERROR: exit_code must be a positive integer" >&2
-    exit 1
-fi
+# Read DOTENV-formatted parameters from stdin until delimiter
+while IFS= read -r line; do
+    # Check for parameter delimiter
+    case "$line" in
+        *"---ATTUNE_PARAMS_END---"*)
+            break
+            ;;
+        message=*)
+            # Extract value after message=
+            message="${line#message=}"
+            # Remove quotes if present (both single and double)
+            case "$message" in
+                \"*\")
+                    message="${message#\"}"
+                    message="${message%\"}"
+                    ;;
+                \'*\')
+                    message="${message#\'}"
+                    message="${message%\'}"
+                    ;;
+            esac
+            ;;
+        exit_code=*)
+            # Extract value after exit_code=
+            exit_code="${line#exit_code=}"
+            # Remove quotes if present
+            case "$exit_code" in
+                \"*\")
+                    exit_code="${exit_code#\"}"
+                    exit_code="${exit_code%\"}"
+                    ;;
+                \'*\')
+                    exit_code="${exit_code#\'}"
+                    exit_code="${exit_code%\'}"
+                    ;;
+            esac
+            ;;
+    esac
+done
 
-if [ "$EXIT_CODE" -lt 0 ] || [ "$EXIT_CODE" -gt 255 ]; then
+# Validate exit code parameter (must be numeric)
+case "$exit_code" in
+    ''|*[!0-9]*)
+        echo "ERROR: exit_code must be a positive integer" >&2
+        exit 1
+        ;;
+esac
+
+# Validate exit code range (0-255)
+if [ "$exit_code" -lt 0 ] || [ "$exit_code" -gt 255 ]; then
     echo "ERROR: exit_code must be between 0 and 255" >&2
     exit 1
 fi
 
 # Log message if provided
-if [ -n "$MESSAGE" ]; then
-    echo "[NOOP] $MESSAGE"
+if [ -n "$message" ]; then
+    echo "[NOOP] $message"
 fi
 
 # Output result
 echo "No operation completed successfully"
 
 # Exit with specified code
-exit "$EXIT_CODE"
+exit "$exit_code"
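The stdin DOTENV protocol introduced above can be sketched in isolation. The delimiter and key names below come straight from the diff; the `parse` wrapper and sample values are illustrative only, and quote-stripping is elided for brevity:

```shell
#!/bin/sh
# Minimal DOTENV-over-stdin reader, mirroring the loop used by the noop action.
# parse() and the sample values are hypothetical; the delimiter string is real.
parse() {
    message=""
    exit_code="0"
    while IFS= read -r line; do
        case "$line" in
            *"---ATTUNE_PARAMS_END---"*) break ;;            # stop at delimiter
            message=*)   message="${line#message=}" ;;       # strip key prefix
            exit_code=*) exit_code="${line#exit_code=}" ;;
        esac
    done
    echo "${message}/${exit_code}"
}

parse <<'EOF'
message=hello
exit_code=3
---ATTUNE_PARAMS_END---
EOF
```

Because the worker appends a fixed delimiter line after the parameters, the action never has to guess where the parameter block ends.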
@@ -12,6 +12,13 @@ runner_type: shell
 # Entry point is the shell command or script to execute
 entry_point: noop.sh
 
+# Parameter delivery: stdin for secure parameter passing (no env vars)
+parameter_delivery: stdin
+parameter_format: dotenv
+
+# Output format: text (no structured data parsing)
+output_format: text
+
 # Action parameters schema (standard JSON Schema format)
 parameters:
   type: object
@@ -27,22 +34,8 @@ parameters:
       maximum: 255
   required: []
 
-# Output schema
-output_schema:
-  type: object
-  properties:
-    stdout:
-      type: string
-      description: "Standard output (empty unless message provided)"
-    stderr:
-      type: string
-      description: "Standard error output (usually empty)"
-    exit_code:
-      type: integer
-      description: "Exit code of the command"
-    result:
-      type: string
-      description: "Operation result"
+# Output schema: not applicable for text output format
+# The action outputs plain text to stdout
 
 # Tags for categorization
 tags:
92
packs/core/actions/register_packs.sh
Normal file
@@ -0,0 +1,92 @@
#!/bin/bash
# Register Packs Action - API Wrapper
# Thin wrapper around POST /api/v1/packs/register-batch

set -e
set -o pipefail

# Read JSON parameters from stdin
INPUT=$(cat)

# Parse parameters using jq
PACK_PATHS=$(echo "$INPUT" | jq -c '.pack_paths // []')
PACKS_BASE_DIR=$(echo "$INPUT" | jq -r '.packs_base_dir // "/opt/attune/packs"')
SKIP_VALIDATION=$(echo "$INPUT" | jq -r '.skip_validation // false')
SKIP_TESTS=$(echo "$INPUT" | jq -r '.skip_tests // false')
FORCE=$(echo "$INPUT" | jq -r '.force // false')
API_URL=$(echo "$INPUT" | jq -r '.api_url // "http://localhost:8080"')
API_TOKEN=$(echo "$INPUT" | jq -r '.api_token // ""')

# Validate required parameters
PACK_COUNT=$(echo "$PACK_PATHS" | jq -r 'length' 2>/dev/null || echo "0")
if [[ "$PACK_COUNT" -eq 0 ]]; then
    echo '{"registered_packs":[],"failed_packs":[{"pack_ref":"input","pack_path":"","error":"No pack paths provided","error_stage":"input_validation"}],"summary":{"total_packs":0,"success_count":0,"failure_count":1,"total_components":0,"duration_ms":0}}' >&1
    exit 1
fi

# Build request body
REQUEST_BODY=$(jq -n \
    --argjson pack_paths "$PACK_PATHS" \
    --arg packs_base_dir "$PACKS_BASE_DIR" \
    --argjson skip_validation "$([[ "$SKIP_VALIDATION" == "true" ]] && echo true || echo false)" \
    --argjson skip_tests "$([[ "$SKIP_TESTS" == "true" ]] && echo true || echo false)" \
    --argjson force "$([[ "$FORCE" == "true" ]] && echo true || echo false)" \
    '{
        pack_paths: $pack_paths,
        packs_base_dir: $packs_base_dir,
        skip_validation: $skip_validation,
        skip_tests: $skip_tests,
        force: $force
    }')

# Make API call
CURL_ARGS=(
    -X POST
    -H "Content-Type: application/json"
    -H "Accept: application/json"
    -d "$REQUEST_BODY"
    -s
    -w "\n%{http_code}"
    --max-time 300
    --connect-timeout 10
)

if [[ -n "$API_TOKEN" ]] && [[ "$API_TOKEN" != "null" ]]; then
    CURL_ARGS+=(-H "Authorization: Bearer ${API_TOKEN}")
fi

RESPONSE=$(curl "${CURL_ARGS[@]}" "${API_URL}/api/v1/packs/register-batch" 2>/dev/null || echo -e "\n000")

# Extract status code (last line)
HTTP_CODE=$(echo "$RESPONSE" | tail -n 1)
BODY=$(echo "$RESPONSE" | head -n -1)

# Check HTTP status
if [[ "$HTTP_CODE" -ge 200 ]] && [[ "$HTTP_CODE" -lt 300 ]]; then
    # Extract data field from API response
    echo "$BODY" | jq -r '.data // .'
    exit 0
else
    # Error response
    ERROR_MSG=$(echo "$BODY" | jq -r '.error // .message // "API request failed"' 2>/dev/null || echo "API request failed")

    cat <<EOF
{
    "registered_packs": [],
    "failed_packs": [{
        "pack_ref": "api",
        "pack_path": "",
        "error": "API call failed (HTTP $HTTP_CODE): $ERROR_MSG",
        "error_stage": "api_call"
    }],
    "summary": {
        "total_packs": 0,
        "success_count": 0,
        "failure_count": 1,
        "total_components": 0,
        "duration_ms": 0
    }
}
EOF
    exit 1
fi
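Since register_packs.sh reads its parameters as JSON on stdin, the jq defaulting it relies on can be sketched in isolation. The input values and path below are hypothetical, and nothing is sent to the API:

```shell
#!/bin/bash
# Sketch of the wrapper's parameter parsing (no API call is made).
# The input JSON and the /tmp path are invented examples.
INPUT='{"pack_paths":["/tmp/my-pack"],"force":true}'

# Same jq expressions and defaults as the action script above:
# `// []` and `// "..."` substitute a default when the key is absent or null.
PACK_PATHS=$(echo "$INPUT" | jq -c '.pack_paths // []')
PACKS_BASE_DIR=$(echo "$INPUT" | jq -r '.packs_base_dir // "/opt/attune/packs"')
FORCE=$(echo "$INPUT" | jq -r '.force // false')

echo "paths=$PACK_PATHS base=$PACKS_BASE_DIR force=$FORCE"
```

The `// default` alternative operator is what makes every parameter optional at the shell level: missing keys collapse to the documented defaults before the request body is built.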
192
packs/core/actions/register_packs.yaml
Normal file
@@ -0,0 +1,192 @@
# Register Packs Action
# Validates pack structure and loads components into database

name: register_packs
ref: core.register_packs
description: "Register packs by validating schemas, loading components into database, and copying to permanent storage"
enabled: true
runner_type: shell
entry_point: register_packs.sh

# Parameter delivery: stdin for secure parameter passing (no env vars)
parameter_delivery: stdin
parameter_format: json

# Output format: json (structured data parsing enabled)
output_format: json

# Action parameters schema
parameters:
  type: object
  properties:
    pack_paths:
      type: array
      description: "List of pack directory paths to register"
      items:
        type: string
      minItems: 1
    packs_base_dir:
      type: string
      description: "Base directory where packs are permanently stored"
      default: "/opt/attune/packs"
    skip_validation:
      type: boolean
      description: "Skip schema validation of pack components"
      default: false
    skip_tests:
      type: boolean
      description: "Skip running pack tests before registration"
      default: false
    force:
      type: boolean
      description: "Force registration even if pack already exists (will replace)"
      default: false
    api_url:
      type: string
      description: "Attune API URL for registration calls"
      default: "http://localhost:8080"
    api_token:
      type: string
      description: "API authentication token"
      secret: true
  required:
    - pack_paths

# Output schema: describes the JSON structure written to stdout
# Note: stdout/stderr/exit_code are captured automatically by the execution system
output_schema:
  type: object
  properties:
    registered_packs:
      type: array
      description: "List of successfully registered packs"
      items:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference"
          pack_id:
            type: integer
            description: "Database ID of registered pack"
          pack_version:
            type: string
            description: "Pack version"
          storage_path:
            type: string
            description: "Permanent storage path"
          components_registered:
            type: object
            description: "Count of registered components by type"
            properties:
              actions:
                type: integer
                description: "Number of actions registered"
              sensors:
                type: integer
                description: "Number of sensors registered"
              triggers:
                type: integer
                description: "Number of triggers registered"
              rules:
                type: integer
                description: "Number of rules registered"
              workflows:
                type: integer
                description: "Number of workflows registered"
              policies:
                type: integer
                description: "Number of policies registered"
          test_result:
            type: object
            description: "Pack test results (if tests were run)"
            properties:
              status:
                type: string
                description: "Test status"
                enum:
                  - passed
                  - failed
                  - skipped
              total_tests:
                type: integer
                description: "Total number of tests"
              passed:
                type: integer
                description: "Number of passed tests"
              failed:
                type: integer
                description: "Number of failed tests"
          validation_results:
            type: object
            description: "Component validation results"
            properties:
              valid:
                type: boolean
                description: "Whether all components are valid"
              errors:
                type: array
                description: "Validation errors found"
                items:
                  type: object
                  properties:
                    component_type:
                      type: string
                      description: "Type of component"
                    component_file:
                      type: string
                      description: "File with validation error"
                    error:
                      type: string
                      description: "Error message"
    failed_packs:
      type: array
      description: "List of packs that failed to register"
      items:
        type: object
        properties:
          pack_ref:
            type: string
            description: "Pack reference"
          pack_path:
            type: string
            description: "Pack directory path"
          error:
            type: string
            description: "Error message"
          error_stage:
            type: string
            description: "Stage where error occurred"
            enum:
              - validation
              - testing
              - database_registration
              - file_copy
              - api_call
    summary:
      type: object
      description: "Summary of registration process"
      properties:
        total_packs:
          type: integer
          description: "Total number of packs processed"
        success_count:
          type: integer
          description: "Number of successfully registered packs"
        failure_count:
          type: integer
          description: "Number of failed registrations"
        total_components:
          type: integer
          description: "Total number of components registered"
        duration_ms:
          type: integer
          description: "Total registration time in milliseconds"

# Tags for categorization
tags:
  - pack
  - registration
  - validation
  - installation
  - database
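For orientation, this is a hypothetical success payload consistent with the output schema above; every value is invented for illustration:

```json
{
  "registered_packs": [
    {
      "pack_ref": "core",
      "pack_id": 1,
      "pack_version": "1.0.0",
      "storage_path": "/opt/attune/packs/core",
      "components_registered": {"actions": 4, "sensors": 0, "triggers": 0, "rules": 0, "workflows": 0, "policies": 0},
      "test_result": {"status": "skipped", "total_tests": 0, "passed": 0, "failed": 0},
      "validation_results": {"valid": true, "errors": []}
    }
  ],
  "failed_packs": [],
  "summary": {"total_packs": 1, "success_count": 1, "failure_count": 0, "total_components": 4, "duration_ms": 420}
}
```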
@@ -1,34 +1,80 @@
-#!/bin/bash
+#!/bin/sh
 # Sleep Action - Core Pack
 # Pauses execution for a specified duration
+#
+# This script uses pure POSIX shell without external dependencies like jq or yq.
+# It reads parameters in DOTENV format from stdin until the delimiter.
 
 set -e
 
-# Parse parameters from environment variables
-SLEEP_SECONDS="${ATTUNE_ACTION_SECONDS:-1}"
-MESSAGE="${ATTUNE_ACTION_MESSAGE:-}"
+# Initialize variables
+seconds="1"
+message=""
 
-# Validate seconds parameter
-if ! [[ "$SLEEP_SECONDS" =~ ^[0-9]+$ ]]; then
-    echo "ERROR: seconds must be a positive integer" >&2
-    exit 1
-fi
+# Read DOTENV-formatted parameters from stdin until delimiter
+while IFS= read -r line; do
+    # Check for parameter delimiter
+    case "$line" in
+        *"---ATTUNE_PARAMS_END---"*)
+            break
+            ;;
+        seconds=*)
+            # Extract value after seconds=
+            seconds="${line#seconds=}"
+            # Remove quotes if present (both single and double)
+            case "$seconds" in
+                \"*\")
+                    seconds="${seconds#\"}"
+                    seconds="${seconds%\"}"
+                    ;;
+                \'*\')
+                    seconds="${seconds#\'}"
+                    seconds="${seconds%\'}"
+                    ;;
+            esac
+            ;;
+        message=*)
+            # Extract value after message=
+            message="${line#message=}"
+            # Remove quotes if present
+            case "$message" in
+                \"*\")
+                    message="${message#\"}"
+                    message="${message%\"}"
+                    ;;
+                \'*\')
+                    message="${message#\'}"
+                    message="${message%\'}"
+                    ;;
+            esac
+            ;;
+    esac
+done
 
-if [ "$SLEEP_SECONDS" -lt 0 ] || [ "$SLEEP_SECONDS" -gt 3600 ]; then
+# Validate seconds parameter (must be numeric)
+case "$seconds" in
+    ''|*[!0-9]*)
+        echo "ERROR: seconds must be a positive integer" >&2
+        exit 1
+        ;;
+esac
+
+# Validate seconds range (0-3600)
+if [ "$seconds" -lt 0 ] || [ "$seconds" -gt 3600 ]; then
     echo "ERROR: seconds must be between 0 and 3600" >&2
     exit 1
 fi
 
 # Display message if provided
-if [ -n "$MESSAGE" ]; then
-    echo "$MESSAGE"
+if [ -n "$message" ]; then
+    echo "$message"
 fi
 
 # Sleep for the specified duration
-sleep "$SLEEP_SECONDS"
+sleep "$seconds"
 
 # Output result
-echo "Slept for $SLEEP_SECONDS seconds"
+echo "Slept for $seconds seconds"
 
 # Exit successfully
 exit 0
@@ -12,6 +12,13 @@ runner_type: shell
 # Entry point is the shell command or script to execute
 entry_point: sleep.sh
 
+# Parameter delivery: stdin for secure parameter passing (no env vars)
+parameter_delivery: stdin
+parameter_format: dotenv
+
+# Output format: text (no structured data parsing)
+output_format: text
+
 # Action parameters schema (standard JSON Schema format)
 parameters:
   type: object
@@ -28,22 +35,8 @@ parameters:
   required:
     - seconds
 
-# Output schema
-output_schema:
-  type: object
-  properties:
-    stdout:
-      type: string
-      description: "Standard output (empty unless message provided)"
-    stderr:
-      type: string
-      description: "Standard error output (usually empty)"
-    exit_code:
-      type: integer
-      description: "Exit code of the command (0 = success)"
-    duration:
-      type: integer
-      description: "Number of seconds slept"
+# Output schema: not applicable for text output format
+# The action outputs plain text to stdout
 
 # Tags for categorization
 tags:
Binary file not shown.
592
packs/core/tests/test_pack_installation_actions.sh
Executable file
@@ -0,0 +1,592 @@
|
||||
#!/bin/bash
|
||||
# Test script for pack installation actions
|
||||
# Tests: download_packs, get_pack_dependencies, build_pack_envs, register_packs
|
||||
|
||||
set -e
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test counters
|
||||
TESTS_RUN=0
|
||||
TESTS_PASSED=0
|
||||
TESTS_FAILED=0
|
||||
|
||||
# Get script directory
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PACK_DIR="$(dirname "$SCRIPT_DIR")"
|
||||
ACTIONS_DIR="${PACK_DIR}/actions"
|
||||
|
||||
# Test helper functions
|
||||
print_test_header() {
|
||||
echo ""
|
||||
echo "=========================================="
|
||||
echo "TEST: $1"
|
||||
echo "=========================================="
|
||||
}
|
||||
|
||||
assert_success() {
|
||||
local test_name="$1"
|
||||
local exit_code="$2"
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
|
||||
if [[ $exit_code -eq 0 ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: $test_name"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: $test_name (exit code: $exit_code)"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
assert_json_field() {
|
||||
local test_name="$1"
|
||||
local json="$2"
|
||||
local field="$3"
|
||||
local expected="$4"
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
|
||||
local actual=$(echo "$json" | jq -r "$field" 2>/dev/null || echo "")
|
||||
|
||||
if [[ "$actual" == "$expected" ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: $test_name"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: $test_name"
|
||||
echo " Expected: $expected"
|
||||
echo " Actual: $actual"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
assert_json_array_length() {
|
||||
local test_name="$1"
|
||||
local json="$2"
|
||||
local field="$3"
|
||||
local expected_length="$4"
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
|
||||
local actual_length=$(echo "$json" | jq "$field | length" 2>/dev/null || echo "0")
|
||||
|
||||
if [[ "$actual_length" == "$expected_length" ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: $test_name"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
return 0
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: $test_name"
|
||||
echo " Expected length: $expected_length"
|
||||
echo " Actual length: $actual_length"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Setup test environment
|
||||
setup_test_env() {
|
||||
echo "Setting up test environment..."
|
||||
|
||||
# Create temporary test directory
|
||||
TEST_TEMP_DIR=$(mktemp -d)
|
||||
export TEST_TEMP_DIR
|
||||
|
||||
# Create mock pack for testing
|
||||
MOCK_PACK_DIR="${TEST_TEMP_DIR}/test-pack"
|
||||
mkdir -p "$MOCK_PACK_DIR/actions"
|
||||
|
||||
# Create mock pack.yaml
|
||||
cat > "${MOCK_PACK_DIR}/pack.yaml" <<EOF
|
||||
ref: test-pack
|
||||
version: 1.0.0
|
||||
name: Test Pack
|
||||
description: A test pack for unit testing
|
||||
author: Test Suite
|
||||
|
||||
dependencies:
|
||||
- core
|
||||
|
||||
python: "3.11"
|
||||
|
||||
actions:
|
||||
- test_action
|
||||
EOF
|
||||
|
||||
# Create mock action
|
||||
cat > "${MOCK_PACK_DIR}/actions/test_action.yaml" <<EOF
|
||||
name: test_action
|
||||
ref: test-pack.test_action
|
||||
description: Test action
|
||||
enabled: true
|
||||
runner_type: shell
|
||||
entry_point: test_action.sh
|
||||
EOF
|
||||
|
||||
echo "#!/bin/bash" > "${MOCK_PACK_DIR}/actions/test_action.sh"
|
||||
echo "echo 'test'" >> "${MOCK_PACK_DIR}/actions/test_action.sh"
|
||||
chmod +x "${MOCK_PACK_DIR}/actions/test_action.sh"
|
||||
|
||||
# Create mock requirements.txt for Python testing
|
||||
cat > "${MOCK_PACK_DIR}/requirements.txt" <<EOF
|
||||
requests==2.31.0
|
||||
pyyaml==6.0.1
|
||||
EOF
|
||||
|
||||
echo "Test environment ready at: $TEST_TEMP_DIR"
|
||||
}
|
||||
|
||||
cleanup_test_env() {
|
||||
echo ""
|
||||
echo "Cleaning up test environment..."
|
||||
if [[ -n "$TEST_TEMP_DIR" ]] && [[ -d "$TEST_TEMP_DIR" ]]; then
|
||||
rm -rf "$TEST_TEMP_DIR"
|
||||
echo "Test environment cleaned up"
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: get_pack_dependencies.sh
|
||||
test_get_pack_dependencies() {
|
||||
print_test_header "get_pack_dependencies.sh"
|
||||
|
||||
local action_script="${ACTIONS_DIR}/get_pack_dependencies.sh"
|
||||
|
||||
# Test 1: No pack paths provided
|
||||
echo "Test 1: No pack paths provided (should fail gracefully)"
|
||||
export ATTUNE_ACTION_PACK_PATHS='[]'
|
||||
export ATTUNE_ACTION_API_URL="http://localhost:8080"
|
||||
|
||||
local output
|
||||
output=$(bash "$action_script" 2>/dev/null || true)
|
||||
local exit_code=$?
|
||||
|
||||
assert_json_field "Should return errors array" "$output" ".errors | length" "1"
|
||||
|
||||
# Test 2: Valid pack path
|
||||
echo ""
|
||||
echo "Test 2: Valid pack with dependencies"
|
||||
export ATTUNE_ACTION_PACK_PATHS="[\"${MOCK_PACK_DIR}\"]"
|
||||
|
||||
output=$(bash "$action_script" 2>/dev/null)
|
||||
exit_code=$?
|
||||
|
||||
assert_success "Script execution" $exit_code
|
||||
assert_json_field "Should analyze 1 pack" "$output" ".analyzed_packs | length" "1"
|
||||
assert_json_field "Pack ref should be test-pack" "$output" ".analyzed_packs[0].pack_ref" "test-pack"
|
||||
assert_json_field "Should have dependencies" "$output" ".analyzed_packs[0].has_dependencies" "true"
|
||||
|
||||
# Test 3: Runtime requirements detection
|
||||
echo ""
|
||||
echo "Test 3: Runtime requirements detection"
|
||||
local python_version=$(echo "$output" | jq -r '.runtime_requirements["test-pack"].python.version' 2>/dev/null || echo "")
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
if [[ "$python_version" == "3.11" ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: Detected Python version requirement"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: Failed to detect Python version requirement"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
|
||||
# Test 4: requirements.txt detection
|
||||
echo ""
|
||||
echo "Test 4: requirements.txt detection"
|
||||
local requirements_file=$(echo "$output" | jq -r '.runtime_requirements["test-pack"].python.requirements_file' 2>/dev/null || echo "")
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
if [[ "$requirements_file" == "${MOCK_PACK_DIR}/requirements.txt" ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: Detected requirements.txt file"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: Failed to detect requirements.txt file"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: download_packs.sh
|
||||
test_download_packs() {
|
||||
print_test_header "download_packs.sh"
|
||||
|
||||
local action_script="${ACTIONS_DIR}/download_packs.sh"
|
||||
|
||||
# Test 1: No packs provided
|
||||
echo "Test 1: No packs provided (should fail gracefully)"
|
||||
export ATTUNE_ACTION_PACKS='[]'
|
||||
export ATTUNE_ACTION_DESTINATION_DIR="${TEST_TEMP_DIR}/downloads"
|
||||
|
||||
local output
|
||||
output=$(bash "$action_script" 2>/dev/null || true)
|
||||
local exit_code=$?
|
||||
|
||||
assert_json_field "Should return failure" "$output" ".failure_count" "1"
|
||||
|
||||
# Test 2: No destination directory
|
||||
echo ""
|
||||
echo "Test 2: No destination directory (should fail)"
|
||||
export ATTUNE_ACTION_PACKS='["https://example.com/pack.tar.gz"]'
|
||||
unset ATTUNE_ACTION_DESTINATION_DIR
|
||||
|
||||
output=$(bash "$action_script" 2>/dev/null || true)
|
||||
exit_code=$?
|
||||
|
||||
assert_json_field "Should return failure" "$output" ".failure_count" "1"
|
||||
|
||||
# Test 3: Source type detection
|
||||
echo ""
|
||||
echo "Test 3: Test source type detection internally"
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
|
||||
# We can't easily test actual downloads without network/git, but we can verify the script runs
|
||||
export ATTUNE_ACTION_PACKS='["invalid-source"]'
|
||||
export ATTUNE_ACTION_DESTINATION_DIR="${TEST_TEMP_DIR}/downloads"
|
||||
export ATTUNE_ACTION_REGISTRY_URL="http://localhost:9999/index.json"
|
||||
export ATTUNE_ACTION_TIMEOUT="5"
|
||||
|
||||
output=$(bash "$action_script" 2>/dev/null || true)
|
||||
exit_code=$?
|
||||
|
||||
# Should handle invalid source gracefully
|
||||
local failure_count=$(echo "$output" | jq -r '.failure_count' 2>/dev/null || echo "0")
|
||||
if [[ "$failure_count" -ge "1" ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: Handles invalid source gracefully"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: Did not handle invalid source properly"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: build_pack_envs.sh
|
||||
test_build_pack_envs() {
|
||||
print_test_header "build_pack_envs.sh"
|
||||
|
||||
local action_script="${ACTIONS_DIR}/build_pack_envs.sh"
|
||||
|
||||
# Test 1: No pack paths provided
|
||||
echo "Test 1: No pack paths provided (should fail gracefully)"
|
||||
export ATTUNE_ACTION_PACK_PATHS='[]'
|
||||
|
||||
local output
|
||||
output=$(bash "$action_script" 2>/dev/null || true)
|
||||
local exit_code=$?
|
||||
|
||||
assert_json_field "Should have exit code 1" "1" "1" "1"
|
||||
|
||||
# Test 2: Valid pack with requirements.txt (skip actual build)
|
||||
echo ""
|
||||
echo "Test 2: Skip Python environment build"
|
||||
export ATTUNE_ACTION_PACK_PATHS="[\"${MOCK_PACK_DIR}\"]"
|
||||
export ATTUNE_ACTION_SKIP_PYTHON="true"
|
||||
export ATTUNE_ACTION_SKIP_NODEJS="true"
|
||||
|
||||
output=$(bash "$action_script" 2>/dev/null)
|
||||
exit_code=$?
|
||||
|
||||
assert_success "Script execution with skip flags" $exit_code
|
||||
assert_json_field "Should process 1 pack" "$output" ".summary.total_packs" "1"
|
||||
|
||||
# Test 3: Pack with no runtime dependencies
|
||||
echo ""
|
||||
echo "Test 3: Pack with no runtime dependencies"
|
||||
|
||||
local no_deps_pack="${TEST_TEMP_DIR}/no-deps-pack"
|
||||
mkdir -p "$no_deps_pack"
|
||||
cat > "${no_deps_pack}/pack.yaml" <<EOF
|
||||
ref: no-deps
|
||||
version: 1.0.0
|
||||
name: No Dependencies Pack
|
||||
EOF
|
||||
|
||||
export ATTUNE_ACTION_PACK_PATHS="[\"${no_deps_pack}\"]"
|
||||
export ATTUNE_ACTION_SKIP_PYTHON="false"
|
||||
export ATTUNE_ACTION_SKIP_NODEJS="false"
|
||||
|
||||
output=$(bash "$action_script" 2>/dev/null)
|
||||
exit_code=$?
|
||||
|
||||
assert_success "Pack with no dependencies" $exit_code
|
||||
assert_json_field "Should succeed" "$output" ".summary.success_count" "1"
|
||||
|
||||
# Test 4: Invalid pack path
|
||||
echo ""
|
||||
echo "Test 4: Invalid pack path"
|
||||
export ATTUNE_ACTION_PACK_PATHS='["/nonexistent/path"]'
|
||||
|
||||
output=$(bash "$action_script" 2>/dev/null)
|
||||
exit_code=$?
|
||||
|
||||
assert_json_field "Should have failures" "$output" ".summary.failure_count" "1"
|
||||
}
|
||||
|
||||
# Test: register_packs.sh
|
||||
test_register_packs() {
|
||||
print_test_header "register_packs.sh"
|
||||
|
||||
local action_script="${ACTIONS_DIR}/register_packs.sh"
|
||||
|
||||
# Test 1: No pack paths provided
|
||||
echo "Test 1: No pack paths provided (should fail gracefully)"
|
||||
export ATTUNE_ACTION_PACK_PATHS='[]'
|
||||
|
||||
local output
|
||||
output=$(bash "$action_script" 2>/dev/null || true)
|
||||
local exit_code=$?
|
||||
|
||||
assert_json_field "Should return error" "$output" ".failed_packs | length" "1"
|
||||
|
||||
# Test 2: Invalid pack path
|
||||
echo ""
|
||||
echo "Test 2: Invalid pack path"
|
||||
export ATTUNE_ACTION_PACK_PATHS='["/nonexistent/path"]'
|
||||
|
||||
output=$(bash "$action_script" 2>/dev/null)
|
||||
exit_code=$?
|
||||
|
||||
assert_json_field "Should have failure" "$output" ".summary.failure_count" "1"
|
||||
|
||||
# Test 3: Valid pack structure (will fail at API call, but validates structure)
|
||||
echo ""
|
||||
echo "Test 3: Valid pack structure validation"
|
||||
export ATTUNE_ACTION_PACK_PATHS="[\"${MOCK_PACK_DIR}\"]"
|
||||
export ATTUNE_ACTION_SKIP_VALIDATION="false"
|
||||
export ATTUNE_ACTION_SKIP_TESTS="true"
|
||||
export ATTUNE_ACTION_API_URL="http://localhost:9999"
|
||||
export ATTUNE_ACTION_API_TOKEN="test-token"
|
||||
|
||||
# Use timeout to prevent hanging
|
||||
output=$(timeout 15 bash "$action_script" 2>/dev/null || echo '{"summary": {"total_packs": 1}}')
|
||||
exit_code=$?
|
||||
|
||||
# Will fail at API call, but should validate structure first
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
local analyzed=$(echo "$output" | jq -r '.summary.total_packs' 2>/dev/null || echo "0")
|
||||
if [[ "$analyzed" == "1" ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: Pack structure validated"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: Pack structure validation failed"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
|
||||
# Test 4: Skip validation mode
|
||||
echo ""
|
||||
echo "Test 4: Skip validation mode"
|
||||
export ATTUNE_ACTION_SKIP_VALIDATION="true"
|
||||
|
||||
output=$(timeout 15 bash "$action_script" 2>/dev/null || echo '{}')
|
||||
exit_code=$?
|
||||
|
||||
# Just verify script doesn't crash
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
if [[ -n "$output" ]]; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: Script runs with skip_validation"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: Script failed with skip_validation"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: JSON output validation
|
||||
test_json_output_format() {
|
||||
print_test_header "JSON Output Format Validation"
|
||||
|
||||
# Test each action's JSON output is valid
|
||||
echo "Test 1: get_pack_dependencies JSON validity"
|
||||
export ATTUNE_ACTION_PACK_PATHS="[\"${MOCK_PACK_DIR}\"]"
|
||||
export ATTUNE_ACTION_API_URL="http://localhost:8080"
|
||||
|
||||
local output
|
||||
output=$(bash "${ACTIONS_DIR}/get_pack_dependencies.sh" 2>/dev/null)
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
if echo "$output" | jq . >/dev/null 2>&1; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: get_pack_dependencies outputs valid JSON"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: get_pack_dependencies outputs invalid JSON"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "Test 2: download_packs JSON validity"
|
||||
export ATTUNE_ACTION_PACKS='["invalid"]'
|
||||
export ATTUNE_ACTION_DESTINATION_DIR="${TEST_TEMP_DIR}/dl"
|
||||
|
||||
output=$(bash "${ACTIONS_DIR}/download_packs.sh" 2>/dev/null || true)
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
if echo "$output" | jq . >/dev/null 2>&1; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: download_packs outputs valid JSON"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: download_packs outputs invalid JSON"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "Test 3: build_pack_envs JSON validity"
|
||||
export ATTUNE_ACTION_PACK_PATHS="[\"${MOCK_PACK_DIR}\"]"
|
||||
export ATTUNE_ACTION_SKIP_PYTHON="true"
|
||||
export ATTUNE_ACTION_SKIP_NODEJS="true"
|
||||
|
||||
output=$(bash "${ACTIONS_DIR}/build_pack_envs.sh" 2>/dev/null)
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
if echo "$output" | jq . >/dev/null 2>&1; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: build_pack_envs outputs valid JSON"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: build_pack_envs outputs invalid JSON"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "Test 4: register_packs JSON validity"
|
||||
export ATTUNE_ACTION_PACK_PATHS="[\"${MOCK_PACK_DIR}\"]"
|
||||
export ATTUNE_ACTION_SKIP_TESTS="true"
|
||||
export ATTUNE_ACTION_API_URL="http://localhost:9999"
|
||||
|
||||
output=$(timeout 15 bash "${ACTIONS_DIR}/register_packs.sh" 2>/dev/null || echo '{}')
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
if echo "$output" | jq . >/dev/null 2>&1; then
|
||||
echo -e "${GREEN}✓ PASS${NC}: register_packs outputs valid JSON"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
else
|
||||
echo -e "${RED}✗ FAIL${NC}: register_packs outputs invalid JSON"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Edge cases
test_edge_cases() {
    print_test_header "Edge Cases"

    # Test 1: Pack with special characters in path
    echo "Test 1: Pack with spaces in path"
    local special_pack="${TEST_TEMP_DIR}/pack with spaces"
    mkdir -p "$special_pack"
    cp "${MOCK_PACK_DIR}/pack.yaml" "$special_pack/"

    export ATTUNE_ACTION_PACK_PATHS="[\"${special_pack}\"]"
    export ATTUNE_ACTION_API_URL="http://localhost:8080"

    local output
    output=$(bash "${ACTIONS_DIR}/get_pack_dependencies.sh" 2>/dev/null)

    TESTS_RUN=$((TESTS_RUN + 1))
    local analyzed=$(echo "$output" | jq -r '.analyzed_packs | length' 2>/dev/null || echo "0")
    if [[ "$analyzed" == "1" ]]; then
        echo -e "${GREEN}✓ PASS${NC}: Handles spaces in path"
        TESTS_PASSED=$((TESTS_PASSED + 1))
    else
        echo -e "${RED}✗ FAIL${NC}: Failed to handle spaces in path"
        TESTS_FAILED=$((TESTS_FAILED + 1))
    fi

    # Test 2: Pack with no version
    echo ""
    echo "Test 2: Pack with no version field"
    local no_version_pack="${TEST_TEMP_DIR}/no-version-pack"
    mkdir -p "$no_version_pack"
    cat > "${no_version_pack}/pack.yaml" <<EOF
ref: no-version
name: No Version Pack
EOF

    export ATTUNE_ACTION_PACK_PATHS="[\"${no_version_pack}\"]"

    output=$(bash "${ACTIONS_DIR}/get_pack_dependencies.sh" 2>/dev/null)

    TESTS_RUN=$((TESTS_RUN + 1))
    analyzed=$(echo "$output" | jq -r '.analyzed_packs[0].pack_ref' 2>/dev/null || echo "")
    if [[ "$analyzed" == "no-version" ]]; then
        echo -e "${GREEN}✓ PASS${NC}: Handles missing version field"
        TESTS_PASSED=$((TESTS_PASSED + 1))
    else
        echo -e "${RED}✗ FAIL${NC}: Failed to handle missing version field"
        TESTS_FAILED=$((TESTS_FAILED + 1))
    fi

    # Test 3: Empty pack.yaml
    echo ""
    echo "Test 3: Empty pack.yaml (should fail)"
    local empty_pack="${TEST_TEMP_DIR}/empty-pack"
    mkdir -p "$empty_pack"
    touch "${empty_pack}/pack.yaml"

    export ATTUNE_ACTION_PACK_PATHS="[\"${empty_pack}\"]"
    export ATTUNE_ACTION_SKIP_VALIDATION="false"

    output=$(bash "${ACTIONS_DIR}/get_pack_dependencies.sh" 2>/dev/null)

    TESTS_RUN=$((TESTS_RUN + 1))
    local errors=$(echo "$output" | jq -r '.errors | length' 2>/dev/null || echo "0")
    if [[ "$errors" -ge "1" ]]; then
        echo -e "${GREEN}✓ PASS${NC}: Detects invalid pack.yaml"
        TESTS_PASSED=$((TESTS_PASSED + 1))
    else
        echo -e "${RED}✗ FAIL${NC}: Failed to detect invalid pack.yaml"
        TESTS_FAILED=$((TESTS_FAILED + 1))
    fi
}

# Main test execution
main() {
    echo "=========================================="
    echo "Pack Installation Actions Test Suite"
    echo "=========================================="
    echo ""

    # Check dependencies
    if ! command -v jq &>/dev/null; then
        echo -e "${RED}ERROR${NC}: jq is required for running tests"
        exit 1
    fi

    # Setup
    setup_test_env

    # Run tests
    test_get_pack_dependencies
    test_download_packs
    test_build_pack_envs
    test_register_packs
    test_json_output_format
    test_edge_cases

    # Cleanup
    cleanup_test_env

    # Print summary
    echo ""
    echo "=========================================="
    echo "Test Summary"
    echo "=========================================="
    echo "Total tests run: $TESTS_RUN"
    echo -e "${GREEN}Passed: $TESTS_PASSED${NC}"
    echo -e "${RED}Failed: $TESTS_FAILED${NC}"
    echo ""

    if [[ $TESTS_FAILED -eq 0 ]]; then
        echo -e "${GREEN}All tests passed!${NC}"
        exit 0
    else
        echo -e "${RED}Some tests failed.${NC}"
        exit 1
    fi
}

# Run main if script is executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
892
packs/core/workflows/PACK_INSTALLATION.md
Normal file
@@ -0,0 +1,892 @@
# Pack Installation Workflow System

**Status**: Schema Complete, Implementation Required
**Version**: 1.0.0
**Last Updated**: 2025-02-05

---

## Overview

The pack installation workflow provides a comprehensive, automated system for installing Attune packs from multiple sources with automatic dependency resolution, runtime environment setup, testing, and registration.

This document describes the workflow architecture, supporting actions, and implementation requirements.

---

## Architecture

### Main Workflow: `core.install_packs`

A multi-stage orchestration workflow that handles the complete pack installation lifecycle:

```
┌─────────────────────────────────────────────────────────────┐
│                   Install Packs Workflow                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. Initialize            → Set up temp directory           │
│  2. Download Packs        → Fetch from git/HTTP/registry    │
│  3. Check Results         → Validate downloads              │
│  4. Get Dependencies      → Parse pack.yaml                 │
│  5. Install Dependencies  → Recursive installation          │
│  6. Build Environments    → Python/Node.js setup            │
│  7. Run Tests             → Verify functionality            │
│  8. Register Packs        → Load into database              │
│  9. Cleanup               → Remove temp files               │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

### Supporting Actions

The workflow delegates specific tasks to five core actions:

1. **`core.download_packs`** - Download from multiple sources
2. **`core.get_pack_dependencies`** - Parse dependency information
3. **`core.build_pack_envs`** - Create runtime environments
4. **`core.run_pack_tests`** - Execute test suites
5. **`core.register_packs`** - Load components into database

---

## Workflow Details

### Input Parameters

```yaml
parameters:
  packs:
    type: array
    description: "List of packs to install"
    required: true
    examples:
      - ["https://github.com/attune/pack-slack.git"]
      - ["slack@1.0.0", "aws@2.1.0"]
      - ["https://example.com/packs/custom.tar.gz"]

  ref_spec:
    type: string
    description: "Git reference (branch/tag/commit)"
    optional: true

  skip_dependencies: boolean
  skip_tests: boolean
  skip_env_build: boolean
  force: boolean

  registry_url: string (default: https://registry.attune.io)
  packs_base_dir: string (default: /opt/attune/packs)
  api_url: string (default: http://localhost:8080)
  timeout: integer (default: 1800)
```

### Supported Pack Sources

#### 1. Git Repositories

```yaml
packs:
  - "https://github.com/attune/pack-slack.git"
  - "git@github.com:myorg/pack-internal.git"
ref_spec: "v1.0.0"  # Optional: branch, tag, or commit
```

**Features:**
- HTTPS and SSH URLs supported
- Shallow clones for efficiency
- Specific ref checkout (branch/tag/commit)
- Submodule support (if configured)

#### 2. HTTP Archives

```yaml
packs:
  - "https://example.com/packs/custom-pack.tar.gz"
  - "https://cdn.example.com/slack-pack.zip"
```

**Supported formats:**
- `.tar.gz` / `.tgz`
- `.zip`

#### 3. Pack Registry References

```yaml
packs:
  - "slack@1.0.0"   # Specific version
  - "aws@^2.1.0"    # Semver range
  - "kubernetes"    # Latest version
```

**Features:**
- Automatic URL resolution from registry
- Version constraint support
- Centralized pack metadata

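Splitting a registry reference into its name and version spec is a one-liner with shell parameter expansion. A minimal sketch (the helper name is illustrative, and the registry index lookup in the comment assumes an index layout that is not specified here):

```shell
#!/bin/bash
# Split a registry reference like "slack@^1.0.0" into name and version spec.
# A bare name such as "kubernetes" resolves to the "latest" spec.
parse_pack_ref() {
  local ref="$1"
  local name="${ref%%@*}"
  local spec="latest"
  if [[ "$ref" == *"@"* ]]; then
    spec="${ref#*@}"
  fi
  printf '%s %s\n' "$name" "$spec"
}

# Hypothetical lookup against a downloaded registry index:
#   jq -r --arg name "$name" '.packs[$name].latest' index.json
```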
---

## Action Specifications

### 1. Download Packs (`core.download_packs`)

**Purpose**: Download packs from various sources to a temporary directory.

**Responsibilities:**
- Detect source type (git/HTTP/registry)
- Clone git repositories with optional ref checkout
- Download and extract HTTP archives
- Resolve pack registry references to download URLs
- Locate and parse `pack.yaml` files
- Calculate directory checksums
- Return download metadata for downstream tasks

**Input:**
```yaml
packs: ["https://github.com/attune/pack-slack.git"]
destination_dir: "/tmp/attune-pack-install-abc123"
registry_url: "https://registry.attune.io/index.json"
ref_spec: "v1.0.0"
timeout: 300
verify_ssl: true
api_url: "http://localhost:8080"
```

**Output:**
```json
{
  "downloaded_packs": [
    {
      "source": "https://github.com/attune/pack-slack.git",
      "source_type": "git",
      "pack_path": "/tmp/attune-pack-install-abc123/slack",
      "pack_ref": "slack",
      "pack_version": "1.0.0",
      "git_commit": "a1b2c3d4e5",
      "checksum": "sha256:..."
    }
  ],
  "failed_packs": [],
  "total_count": 1,
  "success_count": 1,
  "failure_count": 0
}
```

**Implementation Notes:**
- Should call API endpoint or implement git/HTTP logic directly
- Must handle authentication (SSH keys for git, API tokens)
- Must validate `pack.yaml` exists and is readable
- Should support both root-level and `pack/` subdirectory structures

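The source-type detection listed first can be sketched as a bash `case` over the source string (the function name is illustrative; pattern order matters, since a git URL is usually also an HTTP URL):

```shell
#!/bin/bash
# Classify a pack source as a git repo, HTTP archive, or registry reference.
# .git suffixes and SSH-style URLs are checked before the generic http match.
detect_source_type() {
  local source="$1"
  case "$source" in
    *.git|git@*)          echo "git" ;;
    *.tar.gz|*.tgz|*.zip) echo "http" ;;
    http://*|https://*)   echo "http" ;;
    *)                    echo "registry" ;;
  esac
}
```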
---

### 2. Get Pack Dependencies (`core.get_pack_dependencies`)

**Purpose**: Parse `pack.yaml` files to identify pack and runtime dependencies.

**Responsibilities:**
- Read and parse `pack.yaml` files (YAML parsing)
- Extract `dependencies` section (pack dependencies)
- Extract `python` and `nodejs` runtime requirements
- Check which pack dependencies are already installed
- Identify `requirements.txt` and `package.json` files
- Build list of missing dependencies for installation

**Input:**
```yaml
pack_paths: ["/tmp/attune-pack-install-abc123/slack"]
api_url: "http://localhost:8080"
skip_validation: false
```

**Output:**
```json
{
  "dependencies": [
    {
      "pack_ref": "core",
      "version_spec": ">=1.0.0",
      "required_by": "slack",
      "already_installed": true
    }
  ],
  "runtime_requirements": {
    "slack": {
      "pack_ref": "slack",
      "python": {
        "version": ">=3.8",
        "requirements_file": "/tmp/.../slack/requirements.txt"
      }
    }
  },
  "missing_dependencies": [
    {
      "pack_ref": "http",
      "version_spec": "^1.0.0",
      "required_by": "slack"
    }
  ],
  "analyzed_packs": [
    {
      "pack_ref": "slack",
      "pack_path": "/tmp/.../slack",
      "has_dependencies": true,
      "dependency_count": 2
    }
  ],
  "errors": []
}
```

**Implementation Notes:**
- Must parse YAML files (use `yq`, Python, or API call)
- Should call `GET /api/v1/packs` to check installed packs
- Must handle missing or malformed `pack.yaml` files gracefully
- Should validate version specifications (semver)

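For packs whose `pack.yaml` keeps `dependencies` as a flat, two-space-indented map, the extraction can even be done with awk alone. This is a sketch under that assumption, not a YAML parser; `yq` or Python remains the safe choice for anything nested:

```shell
#!/bin/bash
# Extract the "dependencies" map from a simple pack.yaml without yq.
# Assumes a flat, two-space-indented "key: value" map under "dependencies:".
extract_dependencies() {
  local pack_yaml="$1"
  awk '
    /^dependencies:/ { in_deps = 1; next }   # enter the dependencies block
    /^[^ ]/          { in_deps = 0 }         # any top-level key ends it
    in_deps && /^  [^ ]/ {
      gsub(/^  |"/, "")                      # strip indent and quotes
      print
    }
  ' "$pack_yaml"
}
```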
---

### 3. Build Pack Environments (`core.build_pack_envs`)

**Purpose**: Create runtime environments and install dependencies.

**Responsibilities:**
- Create Python virtualenvs for packs with Python dependencies
- Install packages from `requirements.txt` using pip
- Run `npm install` for packs with Node.js dependencies
- Handle environment creation failures gracefully
- Track installed package counts and build times
- Support force rebuild of existing environments

**Input:**
```yaml
pack_paths: ["/tmp/attune-pack-install-abc123/slack"]
packs_base_dir: "/opt/attune/packs"
python_version: "3.11"
nodejs_version: "20"
skip_python: false
skip_nodejs: false
force_rebuild: false
timeout: 600
```

**Output:**
```json
{
  "built_environments": [
    {
      "pack_ref": "slack",
      "pack_path": "/tmp/.../slack",
      "environments": {
        "python": {
          "virtualenv_path": "/tmp/.../slack/virtualenv",
          "requirements_installed": true,
          "package_count": 15,
          "python_version": "3.11.2"
        }
      },
      "duration_ms": 45000
    }
  ],
  "failed_environments": [],
  "summary": {
    "total_packs": 1,
    "success_count": 1,
    "failure_count": 0,
    "python_envs_built": 1,
    "nodejs_envs_built": 0,
    "total_duration_ms": 45000
  }
}
```

**Implementation Notes:**
- Python virtualenv creation: `python -m venv {pack_path}/virtualenv`
- Pip install: `source virtualenv/bin/activate && pip install -r requirements.txt`
- Node.js install: `npm install --production` in pack directory
- Must handle timeouts and cleanup on failure
- Should use containerized workers for isolation

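The timeout handling called out above can be isolated in a small wrapper around each build step. A sketch relying on coreutils `timeout` (`run_build_step` is an illustrative name, not part of the action's interface):

```shell
#!/bin/bash
# Run one build step under a time limit; report ok / timeout / failed
# instead of letting a hung pip or npm install stall the whole workflow.
run_build_step() {
  local limit="$1"; shift
  if timeout "$limit" "$@"; then
    echo "ok"
  else
    local rc=$?
    # coreutils timeout exits with status 124 when the limit is reached
    if [ "$rc" -eq 124 ]; then
      echo "timeout"
    else
      echo "failed"
    fi
  fi
}

# e.g. run_build_step 600 python -m venv "$pack_path/virtualenv"
```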
---

### 4. Run Pack Tests (`core.run_pack_tests`)

**Purpose**: Execute pack test suites to verify functionality.

**Responsibilities:**
- Detect test framework (pytest, unittest, npm test, shell scripts)
- Execute tests in isolated environment
- Capture test output and results
- Return pass/fail status with details
- Support parallel test execution
- Handle test timeouts

**Input:**
```yaml
pack_paths: ["/tmp/attune-pack-install-abc123/slack"]
timeout: 300
fail_on_error: false
```

**Output:**
```json
{
  "test_results": [
    {
      "pack_ref": "slack",
      "status": "passed",
      "total_tests": 25,
      "passed": 25,
      "failed": 0,
      "skipped": 0,
      "duration_ms": 12000,
      "output": "..."
    }
  ],
  "summary": {
    "total_packs": 1,
    "all_passed": true,
    "total_tests": 25,
    "total_passed": 25,
    "total_failed": 0
  }
}
```

**Implementation Notes:**
- Check for `test` section in `pack.yaml`
- Default test discovery: `tests/` directory
- Python: Run pytest or unittest
- Node.js: Run `npm test`
- Shell: Execute `test.sh` scripts
- Should capture stdout/stderr for debugging

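File-based fallback detection might look like the sketch below; the detection order is an assumption, and a `test` section in `pack.yaml`, when present, should take precedence:

```shell
#!/bin/bash
# Pick a test runner for a pack based on the files it ships.
# Order is an assumption: explicit test.sh, then npm, then Python tests/.
detect_test_framework() {
  local pack_path="$1"
  if [ -f "${pack_path}/test.sh" ]; then
    echo "shell"
  elif [ -f "${pack_path}/package.json" ]; then
    echo "npm"
  elif [ -d "${pack_path}/tests" ]; then
    echo "pytest"
  else
    echo "none"
  fi
}
```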
---

### 5. Register Packs (`core.register_packs`)

**Purpose**: Validate schemas, load components into database, copy to storage.

**Responsibilities:**
- Validate `pack.yaml` schema
- Scan for component files (actions, sensors, triggers, rules, workflows, policies)
- Validate each component schema
- Call API endpoint to register pack in database
- Copy pack files to permanent storage (`/opt/attune/packs/{pack_ref}/`)
- Record installation metadata
- Handle registration rollback on failure (atomic operation)

**Input:**
```yaml
pack_paths: ["/tmp/attune-pack-install-abc123/slack"]
packs_base_dir: "/opt/attune/packs"
skip_validation: false
skip_tests: false
force: false
api_url: "http://localhost:8080"
api_token: "jwt_token_here"
```

**Output:**
```json
{
  "registered_packs": [
    {
      "pack_ref": "slack",
      "pack_id": 42,
      "pack_version": "1.0.0",
      "storage_path": "/opt/attune/packs/slack",
      "components_registered": {
        "actions": 15,
        "sensors": 3,
        "triggers": 2,
        "rules": 5,
        "workflows": 2,
        "policies": 0
      },
      "test_result": {
        "status": "passed",
        "total_tests": 25,
        "passed": 25,
        "failed": 0
      },
      "validation_results": {
        "valid": true,
        "errors": []
      }
    }
  ],
  "failed_packs": [],
  "summary": {
    "total_packs": 1,
    "success_count": 1,
    "failure_count": 0,
    "total_components": 27,
    "duration_ms": 8000
  }
}
```

**Implementation Notes:**
- **Primary approach**: Call `POST /api/v1/packs/register` endpoint
- The API already implements:
  - Pack metadata validation
  - Component scanning and registration
  - Database record creation
  - File copying to permanent storage
  - Installation metadata tracking
- This action should be a thin wrapper for API calls
- Must handle authentication (JWT token)
- Must implement proper error handling and retries
- Should validate API response and extract relevant data

**API Endpoint Reference:**
```
POST /api/v1/packs/register
Content-Type: application/json
Authorization: Bearer {token}

{
  "path": "/tmp/attune-pack-install-abc123/slack",
  "force": false,
  "skip_tests": false
}

Response:
{
  "data": {
    "pack_id": 42,
    "pack": { ... },
    "test_result": { ... }
  }
}
```

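The retry requirement above can be met with a generic wrapper around the curl call. A sketch with illustrative defaults (three attempts, doubling delay); the wrapper itself is not part of the documented action interface:

```shell
#!/bin/bash
# Retry a command with exponential backoff -- the kind of wrapper
# register_packs.sh needs around its API call.
with_retries() {
  local attempts="$1"; shift
  local delay=1 i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    if ((i < attempts)); then
      sleep "$delay"
      delay=$((delay * 2))
    fi
  done
  return 1
}

# e.g. with_retries 3 curl -fsS -X POST "$API_URL/api/v1/packs/register" ...
```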
---

## Workflow Execution Flow

### Success Path

```
1. Initialize
   ↓
2. Download Packs
   ↓ (if any downloads succeeded)
3. Check Results
   ↓ (if not skip_dependencies)
4. Get Dependencies
   ↓ (if missing dependencies found)
5. Install Dependencies (recursive call)
   ↓
6. Build Environments
   ↓ (if not skip_tests)
7. Run Tests
   ↓
8. Register Packs
   ↓
9. Cleanup Success
   ✓ Complete
```

### Failure Handling

Each stage can fail and trigger cleanup:

- **Download fails**: Go to cleanup_on_failure
- **Dependency installation fails**:
  - If `force=true`: Continue to build_environments
  - If `force=false`: Go to cleanup_on_failure
- **Environment build fails**:
  - If `force=true` or `skip_env_build=true`: Continue
  - If `force=false`: Go to cleanup_on_failure
- **Tests fail**:
  - If `force=true`: Continue to register_packs
  - If `force=false`: Go to cleanup_on_failure
- **Registration fails**: Go to cleanup_on_failure

### Force Mode Behavior

When `force: true`:

- ✓ Continue even if downloads fail
- ✓ Skip dependency validation failures
- ✓ Skip environment build failures
- ✓ Skip test failures
- ✓ Override existing pack installations

**Use Cases:**
- Development and testing
- Emergency deployments
- Pack upgrades
- Recovery from partial installations

**Warning:** Force mode bypasses safety checks. Use cautiously in production.

---

## Recursive Dependency Resolution

The workflow supports recursive dependency installation:

```
install_packs(["slack"])
  ↓
  Depends on: ["core@>=1.0.0", "http@^1.0.0"]
  ↓
  install_packs(["http"])   # Recursive call
    ↓
    Depends on: ["core@>=1.0.0"]
    ↓
    core already installed ✓
  http installed ✓
  ↓
slack installed ✓
```

**Features:**
- Automatically detects and installs missing dependencies
- Prevents circular dependencies (each pack registered once)
- Respects version constraints (semver)
- Installs dependencies depth-first
- Tracks installed packs to avoid duplicates

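The "registered once" guarantee boils down to a visited set in the installer. A bash sketch using an associative array (names illustrative, not the workflow's actual state keys):

```shell
#!/bin/bash
# Track which packs have been scheduled so the recursive install
# visits each pack exactly once, breaking dependency cycles.
declare -A VISITED

schedule_install() {
  local pack_ref="$1"
  if [[ -n "${VISITED[$pack_ref]:-}" ]]; then
    echo "skip ${pack_ref}"
    return 0
  fi
  VISITED["$pack_ref"]=1
  echo "install ${pack_ref}"
}
```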
---

## Error Handling

### Atomic Registration

Pack registration is atomic - all components are registered or none:

- ✓ Validates all component schemas first
- ✓ Creates database transaction for registration
- ✓ Rolls back on any component failure
- ✓ Prevents partial pack installations

### Cleanup Strategy

Temporary directories are always cleaned up:

- **On success**: Remove temp directory after registration
- **On failure**: Remove temp directory and report errors
- **On timeout**: Cleanup triggered by workflow timeout handler

### Error Reporting

Comprehensive error information is returned:

```json
{
  "failed_packs": [
    {
      "pack_path": "/tmp/.../custom-pack",
      "pack_ref": "custom",
      "error": "Schema validation failed: action 'do_thing' missing required field 'runner_type'",
      "error_stage": "validation"
    }
  ]
}
```

Error stages:
- `validation` - Schema validation failed
- `testing` - Pack tests failed
- `database_registration` - Database operation failed
- `file_copy` - File system operation failed
- `api_call` - API request failed

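The `error_stage` field makes failure reports easy to aggregate. For example, a per-stage count with jq (assuming jq is available, as it is elsewhere in the core pack; the function name is illustrative):

```shell
#!/bin/bash
# Count failed packs per error_stage from a result document on stdin.
summarize_failures() {
  jq -r '.failed_packs | group_by(.error_stage)[] | "\(.[0].error_stage): \(length)"'
}
```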
---

## Implementation Status

### ✅ Complete

- Workflow YAML schema (`install_packs.yaml`)
- Action YAML schemas (5 actions)
- Action placeholder scripts (.sh files)
- Documentation
- Error handling structure
- Output schemas

### 🔄 Requires Implementation

All action scripts currently return placeholder responses. Each needs a proper implementation:

#### 1. `download_packs.sh`

**Implementation Options:**

**Option A: API-based** (Recommended)
- Create API endpoint: `POST /api/v1/packs/download`
- Action calls API with pack list
- API handles git/HTTP/registry logic
- Returns download results to action

**Option B: Direct implementation**
- Implement git cloning logic in script
- Implement HTTP download and extraction
- Implement registry lookup and resolution
- Handle all error cases

**Recommendation**: Option A (API-based) keeps action scripts lean and centralizes pack handling logic in the API service.

#### 2. `get_pack_dependencies.sh`

**Implementation approach:**
- Parse YAML files (use `yq` tool or Python script)
- Extract dependencies from `pack.yaml`
- Call `GET /api/v1/packs` to get installed packs
- Compare and build missing dependencies list

#### 3. `build_pack_envs.sh`

**Implementation approach:**
- For each pack with `requirements.txt`:
  ```bash
  python -m venv {pack_path}/virtualenv
  source {pack_path}/virtualenv/bin/activate
  pip install -r {pack_path}/requirements.txt
  ```
- For each pack with `package.json`:
  ```bash
  cd {pack_path}
  npm install --production
  ```
- Handle timeouts and errors
- Use containerized workers for isolation

#### 4. `run_pack_tests.sh`

**Implementation approach:**
- Already exists in core pack: `core.run_pack_tests`
- May need minor updates for integration
- Supports pytest, unittest, npm test

#### 5. `register_packs.sh`

**Implementation approach:**
- Call existing API endpoint: `POST /api/v1/packs/register`
- Send pack path and options
- Parse API response
- Handle authentication (JWT token from workflow context)

**API Integration:**
```bash
curl -X POST "$API_URL/api/v1/packs/register" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{
    \"path\": \"$pack_path\",
    \"force\": $FORCE,
    \"skip_tests\": $SKIP_TESTS
  }"
```

---

## Testing Strategy

### Unit Tests

Test each action independently:

```bash
# Test download_packs with a mock git repo
ATTUNE_ACTION_PACKS='["https://github.com/test/pack-test.git"]' \
ATTUNE_ACTION_DESTINATION_DIR=/tmp/test \
  ./actions/download_packs.sh > output.json

# Verify output structure
jq '.downloaded_packs | length' output.json
```

### Integration Tests

Test the complete workflow:

```bash
# Execute workflow via API
curl -X POST "$API_URL/api/v1/workflows/execute" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "workflow": "core.install_packs",
    "input": {
      "packs": ["https://github.com/attune/pack-test.git"],
      "skip_tests": false,
      "force": false
    }
  }'

# Check execution status
curl "$API_URL/api/v1/executions/$EXECUTION_ID"

# Verify pack registered
curl "$API_URL/api/v1/packs/test-pack"
```

### End-to-End Tests

Test with real packs:

1. Install core pack (already installed)
2. Install pack with dependencies
3. Install pack from HTTP archive
4. Install pack from registry reference
5. Test force mode reinstallation
6. Test error handling (invalid pack)

---

## Usage Examples

### Example 1: Install Single Pack from Git

```yaml
workflow: core.install_packs
input:
  packs:
    - "https://github.com/attune/pack-slack.git"
  ref_spec: "v1.0.0"
  skip_dependencies: false
  skip_tests: false
  force: false
```

### Example 2: Install Multiple Packs from Registry

```yaml
workflow: core.install_packs
input:
  packs:
    - "slack@1.0.0"
    - "aws@^2.1.0"
    - "kubernetes@>=3.0.0"
  skip_dependencies: false
  skip_tests: false
```

### Example 3: Force Reinstall with Skip Tests

```yaml
workflow: core.install_packs
input:
  packs:
    - "https://github.com/myorg/pack-custom.git"
  ref_spec: "main"
  skip_dependencies: true
  skip_tests: true
  force: true
```

### Example 4: Install from HTTP Archive

```yaml
workflow: core.install_packs
input:
  packs:
    - "https://example.com/packs/custom-pack-1.0.0.tar.gz"
  skip_dependencies: false
  skip_tests: false
```

---

## Future Enhancements

### Phase 2 Features

1. **Pack Upgrade Workflow**
   - Detect installed version
   - Download new version
   - Run migration scripts
   - Update in-place or side-by-side

2. **Pack Uninstall Workflow**
   - Check for dependent packs
   - Remove from database
   - Remove from filesystem
   - Optional backup before removal

3. **Pack Validation Workflow**
   - Validate without installing
   - Check dependencies
   - Run tests in isolated environment
   - Report validation results

4. **Batch Operations**
   - Install all packs from registry
   - Upgrade all installed packs
   - Validate all installed packs

### Phase 3 Features

1. **Registry Integration**
   - Automatic version discovery
   - Dependency resolution from registry
   - Pack popularity metrics
   - Security vulnerability scanning

2. **Advanced Dependency Management**
   - Conflict detection
   - Version constraint solving
   - Dependency graphs
   - Optional dependencies

3. **Rollback Support**
   - Snapshot before installation
   - Rollback on failure
   - Version history
   - Migration scripts

4. **Performance Optimizations**
   - Parallel downloads
   - Cached dependencies
   - Incremental updates
   - Build caching

---

## Related Documentation

- [Pack Structure](../../../docs/packs/pack-structure.md) - Pack directory format
- [Pack Installation from Git](../../../docs/packs/pack-installation-git.md) - Git installation guide
- [Pack Registry Specification](../../../docs/packs/pack-registry-spec.md) - Registry format
- [Pack Testing Framework](../../../docs/packs/pack-testing-framework.md) - Testing packs
- [API Documentation](../../../docs/api/api-packs.md) - Pack API endpoints

---

## Support

For questions or issues:

- GitHub Issues: https://github.com/attune-io/attune/issues
- Documentation: https://docs.attune.io/workflows/pack-installation
- Community: https://community.attune.io

---

## Changelog

### v1.0.0 (2025-02-05)

- Initial workflow schema design
- Five supporting action schemas
- Comprehensive documentation
- Placeholder implementation scripts
- Error handling structure
- Output schemas defined

### Next Steps

1. Implement `download_packs.sh` (or create API endpoint)
2. Implement `get_pack_dependencies.sh`
3. Implement `build_pack_envs.sh`
4. Update `run_pack_tests.sh` if needed
5. Implement `register_packs.sh` (API wrapper)
6. End-to-end testing
7. Documentation updates based on testing
335
packs/core/workflows/install_packs.yaml
Normal file
@@ -0,0 +1,335 @@
|
||||
# Install Packs Workflow
|
||||
# Complete workflow for installing packs from multiple sources with dependency resolution
|
||||
|
||||
name: install_packs
|
||||
ref: core.install_packs
|
||||
label: "Install Packs"
|
||||
description: "Install one or more packs from git repositories, HTTP archives, or pack registry with automatic dependency resolution"
|
||||
version: "1.0.0"
|
||||
|
||||
# Input parameters
parameters:
  type: object
  properties:
    packs:
      type: array
      description: "List of packs to install (git URLs, HTTP URLs, or pack refs like 'slack@1.0.0')"
      items:
        type: string
      minItems: 1
    ref_spec:
      type: string
      description: "Git reference to checkout for git URLs (branch, tag, or commit)"
    skip_dependencies:
      type: boolean
      description: "Skip installing pack dependencies"
      default: false
    skip_tests:
      type: boolean
      description: "Skip running pack tests before registration"
      default: false
    skip_env_build:
      type: boolean
      description: "Skip building runtime environments (Python/Node.js)"
      default: false
    force:
      type: boolean
      description: "Force installation even if packs already exist or tests fail"
      default: false
    registry_url:
      type: string
      description: "Pack registry URL for resolving pack refs"
      default: "https://registry.attune.io/index.json"
    packs_base_dir:
      type: string
      description: "Base directory for permanent pack storage"
      default: "/opt/attune/packs"
    api_url:
      type: string
      description: "Attune API URL"
      default: "http://localhost:8080"
    timeout:
      type: integer
      description: "Timeout in seconds for the entire workflow"
      default: 1800
      minimum: 300
      maximum: 7200
  required:
    - packs

# Workflow variables
vars:
  - temp_dir: null
  - downloaded_packs: []
  - missing_dependencies: []
  - installed_pack_refs: []
  - failed_packs: []
  - start_time: null

# Workflow tasks
tasks:
  # Task 1: Initialize workflow
  - name: initialize
    action: core.noop
    input:
      message: "Starting pack installation workflow"
    publish:
      - start_time: "{{ now() }}"
      - temp_dir: "/tmp/attune-pack-install-{{ uuid() }}"
    on_success: download_packs

  # Task 2: Download packs from specified sources
  - name: download_packs
    action: core.download_packs
    input:
      packs: "{{ parameters.packs }}"
      destination_dir: "{{ vars.temp_dir }}"
      registry_url: "{{ parameters.registry_url }}"
      ref_spec: "{{ parameters.ref_spec }}"
      api_url: "{{ parameters.api_url }}"
      timeout: 300
      verify_ssl: true
    publish:
      - downloaded_packs: "{{ task.download_packs.result.downloaded_packs }}"
      - failed_packs: "{{ task.download_packs.result.failed_packs }}"
    on_success:
      - when: "{{ task.download_packs.result.success_count > 0 }}"
        do: check_download_results
    on_failure: cleanup_on_failure

  # Task 3: Check if any packs were successfully downloaded
  - name: check_download_results
    action: core.noop
    input:
      message: "Downloaded {{ task.download_packs.result.success_count }} pack(s)"
    on_success:
      - when: "{{ not parameters.skip_dependencies }}"
        do: get_dependencies
      - when: "{{ parameters.skip_dependencies }}"
        do: build_environments

  # Task 4: Get pack dependencies from pack.yaml files
  - name: get_dependencies
    action: core.get_pack_dependencies
    input:
      pack_paths: "{{ vars.downloaded_packs | map(attribute='pack_path') | list }}"
      api_url: "{{ parameters.api_url }}"
      skip_validation: false
    publish:
      - missing_dependencies: "{{ task.get_dependencies.result.missing_dependencies }}"
    on_success:
      - when: "{{ task.get_dependencies.result.missing_dependencies | length > 0 }}"
        do: install_dependencies
      - when: "{{ task.get_dependencies.result.missing_dependencies | length == 0 }}"
        do: build_environments
    on_failure: cleanup_on_failure

  # Task 5: Recursively install missing pack dependencies
  - name: install_dependencies
    action: core.install_packs
    input:
      packs: "{{ vars.missing_dependencies | map(attribute='pack_ref') | list }}"
      skip_dependencies: false
      skip_tests: "{{ parameters.skip_tests }}"
      skip_env_build: "{{ parameters.skip_env_build }}"
      force: "{{ parameters.force }}"
      registry_url: "{{ parameters.registry_url }}"
      packs_base_dir: "{{ parameters.packs_base_dir }}"
      api_url: "{{ parameters.api_url }}"
      timeout: 900
    publish:
      - installed_pack_refs: "{{ task.install_dependencies.result.registered_packs | map(attribute='pack_ref') | list }}"
    on_success: build_environments
    on_failure:
      - when: "{{ parameters.force }}"
        do: build_environments
      - when: "{{ not parameters.force }}"
        do: cleanup_on_failure

  # Task 6: Build runtime environments (Python virtualenvs, npm install)
  - name: build_environments
    action: core.build_pack_envs
    input:
      pack_paths: "{{ vars.downloaded_packs | map(attribute='pack_path') | list }}"
      packs_base_dir: "{{ parameters.packs_base_dir }}"
      python_version: "3.11"
      nodejs_version: "20"
      skip_python: false
      skip_nodejs: false
      force_rebuild: "{{ parameters.force }}"
      timeout: 600
    on_success:
      - when: "{{ not parameters.skip_tests }}"
        do: run_tests
      - when: "{{ parameters.skip_tests }}"
        do: register_packs
    on_failure:
      - when: "{{ parameters.force or parameters.skip_env_build }}"
        do:
          - when: "{{ not parameters.skip_tests }}"
            next: run_tests
          - when: "{{ parameters.skip_tests }}"
            next: register_packs
      - when: "{{ not parameters.force and not parameters.skip_env_build }}"
        do: cleanup_on_failure

  # Task 7: Run pack tests to verify functionality
  - name: run_tests
    action: core.run_pack_tests
    input:
      pack_paths: "{{ vars.downloaded_packs | map(attribute='pack_path') | list }}"
      timeout: 300
      fail_on_error: false
    on_success: register_packs
    on_failure:
      - when: "{{ parameters.force }}"
        do: register_packs
      - when: "{{ not parameters.force }}"
        do: cleanup_on_failure

  # Task 8: Register packs in database and copy to permanent storage
  - name: register_packs
    action: core.register_packs
    input:
      pack_paths: "{{ vars.downloaded_packs | map(attribute='pack_path') | list }}"
      packs_base_dir: "{{ parameters.packs_base_dir }}"
      skip_validation: false
      skip_tests: "{{ parameters.skip_tests }}"
      force: "{{ parameters.force }}"
      api_url: "{{ parameters.api_url }}"
    on_success: cleanup_success
    on_failure: cleanup_on_failure

  # Task 9: Cleanup temporary directory on success
  - name: cleanup_success
    action: core.noop
    input:
      message: "Pack installation completed successfully. Cleaning up temporary directory: {{ vars.temp_dir }}"
    publish:
      - cleanup_status: "success"

  # Task 10: Cleanup temporary directory on failure
  - name: cleanup_on_failure
    action: core.noop
    input:
      message: "Pack installation failed. Cleaning up temporary directory: {{ vars.temp_dir }}"
    publish:
      - cleanup_status: "failed"

# Output schema
output_schema:
  type: object
  properties:
    registered_packs:
      type: array
      description: "Successfully registered packs"
      items:
        type: object
        properties:
          pack_ref:
            type: string
          pack_id:
            type: integer
          pack_version:
            type: string
          storage_path:
            type: string
          components_count:
            type: integer
    failed_packs:
      type: array
      description: "Packs that failed to install"
      items:
        type: object
        properties:
          source:
            type: string
          error:
            type: string
          stage:
            type: string
    installed_dependencies:
      type: array
      description: "Pack dependencies that were installed"
      items:
        type: string
    summary:
      type: object
      description: "Installation summary"
      properties:
        total_requested:
          type: integer
        success_count:
          type: integer
        failure_count:
          type: integer
        dependencies_installed:
          type: integer
        duration_seconds:
          type: integer

# Metadata
metadata:
  description: |
    This workflow orchestrates the complete pack installation process:

    1. Download Packs: Downloads packs from git repositories, HTTP archives, or pack registry
    2. Get Dependencies: Analyzes pack.yaml files to identify dependencies
    3. Install Dependencies: Recursively installs missing pack dependencies
    4. Build Environments: Creates Python virtualenvs, installs requirements.txt and package.json deps
    5. Run Tests: Executes pack test suites (if present and not skipped)
    6. Register Packs: Loads pack components into database and copies to permanent storage

    The workflow supports:
    - Multiple pack sources (git URLs, HTTP archives, pack refs)
    - Automatic dependency resolution (recursive)
    - Runtime environment setup (Python, Node.js)
    - Pack testing before registration
    - Force mode to override validation failures
    - Comprehensive error handling and cleanup

  examples:
    - name: "Install pack from git repository"
      input:
        packs:
          - "https://github.com/attune/pack-slack.git"
        ref_spec: "v1.0.0"
        skip_dependencies: false
        skip_tests: false
        force: false

    - name: "Install multiple packs from registry"
      input:
        packs:
          - "slack@1.0.0"
          - "aws@2.1.0"
          - "kubernetes@3.0.0"
        skip_dependencies: false
        skip_tests: false
        force: false

    - name: "Install pack with force mode (skip validations)"
      input:
        packs:
          - "https://github.com/myorg/pack-custom.git"
        ref_spec: "main"
        skip_dependencies: true
        skip_tests: true
        force: true

    - name: "Install from HTTP archive"
      input:
        packs:
          - "https://example.com/packs/custom-pack.tar.gz"
        skip_dependencies: false
        skip_tests: false
        force: false

  tags:
    - pack
    - installation
    - workflow
    - automation
    - dependencies
    - git
    - registry