# Pipelines
Automate multi-step workflows with AI agents using the DAG pipeline engine.
Pipelines are directed acyclic graph (DAG) workflows that chain together AI agents, skills, HTTP calls, code execution, and flow control into automated multi-step processes. Think of it as n8n — but with AI agents in every node and persistent memory across runs.
## Key Concepts

### Nodes
Each pipeline is built from nodes — discrete steps that process data. Agents Machine provides 21 node types across 6 categories:
| Category | Nodes |
|---|---|
| Triggers | Trigger |
| AI & Agents | Agent, Skill, Memory Search, Memory Store |
| Flow Control | Condition, Switch, Merge, Loop, Split, Aggregate, Error Handler |
| Data | Transform, Filter, Set Variable, Code |
| Integration | HTTP Request, Output, Sub-Pipeline |
| Utility | Delay, Note |
### Edges
Edges connect node outputs to node inputs, defining the execution order. The engine resolves the DAG using topological sort — nodes only execute when all upstream dependencies are complete.
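The scheduling step can be sketched with Kahn's algorithm; this is a minimal illustration of how a DAG engine orders nodes, not the product's actual implementation:

```python
from collections import deque

def topological_order(nodes, edges):
    """Return node IDs in dependency order; edges are (upstream, downstream) pairs."""
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for src, dst in edges:
        indegree[dst] += 1
        downstream[src].append(dst)
    # Start with nodes that have no upstream dependencies (e.g., the trigger).
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for dst in downstream[node]:
            indegree[dst] -= 1
            if indegree[dst] == 0:  # all upstream dependencies complete
                ready.append(dst)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: pipeline is not a DAG")
    return order

# Hypothetical five-node pipeline: trigger -> agent -> condition -> {skill, output}
order = topological_order(
    ["trigger", "agent", "condition", "skill", "output"],
    [("trigger", "agent"), ("agent", "condition"),
     ("condition", "skill"), ("condition", "output")],
)
```

Because the ordering is topological, a node such as `condition` can never run before both `trigger` and `agent` have completed.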
### Triggers
Pipelines can be started by:
- Manual — Run on demand via MCP tool or desktop app
- Schedule — Cron expressions (e.g., `0 9 * * 1-5` for weekdays at 9am)
- Webhook — HTTP endpoint that starts a run with the request payload
- Event — Internal EventBus events (e.g., memory stored, task updated)
- File Change — Watch filesystem paths with glob patterns
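As an illustration, a schedule trigger and a file-change trigger might be configured like this; the field names here are assumptions for the sketch, not a documented schema:

```json
[
  {
    "type": "trigger",
    "config": { "mode": "schedule", "cron": "0 9 * * 1-5" }
  },
  {
    "type": "trigger",
    "config": { "mode": "file-change", "paths": ["src/**/*.ts"] }
  }
]
```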
## Creating Pipelines

### Via MCP (IDE)
In your IDE, ask the AI to build a pipeline:
```
Create a pipeline called "code-review" with:
1. Trigger node (manual)
2. Agent node: reviewer agent analyzes the code
3. Condition: check if issues found
4. If true: Skill node — slack-notify with the review
5. Output: log the results
```

The `pipeline_create` MCP tool handles the rest.
### Via Desktop App
Open the Pipelines tab in the desktop app to use the visual DAG editor:
- Click New Pipeline
- Drag nodes from the sidebar palette
- Connect nodes by dragging between ports
- Configure each node in the properties panel
- Click Run to execute
### Via MCP Tools
| Tool | Description |
|---|---|
| `pipeline_create` | Create or update a pipeline definition |
| `pipeline_list` | List all saved pipelines |
| `pipeline_get` | Get the full pipeline definition |
| `pipeline_run` | Execute a pipeline |
| `pipeline_runs` | List recent pipeline runs |
| `pipeline_run_status` | Get detailed status of a run |
| `pipeline_delete` | Delete a pipeline |
## Execution Model
- Topological sort — Nodes are ordered by dependencies
- Sequential execution — Nodes run in dependency order
- Data flow — Each node's output is passed as input to connected nodes
- Error handling — Per-node retry policies and error handlers
- Status tracking — Real-time status updates (pending → running → completed/failed)
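The model above can be sketched as a single loop over the topologically sorted nodes; a minimal sketch of the run loop and status tracking, not the engine's implementation:

```python
def run_pipeline(order, nodes, inputs_of):
    """Run nodes in dependency order, threading each node's output to its
    downstream nodes and tracking per-node status."""
    status = {node_id: "pending" for node_id in order}
    outputs = {}
    for node_id in order:
        status[node_id] = "running"
        # Data flow: gather the outputs of all upstream nodes as this node's input.
        upstream = [outputs[u] for u in inputs_of.get(node_id, [])]
        try:
            outputs[node_id] = nodes[node_id](upstream)
            status[node_id] = "completed"
        except Exception:
            status[node_id] = "failed"
            break  # a real engine would consult retry policies and error handlers

    return status, outputs

# Toy three-node pipeline: trigger -> agent -> output
nodes = {
    "trigger": lambda upstream: {"payload": "run"},
    "agent": lambda upstream: f"reviewed: {upstream[0]['payload']}",
    "output": lambda upstream: upstream[0],
}
inputs_of = {"agent": ["trigger"], "output": ["agent"]}
status, outputs = run_pipeline(["trigger", "agent", "output"], nodes, inputs_of)
```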
### Node Statuses
| Status | Meaning |
|---|---|
| `pending` | Waiting to execute |
| `running` | Currently executing |
| `completed` | Finished successfully |
| `failed` | Execution error |
| `skipped` | Not executed (e.g., condition branch not taken) |
## Variables and Expressions
Pipelines support template expressions using `{{double braces}}`:
- `{{input}}` — Output from the previous node
- `{{vars.myVar}}` — Pipeline variable set by a Set Variable node
- `{{trigger.payload}}` — Original trigger data
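A minimal resolver for these expressions might look like this; a sketch assuming a simple dotted-path lookup into a context dictionary, not the engine's actual expression grammar:

```python
import re

def render(template, context):
    """Replace each {{dotted.path}} with the value found at that path in context."""
    def lookup(match):
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]  # walk one key per dotted segment
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", lookup, template)

result = render(
    "Review for {{vars.repo}}: {{input}}",
    {"input": "2 issues found", "vars": {"repo": "api-server"}},
)
# result == "Review for api-server: 2 issues found"
```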