Agents Machine

Pipelines

Automate multi-step workflows with AI agents using the DAG pipeline engine.

Pipelines are directed acyclic graph (DAG) workflows that chain together AI agents, skills, HTTP calls, code execution, and flow control into automated multi-step processes. Think of it as n8n — but with AI agents in every node and persistent memory across runs.

Key Concepts

Nodes

Each pipeline is built from nodes — discrete steps that process data. Agents Machine provides 21 node types across 6 categories:

Category     | Nodes
Triggers     | Trigger
AI & Agents  | Agent, Skill, Memory Search, Memory Store
Flow Control | Condition, Switch, Merge, Loop, Split, Aggregate, Error Handler
Data         | Transform, Filter, Set Variable, Code
Integration  | HTTP Request, Output, Sub-Pipeline
Utility      | Delay, Note

Edges

Edges connect node outputs to node inputs, defining the execution order. The engine resolves the DAG using topological sort — nodes only execute when all upstream dependencies are complete.
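The resolution step can be sketched with Kahn's algorithm. This is an illustrative sketch, not the engine's actual implementation; the node and edge shapes are assumptions:

```python
from collections import deque

def topo_order(nodes, edges):
    """Return nodes in dependency order; edges are (upstream, downstream) pairs."""
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for src, dst in edges:
        indegree[dst] += 1
        downstream[src].append(dst)
    # Nodes with no upstream dependencies (e.g. the trigger) are ready first.
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in downstream[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:  # all upstream dependencies complete
                ready.append(nxt)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: pipeline is not a DAG")
    return order

print(topo_order(
    ["trigger", "agent", "condition", "output"],
    [("trigger", "agent"), ("agent", "condition"), ("condition", "output")],
))
```

A node never appears in the order before everything upstream of it, which is exactly the guarantee the engine relies on.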

Triggers

Pipelines can be started by:

  • Manual — Run on demand via MCP tool or desktop app
  • Schedule — Cron expressions (e.g., 0 9 * * 1-5 for weekdays at 9am)
  • Webhook — HTTP endpoint that starts a run with request payload
  • Event — Internal EventBus events (e.g., memory stored, task updated)
  • File Change — Watch filesystem paths with glob patterns
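To make the cron syntax concrete, here is a minimal matcher for the `0 9 * * 1-5` example above. It supports only `*`, plain numbers, ranges, and comma lists, far less than a real cron parser:

```python
def cron_field_matches(field, value):
    """Match one cron field; supports '*', numbers, ranges, and comma lists."""
    if field == "*":
        return True
    for part in field.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr, minute, hour, day, month, weekday):
    """weekday uses the cron convention: 0=Sunday .. 6=Saturday."""
    fields = expr.split()
    return all(cron_field_matches(f, v) for f, v in zip(
        fields, (minute, hour, day, month, weekday)))

# "0 9 * * 1-5": minute 0, hour 9, any day of month, any month, Monday-Friday
print(cron_matches("0 9 * * 1-5", 0, 9, 15, 6, 3))  # Wednesday 9:00 -> True
print(cron_matches("0 9 * * 1-5", 0, 9, 15, 6, 6))  # Saturday 9:00 -> False
```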

Creating Pipelines

Via MCP (IDE)

In your IDE, ask the AI to build a pipeline:

Create a pipeline called "code-review" with:
1. Trigger node (manual)
2. Agent node: reviewer agent analyzes the code
3. Condition: check if issues found
4. If true: Skill node — slack-notify with the review
5. Output: log the results

The pipeline_create MCP tool handles the rest.
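The resulting definition might look roughly like the sketch below. All field names here are illustrative assumptions, not the actual schema; inspect a real pipeline with pipeline_get to see what pipeline_create actually stores:

```python
# Hypothetical shape of the "code-review" pipeline definition described above.
code_review = {
    "name": "code-review",
    "nodes": [
        {"id": "start",  "type": "trigger",   "config": {"mode": "manual"}},
        {"id": "review", "type": "agent",     "config": {"agent": "reviewer"}},
        {"id": "check",  "type": "condition", "config": {"expression": "{{input.issues}}"}},
        {"id": "notify", "type": "skill",     "config": {"skill": "slack-notify"}},
        {"id": "log",    "type": "output",    "config": {}},
    ],
    "edges": [
        {"from": "start",  "to": "review"},
        {"from": "review", "to": "check"},
        {"from": "check",  "to": "notify", "branch": "true"},
        {"from": "check",  "to": "log"},
    ],
}

# Sanity check: every edge references a declared node.
node_ids = {n["id"] for n in code_review["nodes"]}
assert all(e["from"] in node_ids and e["to"] in node_ids
           for e in code_review["edges"])
```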

Via Desktop App

Open the Pipelines tab in the desktop app to use the visual DAG editor:

  1. Click New Pipeline
  2. Drag nodes from the sidebar palette
  3. Connect nodes by dragging between ports
  4. Configure each node in the properties panel
  5. Click Run to execute

Via MCP Tools

Tool                | Description
pipeline_create     | Create or update a pipeline definition
pipeline_list       | List all saved pipelines
pipeline_get        | Get full pipeline definition
pipeline_run        | Execute a pipeline
pipeline_runs       | List recent pipeline runs
pipeline_run_status | Get detailed status of a run
pipeline_delete     | Delete a pipeline

Execution Model

  1. Topological sort — Nodes are ordered by dependencies
  2. Sequential execution — Nodes run in dependency order
  3. Data flow — Each node's output is passed as input to connected nodes
  4. Error handling — Per-node retry policies and error handlers
  5. Status tracking — Real-time status updates (pending → running → completed/failed)
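The five steps above can be sketched as a tiny sequential runner. This is a toy model under assumed shapes, not the engine itself; per-node retries, error-handler routing, and the skipped status are omitted:

```python
def run_pipeline(nodes, edges, handlers, trigger_payload=None):
    """Minimal DAG runner: dependency order, data flow, status tracking."""
    upstream = {n: [s for s, d in edges if d == n] for n in nodes}
    status = {n: "pending" for n in nodes}
    outputs, done, order = {}, set(), []
    while len(order) < len(nodes):            # 1. resolve dependency order
        ready = [n for n in nodes
                 if n not in done and all(u in done for u in upstream[n])]
        if not ready:
            raise ValueError("cycle: pipeline is not a DAG")
        order.extend(ready)
        done.update(ready)
    for n in order:                           # 2. sequential execution
        status[n] = "running"
        inputs = [outputs[u] for u in upstream[n]] or [trigger_payload]
        try:
            outputs[n] = handlers[n](inputs)  # 3. output flows downstream
            status[n] = "completed"
        except Exception:
            status[n] = "failed"              # 4. (retry policy omitted)
            break
    return status, outputs                    # 5. final per-node statuses

status, out = run_pipeline(
    nodes=["trigger", "agent", "output"],
    edges=[("trigger", "agent"), ("agent", "output")],
    handlers={"trigger": lambda x: x[0],
              "agent":   lambda x: x[0].upper(),
              "output":  lambda x: x[0]},
    trigger_payload="hello",
)
```

After the run, every node is completed and the output node holds "HELLO", which is the trigger payload transformed by the agent step.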

Node Statuses

Status    | Meaning
pending   | Waiting to execute
running   | Currently executing
completed | Finished successfully
failed    | Execution error
skipped   | Skipped (e.g., condition branch not taken)

Variables and Expressions

Pipelines support template expressions using {{double braces}}:

  • {{input}} — Output from the previous node
  • {{vars.myVar}} — Pipeline variable set by Set Variable node
  • {{trigger.payload}} — Original trigger data
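A resolver for these expressions can be illustrated in a few lines. This is a sketch of the idea, not the engine's actual template implementation, and the context shape is an assumption:

```python
import re

def resolve(template, context):
    """Replace {{path.to.value}} expressions with values from a nested context."""
    def lookup(match):
        value = context
        for key in match.group(1).strip().split("."):
            value = value[key]  # walk one level per dotted segment
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, template)

context = {
    "input": "review complete",
    "vars": {"channel": "#dev"},
    "trigger": {"payload": {"repo": "acme/api"}},
}
print(resolve("Post '{{input}}' to {{vars.channel}} for {{trigger.payload.repo}}",
              context))
```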
