Memory
Persistent AI memory powered by vector search — your AI never forgets.
Memory is the core feature of Agents Machine. It stores knowledge in a Qdrant vector database, enabling semantic search across all your stored information. Unlike regular AI chats that lose context between sessions, Agents Machine remembers everything you teach it.
How It Works
1. **Embed & Store**: Text is converted to vector representations and persisted in a project-scoped Qdrant collection.
2. **Retrieve Semantically**: Natural language queries find related content via vector similarity.
3. **Auto-Inject Context**: Relevant memories are automatically included in AI responses; no manual retrieval needed.
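The three steps above can be sketched in miniature. This is a toy illustration only: it uses a tiny keyword-count vector in place of a real embedding model, and a plain Python list in place of a Qdrant collection.

```python
import math

# Toy "embedding": counts of a few fixed keywords, normalized to unit
# length. A real system would call an embedding model; this only
# illustrates the embed -> store -> retrieve flow.
VOCAB = ["auth", "jwt", "token", "deploy", "docker", "database"]

def embed(text: str) -> list[float]:
    t = text.lower()
    vec = [float(t.count(w)) for w in VOCAB]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

store: list[dict] = []  # stands in for a project-scoped Qdrant collection

def store_memory(content: str, category: str, tags: list[str]) -> None:
    store.append({"content": content, "category": category,
                  "tags": tags, "vector": embed(content)})

def query_memory(question: str, top_k: int = 3) -> list[dict]:
    # Dot product equals cosine similarity on unit-length vectors.
    q = embed(question)
    ranked = sorted(store,
                    key=lambda m: sum(a * b for a, b in zip(q, m["vector"])),
                    reverse=True)
    return ranked[:top_k]

store_memory("Our API uses JWT authentication with refresh tokens",
             "service-context", ["auth", "jwt"])
store_memory("To deploy, run docker compose up -d",
             "architecture", ["deployment"])
hits = query_memory("What do we know about authentication?")
print(hits[0]["content"])  # the JWT memory ranks first
```

Because the query and the first memory share authentication-related keywords, that memory scores highest and is returned first.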
Types & Categories
| Type | Purpose | Example |
|---|---|---|
| `rule` | Enforced conventions | "Always use TypeScript strict mode" |
| `memory` | General knowledge | "The auth service was refactored in v2.3" |
| `anti-pattern` | Things to avoid | "Never use the `any` type in this project" |
| `guide` | How-to instructions | "To deploy, run docker compose up -d" |
| `decision` | Architectural decisions | "We chose PostgreSQL over MongoDB for ACID compliance" |
| Category | Purpose |
|---|---|
| `architecture` | System design, infrastructure decisions, service boundaries |
| `coding-rules` | Code style, naming conventions, patterns to follow |
| `anti-patterns` | Known pitfalls, deprecated approaches, things to avoid |
| `service-context` | Project-specific knowledge, business logic, domain context |
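One way to picture how a type, a category, and tags fit together on a single record is a small validated data class. The field names here are illustrative, not the tool's actual schema.

```python
from dataclasses import dataclass, field

# The five types and four categories from the tables above.
TYPES = {"rule", "memory", "anti-pattern", "guide", "decision"}
CATEGORIES = {"architecture", "coding-rules", "anti-patterns", "service-context"}

@dataclass
class MemoryRecord:
    content: str
    type: str        # what kind of knowledge this is
    category: str    # where it lives for filtered search
    tags: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.type not in TYPES:
            raise ValueError(f"unknown type: {self.type}")
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

rec = MemoryRecord(
    content="Never use the any type in this project",
    type="anti-pattern",
    category="anti-patterns",
    tags=["typescript"],
)
```

A record with an unknown type or category raises a `ValueError` at construction, which keeps the store searchable by a fixed set of filters.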
Storing Memories
Ask in natural language and the AI will store it:

> Store in memory: Our API uses JWT authentication with refresh tokens.
> Access tokens expire in 15 minutes, refresh tokens in 7 days.
> Category: service-context
> Tags: auth, jwt, api

The `store_memory` tool is available in your IDE:

> Use store_memory to save: "Database migrations run automatically
> on deploy via drizzle-kit migrate"
> Category: architecture, Tags: database, deployment

You can also create memories manually:
- Open the Memory Browser
- Click New Memory
- Fill in content, category, and tags
- Click Save
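However it is invoked, a store request boils down to a small structured payload along these lines. The key names are an assumption for illustration; check the tool's actual schema.

```python
# Hypothetical store_memory payload; real field names may differ.
request = {
    "content": ("Database migrations run automatically "
                "on deploy via drizzle-kit migrate"),
    "category": "architecture",
    "tags": ["database", "deployment"],
}

# Minimal sanity checks before storing: non-empty content, known category.
assert request["content"].strip()
assert request["category"] in {"architecture", "coding-rules",
                               "anti-patterns", "service-context"}
```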
Querying Memories
- 🔍 **Semantic Search**: Ask natural language questions: "What do we know about authentication?"
- 📁 **Category Filter**: Search within a category: "Search coding-rules for TypeScript conventions"
- 🏷️ **Tag Filter**: Filter by specific tags for precise results.
- 📦 **Batch Query**: Run multiple parallel searches with `batch_query_memory`.
Best Practices
Follow these guidelines for the best memory retrieval quality:
- Be specific — "Use Drizzle ORM with PostgreSQL" beats "Use a database"
- Use categories — Proper categorization improves search relevance
- Add tags — Tags enable precise filtering
- Update regularly — Keep memories current as your project evolves
- Remove stale data — Delete outdated memories to avoid confusion
- One concept per memory — Focused memories are more retrievable than large dumps