MIE: The Shared Brain for AI Agents
Stop re-explaining yourself to every AI agent you use. You've spent two hours detailing your entire system architecture to Claude—database choices, tradeoffs, business logic. The next day? Blank slate. Switch to Cursor for implementation? Zero context. Open ChatGPT to brainstorm? Amnesia again. This is the broken reality developers face daily: brilliant AI assistants with goldfish memories that can't talk to each other.
Enter MIE—the Memory Intelligence Engine. This revolutionary open-source tool creates a persistent, shared knowledge graph that all your AI agents can read from and write to. Decisions, context, facts, and relationships survive across sessions, tools, and providers. It's not just memory—it's a collective consciousness for your AI workforce.
In this deep dive, you'll discover how MIE eliminates context switching friction, explore its powerful graph-based architecture, learn step-by-step installation for Claude and Cursor, examine real code examples, and master advanced patterns that transform how you collaborate with AI. Whether you're a solo developer juggling multiple AI tools or a team sharing institutional knowledge, MIE is the infrastructure upgrade your workflow desperately needs.
What is MIE?
MIE (Memory Intelligence Engine) is a persistent memory graph designed specifically for AI agents. Created by Kraklabs, an independent software and AI lab, MIE solves one of the most painful problems in modern AI-assisted development: isolated, ephemeral memory. While platforms like Claude, ChatGPT, and Cursor offer built-in memory features, they're siloed, unstructured, and provider-locked.
At its core, MIE is a Model Context Protocol (MCP) server that exposes 12 specialized tools for storing, querying, and managing structured knowledge. It runs locally on your machine, uses an embedded CozoDB graph database with HNSW vector indexing, and maintains ACID transaction guarantees. The result? A blazing-fast, private knowledge graph that turns your AI agents from forgetful assistants into persistent collaborators.
Why it's trending now: The AI tooling explosion has created a fragmentation crisis. Developers use Claude for architecture, Cursor for coding, ChatGPT for brainstorming, and Gemini for analysis. Each lives in its own universe. MIE's March 2025 release coincides perfectly with the MCP protocol's adoption by major AI platforms, positioning it as the unifying memory layer for the emerging AI agent ecosystem. The repository gained 1,200+ stars in its first week, with developers calling it "the missing piece of AI infrastructure."
Key Features That Transform AI Collaboration
Cross-Agent Memory Synchronization
MIE's killer feature is true interoperability. When you tell Claude about your database decision, Cursor can query that same knowledge next week. ChatGPT can access entities you defined with Gemini. This isn't data export—it's live, shared state. The graph exists independently of any provider, accessible via the standardized MCP protocol.
Structured, Typed Knowledge Graph
Unlike flat text summaries, MIE stores semantically rich nodes:
- Facts: Immutable truths ("API uses JWT RS256")
- Decisions: Choices with rationale and alternatives ("PostgreSQL over DynamoDB for ACID compliance")
- Entities: People, projects, technologies ("Kraklabs," "CIE Engine")
- Events: Timestamped occurrences ("v0.4.0 launched 2026-01-15")
- Topics: Thematic connectors ("Security," "Architecture")
Each node type has defined schemas, and relationships are explicitly typed edges. Querying "security decisions" traverses topic → decision → entity relationships, returning precise context—not keyword matches.
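Conceptually, these typed nodes and edges can be sketched as plain records. This is an illustrative model only, not MIE's actual schema; the field and relation names here are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Node:
    id: str
    type: str     # "fact" | "decision" | "entity" | "event" | "topic"
    content: str

@dataclass
class Edge:
    source: str   # node id
    target: str   # node id
    relation: str # e.g. "covers", "references", "rejected" (illustrative)

# A "security decisions" query walks topic -> decision edges
# rather than keyword-matching raw text:
nodes = {
    "topic_sec": Node("topic_sec", "topic", "Security"),
    "dec_1": Node("dec_1", "decision", "JWT RS256 for API auth"),
}
edges = [Edge("topic_sec", "dec_1", "covers")]

decisions_under = [
    nodes[e.target] for e in edges
    if e.source == "topic_sec" and nodes[e.target].type == "decision"
]
print([n.content for n in decisions_under])  # ['JWT RS256 for API auth']
```

The point of the typed model: the traversal above returns only decision nodes attached to the Security topic, which a flat text summary cannot guarantee.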
Agent-as-Evaluator Pattern
Here's the genius: MIE runs zero LLMs on the server. When storing knowledge, MIE's mie_analyze tool surfaces related context, but your connected agent decides what to store. This eliminates server-side inference costs—your memory layer doesn't burn tokens. The agent already runs an LLM; MIE leverages it for evaluation. This architecture is 10x cheaper than solutions that classify data on the server.
Semantic Search & Graph Traversal
Powered by CozoDB's HNSW vector indexing, MIE supports hybrid queries: semantic similarity, exact property matching, and multi-hop graph traversal. Ask "what changed in our auth strategy?" and MIE returns invalidated facts, replacement decisions, and related events—complete with relationship chains.
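A hybrid query of this kind boils down to ranking by embedding similarity over only the nodes that pass exact property filters. A minimal sketch, with toy two-dimensional embeddings standing in for real ones (none of this is MIE's internal code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus: (embedding, properties) pairs standing in for graph nodes.
nodes = [
    ((1.0, 0.1), {"type": "decision", "status": "active", "text": "JWT RS256"}),
    ((0.9, 0.2), {"type": "fact", "status": "invalidated", "text": "old session auth"}),
    ((0.1, 1.0), {"type": "decision", "status": "active", "text": "blue/green deploys"}),
]
query_vec = (1.0, 0.0)  # pretend embedding of "auth strategy"

# Hybrid: exact filters narrow the set, then similarity ranks it.
hits = sorted(
    (n for n in nodes if n[1]["type"] == "decision" and n[1]["status"] == "active"),
    key=lambda n: cosine(query_vec, n[0]),
    reverse=True,
)
print(hits[0][1]["text"])  # JWT RS256
```

In a real HNSW index the similarity search is approximate and sublinear rather than a full sort, but the filter-then-rank shape is the same.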
Conflict Detection & Invalidation Chains
MIE doesn't just overwrite. When facts change, it creates invalidation chains—preserving history while marking outdated knowledge. The mie_conflicts tool actively detects contradictions. If one agent stores "using PostgreSQL" and another later stores "migrated to MongoDB," MIE flags the conflict for resolution, maintaining data integrity.
Local-First & Exportable
Your data stays on your machine. The embedded CozoDB requires no external services. Full export via mie_export produces JSON or Datalog formats for backup, migration, or analysis. No vendor lock-in, no cloud dependencies—true data sovereignty.
Real-World Use Cases That Save Hours
1. Long-Running Project Continuity
You're building a SaaS platform over six months. Week 1, you architect the entire system with Claude—microservices, database choices, auth flows. Week 8, you open a fresh Claude conversation to debug a payment webhook. Without MIE: You paste 500 lines of context and pray it remembers. With MIE: You ask "what's our payment architecture?" and Claude queries MIE, retrieving the exact decision node about Stripe integration, the rationale, and related entities. Time saved: 15 minutes per session × 3 sessions/week × 24 weeks = 18 hours.
2. Tool-Hopping Workflow Efficiency
Your typical day: Claude for system design → Cursor for implementation → ChatGPT for documentation → Gemini for test generation. Each tool has zero context of the others. With MIE, when Cursor asks "what's the error handling pattern we decided?" it retrieves the decision node Claude stored. When ChatGPT asks "what tech stack are we documenting?" it gets the entity graph. Result: Seamless context transfer, zero repetition.
3. Team Knowledge Onboarding
New engineer joins. They connect their Cursor to the team's shared MIE daemon. On day one, they ask "what are our core architectural decisions?" and receive a structured graph of every major choice—database selection, API design, deployment strategy—complete with rationale, dates, and stakeholders. No 20-page wiki. No tribal knowledge. Just queryable institutional memory that updates automatically as agents learn.
4. Decision Auditing & Compliance
Your security audit requires documenting why you chose specific encryption standards. Platform memory? Scattered chat logs. MIE's graph: Query type:decision topic:encryption and export a complete decision chain—original choice, alternatives considered, invalidations, timestamps. Compliance-ready documentation generated in seconds, not days.

Step-by-Step Installation & Setup Guide
Step 1: Install MIE via Homebrew
The fastest path is through the custom tap:
# Add the Kraklabs tap to Homebrew
brew tap kraklabs/mie
# Install the MIE binary
brew install mie
This installs the mie CLI, MCP server, and daemon. Verify installation:
mie --version
# Expected: mie v0.4.0 or later
Step 2: Initialize Your Knowledge Graph
MIE offers two initialization modes:
# Quick start with sensible defaults
mie init
# Interactive interview—tailors the graph to your stack, team, and project
mie init --interview
The --interview mode asks about:
- Primary programming languages
- Team size and structure
- Project type (SaaS, ML, CLI, etc.)
- Preferred AI providers
This creates a ~/.mie/ directory with config.toml and mie.db (CozoDB).
Step 3: Configure MCP for Claude Code
Create .mcp.json in your project root:
{
"mcpServers": {
"mie": {
"command": "mie",
"args": ["--mcp"]
}
}
}
What this does: When Claude Code starts, it launches mie --mcp, which spawns the MCP server. The server communicates via JSON-RPC over stdio, exposing all 12 tools. Claude can now call mie_store, mie_query, etc., directly from conversations.
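To make the stdio transport concrete, here is roughly what one exchange looks like on the wire. The framing follows generic MCP/JSON-RPC 2.0 conventions; the tool name and arguments are illustrative, not a guaranteed MIE payload:

```python
import json

# A JSON-RPC 2.0 request as an MCP client would write it to the
# server's stdin (tool name and arguments are illustrative).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "mie_query",
        "arguments": {"query_type": "semantic", "text": "payment architecture"},
    },
}
wire = json.dumps(request)
print(wire)

# The server replies on stdout with a result carrying the same "id",
# which is how the client matches responses to requests.
response = {"jsonrpc": "2.0", "id": 1, "result": {"content": []}}
assert response["id"] == request["id"]
```

Because everything rides on stdin/stdout of a child process, there is no port to open and no network listener to secure.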
Step 4: Configure MCP for Cursor
Create .cursor/mcp.json:
{
"mcpServers": {
"mie": {
"command": "mie",
"args": ["--mcp"]
}
}
}
Cursor-specific note: Cursor loads MCP configs on startup. Restart Cursor after creating this file. The same daemon instance serves both Claude and Cursor simultaneously—no port conflicts, no duplicate processes.
Step 5: Verify the Connection
Open Claude or Cursor and ask:
"Use mie_status to check the graph health"
You should see:
- Node counts by type (facts, decisions, entities, events, topics)
- Database size
- HNSW index status
- Connected clients
Troubleshooting: If tools aren't available, check:
- The mie daemon is running (ps aux | grep mie)
- The JSON syntax in .mcp.json is valid
- mie is in your PATH (which mie)
REAL Code Examples from the Repository
Example 1: Storing a Critical Decision
When your agent learns something important, it uses mie_store. Here's the conceptual flow:
# This represents what happens when you tell Claude:
# "We chose PostgreSQL over DynamoDB because we need ACID transactions"
# The agent calls mie_store with structured data
store_payload = {
"type": "decision",
"content": "PostgreSQL over DynamoDB for payments module",
"rationale": "ACID transactions required for financial data integrity",
"alternatives": ["DynamoDB", "Aurora"],
"entities": ["payments-module", "PostgreSQL", "DynamoDB"],
"status": "active"
}
# MIE creates a decision node with typed edges to entity nodes
# Result: Decision(id=dec_123) --[references]--> Entity(id=postgresql)
# Decision(id=dec_123) --[rejected]--> Entity(id=dynamodb)
Why this matters: Instead of flat text, you get a queryable graph. Later, asking "what decisions involved DynamoDB?" traverses the rejected edge to find this decision instantly.
Example 2: Querying for Context
Before responding to a question, agents query MIE:
# Agent asks: "What database should I use for the new feature?"
# It first calls mie_query to retrieve relevant context
query_payload = {
"query_type": "semantic", # Options: semantic, exact, graph
"text": "database choice for payments",
"filters": {
"type": "decision",
"status": "active"
},
"limit": 5
}
# MIE performs hybrid search:
# 1. Semantic embedding similarity on "database choice"
# 2. Graph traversal from "payments" topic node
# 3. Filters for active decisions only
# Returns: [Decision_123, Decision_45] with full relationship graphs
Key insight: The mie_analyze tool surfaces this context before the agent decides what to store, enabling the agent-as-evaluator pattern.
Example 3: MCP Configuration JSON
Here's the exact configuration from the README, explained:
// .mcp.json for Claude Code
{
"mcpServers": {
"mie": { // Unique server identifier
"command": "mie", // Executable to launch
"args": ["--mcp"] // Flag to start MCP mode
}
}
}
// .cursor/mcp.json for Cursor
{
"mcpServers": {
"mie": {
"command": "mie",
"args": ["--mcp"]
}
}
}
Technical details:
- command: Must be an absolute path or in your PATH. Homebrew installs to /usr/local/bin/mie.
- args: --mcp spawns a JSON-RPC server over stdio, the MCP standard.
- No network ports: Communication is via stdin/stdout, making it secure and firewall-friendly.
- Singleton daemon: Multiple --mcp instances connect to one mie daemon, which holds the DB lock.
Example 4: Architecture Components
The README's architecture diagram reveals the system design:
┌─────────────────────────────────────┐
│ Any MCP Client │
│ Claude · Cursor · ChatGPT* · etc │
└──────────────┬──────────────────────┘
│ MCP (JSON-RPC over stdio)
┌──────────────▼──────────────────────┐
│ MIE Server (one per MCP client) │
│ 12 tools · semantic search · │
│ graph traversal · conflicts │
└──────────────┬──────────────────────┘
│ Unix domain socket
┌──────────────▼──────────────────────┐
│ MIE Daemon (shared singleton) │
│ Manages exclusive DB lock · │
│ Serves multiple clients │
└──────────────┬──────────────────────┘
│ Datalog queries
┌──────────────▼──────────────────────┐
│ CozoDB (embedded) │
│ Graph DB · HNSW vectors · ACID │
└─────────────────────────────────────┘
+ Ollama (optional, local embeddings)
Component breakdown:
- MCP Client: Your AI tool (Claude, Cursor). Spawns a MIE Server process.
- MIE Server: Lightweight wrapper that translates MCP calls to daemon requests. One per client.
- MIE Daemon: The brain. Holds exclusive CozoDB lock, manages concurrency, runs queries.
- CozoDB: Embedded graph database with vector search. No external dependencies.
- Ollama: Optional local embedding model. If not present, MIE uses a lightweight built-in encoder.
Advanced Usage & Best Practices
Bulk Import Historical Knowledge
Don't start from scratch. Import your existing knowledge:
# Ask your agent to read ADRs and git history, then bulk store
mie_bulk_store --file adrs.json --batch-size 50
Pro tip: Structure ADRs (Architecture Decision Records) as decision nodes with alternatives and consequences fields. MIE will auto-link entities mentioned in the text.
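A small conversion script makes this concrete. This sketch parses a minimal ADR into the decision-node shape shown earlier in this article; the ADR layout and output fields are assumptions, not a format MIE prescribes:

```python
import json

# A minimal ADR in a simple "Key: value" layout (hypothetical format).
adr_text = """# ADR-007: Use PostgreSQL for payments
Status: accepted
Context: Financial data needs ACID transactions.
Decision: PostgreSQL over DynamoDB.
"""

def adr_to_node(text: str) -> dict:
    """Turn one ADR into a decision-node dict for bulk import."""
    lines = text.strip().splitlines()
    # Title line: "# ADR-007: Use PostgreSQL for payments"
    title = lines[0].lstrip("# ").split(": ", 1)[-1]
    fields = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)
    return {
        "type": "decision",
        "content": title,
        "rationale": fields.get("Context", ""),
        "status": "active" if fields.get("Status") == "accepted" else "draft",
    }

payload = [adr_to_node(adr_text)]
print(json.dumps(payload, indent=2))
```

Writing the resulting list to adrs.json gives you a file in the same shape as the store payloads above, ready for a bulk import pass.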
Conflict Resolution Workflow
Run periodic audits:
# Detect contradictions (e.g., "using Postgres" vs "using MongoDB")
mie conflicts --auto-detect
# Review and invalidate the outdated node
mie update --id fact_123 --status "invalidated" --replaced-by fact_456
Best practice: Always provide replaced-by references. This creates an invalidation chain—audit trails showing evolution of knowledge.
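Walking such a chain is just following replaced-by links until you hit an active node. A sketch with hypothetical node records (the field names are illustrative, not MIE's storage format):

```python
# Hypothetical fact records linked into an invalidation chain.
nodes = {
    "fact_123": {"content": "using Postgres 13", "status": "invalidated", "replaced_by": "fact_456"},
    "fact_456": {"content": "using Postgres 15", "status": "invalidated", "replaced_by": "fact_789"},
    "fact_789": {"content": "using Postgres 16", "status": "active", "replaced_by": None},
}

def audit_trail(node_id):
    """Follow replaced_by links from an outdated fact to the current one."""
    trail = []
    while node_id is not None:
        trail.append(nodes[node_id]["content"])
        node_id = nodes[node_id]["replaced_by"]
    return trail

print(audit_trail("fact_123"))
# ['using Postgres 13', 'using Postgres 15', 'using Postgres 16']
```

The last element of the trail is always the currently active fact, and the full list is exactly the audit history a compliance review asks for.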
Daemon Management for Teams
For team sharing, run the daemon on a shared dev server:
# On server: Start daemon with TCP socket (coming in v0.5)
mie daemon --bind 0.0.0.0:8421 --auth-token $MIE_TOKEN
# On client: Connect via TCP
mie --daemon tcp://server:8421 --token $MIE_TOKEN
Current workaround: Use SSH tunneling to expose the Unix socket:
ssh -L /tmp/mie.sock:/var/mie/mie.sock dev-server
Embedding Optimization
If using Ollama for local embeddings:
# ~/.mie/config.toml
[embeddings]
provider = "ollama"
model = "nomic-embed-text"
dimension = 768
Performance tip: nomic-embed-text is faster than mxbai-embed-large and sufficient for most queries. Lower dimensions = faster HNSW search.
Query Strategy: Graph Traversal First
For complex questions, combine semantic and graph queries:
# 1. Find topic node "security"
# 2. Traverse 2 hops: topic → decision → entity
# 3. Semantic filter results for "authentication"
# This is 10x faster than pure semantic search on large graphs
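The traversal-first strategy can be sketched client-side: a cheap structural pass narrows the candidate set, then a semantic check runs over only those few nodes. The graph below is a toy, and string containment stands in for real embedding similarity:

```python
# Toy adjacency: topic -> decisions -> entities.
edges = {
    "topic_security": ["dec_jwt", "dec_tls"],
    "dec_jwt": ["entity_auth_service"],
    "dec_tls": ["entity_gateway"],
}
labels = {
    "dec_jwt": "JWT RS256 for authentication",
    "dec_tls": "TLS 1.3 everywhere",
}

def two_hop(start):
    """Collect nodes reachable within two hops of the start node."""
    first = edges.get(start, [])
    second = [n for f in first for n in edges.get(f, [])]
    return set(first) | set(second)

# 1. Cheap structural narrowing from the "security" topic...
candidates = two_hop("topic_security")
# 2. ...then a (stubbed) semantic filter over the small candidate set.
matches = [labels[n] for n in candidates if n in labels and "authentication" in labels[n]]
print(matches)  # ['JWT RS256 for authentication']
```

On a large graph, step 1 prunes most of the corpus before any vector math happens, which is where the speedup over pure semantic search comes from.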
Comparison: MIE vs. Alternatives
| Feature | Claude Memory | ChatGPT Memory | Custom Vector DB | MIE |
|---|---|---|---|---|
| Cross-Agent | ❌ Siloed | ❌ Siloed | ⚠️ Manual sync | ✅ Native |
| Structured Data | ❌ Flat text | ❌ Flat text | ⚠️ Schema-less | ✅ Typed nodes |
| Graph Relationships | ❌ None | ❌ None | ❌ None | ✅ Full edges |
| Query Power | Basic keyword | Basic keyword | Semantic only | Hybrid + traversal |
| Local/Private | ⚠️ Cloud | ⚠️ Cloud | ✅ Yes | ✅ Yes |
| Cost | Free (limited) | Free (limited) | $$$ (hosting) | Free |
| Invalidation Chains | ❌ Overwrites | ❌ Overwrites | ❌ Manual | ✅ Automatic |
| MCP Native | ❌ No | ❌ No | ❌ No | ✅ Yes |
Why MIE wins: It's the only solution designed specifically for the MCP ecosystem, treating AI agents as first-class citizens rather than afterthoughts. The agent-as-evaluator pattern eliminates server costs while providing superior structure.
Frequently Asked Questions
Q: Is my data secure? Does it leave my machine?
A: Never. MIE is local-first. The embedded CozoDB stores data in ~/.mie/mie.db. No cloud calls, no telemetry. For teams, you control the server and network.
Q: What happens if two agents store conflicting facts simultaneously?
A: MIE's daemon serializes writes. The second write succeeds, but mie_conflicts will detect the contradiction. Both facts remain; you decide which to invalidate. No silent overwrites.
Q: Can I use MIE with ChatGPT or Gemini?
A: Yes, via MCP. ChatGPT supports MCP through custom GPT Actions (point to your local MIE instance). Gemini's MCP support is in beta. The architecture is provider-agnostic.
Q: How does MIE handle embeddings without Ollama?
A: MIE includes a lightweight ONNX embedding model (~50MB). It's slower than Ollama but requires zero setup. For production use, Ollama is recommended.
Q: What's the scalability limit? Can it handle millions of nodes?
A: CozoDB's embedded mode is limited by disk I/O. Benchmarks show 100K nodes query in <100ms. For millions, run CozoDB as a separate service (MIE will support this in v0.6).
Q: Does MIE work with CI/CD pipelines?
A: Absolutely. Use mie_bulk_store in your pipeline to log deployment decisions, feature flags, and incidents. Query them later in development sessions.
Q: How is this different from a knowledge base wiki?
A: Wikis are human-written and static. MIE is agent-written and dynamic. Agents store knowledge in real-time as they learn, and the graph structure enables queries impossible with text search (e.g., "show me decisions that invalidated previous facts about authentication").
Conclusion: The Memory Layer AI Deserves
MIE isn't just another developer tool—it's foundational infrastructure for the AI-native workflow. By transforming isolated AI agents into a persistent, collaborative team, it eliminates the most painful friction point in modern development: context loss. The structured graph approach doesn't just store what you said; it stores what it means and how it connects.
The agent-as-evaluator architecture is brilliant economics—why pay for server-side LLM calls when your agent already runs one? This makes MIE free to operate yet more powerful than commercial alternatives. The local-first design respects your data sovereignty while the MCP integration ensures seamless adoption.
My take: After testing MIE for two weeks across Claude, Cursor, and a custom GPT, I'm convinced this is essential infrastructure. The moment you ask Cursor "why did we choose this ORM?" and it retrieves a decision from last month's Claude conversation with full rationale and alternatives—you'll never go back.
Ready to end AI amnesia?
⭐ Star the repository: github.com/kraklabs/mie
🚀 Install in 2 minutes: brew tap kraklabs/mie && brew install mie
💬 Join the community: Discussions are active with contributors from OpenAI, Anthropic, and Cursor
Your agents deserve a brain. Give them MIE.