Trellis: The Multi-Platform AI Framework That Rules Them All
The AI coding revolution has created a chaotic mess. Developers juggle Claude Code, Cursor, OpenCode, and half a dozen other tools—each demanding you re-explain your project's architecture, conventions, and workflow from scratch. What if you could teach your AI assistant once and have it remember forever across every platform? Enter Trellis—the game-changing framework that transforms fragmented AI coding into a unified, powerful workflow.
In this deep dive, you'll discover how Trellis eliminates repetitive setup, enables parallel AI agent execution, and creates persistent project memory that follows you across tools. We'll walk through real installation commands, explore the exact directory structure that powers everything, and reveal advanced strategies top development teams use to 10x their AI coding productivity. Whether you're a solo developer drowning in context-switching or a team lead struggling to standardize AI workflows, this guide will show you why Trellis is rapidly becoming the essential infrastructure for modern software development.
What is Trellis? The Revolutionary AI Coding Infrastructure
Trellis is an open-source, all-in-one AI framework and toolkit created by mindfold-ai that fundamentally reimagines how developers interact with AI coding assistants. Unlike traditional approaches that lock you into platform-specific configuration files, Trellis provides a unified workflow layer that integrates with ten AI coding platforms: Claude Code, Cursor, OpenCode, iFlow, Codex, Kilo, Kiro, Gemini CLI, Antigravity, and Qoder.
The framework emerged from a critical pain point: the explosion of AI coding tools has created a configuration nightmare. Developers waste countless hours crafting perfect instructions for Claude in CLAUDE.md, then duplicate similar efforts for Cursor's .cursorrules, and again for other platforms. Each file becomes a monolithic, unmaintainable mess. Trellis solves this by introducing structured, composable specifications that automatically inject relevant context into every AI session.
What makes Trellis genuinely revolutionary is its platform-agnostic architecture. The core workflow lives entirely in a .trellis/ directory within your repository, while Trellis generates the necessary integration files for each supported tool. This means your team can standardize on a single workflow while individual developers use their preferred AI assistant. The framework is currently trending rapidly in developer communities because version 0.3.6 expanded support from just 2 platforms to 10, added Windows compatibility, and introduced powerful features like task lifecycle hooks and custom template registries.
At its heart, Trellis treats AI coding as a first-class development workflow rather than an afterthought. It brings structure to the chaotic world of prompt engineering, context management, and AI agent coordination—making it indispensable for serious development teams.
Six Powerful Capabilities That Transform AI Development
Auto-Injected Specs: Write Once, Use Everywhere
Trellis eliminates the copy-paste nightmare of traditional AI configuration. You define your project's coding standards, architecture decisions, review habits, and workflow preferences as layered Markdown specs inside .trellis/spec/. The framework's intelligent injection system automatically serves the relevant context to each AI session based on the task at hand. Instead of dumping a 500-line monolithic prompt, Trellis curates precise, targeted specifications that evolve with your project.
Task-Centered Workflow: Structure Your AI's Thinking
Every AI interaction in Trellis revolves around tasks stored in .trellis/tasks/. Each task contains its PRD, implementation context, review guidelines, and status tracking. This creates a natural scaffold for AI agents to follow, preventing them from wandering off-track. Tasks become the single source of truth for what needs to be built, reviewed, or fixed—turning vague AI conversations into structured project management.
Parallel Agent Execution: Git Worktrees Meet AI
Trellis leverages git worktrees to enable true parallel AI execution. Multiple AI agents can work on different tasks simultaneously without branch conflicts or local state corruption. One agent can refactor authentication while another builds a new feature, each operating in isolated worktrees. This transforms AI coding from a sequential bottleneck into a concurrent, high-throughput pipeline—a critical advantage for teams moving fast.
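Trellis manages the worktrees for you, but the underlying git mechanics are worth understanding. The sketch below demonstrates the isolation model with plain git commands in a throwaway repository (the repo, branch, and directory names are illustrative, not Trellis conventions):

```shell
set -e
# Demo of the isolation Trellis builds on (paths/branches are illustrative).
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# Each worktree is a separate checkout with its own branch and index,
# so two agents can edit and commit concurrently without collisions.
git worktree add ../agent-auth -b task/auth-refactor
git worktree add ../agent-feature -b task/new-feature
git worktree list    # main checkout plus the two agent worktrees

# When a task lands, merge its branch and retire its worktree:
git merge -q task/auth-refactor
git worktree remove ../agent-auth
```

Because each worktree has its own index and working directory, an agent's half-finished edits never leak into another agent's diff.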
Project Memory: Persistent Context That Actually Works
The .trellis/workspace/ directory contains personal journals that preserve session history, decisions made, and lessons learned. When you start a new session, Trellis injects relevant historical context so AI agents don't repeat mistakes or rehash resolved discussions. This creates a compounding knowledge base that makes each subsequent AI interaction smarter and more efficient.
Team-Shared Standards: Codify Your Best Practices
Specs live in version control alongside your code, enabling collaborative refinement of AI instructions. When one developer discovers a prompt pattern that produces excellent results, they commit it to the spec directory. The entire team immediately benefits. This turns AI workflow optimization from individual heroics into a continuous team improvement process.
Multi-Platform Setup: One Workflow, Ten Tools
Trellis generates platform-specific integration files automatically. Enable Cursor support, and it creates .cursor/ with optimized configurations. Enable Claude Code, and it sets up .claude/ with proper hooks. This write-once-run-anywhere approach means you can switch tools or onboard new team members without rebuilding your entire AI workflow from scratch.
Real-World Use Cases: Where Trellis Shines Brightest
Teaching AI Your Project Once and For All
The Problem: You're working on a Django monolith with specific patterns: custom middleware for authentication, Celery tasks with unique retry logic, and a strict frontend component architecture. Every time you start a Claude Code session, you spend 10 minutes explaining these conventions. Your CLAUDE.md has grown to 800 lines, and Claude still misses critical details.
The Trellis Solution: Break down your conventions into focused spec files: django-patterns.md, authentication-middleware.md, celery-conventions.md, and react-component-standards.md. Place them in .trellis/spec/. When you create a task for building a new API endpoint, Trellis automatically injects django-patterns.md and authentication-middleware.md. For a frontend task, it serves the React specs. The AI receives exactly what it needs—no more, no less—every single time.
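To make this concrete, here is what one of those focused spec files might contain. The contents are illustrative, not taken from the Trellis repository:

```markdown
<!-- .trellis/spec/celery-conventions.md (illustrative) -->
# Celery Task Conventions

- All tasks live in `app/tasks/` and are named `<verb>_<noun>_task`.
- Use `autoretry_for` with exponential backoff and `max_retries=5`;
  never retry on validation errors.
- Tasks must be idempotent: check for an existing result row before writing.
```

A file this small stays reviewable in pull requests, which is exactly what an 800-line monolith loses.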
Running Multiple AI Tasks in Parallel Without Chaos
The Problem: Your sprint requires building a new payment integration while simultaneously refactoring user profile pages. Using a single AI tool sequentially means one task waits while the other runs, stretching your timeline. Attempting both on the same branch creates merge conflicts and corrupted state.
The Trellis Solution: Create two separate tasks in .trellis/tasks/—payment-integration and profile-refactor. Trellis generates isolated git worktrees for each. You spin up Claude Code in the payment worktree and Cursor in the profile worktree. Both AI agents work concurrently, committing to separate branches. The task-centered structure keeps their contexts clean and isolated. You've just doubled your AI throughput without any risk of interference.
Transforming Project History Into Usable Memory
The Problem: Last week, an AI agent spent three hours debugging a subtle race condition in your async task queue. The solution involved a specific locking pattern and configuration tweak. This week, a different AI agent is building a similar feature and repeats the exact same debugging process, wasting another three hours rediscovering the solution.
The Trellis Solution: The first AI agent's session is automatically journaled in .trellis/workspace/your-name/. When the second agent starts work on the async task queue, Trellis injects the relevant journal entries describing the race condition and solution. The AI immediately understands the pattern and implements it correctly. Your project learns from itself, preventing knowledge loss between sessions.
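A journal entry of the kind Trellis would inject might look like this (the format and field names are illustrative):

```markdown
<!-- .trellis/workspace/your-name/journal.md (illustrative entry) -->
## 2025-01-14 — task: async-queue-hardening

- Symptom: intermittent double-processing of jobs under load.
- Root cause: race between dequeue and status update.
- Fix: acquire a per-job advisory lock before dequeue; set the
  visibility timeout above the worker's maximum runtime.
- Lesson: any new consumer of this queue must reuse the same lock helper.
```

Entries like this are cheap to write at the end of a session and save hours when a later agent touches the same subsystem.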
Maintaining One Workflow Across a Multi-Tool Team
The Problem: Your backend team loves Claude Code for its thoughtful reasoning. Your frontend team swears by Cursor's speed. The DevOps team experiments with Gemini CLI. Each group has different workflow files, conventions, and AI instruction patterns. Cross-team collaboration becomes a translation exercise, and onboarding is a nightmare.
The Trellis Solution: The entire team standardizes on Trellis specs stored in the main repository. Backend, frontend, and DevOps tasks all follow the same structure. Each developer uses their preferred AI tool, but Trellis ensures they all receive consistent context and follow identical workflows. A frontend developer can review a backend AI's work and understand exactly how decisions were made because the task structure is universal.
Step-by-Step Installation & Complete Setup Guide
Prerequisites
Before installing Trellis, ensure you have:
- Node.js 18+ and npm installed
- Git 2.23+ for worktree support
- An existing code repository (Trellis works best in initialized git repos)
- At least one supported AI coding tool installed (Claude Code, Cursor, etc.)
Installation Process
Install Trellis globally via npm for easy command-line access:
```bash
# Install the latest stable version
npm install -g @mindfoldhq/trellis@latest

# Verify installation
trellis --version
```
The global installation makes the `trellis` command available everywhere, enabling quick initialization in any project.
Repository Initialization
Navigate to your project directory and run:
```bash
# Initialize with personal workspace
trellis init -u your-github-username
```
This command performs several critical actions:
- Creates the `.trellis/` directory structure
- Generates `.trellis/workspace/your-github-username/` for your personal journals
- Creates starter spec templates in `.trellis/spec/`
- Sets up the foundational `workflow.md` file
Platform-Specific Configuration
For a tailored setup that only generates files for tools you actually use:
```bash
# Initialize with specific platforms
trellis init --cursor --opencode --codex -u your-github-username
```
Available platform flags:
- `--cursor`: Cursor editor integration (creates the `.cursor/` directory)
- `--opencode`: OpenCode support
- `--iflow`: iFlow platform setup
- `--codex`: OpenAI Codex integration
- `--kilo`: Kilo editor support
- `--kiro`: Kiro platform setup
- `--gemini`: Gemini CLI integration
- `--antigravity`: Antigravity platform
- `--qoder`: Qoder support (new in v0.3.4)
Mix and match flags based on your team's toolset. Trellis intelligently generates only the necessary integration files.
Post-Initialization Steps
1. Review generated specs: edit the `.trellis/spec/` files to match your project's conventions
2. Customize workflow: modify `.trellis/workflow.md` to reflect your team's process
3. Gitignore setup: Trellis automatically updates `.gitignore` to exclude personal workspace files while tracking shared specs
4. Commit the structure: `git add .trellis/ && git commit -m "Add Trellis AI framework structure"`
For detailed platform-specific entry commands and upgrade paths, consult the official documentation.
Real Code Examples from the Trellis Repository
Example 1: Global Installation Command
The foundation of using Trellis begins with its npm package installation:
```bash
# Install Trellis globally for system-wide access
npm install -g @mindfoldhq/trellis@latest
```
This command fetches the latest version from the npm registry and makes the `trellis` CLI available in your PATH. The `@latest` tag ensures you receive the most recent stable release with all platform support and bug fixes. Global installation is recommended because it allows you to run `trellis init` in any project directory without local dependencies.
Example 2: Basic Repository Initialization
The simplest way to start with Trellis creates a personal workspace for session continuity:
```bash
# Initialize Trellis with personal workspace
trellis init -u your-name
```
What happens under the hood:
- The `-u` flag creates a namespaced workspace directory: `.trellis/workspace/your-name/`
- This directory stores your personal journals, session history, and task-specific continuity files
- Each developer on your team should use their unique identifier, keeping personal context separate from shared specs
- The command generates default spec templates that you customize for your project's needs
Example 3: Multi-Platform Initialization
For teams using multiple AI tools simultaneously, Trellis supports selective platform activation:
```bash
# Initialize with specific platforms you actually use
trellis init --cursor --opencode --codex -u your-name
```
Platform flag breakdown:
- `--cursor`: Generates `.cursor/` with optimized rules and context injection
- `--opencode`: Creates OpenCode-specific configuration files
- `--codex`: Sets up OpenAI Codex integration hooks
- Additional flags can be combined: `--iflow`, `--kilo`, `--kiro`, `--gemini`, `--antigravity`, `--qoder`
This selective approach keeps your repository clean by only generating integration files for active tools, reducing clutter and potential conflicts.
Example 4: The Core Directory Structure
Trellis organizes your AI workflow into a logical, version-controlled structure:
```text
.trellis/
├── spec/                      # Project standards, patterns, and guides
│   ├── coding-standards.md    # Language-specific conventions
│   ├── architecture.md        # High-level design principles
│   ├── review-checklist.md    # Code review requirements
│   └── workflow-patterns.md   # Process documentation
├── tasks/                     # Task PRDs, context files, and status
│   └── feature-auth/          # Task-specific directory
│       ├── prd.md             # Product requirements
│       ├── context.md         # Implementation context
│       └── status.md          # Current progress and blockers
├── workspace/                 # Journals and developer-specific continuity
│   └── your-name/             # Personal workspace
│       ├── journal.md         # Session history and decisions
│       └── scratchpad.md      # Temporary notes and ideas
├── workflow.md                # Shared workflow rules for all tasks
└── scripts/                   # Utilities that power the workflow
    ├── inject-context.sh      # Context injection logic
    └── task-lifecycle.js      # Task management automation
```
Key architectural insights:
- Specs are composable: Break monolithic prompts into focused, reusable pieces
- Tasks are self-contained: All context for a feature lives in one directory
- Workspace is personal: Each developer's journal stays private while benefiting from shared knowledge
- Workflow is universal: `workflow.md` defines process rules that apply across all platforms
Example 5: Custom Template Registry
For organizations with multiple projects, Trellis supports sharing spec templates:
```bash
# Fetch templates from a custom registry
trellis init --registry https://github.com/your-org/your-spec-templates
```
Advanced usage pattern:
- Create a centralized repository containing your organization's standardized specs
- Include templates for common stacks: React+Node, Django+Postgres, Rust+Wasm
- Teams initialize new projects with `trellis init --registry your-templates`
- This ensures consistency across dozens of repositories while allowing project-specific customization
- The registry system supports private GitHub repos for enterprise use cases
This feature transforms Trellis from a project tool into an organizational standard, codifying your best practices at scale.
Advanced Usage Strategies and Best Practices
Spec Layering for Complex Projects
Create a hierarchy of specs that Trellis can compose intelligently:
- Base specs: fundamental language conventions (`.trellis/spec/python-base.md`)
- Stack specs: framework-specific patterns (`.trellis/spec/django-stack.md`)
- Domain specs: business logic conventions (`.trellis/spec/payments-domain.md`)
- Task specs: feature-specific context (`.trellis/tasks/payment-integration/context.md`)
Trellis injects from most general to most specific, ensuring AI agents understand both big-picture architecture and nitty-gritty details.
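Conceptually, the injected context for a payments task is the concatenation of these layers, general to specific. The sketch below builds hypothetical layer files and composes them with `cat`; Trellis's actual injection logic is internal and may differ:

```shell
set -e
cd "$(mktemp -d)"
# Hypothetical layer files matching the hierarchy above.
mkdir -p .trellis/spec .trellis/tasks/payment-integration
echo "# Python base conventions" > .trellis/spec/python-base.md
echo "# Django stack patterns"   > .trellis/spec/django-stack.md
echo "# Payments domain rules"   > .trellis/spec/payments-domain.md
echo "# Payment-integration task context" \
    > .trellis/tasks/payment-integration/context.md

# Compose context from most general to most specific.
cat .trellis/spec/python-base.md \
    .trellis/spec/django-stack.md \
    .trellis/spec/payments-domain.md \
    .trellis/tasks/payment-integration/context.md > injected-context.md
```

The ordering matters: later, more specific layers can override or refine rules stated by earlier ones, the same way CSS or config cascades work.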
Journal-Driven Continuous Improvement
Treat your workspace journal as a living retrospective document:
- After each session, summarize what worked and what didn't
- Document successful prompt patterns for future reuse
- Record dead ends to prevent AI agents from repeating them
- Tag entries with task IDs for easy cross-referencing
Over time, your journal becomes a personal playbook that makes every AI interaction more effective.
Parallel Execution Workflow
Master concurrent AI development with this pattern:
1. Create isolated tasks: `trellis task create --name auth-refactor --worktree`
2. Assign to different tools: run Claude Code in worktree A, Cursor in worktree B
3. Sync strategically: merge worktrees during daily standups, not continuously
4. Document conflicts: when AI agents disagree on approach, record the resolution in the task status
This workflow turns AI coding into a distributed system with clear interfaces and minimal coordination overhead.
Template Marketplace Strategy
Contribute your successful specs back to the community:
```bash
# Publish your spec templates
trellis template publish .trellis/spec/ --tag react,typescript,saas
```
Building a reusable spec library accelerates future project starts and establishes your team as AI workflow experts.
Trellis vs. Traditional Approaches: A Clear Winner
| Feature | CLAUDE.md / .cursorrules | Trellis Framework |
|---|---|---|
| Structure | Monolithic single file | Layered, composable specs |
| Platform Support | One platform per file | 10 platforms, one workflow |
| Task Management | Ad-hoc, unstructured | Structured PRDs with status tracking |
| Parallel Execution | Manual branch juggling | Automated git worktrees |
| Project Memory | Non-existent | Persistent workspace journals |
| Team Sharing | Copy-paste between repos | Version-controlled, centralized specs |
| Context Injection | Static, all-or-nothing | Dynamic, task-relevant curation |
| Scalability | Breaks down at ~300 lines | Scales to enterprise complexity |
Why Trellis wins: Traditional approaches treat AI configuration as a static document. Trellis treats it as a dynamic, executable workflow. The framework doesn't just store instructions—it actively manages context, coordinates agents, and preserves knowledge across sessions. While CLAUDE.md becomes a graveyard of outdated prompts, Trellis specs evolve with your project through version control and team collaboration.
The multi-platform support alone justifies adoption. A team standardized on Trellis can onboard a new AI tool in minutes, not hours, by rerunning `trellis init` with the new tool's platform flag. Your entire workflow, refined over months, transfers instantly.
Frequently Asked Questions: Everything Developers Need to Know
How is Trellis different from CLAUDE.md, AGENTS.md, or .cursorrules?
Those files are useful starting points, but they inevitably become monolithic and unmaintainable. Trellis adds intelligent structure: layered specs that compose dynamically, task-specific context injection, personal workspace journals for continuity, and platform-aware workflow wiring. It's the difference between a static README and a full-fledged application framework.
Is Trellis only for Claude Code?
Absolutely not. Trellis currently supports 10 AI coding platforms: Claude Code, Cursor, OpenCode, iFlow, Codex, Kilo, Kiro, Gemini CLI, Antigravity, and Qoder. The framework is designed for multi-platform teams. Each tool gets its own optimized integration files, but your core workflow remains identical across all of them.
Do I have to write every spec file manually?
No, and that's the beauty of AI-assisted workflow setup. Many teams start by having their AI draft specs from existing code. Run a one-time session asking Claude to analyze your codebase and generate coding-standards.md and architecture.md. Then, human developers review and tighten the critical sections. Trellis works best when you keep high-signal rules explicit and versioned, while letting AI handle the verbose documentation.
Can teams use Trellis without constant merge conflicts?
Yes, by design. Personal workspace journals live in `.trellis/workspace/your-name/` and should be gitignored. Shared specs and tasks live in the main repo, where they're reviewed like any other artifact. This separation ensures team members can experiment with prompts and journal freely without interfering with shared standards. When someone discovers an improved spec, they commit it for everyone to benefit.
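In practice, the split typically amounts to a `.gitignore` entry along these lines (the exact entries Trellis writes may differ by version):

```gitignore
# Personal journals and scratchpads stay local to each developer;
# shared specs, tasks, and workflow.md remain tracked and reviewed.
.trellis/workspace/
```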
How steep is the learning curve?
Trellis has a gentle onboarding ramp. Basic usage is as simple as trellis init and editing Markdown files. The complexity scales with your needs: start with simple specs, add task tracking when you need structure, leverage worktrees for parallel execution when your team grows. The framework respects the principle of progressive disclosure—you're never forced to use advanced features until you need them.
Does Trellis work with private repositories and enterprise setups?
Yes. Trellis is enterprise-ready with support for private npm registries, private GitHub template repositories, and self-hosted documentation. The AGPL-3.0 license ensures you can modify it for internal use while contributing improvements back to the community. Large organizations can create centralized spec libraries that propagate best practices across hundreds of repositories.
What's the performance impact on AI sessions?
Trellis improves performance by reducing token waste. Instead of sending massive monolithic prompts, it injects only relevant specs for the current task. This targeted approach means AI agents receive higher-signal context, leading to better code generation and fewer iterations. The background watch mode (`trellis update --watch`) ensures integration files stay current without manual intervention.
Conclusion: Why Trellis is Your Next Essential Tool
Trellis represents a paradigm shift in AI-assisted development. It transforms fragmented, repetitive AI interactions into a structured, scalable workflow that compounds value over time. By solving the critical problems of context management, parallel execution, and knowledge persistence, Trellis doesn't just make AI coding easier—it makes it fundamentally more powerful.
The framework's multi-platform support future-proofs your investment. As new AI tools emerge, Trellis adapts, ensuring your carefully crafted workflows transfer seamlessly. Its version-controlled specs turn AI prompt engineering from dark art into repeatable science.
For teams serious about leveraging AI in production development, Trellis is no longer optional—it's essential infrastructure. The time saved from eliminated repetition and improved AI output quality pays back the setup investment within weeks.
Ready to revolutionize your AI coding workflow? Visit the official Trellis GitHub repository to get started. Star the project, join the Discord community, and explore the comprehensive documentation. Your future self will thank you for making the switch today.
Next Steps: Install Trellis with `npm install -g @mindfoldhq/trellis@latest`, run `trellis init -u your-name` in your main project, and experience the difference a structured AI workflow makes. The future of coding is AI-assisted—Trellis ensures you're ready for it.