Yantric gives AI agents durable project context, enforces dual-track human and AI review, and keeps feature descriptions and business rules honest as code evolves.
Free during private beta. No credit card required.
The problem
AI agents create tasks at 10× the rate of humans. Fresh chats lose context. Reviewers can't distinguish code-quality risks from security risks. Feature docs drift the moment code ships. Linear, Jira, and GitHub Issues weren't designed for any of this.
AI agents log dozens of tasks per session — discovered bugs, follow-ups, refactors. Without bulk operations and AI-aware filters, the queue becomes noise.
Every new AI session starts blank. Without a single load of project context, features, and learned conventions, agents repeat mistakes and contradict prior decisions.
A "code review approved" checkbox doesn't tell you whether the change touched auth, secrets, or a sensitive migration. Code quality and security need separate tracks.
Feature descriptions and business rules go stale the day code ships. Without an AI-proposes / human-approves loop, the docs and the code diverge silently.
How it works
Yantric exposes an MCP server. Your AI coding agent connects to it alongside your existing tools, sees the project, claims a task, logs progress, submits for review — all without leaving your editor.
The agent runs git remote get-url origin and Yantric resolves it to your project. From that point, every mutating tool call is bound to one project — no cross-project contamination.
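A minimal sketch of that binding step. The `git remote get-url origin` command is from the text; the client object and the `yc_resolve_project` tool name are illustrative assumptions, since the real resolution happens server-side when the agent connects.

```python
import subprocess

def origin_url() -> str:
    # The agent runs this inside the checkout, exactly as described above.
    out = subprocess.run(
        ["git", "remote", "get-url", "origin"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def bind_project(client, repo_url: str) -> dict:
    # `client` and the tool name `yc_resolve_project` are hypothetical;
    # Yantric resolves the URL to one project and scopes later writes to it.
    return client.call("yc_resolve_project", {"repo_url": repo_url})
```

Once bound, every mutating tool call carries that one project scope.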
One yc_get_task call returns the description, linked Features, Business Rules, and the composed learning context: org-wide conventions, technology-specific patterns, and project-specific knowledge.
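On the wire, that one call is a standard MCP `tools/call` request in a JSON-RPC 2.0 envelope. The tool name `yc_get_task` comes from the text; the argument name `task_id` is an assumption about its schema.

```python
import json

def get_task_request(task_id: str, request_id: int = 1) -> str:
    # JSON-RPC 2.0 envelope for an MCP tools/call invocation.
    # `task_id` is an assumed argument name, not a documented one.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "yc_get_task",
            "arguments": {"task_id": task_id},
        },
    })
```

The response carries the task description plus the composed context in one payload, so the agent starts with everything the team has approved.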
The agent claims the task, writes code with its existing tools, and logs progress at meaningful checkpoints so reviewers can follow the path the AI took.
On submit, the agent reconciles every linked Feature and Rule (unchanged / propose update / flag change). Code review fires automatically. Touch a sensitive path and security review fires too — separately.
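A sketch of what a reconciled submission might look like. The three verdicts are from the text; every field name, id format, and the payload shape are illustrative assumptions.

```python
# The three verdicts named above; everything else here is hypothetical.
VERDICTS = {"unchanged", "propose_update", "flag_change"}

def reconcile(kind: str, entity_id: str, verdict: str, note: str = "") -> dict:
    assert verdict in VERDICTS, f"unknown verdict: {verdict}"
    entry = {"kind": kind, "id": entity_id, "verdict": verdict}
    if note:
        entry["note"] = note
    return entry

submission = {
    "task_id": "TASK-42",  # hypothetical id
    "reconciliation": [
        reconcile("feature", "FEAT-7", "unchanged"),
        reconcile("business_rule", "RULE-3", "propose_update",
                  "Rate limit raised from 100 to 500 req/min"),
    ],
}
```

Requiring a verdict per linked Feature and Rule is what keeps the docs from drifting silently: the agent can't submit without stating what changed.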
Reviewers approve in the web UI. AI-proposed feature updates and learnings sit in an approval queue until a human signs off — so what the AI sees is always what the team agreed to.
Features
Bearer-token auth, JSON-RPC 2.0. Your AI agent connects directly. Every tool call is audited; bulk operations and search are built in.
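The transport shape, sketched with the standard library. Bearer token and JSON-RPC 2.0 are from the feature list; the endpoint URL is a placeholder assumption.

```python
import json
import urllib.request

def authed_call(endpoint: str, token: str, method: str, params: dict):
    # Builds (does not send) a bearer-authenticated JSON-RPC 2.0 request.
    # The endpoint path is an assumption; mint the token in the web UI.
    body = json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    ).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```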
Every coding session is bound to one project from the repo URL. Cross-project writes are impossible by design.
Code review and security review are independent state machines. A task is done only when both are approved.
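The gating rule, reduced to a sketch. The two-track requirement is from the text; the state names, and the idea that security review can be skipped for non-sensitive changes, are assumptions.

```python
# Illustrative states; the actual state machines may be richer.
APPROVED = "approved"
NOT_REQUIRED = "not_required"  # assumed state for non-sensitive changes

def task_done(code_review: str, security_review: str) -> bool:
    # Done only when both independent tracks have cleared.
    return code_review == APPROVED and security_review in (APPROVED, NOT_REQUIRED)
```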
Auth files, migrations, and dependency manifests auto-trigger security review. Configure your own patterns per project.
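How path-based triggering might look, as a sketch with glob patterns. The three trigger categories are from the text; the specific patterns are illustrative defaults, not Yantric's actual configuration format.

```python
from fnmatch import fnmatch

# Illustrative defaults; real patterns are configured per project.
SENSITIVE_PATTERNS = [
    "*/auth/*",         # auth files
    "*migrations*",     # schema migrations
    "package.json",     # dependency manifests
    "go.mod",
    "requirements*.txt",
]

def needs_security_review(changed_paths, patterns=SENSITIVE_PATTERNS):
    # fnmatch's `*` also crosses `/`, which is acceptable for a sketch.
    return any(fnmatch(p, pat) for p in changed_paths for pat in patterns)
```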
Features, business rules, and learnings flow through an AI-proposed, human-approved loop. Agents only see what the team has agreed to.
Tenant isolation is enforced at the data layer, not the app. A bug in business logic can't leak another org's data — the queries can't see it.
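One way to picture that invariant: a query layer that appends the tenant predicate unconditionally, so no business-logic bug can omit it. This is a sketch, not Yantric's implementation; database row-level security is another common way to enforce the same guarantee.

```python
import sqlite3

class TenantScopedDB:
    # Illustrative data-layer isolation: every query passes through a
    # wrapper that injects the org filter, so application code never
    # gets a chance to forget it.
    def __init__(self, conn: sqlite3.Connection, org_id: str):
        self.conn, self.org_id = conn, org_id

    def select(self, table: str, where: str = "1=1", params: tuple = ()):
        sql = f"SELECT * FROM {table} WHERE ({where}) AND org_id = ?"
        return self.conn.execute(sql, (*params, self.org_id)).fetchall()
```

A buggy `where` clause can at worst widen results within the caller's own org; it can never reach another org's rows, because the `org_id` predicate is always appended.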
Spin up your workspace, mint an API key, and connect your AI agent in minutes.