Yantric
Built for AI-driven software development

Project management for the era of AI-driven dev.

Yantric gives AI agents durable project context, enforces dual-track human and AI review, and keeps feature descriptions and business rules honest as code evolves.

Free during private beta. No credit card required.

The problem

Issue trackers weren't built for the AI era.

AI agents create tasks at 10× the rate of humans. Fresh chats lose context. Reviewers can't separate code-quality risk from security risk. Feature docs drift the moment code ships. Linear, Jira, and GitHub Issues weren't designed for any of this.

High-volume task generation

AI agents log dozens of tasks per session — discovered bugs, follow-ups, refactors. Without bulk operations and AI-aware filters, the queue becomes noise.

Fresh chats lose context

Every new AI session starts blank. Without a single load of project context, features, and learned conventions, agents repeat mistakes and contradict prior decisions.

Single-track review

A "code review approved" checkbox doesn't tell you whether the change touched auth, secrets, or a sensitive migration. Code quality and security need separate tracks.

Drifting documentation

Feature descriptions and business rules go stale the day code ships. Without an AI-proposes / human-approves loop, the docs and the code diverge silently.

How it works

AI agents speak Yantric over MCP.

Yantric exposes an MCP server. Your AI coding agent connects to it alongside your existing tools, sees the project, claims a task, logs progress, submits for review — all without leaving your editor.

  1. Lock the session to a project

    The agent runs git remote get-url origin and Yantric resolves it to your project. From that point, every mutating tool call is bound to one project — no cross-project contamination.

  2. Load full task context

    One yc_get_task call returns the description, linked Features, Business Rules, and the composed learning context: org-wide conventions, technology-specific patterns, and project-specific knowledge.

  3. Claim, work, log progress

    The agent claims the task, writes code with its existing tools, and logs progress at meaningful checkpoints so reviewers can follow the path the AI took.

  4. Submit with reconciliation

    On submit, the agent reconciles every linked Feature and Rule (unchanged / propose update / flag change). Code review fires automatically. Touch a sensitive path and security review fires too — separately.

  5. Humans approve, knowledge sticks

    Reviewers approve in the web UI. AI-proposed feature updates and learnings sit in an approval queue until a human signs off — so what the AI sees is always what the team agreed to.
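
The five steps above can be pictured as a sequence of MCP tool calls. Here is a minimal sketch in Python that builds the JSON-RPC 2.0 payloads an agent would send; of the tool names, only yc_get_task appears in the text — yc_lock_session, yc_claim_task, yc_log_progress, and yc_submit_task (and all their arguments) are hypothetical stand-ins for illustration.

```python
import json

def tool_call(call_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request for an MCP tools/call."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

calls = [
    # 1. Lock the session to one project, resolved from the repo's
    #    origin URL (the agent gets it via `git remote get-url origin`).
    tool_call(1, "yc_lock_session", {"repo_url": "git@github.com:acme/api.git"}),
    # 2. Load full task context: description, Features, Rules, learnings.
    tool_call(2, "yc_get_task", {"task_id": "TASK-123"}),
    # 3. Claim the task, then log progress at meaningful checkpoints.
    tool_call(3, "yc_claim_task", {"task_id": "TASK-123"}),
    tool_call(4, "yc_log_progress", {"task_id": "TASK-123", "note": "auth flow refactored"}),
    # 4. Submit, reconciling every linked Feature and Rule.
    tool_call(5, "yc_submit_task", {
        "task_id": "TASK-123",
        "reconciliation": {"FEAT-9": "unchanged", "RULE-2": "propose_update"},
    }),
]

for call in calls:
    print(json.dumps(call))
```

Step 5 happens in the web UI, so it has no tool call here.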

Features

Built for the volume and shape of AI-driven work.

MCP server

Bearer-token auth, JSON-RPC 2.0. Your AI agent connects directly. Every tool call is audited; bulk operations and search are built in.
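
Concretely, an authenticated call is a JSON-RPC 2.0 POST with a bearer token. A minimal sketch assuming an HTTP transport; the endpoint URL is a placeholder, not a documented address.

```python
import json
import urllib.request

def yantric_request(endpoint: str, token: str, method: str, params: dict) -> urllib.request.Request:
    """Build an authenticated JSON-RPC 2.0 request for the MCP server."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params,
    }).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # bearer-token auth
            "Content-Type": "application/json",
        },
    )

req = yantric_request(
    "https://example.invalid/mcp",  # placeholder endpoint
    "YOUR_API_KEY",
    "tools/call",
    {"name": "yc_get_task", "arguments": {"task_id": "TASK-123"}},
)
```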

Session locking

Every coding session is bound to one project from the repo URL. Cross-project writes are impossible by design.

Dual-track review

Code review and security review are independent state machines. A task is done only when both are approved.
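
One way to picture two independent state machines gating completion. This is an illustrative sketch, not Yantric's actual schema: the state names and the security_required flag (set when a change touches a sensitive path) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    code_review: str = "pending"      # pending -> approved
    security_required: bool = False   # set when a sensitive path is touched
    security_review: str = "pending"  # advances independently of code review

    def is_done(self) -> bool:
        if self.code_review != "approved":
            return False
        # Security review is a separate track; it gates completion
        # only when the change triggered it.
        return (not self.security_required) or self.security_review == "approved"

task = Task(security_required=True)
task.code_review = "approved"
# Code approval alone is not enough once security review has fired.
```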

Sensitive-path detection

Auth files, migrations, and dependency manifests auto-trigger security review. Configure your own patterns per project.
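
The trigger can be pictured as glob matching over a change's file paths. A sketch with made-up default patterns — they stand in for whatever you configure per project, not for Yantric's shipped list.

```python
from fnmatch import fnmatch

# Illustrative patterns only; configure your own per project.
SENSITIVE_PATTERNS = [
    "*auth*",           # auth files
    "*migrations/*",    # database migrations
    "package.json",     # dependency manifests
    "requirements.txt",
]

def needs_security_review(changed_files: list[str]) -> bool:
    """Fire the security-review track if any changed file matches a pattern."""
    return any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in SENSITIVE_PATTERNS
    )
```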

Living knowledge

Features, business rules, and learnings flow through an AI-proposed, human-approved loop. Agents only see what the team has agreed to.

Multi-tenant by design

Tenant isolation is enforced at the data layer, not the app. A bug in business logic can't leak another org's data — the queries can't see it.
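
Enforcement below the app layer can be sketched as a store that takes the tenant at construction time, so application code has no way to issue an unscoped query. A toy in-memory illustration of the principle — not Yantric's implementation.

```python
class TenantStore:
    """Every query is scoped to one tenant, fixed at construction."""

    def __init__(self, rows: list[dict], tenant_id: str):
        self._rows = rows
        self._tenant_id = tenant_id

    def query(self, **filters) -> list[dict]:
        # The tenant filter is appended here, at the data layer;
        # a bug in business logic can't widen the result set.
        filters["tenant_id"] = self._tenant_id
        return [r for r in self._rows
                if all(r.get(k) == v for k, v in filters.items())]

rows = [
    {"tenant_id": "org_a", "task": "fix login"},
    {"tenant_id": "org_b", "task": "rotate keys"},
]
store = TenantStore(rows, tenant_id="org_a")
```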

Ready to give your agents a memory?

Spin up your workspace, mint an API key, and connect your AI agent in minutes.