The Memory Layer for Engineering Teams

Institutional memory for engineering teams.

Engineers leave and take context with them. Incidents repeat. New hires spend months reconstructing decisions that already exist. Engram ingests your git history, pull requests, and incidents — and makes them queryable, permanent, and compounding.

Ingest a public GitHub repository
github.com/
Press Enter or click Ingest — takes ~2 minutes for a typical repo

What Engram gives you

🗺
Knowledge graph
Services, modules, and dependencies mapped automatically. Answers "what is this and how does it fit?" — including coupling signals no vector search can surface.
🧠
Memory-grounded answers
Ask why a service was built the way it was. Engram retrieves the exact incident, rejected alternative, and reviewer's objection — not just the code.
🔍
Proactive insights
A background scanner surfaces recurring failures, hidden coupling, and repeated anti-patterns before they become incidents — drawn from your real history.
📈
Compounding value
Every commit distills into a rule. Every query updates memory. The longer Engram runs, the smarter it gets. Switching means forgetting years of accumulated knowledge.
Claude Code · MCP Integration

Query memory from inside your editor

Add Engram as an MCP server. Every Claude Code session gets instant access to codebase memory — architecture, past decisions, hard-won conventions — without leaving the terminal.

~/.claude/settings.json
{ "mcpServers": { "engram": { "type": "sse", "url": "https://engram-ai.app/mcp/sse" } } }

No install required — hosted MCP server. Full setup guide →
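If you prefer the command line, Claude Code can register the same server with `claude mcp add`; a minimal sketch using the hosted endpoint above (the server name "engram" is arbitrary):

```shell
# Register the hosted Engram MCP server over SSE — equivalent to the
# settings.json snippet above.
claude mcp add --transport sse engram https://engram-ai.app/mcp/sse
```

Either route produces the same entry in your MCP configuration; use whichever fits your setup workflow.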

query_memory · Ask a question, get a grounded answer from codebase memory
get_concept_graph · Architectural overview of a repo as structured text
get_relevant_rules · Conventions and patterns for the file you're editing
get_context_page · Read an architecture doc or context page by slug
record_observation · Persist what you discovered this session into episodic memory
record_rule · Write a distilled rule or pattern into semantic memory
search_memories · Raw retrieval — ranked snippets without an LLM call
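Under the hood, MCP tools are invoked with a JSON-RPC 2.0 `tools/call` request, per the MCP specification. A minimal sketch of the payload for `query_memory` (the `question` argument name is an assumption for illustration; the tool's actual input schema may differ):

```python
import json

# JSON-RPC 2.0 request body for an MCP tools/call invocation.
# "name" matches a tool from the list above; the "question" argument
# is a hypothetical example, not a documented schema.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_memory",
        "arguments": {"question": "Why does the billing service use a queue?"},
    },
}

# Serialize for sending over the SSE transport's message channel.
print(json.dumps(payload))
```

Claude Code constructs and sends these requests for you; the sketch is only to show what a tool invocation looks like on the wire.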