Connect Claude Code
Add Engram as an MCP server so Claude Code can query your codebase memory during any session.
Quick setup — 2 steps
1. Add Engram to your Claude Code settings
Open (or create) ~/.claude/settings.json and add the mcpServers block:
{ "mcpServers": { "engram": { "type": "sse", "url": "https://engram-ai.app/mcp/sse" } } }
2. Restart Claude Code
Quit and reopen Claude Code (or run /mcp in the CLI to reload servers). Engram will appear in the MCP tools list.
Using the Claude Code CLI? Run /mcp to see connected servers and verify Engram is listed.
Recommended startup workflow
At the start of each session, ask Claude Code to:
# Orient in a codebase
Use list_repos to see what's available, then get_concept_graph
for <repo> to understand the architecture before we start.
Available tools
list_repos (read): Discover ingested repos with memory counts.
query_memory (read): Ask a natural-language question; get a memory-grounded LLM answer.
search_memories (read): Raw vector search — retrieve ranked snippets without an LLM call.
get_concept_graph (read): Architectural overview of a repo: subsystems, layers, and relationships.
get_context_index (read): List all context pages available for a repo.
get_context_page (read): Read a context page by slug or fuzzy title match.
get_relevant_rules (read): Conventions and patterns relevant to a file path or question.
record_observation (write): Persist what you discovered into episodic memory for future sessions.
record_rule (write): Save a distilled rule, pattern, or decision into semantic memory.
Example session prompts
# Before editing a file
Use get_relevant_rules for repo=vllm, file_path=vllm/engine/async_llm_engine.py

# Understanding a subsystem
Use get_context_page for repo=vllm, slug=scheduler

# After finding something important
Use record_observation: "The async engine uses a shadow copy of the scheduler
state to avoid blocking the event loop during rebalancing." repo=vllm