Memory for
AI agents.

Your AI agent forgets everything between sessions. Engram gives it persistent, intelligent memory — a temporal knowledge graph that extracts facts, detects conflicts, and consolidates over time. Open source. Works with any framework through MCP.

No memory

Every conversation starts from zero. Your agent re-asks the same questions, forgets preferences, loses project context.

Vector stores are not memory

Embedding raw messages is not understanding. There is no fact extraction, no conflict resolution, no temporal awareness.

Too much infrastructure

Many memory tools need an external vector or graph database before you can store a single fact. Engram starts with a single SQLite file and moves to Postgres only when you need it.

Framework lock-in

Most memory solutions are tied to one framework. Switch from LangChain to CrewAI and your memory layer breaks.

Not a vector store.
A knowledge graph that learns.

Fact extraction

Raw conversation messages go in. Structured facts come out. Entities, relationships, preferences, events — all extracted automatically via LLM.
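The shape of that pipeline can be sketched as follows. This toy pattern-based extractor stands in for the LLM call; the `Fact` fields and the regex patterns are illustrative, not Engram's actual schema.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str    # entity the fact is about
    predicate: str  # attribute or relationship
    value: str      # extracted value

# Toy stand-in for the LLM extraction pass (illustrative patterns only).
PATTERNS = [
    (re.compile(r"I prefer (\w+) answers"), ("user", "style")),
    (re.compile(r"my project is ([\w ]+)"), ("user", "project")),
]

def extract_facts(message: str) -> list[Fact]:
    facts = []
    for pattern, (subject, predicate) in PATTERNS:
        m = pattern.search(message)
        if m:
            facts.append(Fact(subject, predicate, m.group(1).strip()))
    return facts

facts = extract_facts("I prefer concise answers and my project is JamJet Cloud.")
```

The point is the interface, not the patterns: unstructured text in, typed subject/predicate/value triples out.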

Conflict detection

When new information contradicts existing facts, Engram detects the conflict and supersedes the old fact. "I moved to Austin" invalidates "I live in Seattle."
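A minimal sketch of that supersession rule, assuming a fact is retired rather than deleted when a newer statement about the same subject and predicate arrives. All names and fields here are illustrative, not Engram's internal model.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    value: str
    stored: date
    superseded_by: Optional["Fact"] = None
    reason: str = ""

def add_fact(store: list, new: Fact, reason: str = "") -> None:
    # A newer explicit statement about the same (subject, predicate)
    # retires the old fact: it stays in history but leaves retrieval.
    for old in store:
        if (old.subject, old.predicate) == (new.subject, new.predicate) \
                and old.superseded_by is None:
            old.superseded_by = new
            old.reason = reason
    store.append(new)

def active(store: list) -> list:
    return [f for f in store if f.superseded_by is None]

store: list = []
add_fact(store, Fact("sunil", "lives_in", "Pune", date(2025, 1, 10)))
add_fact(store, Fact("sunil", "lives_in", "Amsterdam", date.today()),
         reason="newer explicit statement")
```

After both calls, only the Amsterdam fact is active, while the Pune fact remains in the store with its supersession reason recorded.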

Hybrid retrieval

Vector search + FTS5 keyword search + graph spreading activation. Six retrieval signals combined: semantic similarity, temporal proximity, node importance, calibrated confidence, keyword match, and graph connections.
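One way to picture combining those six signals is a weighted blend. The weights and normalization below are invented for illustration; Engram's actual scoring is not documented here.

```python
# Hypothetical weights for the six retrieval signals (illustrative only).
WEIGHTS = {
    "semantic": 0.30,    # embedding similarity
    "keyword": 0.20,     # FTS5 match
    "graph": 0.15,       # spreading-activation proximity
    "recency": 0.15,     # temporal proximity
    "importance": 0.10,  # node importance
    "confidence": 0.10,  # calibrated confidence
}

def blended_score(signals: dict) -> float:
    """Combine per-signal scores, each assumed normalized to [0, 1]."""
    return sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())
```

A fact that is a strong semantic match but stale scores lower than one that matches on several signals at once, which is the practical payoff of blending.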

Consolidation

Background engine that decays stale facts, promotes important ones, deduplicates, summarizes entity histories, and builds preference rollups. Memory stays clean over time.
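Decay, the first of those operations, is commonly modeled as an exponential half-life; this sketch assumes that model, with a made-up 30-day default. Engram's actual decay curve is not documented here.

```python
def decayed_importance(importance: float, days_idle: float,
                       half_life_days: float = 30.0) -> float:
    """Exponential decay: a fact untouched for one half-life
    keeps half its importance (illustrative parameters)."""
    return importance * 0.5 ** (days_idle / half_life_days)
```

Under this model a fact idle for 60 days retains a quarter of its importance, while any access effectively resets the clock.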

Temporal grounding

Facts carry timestamps. Queries understand "last week" and "recently." Context output is date-annotated so your agent knows when things happened.
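A toy resolver for two such phrases shows the idea: a relative phrase plus a reference date yields a concrete date range. The Monday-based week and the 14-day window for "recently" are assumptions, and Engram's actual parsing handles far more forms.

```python
from datetime import date, timedelta

def resolve_relative(phrase: str, today: date) -> tuple:
    """Map a relative time phrase to a (start, end) date range.
    Toy implementation: assumes Monday-start weeks."""
    if phrase == "last week":
        start = today - timedelta(days=today.weekday() + 7)  # previous Monday
        return start, start + timedelta(days=6)
    if phrase == "recently":
        return today - timedelta(days=14), today
    raise ValueError(f"unsupported phrase: {phrase}")
```

With the range in hand, a query can filter facts by their stored timestamps instead of guessing from wording.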

Provider-agnostic

One binary speaks Ollama (local, free), OpenAI-compatible endpoints, Anthropic Claude, Google Gemini, or a shell-out command. Set one env var.

Watch facts accumulate, conflicts resolve.

Two short scenarios. The first shows what your agent remembers across sessions. The second shows how Engram supersedes a stale fact when reality changes.

Scenario 1: Before / after memory

Session 1

# User says:
"I prefer concise answers and
my project is JamJet Cloud."

# Engram extracts:
fact user.style    = "concise"
fact user.project  = "JamJet Cloud"

Session 8

# User says:
"Draft the release note for
the dashboard."

# Agent recalls:
project = "JamJet Cloud"
style   = "concise"
recent  = "policy violations panel"

Scenario 2: Conflict resolution

Stored (Jan 2025):  Sunil — lives in Pune
Incoming (today):   "I moved to Amsterdam"
Resolution:         supersede (newer explicit statement, same entity)

The old fact is kept in history (replayable, auditable) but no longer surfaces in retrieval. The reason and timestamp of the supersession are recorded.

One memory layer. Any framework.

Engram exposes 11 MCP tools. Any client that speaks MCP can use it — no adapter code, no integration library, no lock-in.

Claude Desktop
Cursor
VS Code
Windsurf
LangChain
CrewAI
Spring AI
Custom agents

Also available as: Rust library (embed in your app) · REST API · Python client · Java client · Spring Boot starter

Running in 60 seconds.

Docker (recommended)

docker run --rm -i \
  -v engram-data:/data \
  ghcr.io/jamjet-labs/engram-server:0.5.0

Uses local Ollama by default. Zero config.

Cargo install

cargo install jamjet-engram-server
engram serve --db memory.db

Native binary. ~3MB. Starts instantly.

MCP config (Claude Desktop)

{
  "mcpServers": {
    "memory": {
      "command": "engram",
      "args": ["serve", "--db", "memory.db"]
    }
  }
}

Add to your MCP config and restart. Done.

memory_add · memory_recall · memory_context · memory_search · memory_forget · memory_stats · memory_consolidate
messages_save · messages_get · messages_list · messages_delete
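As an illustration, calling one of these tools goes through a standard MCP `tools/call` JSON-RPC request. The method and params shape follow the MCP specification; the argument schema shown for `memory_add` is an assumption, not the documented API.

```python
import json

# Hypothetical tools/call payload for memory_add over MCP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_add",
        "arguments": {"content": "User prefers concise answers."},
    },
}
wire = json.dumps(request)  # sent over stdio to the engram process
```

Because this is plain MCP, the same payload works from any client on the list above with no adapter code.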

Honest comparison.

Capability            | Engram                                            | Mem0                          | Zep                  | Letta
Fact extraction       | Yes (LLM-based)                                   | Yes                           | Basic                | No
Conflict detection    | Yes (auto-supersede)                              | Partial                       | No                   | No
Knowledge graph       | Yes (entities + relationships)                    | v2 (new)                      | Yes                  | No
Hybrid retrieval      | Vector + FTS5 + graph (6 signals)                 | Vector + graph                | Hybrid               | Vector only
Consolidation engine  | 5 ops (decay, promote, dedup, summarize, reflect) | No                            | No                   | No
MCP server            | Yes (11 tools)                                    | No                            | No                   | No
Zero-infra quickstart | SQLite (single file)                              | Needs Qdrant                  | Needs Neo4j + Docker | Needs PostgreSQL
LLM provider choice   | Ollama, OpenAI, Anthropic, Google, shell-out      | OpenAI default                | OpenAI default       | OpenAI default
Message store         | Built-in (save/get/list/delete)                   | No                            | Yes                  | Yes
Spring Boot starter   | Maven Central                                     | No                            | No                   | No
Maturity              | New (v0.5.0, small community)                     | Established (large community) | Established          | Established (ex-MemGPT)

All frameworks evolve. Check their docs for the latest. Engram's advantage is MCP-native distribution + zero-infra + consolidation. Its disadvantage is a smaller community and fewer production deployments.

Want memory without
running a server?

Hosted Engram is a managed surface inside JamJet Cloud. Same MCP API, same retrieval quality — no Postgres, no Docker, no provisioning. Memory spaces per agent, retention controls per project, audit trail for every read and write.

Managed Postgres: Auto-provisioned, encrypted, backed up
API keys & scopes: Per-agent, per-project, per-environment
Team memory spaces: Shared facts across an agent fleet
Retention controls: TTL, archival, hard delete on request
Audit logs: Every read and write, exportable
Export & delete: Move out anytime — no lock-in

Need more than memory?

Engram is part of JamJet — the open-source safety layer for AI agents. When your agents need policy checks, audit trails, human approval, crash recovery, and cost limits, JamJet has it built in.

Add memory to Claude Desktop Learn about JamJet