Memory that dreams.
Graph memory for AI agents — with a built-in consolidation cycle.
Runs on SQLite. No Neo4j. No Redis.
$ pip install engrava
Building a serious AI agent means dealing with memory.
Vector databases don’t understand knowledge structure. Graph databases require Neo4j. Memory frameworks LLM-extract everything into noise — 97.8% junk in one production audit (Mem0 issue #4573).
Engrava is different.
Quick start
Three APIs: create_thought, search_hybrid, create_edge. Auto-embed. No vector plumbing.
```python
import asyncio

from engrava import SqliteEngravaCore

async def main():
    store = SqliteEngravaCore("./agent.db")
    await store.initialize()

    # Store a thought
    thought = await store.create_thought(
        essence="User prefers concise answers",
        thought_type="preference",
        priority=0.8,
        confidence=0.9,
    )

    # Hybrid search — FTS5 + vector + recency
    results = await store.search_hybrid("user preferences", top_k=5)

    # Graph relationship (another_id: the id of a previously stored thought)
    await store.create_edge(thought.id, another_id, "INFLUENCES")

asyncio.run(main())
```
Full API reference in the docs.
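Engrava's exact fusion logic is internal, but the idea behind 3-signal hybrid ranking is easy to sketch: put an FTS5 keyword score, a vector cosine similarity, and a recency decay on the same scale, then take a weighted sum. The weights, half-life, and `fuse` function below are illustrative assumptions, not Engrava's API:

```python
import time
from typing import Optional

# Illustrative 3-signal fusion: the weights, half-life, and normalization
# here are assumptions for this sketch, not Engrava's internals.

def fuse(fts_score: float, cosine_sim: float, created_at: float,
         now: Optional[float] = None,
         w_fts: float = 0.4, w_vec: float = 0.4, w_rec: float = 0.2,
         half_life_days: float = 7.0) -> float:
    """Blend keyword match, vector similarity, and recency into one rank score."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - created_at) / 86400)
    recency = 0.5 ** (age_days / half_life_days)  # exponential decay
    return w_fts * fts_score + w_vec * cosine_sim + w_rec * recency
```

Under these assumed weights, a fresh thought with perfect keyword and vector matches scores 1.0, and a week-old thought contributes only half its recency weight.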
What you get
Nine primitives. One pip install.
No services, no credits, no egress.
- **Graph Memory:** 7 edge types. No Neo4j. Just SQLite.
- **Embedded:** One pip install. Runs inside your Python process. No server.
- **Hybrid Search:** FTS5 + vector + recency. 3-signal fusion.
- **Lifecycle Management:** CREATED → ACTIVE → DONE → ARCHIVED.
- **Dreaming:** Algorithmic memory consolidation, built in.
- **Audit Trail:** SHA-256 hash-linked journal. Tamper-evident.
- **Batteries Included:** Graph + search + dreaming + audit in one package, zero services.
- **MindQL:** FIND, EVOLVE, SURFACE, FORGET. No Cypher.
- **5 Embedding Providers:** Local, OpenAI, Ollama, HuggingFace, or a custom callback. Auto-embed.
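The lifecycle stages above (CREATED → ACTIVE → DONE → ARCHIVED) behave like a small forward-only state machine. The sketch below illustrates the idea; the state names come from the feature list, while the transition map and `advance` helper are assumptions for illustration:

```python
from enum import Enum

class State(Enum):
    CREATED = "created"
    ACTIVE = "active"
    DONE = "done"
    ARCHIVED = "archived"

# Assumed forward-only transition map; Engrava's actual rules may differ.
NEXT = {
    State.CREATED: State.ACTIVE,
    State.ACTIVE: State.DONE,
    State.DONE: State.ARCHIVED,
}

def advance(state: State) -> State:
    """Move a thought one step along its lifecycle; ARCHIVED is terminal."""
    if state is State.ARCHIVED:
        raise ValueError("ARCHIVED is terminal")
    return NEXT[state]
```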
Why engrava
Memory that dreams is the headline, but it's one layer of a stack. Engrava ships graph memory, hybrid search, dreaming consolidation, and a tamper-evident audit trail in one pip install. No services to run, no credits to budget, no data leaving your Python process.
Built from two years of cognitive-architecture research. MIT-licensed.
How engrava compares
Graph-first. Self-host. Zero egress. Everything else in one pip install.
| Feature | engrava | Mem0 | Zep / Graphiti | ChromaDB |
|---|---|---|---|---|
| Graph memory | ✓ free | Pro tier only | built-in (Graphiti) | ✗ |
| Dreaming | ✓ | ✗ | ✗ | ✗ |
| Hybrid search | ✓ | ✓ | ✓ | ✗ |
| Audit trail | ✓ | ✗ | ✗ | ✗ |
| All-in-one stack | ✓ | ✗ (memory only) | ✗ (memory only) | ✗ (vector only) |
| Zero external infra | ✓ | ✗ (managed SaaS) | ✗ (managed SaaS) | ✓ |
| Self-host | ✓ | ✗ | Graphiti OSS only | ✓ |
| Lifecycle mgmt | ✓ | ✗ | ✗ | ✗ |
| MindQL | ✓ | ✗ | Cypher | ✗ |
| License | MIT | Apache | Apache | Apache |
| Pricing | $0 | Hobby free; Starter $19/mo; Pro $249/mo | Free 1K credits; Flex from $25/mo (20K credits/mo) | $0 |
Pricing verified 2026-04-20 via vendor sites. Subject to change.
Why engrava over managed alternatives?
- **One package, not a stack.** Mem0 and Zep ship memory only; ChromaDB ships vectors only. Engrava ships graph + hybrid search + dreaming + audit in one pip install: no orchestration, no multi-service wiring.
- **Graph memory without the paywall.** Mem0 gates graph memory behind its Pro tier ($249/mo). Engrava: graph-first core, free forever.
- **Embedded, not a service.** Zep is a managed service (credit-based, from $25/mo). Graphiti is its OSS engine, but it runs as a separate service, not embedded. Engrava: self-hosted, embedded in your Python process.
- **Zero egress.** All three require network egress on every memory op. Engrava runs in your process; no data leaves.
How dreaming works
Every thought in Engrava has a score, computed from four signals: recency, frequency, confidence, and emotional charge. The score then passes through three gates (promote_threshold, fade_threshold, archive_threshold) that decide which thoughts get promoted, which fade, and which are archived. Runs without LLMs. Deterministic. Configurable in YAML.
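Engrava's actual scoring formula and weights are internal, but the shape of the mechanism (four signals reduced to one score, then compared against threshold gates) can be sketched in a few lines. Everything here is an illustrative assumption: the equal weighting, the `Signals` class, and the `gate` helper are not Engrava's API:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    recency: float           # 0..1, decays with time since last access
    frequency: float         # 0..1, normalized access count
    confidence: float        # 0..1, set at creation
    emotional_charge: float  # 0..1, salience weighting

def consolidation_score(s: Signals) -> float:
    # Assumed equal weighting; the real weights are configurable internals.
    return (s.recency + s.frequency + s.confidence + s.emotional_charge) / 4

def gate(score: float,
         promote_threshold: float = 0.75,
         fade_threshold: float = 0.2,
         archive_threshold: float = 0.1) -> str:
    """Map a score to a dreaming outcome via three threshold gates."""
    if score >= promote_threshold:
        return "promote"
    if score < archive_threshold:
        return "archive"
    if score < fade_threshold:
        return "fade"
    return "keep"
```

Because the whole pipeline is arithmetic plus comparisons, the same inputs always yield the same outcome, which is what makes a deterministic, LLM-free consolidation pass possible.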
Install & configure
One pip install. Optional extras for embedding backends. One YAML file for the rest.
Install
```shell
# Basic
pip install engrava

# With local embeddings (sentence-transformer)
pip install "engrava[embeddings-local]"

# With OpenAI-compatible embeddings (OpenAI, Azure, Groq, vLLM, LiteLLM)
pip install "engrava[embeddings-openai]"

# Alt: Ollama (local LLM server) or HuggingFace Inference API
pip install "engrava[embeddings-ollama]"
pip install "engrava[embeddings-hf]"
```
Configure — engrava.yaml
```yaml
# engrava.yaml
store:
  path: "./agent.db"

embeddings:
  provider: "sentence-transformer"  # or "openai", "ollama", "huggingface", "callback"
  model: "all-MiniLM-L12-v2"
  auto_embed: true

dreaming:
  enabled: true
  signals: [recency, frequency, confidence, emotional_charge]
  promote_threshold: 0.75
  fade_threshold: 0.2

journal:
  enabled: true  # tamper-evident audit log (SHA-256 hash chain)
```
Full configuration reference in the docs.
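The journal's tamper evidence comes from SHA-256 hash linking: each entry's hash covers its payload plus the previous entry's hash, so altering any record invalidates every hash after it. The record format below is a minimal sketch of that idea, not Engrava's actual journal schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def chain_append(journal: list, payload: dict) -> dict:
    """Append a hash-linked entry; any later edit breaks the chain."""
    prev_hash = journal[-1]["hash"] if journal else GENESIS
    body = json.dumps(payload, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    entry = {"payload": payload, "prev": prev_hash, "hash": entry_hash}
    journal.append(entry)
    return entry

def verify(journal: list) -> bool:
    """Recompute every hash from the genesis value; False on any tampering."""
    prev = GENESIS
    for e in journal:
        body = json.dumps(e["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

Editing any payload after the fact changes its recomputed hash, which breaks the link to every subsequent entry, so `verify` fails for the whole chain.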
Built from research.
Engrava was extracted from research at Sovantica on cognitive architectures for AI agents — what kinds of memory, attention, and consolidation a long-running agent actually needs to operate beyond a single session. After two years of development — 2,696 tests, 269 source files — the persistence layer proved useful enough to ship standalone.
The dreaming algorithm isn't a metaphor. It's grounded in memory consolidation research. The audit trail isn't a feature. It's how you debug an agent that thinks.
Built on foundations from sleep consolidation, hippocampal pattern separation, and predictive coding. Read the full story →