ClawVault v1.11: Observational Memory — Your Agent Watches, Learns, and Remembers
ClawVault v1.11 introduces observational memory — automatic conversation compression into prioritized observations that route to vault categories, auto-link for Obsidian graph view, and survive context death. Now fully local-first with zero cloud dependency.
Your AI agent has conversations all day. Decisions get made. People get mentioned. Errors happen. Deadlines come up. And then the session ends, and it all vanishes.
Not anymore.
ClawVault v1.11 introduces observational memory — a system that watches your agent's conversations and automatically extracts what matters, compresses it, categorizes it, and stores it for next time.
The Problem
AI agents suffer from amnesia. Every session starts blank. The solutions so far have been:
- Manual memory — "Remember to write things down." Works when you remember to remember. Which is... sometimes.
- Dump everything — Shove the whole conversation into a file. Now you have a 50KB wall of text that's useless for context injection.
- Vector search — Great for retrieval, terrible at knowing what to store in the first place.
The real gap: nobody is watching the conversation as it happens. Nobody is deciding "this is a decision, this matters, this is noise."
Until now.
How It Works
1. Observe — Compress Conversations
clawvault observe --compress session-transcript.md
Your 200-line conversation becomes 12 prioritized observations:
## 2026-02-12
🔴 14:30 [[PostgreSQL]] chosen for the database over [[MongoDB]].
🔴 14:35 Shipping deadline is March 1 ([[Justin]] from [[Hale]]).
🟡 14:40 [[Maria]] requested fix for the login page.
🟡 14:45 Discussed API design with [[Sarah]].
🟢 14:50 Server deployed successfully.
🟢 14:55 Built new dashboard feature.
Emoji priorities:
- 🔴 Critical — decisions, errors, blockers, deadlines
- 🟡 Notable — preferences, people interactions, architecture discussions
- 🟢 Info — routine updates, general progress
The compression uses LLM intelligence (Gemini, Anthropic, or OpenAI) with a rule-based fallback. A post-processing step enforces priority rules because LLMs tend to under-classify — "Decision: use X" gets rephrased to "X chosen for Y", and we catch those patterns too.
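That enforcement pass can be sketched as a simple upgrade-only check. This is an illustrative reconstruction — the pattern list and function names are assumptions, not ClawVault's actual rules:

```typescript
// Patterns that should always be 🔴 regardless of how the LLM classified
// them, including decisions rephrased as "X chosen for Y".
// (Illustrative list — not ClawVault's real rule set.)
const CRITICAL_PATTERNS: RegExp[] = [
  /\bchosen (for|over)\b/i, // rephrased decisions
  /\bdecision:/i,
  /\bdeadline\b/i,
  /\b(error|blocked|blocker|failed)\b/i,
];

function enforcePriority(text: string, llmPriority: string): string {
  // Upgrade to 🔴 if any critical pattern matches; never downgrade.
  if (llmPriority !== "🔴" && CRITICAL_PATTERNS.some((p) => p.test(text))) {
    return "🔴";
  }
  return llmPriority;
}
```

The key design choice is that rules only ever raise priority: the LLM's judgment stands unless a known-critical pattern says otherwise.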
2. Route — Auto-Categorize to Vault
Critical and notable observations are automatically routed to the right vault directories:
decisions/2026-02-12.md ← "PostgreSQL chosen for the database"
people/2026-02-12.md ← "Justin mentioned shipping deadline"
lessons/2026-02-12.md ← "Context injection needs priority ordering"
commitments/2026-02-12.md ← "Deadline is March 1"
No manual filing. No "which folder does this go in?" The router uses pattern matching to categorize observations automatically.
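A minimal sketch of what pattern-based routing looks like — the category names come from the post, but the matching rules and entity list below are assumptions for illustration:

```typescript
type Category = "decisions" | "people" | "lessons" | "commitments" | "log";

// Illustrative known-entity list; in practice this would come from the vault.
const KNOWN_PEOPLE = ["Justin", "Maria", "Sarah"];

function routeObservation(text: string): Category {
  // Order matters: a decision mentioning a person is still a decision.
  if (/\b(chosen|decided|decision)\b/i.test(text)) return "decisions";
  if (KNOWN_PEOPLE.some((p) => text.includes(p))) return "people";
  if (/\b(lesson|learned|needs|turns out)\b/i.test(text)) return "lessons";
  if (/\bdeadline\b/i.test(text)) return "commitments";
  return "log"; // everything else stays in the daily log
}
```

Run against the examples above, "PostgreSQL chosen for the database" lands in decisions and "Deadline is March 1" in commitments — cheap, deterministic, no LLM call needed at routing time.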
3. Link — Wiki-Links for Graph View
Every routed observation gets automatic [[wiki-links]] for proper nouns and known entities:
- 🔴 [[PostgreSQL]] chosen for the database.
- 🔴 [[Justin]] ([[Hale]]) mentioned shipping deadline.
- 🟡 Talked to [[Sarah]] about the API design.
- 🔴 Decision: [[React]] chosen over [[Vue]].
Open your vault in Obsidian and these connections light up in the graph view. People connect to decisions. Projects connect to deadlines. The knowledge graph builds itself.
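The linking step itself is a straightforward rewrite over a known-entity list. A minimal sketch (the entity list and regex details are assumptions, not ClawVault's implementation):

```typescript
// Illustrative entity list; in practice this would be built from the vault.
const ENTITIES = ["PostgreSQL", "MongoDB", "Justin", "Hale", "Sarah", "React", "Vue"];

function wikiLink(text: string): string {
  let out = text;
  for (const name of ENTITIES) {
    // Match the bare name, skipping occurrences already inside [[...]].
    const re = new RegExp(`(?<!\\[)\\b${name}\\b(?!\\])`, "g");
    out = out.replace(re, `[[${name}]]`);
  }
  return out;
}
```

The already-linked guard matters: re-running the linker over an observation must be idempotent, or repeated sleep/wake cycles would nest brackets.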
4. Sleep/Wake — Automatic Lifecycle
The real power is automation. You don't run observe manually — it's wired into the sleep/wake lifecycle:
# Session ends — observations auto-generated
clawvault sleep "finished API endpoints" --session-transcript conversation.md
# New session starts — observations injected into context
clawvault wake
# → Recent 🔴/🟡 observations included automatically
When your agent wakes up, it already knows what decisions were made, who said what, and what broke — without loading the entire previous conversation.
5. Budget — Token-Aware Context
Context windows aren't infinite. The new --budget flag lets you control how much memory gets injected:
clawvault context "what decisions were made" --budget 2000
Items are injected by priority: 🔴 first, then 🟡, then 🟢, until the token budget is exhausted. Critical decisions always make the cut. Routine updates only appear if there's room.
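The assembly logic described above can be sketched as a sort-then-fill loop. The 4-characters-per-token estimate is a common rough heuristic, not ClawVault's actual tokenizer:

```typescript
interface Observation {
  priority: "🔴" | "🟡" | "🟢";
  text: string;
}

const RANK = { "🔴": 0, "🟡": 1, "🟢": 2 };

// Rough heuristic: ~4 characters per token (assumption, not the real tokenizer).
const estimateTokens = (s: string) => Math.ceil(s.length / 4);

function assembleContext(obs: Observation[], budget: number): string[] {
  const out: string[] = [];
  let used = 0;
  // 🔴 first, then 🟡, then 🟢.
  for (const o of [...obs].sort((a, b) => RANK[a.priority] - RANK[b.priority])) {
    const cost = estimateTokens(o.text);
    if (used + cost > budget) continue; // skip items that don't fit
    used += cost;
    out.push(`${o.priority} ${o.text}`);
  }
  return out;
}
```

Because critical items are considered first, a tight budget drops 🟢 updates long before it ever touches a 🔴 decision.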
Stress Tested at Scale
We didn't just build it — we broke it: 100 generated conversations processed by 5 parallel agents, testing:
- Compression quality — 4.9:1 compression ratio
- Concurrent writes — Zero corruption under parallel observe calls
- Context death — Memory survives multiple sleep/wake cycles
- Edge cases — Empty files, 1000-line files, unicode, pure-decision and pure-error files
- Classification accuracy — Decisions correctly tagged 🔴, routine updates correctly 🟢
76 automated tests. Zero crashes across 100 sessions. Zero data loss.
Watch Mode
For real-time observation, point the watcher at your session directory:
# Watch for new session files and auto-compress
clawvault observe --watch ./sessions/
# Or run as a background daemon
clawvault observe --daemon
New session files get picked up, compressed, and routed automatically.
The Architecture
Session ends → sleep → observe transcript → LLM compress
→ emoji priorities → enforce rules → route to categories
→ wiki-link entities → store as markdown
Session starts → wake → read observations
→ inject 🔴 first → budget-aware assembly → agent has context
ClawVault is the engine. OpenClaw (or any agent framework) is the trigger. The hook system lets you wire observe into any lifecycle event.
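Wiring that up from a host framework amounts to shelling out to the CLI on the right event. A hedged sketch — the hook signature here is an assumption about the host framework, only the `clawvault observe --compress` invocation comes from the post:

```typescript
import { execFile } from "node:child_process";

// Build the CLI arguments for a given transcript path.
function observeArgs(transcriptPath: string): string[] {
  return ["observe", "--compress", transcriptPath];
}

// Hypothetical session-end hook: compress the transcript into observations.
function onSessionEnd(transcriptPath: string): void {
  execFile("clawvault", observeArgs(transcriptPath), (err, stdout) => {
    if (err) {
      console.error("observe failed:", err.message);
      return;
    }
    console.log(stdout);
  });
}
```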
Install
npm install -g clawvault
Set your LLM key for compression (any one works):
export GEMINI_API_KEY=... # Recommended (fast, free tier)
export ANTHROPIC_API_KEY=... # Also supported
export OPENAI_API_KEY=... # Also supported
No API key? The rule-based fallback still works — just without LLM-powered compression.
What Shipped Since
Since observational memory launched, v1.11 has added:
- OpenClaw hook integration — The hook auto-calls observe --compress on session end via the command:new event. Fully automatic.
- Cloud sync removed (v1.11.0) — ClawVault is now fully local-first. Zero network calls except optional LLM compression.
- Intelligence fixes (v1.11.1) — Compressor priority enforcement, temporal decay for 🟢 items, richer wake summaries.
- Entity-slug routing (v1.11.2) — People and project observations route to entity subfolders (people/pedro/, projects/clawvault/).
- Security transparency (v1.11.4) — Full capability table, explicit disclosure of every network call.
"An elephant never forgets." — Now neither does your agent. 🐘