How It Works
BaseLayer turns your AI conversations into a searchable knowledge graph through three stages: Capture, Dream, and Recall.
Capture
BaseLayer watches your AI conversations as they happen.
Chrome extension for web AI: Chat with Claude, ChatGPT, Gemini, OpenRouter, or Open WebUI and the extension captures each conversation automatically.
IDE watchers for coding tools: The desktop app monitors Cursor, Claude Code, Windsurf, GitHub Copilot, and Aider. It picks up conversations automatically. No plugins needed.
The desktop app is available on macOS (10.15+). Captured conversations are sent securely to your memory in the cloud.
Dream
This is where raw conversations become structured knowledge.
After a conversation is captured, BaseLayer’s dream engine runs in the background. It reads through what was discussed and extracts:
- People: colleagues, contacts, anyone mentioned by name
- Projects: codebases, products, initiatives you’re working on
- Organizations: companies, teams, groups
- Technologies: languages, frameworks, services, infrastructure
- Concepts: ideas, patterns, decisions, strategies
- Relationships: how all of the above connect to each other
BaseLayer supports 14 entity types in total: person, place, project, plan, organization, concept, event, media, technology, goal, document, belief, habit, and product.
For example, if you told Claude: “I’m working with Sarah on the payments migration to Stripe”, BaseLayer would extract:
- A person named Sarah
- A project called payments migration
- A technology reference to Stripe
- Relationships linking them together
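The extracted structure can be sketched as plain data. The field names below ("entities", "relationships", "kind") are illustrative assumptions, not BaseLayer's actual schema:

```python
# Hypothetical shape of the dream engine's output for the example above.
# Field and relationship names are illustrative, not BaseLayer's real schema.
extraction = {
    "entities": [
        {"name": "Sarah", "type": "person"},
        {"name": "payments migration", "type": "project"},
        {"name": "Stripe", "type": "technology"},
    ],
    "relationships": [
        {"from": "Sarah", "to": "payments migration", "kind": "works_on"},
        {"from": "payments migration", "to": "Stripe", "kind": "targets"},
    ],
}

# Every extracted entity falls within the 14 supported types.
SUPPORTED_TYPES = {
    "person", "place", "project", "plan", "organization", "concept",
    "event", "media", "technology", "goal", "document", "belief",
    "habit", "product",
}
assert all(e["type"] in SUPPORTED_TYPES for e in extraction["entities"])
```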
The dream engine is event-driven and runs in the cloud. Processing speed depends on your tier: Pro users get near-realtime extraction (roughly 2 minutes), while Free tier conversations are processed every 2 hours.
“Dreaming” is our term for the background knowledge extraction process. It happens automatically. You don’t need to do anything.
Recall
Any MCP-connected AI can search your knowledge graph.
When you connect BaseLayer to Claude, ChatGPT, or another AI tool via MCP (Model Context Protocol), that AI gains access to 11 tools:
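Connecting usually means adding a server entry to your MCP client's configuration (for Claude Desktop, that's `claude_desktop_config.json`). The command and package name below are placeholders; check BaseLayer's setup instructions for the real values:

```json
{
  "mcpServers": {
    "baselayer": {
      "command": "npx",
      "args": ["-y", "baselayer-mcp"]
    }
  }
}
```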
Querying
| Tool | What it does |
|---|---|
| ask_question | Get a synthesized prose answer from your memory, with citations |
Searching
| Tool | What it does |
|---|---|
| memory_search | Semantic search across entities and facts |
| get_entity | Fetch a full dossier on a specific entity |
| get_entity_relations | Explore connections and facts between entities |
| get_entity_provenance | Trace where a piece of knowledge came from |
| list_entities | Browse all entities of a given type |
| recent_conversations | Load recent conversation context for session orientation |
| retrieve_evidence | Search raw conversation history for exact quotes |
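Under MCP, each of these tools is invoked with a JSON-RPC `tools/call` request. A sketch of what a memory_search call might carry (the `query` and `limit` argument names are assumptions, not BaseLayer's documented parameters):

```python
# JSON-RPC 2.0 "tools/call" request per the Model Context Protocol.
# The tool name comes from the table above; the argument names
# ("query", "limit") are illustrative assumptions.
search_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",
        "arguments": {"query": "payments migration decisions", "limit": 5},
    },
}
```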
Writing
| Tool | What it does |
|---|---|
| record_memory | Persist a new observation or decision to your memory |
| vault_fact | Assert a specific fact as a graph edge between two entities |
| update_plan | Create or update a persistent plan that tracks multi-step efforts |
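A write looks the same on the wire. Here is a hypothetical vault_fact call asserting a graph edge between two entities; the subject/predicate/object argument names are assumptions, while the envelope follows the MCP `tools/call` shape:

```python
# Hypothetical "vault_fact" invocation asserting an edge between two
# entities. Argument names are illustrative; the JSON-RPC envelope
# follows the Model Context Protocol's tools/call shape.
fact_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "vault_fact",
        "arguments": {
            "subject": "Sarah",
            "predicate": "works_on",
            "object": "payments migration",
        },
    },
}
```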
The recommended flow: start with ask_question for synthesized answers, use memory_search when you need to browse or explore, and call get_entity for deep dives into specific people, projects, or concepts.
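That flow can be sketched as an agent loop. `call_tool` below stands in for an MCP client's tool-call method, and the result fields (`citations`, `entity`) are illustrative assumptions:

```python
def answer_with_memory(question, call_tool):
    """Sketch of the recommended recall flow: ask_question first,
    memory_search to browse, get_entity for a deep dive.
    `call_tool(name, arguments)` stands in for an MCP client's
    tool-call method; result field names are illustrative."""
    # 1. Start with a synthesized, cited answer.
    answer = call_tool("ask_question", {"question": question})
    if answer.get("citations"):
        return answer

    # 2. No citations? Fall back to browsing with semantic search.
    hits = call_tool("memory_search", {"query": question, "limit": 5})

    # 3. Deep-dive the top hit for a full dossier.
    if hits:
        return call_tool("get_entity", {"name": hits[0]["entity"]})
    return answer
```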
The Result
Your knowledge compounds across every AI tool you use. A conversation in ChatGPT becomes context available in Claude. A decision made in Cursor is remembered in Gemini. A preference stated once is never forgotten.
One memory. Every AI. All your knowledge.