Tier 1: Retrieval backends
Use your existing databases. pgvector, Pinecone, Qdrant, Neo4j, Memgraph — the built-in pipeline handles embedding, extraction, fusion, and synthesis on top.
Install the extras for your stack, instantiate a configured Astrocyte, then either call memory from your own code or wire the same instance into an agent framework adapter — LangGraph, CrewAI, MCP, the OpenAI/Claude agent SDKs, and others. Optionally add MIP (Memory Intent Protocol) to declare routing rules — Astrocyte decides which bank, tags, and compliance policies to apply, with zero routing code in your app.
In a uv-managed project, use `uv add astrocyte astrocyte-pgvector` to record dependencies in `pyproject.toml`.
MIP (Memory Intent Protocol) lets you declare routing rules in `mip.yaml` and reference them from `astrocyte.yaml` with `mip_config_path: ./mip.yaml`. Astrocyte routes automatically — no routing code in your app.
```yaml
# mip.yaml
version: "1.0"

banks:
  - id: "student-{student_id}"
    description: Per-student academic memory
    compliance: pdpa

rules:
  - name: pii-lockdown
    priority: 1
    override: true  # compliance — intent layer cannot override
    match:
      pii_detected: true
    action:
      bank: private-encrypted
      tags: [pii, compliance]
      retain_policy: redact_before_store

  - name: student-answer
    priority: 10
    match:
      all:
        - content_type: student_answer
        - metadata.student_id: present
    action:
      bank: "student-{metadata.student_id}"
      tags: ["{metadata.topic}"]
```

```yaml
# astrocyte.yaml
profile: personal

provider_tier: storage
vector_store: pgvector
vector_store_config:
  connection_url: postgresql://localhost:5432/memories

llm_provider: anthropic
llm_provider_config:
  api_key: ${ANTHROPIC_API_KEY}
  model: claude-sonnet-4-20250514

# PII detection — country-specific + LLM fallback
compliance_profile: pdpa
barriers:
  pii:
    mode: rules_then_llm
    countries: [SG, IN]

# Point to MIP routing policy
mip_config_path: ./mip.yaml

lifecycle:
  enabled: true
  ttl:
    archive_after_days: 90
    delete_after_days: 365
```

```python
from astrocyte import Astrocyte

brain = Astrocyte.from_config("astrocyte.yaml")

# MIP routes to bank "student-stu-42" with tag "algebra"
await brain.retain(
    "The quadratic formula solves second degree equations",
    bank_id="default",  # MIP overrides this
    metadata={"student_id": "stu-42", "topic": "algebra"},
    content_type="student_answer",
)

hits = await brain.recall("quadratic formula", bank_id="student-stu-42")
```

No `mip.yaml` needed — without `mip_config_path`, content goes straight to the bank you specify.
```yaml
# astrocyte.yaml
profile: personal

# Tier 1: bring your own database
provider_tier: storage
vector_store: pgvector
vector_store_config:
  connection_url: postgresql://localhost:5432/memories

# LLM for embedding + reflect
llm_provider: anthropic
llm_provider_config:
  api_key: ${ANTHROPIC_API_KEY}
  model: claude-sonnet-4-20250514
```

```python
from astrocyte import Astrocyte

brain = Astrocyte.from_config("astrocyte.yaml")

await brain.retain("Calvin prefers dark mode", bank_id="user-123")
hits = await brain.recall("What are Calvin's preferences?", bank_id="user-123")
```

Add `langgraph` (or your chain stack) when you compile a graph. The same brain backs every framework adapter — see agent framework middleware.
```python
from astrocyte import Astrocyte
from astrocyte.integrations.langgraph import AstrocyteMemory

brain = Astrocyte.from_config("astrocyte.yaml")
memory = AstrocyteMemory(brain, bank_id="user-123")

await memory.save_context(
    inputs={"question": "Theme preference?"},
    outputs={"answer": "Calvin prefers dark mode."},
    thread_id="thread-abc",
)

prompt_vars = await memory.load_memory_variables({})
# → inject prompt_vars["memory"] into the next model message
```

Retain, recall, and reflect — with pipelines and policies that stay valid when you swap storage or engines.
Store memories with automatic chunking, entity extraction, and embedding. PII scanning and content validation happen before anything reaches your backend.
Multi-strategy retrieval — semantic, graph, keyword, temporal — fused with reciprocal rank fusion and reranked. Token budgets enforced automatically.
Synthesize answers from memory. Native engine reflect or automatic fallback via LLM. Disposition-aware personality for support, coding, or research agents.
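The recall step above fuses several ranked lists with reciprocal rank fusion (RRF). As a rough sketch of how that fusion works (illustrative only, not Astrocyte's internals), each strategy contributes `1 / (k + rank)` per document, so items ranked well by multiple strategies rise to the top:

```python
# Reciprocal rank fusion sketch: illustrative, not Astrocyte's internals.
# Each strategy returns a ranked list of memory IDs; RRF sums 1 / (k + rank)
# across lists, rewarding items that several strategies rank highly.
def rrf_fuse(ranked_lists, k=60):
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["m3", "m1", "m2"]   # semantic strategy's ranking
keyword  = ["m1", "m3", "m4"]   # keyword strategy's ranking
fused = rrf_fuse([semantic, keyword])  # m3 and m1 lead the fused ranking
```

The constant `k` (commonly 60) damps the influence of top ranks so no single strategy dominates.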
Your agent stack runs the loop; Astrocyte is the governed memory boundary before context and completions. What you paste into the next turn is still your context-engineering choice — Astrocyte makes the evidence consistent and auditable. Read the architecture note →
Orchestration, tools, MCP — when to call memory and how the loop runs.
Retain, recall, reflect — hybrid retrieval, policies, token budgets on the memory path.
Prompt layout and snippets — your app turns hits into system/user messages for the model.
Completions and tool calls — any provider or gateway (OpenAI, Anthropic, LiteLLM, OpenRouter, and similar).
One memory integration for every framework. Each links to a dedicated guide with install, usage, and API reference.
LangGraph
CrewAI
AutoGen / AG2
DSPy
LlamaIndex
Haystack
CAMEL-AI
LiveKit Agents
MCP Server

Bring your own database or plug in a full memory engine — Astrocyte keeps governance and observability consistent.
Tier 1: Retrieval backends
Use your existing databases. pgvector, Pinecone, Qdrant, Neo4j, Memgraph — the built-in pipeline handles embedding, extraction, fusion, and synthesis on top.
Tier 2: Memory engines
Full-stack engines that own the pipeline. Mystique, Mem0, Zep, Letta, Cognee — Astrocyte adds governance, observability, and portability.
Neuroscience-inspired policies that protect every operation — regardless of backend.
Regex, NER, or LLM-based scanning with country-specific patterns (SG, IN, UK, EU, AU, CA, JP, CN). Per-type actions, DLP output scanning, and compliance profiles (GDPR, HIPAA, PDPA).
Token bucket rate limiting, daily quotas, and per-operation token budgets. Prevent runaway agents.
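A token bucket caps bursts at a fixed capacity while refilling at a steady rate. A minimal sketch of the mechanism (illustrative; Astrocyte's limiter is configured, not hand-rolled):

```python
import time

# Token-bucket sketch: capacity caps bursts, refill_rate sets sustained
# throughput. Illustrative only, not Astrocyte's rate limiter.
class TokenBucket:
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate   # tokens per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # Refill proportionally to elapsed time, never above capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=5, refill_rate=1.0)
burst = [bucket.allow() for _ in range(6)]  # five pass, the sixth is throttled
```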
Automatic degraded mode when backends go down. Empty recall, error, or cache — with controlled recovery.
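Degraded mode is circuit-breaker behavior: after repeated backend failures, calls short-circuit to a fallback (such as an empty recall) until a cooldown elapses. A minimal sketch, not Astrocyte's implementation:

```python
import time

# Circuit-breaker sketch of "degraded mode"; illustrative only. After
# `threshold` consecutive failures the breaker opens and calls degrade to
# the fallback; after `cooldown` seconds one trial call is let through.
class Breaker:
    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback=list):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                return fallback()      # degraded mode: e.g. empty recall
            self.opened_at = None      # cooldown elapsed: controlled recovery
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
                self.failures = 0
            return fallback()
        self.failures = 0
        return result

breaker = Breaker(threshold=2, cooldown=60.0)
def flaky():
    raise ConnectionError("backend down")

breaker.call(flaky)         # failure 1: fallback
breaker.call(flaky)         # failure 2: breaker opens
hits = breaker.call(flaky)  # open: backend skipped, empty result returned
```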
OpenTelemetry spans and Prometheus metrics on every operation. Switch providers without rebuilding dashboards.
Per-bank read/write/forget/admin permissions. Principals, grants, and audit trails built in.
Classification levels, compliance profiles (GDPR, HIPAA, PDPA), data residency, encryption, and DLP.
One framework, every agent stack.
MCP Server
Any MCP-capable agent (Claude Code, Cursor, Windsurf) gets memory via `astrocyte-mcp`. Zero-code integration.
Memory Intent Protocol (MIP)
Declarative memory routing — mechanical rules resolve deterministically, LLM judgment only when needed. One mip.yaml governs bank routing, compliance enforcement, and escalation across all agents.
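"Mechanical rules resolve deterministically" means rules are evaluated in a fixed priority order and the first match wins. A simplified sketch of that resolution, with rule shapes reduced from the `mip.yaml` example (illustrative only, not MIP's actual evaluator):

```python
# Deterministic rule resolution sketch: rules sorted by priority (lower
# number wins), first match returns its action. Illustrative only.
def resolve(rules, event):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if all(event.get(key) == want for key, want in rule["match"].items()):
            return rule["action"]
    return None  # no rule matched: fall through to the caller's bank_id

rules = [
    {"priority": 1, "match": {"pii_detected": True},
     "action": {"bank": "private-encrypted"}},
    {"priority": 10, "match": {"content_type": "student_answer"},
     "action": {"bank": "student-{student_id}"}},
]

# The PII rule wins even when the student-answer rule also matches,
# because priority 1 outranks priority 10.
action = resolve(rules, {"pii_detected": True, "content_type": "student_answer"})
```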
Agent Frameworks
LangGraph, CrewAI, Pydantic AI, OpenAI Agents SDK, AG2 — thin middleware wires Astrocyte into each framework’s memory abstraction.
LLM Providers
Multi-provider gateways (e.g. LiteLLM, Portkey, OpenRouter, Vercel AI Gateway) or direct SDKs — OpenAI, Anthropic, Bedrock, Azure, Vertex, Ollama — or bring your own via the LLM SPI.
Memory Portability
Export and import memories between providers using Astrocyte Memory Archive (AMA), a portable newline-delimited JSON (JSONL) format. No vendor lock-in.
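The only structural property the format guarantees here is one JSON object per line. A round-trip sketch of that shape, with hypothetical field names (not the actual AMA schema):

```python
import io
import json

# JSONL round-trip sketch. The field names below are hypothetical and do
# not reflect the actual AMA schema; one JSON object per line is the only
# property assumed from the text.
memories = [
    {"id": "m-1", "bank_id": "user-123", "content": "Calvin prefers dark mode"},
    {"id": "m-2", "bank_id": "user-123", "content": "Theme set per device"},
]

buf = io.StringIO()
for m in memories:                      # export: one memory per line
    buf.write(json.dumps(m) + "\n")

buf.seek(0)
restored = [json.loads(line) for line in buf]  # import on another provider
```

Because each line is independent, archives stream and diff cleanly, and a partial import never leaves a half-parsed record behind.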