
Innovations roadmap

This document describes Astrocyte’s innovation roadmap — capabilities inspired by ByteRover (agent-native curation, zero-infra, progressive retrieval) and Hindsight (biomimetic memory, multi-strategy retrieval, mental models). Each innovation is independently implementable, backward-compatible, and feature-gated.

For the core architecture see architecture-framework.md. For the built-in pipeline see built-in-pipeline.md.


Astrocyte (open source) owns what, when, and the policy — what to store, when to retrieve, how to govern, how to orchestrate.

Mystique (proprietary) owns how well — better retrieval algorithms, deeper synthesis, smarter consolidation. Same operations, better results.

Users on the free tier get every capability. Users on Mystique get the same capabilities, executed better. No capability is withheld from the open-source framework — Mystique’s advantage is in execution quality, not feature gating.

| Capability | Astrocyte (free) | Mystique (premium) |
| --- | --- | --- |
| Recall cache | Framework-level LRU cache | Same (provider-agnostic) |
| Memory hierarchy | Layer-weighted RRF fusion | Same + mental model formation |
| Utility scoring | Recency/frequency/relevance composite | Same + quality-based consolidation |
| Tiered retrieval | 5-tier progressive escalation | Same (provider-agnostic) |
| LLM-curated retain | ADD/UPDATE/MERGE/SKIP/DELETE curation | Same + deeper entity resolution |
| Curated recall | Freshness/reliability/salience re-scoring | Same (post-retrieval, provider-agnostic) |
| Progressive retrieval | `detail_level: "titles"` for token savings | Same (protocol-level) |
| Cross-source fusion | `external_context` for RAG/graph blending | Same + Hindsight-specific optimization |
| Cross-engine routing | Adaptive per-query weights | N/A (framework-level) |
| Reflect | Single-pass LLM synthesis | Agentic multi-turn with tool use + dispositions |
| Consolidation | Basic dedup + archive | Quality-based loss functions + observation formation |
| Entity resolution | Basic NER + exact dedup | Canonical resolution + co-occurrence + spreading activation |
| Retrieval fusion | Standard RRF | Tuned RRF + cross-encoder reranking |
| Scale | Single process | Multi-tenant, distributed |

Inspired by: ByteRover’s Tier 0/1 cache — most queries resolve from cache without hitting storage.

Module: astrocyte/pipeline/recall_cache.py

LRU cache keyed by query embedding cosine similarity. On retain, the affected bank’s cache is invalidated (contents changed). Configured via HomeostasisConfig.recall_cache.

```python
cache = RecallCache(similarity_threshold=0.95, max_entries=256, ttl_seconds=300)

# Before retrieval: check cache
cached = cache.get(bank_id, query_vector)
if cached:
    return cached  # Skip embedding + retrieval + fusion

# After retrieval: store result
cache.put(bank_id, query_vector, result)

# On retain: invalidate
cache.invalidate_bank(bank_id)
```

Impact: 5-10x latency reduction for repeated or similar queries; in steady-state workloads, roughly 80% of recalls can resolve from cache.
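
The lookup itself can be sketched as follows; `RecallCacheSketch`, its key scheme, and the linear similarity scan are illustrative assumptions, not the actual `RecallCache` internals:

```python
import math
import time
from collections import OrderedDict

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class RecallCacheSketch:
    """Illustrative only: LRU entries matched by embedding cosine similarity."""

    def __init__(self, similarity_threshold=0.95, max_entries=256, ttl_seconds=300):
        self.threshold = similarity_threshold
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._entries = OrderedDict()  # (bank_id, vec_key) -> (vector, result, ts)

    def get(self, bank_id, query_vector):
        now = time.monotonic()
        for key, (vec, result, ts) in list(self._entries.items()):
            if key[0] != bank_id or now - ts > self.ttl:
                continue  # wrong bank or expired entry
            if _cosine(vec, query_vector) >= self.threshold:
                self._entries.move_to_end(key)  # LRU touch on hit
                return result
        return None

    def put(self, bank_id, query_vector, result):
        self._entries[(bank_id, tuple(query_vector))] = (
            list(query_vector), result, time.monotonic(),
        )
        while len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used

    def invalidate_bank(self, bank_id):
        for key in [k for k in self._entries if k[0] == bank_id]:
            del self._entries[key]  # contents changed on retain
```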

Inspired by: Hindsight’s Facts → Mental Models + ByteRover’s Context Tree hierarchy.

Three-layer memory model with weighted recall scoring:

| Layer | What it contains | Default weight |
| --- | --- | --- |
| `fact` | Raw retained content | 1.0 |
| `observation` | Patterns noticed across facts | 1.5 |
| `model` | Consolidated understanding | 2.0 |

Type additions:

  • memory_layer: str | None on VectorItem, VectorHit, MemoryHit, ScoredItem
  • layer_weights: dict[str, float] | None on RecallRequest
  • layer_distribution: dict[str, int] | None on RecallTrace

Fusion: layer_weighted_rrf_fusion() applies multiplicative weights per layer after standard RRF, then re-sorts.
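
A minimal sketch of that fusion step, assuming rankings arrive as `(memory_id, layer)` lists (the real `layer_weighted_rrf_fusion()` signature may differ):

```python
def layer_weighted_rrf_sketch(ranked_lists, layer_weights, k=60):
    """Standard RRF across ranked lists, then multiply by each hit's layer weight.

    ranked_lists: list of rankings; each ranking is a list of (memory_id, layer).
    """
    scores = {}
    layers = {}
    for ranking in ranked_lists:
        for rank, (memory_id, layer) in enumerate(ranking, start=1):
            # Standard reciprocal-rank-fusion accumulation
            scores[memory_id] = scores.get(memory_id, 0.0) + 1.0 / (k + rank)
            layers[memory_id] = layer
    # Multiplicative layer weight applied after fusion, then re-sort
    weighted = {
        mid: s * layer_weights.get(layers[mid], 1.0) for mid, s in scores.items()
    }
    return sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)
```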

```python
hits = await brain.recall(
    "What does Calvin prefer?",
    bank_id="user-123",
    layer_weights={"fact": 1.0, "observation": 1.5, "model": 2.0},
)
# Models ranked 2x above raw facts
```

Inspired by: ByteRover’s lifecycle metadata (importance, maturity, decay) + Hindsight’s consolidation quality.

Module: astrocyte/pipeline/utility.py

Per-memory composite score:

utility = recency × 0.3 + frequency × 0.2 + relevance × 0.3 + freshness × 0.2

| Component | What it measures | Decay |
| --- | --- | --- |
| recency | Time since last recall | Exponential (half-life 7 days) |
| frequency | How often recalled (normalized to 0-1) | None (counter) |
| relevance | Average relevance score when recalled | None (running average) |
| freshness | How new the memory is | Exponential (half-life 28 days) |

Type addition: utility_score: float | None on MemoryHit.

UtilityTracker maintains per-memory stats in memory with LRU eviction (max 10K entries).
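
The composite can be sketched directly from the components above; `max_count` for frequency normalization is an assumed cap, not a documented constant:

```python
HALF_LIFE_RECENCY_DAYS = 7.0
HALF_LIFE_FRESHNESS_DAYS = 28.0

def utility_score_sketch(days_since_last_recall, recall_count, avg_relevance,
                         age_days, max_count=100):
    # Exponential decay: value halves every half-life period
    recency = 0.5 ** (days_since_last_recall / HALF_LIFE_RECENCY_DAYS)
    frequency = min(recall_count / max_count, 1.0)  # normalized to 0-1 (cap assumed)
    relevance = avg_relevance  # running average of recall-time relevance
    freshness = 0.5 ** (age_days / HALF_LIFE_FRESHNESS_DAYS)
    return recency * 0.3 + frequency * 0.2 + relevance * 0.3 + freshness * 0.2
```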


2.1 Adaptive Tiered Retrieval (implemented)


Inspired by: ByteRover’s 5-tier progressive escalation (0ms → 15s).

Module: astrocyte/pipeline/tiered_retrieval.py

Progressive recall escalation — cheaper tiers tried first, escalate only when needed:

| Tier | Strategy | Latency | Cost |
| --- | --- | --- | --- |
| 0 | Recall cache hit | ~0ms | Free |
| 1 | Fuzzy text match on recent memories | ~5ms | Free |
| 2 | BM25 keyword search only | ~50ms | Free |
| 3 | Full multi-strategy (semantic + graph + BM25 + temporal) | ~200ms | Embedding API |
| 4 | Agentic recall (LLM reformulates query + retry) | ~3-10s | LLM API |

Escalation stops once at least min_results hits (default 3) meet min_score (default 0.5). max_tier (default 3) caps escalation.

Type addition: tier_used: int | None on RecallTrace.

Inspired by: ByteRover’s core innovation — the reasoning LLM decides what and how to store.

Module: astrocyte/pipeline/curated_retain.py

Opt-in curation mode where the LLM analyzes incoming content against existing memories and decides:

| Action | When | What happens |
| --- | --- | --- |
| ADD | Genuinely new information | Store as new memory |
| UPDATE | Existing memory is outdated | Replace with new version, keep provenance |
| MERGE | Multiple memories about same topic | Consolidate into one richer memory |
| SKIP | Redundant or low-value | Don’t store (better than post-hoc dedup) |
| DELETE | New info contradicts old | Remove outdated memory |

Also assigns memory_layer (fact/observation/model) during curation.

Type additions: retention_action, curated, memory_layer on RetainResult.
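
A sketch of how the five actions might be applied to a store; `apply_curation_sketch`, the decision dict, and the store methods are hypothetical names, not the module's API:

```python
def apply_curation_sketch(decision, store):
    """Dispatch one LLM curation decision against a memory store (names assumed)."""
    action = decision["action"]
    if action == "ADD":
        # Curation also assigns the memory layer (fact/observation/model)
        store.add(decision["text"], layer=decision.get("memory_layer", "fact"))
    elif action == "UPDATE":
        store.replace(decision["target_id"], decision["text"], keep_provenance=True)
    elif action == "MERGE":
        store.merge(decision["target_ids"], decision["text"])
    elif action == "DELETE":
        store.delete(decision["target_id"])  # new info contradicts old
    elif action == "SKIP":
        pass  # redundant or low-value: never stored
    else:
        raise ValueError(f"unknown retention action: {action}")
    return action
```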

Originally planned for Mystique, moved to astrocyte — this is a framework capability.

Module: astrocyte/pipeline/curated_recall.py

Post-retrieval re-scoring of recall hits by:

  • Freshness: exponential decay on occurred_at
  • Source reliability: metadata-based scoring
  • Domain salience: similarity to bank context/mission

Re-ranks and optionally filters below quality threshold. Provider-agnostic — works with any Tier 1 or Tier 2 backend.
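
A sketch of the re-scoring, assuming hypothetical hit fields (`occurred_at`, `vector`, `metadata.source_reliability`) and assumed 0.4/0.3/0.3 component weights:

```python
import math
import time

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rescore_hits_sketch(hits, bank_context_vec, half_life_days=28.0, min_quality=None):
    """Freshness/reliability/salience re-scoring; weights and fields are assumed."""
    now = time.time()
    rescored = []
    for h in hits:
        age_days = (now - h["occurred_at"]) / 86400
        freshness = 0.5 ** (age_days / half_life_days)      # decay on occurred_at
        reliability = h.get("metadata", {}).get("source_reliability", 1.0)
        salience = _cosine(h["vector"], bank_context_vec)   # similarity to bank mission
        quality = h["score"] * (0.4 * freshness + 0.3 * reliability + 0.3 * salience)
        if min_quality is None or quality >= min_quality:   # optional quality filter
            rescored.append({**h, "score": quality})
    return sorted(rescored, key=lambda x: x["score"], reverse=True)
```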

Originally planned for Mystique, moved to astrocyte — this is a protocol-level capability.

detail_level field on RecallRequest:

| Value | Behavior | Token cost |
| --- | --- | --- |
| `"titles"` | First sentence + metadata/score only | ~10x fewer |
| `"bodies"` | Full text (current behavior) | Normal |
| `"full"` / `None` | Current behavior | Normal |

Enables two-pass pattern: agent gets title manifest first, then fetches specific memories.
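
The two-pass pattern, sketched against a stand-in `recall_sketch` (the stand-in, its data, and the title heuristic are purely illustrative):

```python
import asyncio

# Hypothetical stand-in for brain.recall, just to show the two-pass shape
MEMORIES = {
    "m1": "Rollback procedure: use the blue/green switch. Details follow...",
    "m2": "Lunch menu rotates weekly. Details follow...",
}

async def recall_sketch(query, bank_id, detail_level="full"):
    hits = [{"memory_id": mid, "text": body} for mid, body in MEMORIES.items()]
    if detail_level == "titles":
        # First sentence only — roughly 10x fewer tokens for long bodies
        return [{**h, "text": h["text"].split(". ")[0] + "."} for h in hits]
    return hits

async def two_pass(query, bank_id):
    # Pass 1: cheap title manifest
    manifest = await recall_sketch(query, bank_id, detail_level="titles")
    # Agent inspects titles and picks the relevant ids
    wanted = {h["memory_id"] for h in manifest if "rollback" in h["text"].lower()}
    # Pass 2: full bodies, filtered to the selected memories
    full = await recall_sketch(query, bank_id, detail_level="full")
    return [h for h in full if h["memory_id"] in wanted]
```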

Originally planned for Mystique, moved to astrocyte — this is an orchestration capability.

external_context field on RecallRequest: callers pass in results from external RAG or graph systems, which the framework fuses with provider recall results under one token budget.

```python
# Caller fetches RAG results separately
rag_hits = await rag_client.search("deployment pipeline")

# Fuse with memory recall
hits = await brain.recall(
    "How does deployment work?",
    bank_id="team",
    external_context=[MemoryHit(text=r.text, score=r.score) for r in rag_hits],
)
# → Fused results from memory + RAG, deduplicated, budget-enforced
```

Module: astrocyte/hybrid.py — AdaptiveRouter class

Adaptive per-query weights in HybridEngineProvider. Classifies queries by:

  • Temporal signals — date/time keywords boost engine (if it supports temporal search)
  • Entity density — capitalized proper nouns boost engine (if it supports graph search)
  • Question complexity — how/why/explain boost engine (if it supports reflect)
  • Query length — short queries boost pipeline (keyword/BM25 sufficient)

```python
hybrid = HybridEngineProvider(engine=mystique, pipeline=pipeline, adaptive_routing=True)
# Temporal query → routes more weight to engine
# Simple factual → routes more weight to pipeline
```
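
The signal classification can be sketched with simple heuristics; the regexes, boost values, and normalization here are assumptions, not `AdaptiveRouter`'s actual rules:

```python
import re

def classify_query_sketch(query):
    """Heuristic per-query routing weights (all thresholds and boosts assumed)."""
    weights = {"engine": 0.5, "pipeline": 0.5}
    if re.search(r"\b(yesterday|today|last week|when|\d{4}-\d{2}-\d{2})\b",
                 query.lower()):
        weights["engine"] += 0.2  # temporal signal -> temporal search
    entities = re.findall(r"\b[A-Z][a-z]+\b", query)
    if len(entities) >= 2:
        weights["engine"] += 0.1  # entity density -> graph search
    if re.search(r"\b(how|why|explain)\b", query.lower()):
        weights["engine"] += 0.1  # question complexity -> reflect
    if len(query.split()) <= 4:
        weights["pipeline"] += 0.2  # short query: keyword/BM25 sufficient
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}
```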

Phase 3: Declarative routing (implemented)


Design doc: memory-intent-protocol.md

Implementation (Python): astrocyte.mip — YAML loader, mechanical rule engine, MipRouter (rules first, then optional LLM intent when configured). Rust and spec-only options (e.g. integration patterns in memory-intent-protocol.md §6) follow the usual parallel-implementation cadence.

MIP makes memory routing declarative — both deterministic rules and LLM-based intent in one protocol. “Intent” carries two senses: the system’s declared intent (rules, compliance policies, escalation paths) and the model’s expressed intent (LLM judgment when rules can’t resolve).

Components:

  • mip.yaml — bank definitions, priority-ordered mechanical rules, match DSL (all/any/none, existence checks, value checks, computed signals), action DSL (bank templates, tags, retain policies, escalation)
  • Intent policy — LLM prompt + constraints, fires only on escalation from mechanical rules
  • Override hierarchy — compliance rules always mechanical, never delegated to model judgment
  • Retrieval/reflect triggers — auto-recall and auto-reflect conditions

Resolution pipeline: Mechanical rules first → ambiguity detector → LLM intent only when rules cannot resolve. Zero inference cost for the deterministic path.
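
The resolution pipeline can be sketched as follows, with `rules`, `detect_ambiguity`, and `llm_intent` as hypothetical callables rather than the actual `MipRouter` interface:

```python
def resolve_sketch(event, rules, detect_ambiguity, llm_intent=None):
    """MIP-style resolution: mechanical rules first, LLM only on escalation.

    rules: priority-ordered list of (match_fn, action) pairs (shape assumed).
    """
    matched = [action for match_fn, action in rules if match_fn(event)]
    if matched and not detect_ambiguity(matched):
        return matched[0]  # deterministic path: zero inference cost
    if llm_intent is not None:
        # Escalate to model judgment only when rules cannot resolve
        return llm_intent(event, candidates=matched)
    return None  # unresolved: no rule fired and no intent policy configured
```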

Relationship to existing innovations:

  • LLM-curated retain (§2.2) is a subset of MIP’s intent layer
  • Cross-engine routing (§2.6) is orthogonal to MIP routing
  • MIP does not replace these — it provides the declarative policy layer above them

ConsolidationTrace: a dataclass exposing when mental models were updated, observations were formed, facts were consolidated, and disposition drift was detected. Mystique-specific because mental models are computed by Hindsight’s engine.

Quality-based loss functions and observation formation — Hindsight’s engine advantage. Per-bank policies for when to consolidate (session-end, quota-trigger, idle-window, scheduled).


All innovations follow these rules:

  1. Types: New fields have None/False defaults. Pattern: field: type | None = None
  2. Config: New sections have enabled: bool = False as first field
  3. Pipeline: Feature-gated with if config.X.enabled: — default path unchanged
  4. SPI: No changes to Protocol method signatures. New fields are optional on request/result types.
  5. Tests: All existing tests pass after each innovation