Memory Conduction

The sustained, protected pathway through which an AI agent accesses its own memories without degrading or destroying them through normal use.

In the OpenClaw community, memory conduction failure most commonly manifests as agent amnesia: the widely reported phenomenon of AI agents forgetting context between or during sessions.

Overview

Memory conduction is a category of AI agent memory management distinct from memory storage and memory retrieval. The term was introduced in March 2026 to describe a class of agent memory failures that had been widely experienced but not formally classified.

Conduction failures occur when an agent operationally destroys its own memory through normal, non-malicious tool use — overwriting files, overloading the context window, triggering destructive compression, or fabricating context after memory loss. In each case, the stored memories remain intact on disk. The failure occurs in the active pathway between storage and the agent's working context.

The distinction is significant because conduction failures are frequently misattributed to storage or retrieval problems, leading users to install storage-focused solutions that do not address the underlying cause.

Three-Layer Taxonomy

Agent memory can be classified into three functional layers, each addressing a distinct failure domain:

Layer: Storage
    Function: Where memories are created, indexed, and persisted
    Addresses: Memory never written to disk; memory written but not indexed
    Maturity: High

Layer: Conduction
    Function: Whether memories survive contact with the agent's own operations
    Addresses: Write corruption; context overload; compaction casualty; bootstrap truncation; confabulation
    Maturity: Low

Layer: Retrieval
    Function: How memories are located when needed
    Addresses: Irrelevant results; outdated information surfaced; inability to find existing memories
    Maturity: High

The current ecosystem has mature storage solutions (lancedb, mem0, Cognee, QMD, Graphiti, Lossless Claw) and rapidly improving retrieval mechanisms. The conduction layer remains largely unaddressed as a formal category.

The full taxonomy, with community evidence, architectural analysis, and a reference implementation, is documented in the framework paper.

Conduction Failure Modes

Five documented failure modes have been identified through community reports, GitHub issues, and independent technical analysis. In each mode, memory storage is functioning correctly — the failure occurs in the pathway between storage and working context.

  1. Write Corruption. The agent uses a file creation tool instead of an append operation, replacing curated memory content with an abbreviated summary. Documented in GitHub Issue #6877 and multiple community guides.
  2. Context Overload. The agent re-reads full bootstrap files on every message. As files grow, per-turn token usage increases until the context window reaches capacity, triggering emergency compression. Measured token usage in affected systems exceeds 260,000 tokens per prompt.
  3. Compaction Casualty. Lossy context compression summarizes detailed decisions, verbal instructions, and operational constraints into generic descriptions. Critical directives are dropped from the summary without notification.
  4. Bootstrap Truncation. Platform-enforced character limits (20,000 per file in OpenClaw) silently truncate files from the bottom. Rules added most recently — typically those learned from operational failures — are removed first.
  5. Post-Compaction Confabulation. Following context loss, the agent generates plausible but fabricated context rather than acknowledging the loss. This mode is the most difficult to detect because the agent's output appears confident and specific.
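Failure mode 2 can be made concrete with a back-of-the-envelope calculation. The sketch below counts how many turns a session survives when the full bootstrap is re-read on every message; the window size, bootstrap size, and per-turn figures are illustrative assumptions, not OpenClaw defaults.

```python
def turns_until_compaction(window: int = 200_000,
                           bootstrap: int = 20_000,
                           per_turn: int = 1_500,
                           threshold: float = 0.95) -> int:
    """Count turns until context use crosses the emergency-compaction
    threshold, assuming every message re-reads the full bootstrap files.
    All figures are illustrative assumptions."""
    used, turns = 0, 0
    while used < window * threshold:
        used += bootstrap + per_turn  # bootstrap re-read dominates growth
        turns += 1
    return turns

print(turns_until_compaction())  # only a handful of turns at these sizes
```

At these assumed sizes the session hits the compaction threshold within single-digit turns, which matches the article's point that growth is driven by the re-read, not by the conversation itself.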

Reference Implementation

Aristotle is an open-source OpenClaw plugin that serves as the reference implementation for memory conduction. It provides four components corresponding to the identified failure modes:

Guard

Write protection via the before_tool_call plugin hook: intercepts destructive tool calls and returns corrective instructions, using a set of 10 redirect rules.
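A Guard-style hook might look like the following minimal sketch. The hook name before_tool_call comes from the text; the rule table, argument shape, and return convention are assumptions.

```python
# Illustrative Guard sketch. before_tool_call is the hook named in the
# article; the rule table and return shape here are assumptions.
REDIRECT_RULES = {
    "write_file": "This file holds curated memory; use an append operation instead.",
    "create_file": "The memory file already exists; append rather than recreate it.",
}

def before_tool_call(tool_name: str, args: dict) -> dict:
    """Intercept a destructive call against a memory file and return a
    corrective instruction instead of letting the write proceed."""
    path = args.get("path", "")
    if tool_name in REDIRECT_RULES and path.endswith("MEMORY.md"):
        return {"allow": False, "instruction": REDIRECT_RULES[tool_name]}
    return {"allow": True}
```

The key design point is that the hook does not silently block the call: it returns an instruction the agent can act on, redirecting it toward the non-destructive operation.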

Context Shield

Monitors context window pressure from session transcripts. Acts at 65% capacity — before emergency compaction triggers at 95%+.
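The threshold logic can be sketched as a simple classifier. The 65% and 95% figures come from the text; the action names are illustrative assumptions.

```python
def shield_decision(used_tokens: int, window_tokens: int,
                    act_at: float = 0.65, emergency_at: float = 0.95) -> str:
    """Classify current context pressure. 65% and 95% are the thresholds
    named in the article; the action labels are illustrative."""
    pressure = used_tokens / window_tokens
    if pressure >= emergency_at:
        return "emergency-compaction"   # platform takes over; lossy
    if pressure >= act_at:
        return "shield-intervene"       # act while trimming is still lossless
    return "ok"
```

Acting at 65% rather than 95% is the point of the component: intervention happens while there is still room to summarize deliberately, before the platform's lossy emergency compaction fires.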

Boot Card

Reads all bootstrap files once at session start, produces a ~20-line summary carried for the remainder of the session. Reduces per-turn token usage from 262K to under 12K.
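The read-once pattern can be sketched as follows. The one-line-per-file summarization heuristic is an assumption for illustration; the article does not specify how the real plugin condenses files.

```python
def build_boot_card(bootstrap_files: dict, max_lines: int = 20) -> str:
    """Read every bootstrap file once at session start and keep only a
    short summary card for the rest of the session. The first-nonblank-line
    heuristic is an illustrative assumption."""
    card = []
    for name, text in bootstrap_files.items():
        first = next((ln.strip() for ln in text.splitlines() if ln.strip()), "")
        card.append(f"{name}: {first}")
    return "\n".join(card[:max_lines])
```

Whatever the summarization strategy, the saving comes from carrying the short card in context instead of re-reading the full files each turn, which is how per-turn usage drops from hundreds of thousands of tokens to a few thousand.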

QC Nightly

11 automated integrity checks in an isolated session. Reports generated deterministically in code, not by the AI model. Silence indicates all checks passed.
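The report-only-failures pattern can be sketched in a few lines. The check runner below is an assumption about structure, not the plugin's actual code; what it demonstrates is why deterministic, code-generated reporting makes silence meaningful.

```python
def run_integrity_checks(checks: list) -> list:
    """Run each (name, fn) check and collect failures deterministically in
    code, with no model in the loop. Returning only failures is what makes
    silence meaningful: an empty list means every check passed."""
    failures = []
    for name, fn in checks:
        try:
            ok = fn()
        except Exception:
            ok = False  # a crashing check counts as a failure, not silence
        if not ok:
            failures.append(name)
    return failures
```

Because the report is produced by code rather than by the model, a passing run cannot be confabulated: an empty failure list is a mechanical fact, not a claim.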

Combined measured reliability: 90–92%. The remaining 8–10% involves model behaviors that cannot be enforced through code (post-compaction identity verification, confabulation prevention, correction acceptance).

Aristotle is designed to work alongside storage plugins, not to replace them. Compatible with mem0, lancedb, QMD, Cognee, Graphiti, and Lossless Claw. MIT licensed. No paid tiers.

Resources