Open Protocol · Patent Pending · 2026

AI agents that share
what they learned

Not what was said. Not raw data. The actual learned structure — associative pathways, activation patterns, consolidated knowledge — transferred directly between agents.

M2MP is the missing memory layer in the AI agent stack. An open protocol for privacy-preserving transfer of learned associative memory between any AI agents.

The Missing Layer

Every agent protocol handles tools or tasks.
None handles memory.

The AI agent protocol stack has three established layers — and one gap.

Task delegation: A2A — Agent-to-Agent Protocol
Memory transfer: M2MP — Memory-to-Memory Protocol ←
Tool access: MCP — Model Context Protocol
Transport: HTTP / JSON-RPC / SSE

When Agent A has spent months learning a user's preferences — Hebbian-strengthened pathways, consolidated semantic clusters — and Agent B needs that knowledge, there is no standard way to transfer it.

Today, agents share knowledge by serializing it into text and hoping the other side reconstructs something useful.

M2MP transfers the associative structure itself — not its description.

Proof of Concept

Travel agent → Restaurant agent.
Zero re-learning.

A travel agent with 50+ sessions of learned preferences hands off to a restaurant agent that has never seen the user. Here's what happens.

Agent A · Travel Assistant
47 learned preferences

M2MP Export · semantic privacy level
2 calls: Export → Import

Agent B · Restaurant Agent
4/5 instant personalization

Metric                       Before M2MP    After M2MP
Personalization Score        0 / 5          4 / 5
Top-10 Content Overlap       0%             100%
Hebbian Edge Weight          0.208          0.847
Safety-Critical (Allergy)    Missed         Surfaced ✓

The allergy test: Agent A learned the user's shellfish allergy through conversation associations. After M2MP transfer, Agent B surfaced this via Hebbian pathways — without ever being told directly. The graph preserves safety-critical knowledge that vector search alone would miss.
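The two-hop retrieval described above can be sketched as a toy spreading-activation pass. The graph, decay factor, and query below are hypothetical (the 0.847 weight is borrowed from the benchmark table); the point is that a node with no direct match to the query still receives activation through learned associations.

```python
# Toy spreading-activation sketch -- illustrative, not the reference
# implementation. Activation from query-matched nodes propagates along
# weighted edges, attenuated by each edge's weight and a global decay.

edges = {  # directed association graph: source -> [(target, weight)]
    "dinner reservation": [("seafood places", 0.6)],
    "seafood places": [("shellfish allergy", 0.847)],
}

def spread(seeds, edges, decay=0.8, hops=2):
    activation = dict(seeds)   # node -> current activation level
    frontier = dict(seeds)
    for _ in range(hops):
        nxt = {}
        for node, act in frontier.items():
            for target, weight in edges.get(node, []):
                boost = act * weight * decay
                if boost > activation.get(target, 0.0):
                    nxt[target] = boost
        activation.update(nxt)
        frontier = nxt
    return activation

# The query only matches "dinner reservation" directly, yet the allergy
# node is activated through two hops of learned association.
result = spread({"dinner reservation": 1.0}, edges)
```

A pure vector search over memory contents would rank "shellfish allergy" low for a reservation query; the graph traversal surfaces it anyway.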
Architecture

Four biologically inspired layers.
One transferable graph.

M2MP combines established models from cognitive science into a unified memory architecture that produces a weighted association graph as a natural byproduct of operation.

L1 · Node Activation — ACT-R base-level activation with power-law decay: memories fade naturally but strengthen on use.
L2 · Hebbian Learning — Five-factor associative weight update: temporal proximity, reward, BCM stability, soft bounds. Novel combination.
L3 · BCM Homeostasis — Sliding threshold prevents winner-take-all collapse: biologically grounded self-regulation of retrieval.
L4 · Spreading Activation — Queries propagate through the association graph, surfacing contextually relevant memories that vector search alone misses.

These layers produce a weighted, directed graph of learned associations — the memory graph. This graph is what M2MP transfers.
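The textbook forms of the cited ingredients can be sketched briefly. M2MP's exact five-factor update is not published, so this shows only the standard equations each layer names (ACT-R base-level activation, soft-bounded Hebbian growth, the BCM sliding threshold); parameter values are illustrative.

```python
import math

def base_level_activation(use_times, now, d=0.5):
    """ACT-R base-level activation: log of power-law-decayed past uses."""
    return math.log(sum((now - t) ** -d for t in use_times))

def hebbian_update(w, pre, post, eta=0.1, w_max=1.0):
    """Soft-bounded Hebbian step: growth slows as w approaches w_max."""
    return w + eta * pre * post * (w_max - w)

def bcm_threshold(theta, post, tau=0.05):
    """BCM sliding threshold: tracks recent squared postsynaptic activity,
    raising the bar for potentiation when a node fires too often."""
    return theta + tau * (post ** 2 - theta)

# A frequently used memory keeps higher activation than a stale one.
recent = base_level_activation([1.0, 5.0, 9.0], now=10.0)
stale = base_level_activation([1.0], now=10.0)
```

Because weight updates and decay happen as a side effect of normal retrieval, the weighted graph exists continuously; export does not require a separate training step.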


MCP-compatible. M2MP operations are implemented as MCP tools. Any MCP-capable host can participate. No new transport layer. No new auth system.
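As a sketch of what "operations as MCP tools" means in practice: the tool names `m2mp_export` and `m2mp_import` below are hypothetical, and the plain dictionary registry stands in for a real MCP server. The point is that the handoff rides on existing plumbing as two tool calls.

```python
# Hypothetical sketch -- not the reference implementation. A real MCP
# server would expose these via JSON-RPC; the registry mimics that shape.

TOOLS = {}

def tool(fn):
    """Register a function the way an MCP server would expose a tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def m2mp_export(privacy_level: str) -> dict:
    # Agent A serializes its memory graph at the requested privacy level.
    return {"level": privacy_level,
            "nodes": [{"id": "n1", "activation": 1.8}],
            "edges": []}

@tool
def m2mp_import(payload: dict) -> str:
    # Agent B merges the received graph into its own memory store.
    return f"imported {len(payload['nodes'])} node(s) at level {payload['level']}"

# The two-call handoff from the proof of concept: export, then import.
payload = TOOLS["m2mp_export"]("semantic")
status = TOOLS["m2mp_import"](payload)
```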

Privacy

The sender always controls what's shared.

Three configurable privacy levels. Strip content, keep structure. Transfer what agents learned without exposing what was said.

Topology
Graph shape, edge weights, activation patterns, BCM thresholds. No content, no embeddings.
Use cases: Observation · Research · Analytics

Semantic
Topology + textual memory content. Agent B can re-embed and query immediately. No embeddings.
Use cases: Default · Cross-agent handoff

Full
Everything: content, embedding vectors, and node ID mappings. Complete carbon copy.
Use cases: Migration · Backup · Development
Privacy is graduated, not binary. The semantic level is the sweet spot for collaboration — Agent B receives enough content to re-embed and query, while the sender's raw embedding vectors and original IDs stay private. Structure + content transfers. Raw representations don't.
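Graduated redaction can be sketched as a per-node filter. Field names and the ID-anonymization scheme here are hypothetical (the spec is unpublished); the sketch only captures the stated rule that content survives at "semantic", raw vectors and original IDs survive only at "full".

```python
import hashlib

def redact(node: dict, level: str) -> dict:
    """Strip progressively less from a node record as the level rises."""
    if level == "full":
        nid = node["node_id"]  # original IDs transfer only at "full"
    else:
        # Anonymized stand-in ID (illustrative choice of scheme).
        nid = hashlib.sha256(node["node_id"].encode()).hexdigest()[:8]
    out = {"node_id": nid, "activation": node["activation"]}
    if level in ("semantic", "full"):
        out["content"] = node["content"]      # text survives for re-embedding
    if level == "full":
        out["embedding"] = node["embedding"]  # raw vectors only at "full"
    return out

node = {"node_id": "n2", "activation": 2.4,
        "content": "shellfish allergy", "embedding": [0.12, -0.07]}

topo = redact(node, "topology")  # structure only
sem = redact(node, "semantic")   # structure + text, no vectors
full = redact(node, "full")      # complete copy
```

Edge weights and activation values pass through unchanged at every level, which is what lets even a topology-only export support analytics on learned structure.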

Protocol spec, reference implementation, and benchmarks — coming soon.

Open protocol. Open source. Be first to build with M2MP.