Add Persistent Memory to CrewAI Agents

Give your CrewAI crews long-term memory so agents remember past interactions, customer details, and project decisions across runs.

Table of contents
  1. Prerequisites
  2. Install
  3. How it works
  4. HippoDid as a CrewAI tool
  5. Per-customer character memory
  6. Working example: customer service crew
  7. Injecting context via assemble_context
  8. Tips
  9. Next steps

Prerequisites

  • A HippoDid API key (keys look like hd_key_...)
  • Python 3 with pip
Install

pip install hippodid crewai crewai-tools

How it works

CrewAI has built-in short-term and long-term memory, but that memory is local to a single deployment: it cannot be shared across deployments or with other AI tools. HippoDid replaces it with a cloud-backed memory layer:

  1. Before a crew runs, each agent retrieves relevant context from HippoDid
  2. During execution, agents store new facts and decisions
  3. On the next run, those memories are available to any agent in the crew

You integrate HippoDid into CrewAI by creating a custom tool that agents can call.
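The three steps above can be sketched in a few lines. This is an illustration only: a plain Python list stands in for HippoDid's cloud store, and the crew run is stubbed as a string (the real search and write calls appear in the next section).

```python
# Illustrative retrieve -> run -> store loop. A plain list stands in for
# HippoDid's cloud store; the crew itself is stubbed as a string.
memories: list[str] = []

def run_once(customer_message: str) -> str:
    # 1. Retrieve relevant context before the crew runs
    context = [m for m in memories if "size" in m.lower()] or ["(no history)"]
    # 2. The crew executes with that context (stubbed here)
    reply = f"context={context} reply_to={customer_message!r}"
    # 3. Store new facts for the next run
    memories.append(f"Customer reported: {customer_message}")
    return reply

first = run_once("I got the wrong size")
second = run_once("Wrong size again")
print("(no history)" in first, "(no history)" in second)  # -> True False
```

The first run finds nothing; the fact stored during it is visible to the second run. That is the entire contract the CrewAI tools below implement.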


HippoDid as a CrewAI tool

Define two tools that wrap HippoDid’s search and write operations.

from crewai.tools import BaseTool
from hippodid import HippoDid
from pydantic import Field

class HippoDidSearchTool(BaseTool):
    """Search long-term memory for relevant context."""
    name: str = "search_memory"
    description: str = (
        "Search the team's long-term memory for relevant facts, decisions, "
        "and customer information. Use this before starting any task to check "
        "what is already known."
    )
    client: HippoDid = Field(exclude=True)
    character_id: str = Field(exclude=True)

    def _run(self, query: str) -> str:
        results = self.client.search_memories(
            character_id=self.character_id,
            query=query,
            limit=10,
        )
        if not results["memories"]:
            return "No relevant memories found."
        lines = []
        for mem in results["memories"]:
            lines.append(f"- [{mem['category']}] {mem['content']}")
        return "\n".join(lines)


class HippoDidStoreTool(BaseTool):
    """Store a fact or decision in long-term memory."""
    name: str = "store_memory"
    description: str = (
        "Store an important fact, decision, or customer preference in long-term "
        "memory. Other agents and future runs will be able to recall this."
    )
    client: HippoDid = Field(exclude=True)
    character_id: str = Field(exclude=True)

    def _run(self, content: str) -> str:
        result = self.client.add_memory(
            character_id=self.character_id,
            content=content,
        )
        return f"Stored memory: {result['id']} [{result.get('category', 'auto')}]"
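The tools above rely on specific response shapes from the client: search_memories returning a dict with a "memories" list, and add_memory returning a dict with "id" and "category". For local tests without network access, those shapes can be faked in memory. The stand-in below is an assumption based on the calls in this guide, not part of the HippoDid SDK:

```python
import uuid

class FakeHippoDid:
    """In-memory stand-in for the HippoDid client, matching the response
    shapes the tools in this guide rely on (assumed, not the real SDK)."""

    def __init__(self):
        self._store: dict[str, list[dict]] = {}

    def add_memory(self, character_id: str, content: str) -> dict:
        mem = {"id": str(uuid.uuid4()), "category": "auto", "content": content}
        self._store.setdefault(character_id, []).append(mem)
        return mem

    def search_memories(self, character_id: str, query: str, limit: int = 10) -> dict:
        # Naive keyword match; the real service does semantic search.
        words = query.lower().split()
        hits = [m for m in self._store.get(character_id, [])
                if any(w in m["content"].lower() for w in words)]
        return {"memories": hits[:limit]}

fake = FakeHippoDid()
fake.add_memory("cust-1", "Prefers size M; had two wrong-size shipments")
print(fake.search_memories("cust-1", "size")["memories"][0]["content"])
```

Passing a stand-in like this as `client=` lets you unit-test the tools' formatting logic without an API key.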

Per-customer character memory

For customer service crews, create one HippoDid character per customer. Each customer gets their own memory namespace with preferences, history, and interaction logs.

from hippodid import HippoDid

client = HippoDid(api_key="hd_key_...")

def get_tools_for_customer(customer_id: str):
    """Return HippoDid tools scoped to a specific customer character."""
    return [
        HippoDidSearchTool(client=client, character_id=customer_id),
        HippoDidStoreTool(client=client, character_id=customer_id),
    ]

# Look up the customer's character by alias
customer_email = "jane@example.com"  # example identifier from your CRM
character = client.resolve_character(alias=f"client:{customer_email}")
tools = get_tools_for_customer(character["id"])
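The client:<email> alias format is easy to fragment if callers pass differently cased or padded emails. A small normalizing helper keeps every variant on one character; the trim-and-lowercase policy is our suggestion, not a HippoDid requirement:

```python
def customer_alias(email: str) -> str:
    """Build the alias used to resolve a customer's character.
    Normalizing (trim + lowercase) keeps 'Jane@X.com' and 'jane@x.com'
    pointing at the same character."""
    return f"client:{email.strip().lower()}"

print(customer_alias("  Jane.Doe@Example.com "))  # -> client:jane.doe@example.com
```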

Working example: customer service crew

A complete crew with a researcher agent that checks memory and a responder agent that drafts the reply.

from crewai import Agent, Task, Crew, Process
from hippodid import HippoDid

client = HippoDid(api_key="hd_key_...")
customer_character_id = "CUSTOMER_CHARACTER_ID"

# Tools scoped to this customer
search_tool = HippoDidSearchTool(client=client, character_id=customer_character_id)
store_tool = HippoDidStoreTool(client=client, character_id=customer_character_id)

# Agent 1: Research — checks memory for context
researcher = Agent(
    role="Customer Research Specialist",
    goal="Find all relevant context about the customer before drafting a response",
    backstory=(
        "You have access to the company's long-term memory system. Before the "
        "team responds to any customer, you search for their history, preferences, "
        "past issues, and communication style."
    ),
    tools=[search_tool],
    verbose=True,
)

# Agent 2: Responder — writes the reply and stores new facts
responder = Agent(
    role="Customer Support Agent",
    goal="Write a personalized, helpful response using the context provided",
    backstory=(
        "You craft thoughtful support responses. You always store new facts learned "
        "from the interaction — preferences, issues, sentiment — so the team "
        "remembers them next time."
    ),
    tools=[store_tool],
    verbose=True,
)

# Tasks
research_task = Task(
    description=(
        "The customer sent: '{customer_message}'\n\n"
        "Search memory for everything relevant to this customer and their request. "
        "Summarize what you find."
    ),
    expected_output="A summary of relevant customer context from memory.",
    agent=researcher,
)

response_task = Task(
    description=(
        "Using the research context, write a personalized response to the customer. "
        "After drafting, store any new facts you learned (preferences, issues, "
        "sentiment) in memory for future reference."
    ),
    expected_output="A customer-facing response ready to send.",
    agent=responder,
    context=[research_task],
)

# Crew
support_crew = Crew(
    agents=[researcher, responder],
    tasks=[research_task, response_task],
    process=Process.sequential,
    verbose=True,
)

# Run
result = support_crew.kickoff(
    inputs={"customer_message": "I received the wrong size again. This is the third time."}
)
print(result)

On the first run, the researcher may find no history. The responder stores the sizing issue. On the next run, the researcher finds the pattern of repeated sizing problems, and the responder can escalate proactively.
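The repeat-issue pattern described above can also be made explicit outside the LLM, so escalation is deterministic. A sketch, with a plain list standing in for stored memories and a threshold we chose for illustration:

```python
def should_escalate(memories: list[str], keyword: str, threshold: int = 2) -> bool:
    """Escalate when the same issue keyword appears in `threshold` or more
    stored memories, i.e. the customer has hit a repeat problem."""
    hits = sum(1 for m in memories if keyword in m.lower())
    return hits >= threshold

history = [
    "customer received wrong size (order #1)",
    "wrong size again (order #2)",
]
print(should_escalate(history, "wrong size"))  # -> True
```

In practice you would feed this the results of search_memories for the customer before the crew runs, and route to a human when it returns True.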


Injecting context via assemble_context

For agents that need structured context at the start of a task (rather than searching during execution), use assemble_context() to pre-build the context block.

context = client.assemble_context(
    character_id=customer_character_id,
    query="customer with repeated sizing issues",
    strategy="concierge",
    max_tokens=2000,
)

research_task = Task(
    description=(
        f"Customer context from memory:\n{context}\n\n"
        # Plain (non-f) string, so CrewAI fills {customer_message} at kickoff
        "The customer sent: '{customer_message}'\n\n"
        "Analyze the context and recommend how to handle this interaction."
    ),
    expected_output="Recommended approach based on customer history.",
    agent=researcher,
)
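Mixing Python f-strings with CrewAI's {customer_message} placeholder is easy to get wrong: inside an f-string, Python interpolates the braces itself, so the placeholder never reaches CrewAI. Two safe options, demonstrated on plain strings:

```python
context = "Returning customer; two prior wrong-size shipments."

# Option 1: adjacent string literals. Only the first is an f-string, so
# {customer_message} in the second survives for CrewAI to fill at kickoff.
desc1 = (f"Context:\n{context}\n\n"
         "The customer sent: '{customer_message}'")

# Option 2: escape the braces inside the f-string.
desc2 = f"Context:\n{context}\n\nThe customer sent: '{{customer_message}}'"

print("{customer_message}" in desc1 and "{customer_message}" in desc2)  # -> True
```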

Tips

  • One character per customer keeps memory clean and searchable. Use batch onboarding to create thousands of characters from a CRM export.
  • Use aliases (client:email@example.com) so you can look up characters by customer identifier instead of UUID.
  • Store decisions, not raw transcripts. Let HippoDid’s AI extraction pull out the structured facts. Use EXTRACTED mode (the default) for best results.
  • Search before acting. Train your agents to always call search_memory at the start of a task. This is the single biggest improvement to multi-run quality.
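One lightweight way to enforce the search-before-acting tip is to prepend a standing instruction to every agent backstory. The prompt wording below is illustrative, not a CrewAI or HippoDid convention:

```python
MEMORY_FIRST = (
    "Before doing anything else, call search_memory with a short query "
    "describing the task, and base your work on what it returns."
)

def with_memory_habit(backstory: str) -> str:
    # Prepend the search-first instruction so every agent built through
    # this helper checks memory before acting.
    return MEMORY_FIRST + "\n\n" + backstory

print(with_memory_habit("You craft thoughtful support responses.").splitlines()[0])
```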

Next steps


Copyright © 2026 SameThoughts. HippoDid is proprietary software. Open-source components (Spring Boot Starter, MCP Server) are Apache 2.0.
