# Python SDK

Add persistent memory to your AI agents in a few lines of Python.
## Installation

```bash
pip install hippodid
```

For batch operations with pandas support:

```bash
pip install "hippodid[pandas]"
```

Requires Python 3.8+.
## Quick start

```python
from hippodid import HippoDid

client = HippoDid(api_key="hd_key_...")

# Create a character
character = client.create_character(name="My Agent", category_preset="developer")
char_id = character.id

# Store a memory
memory = client.add_memory(char_id, content="User prefers dark mode and vim keybindings")
print(f"Stored: {memory.id} [{memory.category}]")

# Search memories
results = client.search(char_id, query="UI preferences", limit=5)
for m in results.memories:
    print(f"[{m.salience:.2f}] {m.content}")

# Assemble context for an LLM prompt
context = client.assemble_context(char_id, query="What does the user prefer?")
print(context.formatted_prompt)
```
## Configuration

```python
from hippodid import HippoDid

client = HippoDid(
    api_key="hd_key_...",                 # Required
    base_url="https://api.hippodid.com",  # Optional, default shown
    tenant_id="your-tenant-uuid",         # Optional, for multi-tenant apps
    timeout=30,                           # Optional, request timeout in seconds (default: 30)
    max_retries=3,                        # Optional, retry count on transient errors (default: 3)
)
```
| Parameter | Required | Default | Description |
|---|---|---|---|
| `api_key` | Yes | – | Your HippoDid API key (`hd_key_...`) |
| `base_url` | No | `https://api.hippodid.com` | API base URL |
| `tenant_id` | No | – | Tenant ID for multi-tenant setups |
| `timeout` | No | `30` | Request timeout in seconds |
| `max_retries` | No | `3` | Automatic retries on 5xx and network errors |
## Async client

For asyncio-based applications (FastAPI, etc.), use the async variant:

```python
import asyncio

from hippodid import AsyncHippoDid

async def main():
    client = AsyncHippoDid(api_key="hd_key_...")
    character = await client.create_character(name="Async Agent")
    memory = await client.add_memory(character.id, content="Likes async code")
    results = await client.search(character.id, query="preferences")

asyncio.run(main())
```

The async client exposes the same methods as the synchronous client; all of them return awaitables.
## Context assembly

`assemble_context()` retrieves and formats memories into a single text block suitable for injecting into an LLM system prompt. You choose a strategy that controls which memories are included and how they are organized.

```python
context = client.assemble_context(
    character_id,
    query="What does the user prefer?",
    strategy="default",
    max_tokens=2000,
)

print(context.formatted_prompt)  # Ready-to-use LLM prompt
print(context.token_estimate)    # Approximate token count
```
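The `formatted_prompt` string is designed to drop into a system message. A minimal sketch of wiring it into a chat-completion-style message list — the `build_messages` helper and the system-prompt wording are our own, not part of the SDK; in practice you would pass `context.formatted_prompt` as the first argument:

```python
def build_messages(context_block, user_message):
    """Combine an assembled memory block with the user's message
    into a chat-completion-style message list."""
    system = (
        "You are a helpful assistant. Use the following memories "
        "about the user when relevant:\n\n" + context_block
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

messages = build_messages("- User prefers dark mode.", "Set up my editor.")
```

The resulting list can be handed directly to most chat-style LLM APIs.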
### Assembly strategies

#### default

Balanced mix of recent and relevant memories, grouped by category.

```python
context = client.assemble_context(char_id, query="help the user", strategy="default")
```
#### conversational

Optimized for chat. Prioritizes recent interactions, user preferences, and relationship context.

```python
context = client.assemble_context(
    char_id,
    query="continue our conversation about the project",
    strategy="conversational",
    max_tokens=1500,
)
```
#### task_focused

Prioritizes skills, decisions, and goal-related memories. Best for agents executing specific tasks.

```python
context = client.assemble_context(
    char_id,
    query="deploy the staging environment",
    strategy="task_focused",
)
```
#### concierge

Designed for customer-facing agents. Emphasizes preferences, past interactions, and service history.

```python
context = client.assemble_context(
    char_id,
    query="help the customer with their order",
    strategy="concierge",
)
```
#### matching

Pure semantic similarity search. Returns the most relevant memories for the query with no category grouping.

```python
context = client.assemble_context(
    char_id,
    query="PostgreSQL configuration",
    strategy="matching",
    max_tokens=1000,
)
```
## API reference

### Characters

| Method | Description |
|---|---|
| `create_character(name, category_preset=None, external_id=None)` | Create a new character |
| `get_character(character_id)` | Get a character by ID |
| `get_character_by_external_id(external_id)` | Look up a character by external ID |
| `list_characters(page=0, limit=20)` | List all characters for the tenant |
| `update_character(character_id, **fields)` | Update character profile fields |
| `delete_character(character_id)` | Archive/delete a character |
| `clone_character(character_id, name, external_id=None, copy_tags=False)` | Clone a character |
| `set_memory_mode(character_id, mode)` | Set memory mode (`EXTRACTED`, `VERBATIM`, `HYBRID`); set after creation, not at create time |
| `upsert_by_external_id(external_id, name, category_preset=None)` | Create or update a character by external ID |
### Memories

| Method | Description |
|---|---|
| `add_memory(character_id, content, category=None, importance=None)` | Add a memory with AI extraction |
| `add_memory_direct(character_id, content, category, salience=0.5)` | Add a memory directly (no AI) |
| `search(character_id, query, limit=10, category=None)` | Semantic search across memories |
| `get_memories(character_id, page=0, limit=20, category=None)` | List memories with optional category filter |
| `get_memory(character_id, memory_id)` | Get a single memory by ID |
| `update_memory(character_id, memory_id, content=None, category=None)` | Update a memory |
| `delete_memory(character_id, memory_id)` | Delete a memory |
### Categories

| Method | Description |
|---|---|
| `list_categories(character_id)` | List all categories for a character |
| `add_category(character_id, name, description=None, importance=None)` | Add a custom category |
### Tags

| Method | Description |
|---|---|
| `list_tags(character_id)` | List all tags on a character |
| `add_tags(character_id, tags)` | Add tags to a character |
| `replace_tags(character_id, tags)` | Replace all tags on a character |
| `remove_tag(character_id, tag)` | Remove a single tag |
### Templates

| Method | Description |
|---|---|
| `create_template(name, config)` | Create a character template |
| `list_templates()` | List all templates |
| `get_template(template_id)` | Get a template by ID |
| `update_template(template_id, **fields)` | Update a template |
| `delete_template(template_id)` | Delete a template |
| `preview_template(template_id)` | Preview what a character from this template looks like |
| `clone_template(template_id, name)` | Clone an existing template |
### Batch operations

| Method | Description |
|---|---|
| `batch_create_characters(template_id, data, external_id_column, on_conflict?, dry_run?)` | Batch create characters. Accepts a list of dicts or a pandas DataFrame; the SDK converts it to CSV and uploads it via multipart/form-data internally. |
| `batch_create_from_file(template_id, file_path, external_id_column, on_conflict?, dry_run?)` | Batch create from a CSV file on disk |
| `get_batch_job_status(job_id)` | Check progress of an async batch job |
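As noted above, `batch_create_characters` serializes in-memory rows to CSV before uploading. A rough standard-library sketch of that conversion — illustrative only; the SDK's actual implementation may differ:

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize a list of dicts to CSV text, using the keys of the
    first row as the header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = rows_to_csv([
    {"external_id": "user-1", "name": "Alice"},
    {"external_id": "user-2", "name": "Bob"},
])
```

The column named by `external_id_column` (here `external_id`) is what the batch endpoint uses to match existing characters when `on_conflict` applies.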
### Agent config

| Method | Description |
|---|---|
| `get_agent_config(character_id)` | Get agent configuration |
| `set_agent_config(character_id, **config)` | Set agent configuration |
| `delete_agent_config(character_id)` | Remove agent configuration |
| `create_agent_config_template(name, config)` | Create a reusable agent config template |
| `list_agent_config_templates()` | List all agent config templates |
### Ask

| Method | Description |
|---|---|
| `ask(character_id, question, context=None)` | Ask a question using the character's memory as context |
### Context assembly

| Method | Description |
|---|---|
| `assemble_context(character_id, query, strategy="default", max_tokens=2000)` | Assemble memories into an LLM-ready context block |
## Error handling

All SDK errors extend `HippoDidError`:

```python
from hippodid.errors import (
    HippoDidError,
    NotFoundError,
    AuthenticationError,
    RateLimitError,
    ValidationError,
)

try:
    client.get_character("nonexistent-id")
except NotFoundError as e:
    print(f"Not found: {e.message}")
except AuthenticationError:
    print("Check your API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except ValidationError as e:
    print(f"Invalid input: {e.errors}")
except HippoDidError as e:
    print(f"API error {e.status_code}: {e.message}")
```
| Exception | HTTP status | When |
|---|---|---|
| `AuthenticationError` | 401 | Invalid or missing API key |
| `NotFoundError` | 404 | Character or memory not found |
| `ValidationError` | 400 | Request body fails validation |
| `RateLimitError` | 429 | Tier rate limit or quota exceeded |
| `HippoDidError` | any | Base class for all API errors |
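Because the client's `max_retries` covers only 5xx and network errors, 429s surface to your code as `RateLimitError`. One way to honor `retry_after` is a small generic wrapper — our own helper, not part of the SDK:

```python
import time

def with_retry(fn, retryable, max_attempts=3, base_delay=1.0):
    """Call fn(), retrying when a retryable exception is raised.

    Honors a server-provided `retry_after` attribute when present;
    otherwise backs off exponentially.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable as e:
            if attempt == max_attempts - 1:
                raise
            delay = getattr(e, "retry_after", None)
            if delay is None:
                delay = base_delay * 2 ** attempt
            time.sleep(delay)
```

Usage: `result = with_retry(lambda: client.search(char_id, query="prefs"), RateLimitError)`.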
## LangChain integration

The Python SDK works well with LangChain for building memory-augmented agents. See the dedicated guide for setup instructions: