# TypeScript SDK
Add persistent memory to your AI agents with a fully typed TypeScript client.
## Installation

```bash
npm install @hippodid/sdk
```

Requires Node.js 18+. The SDK uses the built-in `fetch` API, so it has zero external dependencies.
## Importing

### ESM

```typescript
import { HippoDid } from "@hippodid/sdk";
```

### CommonJS

```typescript
const { HippoDid } = require("@hippodid/sdk");
```
## Quick start

```typescript
import { HippoDid } from "@hippodid/sdk";

const client = new HippoDid({ apiKey: "hd_key_..." });

// Create a character
const character = await client.createCharacter({
  name: "My Agent",
  categoryPreset: "developer",
});

// Store a memory
const memories = await client.addMemory(character.id, {
  content: "User prefers dark mode and vim keybindings",
});
console.log(`Stored: ${memories[0].id} [${memories[0].category}]`);

// Search memories
const results = await client.searchMemories(character.id, {
  query: "UI preferences",
  topK: 5,
});
for (const m of results) {
  console.log(`[${m.score.toFixed(2)}] ${m.content}`);
}

// Assemble context for an LLM prompt
const context = await client.assembleContext(character.id, "What does the user prefer?");
console.log(context.formattedPrompt);
```
## Configuration

```typescript
const client = new HippoDid({
  apiKey: "hd_key_...",                 // Required
  baseUrl: "https://api.hippodid.com",  // Optional, default shown
  maxRetries: 3,                        // Optional, retry count (default: 3)
  retryDelay: 1000,                     // Optional, base delay in ms (default: 1000)
});
```

| Parameter | Required | Default | Description |
|---|---|---|---|
| `apiKey` | Yes | – | Your HippoDid API key (`hd_key_...`) |
| `baseUrl` | No | `https://api.hippodid.com` | API base URL |
| `maxRetries` | No | `3` | Automatic retries on 5xx and network errors |
| `retryDelay` | No | `1000` | Base delay between retries in milliseconds (exponential backoff) |
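The table above does not pin down the exact retry schedule beyond "exponential backoff". A common interpretation doubles the base delay on each successive attempt; the sketch below shows that schedule as an illustration of how the defaults would behave, not as the SDK's exact timing (real implementations often add jitter as well):

```typescript
// Hypothetical backoff schedule: retryDelay * 2^(attempt - 1).
// This is an assumption about the SDK's "exponential backoff", not its exact timing.
function backoffDelayMs(retryDelay: number, attempt: number): number {
  return retryDelay * 2 ** (attempt - 1);
}

// With the defaults (retryDelay: 1000, maxRetries: 3), the waits would be:
const delays = [1, 2, 3].map((attempt) => backoffDelayMs(1000, attempt));
console.log(delays); // [1000, 2000, 4000]
```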
## Context assembly

`assembleContext()` retrieves and formats memories into a single text block ready to inject into an LLM system prompt. Choose a strategy to control which memories are selected and how they are organized.

```typescript
const context = await client.assembleContext(characterId, "What does the user prefer?", {
  strategy: "default",
  topK: 10,
});

console.log(context.formattedPrompt); // Ready-to-use LLM prompt
console.log(context.tokenEstimate);   // Approximate token count
```
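`tokenEstimate` is an approximation. If you need to budget tokens on your side before sending the assembled prompt to a model, a common rule of thumb for English text is roughly four characters per token. This is a heuristic of our own, not necessarily the estimator the SDK uses:

```typescript
// Rough token budgeting heuristic (~4 characters per token for English text).
// Illustration only; the SDK's tokenEstimate may be computed differently.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("User prefers dark mode and vim keybindings")); // 11
```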
### Assembly strategies

#### default

Balanced mix of recent and relevant memories, grouped by category.

```typescript
const ctx = await client.assembleContext(charId, "help the user", {
  strategy: "default",
});
```

#### conversational

Optimized for chat. Prioritizes recent interactions, user preferences, and relationship context.

```typescript
const ctx = await client.assembleContext(charId, "continue our conversation about the project", {
  strategy: "conversational",
  topK: 15,
});
```

#### task_focused

Prioritizes skills, decisions, and goal-related memories. Best for agents executing tasks.

```typescript
const ctx = await client.assembleContext(charId, "deploy the staging environment", {
  strategy: "task_focused",
});
```

#### concierge

Designed for customer-facing agents. Emphasizes preferences, past interactions, and service history.

```typescript
const ctx = await client.assembleContext(charId, "help the customer with their order", {
  strategy: "concierge",
});
```

#### matching

Pure semantic similarity. Returns the most relevant memories with no category grouping.

```typescript
const ctx = await client.assembleContext(charId, "PostgreSQL configuration", {
  strategy: "matching",
  topK: 5,
});
```
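If your application runs several kinds of agents, it can help to pick the strategy in one place rather than scattering string literals. This helper is application-side convenience, not part of the SDK; only the five strategy names come from the API, while the role names on the left are our own examples:

```typescript
// The five assembly strategies documented above.
type AssemblyStrategy = "default" | "conversational" | "task_focused" | "concierge" | "matching";

// Hypothetical application-level roles mapped onto strategies.
const STRATEGY_BY_ROLE: Record<"chat" | "worker" | "support" | "search", AssemblyStrategy> = {
  chat: "conversational",
  worker: "task_focused",
  support: "concierge",
  search: "matching",
};

console.log(STRATEGY_BY_ROLE.support); // "concierge"
```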
## API reference

### Characters

| Method | Description |
|---|---|
| `createCharacter({ name, categoryPreset?, externalId? })` | Create a new character |
| `getCharacter(characterId)` | Get a character by ID |
| `getCharacterByExternalId(externalId)` | Look up a character by external ID |
| `listCharacters({ page?, limit? })` | List all characters for the tenant |
| `updateCharacter(characterId, fields)` | Update character profile fields |
| `deleteCharacter(characterId)` | Archive/delete a character |
| `cloneCharacter(characterId, { name, externalId?, copyTags? })` | Clone a character |
| `setMemoryMode(characterId, mode)` | Set memory mode (`EXTRACTED`, `VERBATIM`, `HYBRID`) — set after creation, not at create time |
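A common pattern with `externalId` is get-or-create: look the character up by your own identifier and create it on a miss. The sketch below assumes a miss surfaces as the SDK's `NotFoundError` (consistent with the error table later in this document); to stay self-contained it types the client structurally and checks the error's `name` property, where real application code would use `instanceof NotFoundError`:

```typescript
// Structural interface covering only the two calls this pattern needs.
interface CharacterClient {
  getCharacterByExternalId(externalId: string): Promise<{ id: string }>;
  createCharacter(opts: { name: string; externalId?: string }): Promise<{ id: string }>;
}

// Get-or-create by external ID. The name check stands in for
// `err instanceof NotFoundError` so this sketch runs without the SDK installed.
async function ensureCharacter(client: CharacterClient, externalId: string, name: string) {
  try {
    return await client.getCharacterByExternalId(externalId);
  } catch (err) {
    if (err instanceof Error && err.name === "NotFoundError") {
      return client.createCharacter({ name, externalId });
    }
    throw err;
  }
}
```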
### Memories

| Method | Description |
|---|---|
| `addMemory(characterId, { content, sourceType? })` | Add a memory with AI extraction |
| `addMemoryDirect(characterId, { content, category, salience?, visibility? })` | Add a memory directly (no AI) |
| `searchMemories(characterId, { query, topK?, categories? })` | Semantic search across memories |
| `getMemories(characterId, { page?, limit?, category? })` | List memories with optional filters |
| `deleteMemory(characterId, memoryId)` | Delete a memory |
| `updateMemory(characterId, memoryId, { content, salience? })` | Update a memory |
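When feeding long transcripts into `addMemory`, you may want to split the text into smaller pieces first so each call gets a focused chunk (this document does not state a content length limit, so the threshold here is your own choice). A generic splitter, nothing SDK-specific:

```typescript
// Split long text into chunks of at most maxLen characters, breaking on
// sentence boundaries where possible. A single sentence longer than maxLen
// is kept whole rather than cut mid-sentence.
function chunkContent(text: string, maxLen: number): string[] {
  const sentences = text.split(/(?<=[.!?])\s+/);
  const chunks: string[] = [];
  let current = "";
  for (const s of sentences) {
    if (current && current.length + s.length + 1 > maxLen) {
      chunks.push(current);
      current = s;
    } else {
      current = current ? `${current} ${s}` : s;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk can then be passed to `addMemory` in its own call.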
### Categories

| Method | Description |
|---|---|
| `listCategories(characterId)` | List all categories for a character |
| `addCategory(characterId, { name, description?, importance? })` | Add a custom category |
### Tags

| Method | Description |
|---|---|
| `listTags(characterId)` | List all tags on a character |
| `addTags(characterId, tags)` | Add tags to a character |
| `replaceTags(characterId, tags)` | Replace all tags on a character |
| `removeTag(characterId, tag)` | Remove a single tag |
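`addTags` and `replaceTags` cover most cases. If you instead want to sync tags incrementally (add what's missing, remove what's stale, leave the rest untouched), a small diff helper on your side does the bookkeeping; this is application logic, not an SDK feature:

```typescript
// Compute which tags to add and which to remove to turn `current` into `desired`.
function diffTags(current: string[], desired: string[]): { toAdd: string[]; toRemove: string[] } {
  const cur = new Set(current);
  const want = new Set(desired);
  return {
    toAdd: desired.filter((t) => !cur.has(t)),
    toRemove: current.filter((t) => !want.has(t)),
  };
}

const { toAdd, toRemove } = diffTags(["vip", "beta"], ["vip", "enterprise"]);
console.log(toAdd, toRemove); // ["enterprise"] ["beta"]
```

The results map directly onto `addTags(characterId, toAdd)` plus one `removeTag` call per entry in `toRemove`.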
### Templates

| Method | Description |
|---|---|
| `createCharacterTemplate({ name, config })` | Create a character template |
| `listCharacterTemplates()` | List all templates |
| `getCharacterTemplate(templateId)` | Get a template by ID |
| `previewCharacterTemplate(templateId, sampleRow)` | Preview a character from this template |
| `cloneCharacterTemplate(templateId)` | Clone an existing template |
### Batch operations

| Method | Description |
|---|---|
| `batchCreateCharacters({ templateId, rows, externalIdColumn, onConflict?, dryRun? })` | Batch create characters. Accepts rows as JSON objects; the SDK converts them to CSV and uploads via `multipart/form-data` internally. |
| `getBatchJobStatus(jobId)` | Check progress of an async batch job |
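Because batch creation is asynchronous, you typically poll `getBatchJobStatus` until the job settles. The exact response shape is not documented here, so the sketch below assumes a `status` field that eventually reads `"completed"` or `"failed"`; the poller takes the status call as a plain function so it stays self-contained:

```typescript
// Poll an async job until it settles or a timeout elapses.
// The shape { status: string } is an assumption about getBatchJobStatus's payload.
async function pollUntilSettled(
  getStatus: () => Promise<{ status: string }>,
  { intervalMs = 2000, timeoutMs = 120_000 } = {},
): Promise<{ status: string }> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const job = await getStatus();
    if (job.status === "completed" || job.status === "failed") return job;
    if (Date.now() >= deadline) throw new Error("Batch job polling timed out");
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```

Usage would look like `await pollUntilSettled(() => client.getBatchJobStatus(jobId))`.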
### Agent config

| Method | Description |
|---|---|
| `getAgentConfig(characterId)` | Get agent configuration |
| `setAgentConfig(characterId, config)` | Set agent configuration |
| `deleteAgentConfig(characterId)` | Remove agent configuration |
| `createAgentConfigTemplate(name, config)` | Create a reusable agent config template |
| `listAgentConfigTemplates()` | List all agent config templates |
| `getAgentConfigTemplate(templateId)` | Get an agent config template by ID |
| `updateAgentConfigTemplate(templateId, name, config)` | Update an agent config template |
| `deleteAgentConfigTemplate(templateId)` | Delete an agent config template |
### Profile

| Method | Description |
|---|---|
| `updateProfile(characterId, { systemPrompt?, personality?, background?, rules? })` | Update the character’s profile |
### Context assembly

| Method | Description |
|---|---|
| `assembleContext(characterId, query, { strategy?, topK?, categories?, includeProfile?, includeConfig? })` | Assemble memories into an LLM-ready context block (client-side) |
## Error handling

All SDK errors extend `HippoDidError`:

```typescript
import {
  HippoDidError,
  NotFoundError,
  AuthenticationError,
  RateLimitError,
  ValidationError,
} from "@hippodid/sdk";

try {
  await client.getCharacter("nonexistent-id");
} catch (error) {
  if (error instanceof NotFoundError) {
    console.log(`Not found: ${error.message}`);
  } else if (error instanceof AuthenticationError) {
    console.log("Check your API key");
  } else if (error instanceof RateLimitError) {
    console.log(`Rate limited. Retry after ${error.retryAfterMs}ms`);
  } else if (error instanceof ValidationError) {
    console.log(`Invalid input: ${error.message}`);
  } else if (error instanceof HippoDidError) {
    console.log(`API error ${error.status}: ${error.message}`);
  }
}
```
| Error class | HTTP status | When |
|---|---|---|
| `AuthenticationError` | 401, 403 | Invalid or missing API key |
| `NotFoundError` | 404 | Character or memory not found |
| `ValidationError` | 400, 422 | Request body fails validation |
| `RateLimitError` | 429 | Tier rate limit or quota exceeded |
| `HippoDidError` | any | Base class for all API errors |
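`RateLimitError.retryAfterMs` can be fed straight into a retry wrapper. To keep the sketch runnable without the SDK installed, it duck-types the error on its `retryAfterMs` property; in application code you would check `err instanceof RateLimitError` instead:

```typescript
// Retry a call when it fails with a rate-limit error carrying retryAfterMs.
// The duck-typed property check stands in for `err instanceof RateLimitError`.
async function withRateLimitRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retryAfterMs = (err as { retryAfterMs?: number }).retryAfterMs;
      if (typeof retryAfterMs !== "number" || attempt >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, retryAfterMs));
    }
  }
}
```

Usage would look like `await withRateLimitRetry(() => client.getCharacter(id))`. Note that the client already retries 5xx and network errors on its own (`maxRetries`); this wrapper only covers the 429 case.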
## Vercel / Next.js deployment

The SDK uses the native `fetch` API with zero dependencies, so it works in all Vercel runtimes, including Edge:

```typescript
// app/api/recall/route.ts (Next.js App Router, Edge-compatible)
import { HippoDid } from "@hippodid/sdk";

export const runtime = "edge";

const client = new HippoDid({ apiKey: process.env.HIPPODID_API_KEY! });

export async function POST(request: Request) {
  const { characterId, query } = await request.json();
  const results = await client.searchMemories(characterId, { query, topK: 5 });
  return Response.json(results);
}
```
The SDK is fetch-based with no Node.js-specific APIs, so it works in Edge Runtime, Cloudflare Workers, Deno, and Bun without modification.
## Vercel AI SDK integration

Use `assembleContext()` to inject persistent memory into Vercel AI SDK `streamText` calls:

```typescript
import { HippoDid } from "@hippodid/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const hippodid = new HippoDid({ apiKey: process.env.HIPPODID_API_KEY! });

export async function POST(request: Request) {
  const { characterId, messages } = await request.json();
  const lastMessage = messages[messages.length - 1].content;

  // Assemble memory context from the user's latest message
  const memoryContext = await hippodid.assembleContext(characterId, lastMessage, {
    strategy: "conversational",
    topK: 15,
  });

  const result = streamText({
    model: openai("gpt-4o"),
    system: `You are a helpful assistant with persistent memory.
Here is what you remember about this user:
${memoryContext.formattedPrompt}`,
    messages,
  });

  return result.toDataStreamResponse();
}
```

This pattern works with any model provider supported by the Vercel AI SDK (OpenAI, Anthropic, Google, etc.).