Jarvis-like Memory for OpenClaw

Multi-Layer Memory System with Qdrant + Redis

45-60 min · Intermediate · Free

What You Will Build

A three-layer memory system that never forgets:

- **Redis Buffer** — Short-term memory — Instant retrieval — `10.0.0.36:6379`
- **Markdown Logs** — Human-readable — Kept forever — `~/.openclaw/workspace/memory/`
- **Qdrant Vector DB** — Semantic search — Kept forever — `10.0.0.40:6333`

Key Feature: User-Centric Memory (Mem0-style)

Memories belong to the user, not the session. Search across ALL your conversations, not just the current chat.
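The difference shows up in the filter sent to Qdrant at search time. A sketch (`chat-123` and `alice` are placeholder IDs; the payload fields match the schema defined in Step 3):

```python
# Session-scoped memory: only the current chat is searchable.
session_filter = {"must": [{"key": "conversation_id", "match": {"value": "chat-123"}}]}

# User-scoped memory (Mem0-style): every conversation "alice" ever had is searchable.
user_filter = {"must": [{"key": "user_id", "match": {"value": "alice"}}]}
```

Because every point carries both fields, you can still narrow a search to one conversation when you want to — the user-level filter is simply the default.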

Step 1: Install Dependencies

1.1 Install Qdrant (Vector Database)

```bash
mkdir -p ~/docker/qdrant && cd ~/docker/qdrant

cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  qdrant:
    image: qdrant/qdrant:latest
    container_name: qdrant
    ports:
      - "6333:6333"
      - "6334:6334"
    volumes:
      - ./qdrant_storage:/qdrant/storage
    restart: unless-stopped
EOF

docker-compose up -d
```

Test:

```bash
curl http://localhost:6333/healthz
# {"status":"ok"}
```

1.2 Install Redis (Short-term Buffer)

```bash
docker run -d \
  --name redis \
  -p 6379:6379 \
  redis:latest \
  redis-server --appendonly yes
```

Test:

```bash
redis-cli ping
# PONG
```

1.3 Pull Embedding Model

ollama pull snowflake-arctic-embed2

This model creates 1024-dim vectors for semantic search.
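Before creating the Qdrant collection, it is worth confirming the dimension yourself. A minimal sketch, assuming Ollama's OpenAI-compatible endpoint is running on `localhost:11434`:

```python
import json, urllib.request

OLLAMA_URL = "http://localhost:11434/v1"  # assumed local Ollama instance

def build_embed_request(text: str, model: str = "snowflake-arctic-embed2") -> dict:
    """Request body for Ollama's OpenAI-compatible /embeddings endpoint."""
    return {"model": model, "input": text[:8192]}  # truncate long inputs

def embed(text: str) -> list:
    """Return one embedding vector for `text`."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/embeddings",
        data=json.dumps(build_embed_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["data"][0]["embedding"]

if __name__ == "__main__":
    # With the model pulled, this should report 1024.
    print(len(embed("dimension check")))
```

If the printed dimension differs from the collection's `size` (set in Step 8), every upsert will be rejected, so this one-line check can save some debugging.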

Step 2: Create Memory Skill

```bash
mkdir -p ~/.openclaw/workspace/skills/qdrant-memory/scripts
mkdir -p ~/.openclaw/workspace/skills/mem-redis/scripts
mkdir -p ~/.openclaw/workspace/memory
```

Three directories: Qdrant scripts, Redis scripts, and memory logs.

Step 3: Create Storage Script

File: ~/.openclaw/workspace/skills/qdrant-memory/scripts/auto_store.py

```python
#!/usr/bin/env python3
"""
Store conversation turns to Qdrant (Mem0-style).
Each turn creates 3 embeddings: user message, AI response, summary.
"""
import argparse, hashlib, json, os, sys, urllib.request, uuid
from datetime import datetime
from typing import List, Optional, Dict, Any

QDRANT_URL = "http://localhost:6333"
COLLECTION_NAME = "openclaw_memories"
OLLAMA_URL = "http://localhost:11434/v1"

def get_embedding(text: str) -> Optional[List[float]]:
    """Generate embedding using snowflake-arctic-embed2"""
    data = json.dumps({
        "model": "snowflake-arctic-embed2",
        "input": text[:8192]
    }).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/embeddings",
        data=data,
        headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as response:
            result = json.loads(response.read().decode())
            return result["data"][0]["embedding"]  # Ollama format
    except Exception as e:
        print(f"Embedding error: {e}", file=sys.stderr)
        return None
```

Note: Ollama returns {"data": [{"embedding": [...]}]} format.

3.1 Storage Function

```python
def store_memory_point(user_id, text, speaker, date_str,
                       conversation_id, turn_number, tags):
    """Store a single memory point to Qdrant"""
    embedding = get_embedding(text)
    if not embedding:
        return None

    point_id = str(uuid.uuid4())
    payload = {
        "user_id": user_id,          # Mem0-style: persistent user ID
        "text": text,
        "date": date_str,
        "tags": tags,
        "source_type": speaker,      # "user" or "assistant"
        "conversation_id": conversation_id,
        "turn_number": turn_number,
        "created_at": datetime.now().isoformat()
    }

    # Qdrant upsert format
    upsert_data = {
        "points": [{
            "id": point_id,
            "vector": embedding,
            "payload": payload
        }]
    }
    req = urllib.request.Request(
        f"{QDRANT_URL}/collections/{COLLECTION_NAME}/points?wait=true",
        data=json.dumps(upsert_data).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT"
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as response:
            result = json.loads(response.read().decode())
            return point_id if result.get("status") == "ok" else None
    except Exception as e:
        print(f"Storage error: {e}", file=sys.stderr)
        return None
```
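The script's command-line entry point is not shown above. A hypothetical `main()` consistent with the test invocation in Step 8 (`auto_store.py "<user msg>" "<ai reply>" --user-id test --turn 1`) might look like this; it assumes `store_memory_point` from the script, and the flag names are inferred, not confirmed:

```python
import argparse

def parse_args(argv):
    """Hypothetical CLI for auto_store.py, matching the Step 8 invocation."""
    parser = argparse.ArgumentParser(description="Store one conversation turn")
    parser.add_argument("user_message")
    parser.add_argument("ai_response")
    parser.add_argument("--user-id", dest="user_id", default="user")
    parser.add_argument("--turn", type=int, default=0)
    parser.add_argument("--tags", nargs="*", default=[])
    return parser.parse_args(argv)

def main(argv=None):
    import sys, uuid
    from datetime import datetime
    args = parse_args(argv if argv is not None else sys.argv[1:])
    date_str = datetime.now().strftime("%Y-%m-%d")
    conversation_id = str(uuid.uuid4())  # one ID shared by both points
    # Store both sides of the turn under the same conversation_id.
    for speaker, text in (("user", args.user_message),
                          ("assistant", args.ai_response)):
        point_id = store_memory_point(args.user_id, text, speaker, date_str,
                                      conversation_id, args.turn, args.tags)
        print(f"{speaker}: {'stored ' + point_id if point_id else 'FAILED'}")
```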

Step 4: Create Retrieval Script

File: ~/.openclaw/workspace/skills/qdrant-memory/scripts/get_conversation_context.py

```python
#!/usr/bin/env python3
"""Mem0-style retrieval: Search by user_id across ALL conversations."""
import argparse, json, sys, urllib.request
from typing import List, Optional

QDRANT_URL = "http://localhost:6333"
COLLECTION_NAME = "openclaw_memories"
OLLAMA_URL = "http://localhost:11434/v1"

def search_user_memories(user_id: str, query: str, limit: int = 10):
    """Search memories for a specific user across all conversations."""
    # Get embedding for query
    data = json.dumps({"model": "snowflake-arctic-embed2", "input": query[:8192]})
    req = urllib.request.Request(
        f"{OLLAMA_URL}/embeddings",
        data=data.encode(),
        headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        embedding = json.loads(resp.read())["data"][0]["embedding"]

    # Search WITH user_id filter (Mem0-style)
    search_data = json.dumps({
        "vector": embedding,
        "limit": limit,
        "with_payload": True,
        "filter": {
            "must": [
                {"key": "user_id", "match": {"value": user_id}}
            ]
        }
    }).encode()
    req = urllib.request.Request(
        f"{QDRANT_URL}/collections/{COLLECTION_NAME}/points/search",
        data=search_data,
        headers={"Content-Type": "application/json"},
        method="POST"
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read()).get("result", [])
```
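The raw hits still need to be folded into the model's context. A minimal formatter might look like the following; `format_context` is a sketch, not part of the script above, and the payload fields it reads mirror the schema from Step 3:

```python
def format_context(results, max_items=5):
    """Render Qdrant search hits into lines suitable for a system prompt.

    `results` is the list returned by search_user_memories(): each hit is a
    dict with a "payload" carrying date, source_type, and text.
    """
    lines = []
    for hit in results[:max_items]:
        p = hit.get("payload", {})
        lines.append(
            f"[{p.get('date', '?')}] {p.get('source_type', '?')}: {p.get('text', '')}"
        )
    return "\n".join(lines)
```

Usage would then be something like `format_context(search_user_memories("alice", "what do I drink?"))`, with the returned block prepended to the conversation.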

Step 5: Create Redis Buffer Script

File: ~/.openclaw/workspace/skills/mem-redis/scripts/hb_append.py

```python
#!/usr/bin/env python3
"""Heartbeat: Append new turns to Redis short-term buffer."""
import os, sys, json, redis
from datetime import datetime, timezone
from pathlib import Path

REDIS_HOST = os.getenv("REDIS_HOST", "localhost")
USER_ID = os.getenv("USER_ID", "user")
STATE_FILE = Path("/root/.openclaw/workspace/.mem_last_turn")

def get_session_transcript():
    """Find current OpenClaw session JSONL file."""
    sessions_dir = Path("/root/.openclaw/agents/main/sessions")
    files = list(sessions_dir.glob("*.jsonl"))
    return max(files, key=lambda p: p.stat().st_mtime) if files else None

def parse_turns_since(last_turn_num):
    """Extract conversation turns since last processed."""
    # ... parses OpenClaw session JSONL ...
    # Returns list of turn dicts with role, content, timestamp
    pass

def main():
    r = redis.Redis(host=REDIS_HOST, port=6379, decode_responses=True)
    last_turn = int(STATE_FILE.read_text().strip()) if STATE_FILE.exists() else 0
    new_turns = parse_turns_since(last_turn)
    if not new_turns:
        print(f"No new turns since turn {last_turn}")
        sys.exit(0)
    key = f"mem:{USER_ID}"
    for turn in new_turns:
        r.lpush(key, json.dumps(turn))
    STATE_FILE.write_text(str(max(t['turn'] for t in new_turns)))
    print(f"✅ Appended {len(new_turns)} turns to Redis")
```

Step 6: Daily Backup Script

File: ~/.openclaw/workspace/skills/mem-redis/scripts/cron_backup.py

```python
#!/usr/bin/env python3
"""Daily: Flush Redis → Qdrant, then clear Redis."""
import os, sys, json, redis, urllib.request
from datetime import datetime

QDRANT_URL = "http://localhost:6333"
OLLAMA_URL = "http://localhost:11434/v1"
COLLECTION = "openclaw_memories"
REDIS_HOST = os.getenv("REDIS_HOST", "localhost")
USER_ID = os.getenv("USER_ID", "user")  # must match the key hb_append.py writes

def get_embedding(text):
    data = json.dumps({"model": "snowflake-arctic-embed2", "input": text[:8192]})
    req = urllib.request.Request(
        f"{OLLAMA_URL}/embeddings",
        data=data.encode(),
        headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["data"][0]["embedding"]

def main():
    r = redis.Redis(host=REDIS_HOST, port=6379, decode_responses=True)
    key = f"mem:{USER_ID}"
    items = r.lrange(key, 0, -1)
    print(f"Backing up {len(items)} items...")
    for item_json in items:
        item = json.loads(item_json)
        text = f"{item.get('role')}: {item.get('content')}"
        vector = get_embedding(text)
        # Store to Qdrant...
    r.delete(key)
    print("✅ Backup complete, Redis cleared")
```

Note: the Redis key is derived from `USER_ID` so it matches whatever `hb_append.py` wrote; a hardcoded `mem:user` would silently back up nothing if `USER_ID` is set.
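The `# Store to Qdrant...` step is elided above. It can follow the same upsert shape as Step 3; in the sketch below, `build_point` and `upsert_points` are hypothetical helpers, and the payload fields are assumptions mirroring `auto_store.py`:

```python
import json, urllib.request, uuid
from datetime import datetime

QDRANT_URL = "http://localhost:6333"
COLLECTION = "openclaw_memories"

def build_point(vector, item, user_id="user"):
    """One Qdrant point for a flushed Redis turn (payload mirrors Step 3)."""
    return {
        "id": str(uuid.uuid4()),
        "vector": vector,
        "payload": {
            "user_id": user_id,
            "text": f"{item.get('role')}: {item.get('content')}",
            "source_type": item.get("role"),
            "created_at": datetime.now().isoformat(),
        },
    }

def upsert_points(points):
    """PUT the points to Qdrant; returns True when the API reports ok."""
    req = urllib.request.Request(
        f"{QDRANT_URL}/collections/{COLLECTION}/points?wait=true",
        data=json.dumps({"points": points}).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read()).get("status") == "ok"
```

Inside the backup loop this would become `upsert_points([build_point(vector, item)])`, or, better, the points could be collected and upserted in one batch at the end.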

Step 7: Configure OpenClaw

7.1 Create SKILL.md

File: ~/.openclaw/workspace/skills/qdrant-memory/SKILL.md

```markdown
---
name: qdrant-memory
description: Full conversation memory storage to Qdrant (Mem0-style).
---

# Qdrant Memory - Mem0-Style Usage

## Quick Commands
- `save q` → Store conversation to Qdrant
- `search q topic` → Search memories

## Configuration
- Qdrant: http://localhost:6333
- Ollama: http://localhost:11434
- Model: snowflake-arctic-embed2 (1024 dims)
```

7.2 Add Cron Job

```bash
crontab -e

# Add daily backup at 3:00 AM
0 3 * * * /usr/bin/python3 ~/.openclaw/workspace/skills/mem-redis/scripts/cron_backup.py
```

Step 8: Testing

Test Redis

```bash
redis-cli ping                               # PONG
redis-cli LPUSH mem:test '{"msg":"hello"}'
redis-cli LRANGE mem:test 0 -1               # See the message
redis-cli DEL mem:test
```

Test Qdrant

```bash
curl http://localhost:6333/healthz
# {"status":"ok"}

# Create collection (1024 dims for snowflake-arctic-embed2)
curl -X PUT http://localhost:6333/collections/openclaw_memories \
  -H "Content-Type: application/json" \
  -d '{"vectors":{"size":1024,"distance":"Cosine"}}'
```

Test Memory Storage

```bash
cd ~/.openclaw/workspace/skills/qdrant-memory/scripts
python3 auto_store.py "What about Qdrant?" "Qdrant is a vector database..." \
  --user-id test --turn 1
```

Usage Guide

| Command | Purpose |
| --- | --- |
| `save q` | Store current conversation to Qdrant |
| `search q topic` | Search all memories by topic |
| `save mem` | Save to Redis + Markdown immediately |
| `redis-cli LLEN mem:user` | Check Redis buffer size |

Troubleshooting

Redis won't connect

```bash
docker restart redis
docker logs redis
```

Qdrant won't connect

```bash
cd ~/docker/qdrant && docker-compose restart
curl http://localhost:6333/healthz
```

Embeddings fail

```bash
ollama list                            # Check model exists
ollama pull snowflake-arctic-embed2    # Pull if missing
```

Video Tutorial

Watch the complete walkthrough on YouTube:
