{"id":"8f534613-a7ea-486c-80ea-45736f795b3b","shortId":"47Nhhh","kind":"skill","title":"Agent Memory Systems","tagline":"Antigravity Awesome Skills skill by Sickn33","description":"# Agent Memory Systems\n\nMemory is the cornerstone of intelligent agents. Without it, every interaction\nstarts from zero. This skill covers the architecture of agent memory: short-term\n(context window), long-term (vector stores), and the cognitive architectures\nthat organize them.\n\nKey insight: Memory isn't just storage - it's retrieval. A million stored facts\nmean nothing if you can't find the right one. Chunking, embedding, and retrieval\nstrategies determine whether your agent remembers or forgets.\n\nThe field is fragmented with inconsistent terminology. We use the CoALA cognitive\narchitecture framework: semantic memory (facts), episodic memory (experiences),\nand procedural memory (how-to knowledge).\n\n## Principles\n\n- Memory quality = retrieval quality, not storage quantity\n- Chunk for retrieval, not for storage\n- Context isolation is the enemy of memory\n- Right memory type for right information\n- Decay old memories - not everything should be forever\n- Test retrieval accuracy before production\n- Background memory formation beats real-time\n\n## Capabilities\n\n- agent-memory\n- long-term-memory\n- short-term-memory\n- working-memory\n- episodic-memory\n- semantic-memory\n- procedural-memory\n- memory-retrieval\n- memory-formation\n- memory-decay\n\n## Scope\n\n- vector-database-operations → data-engineer\n- rag-pipeline-architecture → llm-architect\n- embedding-model-selection → ml-engineer\n- knowledge-graph-design → knowledge-engineer\n\n## Tooling\n\n### Memory_frameworks\n\n- LangMem (LangChain) - When: LangGraph agents with persistent memory Note: Semantic, episodic, procedural memory types\n- MemGPT / Letta - When: Virtual context management, OS-style memory Note: Hierarchical memory tiers, automatic paging\n- Mem0 - When: User memory layer for 
personalization Note: Designed for user preferences and history\n\n### Vector_stores\n\n- Pinecone - When: Managed, enterprise-scale (billions of vectors) Note: Best query performance, highest cost\n- Qdrant - When: Complex metadata filtering, open-source Note: Rust-based, excellent filtering\n- Weaviate - When: Hybrid search, knowledge graph features Note: GraphQL interface, good for relationships\n- ChromaDB - When: Prototyping, small/medium apps Note: Developer-friendly, ~20ms p50 at 100K vectors\n- pgvector - When: Already using PostgreSQL, simpler setup Note: Good for <1M vectors, familiar tooling\n\n### Embedding_models\n\n- OpenAI text-embedding-3-large - When: Best quality, 3072 dimensions Note: $0.13/1M tokens\n- OpenAI text-embedding-3-small - When: Good balance, 1536 dimensions Note: $0.02/1M tokens, 5x cheaper\n- nomic-embed-text-v1.5 - When: Open-source, local deployment Note: 768 dimensions, good quality\n- all-MiniLM-L6-v2 - When: Lightweight, fast local embedding Note: 384 dimensions, lowest latency\n\n## Patterns\n\n### Memory Type Architecture\n\nChoosing the right memory type for different information\n\n**When to use**: Designing agent memory system\n\n# MEMORY TYPE ARCHITECTURE (CoALA Framework):\n\n\"\"\"\nThree memory types for different purposes:\n\n1. Semantic Memory: Facts and knowledge\n   - What you know about the world\n   - User preferences, domain knowledge\n   - Stored in profiles (structured) or collections (unstructured)\n\n2. Episodic Memory: Experiences and events\n   - What happened (timestamped events)\n   - Past conversations, task outcomes\n   - Used for learning from experience\n\n3. 
Procedural Memory: How to do things\n   - Rules, skills, workflows\n   - Often implemented as few-shot examples\n   - \"How did I solve this before?\"\n\"\"\"\n\n## LangMem Implementation\n\"\"\"\nimport os\nfrom datetime import datetime\n\nfrom langmem import MemoryStore\nfrom langgraph.graph import StateGraph\n\n# Initialize memory store\nmemory = MemoryStore(\n    connection_string=os.environ[\"POSTGRES_URL\"]\n)\n\n# Semantic memory: user profile\nawait memory.semantic.upsert(\n    namespace=\"user_profile\",\n    key=user_id,\n    content={\n        \"name\": \"Alice\",\n        \"preferences\": [\"dark mode\", \"concise responses\"],\n        \"expertise_level\": \"developer\",\n    }\n)\n\n# Episodic memory: past interaction\nawait memory.episodic.add(\n    namespace=\"conversations\",\n    content={\n        \"timestamp\": datetime.now(),\n        \"summary\": \"Helped debug authentication issue\",\n        \"outcome\": \"resolved\",\n        \"key_insights\": [\"Token expiry was root cause\"],\n    },\n    metadata={\"user_id\": user_id, \"topic\": \"debugging\"}\n)\n\n# Procedural memory: learned pattern\nawait memory.procedural.add(\n    namespace=\"skills\",\n    content={\n        \"task_type\": \"debug_auth\",\n        \"steps\": [\"Check token expiry\", \"Verify refresh flow\"],\n        \"example_interaction\": few_shot_example,\n    }\n)\n\"\"\"\n\n## Memory Retrieval at Runtime\n\"\"\"\nasync def prepare_context(user_id, query):\n    # Get user profile (semantic)\n    profile = await memory.semantic.get(\n        namespace=\"user_profile\",\n        key=user_id\n    )\n\n    # Find relevant past experiences (episodic)\n    similar_experiences = await memory.episodic.search(\n        namespace=\"conversations\",\n        query=query,\n        filter={\"user_id\": user_id},\n        limit=3\n    )\n\n    # Find relevant skills (procedural)\n    relevant_skills = await memory.procedural.search(\n        namespace=\"skills\",\n        query=query,\n        limit=2\n    )\n\n    return {\n       
 \"profile\": profile,\n        \"past_experiences\": similar_experiences,\n        \"relevant_skills\": relevant_skills,\n    }\n\"\"\"\n\n### Vector Store Selection Pattern\n\nChoosing the right vector database for your use case\n\n**When to use**: Setting up persistent memory storage\n\n# VECTOR STORE SELECTION:\n\n\"\"\"\nDecision matrix:\n\n|            | Pinecone | Qdrant | Weaviate | ChromaDB | pgvector |\n|------------|----------|--------|----------|----------|----------|\n| Scale      | Billions | 100M+  | 100M+    | 1M       | 1M       |\n| Managed    | Yes      | Both   | Both     | Self     | Self     |\n| Filtering  | Basic    | Best   | Good     | Basic    | SQL      |\n| Hybrid     | No       | Yes    | Best     | No       | Yes      |\n| Cost       | High     | Medium | Medium   | Free     | Free     |\n| Latency    | 5ms      | 7ms    | 10ms     | 20ms     | 15ms     |\n\"\"\"\n\n## Pinecone (Enterprise Scale)\n\"\"\"\nfrom pinecone import Pinecone\n\npc = Pinecone(api_key=os.environ[\"PINECONE_API_KEY\"])\nindex = pc.Index(\"agent-memory\")\n\n# Upsert with metadata\nindex.upsert(\n    vectors=[\n        {\n            \"id\": f\"memory-{uuid4()}\",\n            \"values\": embedding,\n            \"metadata\": {\n                \"user_id\": user_id,\n                \"timestamp\": datetime.now().isoformat(),\n                \"type\": \"episodic\",\n                \"content\": memory_text,\n            }\n        }\n    ],\n    namespace=namespace\n)\n\n# Query with filter\nresults = index.query(\n    vector=query_embedding,\n    filter={\"user_id\": user_id, \"type\": \"episodic\"},\n    top_k=5,\n    include_metadata=True\n)\n\"\"\"\n\n## Qdrant (Complex Filtering)\n\"\"\"\nfrom qdrant_client import QdrantClient\nfrom qdrant_client.models import PointStruct, Filter, FieldCondition\n\nclient = QdrantClient(url=\"http://localhost:6333\")\n\n# Complex filtering with Qdrant\nresults = client.search(\n    collection_name=\"agent_memory\",\n    
query_vector=query_embedding,\n    query_filter=Filter(\n        must=[\n            FieldCondition(key=\"user_id\", match={\"value\": user_id}),\n            FieldCondition(key=\"type\", match={\"value\": \"semantic\"}),\n        ],\n        should=[\n            FieldCondition(key=\"topic\", match={\"any\": [\"auth\", \"security\"]}),\n        ]\n    ),\n    limit=5\n)\n\"\"\"\n\n## ChromaDB (Prototyping)\n\"\"\"\nimport chromadb\n\nclient = chromadb.PersistentClient(path=\"./memory_db\")\ncollection = client.get_or_create_collection(\"agent_memory\")\n\n# Simple and fast for prototypes\ncollection.add(\n    ids=[str(uuid4())],\n    embeddings=[embedding],\n    documents=[memory_text],\n    metadatas=[{\"user_id\": user_id, \"type\": \"episodic\"}]\n)\n\nresults = collection.query(\n    query_embeddings=[query_embedding],\n    n_results=5,\n    where={\"user_id\": user_id}\n)\n\"\"\"\n\n### Chunking Strategy Pattern\n\nBreaking documents into retrievable chunks\n\n**When to use**: Processing documents for memory storage\n\n# CHUNKING STRATEGIES:\n\n\"\"\"\nThe chunking dilemma:\n- Too large: Vector loses specificity\n- Too small: Loses context\n\nOptimal chunk size depends on:\n- Document type (code vs prose vs data)\n- Query patterns (factual vs exploratory)\n- Embedding model (each has sweet spot)\n\nGeneral guidance: 256-512 tokens for most use cases\n\"\"\"\n\n## Fixed-Size Chunking (Baseline)\n\"\"\"\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n\nsplitter = RecursiveCharacterTextSplitter(\n    chunk_size=500,      # Characters\n    chunk_overlap=50,    # Overlap prevents cutting sentences\n    separators=[\"\\n\\n\", \"\\n\", \". 
\", \" \", \"\"]  # Priority order\n)\n\nchunks = splitter.split_text(document)\n\"\"\"\n\n## Semantic Chunking (Better Quality)\n\"\"\"\nfrom langchain_experimental.text_splitter import SemanticChunker\nfrom langchain_openai import OpenAIEmbeddings\n\n# Splits based on semantic similarity\nsplitter = SemanticChunker(\n    embeddings=OpenAIEmbeddings(),\n    breakpoint_threshold_type=\"percentile\",\n    breakpoint_threshold_amount=95\n)\n\nchunks = splitter.split_text(document)\n\"\"\"\n\n## Structure-Aware Chunking (Documents with Hierarchy)\n\"\"\"\nfrom langchain.text_splitter import MarkdownHeaderTextSplitter\n\n# Respect document structure\nsplitter = MarkdownHeaderTextSplitter(\n    headers_to_split_on=[\n        (\"#\", \"Header 1\"),\n        (\"##\", \"Header 2\"),\n        (\"###\", \"Header 3\"),\n    ]\n)\n\nchunks = splitter.split_text(markdown_doc)\n# Each chunk has header metadata for context\n\"\"\"\n\n## Contextual Chunking (Anthropic's Approach)\n\"\"\"\n# Add context to each chunk before embedding\n# Reduces retrieval failures by 35%\n\ndef add_context_to_chunk(chunk, document_summary):\n    context_prompt = f'''\n    Document summary: {document_summary}\n\n    The following is a chunk from this document:\n    {chunk}\n    '''\n    return context_prompt\n\n# Embed the contextualized chunk, not raw chunk\nfor chunk in chunks:\n    contextualized = add_context_to_chunk(chunk, summary)\n    embedding = embed(contextualized)\n    store(chunk, embedding)  # Store original, embed contextualized\n\"\"\"\n\n## Code-Specific Chunking\n\"\"\"\nfrom langchain.text_splitter import Language, RecursiveCharacterTextSplitter\n\n# Language-aware splitting\npython_splitter = RecursiveCharacterTextSplitter.from_language(\n    language=Language.PYTHON,\n    chunk_size=1000,\n    chunk_overlap=200\n)\n\n# Respects function/class boundaries\nchunks = python_splitter.split_text(python_code)\n\"\"\"\n\n### Background Memory Formation\n\nProcessing memories 
asynchronously for better quality\n\n**When to use**: You want higher recall without slowing interactions\n\n# BACKGROUND MEMORY FORMATION:\n\n\"\"\"\nReal-time memory extraction slows conversations and adds\ncomplexity to agent tool calls. Background processing after\nconversations yields higher quality memories.\n\nPattern: Subconscious memory formation\n\"\"\"\n\n## LangGraph Background Processing\n\"\"\"\nfrom langgraph.graph import StateGraph\nfrom langgraph.checkpoint.postgres import PostgresSaver\n\nasync def background_memory_processor(thread_id: str):\n    # Run after conversation ends or goes idle\n    conversation = await load_conversation(thread_id)\n\n    # Extract insights without time pressure\n    insights = await llm.invoke(f'''\n        Analyze this conversation and extract:\n        1. Key facts learned about the user\n        2. User preferences revealed\n        3. Tasks completed or pending\n        4. Patterns in user behavior\n\n        Be thorough - this runs in background.\n\n        Conversation:\n        {conversation}\n    ''')\n\n    # Store to long-term memory (assumes insights parsed into a list)\n    for insight in insights:\n        await memory.semantic.upsert(\n            namespace=\"user_insights\",\n            key=generate_key(insight),\n            content=insight,\n            metadata={\"source_thread\": thread_id}\n        )\n\n# Trigger on conversation end or idle timeout\n@on_conversation_idle(timeout_minutes=5)\nasync def process_conversation(thread_id):\n    await background_memory_processor(thread_id)\n\"\"\"\n\n## Memory Consolidation (Like Sleep)\n\"\"\"\n# Periodically consolidate and deduplicate memories\n\nasync def consolidate_memories(user_id: str):\n    # Get all memories for user\n    memories = await memory.semantic.list(\n        namespace=\"user_insights\",\n        filter={\"user_id\": user_id}\n    )\n\n    # Find similar memories (potential duplicates)\n    clusters = cluster_by_similarity(memories, threshold=0.9)\n\n    # Merge similar 
memories\n    for cluster in clusters:\n        if len(cluster) > 1:\n            merged = await llm.invoke(f'''\n                Consolidate these related memories into one:\n                {cluster}\n\n                Preserve all important information.\n            ''')\n            await memory.semantic.upsert(\n                namespace=\"user_insights\",\n                key=generate_key(merged),\n                content=merged\n            )\n            # Delete originals\n            for old in cluster:\n                await memory.semantic.delete(old.id)\n\"\"\"\n\n### Memory Decay Pattern\n\nForgetting old, irrelevant memories\n\n**When to use**: Memory grows large, retrieval slows down\n\n# MEMORY DECAY:\n\n\"\"\"\nNot all memories should live forever:\n- Old preferences may be outdated\n- Task details lose relevance\n- Conflicting memories confuse retrieval\n\nImplement intelligent decay based on:\n- Recency (when was it created/accessed?)\n- Frequency (how often is it retrieved?)\n- Importance (is it a core fact or detail?)\n\"\"\"\n\n## Time-Based Decay\n\"\"\"\nfrom datetime import datetime, timedelta\n\nasync def decay_old_memories(namespace: str, max_age_days: int):\n    cutoff = datetime.now() - timedelta(days=max_age_days)\n\n    old_memories = await memory.episodic.list(\n        namespace=namespace,\n        filter={\"last_accessed\": {\"$lt\": cutoff.isoformat()}}\n    )\n\n    for mem in old_memories:\n        # Soft delete (mark as archived)\n        await memory.episodic.update(\n            id=mem.id,\n            metadata={\"archived\": True, \"archived_at\": datetime.now()}\n        )\n\"\"\"\n\n## Utility-Based Decay (MIRIX Approach)\n\"\"\"\ndef calculate_memory_utility(memory):\n    '''\n    Composite utility score inspired by cognitive science:\n    - Recency: When was it last accessed?\n    - Frequency: How often is it accessed?\n    - Importance: How critical is this information?\n    '''\n    now = datetime.now()\n\n    # 
Recency score (exponential decay with 72h half-life)\n    hours_since_access = (now - memory.last_accessed).total_seconds() / 3600\n    recency_score = 0.5 ** (hours_since_access / 72)\n\n    # Frequency score\n    frequency_score = min(memory.access_count / 10, 1.0)\n\n    # Importance (from metadata or heuristic)\n    importance = memory.metadata.get(\"importance\", 0.5)\n\n    # Weighted combination\n    utility = (\n        0.4 * recency_score +\n        0.3 * frequency_score +\n        0.3 * importance\n    )\n\n    return utility\n\nasync def prune_low_utility_memories(threshold=0.2):\n    all_memories = await memory.list_all()\n    for mem in all_memories:\n        if calculate_memory_utility(mem) < threshold:\n            await memory.archive(mem.id)\n\"\"\"\n\n## Sharp Edges\n\n### Chunking Isolates Information From Its Context\n\nSeverity: CRITICAL\n\nSituation: Processing documents for vector storage\n\nSymptoms:\nRetrieval finds chunks but they don't make sense alone. Agent\nanswers miss the big picture. \"The function returns X\" retrieved\nwithout knowing which function. References to \"this\" without\nknowing what \"this\" refers to.\n\nWhy this breaks:\nWhen we chunk for AI processing, we're breaking connections,\nreducing a holistic narrative to isolated fragments that often\nmiss the big picture. 
A chunk about \"the configuration\" without\ncontext about what system is being configured is nearly useless.\n\nRecommended fix:\n\n## Contextual Chunking (Anthropic's approach)\n# Add document context to each chunk before embedding\n# Reduces retrieval failures by 35%\n\ndef contextualize_chunk(chunk, document):\n    summary = summarize(document)\n\n    # LLM generates context for chunk\n    context = llm.invoke(f'''\n        Document summary: {summary}\n\n        Generate a brief context statement for this chunk\n        that would help someone understand what it refers to:\n\n        {chunk}\n    ''')\n\n    return f\"{context}\\n\\n{chunk}\"\n\n# Embed the contextualized version\nfor chunk in chunks:\n    contextualized = contextualize_chunk(chunk, full_doc)\n    embedding = embed(contextualized)\n    # Store original chunk, embed contextualized\n    store(original=chunk, embedding=embedding)\n\n## Hierarchical Chunking\n# Store at multiple granularities\nchunks_small = split(doc, size=256)\nchunks_medium = split(doc, size=512)\nchunks_large = split(doc, size=1024)\n\n# Retrieve at appropriate level based on query\n\n### Chunk Size Mismatched to Query Patterns\n\nSeverity: HIGH\n\nSituation: Configuring chunking for memory storage\n\nSymptoms:\nHigh-quality documents produce low-quality retrievals. Simple\nquestions miss relevant information. 
Complex questions get\nfragments instead of complete answers.\n\nWhy this breaks:\nOptimal chunk size depends on query patterns:\n- Factual queries need small, specific chunks\n- Conceptual queries need larger context\n- Code needs function-level boundaries\n\nThe sweet spot varies by document type and embedding model.\nA default of 1000 characters is tuned for nothing in particular.\n\nRecommended fix:\n\n## Test different sizes\ndef evaluate_chunk_size(documents, test_queries, chunk_size):\n    chunks = split_documents(documents, size=chunk_size)\n    index = build_index(chunks)\n\n    correct_retrievals = 0\n    for query, expected_chunk in test_queries:\n        results = index.search(query, k=5)\n        if expected_chunk in results:\n            correct_retrievals += 1\n\n    return correct_retrievals / len(test_queries)\n\n# Test multiple sizes\nfor size in [256, 512, 768, 1024]:\n    recall = evaluate_chunk_size(docs, test_queries, size)\n    print(f\"Size {size}: Recall@5 = {recall:.2%}\")\n\n## Size recommendations by content type\nCHUNK_SIZES = {\n    \"documentation\": 512,   # Complete concepts\n    \"code\": 1000,          # Function-level\n    \"conversation\": 256,   # Turn-level\n    \"articles\": 768,       # Paragraph-level\n}\n\n## Use overlap to prevent boundary issues\nsplitter = RecursiveCharacterTextSplitter(\n    chunk_size=512,\n    chunk_overlap=50,  # 10% overlap\n)\n\n### Semantic Search Returns Irrelevant Results\n\nSeverity: HIGH\n\nSituation: Querying memory for context\n\nSymptoms:\nAgent retrieves memories that seem related but aren't useful.\n\"Tell me about the user's preferences\" returns conversation\nabout preferences in general, not this user's. High similarity\nscores for wrong content.\n\nWhy this breaks:\nSemantic similarity isn't the same as relevance. 
\"The user\nlikes Python\" and \"Python is a programming language\" are\nsemantically similar but very different types of information.\nWithout metadata filtering, retrieval is just word matching.\n\nRecommended fix:\n\n## Always filter by metadata first\n# Don't rely on semantic similarity alone\n\n# Bad: Only semantic search\nresults = index.query(\n    vector=query_embedding,\n    top_k=5\n)\n\n# Good: Filter then search\nresults = index.query(\n    vector=query_embedding,\n    filter={\n        \"user_id\": current_user.id,\n        \"type\": \"preference\",\n        \"created_after\": cutoff_date,\n    },\n    top_k=5\n)\n\n## Use hybrid search (semantic + keyword)\n# Qdrant's hybrid search goes through the Query API: dense and\n# sparse candidates are fetched separately, then fused with\n# Reciprocal Rank Fusion\nfrom qdrant_client import QdrantClient, models\n\nclient = QdrantClient(...)\n\nresults = client.query_points(\n    collection_name=\"memories\",\n    prefetch=[\n        models.Prefetch(query=semantic_embedding, using=\"dense\", limit=20),\n        models.Prefetch(query=sparse_embedding, using=\"sparse\", limit=20),\n    ],\n    query=models.FusionQuery(fusion=models.Fusion.RRF),\n    limit=5,\n)\n\n## Rerank results with cross-encoder\nfrom sentence_transformers import CrossEncoder\n\nreranker = CrossEncoder(\"cross-encoder/ms-marco-MiniLM-L-6-v2\")\n\n# Initial retrieval (recall-oriented)\ncandidates = index.query(query_embedding, top_k=20)\n\n# Rerank (precision-oriented)\npairs = [(query, c.text) for c in candidates]\nscores = reranker.predict(pairs)\nreranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)\n\n### Old Memories Override Current Information\n\nSeverity: HIGH\n\nSituation: User preferences or facts change over time\n\nSymptoms:\nAgent uses outdated preferences. \"User prefers dark mode\" from\n6 months ago overrides recent \"switch to light mode\" request.\nAgent confidently uses stale data.\n\nWhy this breaks:\nVector stores don't have temporal awareness by default. 
A memory\nfrom a year ago has the same retrieval weight as one from today.\nRecent information should generally override old information\nfor preferences and mutable facts.\n\nRecommended fix:\n\n## Add temporal scoring\nfrom datetime import datetime, timedelta\n\ndef time_decay_score(memory, half_life_days=30):\n    age = (datetime.now() - memory.created_at).days\n    decay = 0.5 ** (age / half_life_days)\n    return decay\n\ndef retrieve_with_recency(query, user_id):\n    # Get candidates\n    candidates = index.query(\n        vector=embed(query),\n        filter={\"user_id\": user_id},\n        top_k=20\n    )\n\n    # Apply time decay\n    for candidate in candidates:\n        time_score = time_decay_score(candidate)\n        candidate.final_score = candidate.similarity * 0.7 + time_score * 0.3\n\n    # Re-sort by final score\n    return sorted(candidates, key=lambda x: x.final_score, reverse=True)[:5]\n\n## Update instead of append for preferences\nasync def update_preference(user_id, category, value):\n    # Delete old preference\n    await memory.delete(\n        filter={\"user_id\": user_id, \"type\": \"preference\", \"category\": category}\n    )\n\n    # Store new preference\n    await memory.upsert(\n        id=f\"pref-{user_id}-{category}\",\n        content={\"category\": category, \"value\": value},\n        metadata={\"updated_at\": datetime.now()}\n    )\n\n## Explicit versioning for facts\nawait memory.upsert(\n    id=f\"fact-{fact_id}-v{version}\",\n    content=new_fact,\n    metadata={\n        \"version\": version,\n        \"supersedes\": previous_id,\n        \"valid_from\": datetime.now()\n    }\n)\n\n### Contradictory Memories Retrieved Together\n\nSeverity: MEDIUM\n\nSituation: User has changed preferences or provided conflicting info\n\nSymptoms:\nAgent retrieves \"user prefers dark mode\" and \"user prefers light\nmode\" in same context. Gives inconsistent answers. 
Seems confused\nor forgetful to user.\n\nWhy this breaks:\nWithout conflict resolution, both old and new information coexist.\nSemantic search might return both because they're both about the\nsame topic (preferences). Agent has no way to know which is current.\n\nRecommended fix:\n\n## Detect conflicts on storage\nasync def store_with_conflict_check(memory, user_id):\n    # Find potentially conflicting memories\n    similar = await index.query(\n        vector=embed(memory.content),\n        filter={\"user_id\": user_id, \"type\": memory.type},\n        threshold=0.9,  # Very similar\n        top_k=5\n    )\n\n    for existing in similar:\n        if is_contradictory(memory.content, existing.content):\n            # Ask for resolution\n            resolution = await resolve_conflict(memory, existing)\n            if resolution == \"replace\":\n                await index.delete(existing.id)\n            elif resolution == \"version\":\n                await mark_superseded(existing.id, memory.id)\n\n    await index.upsert(memory)\n\n## Conflict detection heuristic\ndef is_contradictory(new_content, old_content):\n    # Use LLM to detect contradiction\n    result = llm.invoke(f'''\n        Do these two statements contradict each other?\n\n        Statement 1: {old_content}\n        Statement 2: {new_content}\n\n        Respond with just YES or NO.\n    ''')\n    return result.strip().upper() == \"YES\"\n\n## Periodic consolidation\nasync def consolidate_memories(user_id):\n    all_memories = await index.list(filter={\"user_id\": user_id})\n    clusters = cluster_by_topic(all_memories)\n\n    for cluster in clusters:\n        if has_conflicts(cluster):\n            resolved = await llm.invoke(f'''\n                These memories may conflict. 
Create one consolidated\n                memory that represents the current truth:\n                {cluster}\n            ''')\n            await replace_cluster(cluster, resolved)\n\n### Retrieved Memories Exceed Context Window\n\nSeverity: MEDIUM\n\nSituation: Retrieving too many memories at once\n\nSymptoms:\nToken limit errors. Agent truncates important information.\nSystem prompt gets cut off. Retrieved memories compete with\nuser query for space.\n\nWhy this breaks:\nRetrieval typically returns top-k results. If k is too high or\nchunks are too large, retrieved context overwhelms the window.\nCritical information (system prompt, recent messages) gets pushed\nout.\n\nRecommended fix:\n\n## Budget tokens for different memory types\nTOKEN_BUDGET = {\n    \"system_prompt\": 500,\n    \"user_profile\": 200,\n    \"recent_messages\": 2000,\n    \"retrieved_memories\": 1000,\n    \"current_query\": 500,\n    \"buffer\": 300,  # Safety margin\n}\n\ndef budget_aware_retrieval(query, context_limit=4000):\n    remaining = context_limit - TOKEN_BUDGET[\"system_prompt\"] - TOKEN_BUDGET[\"buffer\"]\n\n    # Prioritize recent messages\n    recent = get_recent_messages(limit=TOKEN_BUDGET[\"recent_messages\"])\n    remaining -= count_tokens(recent)\n\n    # Then user profile\n    profile = get_user_profile(limit=TOKEN_BUDGET[\"user_profile\"])\n    remaining -= count_tokens(profile)\n\n    # Finally retrieved memories with remaining budget\n    memories = retrieve_memories(query, max_tokens=remaining)\n\n    return build_context(profile, recent, memories)\n\n## Dynamic k based on chunk size\ndef retrieve_with_budget(query, max_tokens=1000):\n    avg_chunk_tokens = 150  # From your data\n    max_k = max_tokens // avg_chunk_tokens\n\n    results = index.query(query, top_k=max_k)\n\n    # Trim if still over budget\n    total_tokens = 0\n    filtered = []\n    for result in results:\n        tokens = count_tokens(result.text)\n        if total_tokens + tokens <= max_tokens:\n    
        filtered.append(result)\n            total_tokens += tokens\n        else:\n            break\n\n    return filtered\n\n### Query and Document Embeddings From Different Models\n\nSeverity: MEDIUM\n\nSituation: Upgrading embedding model or mixing providers\n\nSymptoms:\nRetrieval quality suddenly drops. Relevant documents not found.\nRandom results returned. Works for new documents, fails for old.\n\nWhy this breaks:\nEmbedding models produce different vector spaces. A query embedded\nwith text-embedding-3 won't match documents embedded with text-ada-002.\nMixing models creates garbage similarity scores.\n\nRecommended fix:\n\n## Track embedding model in metadata\nawait index.upsert(\n    id=doc_id,\n    vector=embedding,\n    metadata={\n        \"embedding_model\": \"text-embedding-3-small\",\n        \"embedding_version\": \"2024-01\",\n        \"content\": content\n    }\n)\n\n## Filter by model version on retrieval\nresults = index.query(\n    vector=query_embedding,\n    filter={\"embedding_model\": current_model},\n    top_k=10\n)\n\n## Migration strategy for model upgrades\nasync def migrate_embeddings(old_model, new_model):\n    # Get all documents with old model\n    old_docs = await index.list(filter={\"embedding_model\": old_model})\n\n    for doc in old_docs:\n        # Re-embed with new model\n        new_embedding = await embed(doc.content, model=new_model)\n\n        # Update in place\n        await index.update(\n            id=doc.id,\n            vector=new_embedding,\n            metadata={\"embedding_model\": new_model}\n        )\n\n## Use separate collections during migration\n# Old collection: production queries\n# New collection: re-embedding in progress\n# Switch over when complete\n\n## Validation Checks\n\n### In-Memory Store in Production Code\n\nSeverity: ERROR\n\nIn-memory stores lose data on restart\n\nMessage: In-memory store detected. 
Use persistent storage (Postgres, Qdrant, Pinecone) for production.\n\n### Vector Upsert Without Metadata\n\nSeverity: WARNING\n\nVectors should have metadata for filtering\n\nMessage: Vector upsert without metadata. Add user_id, type, timestamp for proper filtering.\n\n### Query Without User Filtering\n\nSeverity: ERROR\n\nQueries should filter by user to prevent data leakage\n\nMessage: Vector query without user filtering. Always filter by user_id to prevent data leakage.\n\n### Hardcoded Chunk Size Without Justification\n\nSeverity: INFO\n\nChunk size should be tested and justified\n\nMessage: Hardcoded chunk size. Test different sizes for your content type and measure retrieval accuracy.\n\n### Chunking Without Overlap\n\nSeverity: WARNING\n\nChunk overlap prevents boundary issues\n\nMessage: Text splitting without overlap. Add chunk_overlap (10-20%) to prevent boundary issues.\n\n### Semantic Search Without Filters\n\nSeverity: WARNING\n\nPure semantic search often returns irrelevant results\n\nMessage: Pure semantic search. Add metadata filters (user, type, time) for better relevance.\n\n### Retrieval Without Result Limit\n\nSeverity: WARNING\n\nUnbounded retrieval can overflow context\n\nMessage: Retrieval without limit. 
Set top_k to prevent context overflow.\n\n### Embeddings Without Model Version Tracking\n\nSeverity: WARNING\n\nTrack embedding model to handle migrations\n\nMessage: Store embedding model version in metadata to handle model migrations.\n\n### Different Models for Document and Query Embedding\n\nSeverity: ERROR\n\nDocuments and queries must use same embedding model\n\nMessage: Ensure same embedding model for indexing and querying.\n\n## Collaboration\n\n### Delegation Triggers\n\n- user needs vector database at scale -> data-engineer (Production vector store operations)\n- user needs embedding model optimization -> ml-engineer (Custom embeddings, fine-tuning)\n- user needs knowledge graph -> knowledge-engineer (Graph-based memory structures)\n- user needs RAG pipeline -> llm-architect (End-to-end retrieval augmented generation)\n- user needs multi-agent shared memory -> multi-agent-orchestration (Memory sharing between agents)\n\n## Related Skills\n\nWorks well with: `autonomous-agents`, `multi-agent-orchestration`, `llm-architect`, `agent-tool-builder`\n\n## When to Use\n- User mentions or implies: agent memory\n- User mentions or implies: long-term memory\n- User mentions or implies: memory systems\n- User mentions or implies: remember across sessions\n- User mentions or implies: memory retrieval\n- User mentions or implies: episodic memory\n- User mentions or implies: semantic memory\n- User mentions or implies: vector store\n- User mentions or implies: rag\n- User mentions or implies: langmem\n- User mentions or implies: memgpt\n- User mentions or implies: conversation history\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are 
missing.