{"id":"3b0f385f-4883-40e9-9fd7-53ebf05069cb","shortId":"R6Td2E","kind":"skill","title":"langchain-architecture","tagline":"Master the LangChain framework for building sophisticated LLM applications with agents, chains, memory, and tool integration.","description":"# LangChain Architecture\n\nMaster the LangChain framework for building sophisticated LLM applications with agents, chains, memory, and tool integration.\n\n## Do not use this skill when\n\n- The task is unrelated to langchain architecture\n- You need a different domain or tool outside this scope\n\n## Instructions\n\n- Clarify goals, constraints, and required inputs.\n- Apply relevant best practices and validate outcomes.\n- Provide actionable steps and verification.\n- If detailed examples are required, open `resources/implementation-playbook.md`.\n\n## Use this skill when\n\n- Building autonomous AI agents with tool access\n- Implementing complex multi-step LLM workflows\n- Managing conversation memory and state\n- Integrating LLMs with external data sources and APIs\n- Creating modular, reusable LLM application components\n- Implementing document processing pipelines\n- Building production-grade LLM applications\n\n## Core Concepts\n\n### 1. Agents\nAutonomous systems that use LLMs to decide which actions to take.\n\n**Agent Types:**\n- **ReAct**: Reasoning + Acting in interleaved manner\n- **OpenAI Functions**: Leverages function calling API\n- **Structured Chat**: Handles multi-input tools\n- **Conversational**: Optimized for chat interfaces\n- **Self-Ask with Search**: Decomposes complex queries\n\n### 2. Chains\nSequences of calls to LLMs or other utilities.\n\n**Chain Types:**\n- **LLMChain**: Basic prompt + LLM combination\n- **SequentialChain**: Multiple chains in sequence\n- **RouterChain**: Routes inputs to specialized chains\n- **TransformChain**: Data transformations between steps\n- **MapReduceChain**: Parallel processing with aggregation\n\n### 3. 
Memory\nSystems for maintaining context across interactions.\n\n**Memory Types:**\n- **ConversationBufferMemory**: Stores all messages\n- **ConversationSummaryMemory**: Summarizes older messages\n- **ConversationBufferWindowMemory**: Keeps last N messages\n- **EntityMemory**: Tracks information about entities\n- **VectorStoreMemory**: Semantic similarity retrieval\n\n### 4. Document Processing\nLoading, transforming, and storing documents for retrieval.\n\n**Components:**\n- **Document Loaders**: Load from various sources\n- **Text Splitters**: Chunk documents intelligently\n- **Vector Stores**: Store and retrieve embeddings\n- **Retrievers**: Fetch relevant documents\n- **Indexes**: Organize documents for efficient access\n\n### 5. Callbacks\nHooks for logging, monitoring, and debugging.\n\n**Use Cases:**\n- Request/response logging\n- Token usage tracking\n- Latency monitoring\n- Error handling\n- Custom metrics collection\n\n## Quick Start\n\n```python\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain.llms import OpenAI\nfrom langchain.memory import ConversationBufferMemory\n\n# Initialize LLM\nllm = OpenAI(temperature=0)\n\n# Load tools\ntools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n\n# Add memory\nmemory = ConversationBufferMemory(memory_key=\"chat_history\")\n\n# Create agent\nagent = initialize_agent(\n    tools,\n    llm,\n    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,\n    memory=memory,\n    verbose=True\n)\n\n# Run agent\nresult = agent.run(\"What's the weather in SF? 
Then calculate 25 * 4\")\n```\n\n## Architecture Patterns\n\n### Pattern 1: RAG with LangChain\n```python\nfrom langchain.chains import RetrievalQA\nfrom langchain.document_loaders import TextLoader\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings import OpenAIEmbeddings\n\n# Load and process documents\nloader = TextLoader('documents.txt')\ndocuments = loader.load()\n\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\ntexts = text_splitter.split_documents(documents)\n\n# Create vector store\nembeddings = OpenAIEmbeddings()\nvectorstore = Chroma.from_documents(texts, embeddings)\n\n# Create retrieval chain (reuses the llm initialized in Quick Start)\nqa_chain = RetrievalQA.from_chain_type(\n    llm=llm,\n    chain_type=\"stuff\",\n    retriever=vectorstore.as_retriever(),\n    return_source_documents=True\n)\n\n# Query\nresult = qa_chain({\"query\": \"What is the main topic?\"})\n```\n\n### Pattern 2: Custom Agent with Tools\n```python\nfrom langchain.agents import AgentType, initialize_agent\nfrom langchain.tools import tool\n\n@tool\ndef search_database(query: str) -> str:\n    \"\"\"Search internal database for information.\"\"\"\n    # Your database search logic\n    return f\"Results for: {query}\"\n\n@tool\ndef send_email(recipient: str, content: str) -> str:\n    \"\"\"Send an email to specified recipient.\"\"\"\n    # Email sending logic\n    return f\"Email sent to {recipient}\"\n\ntools = [search_database, send_email]\n\n# send_email takes two inputs, so use a structured-chat agent\n# (zero-shot ReAct agents reject multi-input tools)\nagent = initialize_agent(\n    tools,\n    llm,\n    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n    verbose=True\n)\n```\n\n### Pattern 3: Multi-Step Chain\n```python\nfrom langchain.chains import LLMChain, SequentialChain\nfrom langchain.prompts import PromptTemplate\n\n# Step 1: Extract key information\nextract_prompt = PromptTemplate(\n    input_variables=[\"text\"],\n    template=\"Extract key entities from: 
{text}\\n\\nEntities:\"\n)\nextract_chain = LLMChain(llm=llm, prompt=extract_prompt, output_key=\"entities\")\n\n# Step 2: Analyze entities\nanalyze_prompt = PromptTemplate(\n    input_variables=[\"entities\"],\n    template=\"Analyze these entities: {entities}\\n\\nAnalysis:\"\n)\nanalyze_chain = LLMChain(llm=llm, prompt=analyze_prompt, output_key=\"analysis\")\n\n# Step 3: Generate summary\nsummary_prompt = PromptTemplate(\n    input_variables=[\"entities\", \"analysis\"],\n    template=\"Summarize:\\nEntities: {entities}\\nAnalysis: {analysis}\\n\\nSummary:\"\n)\nsummary_chain = LLMChain(llm=llm, prompt=summary_prompt, output_key=\"summary\")\n\n# Combine into sequential chain\noverall_chain = SequentialChain(\n    chains=[extract_chain, analyze_chain, summary_chain],\n    input_variables=[\"text\"],\n    output_variables=[\"entities\", \"analysis\", \"summary\"],\n    verbose=True\n)\n```\n\n## Memory Management Best Practices\n\n### Choosing the Right Memory Type\n```python\n# For short conversations (< 10 messages)\nfrom langchain.memory import ConversationBufferMemory\nmemory = ConversationBufferMemory()\n\n# For long conversations (summarize old messages)\nfrom langchain.memory import ConversationSummaryMemory\nmemory = ConversationSummaryMemory(llm=llm)\n\n# For sliding window (last N messages)\nfrom langchain.memory import ConversationBufferWindowMemory\nmemory = ConversationBufferWindowMemory(k=5)\n\n# For entity tracking\nfrom langchain.memory import ConversationEntityMemory\nmemory = ConversationEntityMemory(llm=llm)\n\n# For semantic retrieval of relevant history\nfrom langchain.memory import VectorStoreRetrieverMemory\nmemory = VectorStoreRetrieverMemory(retriever=retriever)\n```\n\n## Callback System\n\n### Custom Callback Handler\n```python\nfrom langchain.callbacks.base import BaseCallbackHandler\n\nclass CustomCallbackHandler(BaseCallbackHandler):\n    def on_llm_start(self, serialized, prompts, **kwargs):\n        print(f\"LLM started with 
prompts: {prompts}\")\n\n    def on_llm_end(self, response, **kwargs):\n        print(f\"LLM ended with response: {response}\")\n\n    def on_llm_error(self, error, **kwargs):\n        print(f\"LLM error: {error}\")\n\n    def on_chain_start(self, serialized, inputs, **kwargs):\n        print(f\"Chain started with inputs: {inputs}\")\n\n    def on_agent_action(self, action, **kwargs):\n        print(f\"Agent taking action: {action}\")\n\n# Use callback\nagent.run(\"query\", callbacks=[CustomCallbackHandler()])\n```\n\n## Testing Strategies\n\n```python\nfrom langchain.llms.fake import FakeListLLM\n\ndef test_agent_tool_selection():\n    # Script the agent with a fake LLM: one tool call, then a final answer.\n    # (A bare Mock fails initialize_agent's model validation, and agents\n    # do not call .predict, so mocking that method proves nothing.)\n    fake_llm = FakeListLLM(responses=[\n        \"Action: search_database\\nAction Input: test query\",\n        \"Final Answer: done\",\n    ])\n\n    agent = initialize_agent([search_database], fake_llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)\n\n    result = agent.run(\"test query\")\n\n    assert result == \"done\"\n\ndef test_memory_persistence():\n    memory = ConversationBufferMemory()\n\n    memory.save_context({\"input\": \"Hi\"}, {\"output\": \"Hello!\"})\n\n    assert \"Hi\" in memory.load_memory_variables({})['history']\n    assert \"Hello!\" in memory.load_memory_variables({})['history']\n```\n\n## Performance Optimization\n\n### 1. Caching\n```python\nfrom langchain.cache import InMemoryCache\nimport langchain\n\nlangchain.llm_cache = InMemoryCache()\n```\n\n### 2. Batch Processing\n```python\n# Process multiple documents in parallel\nfrom langchain.document_loaders import DirectoryLoader\nfrom concurrent.futures import ThreadPoolExecutor\n\nloader = DirectoryLoader('./docs')\ndocs = loader.load()\n\ndef process_doc(doc):\n    return text_splitter.split_documents([doc])\n\nwith ThreadPoolExecutor(max_workers=4) as executor:\n    # Flatten the per-document chunk lists into one list of chunks\n    split_docs = [chunk for chunks in executor.map(process_doc, docs) for chunk in chunks]\n```\n\n### 3. 
Streaming Responses\n```python\nfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n\nllm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])\n```\n\n## Resources\n\n- **references/agents.md**: Deep dive on agent architectures\n- **references/memory.md**: Memory system patterns\n- **references/chains.md**: Chain composition strategies\n- **references/document-processing.md**: Document loading and indexing\n- **references/callbacks.md**: Monitoring and observability\n- **assets/agent-template.py**: Production-ready agent template\n- **assets/memory-config.yaml**: Memory configuration examples\n- **assets/chain-example.py**: Complex chain examples\n\n## Common Pitfalls\n\n1. **Memory Overflow**: Not managing conversation history length\n2. **Tool Selection Errors**: Poor tool descriptions confuse agents\n3. **Context Window Exceeded**: Exceeding LLM token limits\n4. **No Error Handling**: Not catching and handling agent failures\n5. **Inefficient Retrieval**: Not optimizing vector store queries\n\n## Production Checklist\n\n- [ ] Implement proper error handling\n- [ ] Add request/response logging\n- [ ] Monitor token usage and costs\n- [ ] Set timeout limits for agent execution\n- [ ] Implement rate limiting\n- [ ] Add input validation\n- [ ] Test with edge cases\n- [ ] Set up observability (callbacks)\n- [ ] Implement fallback strategies\n- [ ] Version control prompts and configurations\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are 
missing.","tags":["langchain","architecture","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows"],"capabilities":["skill","source-sickn33","skill-langchain-architecture","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/langchain-architecture","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34726 github stars · SKILL.md body (10,524 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T12:51:08.466Z","embedding":null,"createdAt":"2026-04-18T21:39:39.725Z","updatedAt":"2026-04-23T12:51:08.466Z","lastSeenAt":"2026-04-23T12:51:08.466Z","tsv":"'/docs':970 '0':337 '1':136,389,577,938,1050 '10':701 '1000':430 '2':183,479,607,950,1058 '200':433 '25':384 '3':221,561,635,995,1067 '4':253,385,985,1075 '5':291,736,1085 'access':97,290 'across':227 'act':153 'action':76,146,834,836,842,843,876 'add':349,1099,1116 'agent':14,32,94,137,149,321,358,359,361,364,373,481,548,550,553,833,840,861,883,885,889,1015,1038,1066,1083,1111 'agent.run':375,846,895 'agentexecutor':489 
'agenttyp':319 'agenttype.conversational':365 'agenttype.zero':554,890 'aggreg':220 'ai':93 'analysi':633,644,650,684 'analyz':608,610,617,623,629,674 'api':117,162 'appli':68 'applic':12,30,122,133 'architectur':3,21,50,386,1016 'arg':909 'ask':177,1168 'assert':903,922,929 'assets/agent-template.py':1034 'assets/chain-example.py':1044 'assets/memory-config.yaml':1040 'autonom':92,138 'basecallbackhandl':771,774 'basic':196 'batch':951 'best':70,690 'boundari':1176 'build':9,27,91,128 'cach':939,948 'calcul':383 'call':161,187 'callback':292,762,765,845,848,1008,1126 'case':300,1122 'catch':1080 'chain':15,33,184,193,202,210,450,452,454,458,471,565,596,624,654,667,669,671,673,675,677,818,826,1022,1046 'charactertextsplitt':407,427 'chat':164,173,355 'checklist':1094 'choos':692 'chroma':411 'chroma.from':444 'chunk':272,428,431 'clarif':1170 'clarifi':62 'class':772 'clear':1143 'collect':312 'combin':199,664 'common':1048 'complex':99,181,1045 'compon':123,263 'composit':1023 'concept':135 'concurrent.futures':965 'configur':1042,1134 'confus':1065 'constraint':64 'content':525 'context':226,917,1068 'control':1131 'convers':106,170,700,711,1055 'conversationbuffermemori':231,331,352,706,708,915 'conversationbufferwindowmemori':239,732,734 'conversationentitymemori':743,745 'conversationsummarymemori':235,718,720 'core':134 'correct':899 'cost':1106 'creat':118,357,438,448 'criteria':1179 'custom':310,480,764 'customcallbackhandl':773,849 'data':114,212 'databas':501,507,511,545,878,905 'debug':298 'decid':144 'decompos':180 'deep':1012 'def':499,520,775,790,804,816,831,859,910,973 'describ':1147 'descript':367,557,893,1064 'detail':81 'differ':54 'directoryload':963,969 'dive':1013 'doc':971,975,976,980,989,993,994 'document':125,254,260,264,273,284,287,419,423,436,437,445,466,956,979,1026 'documents.txt':422 'domain':55 'edg':1121 'effici':289 'email':522,530,534,539,547 'embed':280,441,447 'end':793,800 'entiti':248,590,605,609,615,619,620,643,648,683,738 
'entitymemori':244 'environ':1159 'environment-specif':1158 'error':308,807,809,814,815,1061,1077,1097 'exampl':82,1043,1047 'exceed':1070,1071 'execut':1112 'executor':987 'executor.map':991 'expert':1164 'extern':113 'extract':578,581,588,595,601,672 'f':515,538,784,798,812,825,839 'failur':1084 'fallback':1128 'fetch':282 'framework':7,25 'function':158,160 'generat':636 'goal':63 'grade':131 'handl':165,309,1078,1082,1098 'handler':766 'hello':921,930 'hi':919,923 'histori':356,753,928,935,1056 'hook':293 'implement':98,124,1095,1113,1127 'import':318,326,330,396,401,406,410,414,487,492,496,569,574,705,717,731,742,756,770,853,857,943,945,962,966,1002 'index':285,1029 'ineffici':1086 'inform':246,509,580 'initi':320,332,360,549,884 'inmemorycach':944,949 'input':67,168,207,584,613,641,678,822,829,830,880,918,1117,1173 'instruct':61 'integr':19,37,110 'intellig':274 'interact':228 'interfac':174 'interleav':155 'intern':506 'k':735 'keep':240 'key':354,579,589,604,632,662 'kwarg':782,796,810,823,837 'langchain':2,6,20,24,49,392,946 'langchain-architectur':1 'langchain.agents':317,486 'langchain.agents.react.base':491 'langchain.cache':942 'langchain.callbacks.base':769 'langchain.callbacks.streaming':1000 'langchain.chains':395,568 'langchain.document':399,960 'langchain.embeddings':413 'langchain.llm':947 'langchain.llms':325 'langchain.memory':329,704,716,730,741,755 'langchain.prompts':573 'langchain.text':404 'langchain.tools':495 'langchain.vectorstores':409 'last':241,726 'latenc':306 'length':1057 'leverag':159 'limit':1074,1109,1115,1135 'list':990 'llm':11,29,103,121,132,198,333,334,345,347,348,363,456,457,552,598,599,626,627,656,657,721,722,746,747,777,785,792,799,806,813,865,872,888,1004,1072 'llm-math':344 'llmchain':195,570,597,625,655 'llms':111,142,189 'load':256,266,322,338,341,416,1027 'loader':265,400,420,961,968 'loader.load':424,972 'log':295,302,1101 'logic':513,536 'long':710 'main':476 'maintain':225 'manag':105,689,1054 'manner':156 
'mapreducechain':216 'master':4,22 'match':1144 'math':346 'max':983 'memori':16,34,107,222,229,350,351,353,368,369,688,695,707,719,733,744,758,912,914,926,933,1018,1041,1051 'memory.load':925,932 'memory.save':916 'messag':234,238,243,702,714,728 'metric':311 'miss':1181 'mock':858,864,871,873,887 'mock_llm.predict.call':908 'mock_llm.predict.return':874 'modular':119 'monitor':296,307,1031,1102 'multi':101,167,563 'multi-input':166 'multi-step':100,562 'multipl':201,955 'n':242,593,621,651,727 'naction':879 'nanalysi':622,649 'need':52 'nentiti':594,647 'nsummari':652 'observ':1033,1125 'old':713 'older':237 'open':85 'openai':157,327,335,1005 'openaiembed':415,442 'optim':171,937,1089 'organ':286 'outcom':74 'output':603,631,661,681,920,1153 'outsid':58 'overal':668 'overflow':1052 'overlap':432 'parallel':217,958 'pattern':387,388,478,560,1020 'perform':936 'permiss':1174 'persist':913 'pipelin':127 'pitfal':1049 'poor':1062 'practic':71,691 'print':783,797,811,824,838 'process':126,218,255,418,952,954,974,992 'product':130,1036,1093 'production-grad':129 'production-readi':1035 'prompt':197,582,600,602,611,628,630,639,658,660,781,788,789,1132 'prompttempl':575,583,612,640 'proper':1096 'provid':75 'pytest':854 'python':315,393,484,566,697,767,852,940,953,998 'qa':451,470 'queri':182,468,472,502,518,847,882,897,1092 'quick':313 'rag':390 'rate':1114 'react':151,366,556,892 'reactdocstoreag':493 'readi':1037 'reason':152 'recipi':523,533,542 'references/agents.md':1011 'references/callbacks.md':1030 'references/chains.md':1021 'references/document-processing.md':1025 'references/memory.md':1017 'relev':69,283,752 'request/response':301,1100 'requir':66,84,1172 'resourc':1010 'resources/implementation-playbook.md':86 'respons':795,802,803,997 'result':374,469,516,894 'retriev':252,262,279,281,449,461,463,750,760,761,1087 'retrievalqa':397 'retrievalqa.from':453 'return':464,514,537,867,977 'reusabl':120 'review':1165 'right':694 'rout':206 'routerchain':205 
'run':372 'safeti':1175 'scope':60,1146 'search':179,500,505,512,544,877,904 'select':863,870,902,1060 'self':176,779,794,808,820,835 'self-ask':175 'semant':250,749 'send':521,528,535,546 'sent':540 'sequenc':185,204 'sequenti':666 'sequentialchain':200,571,670 'serial':780,821 'serpapi':343 'set':1107,1123 'sf':381 'short':699 'shot':555,891 'similar':251 'size':429 'skill':42,89,1138 'skill-langchain-architecture' 'slide':724 'sophist':10,28 'sourc':115,269,465 'source-sickn33' 'special':209 'specif':868,1160 'specifi':532 'split':988 'splitter':271,405,426 'start':314,778,786,819,827 'state':109 'stdout':1001 'step':77,102,215,564,576,606,634 'stop':1166 'store':232,259,276,277,440,1091 'str':503,504,524,526,527,907 'strategi':851,1024,1129 'stream':996,1006 'streamingstdoutcallbackhandl':1003,1009 'structur':163 'stuff':460 'substitut':1156 'success':1178 'summar':236,646,712 'summari':637,638,653,659,663,676,685 'system':139,223,763,1019 'take':148,841 'task':45,1142 'temperatur':336 'templat':587,616,645,1039 'test':850,860,881,896,911,1119,1162 'text':270,425,434,446,586,592,680 'text_splitter.split':435,978 'textload':402,421 'threadpoolexecutor':967,982 'timeout':1108 'token':303,1073,1103 'tool':18,36,57,96,169,323,339,340,342,362,483,488,497,498,519,543,551,862,869,886,900,1059,1063 'topic':477 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'track':245,305,739 'transform':213,257 'transformchain':211 'treat':1151 'true':371,467,559,687,1007 'type':150,194,230,455,459,696 'unittest.mock':856 'unrel':47 'usag':304,1104 'use':40,87,141,299,844,1136 'util':192 'valid':73,1118,1161 'valu':875 'variabl':585,614,642,679,682,927,934 'various':268 'vector':275,439,1090 'vectorstor':443 'vectorstore.as':462 'vectorstorememori':249 
'vectorstoreretrievermemori':757,759 'verbos':370,558,686 'verif':79 'verifi':898 'version':1130 'weather':379 'window':725,1069 'worker':984 'workflow':104","prices":[{"id":"086283a1-2d5e-49f2-8cbd-4ddb1d18826a","listingId":"3b0f385f-4883-40e9-9fd7-53ebf05069cb","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:39:39.725Z"}],"sources":[{"listingId":"3b0f385f-4883-40e9-9fd7-53ebf05069cb","source":"github","sourceId":"sickn33/antigravity-awesome-skills/langchain-architecture","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/langchain-architecture","isPrimary":false,"firstSeenAt":"2026-04-18T21:39:39.725Z","lastSeenAt":"2026-04-23T12:51:08.466Z"}],"details":{"listingId":"3b0f385f-4883-40e9-9fd7-53ebf05069cb","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"langchain-architecture","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34726,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-23T06:41:03Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"7547998550110bde8278ece6fbb4453235830927","skill_md_path":"skills/langchain-architecture/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/langchain-architecture"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"langchain-architecture","description":"Master the LangChain framework for building sophisticated LLM applications with agents, chains, memory, and tool integration."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/langchain-architecture"},"updatedAt":"2026-04-23T12:51:08.466Z"}}