{"id":"6ecab8e5-748e-429a-a5fd-734922e4e644","shortId":"qcBnh3","kind":"skill","title":"vercel-ai-sdk-expert","tagline":"Expert in the Vercel AI SDK. Covers Core API (generateText, streamText), UI hooks (useChat, useCompletion), tool calling, and streaming UI components with React and Next.js.","description":"# Vercel AI SDK Expert\n\nYou are a production-grade Vercel AI SDK expert. You help developers build AI-powered applications, chatbots, and generative UI experiences primarily using Next.js and React. You are an expert in both the `ai` (AI SDK Core) and `@ai-sdk/react` (AI SDK UI) packages. You understand streaming, language model integration, system prompts, tool calling (function calling), and structured data generation.\n\n## When to Use This Skill\n\n- Use when adding AI chat or text generation features to a React or Next.js app\n- Use when streaming LLM responses to a frontend UI\n- Use when implementing tool calling / function calling with an LLM\n- Use when returning structured data (JSON) from an LLM using `generateObject`\n- Use when building AI-powered generative UIs (streaming React components)\n- Use when migrating from direct OpenAI/Anthropic API calls to the unified AI SDK\n- Use when troubleshooting streaming issues with `useChat` or `streamText`\n\n## Core Concepts\n\n### Why Vercel AI SDK?\n\nThe Vercel AI SDK is a unified framework that abstracts away provider-specific APIs (OpenAI, Anthropic, Google Gemini, Mistral). It provides two main layers:\n1. **AI SDK Core (`ai`)**: Server-side functions to interact with LLMs (`generateText`, `streamText`, `generateObject`).\n2. 
**AI SDK UI (`@ai-sdk/react`)**: Frontend hooks to manage chat state and streaming (`useChat`, `useCompletion`).\n\n## Server-Side Generation (Core API)\n\n### Basic Text Generation\n\n```typescript\nimport { generateText } from \"ai\";\nimport { openai } from \"@ai-sdk/openai\";\n\n// Returns the full string once completion is done (no streaming)\nconst { text, usage } = await generateText({\n  model: openai(\"gpt-4o\"),\n  system: \"You are a helpful assistant evaluating code.\",\n  prompt: \"Review the following Python code...\",\n});\n\nconsole.log(text);\nconsole.log(`Tokens used: ${usage.totalTokens}`);\n```\n\n### Streaming Text\n\n```typescript\n// app/api/chat/route.ts (Next.js App Router API Route)\nimport { streamText } from 'ai';\nimport { openai } from '@ai-sdk/openai';\n\n// Allow streaming responses up to 30 seconds\nexport const maxDuration = 30;\n\nexport async function POST(req: Request) {\n  const { messages } = await req.json();\n\n  const result = streamText({\n    model: openai('gpt-4o'),\n    system: 'You are a friendly customer support bot.',\n    messages,\n  });\n\n  // Returns a streaming Response that delivers chunks via the AI SDK data stream protocol\n  return result.toDataStreamResponse();\n}\n```\n\n### Structured Data (JSON) Generation\n\n```typescript\nimport { generateObject } from 'ai';\nimport { openai } from '@ai-sdk/openai';\nimport { z } from 'zod';\n\nconst { object } = await generateObject({\n  model: openai('gpt-4o-2024-08-06'), // Use models good at structured output\n  system: 'Extract information from the receipt text.',\n  prompt: receiptText,\n  // Pass a Zod schema to enforce output structure\n  schema: z.object({\n    storeName: z.string(),\n    totalAmount: z.number(),\n    items: z.array(z.object({\n      name: z.string(),\n      price: z.number(),\n    })),\n    date: z.string().describe(\"ISO 8601 date format\"),\n  }),\n});\n\n// `object` is automatically fully typed according to the Zod schema!\nconsole.log(object.totalAmount); 
\n```\n\n## Frontend UI Hooks\n\n### `useChat` (Conversational UI)\n\n```tsx\n// app/page.tsx (Next.js Client Component)\n\"use client\";\n\nimport { useChat } from \"@ai-sdk/react\";\n\nexport default function Chat() {\n  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({\n    api: \"/api/chat\", // Points to the streamText route created above\n    // Optional callbacks\n    onFinish: (message) => console.log(\"Done streaming:\", message),\n    onError: (error) => console.error(error)\n  });\n\n  return (\n    <div className=\"flex flex-col h-screen max-w-md mx-auto p-4\">\n      <div className=\"flex-1 overflow-y-auto mb-4\">\n        {messages.map((m) => (\n          <div key={m.id} className={`mb-4 ${m.role === 'user' ? 'text-right' : 'text-left'}`}>\n            <span className={`p-2 rounded-lg inline-block ${m.role === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-200'}`}>\n              {m.content}\n            </span>\n          </div>\n        ))}\n      </div>\n      \n      <form onSubmit={handleSubmit} className=\"flex gap-2\">\n        <input\n          value={input}\n          onChange={handleInputChange}\n          placeholder=\"Say something...\"\n          className=\"flex-1 p-2 border rounded\"\n          disabled={isLoading}\n        />\n        <button type=\"submit\" disabled={isLoading} className=\"bg-black text-white p-2 rounded\">\n          Send\n        </button>\n      </form>\n    </div>\n  );\n}\n```\n\n## Tool Calling (Function Calling)\n\nTools allow the LLM to interact with your code, fetching external data or performing actions before responding to the user.\n\n### Server-Side Tool Definition\n\n```typescript\n// app/api/chat/route.ts\nimport { streamText, tool } from 'ai';\nimport { openai } from '@ai-sdk/openai';\nimport { z } from 'zod';\n\nexport async function POST(req: Request) {\n  const { messages } = await req.json();\n\n  const result = streamText({\n    model: openai('gpt-4o'),\n    
messages,\n    tools: {\n      getWeather: tool({\n        description: 'Get the current weather in a given location',\n        parameters: z.object({\n          location: z.string().describe('The city and state, e.g. San Francisco, CA'),\n          unit: z.enum(['celsius', 'fahrenheit']).optional(),\n        }),\n        // Execute runs when the LLM decides to call this tool\n        execute: async ({ location, unit = 'celsius' }) => {\n          // Fetch from your actual weather API or database\n          const temp = location.includes(\"San Francisco\") ? 15 : 22;\n          return `The weather in ${location} is ${temp}° ${unit}.`;\n        },\n      }),\n    },\n    // Allows the LLM to call tools automatically in a loop until it has the answer\n    maxSteps: 5, \n  });\n\n  return result.toDataStreamResponse();\n}\n```\n\n### UI for Multi-Step Tool Calls\n\nWhen using `maxSteps`, the `useChat` hook exposes intermediate tool calls on each assistant message's `toolInvocations` array; render them to show progress in the UI.\n\n```tsx\n// Inside the `useChat` messages.map loop\n{m.role === 'assistant' && m.toolInvocations?.map((toolInvocation) => (\n  <div key={toolInvocation.toolCallId} className=\"text-sm text-gray-500\">\n    {toolInvocation.state === 'result' ? 
(\n      <p>✅ Fetched weather for {toolInvocation.args.location}</p>\n    ) : (\n      <p>⏳ Fetching weather for {toolInvocation.args.location}...</p>\n    )}\n  </div>\n))}\n```\n\n## Best Practices\n\n- ✅ **Do:** Use the `openai('gpt-4o')` or `anthropic('claude-3-5-sonnet-20240620')` model helpers (from specific provider packages like `@ai-sdk/openai`) instead of the older edge runtime wrappers.\n- ✅ **Do:** Provide a strict Zod `schema` and a clear `system` prompt when using `generateObject()`.\n- ✅ **Do:** Set `maxDuration = 30` (or higher if on Pro) in Next.js API routes that use `streamText`, as LLMs take time to stream responses and Vercel's default is 10-15s.\n- ✅ **Do:** Use `tool()` with comprehensive `.describe()` annotations on Zod parameters, as the LLM relies entirely on those strings to understand when and how to call the tool.\n- ✅ **Do:** Enable `maxSteps: 5` (or similar) when providing tools, otherwise the LLM won't be able to reply to the user *after* seeing the tool result!\n- ❌ **Don't:** Forget to return `result.toDataStreamResponse()` in Next.js App Router API routes when using `streamText`; returning a plain JSON response breaks the streamed chunk protocol the client expects.\n- ❌ **Don't:** Blindly trust the output of `generateObject` without validation; even though Zod enforces the shape, always handle failure states using `try/catch`.\n\n## Troubleshooting\n\n**Problem:** The streaming chat cuts off abruptly after 10-15 seconds.\n**Solution:** The serverless function timed out. Add `export const maxDuration = 30;` (or whatever your plan limit is) to the Next.js API route file.\n\n**Problem:** \"Tool execution failed\" or the LLM didn't return an answer after using a tool.\n**Solution:** `streamText` stops immediately after a tool call completes unless you provide `maxSteps`. 
Set `maxSteps: 2` (or higher) to let the LLM see the tool result and construct a final text response.\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["vercel","sdk","expert","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-vercel-ai-sdk-expert","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/vercel-ai-sdk-expert","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34404 github stars · SKILL.md body (8,465 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T00:51:55.686Z","embedding":null,"createdAt":"2026-04-18T21:47:07.072Z","updatedAt":"2026-04-22T00:51:55.686Z","lastSeenAt":"2026-04-22T00:51:55.686Z","prices":[{"id":"c77abfdd-81de-4cc6-8a2e-851d3ac6799f","listingId":"6ecab8e5-748e-429a-a5fd-734922e4e644","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:47:07.072Z"}],"sources":[{"listingId":"6ecab8e5-748e-429a-a5fd-734922e4e644","source":"github","sourceId":"sickn33/antigravity-awesome-skills/vercel-ai-sdk-expert","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/vercel-ai-sdk-expert","isPrimary":false,"firstSeenAt":"2026-04-18T21:47:07.072Z","lastSeenAt":"2026-04-22T00:51:55.686Z"}],"details":{"listingId":"6ecab8e5-748e-429a-a5fd-734922e4e644","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"vercel-ai-sdk-expert","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34404,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-21T16:43:40Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"d3d4fdc12879604c04682c3d351eeb008cd804b5","skill_md_path":"skills/vercel-ai-sdk-expert/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/vercel-ai-sdk-expert"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"vercel-ai-sdk-expert","description":"Expert in the Vercel AI SDK. Covers Core API (generateText, streamText), UI hooks (useChat, useCompletion), tool calling, and streaming UI components with React and Next.js."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/vercel-ai-sdk-expert"},"updatedAt":"2026-04-22T00:51:55.686Z"}}