{"id":"5f90ff5b-7b8a-4af3-be0e-563cc70898f4","shortId":"qz3utZ","kind":"skill","title":"azure-ai-voicelive-ts","tagline":"Azure AI Voice Live SDK for JavaScript/TypeScript. Build real-time voice AI applications with bidirectional WebSocket communication.","description":"# @azure/ai-voicelive (JavaScript/TypeScript)\n\nReal-time voice AI SDK for building bidirectional voice assistants with Azure AI in Node.js and browser environments.\n\n## Installation\n\n```bash\nnpm install @azure/ai-voicelive @azure/identity\n# TypeScript users\nnpm install @types/node\n```\n\n**Current Version**: 1.0.0-beta.3\n\n**Supported Environments**:\n- Node.js LTS versions (20+)\n- Modern browsers (Chrome, Firefox, Safari, Edge)\n\n## Environment Variables\n\n```bash\nAZURE_VOICELIVE_ENDPOINT=https://<resource>.cognitiveservices.azure.com\n# Optional: API key if not using Entra ID\nAZURE_VOICELIVE_API_KEY=<your-api-key>\n# Optional: Logging\nAZURE_LOG_LEVEL=info\n```\n\n## Authentication\n\n### Microsoft Entra ID (Recommended)\n\n```typescript\nimport { DefaultAzureCredential } from \"@azure/identity\";\nimport { VoiceLiveClient } from \"@azure/ai-voicelive\";\n\nconst credential = new DefaultAzureCredential();\nconst endpoint = \"https://your-resource.cognitiveservices.azure.com\";\n\nconst client = new VoiceLiveClient(endpoint, credential);\n```\n\n### API Key\n\n```typescript\nimport { AzureKeyCredential } from \"@azure/core-auth\";\nimport { VoiceLiveClient } from \"@azure/ai-voicelive\";\n\nconst endpoint = \"https://your-resource.cognitiveservices.azure.com\";\nconst credential = new AzureKeyCredential(\"your-api-key\");\n\nconst client = new VoiceLiveClient(endpoint, credential);\n```\n\n## Client Hierarchy\n\n```\nVoiceLiveClient\n└── VoiceLiveSession (WebSocket connection)\n    ├── updateSession()      → Configure session options\n    ├── subscribe()          → Event handlers (Azure SDK pattern)\n    ├── sendAudio()          → Stream audio input\n    ├── addConversationItem() → 
Add messages/function outputs\n    └── sendEvent()          → Send raw protocol events\n```\n\n## Quick Start\n\n```typescript\nimport { DefaultAzureCredential } from \"@azure/identity\";\nimport { VoiceLiveClient } from \"@azure/ai-voicelive\";\n\nconst credential = new DefaultAzureCredential();\nconst endpoint = process.env.AZURE_VOICELIVE_ENDPOINT!;\n\n// Create client and start session\nconst client = new VoiceLiveClient(endpoint, credential);\nconst session = await client.startSession(\"gpt-4o-mini-realtime-preview\");\n\n// Configure session\nawait session.updateSession({\n  modalities: [\"text\", \"audio\"],\n  instructions: \"You are a helpful AI assistant. Respond naturally.\",\n  voice: {\n    type: \"azure-standard\",\n    name: \"en-US-AvaNeural\",\n  },\n  turnDetection: {\n    type: \"server_vad\",\n    threshold: 0.5,\n    prefixPaddingMs: 300,\n    silenceDurationMs: 500,\n  },\n  inputAudioFormat: \"pcm16\",\n  outputAudioFormat: \"pcm16\",\n});\n\n// Subscribe to events\nconst subscription = session.subscribe({\n  onResponseAudioDelta: async (event, context) => {\n    // Handle streaming audio output\n    const audioData = event.delta;\n    playAudioChunk(audioData);\n  },\n  onResponseTextDelta: async (event, context) => {\n    // Handle streaming text\n    process.stdout.write(event.delta);\n  },\n  onConversationItemInputAudioTranscriptionCompleted: async (event, context) => {\n    console.log(\"User said:\", event.transcript);\n  },\n});\n\n// Send audio from microphone\nfunction sendAudioChunk(audioBuffer: ArrayBuffer) {\n  session.sendAudio(audioBuffer);\n}\n```\n\n## Session Configuration\n\n```typescript\nawait session.updateSession({\n  // Modalities\n  modalities: [\"audio\", \"text\"],\n  \n  // System instructions\n  instructions: \"You are a customer service representative.\",\n  \n  // Voice selection\n  voice: {\n    type: \"azure-standard\",  // or \"azure-custom\", \"openai\"\n    name: \"en-US-AvaNeural\",\n  },\n  \n  // Turn detection (VAD)\n  
turnDetection: {\n    type: \"server_vad\",      // or \"azure_semantic_vad\"\n    threshold: 0.5,\n    prefixPaddingMs: 300,\n    silenceDurationMs: 500,\n  },\n  \n  // Audio formats\n  inputAudioFormat: \"pcm16\",\n  outputAudioFormat: \"pcm16\",\n  \n  // Tools (function calling)\n  tools: [\n    {\n      type: \"function\",\n      name: \"get_weather\",\n      description: \"Get current weather\",\n      parameters: {\n        type: \"object\",\n        properties: {\n          location: { type: \"string\" }\n        },\n        required: [\"location\"]\n      }\n    }\n  ],\n  toolChoice: \"auto\",\n});\n```\n\n## Event Handling (Azure SDK Pattern)\n\nThe SDK uses a subscription-based event handling pattern:\n\n```typescript\nconst subscription = session.subscribe({\n  // Connection lifecycle\n  onConnected: async (args, context) => {\n    console.log(\"Connected:\", args.connectionId);\n  },\n  onDisconnected: async (args, context) => {\n    console.log(\"Disconnected:\", args.code, args.reason);\n  },\n  onError: async (args, context) => {\n    console.error(\"Error:\", args.error.message);\n  },\n  \n  // Session events\n  onSessionCreated: async (event, context) => {\n    console.log(\"Session created:\", context.sessionId);\n  },\n  onSessionUpdated: async (event, context) => {\n    console.log(\"Session updated\");\n  },\n  \n  // Audio input events (VAD)\n  onInputAudioBufferSpeechStarted: async (event, context) => {\n    console.log(\"Speech started at:\", event.audioStartMs);\n  },\n  onInputAudioBufferSpeechStopped: async (event, context) => {\n    console.log(\"Speech stopped at:\", event.audioEndMs);\n  },\n  \n  // Transcription events\n  onConversationItemInputAudioTranscriptionCompleted: async (event, context) => {\n    console.log(\"User said:\", event.transcript);\n  },\n  onConversationItemInputAudioTranscriptionDelta: async (event, context) => {\n    process.stdout.write(event.delta);\n  },\n  \n  // Response events\n  onResponseCreated: 
async (event, context) => {\n    console.log(\"Response started\");\n  },\n  onResponseDone: async (event, context) => {\n    console.log(\"Response complete\");\n  },\n  \n  // Streaming text\n  onResponseTextDelta: async (event, context) => {\n    process.stdout.write(event.delta);\n  },\n  onResponseTextDone: async (event, context) => {\n    console.log(\"\\n--- Text complete ---\");\n  },\n  \n  // Streaming audio\n  onResponseAudioDelta: async (event, context) => {\n    const audioData = event.delta;\n    playAudioChunk(audioData);\n  },\n  onResponseAudioDone: async (event, context) => {\n    console.log(\"Audio complete\");\n  },\n  \n  // Audio transcript (what assistant said)\n  onResponseAudioTranscriptDelta: async (event, context) => {\n    process.stdout.write(event.delta);\n  },\n  \n  // Function calling\n  onResponseFunctionCallArgumentsDone: async (event, context) => {\n    if (event.name === \"get_weather\") {\n      const args = JSON.parse(event.arguments);\n      const result = await getWeather(args.location);\n      \n      await session.addConversationItem({\n        type: \"function_call_output\",\n        callId: event.callId,\n        output: JSON.stringify(result),\n      });\n      \n      await session.sendEvent({ type: \"response.create\" });\n    }\n  },\n  \n  // Catch-all for debugging\n  onServerEvent: async (event, context) => {\n    console.log(\"Event:\", event.type);\n  },\n});\n\n// Clean up when done\nawait subscription.close();\n```\n\n## Function Calling\n\n```typescript\n// Define tools in session config\nawait session.updateSession({\n  modalities: [\"audio\", \"text\"],\n  instructions: \"Help users with weather information.\",\n  tools: [\n    {\n      type: \"function\",\n      name: \"get_weather\",\n      description: \"Get current weather for a location\",\n      parameters: {\n        type: \"object\",\n        properties: {\n          location: {\n            type: \"string\",\n            description: \"City and 
state or country\",\n          },\n        },\n        required: [\"location\"],\n      },\n    },\n  ],\n  toolChoice: \"auto\",\n});\n\n// Handle function calls\nconst subscription = session.subscribe({\n  onResponseFunctionCallArgumentsDone: async (event, context) => {\n    if (event.name === \"get_weather\") {\n      const args = JSON.parse(event.arguments);\n      const weatherData = await fetchWeather(args.location);\n      \n      // Send function result\n      await session.addConversationItem({\n        type: \"function_call_output\",\n        callId: event.callId,\n        output: JSON.stringify(weatherData),\n      });\n      \n      // Trigger response generation\n      await session.sendEvent({ type: \"response.create\" });\n    }\n  },\n});\n```\n\n## Voice Options\n\n| Voice Type | Config | Example |\n|------------|--------|---------|\n| Azure Standard | `{ type: \"azure-standard\", name: \"...\" }` | `\"en-US-AvaNeural\"` |\n| Azure Custom | `{ type: \"azure-custom\", name: \"...\", endpointId: \"...\" }` | Custom voice endpoint |\n| Azure Personal | `{ type: \"azure-personal\", speakerProfileId: \"...\" }` | Personal voice clone |\n| OpenAI | `{ type: \"openai\", name: \"...\" }` | `\"alloy\"`, `\"echo\"`, `\"shimmer\"` |\n\n## Supported Models\n\n| Model | Description | Use Case |\n|-------|-------------|----------|\n| `gpt-4o-realtime-preview` | GPT-4o with real-time audio | High-quality conversational AI |\n| `gpt-4o-mini-realtime-preview` | Lightweight GPT-4o | Fast, efficient interactions |\n| `phi4-mm-realtime` | Phi multimodal | Cost-effective applications |\n\n## Turn Detection Options\n\n```typescript\n// Server VAD (default)\nturnDetection: {\n  type: \"server_vad\",\n  threshold: 0.5,\n  prefixPaddingMs: 300,\n  silenceDurationMs: 500,\n}\n\n// Azure Semantic VAD (smarter detection)\nturnDetection: {\n  type: \"azure_semantic_vad\",\n}\n\n// Azure Semantic VAD (English optimized)\nturnDetection: {\n  type: 
\"azure_semantic_vad_en\",\n}\n\n// Azure Semantic VAD (Multilingual)\nturnDetection: {\n  type: \"azure_semantic_vad_multilingual\",\n}\n```\n\n## Audio Formats\n\n| Format | Sample Rate | Use Case |\n|--------|-------------|----------|\n| `pcm16` | 24kHz | Default, high quality |\n| `pcm16-8000hz` | 8kHz | Telephony |\n| `pcm16-16000hz` | 16kHz | Voice assistants |\n| `g711_ulaw` | 8kHz | Telephony (US) |\n| `g711_alaw` | 8kHz | Telephony (EU) |\n\n## Key Types Reference\n\n| Type | Purpose |\n|------|---------|\n| `VoiceLiveClient` | Main client for creating sessions |\n| `VoiceLiveSession` | Active WebSocket session |\n| `VoiceLiveSessionHandlers` | Event handler interface |\n| `VoiceLiveSubscription` | Active event subscription |\n| `ConnectionContext` | Context for connection events |\n| `SessionContext` | Context for session events |\n| `ServerEventUnion` | Union of all server events |\n\n## Error Handling\n\n```typescript\nimport {\n  VoiceLiveError,\n  VoiceLiveConnectionError,\n  VoiceLiveAuthenticationError,\n  VoiceLiveProtocolError,\n} from \"@azure/ai-voicelive\";\n\nconst subscription = session.subscribe({\n  onError: async (args, context) => {\n    const { error } = args;\n    \n    if (error instanceof VoiceLiveConnectionError) {\n      console.error(\"Connection error:\", error.message);\n    } else if (error instanceof VoiceLiveAuthenticationError) {\n      console.error(\"Auth error:\", error.message);\n    } else if (error instanceof VoiceLiveProtocolError) {\n      console.error(\"Protocol error:\", error.message);\n    }\n  },\n  \n  onServerError: async (event, context) => {\n    console.error(\"Server error:\", event.error?.message);\n  },\n});\n```\n\n## Logging\n\n```typescript\nimport { setLogLevel } from \"@azure/logger\";\n\n// Enable verbose logging\nsetLogLevel(\"info\");\n\n// Or via environment variable\n// AZURE_LOG_LEVEL=info\n```\n\n## Browser Usage\n\n```typescript\n// Browser requires bundler (Vite, webpack, etc.)\nimport { 
VoiceLiveClient } from \"@azure/ai-voicelive\";\nimport { InteractiveBrowserCredential } from \"@azure/identity\";\n\n// Use browser-compatible credential\nconst credential = new InteractiveBrowserCredential({\n  clientId: \"your-client-id\",\n  tenantId: \"your-tenant-id\",\n});\n\nconst client = new VoiceLiveClient(endpoint, credential);\n\n// Request microphone access\nconst stream = await navigator.mediaDevices.getUserMedia({ audio: true });\nconst audioContext = new AudioContext({ sampleRate: 24000 });\n\n// Process audio and send to session\n// ... (see samples for full implementation)\n```\n\n## Best Practices\n\n1. **Always use `DefaultAzureCredential`** — Never hardcode API keys\n2. **Set both modalities** — Include `[\"text\", \"audio\"]` for voice assistants\n3. **Use Azure Semantic VAD** — Better turn detection than basic server VAD\n4. **Handle all error types** — Connection, auth, and protocol errors\n5. **Clean up subscriptions** — Call `subscription.close()` when done\n6. 
**Use appropriate audio format** — PCM16 at 24kHz for best quality\n\n## Reference Links\n\n| Resource | URL |\n|----------|-----|\n| npm Package | https://www.npmjs.com/package/@azure/ai-voicelive |\n| GitHub Source | https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-voicelive |\n| Samples | https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-voicelive/samples |\n| API Reference | https://learn.microsoft.com/javascript/api/@azure/ai-voicelive |\n\n## When to Use\nUse this skill when building or debugging real-time voice applications with the Azure AI Voice Live SDK for JavaScript/TypeScript: session setup, event handling, audio streaming, function calling, and turn detection.\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["azure","voicelive","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows"],"capabilities":["skill","source-sickn33","skill-azure-ai-voicelive-ts","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/azure-ai-voicelive-ts","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34928 github stars · SKILL.md body (12,931 
chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-24T18:50:29.072Z","embedding":null,"createdAt":"2026-04-18T21:32:08.689Z","updatedAt":"2026-04-24T18:50:29.072Z","lastSeenAt":"2026-04-24T18:50:29.072Z","tsv":"'/azure/azure-sdk-for-js/tree/main/sdk/ai/ai-voicelive':1122 '/azure/azure-sdk-for-js/tree/main/sdk/ai/ai-voicelive/samples':1126 '/javascript/api/@azure/ai-voicelive':1131 '/package/@azure/ai-voicelive':1117 '0.5':253,355,798 '1':1050 '1.0.0':58 '16000hz':853 '16khz':854 '2':1058 '20':65 '24000':1036 '24khz':842,1105 '3':1068 '300':255,357,800 '4':1080 '4o':218,747,752,765,772 '5':1090 '500':257,359,802 '6':1098 '8000hz':848 '8khz':849,859,864 'access':1024 'action':1144 'activ':879,887 'add':173 'addconversationitem':172 'ai':3,7,18,30,39,234,762 'alaw':863 'alloy':736 'alway':1051 'api':80,89,124,144,1056,1127 'applic':19,785,1138 'appropri':1100 'arg':413,420,428,560,665,921,925 'args.code':424 'args.connectionid':417 'args.error.message':432 'args.location':567,672 'args.reason':425 'arraybuff':305 'ask':1182 'assist':36,235,541,856,1067 'async':269,282,291,412,419,427,436,444,455,464,475,483,491,498,507,513,523,532,544,552,589,657,920,953 'audio':170,228,274,299,315,360,450,521,536,538,612,757,834,1029,1038,1064,1101 'audiobuff':304,307 'audiocontext':1032,1034 'audiodata':277,280,527,530 'auth':940,1086 'authent':97 'auto':389,649 'avaneur':247,342,710 'await':214,224,311,565,568,579,599,609,670,676,690,1027 'azur':2,6,38,75,87,93,165,241,331,335,351,392,700,704,711,715,722,726,803,810,813,820,824,830,976,1070 'azure-ai-voicelive-t':1 'azure-custom':334,714 'azure-person':725 'azure-standard':240,330,703 'azure/ai-voicelive':24,49,110,134,191,915,992 'azure/core-auth':130 
'azure/identity':50,106,187,996 'azure/logger':966 'azurekeycredenti':128,141 'base':401 'bash':46,74 'basic':1077 'best':1048,1107 'beta.3':59 'better':1073 'bidirect':21,34 'boundari':1190 'browser':43,67,980,983,999 'browser-compat':998 'build':13,33 'bundler':985 'call':368,550,572,602,652,680,1094 'callid':574,682 'case':744,840 'catch':584 'catch-al':583 'chrome':68 'citi':641 'clarif':1184 'clean':595,1091 'clear':1157 'client':119,147,152,202,207,874,1009,1017 'client.startsession':215 'clientid':1006 'clone':731 'cognitiveservices.azure.com':78 'communic':23 'compat':1000 'complet':503,519,537 'config':608,698 'configur':159,222,309 'connect':157,409,416,893,931,1085 'connectioncontext':890 'console.error':430,930,939,948,956 'console.log':294,415,422,439,447,458,467,478,494,501,516,535,592 'const':111,115,118,135,138,146,192,196,206,212,265,276,406,526,559,563,653,664,668,916,923,1002,1016,1025,1031 'context':271,284,293,414,421,429,438,446,457,466,477,485,493,500,509,515,525,534,546,554,591,659,891,896,922,955 'context.sessionid':442 'convers':761 'cost':783 'cost-effect':782 'countri':645 'creat':201,441,876 'credenti':112,123,139,151,193,211,1001,1003,1021 'criteria':1193 'current':56,377,628 'custom':323,336,712,716,719 'debug':587 'default':792,843 'defaultazurecredenti':104,114,185,195,1053 'defin':604 'describ':1145,1161 'descript':375,626,640,742 'detect':344,787,807,1075 'disconnect':423 'done':598,1097 'echo':737 'edg':71 'effect':784 'effici':774 'els':934,943 'en':245,340,708,823 'en-us-avaneur':244,339,707 'enabl':967 'endpoint':77,116,122,136,150,197,200,210,721,1020 'endpointid':718 'english':816 'entra':85,99 'environ':44,61,72,974,1173 'environment-specif':1172 'error':431,906,924,927,932,936,941,945,950,958,1083,1089 'error.message':933,942,951 'etc':988 'eu':866 'event':163,180,264,270,283,292,390,402,434,437,445,452,456,465,473,476,484,489,492,499,508,514,524,533,545,553,590,593,658,883,888,894,899,905,954 'event.arguments':562,667 
'event.audioendms':471 'event.audiostartms':462 'event.callid':575,683 'event.delta':278,289,487,511,528,548 'event.error':959 'event.name':556,661 'event.transcript':297,481 'event.type':594 'exampl':699 'execut':1140 'expert':1178 'fast':773 'fetchweath':671 'firefox':69 'format':361,835,836,1102 'full':1046 'function':302,367,371,549,571,601,622,651,674,679 'g711':857,862 'generat':689 'get':373,376,557,624,627,662 'getweath':566 'github':1118 'github.com':1121,1125 'github.com/azure/azure-sdk-for-js/tree/main/sdk/ai/ai-voicelive':1120 'github.com/azure/azure-sdk-for-js/tree/main/sdk/ai/ai-voicelive/samples':1124 'gpt':217,746,751,764,771 'gpt-4o':750,770 'gpt-4o-mini-realtime-preview':216,763 'gpt-4o-realtime-preview':745 'handl':272,285,391,403,650,907,1081 'handler':164,884 'hardcod':1055 'help':233,615 'hierarchi':153 'high':759,844 'high-qual':758 'id':86,100,1010,1015 'implement':1047 'import':103,107,127,131,184,188,909,963,989,993 'includ':1062 'info':96,971,979 'inform':619 'input':171,451,1187 'inputaudioformat':258,362 'instal':45,48,54 'instanceof':928,937,946 'instruct':229,318,319,614 'interact':775 'interactivebrowsercredenti':994,1005 'interfac':885 'javascript/typescript':12,25 'json.parse':561,666 'json.stringify':577,685 'key':81,90,125,145,867,1057 'learn.microsoft.com':1130 'learn.microsoft.com/javascript/api/@azure/ai-voicelive':1129 'level':95,978 'lifecycl':410 'lightweight':769 'limit':1149 'link':1110 'live':9 'locat':383,387,632,637,647 'log':92,94,961,969,977 'lts':63 'main':873 'match':1158 'messag':960 'messages/function':174 'microphon':301,1023 'microsoft':98 'mini':219,766 'miss':1195 'mm':778 'modal':226,313,314,611,1061 'model':740,741 'modern':66 'multilingu':827,833 'multimod':781 'n':517 'name':243,338,372,623,706,717,735 'natur':237 'navigator.mediadevices.getusermedia':1028 'never':1054 'new':113,120,140,148,194,208,1004,1018,1033 'node.js':41,62 'npm':47,53,1113 'object':381,635 'onconnect':411 
'onconversationiteminputaudiotranscriptioncomplet':474 'onconversationiteminputaudiotranscriptiondelta':482 'ondisconnect':418 'onerror':426,919 'oninputaudiobufferspeechstart':454 'oninputaudiobufferspeechstop':463 'oninputaudiotranscriptioncomplet':290 'onresponseaudiodelta':268,522 'onresponseaudiodon':531 'onresponseaudiotranscriptdelta':543 'onresponsecr':490 'onresponsedon':497 'onresponsefunctioncallargumentsdon':551,656 'onresponsetextdelta':281,506 'onresponsetextdon':512 'onservererror':952 'onserverev':588 'onsessioncr':435 'onsessionupd':443 'openai':337,732,734 'optim':817 'option':79,91,161,695,788 'output':175,275,573,576,681,684,1167 'outputaudioformat':260,364 'overview':1148 'packag':1114 'paramet':379,633 'pattern':167,394,404 'pcm16':259,261,363,365,841,847,852,1103 'pcm16-16000hz':851 'pcm16-8000hz':846 'permiss':1188 'person':723,727,729 'phi':780 'phi4':777 'phi4-mm-realtime':776 'playaudiochunk':279,529 'practic':1049 'prefixpaddingm':254,356,799 'preview':221,749,768 'process':1037 'process.env.azure':198 'process.stdout.write':288,486,510,547 'properti':382,636 'protocol':179,949,1088 'purpos':871 'qualiti':760,845,1108 'quick':181 'rate':838 'raw':178 'real':15,27,755 'real-tim':14,26,754 'realtim':220,748,767,779 'recommend':101 'refer':869,1109,1128 'repres':325 'request':1022 'requir':386,646,984,1186 'resourc':1111 'respond':236 'respons':488,495,502,688 'response.create':582,693 'result':564,578,675 'review':1179 'safari':70 'safeti':1189 'said':296,480,542 'sampl':837,1044,1123 'sampler':1035 'scope':1160 'sdk':10,31,166,393,396 'see':1043 'select':327 'semant':352,804,811,814,821,825,831,1071 'send':177,298,673,1040 'sendaudio':168 'sendaudiochunk':303 'sendev':176 'server':250,348,790,795,904,957,1078 'servereventunion':900 'servic':324 'session':160,205,213,223,308,433,440,448,607,877,881,898,1042 'session.addconversationitem':569,677 'session.sendaudio':306 'session.sendevent':580,691 'session.subscribe':267,408,655,918 
'session.updatesession':225,312,610 'sessioncontext':895 'set':1059 'setloglevel':964,970 'shimmer':738 'silencedurationm':256,358,801 'skill':1136,1152 'skill-azure-ai-voicelive-ts' 'smarter':806 'sourc':1119 'source-sickn33' 'speakerprofileid':728 'specif':1174 'speech':459,468 'standard':242,332,701,705 'start':182,204,460,496 'state':643 'stop':469,1180 'stream':169,273,286,504,520,1026 'string':385,639 'subscrib':162,262 'subscript':266,400,407,654,889,917,1093 'subscription-bas':399 'subscription.close':600,1095 'substitut':1170 'success':1192 'support':60,739 'system':317 'task':1156 'telephoni':850,860,865 'tenant':1014 'tenantid':1011 'test':1176 'text':227,287,316,505,518,613,1063 'threshold':252,354,797 'time':16,28,756 'tool':366,369,605,620 'toolchoic':388,648 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'transcript':472,539 'treat':1165 'trigger':687 'true':1030 'ts':5 'turn':343,786,1074 'turndetect':248,346,793,808,818,828 'type':239,249,329,347,370,380,384,570,581,621,634,638,678,692,697,702,713,724,733,794,809,819,829,868,870,1084 'types/node':55 'typescript':51,102,126,183,310,405,603,789,908,962,982 'ulaw':858 'union':901 'updat':449 'updatesess':158 'url':1112 'us':246,341,709,861 'usag':981 'use':84,397,743,839,997,1052,1069,1099,1134,1150 'user':52,295,479,616 'vad':251,345,349,353,453,791,796,805,812,815,822,826,832,1072,1079 'valid':1175 'variabl':73,975 'verbos':968 'version':57,64 'via':973 'vite':986 'voic':8,17,29,35,238,326,328,694,696,720,730,855,1066 'voicel':4,76,88,199 'voiceliveauthenticationerror':912,938 'voicelivecli':108,121,132,149,154,189,209,872,990,1019 'voiceliveconnectionerror':911,929 'voiceliveerror':910 'voiceliveprotocolerror':913,947 'voicelivesess':155,878 'voicelivesessionhandl':882 
'voicelivesubscript':886 'weather':374,378,558,618,625,629,663 'weatherdata':669,686 'webpack':987 'websocket':22,156,880 'workflow':1142 'www.npmjs.com':1116 'www.npmjs.com/package/@azure/ai-voicelive':1115 'your-api-key':142 'your-client-id':1007 'your-resource.cognitiveservices.azure.com':117,137 'your-tenant-id':1012","prices":[{"id":"739f31fc-a828-4375-9f06-fa633a3e3bdf","listingId":"5f90ff5b-7b8a-4af3-be0e-563cc70898f4","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:32:08.689Z"}],"sources":[{"listingId":"5f90ff5b-7b8a-4af3-be0e-563cc70898f4","source":"github","sourceId":"sickn33/antigravity-awesome-skills/azure-ai-voicelive-ts","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/azure-ai-voicelive-ts","isPrimary":false,"firstSeenAt":"2026-04-18T21:32:08.689Z","lastSeenAt":"2026-04-24T18:50:29.072Z"}],"details":{"listingId":"5f90ff5b-7b8a-4af3-be0e-563cc70898f4","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"azure-ai-voicelive-ts","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34928,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-24T06:41:17Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"d91dbcb9691355ca7879b1f6ec4b12dec96163cb","skill_md_path":"skills/azure-ai-voicelive-ts/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/azure-ai-voicelive-ts"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"azure-ai-voicelive-ts","description":"Azure AI Voice Live SDK for JavaScript/TypeScript. Build real-time voice AI applications with bidirectional WebSocket communication."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/azure-ai-voicelive-ts"},"updatedAt":"2026-04-24T18:50:29.072Z"}}