{"id":"12f3e42f-8dd7-474c-a469-43ce9d2965a1","shortId":"Xyrbhw","kind":"skill","title":"voice-ai-engine-development","tagline":"Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling and multi-provider support","description":"# Voice AI Engine Development\n\n## Overview\n\nThis skill guides you through building production-ready voice AI engines with real-time conversation capabilities. Voice AI engines enable natural, bidirectional conversations between users and AI agents through streaming audio processing, speech-to-text transcription, LLM-powered responses, and text-to-speech synthesis.\n\nThe core architecture uses an async queue-based worker pipeline where each component runs independently and communicates via `asyncio.Queue` objects, enabling concurrent processing, interrupt handling, and real-time streaming at every stage.\n\n## When to Use This Skill\n\nUse this skill when:\n- Building real-time voice conversation systems\n- Implementing voice assistants or chatbots\n- Creating voice-enabled customer service agents\n- Developing voice AI applications with interrupt capabilities\n- Integrating multiple transcription, LLM, or TTS providers\n- Working with streaming audio processing pipelines\n- The user mentions Vocode, voice engines, or conversational AI\n\n## Core Architecture Principles\n\n### The Worker Pipeline Pattern\n\nEvery voice AI engine follows this pipeline:\n\n```\nAudio In → Transcriber → Agent → Synthesizer → Audio Out\n           (Worker 1)   (Worker 2)  (Worker 3)\n```\n\n**Key Benefits:**\n- **Decoupling**: Workers only know about their input/output queues\n- **Concurrency**: All workers run simultaneously via asyncio\n- **Backpressure**: Queues automatically handle rate differences\n- **Interruptibility**: Everything can be stopped mid-stream\n\n### Base Worker Pattern\n\nEvery worker follows this pattern:\n\n```python\nclass BaseWorker:\n 
   def __init__(self, input_queue, output_queue):\n        self.input_queue = input_queue   # asyncio.Queue to consume from\n        self.output_queue = output_queue # asyncio.Queue to produce to\n        self.active = False\n        self._task = None\n    \n    def start(self):\n        \"\"\"Start the worker's processing loop\"\"\"\n        self.active = True\n        # Keep a reference so the task is not garbage-collected mid-run\n        self._task = asyncio.create_task(self._run_loop())\n    \n    async def _run_loop(self):\n        \"\"\"Main processing loop - runs forever until terminated\"\"\"\n        while self.active:\n            item = await self.input_queue.get()  # Block until item arrives\n            await self.process(item)              # Process the item\n    \n    async def process(self, item):\n        \"\"\"Override this - does the actual work\"\"\"\n        raise NotImplementedError\n    \n    def terminate(self):\n        \"\"\"Stop the worker\"\"\"\n        self.active = False\n```\n\n## Component Implementation Guide\n\n### 1. Transcriber (Audio → Text)\n\n**Purpose**: Converts incoming audio chunks to text transcriptions\n\n**Interface Requirements**:\n```python\nclass BaseTranscriber:\n    def __init__(self, transcriber_config):\n        self.input_queue = asyncio.Queue()   # Audio chunks (bytes)\n        self.output_queue = asyncio.Queue()  # Transcriptions\n        self.is_muted = False\n    \n    def send_audio(self, chunk: bytes):\n        \"\"\"Client calls this to send audio\"\"\"\n        if not self.is_muted:\n            self.input_queue.put_nowait(chunk)\n        else:\n            # Send silence instead (prevents echo during bot speech)\n            self.input_queue.put_nowait(self.create_silent_chunk(len(chunk)))\n    \n    def mute(self):\n        \"\"\"Called when bot starts speaking (prevents echo)\"\"\"\n        self.is_muted = True\n    \n    def unmute(self):\n        \"\"\"Called when bot stops speaking\"\"\"\n        self.is_muted = False\n```\n\n**Output Format**:\n```python\nclass Transcription:\n    message: str          # 
\"Hello, how are you?\"\n    confidence: float     # 0.95\n    is_final: bool        # True = complete sentence, False = partial\n    is_interrupt: bool    # Set by TranscriptionsWorker\n```\n\n**Supported Providers**:\n- **Deepgram** - Fast, accurate, streaming\n- **AssemblyAI** - High accuracy, good for accents\n- **Azure Speech** - Enterprise-grade\n- **Google Cloud Speech** - Multi-language support\n\n**Critical Implementation Details**:\n- Use WebSocket for bidirectional streaming\n- Run sender and receiver tasks concurrently with `asyncio.gather()`\n- Mute transcriber when bot speaks to prevent echo/feedback loops\n- Handle both final and partial transcriptions\n\n### 2. Agent (Text → Response)\n\n**Purpose**: Processes user input and generates conversational responses\n\n**Interface Requirements**:\n```python\nclass BaseAgent:\n    def __init__(self, agent_config):\n        self.input_queue = asyncio.Queue()   # TranscriptionAgentInput\n        self.output_queue = asyncio.Queue()  # AgentResponse\n        self.transcript = None               # Conversation history\n    \n    async def generate_response(self, human_input, is_interrupt, conversation_id):\n        \"\"\"Override this - returns AsyncGenerator of responses\"\"\"\n        raise NotImplementedError\n```\n\n**Why Streaming Responses?**\n- **Lower latency**: Start speaking as soon as first sentence is ready\n- **Better interrupts**: Can stop mid-response\n- **Sentence-by-sentence**: More natural conversation flow\n\n**Supported Providers**:\n- **OpenAI** (GPT-4, GPT-3.5) - High quality, fast\n- **Google Gemini** - Multimodal, cost-effective\n- **Anthropic Claude** - Long context, nuanced responses\n\n**Critical Implementation Details**:\n- Maintain conversation history in `Transcript` object\n- Stream responses using `AsyncGenerator`\n- **IMPORTANT**: Streaming sentence-by-sentence minimizes latency, but if audio jumps between TTS calls, buffer the entire LLM response before yielding to the synthesizer\n- Handle interrupts by canceling current generation task\n- Update 
conversation history with partial messages on interrupt\n\n### 3. Synthesizer (Text → Audio)\n\n**Purpose**: Converts agent text responses to speech audio\n\n**Interface Requirements**:\n```python\nclass BaseSynthesizer:\n    async def create_speech(self, message: BaseMessage, chunk_size: int) -> SynthesisResult:\n        \"\"\"\n        Returns a SynthesisResult containing:\n        - chunk_generator: AsyncGenerator that yields audio chunks\n        - get_message_up_to: Function to get partial text (for interrupts)\n        \"\"\"\n        raise NotImplementedError\n```\n\n**SynthesisResult Structure**:\n```python\nclass SynthesisResult:\n    chunk_generator: AsyncGenerator[ChunkResult, None]\n    get_message_up_to: Callable[[float], str]  # seconds → partial text\n    \n    class ChunkResult:\n        chunk: bytes          # Raw PCM audio\n        is_last_chunk: bool\n```\n\n**Supported Providers**:\n- **ElevenLabs** - Most natural voices, streaming\n- **Azure TTS** - Enterprise-grade, many languages\n- **Google Cloud TTS** - Cost-effective, good quality\n- **Amazon Polly** - AWS integration\n- **Play.ht** - Voice cloning\n\n**Critical Implementation Details**:\n- Stream audio chunks as they're generated\n- Convert audio to LINEAR16 PCM format (16kHz sample rate)\n- Implement `get_message_up_to()` for interrupt handling\n- Handle audio format conversion (MP3 → PCM)\n\n### 4. 
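Audio Format Conversion Helper

The LINEAR16 requirement above can be met with a few lines of the standard library. This is a minimal sketch (the function name `floats_to_linear16` is hypothetical, and it assumes samples are already at the target sample rate; real pipelines usually also resample, e.g. with pydub or numpy):

```python
import struct

def floats_to_linear16(samples):
    # Clamp floats to [-1.0, 1.0], scale to signed 16-bit integers,
    # and pack them little-endian: the LINEAR16 PCM layout used above.
    ints = []
    for sample in samples:
        clamped = max(-1.0, min(1.0, sample))
        ints.append(int(clamped * 32767))
    return struct.pack('<%dh' % len(ints), *ints)

pcm = floats_to_linear16([0.0, 0.5, -0.5, 1.0])
print(len(pcm))  # 8 bytes: 4 samples x 2 bytes each
```

Chunk the resulting byte string (for example into 20 ms frames) before handing it to the output device.

### 5. 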
Output Device (Audio → Client)\n\n**Purpose**: Sends synthesized audio back to the client\n\n**CRITICAL: Rate Limiting for Interrupts**\n\n```python\nasync def send_speech_to_output(self, message, synthesis_result,\n                                stop_event, seconds_per_chunk):\n    chunk_idx = 0\n    async for chunk_result in synthesis_result.chunk_generator:\n        # Check for interrupt\n        if stop_event.is_set():\n            logger.debug(f\"Interrupted after {chunk_idx} chunks\")\n            message_sent = synthesis_result.get_message_up_to(\n                chunk_idx * seconds_per_chunk\n            )\n            return message_sent, True  # cut_off = True\n        \n        start_time = time.time()\n        \n        # Send chunk to output device\n        self.output_device.consume_nonblocking(chunk_result.chunk)\n        \n        # CRITICAL: Wait for chunk to play before sending next one\n        # This is what makes interrupts work!\n        speech_length = seconds_per_chunk\n        processing_time = time.time() - start_time\n        await asyncio.sleep(max(speech_length - processing_time, 0))\n        \n        chunk_idx += 1\n    \n    return message, False  # cut_off = False\n```\n\n**Why Rate Limiting?**\nWithout rate limiting, all audio chunks would be sent immediately, which would:\n- Buffer entire message on client side\n- Make interrupts impossible (all audio already sent)\n- Cause timing issues\n\nBy sending one chunk every N seconds:\n- Real-time playback is maintained\n- Interrupts can stop mid-sentence\n- Natural conversation flow is preserved\n\n## The Interrupt System\n\nThe interrupt system is critical for natural conversations.\n\n### How Interrupts Work\n\n**Scenario**: Bot is saying \"I think the weather will be nice today and tomorrow and—\" when user interrupts with \"Stop\".\n\n**Step 1: User starts speaking**\n```python\n# TranscriptionsWorker detects new transcription while bot speaking\nasync def process(self, 
transcription):\n    if not self.conversation.is_human_speaking:  # Bot was speaking!\n        # Broadcast interrupt to all in-flight events\n        interrupted = self.conversation.broadcast_interrupt()\n        transcription.is_interrupt = interrupted\n```\n\n**Step 2: broadcast_interrupt() stops everything**\n```python\ndef broadcast_interrupt(self):\n    num_interrupts = 0\n    # Interrupt all queued events\n    while True:\n        try:\n            interruptible_event = self.interruptible_events.get_nowait()\n            if interruptible_event.interrupt():  # Sets interruption_event\n                num_interrupts += 1\n        except queue.Empty:\n            break\n    \n    # Cancel current tasks\n    self.agent.cancel_current_task()              # Stop generating text\n    self.agent_responses_worker.cancel_current_task()  # Stop synthesizing\n    return num_interrupts > 0\n```\n\n**Step 3: SynthesisResultsWorker detects interrupt**\n```python\nasync def send_speech_to_output(self, synthesis_result, stop_event, ...):\n    async for chunk_result in synthesis_result.chunk_generator:\n        # Check stop_event (this is the interruption_event)\n        if stop_event.is_set():\n            logger.debug(\"Interrupted! 
Stopping speech.\")\n            # Calculate what was actually spoken\n            seconds_spoken = chunk_idx * seconds_per_chunk\n            partial_message = synthesis_result.get_message_up_to(seconds_spoken)\n            # e.g., \"I think the weather will be nice today\"\n            return partial_message, True  # cut_off = True\n```\n\n**Step 4: Agent updates history**\n```python\nif cut_off:\n    # Update conversation history with partial message\n    self.agent.update_last_bot_message_on_cut_off(message_sent)\n    # History now shows:\n    # Bot: \"I think the weather will be nice today\" (incomplete)\n```\n\n### InterruptibleEvent Pattern\n\nEvery event in the pipeline is wrapped in an `InterruptibleEvent`:\n\n```python\nclass InterruptibleEvent:\n    def __init__(self, payload, is_interruptible=True):\n        self.payload = payload\n        self.is_interruptible = is_interruptible\n        self.interruption_event = threading.Event()  # Initially not set\n        self.interrupted = False\n    \n    def interrupt(self) -> bool:\n        \"\"\"Interrupt this event\"\"\"\n        if not self.is_interruptible:\n            return False\n        if not self.interrupted:\n            self.interruption_event.set()  # Signal to stop!\n            self.interrupted = True\n            return True\n        return False\n    \n    def is_interrupted(self) -> bool:\n        return self.interruption_event.is_set()\n```\n\n## Multi-Provider Factory Pattern\n\nSupport multiple providers with a factory pattern:\n\n```python\nclass VoiceHandler:\n    \"\"\"Multi-provider factory for voice components\"\"\"\n    \n    def create_transcriber(self, agent_config: Dict):\n        \"\"\"Create transcriber based on transcriberProvider\"\"\"\n        provider = agent_config.get(\"transcriberProvider\", \"deepgram\")\n        \n        if provider == \"deepgram\":\n            return self._create_deepgram_transcriber(agent_config)\n        elif provider == \"assemblyai\":\n          
  return self._create_assemblyai_transcriber(agent_config)\n        elif provider == \"azure\":\n            return self._create_azure_transcriber(agent_config)\n        elif provider == \"google\":\n            return self._create_google_transcriber(agent_config)\n        else:\n            raise ValueError(f\"Unknown transcriber provider: {provider}\")\n    \n    def create_agent(self, agent_config: Dict):\n        \"\"\"Create LLM agent based on llmProvider\"\"\"\n        provider = agent_config.get(\"llmProvider\", \"openai\")\n        \n        if provider == \"openai\":\n            return self._create_openai_agent(agent_config)\n        elif provider == \"gemini\":\n            return self._create_gemini_agent(agent_config)\n        else:\n            raise ValueError(f\"Unknown LLM provider: {provider}\")\n    \n    def create_synthesizer(self, agent_config: Dict):\n        \"\"\"Create voice synthesizer based on voiceProvider\"\"\"\n        provider = agent_config.get(\"voiceProvider\", \"elevenlabs\")\n        \n        if provider == \"elevenlabs\":\n            return self._create_elevenlabs_synthesizer(agent_config)\n        elif provider == \"azure\":\n            return self._create_azure_synthesizer(agent_config)\n        elif provider == \"google\":\n            return self._create_google_synthesizer(agent_config)\n        elif provider == \"polly\":\n            return self._create_polly_synthesizer(agent_config)\n        elif provider == \"playht\":\n            return self._create_playht_synthesizer(agent_config)\n        else:\n            raise ValueError(f\"Unknown voice provider: {provider}\")\n```\n\n## WebSocket Integration\n\nVoice AI engines typically use WebSocket for bidirectional audio streaming:\n\n```python\n@app.websocket(\"/conversation\")\nasync def websocket_endpoint(websocket: WebSocket):\n    await websocket.accept()\n    \n    # Create voice components\n    voice_handler = VoiceHandler()\n    transcriber = 
voice_handler.create_transcriber(agent_config)\n    agent = voice_handler.create_agent(agent_config)\n    synthesizer = voice_handler.create_synthesizer(agent_config)\n    \n    # Create output device\n    output_device = WebsocketOutputDevice(\n        ws=websocket,\n        sampling_rate=16000,\n        audio_encoding=AudioEncoding.LINEAR16\n    )\n    \n    # Create conversation orchestrator\n    conversation = StreamingConversation(\n        output_device=output_device,\n        transcriber=transcriber,\n        agent=agent,\n        synthesizer=synthesizer\n    )\n    \n    # Start all workers\n    await conversation.start()\n    \n    try:\n        # Receive audio from client\n        async for message in websocket.iter_bytes():\n            conversation.receive_audio(message)\n    except WebSocketDisconnect:\n        logger.info(\"Client disconnected\")\n    finally:\n        await conversation.terminate()\n```\n\n## Common Pitfalls and Solutions\n\n### 1. Audio Jumping/Cutting Off\n\n**Problem**: Bot's audio jumps or cuts off mid-response.\n\n**Cause**: Sending text to synthesizer in small chunks causes multiple TTS calls.\n\n**Solution**: Buffer the entire LLM response before sending to synthesizer:\n\n```python\n# ❌ Bad: Yields sentence-by-sentence\nasync for sentence in llm_stream:\n    yield GeneratedResponse(message=BaseMessage(text=sentence))\n\n# ✅ Good: Buffer entire response\nfull_response = \"\"\nasync for chunk in llm_stream:\n    full_response += chunk\nyield GeneratedResponse(message=BaseMessage(text=full_response))\n```\n\n### 2. Echo/Feedback Loop\n\n**Problem**: Bot hears itself speaking and responds to its own audio.\n\n**Cause**: Transcriber not muted during bot speech.\n\n**Solution**: Mute transcriber when bot starts speaking:\n\n```python\n# Before sending audio to output\nself.transcriber.mute()\n# After audio playback complete\nself.transcriber.unmute()\n```\n\n### 3. 
Interrupts Not Working\n\n**Problem**: User can't interrupt bot mid-sentence.\n\n**Cause**: All audio chunks sent at once instead of rate-limited.\n\n**Solution**: Rate-limit audio chunks to match real-time playback:\n\n```python\nasync for chunk in synthesis_result.chunk_generator:\n    start_time = time.time()\n    \n    # Send chunk\n    output_device.consume_nonblocking(chunk)\n    \n    # Wait for chunk duration before sending next\n    processing_time = time.time() - start_time\n    await asyncio.sleep(max(seconds_per_chunk - processing_time, 0))\n```\n\n### 4. Memory Leaks from Unclosed Streams\n\n**Problem**: Memory usage grows over time.\n\n**Cause**: WebSocket connections or API streams not properly closed.\n\n**Solution**: Always use context managers and cleanup:\n\n```python\ntry:\n    async with websockets.connect(url) as ws:\n        # Use websocket\n        pass\nfinally:\n    # Cleanup\n    await conversation.terminate()\n    await transcriber.terminate()\n```\n\n## Production Considerations\n\n### 1. Error Handling\n\n```python\nasync def _run_loop(self):\n    while self.active:\n        try:\n            item = await self.input_queue.get()\n            await self.process(item)\n        except Exception as e:\n            logger.error(f\"Worker error: {e}\", exc_info=True)\n            # Don't crash the worker, continue processing\n```\n\n### 2. Graceful Shutdown\n\n```python\nasync def terminate(self):\n    \"\"\"Gracefully shut down all workers\"\"\"\n    self.active = False\n    \n    # Stop all workers\n    self.transcriber.terminate()\n    self.agent.terminate()\n    self.synthesizer.terminate()\n    \n    # Wait for queues to drain\n    await asyncio.sleep(0.5)\n    \n    # Close connections\n    if self.websocket:\n        await self.websocket.close()\n```\n\n### 3. 
Monitoring and Logging\n\n```python\n# Log key events\nlogger.info(f\"🎤 [TRANSCRIBER] Received: '{transcription.message}'\")\nlogger.info(f\"🤖 [AGENT] Generating response...\")\nlogger.info(f\"🔊 [SYNTHESIZER] Synthesizing {len(text)} characters\")\nlogger.info(f\"⚠️ [INTERRUPT] User interrupted bot\")\n\n# Track metrics\nmetrics.increment(\"transcriptions.count\")\nmetrics.timing(\"agent.response_time\", duration)\nmetrics.gauge(\"active_conversations\", count)\n```\n\n### 4. Rate Limiting and Quotas\n\n```python\n# Implement rate limiting for API calls\nfrom aiolimiter import AsyncLimiter\n\nrate_limiter = AsyncLimiter(max_rate=10, time_period=1)  # 10 calls/second\n\nasync def call_api(self, data):\n    async with rate_limiter:\n        return await self.client.post(data)\n```\n\n## Key Design Patterns\n\n### 1. Producer-Consumer with Queues\n\n```python\n# Producer\nasync def producer(queue):\n    while True:\n        item = await generate_item()\n        queue.put_nowait(item)\n\n# Consumer\nasync def consumer(queue):\n    while True:\n        item = await queue.get()\n        await process_item(item)\n```\n\n### 2. Streaming Generators\n\nInstead of returning complete results:\n\n```python\n# ❌ Bad: Wait for entire response\nasync def generate_response(prompt):\n    response = await openai.complete(prompt)  # 5 seconds\n    return response\n\n# ✅ Good: Stream chunks as they arrive\nasync def generate_response(prompt):\n    async for chunk in openai.complete(prompt, stream=True):\n        yield chunk  # Yield after 0.1s, 0.2s, etc.\n```\n\n### 3. 
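Worked Example: Queue-Based Pipeline

The two patterns above can be combined into a short, runnable sketch. Everything here is illustrative: `fake_token_stream` is a hypothetical stand-in for a streaming LLM or transcriber API, and upper-casing stands in for real work such as synthesis:

```python
import asyncio

async def fake_token_stream():
    # Hypothetical stand-in for a streaming provider API
    for token in ['hello', 'there', 'world']:
        await asyncio.sleep(0)  # yield control, as a real network call would
        yield token

async def producer(queue):
    # Push tokens into the queue as they arrive
    async for token in fake_token_stream():
        queue.put_nowait(token)
    queue.put_nowait(None)  # sentinel marks end of stream

async def consumer(queue, results):
    # Handle each token as soon as it is available
    while True:
        token = await queue.get()
        if token is None:
            break
        results.append(token.upper())  # stand-in for real work

async def main():
    queue = asyncio.Queue()
    results = []
    await asyncio.gather(producer(queue), consumer(queue, results))
    return results

print(asyncio.run(main()))  # ['HELLO', 'THERE', 'WORLD']
```

A sentinel value is the simplest way to shut a consumer down cleanly; the full engine replaces it with the terminate() and interrupt machinery described earlier.

### 4. 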
Conversation State Management\n\nMaintain conversation history for context:\n\n```python\nclass Transcript:\n    def __init__(self):\n        # Per-instance list: a class-level mutable default would be\n        # shared across every conversation\n        self.event_logs: List[Message] = []\n    \n    def add_human_message(self, text):\n        self.event_logs.append(Message(sender=Sender.HUMAN, text=text))\n    \n    def add_bot_message(self, text):\n        self.event_logs.append(Message(sender=Sender.BOT, text=text))\n    \n    def to_openai_messages(self):\n        return [\n            {\"role\": \"user\" if msg.sender == Sender.HUMAN else \"assistant\",\n             \"content\": msg.text}\n            for msg in self.event_logs\n        ]\n```\n\n## Testing Strategies\n\n### 1. Unit Test Workers in Isolation\n\n```python\nasync def test_transcriber():\n    transcriber = DeepgramTranscriber(config)\n    \n    # Mock audio input\n    audio_chunk = b'\\x00\\x01\\x02...'\n    transcriber.send_audio(audio_chunk)\n    \n    # Check output\n    transcription = await transcriber.output_queue.get()\n    assert transcription.message == \"expected text\"\n```\n\n### 2. Integration Test Pipeline\n\n```python\nasync def test_full_pipeline():\n    # Create all components\n    conversation = create_test_conversation()\n    \n    # Send test audio\n    conversation.receive_audio(test_audio_chunk)\n    \n    # Wait for response\n    response = await wait_for_audio_output(timeout=5)\n    \n    assert response is not None\n```\n\n### 3. Test Interrupts\n\n```python\nasync def test_interrupt():\n    conversation = create_test_conversation()\n    \n    # Start bot speaking\n    await conversation.agent.generate_response(\"Tell me a long story\")\n    \n    # Interrupt mid-response\n    await asyncio.sleep(1)  # Let it speak for 1 second\n    conversation.broadcast_interrupt()\n    \n    # Verify partial message in transcript\n    last_message = conversation.transcript.event_logs[-1]\n    assert last_message.text != full_expected_message\n```\n\n## Implementation Workflow\n\nWhen implementing a voice AI engine:\n\n1. 
**Start with Base Workers**: Implement the base worker pattern first\n2. **Add Transcriber**: Choose a provider and implement streaming transcription\n3. **Add Agent**: Implement LLM integration with streaming responses\n4. **Add Synthesizer**: Implement TTS with audio streaming\n5. **Connect Pipeline**: Wire all workers together with queues\n6. **Add Interrupts**: Implement the interrupt system\n7. **Add WebSocket**: Create WebSocket endpoint for client communication\n8. **Test Components**: Unit test each worker in isolation\n9. **Test Integration**: Test the full pipeline end-to-end\n10. **Add Error Handling**: Implement robust error handling and logging\n11. **Optimize**: Add rate limiting, monitoring, and performance optimizations\n\n## Related Skills\n\n- `@websocket-patterns` - For WebSocket implementation details\n- `@async-python` - For asyncio and async patterns\n- `@streaming-apis` - For streaming API integration\n- `@audio-processing` - For audio format conversion and processing\n- `@systematic-debugging` - For debugging complex async pipelines\n\n## Resources\n\n**Libraries**:\n- `asyncio` - Async programming\n- `websockets` - WebSocket client/server\n- `FastAPI` - WebSocket server framework\n- `pydub` - Audio manipulation\n- `numpy` - Audio data processing\n\n**API Providers**:\n- Transcription: Deepgram, AssemblyAI, Azure Speech, Google Cloud Speech\n- LLM: OpenAI, Google Gemini, Anthropic Claude\n- TTS: ElevenLabs, Azure TTS, Google Cloud TTS, Amazon Polly, Play.ht\n\n## Summary\n\nBuilding a voice AI engine requires:\n- ✅ Async worker pipeline for concurrent processing\n- ✅ Queue-based communication between components\n- ✅ Streaming at every stage (transcription, LLM, synthesis)\n- ✅ Interrupt system for natural conversations\n- ✅ Rate limiting for real-time audio playback\n- ✅ Multi-provider support for flexibility\n- ✅ Proper error handling and graceful shutdown\n\n**The key insight**: Everything must stream and everything must be interruptible 
for natural, real-time conversations.\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["voice","engine","development","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding"],"capabilities":["skill","source-sickn33","skill-voice-ai-engine-development","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/voice-ai-engine-development","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34404 github stars · SKILL.md body (23,062 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T00:51:57.058Z","embedding":null,"createdAt":"2026-04-18T21:47:19.248Z","updatedAt":"2026-04-22T00:51:57.058Z","lastSeenAt":"2026-04-22T00:51:57.058Z"}
'self.conversation.broadcast':1056 'self.conversation.is':1041 'self.create':402 'self.event':2119 'self.event_logs.append':2083,2095 'self.input':254,359,536 'self.input_queue.get':302,1826 'self.input_queue.put':388,400 'self.interrupted':1261,1278,1283 'self.interruptible_events.get':1084 'self.interruption':1255 'self.interruption_event.is':1295 'self.interruption_event.set':1279 'self.is':369,386,417,428,1251,1272 'self.output':262,365,540 'self.output_device.consume':886 'self.payload':1249 'self.process':308,1828 'self.synthesizer.terminate':1869 'self.transcriber.mute':1686 'self.transcriber.terminate':1867 'self.transcriber.unmute':1691 'self.transcript':544 'self.websocket':1881 'self.websocket.close':1883 'send':373,382,392,809,824,881,896,964,1123,1590,1608,1682,1739,1749,2176 'sender':492,2085,2097 'sender.bot':2098 'sender.human':2086,2111 'sent':861,873,943,959,1213,1709 'sentenc':450,578,589,591,981,1615,1617,1620,1629,1704 'sentence-by-sent':588,1614 'server':2414 'servic':147 'set':456,852,1088,1149,1260,1296 'show':1216 'shut':1858 'shutdown':1851,2499 'side':952 'signal':1280 'silenc':393 'silent':403 'simultan':219 'size':683 'skill':39,125,128,2364,2520 'skill-voice-ai-engine-development' 'small':1595 'solut':1573,1601,1673,1717,1786 'soon':575 'source-sickn33' 'speak':414,427,503,573,1025,1033,1043,1046,1659,1679,2214,2232 'specif':2542 'speech':73,85,399,472,478,668,678,825,905,918,1124,1153,1672,2429,2432 'speech-to-text':72 'spoken':1158,1160,1173 'stage':120,2471 'start':273,275,413,572,878,913,1024,1543,1678,1736,1754,2212,2262 'state':2063 'step':1021,1061,1115,1190 'stop':232,329,426,584,832,978,1020,1065,1103,1109,1130,1140,1152,1282,1864,2548 'stop_event.is':851,1148 'stori':2222 'str':437,726 'strategi':2122 'stream':18,69,117,165,235,464,490,568,627,747,773,1481,1623,1641,1770,1782,2007,2034,2050,2280,2289,2298,2381,2384,2468,2505 'streaming-api':2380 'streamingconvers':1532 'structur':711 'substitut':2538 'success':2560 
'summari':2449 'support':32,459,482,596,741,1302,2491 'synthes':196,639,659,810,1110,1412,1419,1509,1511,1541,1542,1593,1610,1904,1905,2293 'synthesi':24,86,830,1128,2474 'synthesis_result.chunk':845,1137,1734 'synthesis_result.get':862,1168 'synthesisresult':685,688,710,714 'synthesisresultswork':1117 'system':136,989,992,2314,2476 'systemat':2397 'systematic-debug':2396 'task':284,495,649,1099,1102,1108,2524 'tell':2218 'termin':297,327,1855 'test':2121,2125,2132,2161,2166,2174,2177,2181,2201,2206,2210,2325,2328,2334,2336,2544 'text':75,83,340,347,516,660,665,705,729,1105,1591,1628,1649,1907,2082,2087,2088,2094,2099,2100,2158 'text-to-speech':82 'think':1006,1176,1219 'threading.event':1257 'time':9,53,116,133,879,911,914,921,961,972,1727,1737,1752,1755,1763,1776,1921,1949,2485,2515 'time.time':880,912,1738,1753 'timeout':2193 'today':1012,1182,1225 'togeth':2305 'tomorrow':1014 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'track':1915 'transcrib':194,338,357,500,1321,1327,1368,1499,1501,1537,1538,1667,1675,1894,2133,2134,2274 'transcriber.output_queue.get':2154 'transcriber.send':2146 'transcriber.terminate':1809 'transcriberprovid':1330,1333 'transcript':19,76,158,348,368,435,513,625,1030,1038,2072,2152,2242,2281,2425,2472 'transcription.is':1058 'transcription.message':1896,2156 'transcriptionagentinput':539 'transcriptions.count':1918 'transcriptionswork':458,1027 'treat':2533 'tri':1081,1548,1794,1823 'true':282,419,448,874,877,1080,1186,1189,1248,1284,1286,1841,1984,1998,2051 'tts':23,161,749,757,1599,2295,2439,2442,2445 'typic':1475 'unclos':1769 'unit':2124,2327 'unknown':1367,1406,1466 'unmut':421 'updat':650,1193,1199 'url':1798 'usag':1773 'use':14,90,123,126,486,629,1476,1788,1801,2518 'user':64,170,520,1017,1023,1697,1912,2108 'valid':2543 
'valueerror':1365,1404,1464 'verifi':2238 'via':105,220 'vocod':172 'voic':2,12,33,47,56,134,138,144,150,173,186,746,768,1317,1418,1467,1472,1494,1496,2258,2452 'voice-ai-engine-develop':1 'voice-en':143 'voice_handler.create':1500,1505,1510 'voicehandl':1311,1498 'voiceprovid':1422,1425 'wait':890,1744,1870,2016,2184,2189 'weather':1008,1178,1221 'websocket':487,1470,1477,1487,1489,1490,1521,1778,1802,2317,2319,2366,2369,2409,2410,2413 'websocket-pattern':2365 'websocket.accept':1492 'websocket.iter':1557 'websocketdisconnect':1563 'websocketoutputdevic':1519 'websockets.connect':1797 'wire':2302 'without':935 'work':163,323,904,1000,1695 'worker':16,96,182,199,201,203,208,217,237,240,277,331,1545,1836,1846,1861,1866,2126,2265,2269,2304,2330,2457 'workflow':2254 'would':941,946 'wrap':1235 'ws':1520,1800 'x00':2143 'x01':2144 'x02':2145 'yield':637,694,1613,1624,1645,2052,2054","prices":[{"id":"30277500-5e09-4cca-accb-c17cbd15f348","listingId":"12f3e42f-8dd7-474c-a469-43ce9d2965a1","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:47:19.248Z"}],"sources":[{"listingId":"12f3e42f-8dd7-474c-a469-43ce9d2965a1","source":"github","sourceId":"sickn33/antigravity-awesome-skills/voice-ai-engine-development","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/voice-ai-engine-development","isPrimary":false,"firstSeenAt":"2026-04-18T21:47:19.248Z","lastSeenAt":"2026-04-22T00:51:57.058Z"}],"details":{"listingId":"12f3e42f-8dd7-474c-a469-43ce9d2965a1","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"voice-ai-engine-development","github":{"repo":"sickn33/antigravity-awesome
-skills","stars":34404,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-21T16:43:40Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"d6bdc75985de911d0a0a3736be7bc63cd7be0935","skill_md_path":"skills/voice-ai-engine-development/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/voice-ai-engine-development"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"voice-ai-engine-development","description":"Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling and multi-provider support"},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/voice-ai-engine-development"},"updatedAt":"2026-04-22T00:51:57.058Z"}}