{"id":"4aed6674-0b21-4835-9438-5107a095c8f8","shortId":"tdWeKY","kind":"skill","title":"transformers-js","tagline":"Run Hugging Face models in JavaScript or TypeScript with Transformers.js in Node.js or the browser.","description":"# Transformers.js - Machine Learning for JavaScript\n\nTransformers.js enables running state-of-the-art machine learning models directly in JavaScript, both in browsers and Node.js environments, with no server required.\n\n## When to Use This Skill\n\nUse this skill when you need to:\n- Run ML models for text analysis, generation, or translation in JavaScript\n- Perform image classification, object detection, or segmentation\n- Implement speech recognition or audio processing\n- Build multimodal AI applications (text-to-image, image-to-text, etc.)\n- Run models client-side in the browser without a backend\n\n## Installation\n\n### NPM Installation\n```bash\nnpm install @huggingface/transformers\n```\n\n### Browser Usage (CDN)\n```html\n<script type=\"module\">\n  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers';\n</script>\n```\n\n## Core Concepts\n\n### 1. Pipeline API\nThe pipeline API is the easiest way to use models. It groups together preprocessing, model inference, and postprocessing:\n\n```javascript\nimport { pipeline } from '@huggingface/transformers';\n\n// Create a pipeline for a specific task\nconst pipe = await pipeline('sentiment-analysis');\n\n// Use the pipeline\nconst result = await pipe('I love transformers!');\n// Output: [{ label: 'POSITIVE', score: 0.999817686 }]\n\n// IMPORTANT: Always dispose when done to free memory\nawait pipe.dispose();\n```\n\n**⚠️ Memory Management:** All pipelines must be disposed with `pipe.dispose()` when finished to prevent memory leaks. See examples in [Code Examples](./references/EXAMPLES.md) for cleanup patterns across different environments.\n\n### 2. 
Model Selection\nYou can specify a custom model as the second argument:\n\n```javascript\nconst pipe = await pipeline(\n  'sentiment-analysis',\n  'Xenova/bert-base-multilingual-uncased-sentiment'\n);\n```\n\n**Finding Models:**\n\nBrowse available Transformers.js models on Hugging Face Hub:\n- **All models**: https://huggingface.co/models?library=transformers.js&sort=trending\n- **By task**: Add `pipeline_tag` parameter\n  - Text generation: https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending\n  - Image classification: https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&sort=trending\n  - Speech recognition: https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js&sort=trending\n\n**Tip:** Filter by task type, sort by trending/downloads, and check model cards for performance metrics and usage examples.\n\n### 3. Device Selection\nChoose where to run the model:\n\n```javascript\n// Default backend (WASM in the browser, CPU in Node.js)\nconst pipe = await pipeline('sentiment-analysis', 'model-id');\n\n// Run on GPU (WebGPU - experimental)\nconst gpuPipe = await pipeline('sentiment-analysis', 'model-id', {\n  device: 'webgpu',\n});\n```\n\n### 4. Quantization Options\nControl model precision vs. 
performance:\n\n```javascript\n// Use quantized model (faster, smaller)\nconst pipe = await pipeline('sentiment-analysis', 'model-id', {\n  dtype: 'q4',  // Options: 'fp32', 'fp16', 'q8', 'q4'\n});\n```\n\n## Supported Tasks\n\n**Note:** All examples below show basic usage.\n\n### Natural Language Processing\n\n#### Text Classification\n```javascript\nconst classifier = await pipeline('text-classification');\nconst result = await classifier('This movie was amazing!');\n```\n\n#### Named Entity Recognition (NER)\n```javascript\nconst ner = await pipeline('token-classification');\nconst entities = await ner('My name is John and I live in New York.');\n```\n\n#### Question Answering\n```javascript\nconst qa = await pipeline('question-answering');\nconst answer = await qa({\n  question: 'What is the capital of France?',\n  context: 'Paris is the capital and largest city of France.'\n});\n```\n\n#### Text Generation\n```javascript\nconst generator = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX');\nconst text = await generator('Once upon a time', {\n  max_new_tokens: 100,\n  do_sample: true,  // enable sampling so temperature takes effect\n  temperature: 0.7\n});\n```\n\n**For streaming and chat:** See **[Text Generation Guide](./references/TEXT_GENERATION.md)** for:\n- Streaming token-by-token output with `TextStreamer`\n- Chat/conversation format with system/user/assistant roles\n- Generation parameters (temperature, top_k, top_p)\n- Browser and Node.js examples\n- React components and API endpoints\n\n#### Translation\n```javascript\nconst translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');\nconst output = await translator('Hello, how are you?', {\n  src_lang: 'eng_Latn',\n  tgt_lang: 'fra_Latn'\n});\n```\n\n#### Summarization\n```javascript\nconst summarizer = await pipeline('summarization');\nconst summary = await summarizer(longText, {\n  max_length: 100,\n  min_length: 30\n});\n```\n\n#### Zero-Shot Classification\n```javascript\nconst classifier = await 
pipeline('zero-shot-classification');\nconst result = await classifier('This is a story about sports.', ['politics', 'sports', 'technology']);\n```\n\n### Computer Vision\n\n#### Image Classification\n```javascript\nconst classifier = await pipeline('image-classification');\nconst result = await classifier('https://example.com/image.jpg');\n// Or a local file path (Node.js)\nconst localResult = await classifier('./image.jpg');\n```\n\n#### Object Detection\n```javascript\nconst detector = await pipeline('object-detection');\nconst objects = await detector('https://example.com/image.jpg');\n// Returns: [{ label: 'person', score: 0.95, box: { xmin, ymin, xmax, ymax } }, ...]\n```\n\n#### Image Segmentation\n```javascript\nconst segmenter = await pipeline('image-segmentation');\nconst segments = await segmenter('https://example.com/image.jpg');\n```\n\n#### Depth Estimation\n```javascript\nconst depthEstimator = await pipeline('depth-estimation');\nconst depth = await depthEstimator('https://example.com/image.jpg');\n```\n\n#### Zero-Shot Image Classification\n```javascript\nconst classifier = await pipeline('zero-shot-image-classification');\nconst result = await classifier('image.jpg', ['cat', 'dog', 'bird']);\n```\n\n### Audio Processing\n\n#### Automatic Speech Recognition\n```javascript\nconst transcriber = await pipeline('automatic-speech-recognition');\nconst result = await transcriber('audio.wav');\n// Returns: { text: 'transcribed text here' }\n```\n\n#### Audio Classification\n```javascript\nconst classifier = await pipeline('audio-classification');\nconst result = await classifier('audio.wav');\n```\n\n#### Text-to-Speech\n```javascript\nconst synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts');\nconst audio = await synthesizer('Hello, this is a test.', {\n  speaker_embeddings: speakerEmbeddings  // speaker x-vector embeddings, loaded separately (URL or Float32Array)\n});\n```\n\n### Multimodal\n\n#### Image-to-Text (Image Captioning)\n```javascript\nconst captioner = await pipeline('image-to-text');\nconst caption = await 
captioner('image.jpg');\n```\n\n#### Document Question Answering\n```javascript\nconst docQA = await pipeline('document-question-answering');\nconst answer = await docQA('document-image.jpg', 'What is the total amount?');\n```\n\n#### Zero-Shot Object Detection\n```javascript\nconst detector = await pipeline('zero-shot-object-detection');\nconst objects = await detector('image.jpg', ['person', 'car', 'tree']);\n```\n\n### Feature Extraction (Embeddings)\n\n```javascript\nconst extractor = await pipeline('feature-extraction');\nconst embeddings = await extractor('This is a sentence to embed.');\n// Returns: tensor of shape [1, sequence_length, hidden_size]\n\n// For sentence embeddings (mean pooling)\nconst sentenceExtractor = await pipeline('feature-extraction', 'onnx-community/all-MiniLM-L6-v2-ONNX');\nconst sentenceEmbeddings = await sentenceExtractor('Text to embed', { pooling: 'mean', normalize: true });\n```\n\n## Finding and Choosing Models\n\n### Browsing the Hugging Face Hub\n\nDiscover compatible Transformers.js models on Hugging Face Hub:\n\n**Base URL (all models):**\n```\nhttps://huggingface.co/models?library=transformers.js&sort=trending\n```\n\n**Filter by task** using the `pipeline_tag` parameter:\n\n| Task | URL |\n|------|-----|\n| **Text Generation** | https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending |\n| **Text Classification** | https://huggingface.co/models?pipeline_tag=text-classification&library=transformers.js&sort=trending |\n| **Translation** | https://huggingface.co/models?pipeline_tag=translation&library=transformers.js&sort=trending |\n| **Summarization** | https://huggingface.co/models?pipeline_tag=summarization&library=transformers.js&sort=trending |\n| **Question Answering** | https://huggingface.co/models?pipeline_tag=question-answering&library=transformers.js&sort=trending |\n| **Image Classification** | https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&sort=trending 
|\n| **Object Detection** | https://huggingface.co/models?pipeline_tag=object-detection&library=transformers.js&sort=trending |\n| **Image Segmentation** | https://huggingface.co/models?pipeline_tag=image-segmentation&library=transformers.js&sort=trending |\n| **Speech Recognition** | https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js&sort=trending |\n| **Audio Classification** | https://huggingface.co/models?pipeline_tag=audio-classification&library=transformers.js&sort=trending |\n| **Image-to-Text** | https://huggingface.co/models?pipeline_tag=image-to-text&library=transformers.js&sort=trending |\n| **Feature Extraction** | https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js&sort=trending |\n| **Zero-Shot Classification** | https://huggingface.co/models?pipeline_tag=zero-shot-classification&library=transformers.js&sort=trending |\n\n**Sort options:**\n- `&sort=trending` - Most popular recently\n- `&sort=downloads` - Most downloaded overall\n- `&sort=likes` - Most liked by community\n- `&sort=modified` - Recently updated\n\n### Choosing the Right Model\n\nConsider these factors when selecting a model:\n\n**1. Model Size**\n- **Small (< 100MB)**: Fast, suitable for browsers, limited accuracy\n- **Medium (100MB - 500MB)**: Balanced performance, good for most use cases\n- **Large (> 500MB)**: High accuracy, slower, better for Node.js or powerful devices\n\n**2. Quantization**\nModels are often available in different quantization levels:\n- `fp32` - Full precision (largest, most accurate)\n- `fp16` - Half precision (smaller, still accurate)\n- `q8` - 8-bit quantized (much smaller, slight accuracy loss)\n- `q4` - 4-bit quantized (smallest, noticeable accuracy loss)\n\n**3. Task Compatibility**\nCheck the model card for:\n- Supported tasks (some models support multiple tasks)\n- Input/output formats\n- Language support (multilingual vs. English-only)\n- License restrictions\n\n**4. 
Performance Metrics**\nModel cards typically show:\n- Accuracy scores\n- Benchmark results\n- Inference speed\n- Memory requirements\n\n### Example: Finding a Text Generation Model\n\n```javascript\n// 1. Visit: https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending\n\n// 2. Browse and select a model (e.g., onnx-community/gemma-3-270m-it-ONNX)\n\n// 3. Check model card for:\n//    - Model size: ~270M parameters\n//    - Quantization: q4 available\n//    - Language: English\n//    - Use case: Instruction-following chat\n\n// 4. Use the model:\nimport { pipeline } from '@huggingface/transformers';\n\nconst generator = await pipeline(\n  'text-generation',\n  'onnx-community/gemma-3-270m-it-ONNX',\n  { dtype: 'q4' } // Use quantized version for faster inference\n);\n\nconst output = await generator('Explain quantum computing in simple terms.', {\n  max_new_tokens: 100\n});\n\nawait generator.dispose();\n```\n\n### Tips for Model Selection\n\n1. **Start Small**: Test with a smaller model first, then upgrade if needed\n2. **Check ONNX Support**: Ensure the model has ONNX files (look for `onnx` folder in model repo)\n3. **Read Model Cards**: Model cards contain usage examples, limitations, and benchmarks\n4. **Test Locally**: Benchmark inference speed and memory usage in your environment\n5. **Community Models**: Look for models by `Xenova` (Transformers.js maintainer) or `onnx-community`\n6. 
**Version Pin**: Use specific git commits in production for stability:\n   ```javascript\n   const pipe = await pipeline('task', 'model-id', { revision: 'abc123' });\n   ```\n\n## Advanced Configuration\n\n### Environment Configuration (`env`)\n\nThe `env` object provides comprehensive control over Transformers.js execution, caching, and model loading.\n\n**Quick Overview:**\n\n```javascript\nimport { env } from '@huggingface/transformers';\n\n// View version\nconsole.log(env.version); // e.g., '3.8.1'\n\n// Common settings\nenv.allowRemoteModels = true;  // Allow downloading models from the Hugging Face Hub\nenv.allowLocalModels = false;  // Disallow loading models from the file system\nenv.localModelPath = '/models/'; // Local model directory\nenv.useFSCache = true;         // Cache models on disk (Node.js)\nenv.useBrowserCache = true;    // Cache models in browser\nenv.cacheDir = './.cache';     // Cache directory location\n```\n\n**Configuration Patterns:**\n\n```javascript\n// Development: Fast iteration with remote models\nenv.allowRemoteModels = true;\nenv.useFSCache = true;\n\n// Production: Local models only\nenv.allowRemoteModels = false;\nenv.allowLocalModels = true;\nenv.localModelPath = '/app/models/';\n\n// Custom CDN\nenv.remoteHost = 'https://cdn.example.com/models';\n\n// Disable caching (testing)\nenv.useFSCache = false;\nenv.useBrowserCache = false;\n```\n\nFor complete documentation on all configuration options, caching strategies, cache management, pre-downloading models, and more, see:\n\n**→ [Configuration Reference](./references/CONFIGURATION.md)**\n\n### Working with Tensors\n\n```javascript\nimport { AutoTokenizer, AutoModel } from '@huggingface/transformers';\n\n// Load tokenizer and model separately for more control\nconst tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');\nconst model = await AutoModel.from_pretrained('Xenova/bert-base-uncased');\n\n// Tokenize input\nconst inputs = await tokenizer('Hello world!');\n\n// Run model\nconst outputs = await 
model(inputs);\n```\n\n### Batch Processing\n\n```javascript\nconst classifier = await pipeline('sentiment-analysis');\n\n// Process multiple texts\nconst results = await classifier([\n  'I love this!',\n  'This is terrible.',\n  'It was okay.'\n]);\n```\n\n## Browser-Specific Considerations\n\n### WebGPU Usage\nWebGPU provides GPU acceleration in browsers:\n\n```javascript\nconst pipe = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX', {\n  device: 'webgpu',\n  dtype: 'fp32'\n});\n```\n\n**Note**: WebGPU is experimental. Check browser compatibility and file issues if problems occur.\n\n### WASM Performance\nDefault browser execution uses WASM:\n\n```javascript\n// Optimized for browsers with quantization\nconst pipe = await pipeline('sentiment-analysis', 'model-id', {\n  dtype: 'q8'  // or 'q4' for even smaller size\n});\n```\n\n### Progress Tracking & Loading Indicators\n\nModels can be large (ranging from a few MB to several GB) and consist of multiple files. Track download progress by passing a callback to the `pipeline()` function:\n\n```javascript\nimport { pipeline } from '@huggingface/transformers';\n\n// Track progress for each file\nconst fileProgress = {};\n\nfunction onProgress(info) {\n  console.log(`${info.status}: ${info.file}`);\n  \n  if (info.status === 'progress') {\n    fileProgress[info.file] = info.progress;\n    console.log(`${info.file}: ${info.progress.toFixed(1)}%`);\n  }\n  \n  if (info.status === 'done') {\n    console.log(`✓ ${info.file} complete`);\n  }\n}\n\n// Pass callback to pipeline\nconst classifier = await pipeline('sentiment-analysis', null, {\n  progress_callback: onProgress\n});\n```\n\n**Progress Info Properties:**\n\n```typescript\ninterface ProgressInfo {\n  status: 'initiate' | 'download' | 'progress' | 'done' | 'ready';\n  name: string;      // Model id or path\n  file: string;      // File being processed\n  progress?: number; // Percentage (0-100, only for 'progress' status)\n  loaded?: number;   
// Bytes downloaded (only for 'progress' status)\n  total?: number;    // Total bytes (only for 'progress' status)\n}\n```\n\nFor complete examples including browser UIs, React components, CLI progress bars, and retry logic, see:\n\n**→ [Pipeline Options - Progress Callback](./references/PIPELINE_OPTIONS.md#progress-callback)**\n\n## Error Handling\n\n```javascript\ntry {\n  const pipe = await pipeline('sentiment-analysis', 'model-id');\n  const result = await pipe('text to analyze');\n} catch (error) {\n  if (error.message.includes('fetch')) {\n    console.error('Model download failed. Check internet connection.');\n  } else if (error.message.includes('ONNX')) {\n    console.error('Model execution failed. Check model compatibility.');\n  } else {\n    console.error('Unknown error:', error);\n  }\n}\n```\n\n## Performance Tips\n\n1. **Reuse Pipelines**: Create pipeline once, reuse for multiple inferences\n2. **Use Quantization**: Start with `q8` or `q4` for faster inference\n3. **Batch Processing**: Process multiple inputs together when possible\n4. **Cache Models**: Models are cached automatically (see **[Caching Reference](./references/CACHE.md)** for details on browser Cache API, Node.js filesystem cache, and custom implementations)\n5. **WebGPU for Large Models**: Use WebGPU for models that benefit from GPU acceleration\n6. **Prune Context**: For text generation, limit `max_new_tokens` to avoid memory issues\n7. 
**Clean Up Resources**: Call `pipe.dispose()` when done to free memory\n\n## Memory Management\n\n**IMPORTANT:** Always call `pipe.dispose()` when finished to prevent memory leaks.\n\n```javascript\nconst pipe = await pipeline('sentiment-analysis');\nconst result = await pipe('Great product!');\nawait pipe.dispose();  // ✓ Free memory (100MB - several GB per model)\n```\n\n**When to dispose:**\n- Application shutdown or component unmount\n- Before loading a different model\n- After batch processing in long-running apps\n\nModels consume significant memory and hold GPU/CPU resources. Disposal is critical for browser memory limits and server stability.\n\nFor detailed patterns (React cleanup, servers, browser), see **[Code Examples](./references/EXAMPLES.md)**\n\n## Troubleshooting\n\n### Model Not Found\n- Verify model exists on Hugging Face Hub\n- Check model name spelling\n- Ensure model has ONNX files (look for `onnx` folder in model repo)\n\n### Memory Issues\n- Use smaller models or quantized versions (`dtype: 'q4'`)\n- Reduce batch size\n- Limit sequence length with `max_length`\n\n### WebGPU Errors\n- Check browser compatibility (Chrome 113+, Edge 113+)\n- Try `dtype: 'fp16'` if `fp32` fails\n- Fall back to WASM if WebGPU unavailable\n\n## Reference Documentation\n\n### This Skill\n- **[Pipeline Options](./references/PIPELINE_OPTIONS.md)** - Configure `pipeline()` with `progress_callback`, `device`, `dtype`, etc.\n- **[Configuration Reference](./references/CONFIGURATION.md)** - Global `env` configuration for caching and model loading\n- **[Caching Reference](./references/CACHE.md)** - Browser Cache API, Node.js filesystem cache, and custom cache implementations\n- **[Text Generation Guide](./references/TEXT_GENERATION.md)** - Streaming, chat format, and generation parameters\n- **[Model Architectures](./references/MODEL_ARCHITECTURES.md)** - Supported models and selection tips\n- **[Code Examples](./references/EXAMPLES.md)** - Real-world implementations for 
different runtimes\n\n### Official Transformers.js\n- Official docs: https://huggingface.co/docs/transformers.js\n- API reference: https://huggingface.co/docs/transformers.js/api/pipelines\n- Model hub: https://huggingface.co/models?library=transformers.js\n- GitHub: https://github.com/huggingface/transformers.js\n- Examples: https://github.com/huggingface/transformers.js/tree/main/examples\n\n## Best Practices\n\n1. **Always Dispose Pipelines**: Call `pipe.dispose()` when done - critical for preventing memory leaks\n2. **Start with Pipelines**: Use the pipeline API unless you need fine-grained control\n3. **Test Locally First**: Test models with small inputs before deploying\n4. **Monitor Model Sizes**: Be aware of model download sizes for web applications\n5. **Handle Loading States**: Show progress indicators for better UX\n6. **Version Pin**: Pin specific model versions for production stability\n7. **Error Boundaries**: Always wrap pipeline calls in try-catch blocks\n8. **Progressive Enhancement**: Provide fallbacks for unsupported browsers\n9. **Reuse Models**: Load once, use many times - don't recreate pipelines unnecessarily\n10. 
**Graceful Shutdown**: Dispose models on SIGTERM/SIGINT in servers\n\n## Quick Reference: Task IDs\n\n| Task | Task ID |\n|------|---------|\n| Text classification | `text-classification` or `sentiment-analysis` |\n| Token classification | `token-classification` or `ner` |\n| Question answering | `question-answering` |\n| Fill mask | `fill-mask` |\n| Summarization | `summarization` |\n| Translation | `translation` |\n| Text generation | `text-generation` |\n| Text-to-text generation | `text2text-generation` |\n| Zero-shot classification | `zero-shot-classification` |\n| Image classification | `image-classification` |\n| Image segmentation | `image-segmentation` |\n| Object detection | `object-detection` |\n| Depth estimation | `depth-estimation` |\n| Image-to-image | `image-to-image` |\n| Zero-shot image classification | `zero-shot-image-classification` |\n| Zero-shot object detection | `zero-shot-object-detection` |\n| Automatic speech recognition | `automatic-speech-recognition` |\n| Audio classification | `audio-classification` |\n| Text-to-speech | `text-to-speech` or `text-to-audio` |\n| Image-to-text | `image-to-text` |\n| Document question answering | `document-question-answering` |\n| Feature extraction | `feature-extraction` |\n| Sentence similarity | `sentence-similarity` |\n\n---\n\nThis skill enables you to integrate state-of-the-art machine learning capabilities directly into JavaScript applications without requiring separate ML servers or Python environments.\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are 
missing.","tags":["transformers","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity-skills"],"capabilities":["skill","source-sickn33","skill-transformers-js","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/transformers-js","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34460 github stars · SKILL.md body (22,149 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T06:52:02.898Z","embedding":null,"createdAt":"2026-04-18T21:46:29.940Z","updatedAt":"2026-04-22T06:52:02.898Z","lastSeenAt":"2026-04-22T06:52:02.898Z","tsv":"'-100':1693 '/.cache':1376 '/all-minilm-l6-v2-onnx':873 '/app/models':1402 '/docs/transformers.js':2106 '/docs/transformers.js/api/pipelines':2111 '/gemma-3-270m-it-onnx':463,1153,1192,1536 '/huggingface/transformers.js':2120 '/huggingface/transformers.js/tree/main/examples':2124 '/image.jpg'');':603,629,656,673 '/models':1358 '/models'';':1408 '/models?library=transformers.js':2116 '/models?library=transformers.js&sort=trending':249,908 
'/models?pipeline_tag=audio-classification&library=transformers.js&sort=trending':966 '/models?pipeline_tag=automatic-speech-recognition&library=transformers.js&sort=trending':270,961 '/models?pipeline_tag=feature-extraction&library=transformers.js&sort=trending':978 '/models?pipeline_tag=image-classification&library=transformers.js&sort=trending':265,946 '/models?pipeline_tag=image-segmentation&library=transformers.js&sort=trending':956 '/models?pipeline_tag=image-to-text&library=transformers.js&sort=trending':973 '/models?pipeline_tag=object-detection&library=transformers.js&sort=trending':951 '/models?pipeline_tag=question-answering&library=transformers.js&sort=trending':941 '/models?pipeline_tag=summarization&library=transformers.js&sort=trending':936 '/models?pipeline_tag=text-classification&library=transformers.js&sort=trending':928 '/models?pipeline_tag=text-generation&library=transformers.js&sort=trending':260,923,1142 '/models?pipeline_tag=translation&library=transformers.js&sort=trending':932 '/models?pipeline_tag=zero-shot-classification&library=transformers.js&sort=trending':985 '/references/cache.md':1828,2061 '/references/configuration.md':1436,2050 '/references/examples.md':206,1964,2092 '/references/model_architectures.md':2084 '/references/pipeline_options.md':1733,2039 '/references/text_generation.md':486,2075 '0':1692 '0.7':477 '0.95':634 '0.999817686':175 '1':121,853,1019,1138,1221,1644,1788,2127 '10':2232 '100':475,555,1214 '100mb':1023,1031,1910 '113':2017,2019 '2':213,1051,1143,1234,1798,2140 '270m':1161 '3':289,1090,1154,1251,1809,2155 '3.8.1':1341 '30':558 '4':332,1083,1116,1174,1263,1818,2166 '5':1275,1841,2179 '500mb':1032,1041 '6':1289,1855,2189 '7':1869,2199 '8':1074,2211 '9':2219 'abc123':1310 'acceler':1522,1854 'accur':1066,1072 'accuraci':1029,1043,1080,1088,1123 'across':210 'add':252 'advanc':1311 'ai':86 'alway':177,1883,2128,2202 'amaz':392 'amount':804 'analysi':65,160,233,311,326,352,1496,1573,1661,1747,1899,2256 'analyz':1757 
'answer':420,428,430,785,794,796,938,2265,2268,2382,2386 'api':123,126,515,1834,2064,2107,2147 'app':1935 'applic':87,1918,2178,2414 'architectur':2083 'argument':225 'art':31,2407 'ask':2456 'audio':82,697,721,729,751,962,2354,2357,2371 'audio-classif':728,2356 'audio.wav':715,735 'automat':699,708,1824,2347,2351 'automatic-speech-recognit':707,2350 'automodel':1443 'automodel.from':1466 'autotoken':1442 'autotokenizer.from':1457 'avail':238,1056,1165 'avoid':1866 'await':156,166,184,229,307,322,348,380,387,400,407,424,431,455,466,521,527,545,550,566,574,592,599,610,618,625,645,652,662,669,682,691,705,713,726,733,743,752,772,780,789,797,813,822,834,841,865,876,1184,1203,1215,1303,1456,1465,1476,1484,1492,1502,1528,1569,1657,1743,1753,1895,1902,1906 'awar':2171 'back':2027 'backend':107 'balanc':1033 'bar':1724 'base':902,1461,1470 'bash':111 'basic':370 'batch':1487,1810,1929,2003 'benchmark':1125,1262,1266 'benefit':1851 'bert':1460,1469 'bert-base-uncas':1459,1468 'best':2125 'better':1045,2187 'bird':696 'bit':1075,1084 'block':2210 'boundari':2201,2464 'box':635 'brows':237,889,1144 'browser':18,40,104,115,508,1027,1374,1514,1524,1546,1557,1564,1718,1832,1948,1960,2014,2062,2218 'browser-specif':1513 'build':84 'byte':1700,1709 'cach':1325,1364,1371,1377,1410,1423,1425,1819,1823,1826,1833,1837,2055,2059,2063,2067,2070 'call':1873,1884,2131,2205 'callback':1612,1652,1664,1732,1736,2044 'capabl':2410 'capit':437,444 'caption':768,771,779,781 'car':826 'card':282,1096,1120,1157,1254,1256 'case':1039,1169 'cat':694 'catch':1758,2209 'cdn':117,1404 'cdn.example.com':1407 'cdn.example.com/models'';':1406 'chat':481,1173,2077 'chat/conversation':496 'check':280,1093,1155,1235,1545,1767,1778,1976,2013 'choos':292,887,1008 'chrome':2016 'citi':447 'clarif':2458 'classif':73,262,376,384,404,562,571,588,596,678,688,722,730,925,943,963,982,2249,2252,2258,2261,2294,2298,2300,2303,2331,2336,2355,2358 'classifi':379,388,565,575,591,600,611,681,692,725,734,1491,1503,1656 
# transformers-js

Run Hugging Face models in JavaScript or TypeScript with Transformers.js in Node.js or the browser.

## Listing Details

- **License:** Apache-2.0 (skill); repository is MIT-licensed
- **Price:** Free
- **Source:** https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/transformers-js
- **Listing:** https://skills.sh/sickn33/antigravity-awesome-skills/transformers-js

## Compatibility

Requires Node.js 18+ or a modern browser with ES module support. WebGPU support requires a compatible browser/environment. Internet access is needed to download models from the Hugging Face Hub (optional when using local models).