{"id":"2881759f-ec48-4da6-9c9b-3ee7f241dcac","shortId":"5wNMt2","kind":"skill","title":"hugging-face-vision-trainer","tagline":"Train or fine-tune vision models on Hugging Face Jobs for detection, classification, and SAM or SAM2 segmentation.","description":"# Vision Model Training on Hugging Face Jobs\n\nTrain object detection, image classification, and SAM/SAM2 segmentation models on managed cloud GPUs. No local GPU setup required—results are automatically saved to the Hugging Face Hub.\n\n## When to Use This Skill\n\nUse this skill when users want to:\n- Fine-tune object detection models (D-FINE, RT-DETR v2, DETR, YOLOS) on cloud GPUs or local\n- Fine-tune image classification models (timm: MobileNetV3, MobileViT, ResNet, ViT/DINOv3, or any Transformers classifier) on cloud GPUs or local\n- Fine-tune SAM or SAM2 models for segmentation / image matting using bbox or point prompts\n- Train bounding-box detectors on custom datasets\n- Train image classifiers on custom datasets\n- Train segmentation models on custom mask datasets with prompts\n- Run vision training jobs on Hugging Face Jobs infrastructure\n- Ensure trained vision models are permanently saved to the Hub\n\n## Related Skills\n\n- **`hugging-face-jobs`** — General HF Jobs infrastructure: token authentication, hardware flavors, timeout management, cost estimation, secrets, environment variables, scheduled jobs, and result persistence. **Refer to the Jobs skill for any non-training-specific Jobs questions** (e.g., \"how do secrets work?\", \"what hardware is available?\", \"how do I pass tokens?\").\n- **`hugging-face-model-trainer`** — TRL-based language model training (SFT, DPO, GRPO). Use that skill for text/language model fine-tuning.\n\n## Local Script Execution\n\nHelper scripts use PEP 723 inline dependencies. 
Run them with `uv run`:\n```bash\nuv run scripts/dataset_inspector.py --dataset username/dataset-name --split train\nuv run scripts/estimate_cost.py --help\n```\n\n## Prerequisites Checklist\n\nBefore starting any training job, verify:\n\n### Account & Authentication\n- Hugging Face Account with [Pro](https://hf.co/pro), [Team](https://hf.co/enterprise), or [Enterprise](https://hf.co/enterprise) plan (Jobs require paid plan)\n- Authenticated login: Check with `hf_whoami()` (tool) or `hf auth whoami` (terminal)\n- Token has **write** permissions\n- **MUST pass token in job secrets** — see directive #1 below for syntax (MCP tool vs Python API)\n\n### Dataset Requirements — Object Detection\n- Dataset must exist on Hub\n- Annotations must use the `objects` column with `bbox`, `category` (and optionally `area`) sub-fields\n- Bboxes can be in **xywh (COCO)** or **xyxy (Pascal VOC)** format — auto-detected and converted\n- Categories can be **integers or strings** — strings are auto-remapped to integer IDs\n- `image_id` column is **optional** — generated automatically if missing\n- **ALWAYS validate unknown datasets** before GPU training (see Dataset Validation section)\n\n### Dataset Requirements — Image Classification\n- Dataset must exist on Hub\n- Must have an **`image` column** (PIL images) and a **`label` column** (integer class IDs or strings)\n- The label column can be `ClassLabel` type (with names) or plain integers/strings — strings are auto-remapped\n- Common column names auto-detected: `label`, `labels`, `class`, `fine_label`\n- **ALWAYS validate unknown datasets** before GPU training (see Dataset Validation section)\n\n### Dataset Requirements — SAM/SAM2 Segmentation\n- Dataset must exist on Hub\n- Must have an **`image` column** (PIL images) and a **`mask` column** (binary ground-truth segmentation mask)\n- Must have a **prompt** — either:\n  - A **`prompt` column** with JSON containing `{\"bbox\": [x0,y0,x1,y1]}` or `{\"point\": [x,y]}`\n  - OR a dedicated **`bbox`** column with `[x0,y0,x1,y1]` values\n  - OR a dedicated **`point`** column with `[x,y]` or `[[x,y],...]` values\n- Bboxes should be in **xyxy** format (absolute pixel coordinates)\n- Example dataset: `merve/MicroMat-mini` (image matting with bbox prompts)\n- **ALWAYS validate unknown datasets** before GPU training (see Dataset Validation section)\n\n### Critical Settings\n- **Timeout must exceed expected training time** — Default 30 min is TOO SHORT. See directive #5 for recommended values.\n- **Hub push must be enabled** — `push_to_hub=True`, `hub_model_id=\"username/model-name\"`, token in `secrets`\n\n## Dataset Validation\n\n**Validate dataset format BEFORE launching GPU training to prevent the #1 cause of training failures: format mismatches.**\n\n**ALWAYS validate for** unknown/custom datasets or any dataset you haven't trained with before. 
**Skip for** `cppe-5` (the default in the training script).\n\n### Running the Inspector\n\n**Option 1: Via HF Jobs (recommended — avoids local SSL/dependency issues):**\n```python\nhf_jobs(\"uv\", {\n    \"script\": \"path/to/dataset_inspector.py\",\n    \"script_args\": [\"--dataset\", \"username/dataset-name\", \"--split\", \"train\"]\n})\n```\n\n**Option 2: Locally:**\n```bash\nuv run scripts/dataset_inspector.py --dataset username/dataset-name --split train\n```\n\n**Option 3: Via `HfApi().run_uv_job()` (if hf_jobs MCP unavailable):**\n```python\nfrom huggingface_hub import HfApi\napi = HfApi()\napi.run_uv_job(\n    script=\"scripts/dataset_inspector.py\",\n    script_args=[\"--dataset\", \"username/dataset-name\", \"--split\", \"train\"],\n    flavor=\"cpu-basic\",\n    timeout=300,\n)\n```\n\n### Reading Results\n\n- **`✓ READY`** — Dataset is compatible, use directly\n- **`✗ NEEDS FORMATTING`** — Needs preprocessing (mapping code provided in output)\n\n## Automatic Bbox Preprocessing\n\nThe object detection training script (`scripts/object_detection_training.py`) automatically handles bbox format detection (xyxy→xywh conversion), bbox sanitization, `image_id` generation, string category→integer remapping, and dataset truncation. **No manual preprocessing needed** — just ensure the dataset has `objects.bbox` and `objects.category` columns.\n\n## Training workflow\n\nCopy this checklist and track progress:\n\n```\nTraining Progress:\n- [ ] Step 1: Verify prerequisites (account, token, dataset)\n- [ ] Step 2: Validate dataset format (run dataset_inspector.py)\n- [ ] Step 3: Ask user about dataset size and validation split\n- [ ] Step 4: Prepare training script (OD: scripts/object_detection_training.py, IC: scripts/image_classification_training.py, SAM: scripts/sam_segmentation_training.py)\n- [ ] Step 5: Save script locally, submit job, and report details\n```\n\n**Step 1: Verify prerequisites**\n\nFollow the Prerequisites Checklist above.\n\n**Step 2: Validate dataset**\n\nRun the dataset inspector BEFORE spending GPU time. 
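If you just want a quick manual spot-check of an object detection dataset, a minimal sketch with the `datasets` library (the dataset ID is a placeholder) looks like this:\n\n```python\nfrom datasets import load_dataset\n\n# Stream a single row to verify the expected columns without downloading everything\nds = load_dataset(\"username/dataset-name\", split=\"train\", streaming=True)\nrow = next(iter(ds))\nassert \"image\" in row and \"objects\" in row, \"missing image/objects columns\"\nprint(row[\"objects\"][\"bbox\"][:2], row[\"objects\"][\"category\"][:2])\n```\n\n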
See \"Dataset Validation\" section above.\n\n**Step 3: Ask user preferences**\n\nALWAYS use the AskUserQuestion tool with option-style format:\n\n```python\nAskUserQuestion({\n    \"questions\": [\n        {\n            \"question\": \"Do you want to run a quick test with a subset of the data first?\",\n            \"header\": \"Dataset Size\",\n            \"options\": [\n                {\"label\": \"Quick test run (10% of data)\", \"description\": \"Faster, cheaper (~30-60 min, ~$2-5) to validate setup\"},\n                {\"label\": \"Full dataset (Recommended)\", \"description\": \"Complete training for best model quality\"}\n            ],\n            \"multiSelect\": false\n        },\n        {\n            \"question\": \"Do you want to create a validation split from the training data?\",\n            \"header\": \"Split data\",\n            \"options\": [\n                {\"label\": \"Yes (Recommended)\", \"description\": \"Automatically split 15% of training data for validation\"},\n                {\"label\": \"No\", \"description\": \"Use existing validation split from dataset\"}\n            ],\n            \"multiSelect\": false\n        },\n        {\n            \"question\": \"Which GPU hardware do you want to use?\",\n            \"header\": \"Hardware Flavor\",\n            \"options\": [\n                {\"label\": \"t4-small ($0.40/hr)\", \"description\": \"1x T4, 16 GB VRAM — sufficient for all OD models under 100M params\"},\n                {\"label\": \"l4x1 ($0.80/hr)\", \"description\": \"1x L4, 24 GB VRAM — more headroom for large images or batch sizes\"},\n                {\"label\": \"a10g-large ($1.50/hr)\", \"description\": \"1x A10G, 24 GB VRAM — faster training, more CPU/RAM\"},\n                {\"label\": \"a100-large ($2.50/hr)\", \"description\": \"1x A100, 80 GB VRAM — fastest, for very large datasets or image sizes\"}\n            ],\n            \"multiSelect\": false\n        }\n    ]\n})\n```\n\n**Step 4: Prepare training script**\n\nFor object detection, use [scripts/object_detection_training.py](scripts/object_detection_training.py) as the production-ready template. For image classification, use [scripts/image_classification_training.py](scripts/image_classification_training.py). For SAM/SAM2 segmentation, use [scripts/sam_segmentation_training.py](scripts/sam_segmentation_training.py). All scripts use `HfArgumentParser` — all configuration is passed via CLI arguments in `script_args`, NOT by editing Python variables. For timm model details, see [references/timm_trainer.md](references/timm_trainer.md). For SAM2 training details, see [references/finetune_sam2_trainer.md](references/finetune_sam2_trainer.md).\n\n**Step 5: Save script, submit job, and report**\n\n1. **Save the script locally** to `submitted_jobs/` in the workspace root (create if needed) with a descriptive name like `training_<dataset>_<YYYYMMDD_HHMMSS>.py`. Tell the user the path.\n2. **Submit** using `hf_jobs` MCP tool (preferred) or `HfApi().run_uv_job()` — see directive #1 for both methods. Pass all config via `script_args`.\n3. **Report** the job ID (from `.id` attribute), monitoring URL, Trackio dashboard (`https://huggingface.co/spaces/{username}/trackio`), expected time, and estimated cost.\n4. **Wait for user** to request status checks — don't poll automatically. Training jobs run asynchronously and can take hours.\n\n## Critical directives\n\nThese rules prevent common failures. Follow them exactly.\n\n### 1. 
Job submission: `hf_jobs` MCP tool vs Python API\n\n**`hf_jobs()` is an MCP tool, NOT a Python function.** Do NOT try to import it from `huggingface_hub`. Call it as a tool:\n\n```\nhf_jobs(\"uv\", {\"script\": training_script_content, \"flavor\": \"a10g-large\", \"timeout\": \"4h\", \"secrets\": {\"HF_TOKEN\": \"$HF_TOKEN\"}})\n```\n\n**If `hf_jobs` MCP tool is unavailable**, use the Python API directly:\n\n```python\nfrom huggingface_hub import HfApi, get_token\napi = HfApi()\njob_info = api.run_uv_job(\n    script=\"path/to/training_script.py\",  # file PATH, NOT content\n    script_args=[\"--dataset_name\", \"cppe-5\", ...],\n    flavor=\"a10g-large\",\n    timeout=14400,  # seconds (4 hours)\n    env={\"PYTHONUNBUFFERED\": \"1\"},\n    secrets={\"HF_TOKEN\": get_token()},  # MUST use get_token(), NOT \"$HF_TOKEN\"\n)\nprint(f\"Job ID: {job_info.id}\")\n```\n\n**Critical differences between the two methods:**\n\n| | `hf_jobs` MCP tool | `HfApi().run_uv_job()` |\n|---|---|---|\n| `script` param | Python code string or URL (NOT local paths) | File path to `.py` file (NOT content) |\n| Token in secrets | `\"$HF_TOKEN\"` (auto-replaced) | `get_token()` (actual token value) |\n| Timeout format | String (`\"4h\"`) | Seconds (`14400`) |\n\n**Rules for both methods:**\n- The training script MUST include PEP 723 inline metadata with dependencies\n- Do NOT use `image` or `command` parameters (those belong to `run_job()`, not `run_uv_job()`)\n\n### 2. Authentication via job secrets + explicit hub_token injection\n\n**Job config** MUST include the token in secrets — syntax depends on submission method (see table above).\n\n**Training script requirement:** The Transformers `Trainer` calls `create_repo(token=self.args.hub_token)` during `__init__()` when `push_to_hub=True`. The training script MUST inject `HF_TOKEN` into `training_args.hub_token` AFTER parsing args but BEFORE creating the `Trainer`. The template `scripts/object_detection_training.py` already includes this:\n\n```python\nhf_token = os.environ.get(\"HF_TOKEN\")\nif training_args.push_to_hub and not training_args.hub_token:\n    if hf_token:\n        training_args.hub_token = hf_token\n```\n\nIf you write a custom script, you MUST include this token injection before the `Trainer(...)` call.\n\n- Do NOT call `login()` in custom scripts unless replicating the full pattern from `scripts/object_detection_training.py`\n- Do NOT rely on implicit token resolution (`hub_token=None`) — unreliable in Jobs\n- See the `hugging-face-jobs` skill → *Token Usage Guide* for full details\n\n### 3. JobInfo attribute\n\nAccess the job identifier using `.id` (NOT `.job_id` or `.name` — these don't exist):\n\n```python\njob_info = api.run_uv_job(...)  # or hf_jobs(\"uv\", {...})\njob_id = job_info.id  # Correct -- returns string like \"687fb701029421ae5549d998\"\n```\n\n### 4. Required training flags and HfArgumentParser boolean syntax\n\n`scripts/object_detection_training.py` uses `HfArgumentParser` — all config is passed via `script_args`. Boolean arguments have two syntaxes:\n\n- **`bool` fields** (e.g., `push_to_hub`, `do_train`): Use as bare flags (`--push_to_hub`) or negate with `--no_` prefix (`--no_remove_unused_columns`)\n- **`Optional[bool]` fields** (e.g., `greater_is_better`): MUST pass explicit value (`--greater_is_better True`). 
Bare `--greater_is_better` causes `error: expected one argument`\n\nRequired flags for object detection:\n\n```\n--no_remove_unused_columns          # MUST: preserves image column for pixel_values\n--no_eval_do_concat_batches         # MUST: images have different numbers of target boxes\n--push_to_hub                       # MUST: environment is ephemeral\n--hub_model_id username/model-name\n--metric_for_best_model eval_map\n--greater_is_better True            # MUST pass \"True\" explicitly (Optional[bool])\n--do_train\n--do_eval\n```\n\nRequired flags for image classification:\n\n```\n--no_remove_unused_columns          # MUST: preserves image column for pixel_values\n--push_to_hub                       # MUST: environment is ephemeral\n--hub_model_id username/model-name\n--metric_for_best_model eval_accuracy\n--greater_is_better True            # MUST pass \"True\" explicitly (Optional[bool])\n--do_train\n--do_eval\n```\n\nRequired flags for SAM/SAM2 segmentation:\n\n```\n--remove_unused_columns False       # MUST: preserves input_boxes/input_points\n--push_to_hub                       # MUST: environment is ephemeral\n--hub_model_id username/model-name\n--do_train\n--prompt_type bbox                  # or \"point\"\n--dataloader_pin_memory False       # MUST: avoids pin_memory issues with custom collator\n```\n\n### 5. Timeout management\n\nDefault 30 min is TOO SHORT for object detection. Set minimum 2-4 hours. Add 30% buffer for model loading, preprocessing, and Hub push.\n\n| Scenario | Timeout |\n|----------|---------|\n| Quick test (100-200 images, 5-10 epochs) | 1h |\n| Development (500-1K images, 15-20 epochs) | 2-3h |\n| Production (1K-5K images, 30 epochs) | 4-6h |\n| Large dataset (5K+ images) | 6-12h |\n\n### 6. Trackio monitoring\n\nTrackio is **always enabled** in the object detection training script — it calls `trackio.init()` and `trackio.finish()` automatically. No need to pass `--report_to trackio`. The project name is taken from `--output_dir` and the run name from `--run_name`. For image classification, pass `--report_to trackio` in `TrainingArguments`.\n\nDashboard at: `https://huggingface.co/spaces/{username}/trackio`\n\n## Model & hardware selection\n\n### Recommended object detection models\n\n| Model | Params | Use case |\n|-------|--------|----------|\n| `ustc-community/dfine-small-coco` | 10.4M | Best starting point — fast, cheap, SOTA quality |\n| `PekingU/rtdetr_v2_r18vd` | 20.2M | Lightweight real-time detector |\n| `ustc-community/dfine-large-coco` | 31.4M | Higher accuracy, still efficient |\n| `PekingU/rtdetr_v2_r50vd` | 43M | Strong real-time baseline |\n| `ustc-community/dfine-xlarge-obj365` | 63.5M | Best accuracy (pretrained on Objects365) |\n| `PekingU/rtdetr_v2_r101vd` | 76M | Largest RT-DETR v2 variant |\n\nStart with `ustc-community/dfine-small-coco` for fast iteration. Move to D-FINE Large or RT-DETR v2 R50 for better accuracy.\n\n### Recommended image classification models\n\nAll `timm/` models work out of the box via `AutoModelForImageClassification` (loaded as `TimmWrapperForImageClassification`). 
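For example, loading a timm checkpoint with a classification head sized for a new dataset might look like this (a hedged sketch; the 101-class head and `ignore_mismatched_sizes` flag are illustrative assumptions, not the template's exact code):\n\n```python\nfrom transformers import AutoModelForImageClassification\n\n# Replace the pretrained ImageNet head with one sized for the target label set\nmodel = AutoModelForImageClassification.from_pretrained(\n    \"timm/mobilenetv3_small_100.lamb_in1k\",\n    num_labels=101,\n    ignore_mismatched_sizes=True,\n)\n```\n\n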
See [references/timm_trainer.md](references/timm_trainer.md) for details.\n\n| Model | Params | Use case |\n|-------|--------|----------|\n| `timm/mobilenetv3_small_100.lamb_in1k` | 2.5M | Ultra-lightweight — mobile/edge, fastest training |\n| `timm/mobilevit_s.cvnets_in1k` | 5.6M | Mobile transformer — good accuracy/speed trade-off |\n| `timm/resnet50.a1_in1k` | 25.6M | Strong CNN baseline — reliable, well-studied |\n| `timm/vit_base_patch16_dinov3.lvd1689m` | 86.6M | Best accuracy — DINOv3 self-supervised ViT |\n\nStart with `timm/mobilenetv3_small_100.lamb_in1k` for fast iteration. Move to `timm/resnet50.a1_in1k` or `timm/vit_base_patch16_dinov3.lvd1689m` for better accuracy.\n\n### Recommended SAM/SAM2 segmentation models\n\n| Model | Params | Use case |\n|-------|--------|----------|\n| `facebook/sam2.1-hiera-tiny` | 38.9M | Fastest SAM2 — good for quick experiments |\n| `facebook/sam2.1-hiera-small` | 46.0M | Best starting point — good quality/speed balance |\n| `facebook/sam2.1-hiera-base-plus` | 80.8M | Higher capacity for complex segmentation |\n| `facebook/sam2.1-hiera-large` | 224.4M | Best SAM2 accuracy — requires more VRAM |\n| `facebook/sam-vit-base` | 93.7M | Original SAM — ViT-B backbone |\n| `facebook/sam-vit-large` | 312.3M | Original SAM — ViT-L backbone |\n| `facebook/sam-vit-huge` | 641.1M | Original SAM — ViT-H, best SAM v1 accuracy |\n\nStart with `facebook/sam2.1-hiera-small` for fast iteration. SAM2 models are generally more efficient than SAM v1 at similar quality. Only the mask decoder is trained by default (vision and prompt encoders are frozen).\n\n### Hardware recommendation\n\nAll recommended OD and IC models are under 100M params — **`t4-small` (16 GB VRAM, $0.40/hr) is sufficient for all of them.** Image classification models are generally smaller and faster than object detection models — `t4-small` handles even ViT-Base comfortably. For SAM2 models up to `hiera-base-plus`, `t4-small` is sufficient since only the mask decoder is trained. For `sam2.1-hiera-large` or SAM v1 models, use `l4x1` or `a10g-large`. Only upgrade if you hit OOM from large batch sizes — reduce batch size first before switching hardware. Common upgrade path: `t4-small` → `l4x1` ($0.80/hr, 24 GB) → `a10g-large` ($1.50/hr, 24 GB).\n\nFor full hardware flavor list: refer to the `hugging-face-jobs` skill. For cost estimation: run `scripts/estimate_cost.py`.\n\n## Quick start — Object Detection\n\nThe `script_args` below are the same for both submission methods. 
See directive #1 for the critical differences between them.\n\n```python\nOD_SCRIPT_ARGS = [\n    \"--model_name_or_path\", \"ustc-community/dfine-small-coco\",\n    \"--dataset_name\", \"cppe-5\",\n    \"--image_square_size\", \"640\",\n    \"--output_dir\", \"dfine_finetuned\",\n    \"--num_train_epochs\", \"30\",\n    \"--per_device_train_batch_size\", \"8\",\n    \"--learning_rate\", \"5e-5\",\n    \"--eval_strategy\", \"epoch\",\n    \"--save_strategy\", \"epoch\",\n    \"--save_total_limit\", \"2\",\n    \"--load_best_model_at_end\",\n    \"--metric_for_best_model\", \"eval_map\",\n    \"--greater_is_better\", \"True\",\n    \"--no_remove_unused_columns\",\n    \"--no_eval_do_concat_batches\",\n    \"--push_to_hub\",\n    \"--hub_model_id\", \"username/model-name\",\n    \"--do_train\",\n    \"--do_eval\",\n]\n```\n\n```python\nfrom huggingface_hub import HfApi, get_token\napi = HfApi()\njob_info = api.run_uv_job(\n    script=\"scripts/object_detection_training.py\",\n    script_args=OD_SCRIPT_ARGS,\n    flavor=\"t4-small\",\n    timeout=14400,\n    env={\"PYTHONUNBUFFERED\": \"1\"},\n    secrets={\"HF_TOKEN\": get_token()},\n)\nprint(f\"Job ID: {job_info.id}\")\n```\n\n### Key OD `script_args`\n\n- `--model_name_or_path` — recommended: `\"ustc-community/dfine-small-coco\"` (see model table above)\n- `--dataset_name` — the Hub dataset ID\n- `--image_square_size` — 480 (fast iteration) or 800 (better accuracy)\n- `--hub_model_id` — `\"username/model-name\"` for Hub persistence\n- `--num_train_epochs` — 30 typical for convergence\n- `--train_val_split` — fraction to split for validation (default 0.15), set if dataset lacks a validation split\n- `--max_train_samples` — truncate training set (useful for quick test runs, e.g. `\"785\"` for ~10% of a 7.8K dataset)\n- `--max_eval_samples` — truncate evaluation set\n\n## Quick start — Image Classification\n\n```python\nIC_SCRIPT_ARGS = [\n    \"--model_name_or_path\", \"timm/mobilenetv3_small_100.lamb_in1k\",\n    \"--dataset_name\", \"ethz/food101\",\n    \"--output_dir\", \"food101_classifier\",\n    \"--num_train_epochs\", \"5\",\n    \"--per_device_train_batch_size\", \"32\",\n    \"--per_device_eval_batch_size\", \"32\",\n    \"--learning_rate\", \"5e-5\",\n    \"--eval_strategy\", \"epoch\",\n    \"--save_strategy\", \"epoch\",\n    \"--save_total_limit\", \"2\",\n    \"--load_best_model_at_end\",\n    \"--metric_for_best_model\", \"eval_accuracy\",\n    \"--greater_is_better\", \"True\",\n    \"--no_remove_unused_columns\",\n    \"--push_to_hub\",\n    \"--hub_model_id\", \"username/food101-classifier\",\n    \"--do_train\",\n    \"--do_eval\",\n]\n```\n\n```python\nfrom huggingface_hub import HfApi, get_token\napi = HfApi()\njob_info = api.run_uv_job(\n    script=\"scripts/image_classification_training.py\",\n    script_args=IC_SCRIPT_ARGS,\n    flavor=\"t4-small\",\n    timeout=7200,\n    env={\"PYTHONUNBUFFERED\": \"1\"},\n    secrets={\"HF_TOKEN\": get_token()},\n)\nprint(f\"Job ID: {job_info.id}\")\n```\n\n### Key IC `script_args`\n\n- `--model_name_or_path` — any `timm/` model or Transformers classification model (see model table above)\n- `--dataset_name` — the Hub dataset ID\n- `--image_column_name` — column containing PIL images (default: `\"image\"`)\n- `--label_column_name` — column containing class labels (default: `\"label\"`)\n- `--hub_model_id` — `\"username/model-name\"` for Hub persistence\n- `--num_train_epochs` — 3-5 typical for classification (fewer than OD)\n- `--per_device_train_batch_size` — 16-64 
(classification models use less memory than OD)\n- `--train_val_split` — fraction to split for validation (default 0.15), set if dataset lacks a validation split\n- `--max_train_samples` / `--max_eval_samples` — truncate for quick tests\n\n## Quick start — SAM/SAM2 Segmentation\n\n```python\nSAM_SCRIPT_ARGS = [\n    \"--model_name_or_path\", \"facebook/sam2.1-hiera-small\",\n    \"--dataset_name\", \"merve/MicroMat-mini\",\n    \"--prompt_type\", \"bbox\",\n    \"--prompt_column_name\", \"prompt\",\n    \"--output_dir\", \"sam2-finetuned\",\n    \"--num_train_epochs\", \"30\",\n    \"--per_device_train_batch_size\", \"4\",\n    \"--learning_rate\", \"1e-5\",\n    \"--logging_steps\", \"1\",\n    \"--save_strategy\", \"epoch\",\n    \"--save_total_limit\", \"2\",\n    \"--remove_unused_columns\", \"False\",\n    \"--dataloader_pin_memory\", \"False\",\n    \"--push_to_hub\",\n    \"--hub_model_id\", \"username/sam2-finetuned\",\n    \"--do_train\",\n    \"--report_to\", \"trackio\",\n]\n```\n\n```python\nfrom huggingface_hub import HfApi, get_token\napi = HfApi()\njob_info = api.run_uv_job(\n    script=\"scripts/sam_segmentation_training.py\",\n    script_args=SAM_SCRIPT_ARGS,\n    flavor=\"t4-small\",\n    timeout=7200,\n    env={\"PYTHONUNBUFFERED\": \"1\"},\n    secrets={\"HF_TOKEN\": get_token()},\n)\nprint(f\"Job ID: {job_info.id}\")\n```\n\n### Key SAM `script_args`\n\n- `--model_name_or_path` — SAM or SAM2 model (see model table above); auto-detects SAM vs SAM2\n- `--dataset_name` — the Hub dataset ID (e.g., `\"merve/MicroMat-mini\"`)\n- `--prompt_type` — `\"bbox\"` or `\"point\"` — type of prompt in the dataset\n- `--prompt_column_name` — column with JSON-encoded prompts (default: `\"prompt\"`)\n- `--bbox_column_name` — dedicated bbox column (alternative to JSON prompt column)\n- `--point_column_name` — dedicated point column (alternative to JSON prompt column)\n- `--mask_column_name` — column with ground-truth masks (default: `\"mask\"`)\n- `--hub_model_id` — `\"username/model-name\"` for Hub persistence\n- `--num_train_epochs` — 20-30 typical for SAM fine-tuning\n- `--per_device_train_batch_size` — 2-4 (SAM models use significant memory)\n- `--freeze_vision_encoder` / `--freeze_prompt_encoder` — freeze encoder weights (default: both frozen, only mask decoder trains)\n- `--train_val_split` — fraction to split for validation (default 0.1)\n\n## Checking job status\n\n**MCP tool (if available):**\n```\nhf_jobs(\"ps\")                                   # List all jobs\nhf_jobs(\"logs\", {\"job_id\": \"your-job-id\"})      # View logs\nhf_jobs(\"inspect\", {\"job_id\": \"your-job-id\"})   # Job details\n```\n\n**Python API fallback:**\n```python\nfrom huggingface_hub import HfApi\napi = HfApi()\napi.list_jobs()                                  # List all jobs\napi.get_job_logs(job_id=\"your-job-id\")           # View logs\napi.get_job(job_id=\"your-job-id\")                # Job details\n```\n\n## Common failure modes\n\n### OOM (CUDA out of memory)\nReduce `per_device_train_batch_size` (try 4, then 2), reduce the image size (e.g. `--image_square_size` for detection), or upgrade hardware.\n\n### Dataset format errors\nRun `scripts/dataset_inspector.py` first. The training script auto-detects xyxy vs xywh, converts string categories to integer IDs, and adds `image_id` if missing. 
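The bbox conversion itself is the standard corner-to-COCO mapping (a sketch of the idea, not the script's exact code):\n\n```python\ndef xyxy_to_xywh(box):\n    # [x0, y0, x1, y1] corner format -> [x, y, width, height] COCO format\n    x0, y0, x1, y1 = box\n    return [x0, y0, x1 - x0, y1 - y0]\n```\n\n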
Ensure `objects.bbox` contains 4-value coordinate lists in absolute pixels and `objects.category` contains either integer IDs or string labels.\n\n### Hub push failures (401)\nVerify: (1) job secrets include token (see directive #2), (2) script sets `training_args.hub_token` BEFORE creating the `Trainer`, (3) `push_to_hub=True` is set, (4) correct `hub_model_id`, (5) token has write permissions.\n\n### Job timeout\nIncrease timeout (see directive #5 table), reduce epochs/dataset, or use checkpoint strategy with `hub_strategy=\"every_save\"`.\n\n### KeyError: 'test' (missing test split)\nThe object detection training script handles this gracefully — it falls back to the `validation` split. Ensure you're using the latest `scripts/object_detection_training.py`.\n\n### Single-class dataset: \"iteration over a 0-d tensor\"\n`torchmetrics.MeanAveragePrecision` returns scalar (0-d) tensors for per-class metrics when there's only one class. The template `scripts/object_detection_training.py` handles this by calling `.unsqueeze(0)` on these tensors. Ensure you're using the latest template.\n\n### Poor detection performance (mAP < 0.15)\nIncrease epochs (30-50), ensure 500+ images, check per-class mAP for imbalanced classes, try different learning rates (1e-5 to 1e-4), increase image size.\n\nFor comprehensive troubleshooting: see [references/reliability_principles.md](references/reliability_principles.md)\n\n## Reference files\n\n- [scripts/object_detection_training.py](scripts/object_detection_training.py) — Production-ready object detection training script\n- [scripts/image_classification_training.py](scripts/image_classification_training.py) — Production-ready image classification training script (supports timm models)\n- [scripts/sam_segmentation_training.py](scripts/sam_segmentation_training.py) — Production-ready SAM/SAM2 segmentation training script (bbox & point prompts)\n- [scripts/dataset_inspector.py](scripts/dataset_inspector.py) — Validate dataset format for OD, classification, and SAM segmentation\n- [scripts/estimate_cost.py](scripts/estimate_cost.py) — Estimate training costs for any vision model (includes SAM/SAM2)\n- [references/object_detection_training_notebook.md](references/object_detection_training_notebook.md) — Object detection training workflow, augmentation strategies, and training patterns\n- [references/image_classification_training_notebook.md](references/image_classification_training_notebook.md) — Image classification training workflow with ViT, preprocessing, and evaluation\n- [references/finetune_sam2_trainer.md](references/finetune_sam2_trainer.md) — SAM2 fine-tuning walkthrough with MicroMat dataset, DiceCE loss, and Trainer integration\n- [references/timm_trainer.md](references/timm_trainer.md) — Using timm models with HF Trainer (TimmWrapper, transforms, full example)\n- [references/hub_saving.md](references/hub_saving.md) — Detailed Hub persistence guide and verification checklist\n- [references/reliability_principles.md](references/reliability_principles.md) — Failure prevention principles from production experience\n\n## External links\n\n- [Transformers Object Detection Guide](https://huggingface.co/docs/transformers/tasks/object_detection)\n- [Transformers Image Classification Guide](https://huggingface.co/docs/transformers/tasks/image_classification)\n- [DETR Model Documentation](https://huggingface.co/docs/transformers/model_doc/detr)\n- [ViT Model Documentation](https://huggingface.co/docs/transformers/model_doc/vit)\n- [HF Jobs 
Guide](https://huggingface.co/docs/huggingface_hub/guides/jobs) — Main Jobs documentation\n- [HF Jobs Configuration](https://huggingface.co/docs/hub/en/jobs-configuration) — Hardware, secrets, timeouts, namespaces\n- [HF Jobs CLI Reference](https://huggingface.co/docs/huggingface_hub/guides/cli#hf-jobs) — Command line interface\n- [Object Detection Models](https://huggingface.co/models?pipeline_tag=object-detection)\n- [Image Classification Models](https://huggingface.co/models?pipeline_tag=image-classification)\n- [SAM2 Model Documentation](https://huggingface.co/docs/transformers/model_doc/sam2)\n- [SAM Model Documentation](https://huggingface.co/docs/transformers/model_doc/sam)\n- [Object Detection Datasets](https://huggingface.co/datasets?task_categories=task_categories:object-detection)\n- [Image Classification Datasets](https://huggingface.co/datasets?task_categories=task_categories:image-classification)\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.","tags":["hugging","face","vision","trainer","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents"],"capabilities":["skill","source-sickn33","skill-hugging-face-vision-trainer","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/hugging-face-vision-trainer","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34768 github stars · SKILL.md body (29,480 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-23T18:51:30.255Z","embedding":null,"createdAt":"2026-04-18T21:38:51.248Z","updatedAt":"2026-04-23T18:51:30.255Z","lastSeenAt":"2026-04-23T18:51:30.255Z","tsv":"'-1':1878 '-10':1873 '-12':1902 '-20':1882 '-200':1870 '-3':1885 '-30':3037 '-4':1853,3050 '-5':643,915,1325,2409,2781 '-50':3363 '-6':1895 '-60':912 '-64':2794 '/datasets?task_categories=task_categories:image-classification)':3606 '/datasets?task_categories=task_categories:object-detection)':3600 '/dfine-large-coco':1996 '/dfine-small-coco':1975,2034,2405,2529 '/dfine-xlarge-obj365':2013 '/docs/hub/en/jobs-configuration)':3556 '/docs/huggingface_hub/guides/cli#hf-jobs)':3567 '/docs/huggingface_hub/guides/jobs)':3547 '/docs/transformers/model_doc/detr)':3535 '/docs/transformers/model_doc/sam)':3594 '/docs/transformers/model_doc/sam2)':3588 '/docs/transformers/model_doc/vit)':3541 '/docs/transformers/tasks/image_classification)':3529 
'/docs/transformers/tasks/object_detection)':3522 '/enterprise)':298 '/enterprise),':293 '/hr':990,1008,1028,1044,2256,2342,2349 '/models?pipeline_tag=image-classification)':3582 '/models?pipeline_tag=object-detection)':3576 '/pro),':289 '/spaces/':1197,1958 '/trackio':1199,1960 '0':3316,3322,3344 '0.1':3081 '0.15':2573,2811,3359 '0.40':989,2255 '0.80':1007,2341 '1':619,654,793,838,1131,1173,1235,1337,2387,2506,2716,2872,2930,3229 '1.50':1027,2348 '10':905,2595 '10.4':1976 '100':1869 '100m':1003,2247 '14400':1331,1404,2503 '15':955,1881 '16':994,2252,2793 '1e-4':3381 '1e-5':2869,3379 '1h':1875 '1k':1889 '1k-5k':1888 '1x':992,1010,1030,1046 '2':676,800,847,914,1158,1436,1852,1884,2440,2655,2879,3049,3171,3236,3237 '2.5':2080 '2.50':1043 '20':3036 '20.2':1986 '224.4':2167 '24':1012,1032,2343,2350 '25.6':2099 '3':328,687,807,864,1183,1581,2780,3246 '30':911,1842,1856,1892,2421,2560,2860,3362 '300':722 '30min':581 '31.4':1997 '312.3':2185 '32':2636,2642 '38.9':2141 '4':817,1062,1205,1333,1617,1894,2866,3169,3208,3253 '401':3227 '43m':2004 '46.0':2150 '480':2543 '4h':1281,1402 '5':828,1124,1838,1872,2630,3258,3269 '5.6':2089 '500':1877,3365 '5e-5':2430,2645 '5k':1890,1899 '6':587,1901,1904 '63.5':2014 '640':2413 '641.1':2194 '687fb701029421ae5549d998':1616 '7.8':2598 '7200':2713,2927 '723':252,1415 '76m':2022 '785':2593 '8':2427 '80':1048 '80.8':2159 '800':2547 '86.6':2109 '93.7':2176 'a100':1041,1047 'a100-large':1040 'a10g':1025,1031,1278,1328,2315,2346 'a10g-large':1024,1277,1327,2314,2345 'absolut':550,3213 'access':1584 'account':280,284,796 'accuraci':1780,2000,2017,2052,2112,2131,2171,2204,2549,2666 'accuracy/speed':2094 'actual':1396 'add':1855,3200 'alreadi':1501 'altern':2999,3010 'alway':400,464,561,626,868,1909 'annot':346 'api':336,704,1244,1297,1307,2484,2694,2908,3118,3126 'api.get':3133,3144 'api.list':3128 'api.run':706,1311,1602,2488,2698,2912 'area':357 'arg':670,712,1103,1182,1321,1492,1634,2376,2397,2494,2497,2520,2614,2704,2707,2730,2836,2918,2921,2944 'argument':1100,1636,1687 'ask':808,865,3640 'askuserquest':871,879 'asynchron':1220 'attribut':1190,1583 'augment':3454 'auth':313 'authent':180,281,304,1437 'auto':373,386,451,457,1392,2958,3188 'auto-detect':372,456,2957,3187 'auto-remap':385,450 'auto-replac':1391 'automat':52,397,740,749,953,1216,1922 'automodelforimageclassif':2066 'avail':216,3088 'avoid':659,1831 'b':2182 'back':3297 'backbon':2183,2192 'balanc':2157 'bare':1650,1679 'base':229,2282,2291 'baselin':2009,2103 'bash':260,678 'basic':720 'batch':1021,1708,2325,2328,2425,2464,2634,2640,2791,2864,3047,3166 'bbox':123,353,361,512,524,544,559,741,751,757,1823,2847,2973,2993,2997,3423 'belong':1428 'best':927,1730,1777,1978,2016,2111,2152,2169,2201,2442,2448,2657,2663 'better':1670,1677,1682,1736,1783,2051,2130,2454,2548,2669 'binari':495 'bool':1640,1665,1743,1790 'boolean':1623,1635 'bound':129 'boundari':3648 'bounding-box':128 'box':130,1716,2064 'boxes/input_points':1807 'buffer':1857 'call':1264,1467,1540,1543,1918,3342 'capac':2162 'case':1971,2078,2139 'categori':354,377,763,3195 'caus':620,1683 'cheap':1982 'cheaper':910 'check':306,1212,3082,3367 'checklist':273,786,844,3505 'checkpoint':3275 'clarif':3642 'class':432,461,2766,3311,3328,3335,3370,3374 'classif':19,36,95,414,1080,1752,1947,2055,2264,2610,2740,2784,2795,3408,3433,3462,3525,3578,3602 'classifi':105,137,2626 'classlabel':441 'clear':3615 'cli':1099,3563 'cloud':43,87,107 'cnn':2102 'coco':366 'code':736,1372 'collat':1837 
'column':351,393,424,430,438,454,488,494,508,525,536,781,1663,1696,1700,1756,1760,1802,2459,2674,2753,2755,2762,2764,2849,2882,2983,2985,2994,2998,3003,3005,3009,3014,3016,3018 'comfort':2283 'command':1425,3568 'common':453,1230,2334,3154 'communiti':1974,1995,2012,2033,2404,2528 'compat':728 'complet':924 'complex':2164 'comprehens':3386 'concat':1707,2463 'config':1179,1446,1629 'configur':1095,3553 'contain':511,2756,2765,3207,3217 'content':1275,1319,1385 'converg':2563 'convers':756 'convert':376,3193 'coordin':552,3210 'copi':784 'correct':1612,3254 'cost':185,1204,2366,3441 'cppe':642,1324,2408 'cpu':719 'cpu-bas':718 'cpu/ram':1038 'creat':937,1143,1468,1495,3243 'criteria':3651 'critic':572,1225,1355,2390 'cuda':3158 'custom':133,139,145,1529,1546,1836 'd':78,2041,3317,3323 'd-fine':77,2040 'dashboard':1194,1954 'data':895,907,944,947,958 'dataload':1826,2884 'dataset':134,140,147,264,337,341,403,408,411,415,467,472,475,479,554,564,569,607,610,630,633,671,682,713,726,767,776,798,802,811,849,852,859,898,921,969,1055,1322,1898,2406,2534,2538,2576,2600,2620,2746,2750,2814,2842,2963,2967,2981,3178,3312,3429,3479,3597,3603 'dataset_inspector.py':805 'decod':2226,2302,3070 'dedic':523,534,2996,3007 'default':580,645,1841,2230,2572,2759,2768,2810,2991,3024,3065,3080 'depend':254,1419,1454 'describ':3619 'descript':908,923,952,963,991,1009,1029,1045,1148 'detail':836,1112,1119,1580,2074,3116,3153,3499 'detect':18,34,75,340,374,458,745,753,1068,1692,1849,1914,1966,2273,2373,2959,3189,3289,3356,3399,3451,3518,3572,3596 'detector':131,1992 'detr':82,84,2026,2047,3530 'develop':1876 'devic':2423,2632,2638,2789,2862,3045,3164 'dfine':2416 'dicec':3480 'differ':1356,1712,2391,3376 'dinov3':2113 'dir':1937,2415,2624,2853 'direct':327,586,730,1172,1226,1298,2386,3235,3268 'document':3532,3538,3550,3585,3591 'dpo':234 'e.g':208,1642,1667,2592,2969 'edit':1106 'effici':2002,2216 'either':505,3218 'enabl':595,1910 'encod':2234,2989,3058,3061,3063 'end':2445,2660 'ensur':159,774,3205,3302,3348,3364 'enterpris':295 'env':1335,2504,2714,2928 'environ':188,1721,1768,1812,3631 'environment-specif':3630 'ephemer':1723,1770,1814 'epoch':1874,1883,1893,2420,2433,2436,2559,2629,2648,2651,2779,2859,2875,3035,3361 'epochs/dataset':3272 'error':1684,3180 'estim':186,1203,2367,3439 'ethz/food101':2622 'eval':1705,1732,1747,1779,1794,2431,2450,2461,2475,2602,2639,2646,2665,2685,2823 'evalu':2605,3469 'even':2279 'everi':3280 'exact':1234 'exampl':553,3496 'exceed':576 'execut':247 'exist':343,417,481,965,1598 'expect':577,1200,1685 'experi':2148,3513 'expert':3636 'explicit':1441,1673,1741,1788 'extern':3514 'f':1351,2513,2723,2937 'face':3,15,30,57,156,173,224,283,1572,2362 'facebook/sam-vit-base':2175 'facebook/sam-vit-huge':2193 'facebook/sam-vit-large':2184 'facebook/sam2.1-hiera-base-plus':2158 'facebook/sam2.1-hiera-large':2166 'facebook/sam2.1-hiera-small':2149,2207,2841 'facebook/sam2.1-hiera-tiny':2140 'failur':623,1231,3155,3226,3508 'fall':3296 'fallback':3119 'fals':931,971,1060,1803,1829,2883,2887 'fast':1981,2036,2122,2209,2544 'faster':909,1035,2270 'fastest':1051,2086,2143 'fewer':2785 'field':360,1641,1666 'file':1316,1379,1383,3392 'fine':9,72,79,92,112,243,462,2042,3042,3474 'fine-tun':8,71,91,111,242,3041,3473 'finetun':2417,2856 'first':896,2330,3183 'flag':1620,1651,1689,1749,1796 'flavor':182,717,983,1276,1326,2355,2498,2708,2922 'follow':841,1232 'food101':2625 'format':371,549,611,624,732,752,803,877,1400,3179,3430 'fraction':2567,2805,3075 'freez':3056,3059,3062 'frozen':2236,3067 
'full':920,1551,1579,2353,3495 'function':1254 'gb':995,1013,1033,1049,2253,2344,2351 'general':175,2214,2267 'generat':396,761 'get':1305,1341,1345,1394,2482,2510,2692,2720,2906,2934 'good':2093,2145,2155 'gpu':47,405,469,566,614,856,974 'gpus':44,88,108 'grace':3294 'greater':1668,1675,1680,1734,1781,2452,2667 'ground':497,3021 'ground-truth':496,3020 'grpo':235 'guid':1577,3502,3519,3526,3544 'h':1886,1896,1903,2200 'handl':750,2278,3292,3339 'hardwar':181,214,975,982,1962,2237,2333,2354,3177,3557 'haven':635 'header':897,945,981 'headroom':1016 'help':271 'helper':248 'hf':176,308,312,656,664,694,1161,1238,1245,1269,1283,1285,1288,1339,1348,1361,1389,1485,1505,1508,1519,1523,1606,2508,2718,2932,3089,3095,3106,3491,3542,3551,3561 'hf.co':288,292,297 'hf.co/enterprise)':296 'hf.co/enterprise),':291 'hf.co/pro),':287 'hfapi':689,703,705,1167,1304,1308,1365,2481,2485,2691,2695,2905,2909,3125,3127 'hfargumentpars':1093,1622,1627 'hiera':2290 'hiera-base-plus':2289 'higher':1999,2161 'hit':2321 'hour':1224,1334,1854 'hub':58,168,345,419,483,591,598,600,701,1263,1302,1442,1478,1513,1562,1645,1654,1719,1724,1766,1771,1810,1815,1863,2467,2468,2479,2537,2550,2555,2677,2678,2689,2749,2770,2775,2890,2891,2903,2966,3026,3031,3123,3224,3249,3255,3278,3500 'hug':2,14,29,56,155,172,223,282,1571,2361 'hugging-face-job':171,1570,2360 'hugging-face-model-train':222 'hugging-face-vision-train':1 'huggingfac':700,1262,1301,2478,2688,2902,3122 'huggingface.co':1196,1957,3521,3528,3534,3540,3546,3555,3566,3575,3581,3587,3593,3599,3605 'huggingface.co/datasets?task_categories=task_categories:image-classification)':3604 'huggingface.co/datasets?task_categories=task_categories:object-detection)':3598 'huggingface.co/docs/hub/en/jobs-configuration)':3554 'huggingface.co/docs/huggingface_hub/guides/cli#hf-jobs)':3565 'huggingface.co/docs/huggingface_hub/guides/jobs)':3545 'huggingface.co/docs/transformers/model_doc/detr)':3533 'huggingface.co/docs/transformers/model_doc/sam)':3592 'huggingface.co/docs/transformers/model_doc/sam2)':3586 'huggingface.co/docs/transformers/model_doc/vit)':3539 'huggingface.co/docs/transformers/tasks/image_classification)':3527 'huggingface.co/docs/transformers/tasks/object_detection)':3520 'huggingface.co/models?pipeline_tag=image-classification)':3580 'huggingface.co/models?pipeline_tag=object-detection)':3574 'huggingface.co/spaces/':1195,1956 'ic':823,2243,2612,2705,2728 'id':390,392,433,602,760,1187,1189,1353,1589,1592,1610,1726,1773,1817,2470,2515,2539,2552,2680,2725,2751,2772,2893,2939,2968,3028,3099,3103,3110,3114,3137,3141,3147,3151,3198,3202,3220,3257 'identifi':1587 'imag':35,94,120,136,391,413,423,426,487,490,556,759,1019,1057,1079,1423,1699,1710,1751,1759,1871,1880,1891,1900,1946,2054,2263,2410,2540,2609,2752,2758,2760,3173,3201,3366,3383,3407,3461,3524,3577,3601 'imbalanc':3373 'implicit':1559 'import':702,1259,1303,2480,2690,2904,3124 'includ':1413,1448,1502,1533,3232,3446 'increas':3265,3360,3382 'info':1310,1601,2487,2697,2911 'infrastructur':158,178 'init':1474 'inject':1444,1484,1536 'inlin':253,1416 'input':1806,3645 'inspect':3108 'inspector':652,853 'integ':380,389,431,764,3197,3219 'integers/strings':447 'integr':3484 'interfac':3570 'issu':662,1834 'iter':2037,2123,2210,2545,3313 
'job':16,31,153,157,174,177,191,198,206,278,300,324,657,665,692,695,708,833,1128,1138,1162,1170,1186,1218,1236,1239,1246,1270,1289,1309,1313,1352,1362,1368,1431,1435,1439,1445,1567,1573,1586,1591,1600,1604,1607,1609,2363,2486,2490,2514,2696,2700,2724,2910,2914,2938,3083,3090,3094,3096,3098,3102,3107,3109,3113,3115,3129,3132,3134,3136,3140,3145,3146,3150,3152,3230,3263,3543,3549,3552,3562 'job_info.id':1354,1611,2516,2726,2940 'jobinfo':1582 'json':510,2988,3001,3012 'json-encod':2987 'k':1879,2599 'key':2517,2727,2941 'keyerror':3282 'l':2191 'l4':1011 'l4x1':1006,2312,2340 'label':429,437,459,460,463,901,919,949,961,985,1005,1023,1039,2761,2767,2769,3223 'lack':2577,2815 'languag':230 'larg':1018,1026,1042,1054,1279,1329,1897,2043,2316,2324,2347 'largest':2023 'latest':3307,3353 'launch':613 'learn':2428,2643,2867,3377 'less':2798 'lightweight':1988,2084 'like':1150,1615 'limit':2439,2654,2878,3607 'line':3569 'link':3515 'list':2356,3092,3130,3211 'load':1860,2067,2441,2656 'local':46,90,110,245,660,677,831,1135,1377 'log':2870,3097,3105,3135,3143 'login':305,1544 'loss':3481 'm':1977,1987,1998,2015,2081,2090,2100,2110,2142,2151,2160,2168,2177,2186,2195 'main':3548 'manag':42,184,1840 'manual':770 'map':735,1733,2451,3358,3371 'mask':146,493,500,2225,2301,3015,3023,3025,3069 'mat':121,557 'match':3616 'max':2581,2601,2819,2822 'mcp':332,696,1163,1240,1249,1290,1363,3085 'memori':1828,1833,2799,2886,3055,3161 'merve/micromat-mini':555,2844,2970 'metadata':1417 'method':1176,1360,1408,1457,2384 'metric':1728,1775,2446,2661,3329 'micromat':3478 'min':913,1843 'minimum':1851 'mismatch':625 'miss':399,3204,3284,3653 'mobil':2091 'mobile/edge':2085 'mobilenetv3':98 'mobilevit':99 'mode':3156 'model':12,26,40,76,96,117,143,162,225,231,241,601,928,1001,1111,1725,1731,1772,1778,1816,1859,1961,1967,1968,2056,2059,2075,2135,2136,2212,2244,2265,2274,2286,2310,2398,2443,2449,2469,2521,2531,2551,2615,2658,2664,2679,2731,2737,2741,2743,2771,2796,2837,2892,2945,2952,2954,3027,3052,3256,3413,3445,3489,3531,3537,3573,3579,3584,3590 'monitor':1191,1906 'move':2038,2124 'multiselect':930,970,1059 'must':320,342,347,416,420,480,484,501,575,593,1343,1412,1447,1483,1532,1671,1697,1709,1720,1738,1757,1767,1785,1804,1811,1830 'name':444,455,1149,1323,1594,1932,1941,1944,2399,2407,2522,2535,2616,2621,2732,2747,2754,2763,2838,2843,2850,2946,2964,2984,2995,3006,3017 'namespac':3560 'need':731,733,772,1145,1924 'negat':1656 'non':203 'non-training-specif':202 'none':1564 'num':2418,2557,2627,2777,2857,3033 'number':1713 'object':33,74,339,350,744,1067,1691,1848,1913,1965,2272,2372,3288,3398,3450,3517,3571,3595 'objects.bbox':778,3206 'objects.category':780,3216 'objects365':2020 'od':821,1000,2241,2395,2495,2518,2787,2801,3432 'one':1686,3334 'oom':2322,3157 'option':356,395,653,675,686,875,900,948,984,1664,1742,1789 'option-styl':874 'origin':2178,2187,2196 'os.environ.get':1507 'output':739,1936,2414,2623,2852,3625 'paid':302 'param':1004,1370,1969,2076,2137,2248 'paramet':1426 'pars':1491 'pascal':369 'pass':220,321,1097,1177,1631,1672,1739,1786,1926,1948 'path':1157,1317,1378,1380,2336,2401,2524,2618,2734,2840,2948 'path/to/dataset_inspector.py':668 'path/to/training_script.py':1315 'pattern':1552,3458 'pekingu/rtdetr_v2_r101vd':2021 'pekingu/rtdetr_v2_r18vd':1985 'pekingu/rtdetr_v2_r50vd':2003 'pep':251,1414 'per':2422,2631,2637,2788,2861,3044,3163,3327,3369 'per-class':3326,3368 'perform':3357 'perman':164 'permiss':319,3262,3646 'persist':194,2556,2776,3032,3501 'pil':425,489,2757 'pin':1827,1832,2885 
'pixel':551,1702,1762,3214 'plain':446 'plan':299,303 'plus':2292 'point':125,518,535,1825,1980,2154,2975,3004,3008,3424 'poll':1215 'poor':3355 'prefer':867,1165 'prefix':1659 'prepar':818,1063 'preprocess':734,742,771,1861,3467 'prerequisit':272,795,840,843 'preserv':1698,1758,1805 'pretrain':2018 'prevent':617,1229,3509 'principl':3510 'print':1350,2512,2722,2936 'pro':286 'product':1075,1887,3396,3405,3417,3512 'production-readi':1074,3395,3404,3416 'progress':789,791 'project':1931 'prompt':126,149,504,507,560,1821,2233,2845,2848,2851,2971,2978,2982,2990,2992,3002,3013,3060,3425 'provid':737 'ps':3091 'push':592,596,1476,1643,1652,1717,1764,1808,1864,2465,2675,2888,3225,3247 'py':1152,1382 'python':335,663,698,878,1107,1243,1253,1296,1299,1371,1504,1599,2394,2476,2611,2686,2833,2900,3117,3120 'pythonunbuff':1336,2505,2715,2929 'qualiti':929,1984,2222 'quality/speed':2156 'question':207,880,881,932,972 'quick':888,902,1867,2147,2370,2589,2607,2827,2829 'r50':2049 'rate':2429,2644,2868,3378 're':3304,3350 'read':723 'readi':725,1076,3397,3406,3418 'real':1990,2007 'real-tim':1989,2006 'recommend':589,658,922,951,1964,2053,2132,2238,2240,2525 'reduc':2327,3162,3172,3271 'refer':195,2357,3391,3564 'references/finetune_sam2_trainer.md':1121,1122,3470,3471 'references/hub_saving.md':3497,3498 'references/image_classification_training_notebook.md':3459,3460 'references/object_detection_training_notebook.md':3448,3449 'references/reliability_principles.md':3389,3390,3506,3507 'references/timm_trainer.md':1114,1115,2071,2072,3485,3486 'relat':169 'reli':1557 'reliabl':2104 'remap':387,452,765 'remov':1661,1694,1754,1800,2457,2672,2880 'replac':1393 'replic':1549 'repo':1469 'report':835,1130,1184,1927,1949,2897 'request':1210 'requir':49,301,338,412,476,1463,1618,1688,1748,1795,2172,3644 'resnet':100 'resolut':1561 'result':50,193,724 'return':1613,3320 'review':3637 'root':1142 'rt':81,2025,2046 'rt-detr':80,2024,2045 'rule':1228,1405 'run':150,255,259,262,269,650,680,690,804,850,886,904,1168,1219,1366,1430,1433,1940,1943,2368,2591,3181 'safeti':3647 'sam':21,114,825,2179,2188,2197,2202,2218,2308,2834,2919,2942,2949,2960,3040,3051,3435,3589 'sam/sam2':38,477,1085,1798,2133,2831,3419,3447 'sam2':23,116,1117,2144,2170,2211,2285,2855,2951,2962,3472,3583 'sam2-finetuned':2854 'sam2.1-hiera-large':2306 'sampl':2583,2603,2821,2824 'sanit':758 'save':53,165,829,1125,1132,2434,2437,2649,2652,2873,2876,3281 'scalar':3321 'scenario':1865 'schedul':190 'scope':3618 'script':246,249,649,667,669,709,711,747,820,830,1065,1091,1102,1126,1134,1181,1272,1274,1314,1320,1369,1411,1462,1482,1530,1547,1633,1916,2375,2396,2491,2493,2496,2519,2613,2701,2703,2706,2729,2835,2915,2917,2920,2943,3186,3238,3291,3401,3410,3422 'scripts/dataset_inspector.py':263,681,710,3182,3426,3427 'scripts/estimate_cost.py':270,2369,3437,3438 'scripts/image_classification_training.py':824,1082,1083,2702,3402,3403 'scripts/object_detection_training.py':748,822,1070,1071,1500,1554,1625,2492,3308,3338,3393,3394 'scripts/sam_segmentation_training.py':826,1088,1089,2916,3414,3415 'second':1332,1403 'secret':187,211,325,606,1282,1338,1388,1440,1452,2507,2717,2931,3231,3558 'section':410,474,571,861 'see':326,407,471,568,585,858,1113,1120,1171,1458,1568,2070,2385,2530,2742,2953,3234,3267,3388 'segment':24,39,119,142,478,499,1086,1799,2134,2165,2832,3420,3436 'select':1963 'self':2115 'self-supervis':2114 'self.args.hub':1471 'set':573,1850,2574,2586,2606,2812,3239,3252 'setup':48,918 'sft':233 'short':584,1846 'signific':3054 'similar':2221 
'sinc':2298 'singl':3310 'single-class':3309 'size':812,899,1022,1058,2326,2329,2412,2426,2542,2635,2641,2792,2865,3048,3167,3174,3384 'skill':63,66,170,199,238,1574,2364,3610 'skill-hugging-face-vision-trainer' 'skip':640 'small':988,2251,2277,2295,2339,2501,2711,2925 'smaller':2268 'sota':1983 'source-sickn33' 'specif':205,3632 'spend':855 'split':266,673,684,715,815,940,946,954,967,2566,2569,2580,2804,2807,2818,3074,3077,3286,3301 'squar':2411,2541 'ssl/dependency':661 'start':275,1979,2029,2118,2153,2205,2371,2608,2830 'status':1211,3084 'step':792,799,806,816,827,837,846,863,1061,1123,2871 'still':2001 'stop':3638 'strategi':2432,2435,2647,2650,2874,3276,3279,3455 'string':382,383,435,448,762,1373,1401,1614,3194,3222 'strong':2005,2101 'studi':2107 'style':876 'sub':359 'sub-field':358 'submiss':1237,1456,2383 'submit':832,1127,1137,1159 'subset':892 'substitut':3628 'success':3650 'suffici':997,2258,2297 'supervis':2116 'support':3411 'switch':2332 'syntax':331,1453,1624,1639 't4':987,993,2250,2276,2294,2338,2500,2710,2924 't4-small':986,2249,2275,2293,2337,2499,2709,2923 'tabl':1459,2532,2744,2955,3270 'take':1223 'taken':1934 'target':1715 'task':3614 'team':290 'tell':1153 'templat':1077,1499,3337,3354 'tensor':3318,3324,3347 'termin':315 'test':889,903,1868,2590,2828,3283,3285,3634 'text/language':240 'time':579,857,1201,1991,2008 'timeout':183,574,721,1280,1330,1399,1839,1866,2502,2712,2926,3264,3266,3559 'timm':97,1110,2058,2736,3412,3488 'timm/mobilenetv3_small_100.lamb_in1k':2079,2120,2619 'timm/mobilevit_s.cvnets_in1k':2088 'timm/resnet50.a1_in1k':2098,2126 'timm/vit_base_patch16_dinov3.lvd1689m':2108,2128 'timmwrapp':3493 'timmwrapperforimageclassif':2069 'token':179,221,316,322,604,797,1284,1286,1306,1340,1342,1346,1349,1386,1390,1395,1397,1443,1450,1470,1472,1486,1489,1506,1509,1517,1520,1522,1524,1535,1560,1563,1575,2483,2509,2511,2693,2719,2721,2907,2933,2935,3233,3241,3259 'tool':310,333,872,1164,1241,1250,1268,1291,1364,3086 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'torchmetrics.meanaverageprecision':3319 'total':2438,2653,2877 'track':788 'trackio':1193,1905,1907,1929,1951,2899 'trackio.finish':1921 'trackio.init':1919 'trade':2096 'trade-off':2095 'train':6,27,32,127,135,141,152,160,204,232,267,277,406,470,567,578,615,622,637,648,674,685,716,746,782,790,819,925,943,957,1036,1064,1118,1151,1217,1273,1410,1461,1481,1619,1647,1745,1792,1820,1915,2087,2228,2304,2419,2424,2473,2558,2564,2582,2585,2628,2633,2683,2778,2790,2802,2820,2858,2863,2896,3034,3046,3071,3072,3165,3185,3290,3400,3409,3421,3440,3452,3457,3463 'trainer':5,226,1466,1497,1539,3245,3483,3492 'training_args.hub':1488,1516,1521,3240 'training_args.push':1511 'trainingargu':1953 'transform':104,1465,2092,2739,3494,3516,3523 'treat':3623 'tri':1257,3168,3375 'trl':228 'trl-base':227 'troubleshoot':3387 'true':599,1479,1678,1737,1740,1784,1787,2455,2670,3250 'truncat':768,2584,2604,2825 'truth':498,3022 'tune':10,73,93,113,244,3043,3475 'two':1359,1638 'type':442,1822,2846,2972,2976 'typic':2561,2782,3038 'ultra':2083 'ultra-lightweight':2082 'unavail':697,1293 'unknown':402,466,563 'unknown/custom':629 'unless':1548 'unreli':1565 'unsqueez':3343 'unus':1662,1695,1755,1801,2458,2673,2881 'upgrad':2318,2335,3176 'url':1192,1375 'usag':1576 
'use':61,64,122,236,250,348,729,869,964,980,1069,1081,1087,1092,1160,1294,1344,1422,1588,1626,1648,1970,2077,2138,2311,2587,2797,3053,3274,3305,3351,3487,3608 'user':68,809,866,1155,1208 'usernam':1198,1959 'username/dataset-name':265,672,683,714 'username/food101-classifier':2681 'username/model-name':603,1727,1774,1818,2471,2553,2773,3029 'username/sam2-finetuned':2894 'ustc':1973,1994,2011,2032,2403,2527 'ustc-commun':1972,1993,2010,2031,2402,2526 'uv':258,261,268,666,679,691,707,1169,1271,1312,1367,1434,1603,1608,2489,2699,2913 'v1':2203,2219,2309 'v2':83,2027,2048 'val':2565,2803,3073 'valid':401,409,465,473,562,570,608,609,627,801,814,848,860,917,939,960,966,2571,2579,2809,2817,3079,3300,3428,3633 'valu':531,543,590,1398,1674,1703,1763,3209 'variabl':189,1108 'variant':2028 'verif':3504 'verifi':279,794,839,3228 'via':655,688,1098,1180,1438,1632,2065 'view':3104,3142 'vision':4,11,25,151,161,2231,3057,3444 'vit':2117,2181,2190,2199,2281,3466,3536 'vit-b':2180 'vit-bas':2280 'vit-h':2198 'vit-l':2189 'vit/dinov3':101 'voc':370 'vram':996,1014,1034,1050,2174,2254 'vs':334,1242,2961,3191 'wait':1206 'walkthrough':3476 'want':69,884,935,978 'weight':3064 'well':2106 'well-studi':2105 'whoami':309,314 'work':212,2060 'workflow':783,3453,3464 'workspac':1141 'write':318,1527,3261 'x':519,538,541 'x0':513,527 'x1':515,529 'xywh':365,755,3192 'xyxi':368,548,754,3190 'y':520,539,542 'y0':514,528 'y1':516,530 'yes':950 'yolo':85 'your-job-id':3100,3111,3138,3148","prices":[{"id":"04eb0fac-0ff4-44ca-9c4c-49cfc9b20815","listingId":"2881759f-ec48-4da6-9c9b-3ee7f241dcac","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:38:51.248Z"}],"sources":[{"listingId":"2881759f-ec48-4da6-9c9b-3ee7f241dcac","source":"github","sourceId":"sickn33/antigravity-awesome-skills/hugging-face-vision-trainer","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/hugging-face-vision-trainer","isPrimary":false,"firstSeenAt":"2026-04-18T21:38:51.248Z","lastSeenAt":"2026-04-23T18:51:30.255Z"}],"details":{"listingId":"2881759f-ec48-4da6-9c9b-3ee7f241dcac","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"hugging-face-vision-trainer","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34768,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-23T06:41:03Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"9339478defc3566c4efa8ac1c6d0c6d341871e4b","skill_md_path":"skills/hugging-face-vision-trainer/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/hugging-face-vision-trainer"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"hugging-face-vision-trainer","description":"Train or fine-tune vision models on Hugging Face Jobs for detection, classification, and SAM or SAM2 segmentation."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/hugging-face-vision-trainer"},"updatedAt":"2026-04-23T18:51:30.255Z"}}