{"id":"df12221c-5a6b-47a8-be18-25391ff8a82a","shortId":"8wgPG4","kind":"skill","title":"dspy-finetune-bootstrap","tagline":"This skill should be used when the user asks to \"fine-tune a DSPy model\", \"distill a program into weights\", \"use BootstrapFinetune\", \"create a student model\", \"reduce inference costs with fine-tuning\", mentions \"model distillation\", \"teacher-student training\", or wants to deploy ","description":"# DSPy BootstrapFinetune Optimizer\n\n## Goal\n\nDistill a DSPy program into fine-tuned model weights for efficient production deployment.\n\n## When to Use\n\n- You have a working DSPy program with a large model\n- Need to reduce inference costs\n- Want faster responses (smaller model)\n- Deploying to resource-constrained environments\n\n## Inputs\n\n| Input | Type | Description |\n|-------|------|-------------|\n| `program` | `dspy.Module` | Teacher program to distill |\n| `trainset` | `list[dspy.Example]` | Training examples |\n| `metric` | `callable` | Validation metric (optional) |\n| `train_kwargs` | `dict` | Training hyperparameters |\n\n## Outputs\n\n| Output | Type | Description |\n|--------|------|-------------|\n| `finetuned_program` | `dspy.Module` | Program with fine-tuned weights |\n| `model_path` | `str` | Path to saved model |\n\n## Workflow\n\n### Phase 1: Prepare Teacher Program\n\n```python\nimport dspy\n\n# Configure with strong teacher model\ndspy.configure(lm=dspy.LM(\"openai/gpt-4o\"))\n\nclass TeacherQA(dspy.Module):\n    def __init__(self):\n        self.cot = dspy.ChainOfThought(\"question -> answer\")\n    \n    def forward(self, question):\n        return self.cot(question=question)\n```\n\n### Phase 2: Enable Experimental Features & Generate Training Traces\n\nBootstrapFinetune is experimental and requires enabling the flag:\n\n```python\nimport dspy\nfrom dspy.teleprompt import BootstrapFinetune\n\n# Enable experimental features\ndspy.settings.experimental = True\n\noptimizer = BootstrapFinetune(\n    
metric=lambda gold, pred, trace=None: gold.answer.lower() in pred.answer.lower(),\n    train_kwargs={\n        'learning_rate': 5e-5,\n        'num_train_epochs': 3,\n        'per_device_train_batch_size': 4,\n        'warmup_ratio': 0.1\n    }\n)\n```\n\n### Phase 3: Fine-tune Student Model\n\n```python\nfinetuned = optimizer.compile(\n    TeacherQA(),\n    trainset=trainset\n)\n```\n\n### Phase 4: Deploy\n\n```python\n# Save the fine-tuned model (saves state-only by default)\nfinetuned.save(\"finetuned_qa_model.json\")\n\n# Load and use (must recreate architecture first)\nloaded = TeacherQA()\nloaded.load(\"finetuned_qa_model.json\")\nresult = loaded(question=\"What is machine learning?\")\n```\n\n## Production Example\n\n```python\nimport dspy\nfrom dspy.teleprompt import BootstrapFinetune\nfrom dspy.evaluate import Evaluate\nimport logging\nimport os\n\n# Enable experimental features\ndspy.settings.experimental = True\n\nlogger = logging.getLogger(__name__)\n\nclass ClassificationSignature(dspy.Signature):\n    \"\"\"Classify text into categories.\"\"\"\n    text: str = dspy.InputField()\n    label: str = dspy.OutputField(desc=\"Category: positive, negative, neutral\")\n\nclass TextClassifier(dspy.Module):\n    def __init__(self):\n        super().__init__()\n        self.classify = dspy.Predict(ClassificationSignature)\n\n    def forward(self, text):\n        return self.classify(text=text)\n\ndef classification_metric(gold, pred, trace=None):\n    \"\"\"Exact label match.\"\"\"\n    gold_label = gold.label.lower().strip()\n    pred_label = pred.label.lower().strip() if pred.label else \"\"\n    return gold_label == pred_label\n\ndef finetune_classifier(trainset, devset, output_dir=\"./finetuned_model\"):\n    \"\"\"Full fine-tuning pipeline.\"\"\"\n\n    # Configure teacher (strong model)\n    dspy.configure(lm=dspy.LM(\"openai/gpt-4o\"))\n\n    teacher = TextClassifier()\n\n    # Evaluate teacher\n    evaluator = Evaluate(devset=devset, metric=classification_metric, 
num_threads=8)\n    teacher_score = evaluator(teacher)  # Evaluate returns a 0-100 score\n    logger.info(f\"Teacher score: {teacher_score:.2f}%\")\n\n    # Fine-tune (train_kwargs passed to constructor)\n    optimizer = BootstrapFinetune(\n        metric=classification_metric,\n        train_kwargs={\n            'learning_rate': 2e-5,\n            'num_train_epochs': 3,\n            'per_device_train_batch_size': 8,\n            'gradient_accumulation_steps': 2,\n            'warmup_ratio': 0.1,\n            'weight_decay': 0.01,\n            'logging_steps': 10,\n            'save_strategy': 'epoch',\n            'output_dir': output_dir\n        }\n    )\n\n    finetuned = optimizer.compile(\n        teacher,\n        trainset=trainset\n    )\n\n    # Evaluate fine-tuned model\n    student_score = evaluator(finetuned)\n    logger.info(f\"Student score: {student_score:.2f}%\")\n\n    # Save (state-only as JSON)\n    finetuned.save(os.path.join(output_dir, \"final_model.json\"))\n\n    return {\n        \"teacher_score\": teacher_score,\n        \"student_score\": student_score,\n        \"model_path\": os.path.join(output_dir, \"final_model.json\")\n    }\n\n# For RAG fine-tuning\nclass RAGClassifier(dspy.Module):\n    \"\"\"RAG pipeline that can be fine-tuned.\"\"\"\n\n    def __init__(self, num_passages=3):\n        super().__init__()\n        self.retrieve = dspy.Retrieve(k=num_passages)\n        self.classify = dspy.ChainOfThought(\"context, text -> label\")\n\n    def forward(self, text):\n        context = self.retrieve(text).passages\n        return self.classify(context=context, text=text)\n\ndef finetune_rag_classifier(trainset, devset):\n    \"\"\"Fine-tune a RAG-based classifier.\"\"\"\n\n    # Configure retriever and LM\n    colbert = dspy.ColBERTv2(url='http://20.102.90.50:2017/wiki17_abstracts')\n    dspy.configure(\n        lm=dspy.LM(\"openai/gpt-4o\"),\n        rm=colbert\n    )\n\n    rag = RAGClassifier()\n\n    # Fine-tune (train_kwargs in constructor)\n    optimizer = BootstrapFinetune(\n     
   metric=classification_metric,\n        train_kwargs={\n            'learning_rate': 1e-5,\n            'num_train_epochs': 5\n        }\n    )\n\n    finetuned = optimizer.compile(\n        rag,\n        trainset=trainset\n    )\n\n    return finetuned\n```\n\n## Training Arguments Reference\n\n| Argument | Description | Typical Value |\n|----------|-------------|---------------|\n| `learning_rate` | Learning rate | 1e-5 to 5e-5 |\n| `num_train_epochs` | Training epochs | 3-5 |\n| `per_device_train_batch_size` | Batch size | 4-16 |\n| `gradient_accumulation_steps` | Gradient accumulation | 2-8 |\n| `warmup_ratio` | Warmup proportion | 0.1 |\n| `weight_decay` | L2 regularization | 0.01 |\n| `max_grad_norm` | Gradient clipping | 1.0 |\n\n## Best Practices\n\n1. **Strong teacher** - Use GPT-4 or Claude as teacher\n2. **Quality data** - Teacher traces are only as good as training examples\n3. **Validate improvement** - Compare student to teacher on held-out set\n4. **Start with more epochs** - Fine-tuning often needs 3-5 epochs\n5. 
**Monitor overfitting** - Track validation loss during training\n\n## Limitations\n\n- Requires access to model weights (not API-only models)\n- Training requires GPU resources\n- Student may not match teacher quality on all inputs\n- Fine-tuning takes hours/days depending on data size\n- Model size reduction may cause capability loss\n\n## Official Documentation\n\n- **DSPy Documentation**: https://dspy.ai/\n- **DSPy GitHub**: https://github.com/stanfordnlp/dspy\n- **BootstrapFinetune API**: https://dspy.ai/api/optimizers/BootstrapFinetune/\n- **Fine-tuning Guide**: https://dspy.ai/tutorials/classification_finetuning/","tags":["dspy","finetune","bootstrap","skills","omidzamani","agent-skills","claude-code","claude-skills","llm","prompt-optimization","rag"],"capabilities":["skill","source-omidzamani","skill-dspy-finetune-bootstrap","topic-agent-skills","topic-claude-code","topic-claude-skills","topic-dspy","topic-llm","topic-prompt-optimization","topic-rag"],"categories":["dspy-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/OmidZamani/dspy-skills/dspy-finetune-bootstrap","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add OmidZamani/dspy-skills","source_repo":"https://github.com/OmidZamani/dspy-skills","install_from":"skills.sh"}},"qualityScore":"0.487","qualityRationale":"deterministic score 0.49 from registry signals: · indexed on github topic:agent-skills · 74 github stars · SKILL.md body (6,917 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-05-02T06:55:44.270Z","embedding":null,"createdAt":"2026-04-18T22:14:11.832Z","updatedAt":"2026-05-02T06:55:44.270Z","lastSeenAt":"2026-05-02T06:55:44.270Z","tsv":"'-16':645 '-4':676 
'-5':636,716 '-8':652 '/api/optimizers/bootstrapfinetune/':780 '/finetuned_model':377 '/stanfordnlp/dspy':775 '/tutorials/classification_finetuning/':787 '0.01':453,662 '0.1':234,450,657 '1':144,671 '1.0':668 '10':456 '1e-5':604,627 '2':179,415,447,484,651,681 '20.102.90.50':578 '2017/wiki17_abstracts':579 '2e-5':433 '3':225,236,437,532,635,693,715 '4':231,249,644,705 '5':608,718 '5e-5':221,629 '8':404,443 'access':728 'accumul':445,647,650 'answer':169 'api':734,777 'api-on':733 'architectur':271 'argument':617,619 'ask':13 'base':569 'batch':229,441,640,642 'best':669 'bootstrap':4 'bootstrapfinetun':27,51,186,200,207,292,425,596,776 'callabl':113 'capabl':764 'categori':315,323 'caus':763 'class':160,309,327,516 'classif':345,400,427,598 'classifi':312,372,560,570 'classificationsignatur':310,335 'claud':678 'clip':667 'colbert':575,585 'compar':696 'configur':151,383,571 'constrain':95 'constructor':423,594 'context':540,547,553,554 'cost':34,85 'creat':28 'data':683,757 'decay':452,659 'def':163,170,330,336,344,370,527,543,557 'default':263 'depend':755 'deploy':49,67,91,250 'desc':322 'descript':100,125,620 'devic':227,439,638 'devset':374,397,398,562 'dict':119 'dir':376,461,463,494,509 'distil':21,41,54,106 'document':767,769 'dspi':2,19,50,56,75,150,196,288,768,771 'dspy-finetune-bootstrap':1 'dspy.ai':770,779,786 'dspy.ai/api/optimizers/bootstrapfinetune/':778 'dspy.ai/tutorials/classification_finetuning/':785 'dspy.chainofthought':167,539 'dspy.colbertv2':576 'dspy.configure':156,387,580 'dspy.evaluate':294 'dspy.example':109 'dspy.inputfield':318 'dspy.lm':158,389,582 'dspy.module':102,128,162,329,518 'dspy.outputfield':321 'dspy.predict':334 'dspy.retrieve':534 'dspy.settings.experimental':204,304 'dspy.signature':311 'dspy.teleprompt':198,290 'effici':65 'els':364 'enabl':180,191,201,301 'environ':96 'epoch':224,436,459,607,632,634,709,717 'evalu':296,393,395,396,407,469,476 'exact':351 'exampl':111,285,692 'experiment':181,188,202,302 'f':410,479 
'faster':87 'featur':182,203,303 'final_model.json':495,510 'fine':16,37,60,132,238,255,380,417,471,514,525,564,589,711,751,782 'fine-tun':15,36,59,131,237,254,379,416,470,513,524,563,588,710,750,781 'finetun':3,126,243,371,464,477,558,609,615 'finetuned.save':264,491 'finetuned_qa_model.json':265,276 'first':272 'flag':193 'forward':171,337,544 'full':378 'generat':183 'github':772 'github.com':774 'github.com/stanfordnlp/dspy':773 'goal':53 'gold':210,347,354,366 'gold.answer.lower':214 'gold.label.lower':356 'good':689 'gpt':675 'gpu':739 'grad':664 'gradient':444,646,649,666 'guid':784 'held':702 'held-out':701 'hours/days':754 'hyperparamet':121 'import':149,195,199,287,291,295,297,299 'improv':695 'infer':33,84 'init':164,331,528 'input':97,98,749 'json':490 'k':535 'kwarg':118,218,420,430,592,601 'l2':660 'label':319,352,355,359,367,369,542 'lambda':209 'larg':79 'learn':219,283,431,602,623,625 'limit':726 'list':108 'lm':157,388,574,581 'load':266,273,278 'loaded.load':275 'log':298,454 'logger':306 'logger.info':409,478 'logging.getlogger':307 'loss':723,765 'machin':282 'match':353,744 'max':663 'may':742,762 'mention':39 'metric':112,115,208,346,399,401,426,428,597,599 'model':20,31,40,62,80,90,135,141,155,241,257,386,473,505,730,736,759 'monitor':719 'must':269 'name':308 'need':81,714 'negat':325 'neutral':326 'none':213,350 'norm':665 'num':222,402,434,530,536,605,630 'offici':766 'often':713 'openai/gpt-4o':159,390,583 'optim':52,206,424,595 'optimizer.compile':244,465,610 'option':116 'os':300 'os.path.join':492,507 'output':122,123,375,460,462,493,508 'overfit':720 'pass':421 'passag':531,537,550 'path':136,138,506 'per':226,438,637 'phase':143,178,235,248 'pipelin':382,520 'posit':324 'practic':670 'pred':211,348,358,368 'pred.answer.lower':216 'pred.label':363 'pred.label.lower':360 'prepar':145 'product':66,284 'program':23,57,76,101,104,127,129,147 'proport':656 'python':148,194,242,251,286 'qualiti':682,746 'question':168,173,176,177,279 
'rag':512,519,559,568,586,611 'rag-bas':567 'ragclassifi':517,587 'rate':220,432,603,624,626 'ratio':233,449,654 'recreat':270 'reduc':32,83 'reduct':761 'refer':618 'regular':661 'requir':190,727,738 'resourc':94,740 'resource-constrain':93 'respons':88 'result':277 'retriev':572 'return':174,340,365,496,551,614 'rm':584 'save':140,252,258,457,485 'score':406,412,414,475,481,483,498,500,502,504 'self':165,172,332,338,529,545 'self.classify':333,341,538,552 'self.cot':166,175 'self.retrieve':533,548 'set':704 'size':230,442,641,643,758,760 'skill':6 'skill-dspy-finetune-bootstrap' 'smaller':89 'source-omidzamani' 'start':706 'state':260,487 'state-on':259,486 'step':446,455,648 'str':137,317,320 'strategi':458 'strip':357,361 'strong':153,385,672 'student':30,44,240,474,480,482,501,503,697,741 'take':753 'teacher':43,103,146,154,384,391,394,405,408,411,413,466,497,499,673,680,684,699,745 'teacher-stud':42 'teacherqa':161,245,274 'text':313,316,339,342,343,541,546,549,555,556 'textclassifi':328,392 'thread':403 'topic-agent-skills' 'topic-claude-code' 'topic-claude-skills' 'topic-dspy' 'topic-llm' 'topic-prompt-optimization' 'topic-rag' 'trace':185,212,349,685 'track':721 'train':45,110,117,120,184,217,223,228,419,429,435,440,591,600,606,616,631,633,639,691,725,737 'trainset':107,246,247,373,467,468,561,612,613 'true':205,305 'tune':17,38,61,133,239,256,381,418,472,515,526,565,590,712,752,783 'type':99,124 'typic':621 'url':577 'use':9,26,70,268,674 'user':12 'valid':114,694,722 'valu':622 'want':47,86 'warmup':232,448,653,655 'weight':25,63,134,451,658,731 'work':74 
'workflow':142","prices":[{"id":"7de9bb53-bff9-415f-8e66-4700236bb33a","listingId":"df12221c-5a6b-47a8-be18-25391ff8a82a","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"OmidZamani","category":"dspy-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T22:14:11.832Z"}],"sources":[{"listingId":"df12221c-5a6b-47a8-be18-25391ff8a82a","source":"github","sourceId":"OmidZamani/dspy-skills/dspy-finetune-bootstrap","sourceUrl":"https://github.com/OmidZamani/dspy-skills/tree/master/skills/dspy-finetune-bootstrap","isPrimary":false,"firstSeenAt":"2026-04-18T22:14:11.832Z","lastSeenAt":"2026-05-02T06:55:44.270Z"}],"details":{"listingId":"df12221c-5a6b-47a8-be18-25391ff8a82a","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"OmidZamani","slug":"dspy-finetune-bootstrap","github":{"repo":"OmidZamani/dspy-skills","stars":74,"topics":["agent-skills","claude-code","claude-skills","dspy","llm","prompt-optimization","rag"],"license":"mit","html_url":"https://github.com/OmidZamani/dspy-skills","pushed_at":"2026-02-21T12:49:43Z","description":"Collection of Claude Skills for DSPy framework - program language models, optimize prompts, and build RAG pipelines systematically","skill_md_sha":"72473b4e508e6a4312037a77eb6a9b7c2a189924","skill_md_path":"skills/dspy-finetune-bootstrap/SKILL.md","default_branch":"master","skill_tree_url":"https://github.com/OmidZamani/dspy-skills/tree/master/skills/dspy-finetune-bootstrap"},"layout":"multi","source":"github","category":"dspy-skills","frontmatter":{"name":"dspy-finetune-bootstrap","description":"This skill should be used when the user asks to \"fine-tune a DSPy model\", \"distill a program into weights\", \"use BootstrapFinetune\", \"create a student model\", \"reduce inference 
costs with fine-tuning\", mentions \"model distillation\", \"teacher-student training\", or wants to deploy a DSPy program as fine-tuned weights for production efficiency."},"skills_sh_url":"https://skills.sh/OmidZamani/dspy-skills/dspy-finetune-bootstrap"},"updatedAt":"2026-05-02T06:55:44.270Z"}}