{"id":"4ff5f775-5bc1-483c-975c-50167982edad","shortId":"eQeknn","kind":"skill","title":"scikit-learn","tagline":"Machine learning in Python with scikit-learn. Use for classification, regression, clustering, model evaluation, and ML pipelines.","description":"# Scikit-learn\n\n## Overview\n\nThis skill provides comprehensive guidance for machine learning tasks using scikit-learn, the industry-standard Python library for classical machine learning. Use this skill for classification, regression, clustering, dimensionality reduction, preprocessing, model evaluation, and building production-ready ML pipelines.\n\n## Installation\n\n```bash\n# Install scikit-learn using uv\nuv pip install scikit-learn\n\n# Optional: Install visualization dependencies\nuv pip install matplotlib seaborn\n\n# Commonly used with\nuv pip install pandas numpy\n```\n\n## When to Use This Skill\n\nUse the scikit-learn skill when:\n\n- Building classification or regression models\n- Performing clustering or dimensionality reduction\n- Preprocessing and transforming data for machine learning\n- Evaluating model performance with cross-validation\n- Tuning hyperparameters with grid or random search\n- Creating ML pipelines for production workflows\n- Comparing different algorithms for a task\n- Working with both structured (tabular) and text data\n- Needing interpretable, classical machine learning approaches\n\n## Quick Start\n\n### Classification Example\n\n```python\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import classification_report\n\n# Split data\nX_train, X_test, y_train, y_test = train_test_split(\n    X, y, test_size=0.2, stratify=y, random_state=42\n)\n\n# Preprocess\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\n# Train model\nmodel = RandomForestClassifier(n_estimators=100, 
random_state=42)\nmodel.fit(X_train_scaled, y_train)\n\n# Evaluate\ny_pred = model.predict(X_test_scaled)\nprint(classification_report(y_test, y_pred))\n```\n\n### Complete Pipeline with Mixed Data\n\n```python\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.ensemble import GradientBoostingClassifier\n\n# Define feature types\nnumeric_features = ['age', 'income']\ncategorical_features = ['gender', 'occupation']\n\n# Create preprocessing pipelines\nnumeric_transformer = Pipeline([\n    ('imputer', SimpleImputer(strategy='median')),\n    ('scaler', StandardScaler())\n])\n\ncategorical_transformer = Pipeline([\n    ('imputer', SimpleImputer(strategy='most_frequent')),\n    ('onehot', OneHotEncoder(handle_unknown='ignore'))\n])\n\n# Combine transformers\npreprocessor = ColumnTransformer([\n    ('num', numeric_transformer, numeric_features),\n    ('cat', categorical_transformer, categorical_features)\n])\n\n# Full pipeline\nmodel = Pipeline([\n    ('preprocessor', preprocessor),\n    ('classifier', GradientBoostingClassifier(random_state=42))\n])\n\n# Fit and predict\nmodel.fit(X_train, y_train)\ny_pred = model.predict(X_test)\n```\n\n## Core Capabilities\n\n### 1. 
Supervised Learning\n\nComprehensive algorithms for classification and regression tasks.\n\n**Key algorithms:**\n- **Linear models**: Logistic Regression, Linear Regression, Ridge, Lasso, ElasticNet\n- **Tree-based**: Decision Trees, Random Forest, Gradient Boosting\n- **Support Vector Machines**: SVC, SVR with various kernels\n- **Ensemble methods**: AdaBoost, Voting, Stacking\n- **Neural Networks**: MLPClassifier, MLPRegressor\n- **Others**: Naive Bayes, K-Nearest Neighbors\n\n**When to use:**\n- Classification: Predicting discrete categories (spam detection, image classification, fraud detection)\n- Regression: Predicting continuous values (price prediction, demand forecasting)\n\n**See:** `references/supervised_learning.md` for detailed algorithm documentation, parameters, and usage examples.\n\n### 2. Unsupervised Learning\n\nDiscover patterns in unlabeled data through clustering and dimensionality reduction.\n\n**Clustering algorithms:**\n- **Partition-based**: K-Means, MiniBatchKMeans\n- **Density-based**: DBSCAN, HDBSCAN, OPTICS\n- **Hierarchical**: AgglomerativeClustering\n- **Probabilistic**: Gaussian Mixture Models\n- **Others**: MeanShift, SpectralClustering, BIRCH\n\n**Dimensionality reduction:**\n- **Linear**: PCA, TruncatedSVD, NMF\n- **Manifold learning**: t-SNE, Isomap, LLE (UMAP via the separate `umap-learn` package)\n- **Feature extraction**: FastICA, LatentDirichletAllocation\n\n**When to use:**\n- Customer segmentation, anomaly detection, data visualization\n- Reducing feature dimensions, exploratory data analysis\n- Topic modeling, image compression\n\n**See:** `references/unsupervised_learning.md` for detailed documentation.\n\n### 3. 
Model Evaluation and Selection\n\nTools for robust model evaluation, cross-validation, and hyperparameter tuning.\n\n**Cross-validation strategies:**\n- KFold, StratifiedKFold (classification)\n- TimeSeriesSplit (temporal data)\n- GroupKFold (grouped samples)\n\n**Hyperparameter tuning:**\n- GridSearchCV (exhaustive search)\n- RandomizedSearchCV (random sampling)\n- HalvingGridSearchCV (successive halving)\n\n**Metrics:**\n- **Classification**: accuracy, precision, recall, F1-score, ROC AUC, confusion matrix\n- **Regression**: MSE, RMSE, MAE, R², MAPE\n- **Clustering**: silhouette score, Calinski-Harabasz, Davies-Bouldin\n\n**When to use:**\n- Comparing model performance objectively\n- Finding optimal hyperparameters\n- Preventing overfitting through cross-validation\n- Understanding model behavior with learning curves\n\n**See:** `references/model_evaluation.md` for comprehensive metrics and tuning strategies.\n\n### 4. Data Preprocessing\n\nTransform raw data into formats suitable for machine learning.\n\n**Scaling and normalization:**\n- StandardScaler (zero mean, unit variance)\n- MinMaxScaler (bounded range)\n- RobustScaler (robust to outliers)\n- Normalizer (sample-wise normalization)\n\n**Encoding categorical variables:**\n- OneHotEncoder (nominal categories)\n- OrdinalEncoder (ordered categories)\n- LabelEncoder (encoding target labels)\n\n**Handling missing values:**\n- SimpleImputer (mean, median, most frequent)\n- KNNImputer (k-nearest neighbors)\n- IterativeImputer (multivariate imputation; experimental, requires importing `enable_iterative_imputer`)\n\n**Feature engineering:**\n- PolynomialFeatures (interaction terms)\n- KBinsDiscretizer (binning)\n- Feature selection (RFE, SelectKBest, SelectFromModel)\n\n**When to use:**\n- Before training any algorithm that requires scaled features (SVM, KNN, Neural Networks)\n- Converting categorical variables to numeric format\n- Handling missing data systematically\n- Creating non-linear features for linear models\n\n**See:** `references/preprocessing.md` for detailed 
preprocessing techniques.\n\n### 5. Pipelines and Composition\n\nBuild reproducible, production-ready ML workflows.\n\n**Key components:**\n- **Pipeline**: Chain transformers and estimators sequentially\n- **ColumnTransformer**: Apply different preprocessing to different columns\n- **FeatureUnion**: Combine multiple transformers in parallel\n- **TransformedTargetRegressor**: Transform target variable\n\n**Benefits:**\n- Prevents data leakage in cross-validation\n- Simplifies code and improves maintainability\n- Enables joint hyperparameter tuning\n- Ensures consistency between training and prediction\n\n**When to use:**\n- Always use Pipelines for production workflows\n- When mixing numerical and categorical features (use ColumnTransformer)\n- When performing cross-validation with preprocessing steps\n- When hyperparameter tuning includes preprocessing parameters\n\n**See:** `references/pipelines_and_composition.md` for comprehensive pipeline patterns.\n\n## Example Scripts\n\n### Classification Pipeline\n\nRun a complete classification workflow with preprocessing, model comparison, hyperparameter tuning, and evaluation:\n\n```bash\npython scripts/classification_pipeline.py\n```\n\nThis script demonstrates:\n- Handling mixed data types (numeric and categorical)\n- Model comparison using cross-validation\n- Hyperparameter tuning with GridSearchCV\n- Comprehensive evaluation with multiple metrics\n- Feature importance analysis\n\n### Clustering Analysis\n\nPerform clustering analysis with algorithm comparison and visualization:\n\n```bash\npython scripts/clustering_analysis.py\n```\n\nThis script demonstrates:\n- Finding optimal number of clusters (elbow method, silhouette analysis)\n- Comparing multiple clustering algorithms (K-Means, DBSCAN, Agglomerative, Gaussian Mixture)\n- Evaluating clustering quality without ground truth\n- Visualizing results with PCA projection\n\n## Reference Documentation\n\nThis skill includes comprehensive reference files for deep dives 
into specific topics:\n\n### Quick Reference\n**File:** `references/quick_reference.md`\n- Common import patterns and installation instructions\n- Quick workflow templates for common tasks\n- Algorithm selection cheat sheets\n- Common patterns and gotchas\n- Performance optimization tips\n\n### Supervised Learning\n**File:** `references/supervised_learning.md`\n- Linear models (regression and classification)\n- Support Vector Machines\n- Decision Trees and ensemble methods\n- K-Nearest Neighbors, Naive Bayes, Neural Networks\n- Algorithm selection guide\n\n### Unsupervised Learning\n**File:** `references/unsupervised_learning.md`\n- All clustering algorithms with parameters and use cases\n- Dimensionality reduction techniques\n- Outlier and novelty detection\n- Gaussian Mixture Models\n- Method selection guide\n\n### Model Evaluation\n**File:** `references/model_evaluation.md`\n- Cross-validation strategies\n- Hyperparameter tuning methods\n- Classification, regression, and clustering metrics\n- Learning and validation curves\n- Best practices for model selection\n\n### Preprocessing\n**File:** `references/preprocessing.md`\n- Feature scaling and normalization\n- Encoding categorical variables\n- Missing value imputation\n- Feature engineering techniques\n- Custom transformers\n\n### Pipelines and Composition\n**File:** `references/pipelines_and_composition.md`\n- Pipeline construction and usage\n- ColumnTransformer for mixed data types\n- FeatureUnion for parallel transformations\n- Complete end-to-end examples\n- Best practices\n\n## Common Workflows\n\n### Building a Classification Model\n\n1. **Load and explore data**\n   ```python\n   import pandas as pd\n   df = pd.read_csv('data.csv')\n   X = df.drop('target', axis=1)\n   y = df['target']\n   ```\n\n2. 
**Split data with stratification**\n   ```python\n   from sklearn.model_selection import train_test_split\n   X_train, X_test, y_train, y_test = train_test_split(\n       X, y, test_size=0.2, stratify=y, random_state=42\n   )\n   ```\n\n3. **Create preprocessing pipeline**\n   ```python\n   from sklearn.pipeline import Pipeline\n   from sklearn.preprocessing import StandardScaler, OneHotEncoder\n   from sklearn.compose import ColumnTransformer\n\n   # Handle numeric and categorical features separately\n   preprocessor = ColumnTransformer([\n       ('num', StandardScaler(), numeric_features),\n       ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_features)\n   ])\n   ```\n\n4. **Build complete pipeline**\n   ```python\n   from sklearn.ensemble import RandomForestClassifier\n\n   model = Pipeline([\n       ('preprocessor', preprocessor),\n       ('classifier', RandomForestClassifier(random_state=42))\n   ])\n   ```\n\n5. **Tune hyperparameters**\n   ```python\n   from sklearn.model_selection import GridSearchCV\n\n   param_grid = {\n       'classifier__n_estimators': [100, 200],\n       'classifier__max_depth': [10, 20, None]\n   }\n\n   grid_search = GridSearchCV(model, param_grid, cv=5)\n   grid_search.fit(X_train, y_train)\n   ```\n\n6. **Evaluate on test set**\n   ```python\n   from sklearn.metrics import classification_report\n\n   best_model = grid_search.best_estimator_\n   y_pred = best_model.predict(X_test)\n   print(classification_report(y_test, y_pred))\n   ```\n\n### Performing Clustering Analysis\n\n1. **Preprocess data**\n   ```python\n   from sklearn.preprocessing import StandardScaler\n\n   scaler = StandardScaler()\n   X_scaled = scaler.fit_transform(X)\n   ```\n\n2. 
**Find optimal number of clusters**\n   ```python\n   import numpy as np\n\n   from sklearn.cluster import KMeans\n   from sklearn.metrics import silhouette_score\n\n   scores = []\n   for k in range(2, 11):\n       kmeans = KMeans(n_clusters=k, random_state=42)\n       labels = kmeans.fit_predict(X_scaled)\n       scores.append(silhouette_score(X_scaled, labels))\n\n   optimal_k = range(2, 11)[np.argmax(scores)]\n   ```\n\n3. **Apply clustering**\n   ```python\n   model = KMeans(n_clusters=optimal_k, random_state=42)\n   labels = model.fit_predict(X_scaled)\n   ```\n\n4. **Visualize with dimensionality reduction**\n   ```python\n   import matplotlib.pyplot as plt\n\n   from sklearn.decomposition import PCA\n\n   pca = PCA(n_components=2)\n   X_2d = pca.fit_transform(X_scaled)\n\n   plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels, cmap='viridis')\n   ```\n\n## Best Practices\n\n### Always Use Pipelines\nPipelines prevent data leakage and ensure consistency:\n```python\n# Good: Preprocessing in pipeline\npipeline = Pipeline([\n    ('scaler', StandardScaler()),\n    ('model', LogisticRegression())\n])\n\n# Bad: Preprocessing outside (can leak information)\nX_scaled = StandardScaler().fit_transform(X)\n```\n\n### Fit on Training Data Only\nNever fit on test data:\n```python\n# Good\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)\nX_test_scaled = scaler.transform(X_test)  # Only transform\n\n# Bad\nscaler = StandardScaler()\nX_all_scaled = scaler.fit_transform(np.vstack([X_train, X_test]))\n```\n\n### Use Stratified Splitting for Classification\nPreserve class distribution:\n```python\nX_train, X_test, y_train, y_test = train_test_split(\n    X, y, test_size=0.2, stratify=y, random_state=42\n)\n```\n\n### Set Random State for Reproducibility\n```python\nmodel = RandomForestClassifier(n_estimators=100, random_state=42)\n```\n\n### Choose Appropriate Metrics\n- Balanced data: Accuracy, F1-score\n- Imbalanced data: Precision, Recall, ROC AUC, Balanced Accuracy\n- Cost-sensitive: Define custom scorer\n\n### 
Scale Features When Required\nAlgorithms requiring feature scaling:\n- SVM, KNN, Neural Networks\n- PCA, Linear/Logistic Regression with regularization\n- K-Means clustering\n\nAlgorithms not requiring scaling:\n- Tree-based models (Decision Trees, Random Forest, Gradient Boosting)\n- Naive Bayes\n\n## Troubleshooting Common Issues\n\n### ConvergenceWarning\n**Issue:** Model didn't converge\n**Solution:** Increase `max_iter` or scale features\n```python\nfrom sklearn.linear_model import LogisticRegression\n\nmodel = LogisticRegression(max_iter=1000)\n```\n\n### Poor Performance on Test Set\n**Issue:** Overfitting\n**Solution:** Use regularization, cross-validation, or a simpler model\n```python\nfrom sklearn.linear_model import Ridge\nfrom sklearn.model_selection import cross_val_score\n\n# Add regularization\nmodel = Ridge(alpha=1.0)\n\n# Use cross-validation\nscores = cross_val_score(model, X, y, cv=5)\n```\n\n### Memory Error with Large Datasets\n**Solution:** Use algorithms designed for large data\n```python\n# Use SGD for large datasets\nfrom sklearn.linear_model import SGDClassifier\nmodel = SGDClassifier()\n\n# Or MiniBatchKMeans for clustering\nfrom sklearn.cluster import MiniBatchKMeans\nmodel = MiniBatchKMeans(n_clusters=8, batch_size=100)\n```\n\n## Additional Resources\n\n- Official Documentation: https://scikit-learn.org/stable/\n- User Guide: https://scikit-learn.org/stable/user_guide.html\n- API Reference: https://scikit-learn.org/stable/api/index.html\n- Examples Gallery: https://scikit-learn.org/stable/auto_examples/index.html\n\n## Limitations\n- Use this skill only when the task clearly matches the scope described above.\n- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.\n- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are 
missing.","tags":["scikit","learn","antigravity","awesome","skills","sickn33","agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows"],"capabilities":["skill","source-sickn33","skill-scikit-learn","topic-agent-skills","topic-agentic-skills","topic-ai-agent-skills","topic-ai-agents","topic-ai-coding","topic-ai-workflows","topic-antigravity","topic-antigravity-skills","topic-claude-code","topic-claude-code-skills","topic-codex-cli","topic-codex-skills"],"categories":["antigravity-awesome-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/sickn33/antigravity-awesome-skills/scikit-learn","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"cli":"npx skills add sickn33/antigravity-awesome-skills","source_repo":"https://github.com/sickn33/antigravity-awesome-skills","install_from":"skills.sh"}},"qualityScore":"0.700","qualityRationale":"deterministic score 0.70 from registry signals: · indexed on github topic:agent-skills · 34583 github stars · SKILL.md body (15,355 chars)","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill-github:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T18:52:11.261Z","embedding":null,"createdAt":"2026-04-18T21:43:57.830Z","updatedAt":"2026-04-22T18:52:11.261Z","lastSeenAt":"2026-04-22T18:52:11.261Z","tsv":"'/stable/':1669 '/stable/api/index.html':1679 '/stable/auto_examples/index.html':1684 '/stable/user_guide.html':1674 '0':1375 '0.2':213,1151,1484 '1':368,1101,1119,1269,1378 '1.0':1608 '10':1223 '100':241,1218,1500,1662 '1000':1585 '11':1306,1330 '2':453,1123,1284,1305,1329,1365 '20':1224 '200':1219 '2d':1367,1374,1377 '3':533,1157,1333 '4':630,1190,1351 '42':218,244,352,1156,1203,1314,1345,1489,1503 '5':741,1204,1233,1621 '6':1239 '8':1659 
'accuraci':575,1509,1520 'adaboost':408 'add':1603 'addit':1663 'age':297 'agglom':918 'agglomerativeclust':482 'algorithm':153,372,379,447,467,708,891,913,962,998,1007,1531,1548,1629 'alpha':1607 'alway':803,1385 'analysi':523,884,886,889,909,1268 'anomali':514 'api':1675 'appli':761,1334 'approach':170 'appropri':1505 'ask':1718 'auc':582,1518 'axi':1118 'bad':1406,1447 'balanc':1507,1519 'base':391,470,477,1554 'bash':69,854,895 'batch':1660 'bay':417,995,1563 'behavior':618 'benefit':777 'best':1046,1093,1250,1383 'best_model.predict':1256 'bin':696 'birch':490 'boost':397,1561 'bouldin':599 'bound':651 'boundari':1726 'build':62,114,745,1097,1191 'c':1379 'calinski':595 'calinski-harabasz':594 'capabl':367 'case':1012 'cat':337,1186 'categor':299,315,338,340,663,718,813,866,1059,1177,1188 'categori':428,667,670 'chain':755 'cheat':964 'choos':1504 'clarif':1720 'class':1466 'classic':46,167 'classif':14,53,115,173,194,259,374,425,432,555,574,839,844,981,1037,1099,1248,1260,1464 'classifi':348,1199,1215,1220 'clear':1693 'cluster':16,55,120,462,466,591,885,888,905,912,922,1006,1040,1267,1289,1310,1335,1340,1547,1650,1658 'cmap':1381 'code':786 'column':766 'columntransform':278,331,760,816,1078,1173,1181 'combin':328,768 'common':93,950,960,966,1095,1565 'compar':151,603,910 'comparison':849,868,892 'complet':265,843,1087,1192 'compon':753,1364 'composit':744,1071 'comprehens':29,371,625,834,877,937 'compress':527 'confus':583 'consist':795,1394 'construct':1075 'continu':437 'converg':1572 'convergencewarn':1567 'convert':717 'core':366 'cost':1522 'cost-sensit':1521 'creat':145,303,727,1158 'criteria':1729 'cross':136,544,550,614,783,820,871,1031,1597,1611,1614 'cross-valid':135,543,549,613,782,819,870,1030,1596,1610 'csv':1113 'curv':621,1045 'custom':512,1067,1525 'cv':1232,1620 'data':127,164,197,269,460,516,522,558,631,635,725,779,862,1081,1105,1125,1271,1390,1421,1427,1508,1514,1633 'data.csv':1114 'dataset':1626,1639 'davi':598 'davies-bouldin':597 
'dbscan':478,917 'decis':392,985,1556 'deep':941 'defin':292,1524 'demand':441 'demonstr':859,900 'densiti':476 'density-bas':475 'depend':86 'depth':1222 'describ':1697 'design':1630 'detail':446,531,738 'detect':430,434,515,1019 'df':1111,1121 'df.drop':1116 'didn':1570 'differ':152,762,765 'dimens':520 'dimension':56,122,464,491,1013,1354 'discov':456 'discret':427 'distribut':1467 'dive':942 'document':448,532,933,1666 'elasticnet':388 'elbow':906 'enabl':790 'encod':662,673,1058 'end':1089,1091 'end-to-end':1088 'engin':691,1065 'ensembl':406,988 'ensur':794,1393 'environ':1709 'environment-specif':1708 'error':1623 'estim':240,758,1217,1253,1499 'evalu':18,60,131,251,535,542,853,878,921,1027,1240 'exampl':174,452,837,1092,1680 'exhaust':565 'expert':1714 'explor':1104 'exploratori':521 'extract':506 'f1':579,1511 'f1-score':578,1510 'fastica':507 'featur':293,296,300,336,341,505,519,690,697,712,731,814,882,1054,1064,1178,1185,1189,1528,1533,1579 'featureunion':767,1083 'file':939,948,975,1003,1028,1052,1072 'find':607,901,1285 'fit':353,1415,1418,1424 'forecast':442 'forest':395,1559 'format':637,722 'fraud':433 'frequent':322,681 'full':342 'galleri':1681 'gaussian':484,919,1020 'gender':301 'good':1396,1429 'gotcha':969 'gradient':396,1560 'gradientboostingclassifi':291,349 'grid':141,1214,1226,1231 'grid_search.best':1252 'grid_search.fit':1234 'gridsearchcv':564,876,1212,1228 'ground':925 'group':560 'groupkfold':559 'guid':1000,1025,1671 'guidanc':30 'halv':572 'halvinggridsearchcv':570 'handl':325,674,723,860,1174 'harabasz':596 'hdbscan':479 'hierarch':481 'hyperparamet':139,547,562,609,792,826,850,873,1034,1206 'ignor':327 'imag':431,526 'imbalanc':1513 'import':179,185,189,193,273,277,281,286,290,883,951,1107,1132,1164,1168,1172,1211,1247,1275,1293,1297,1359,1643,1653 'improv':788 'imput':309,318,689,1063 'includ':828,936 'incom':298 'increas':1574 'industri':41 'industry-standard':40 'inform':1411 'input':1723 'instal':68,70,79,84,90,99,954 
'instruct':955 'interact':693 'interpret':166 'isomap':503 'issu':1566,1568,1591 'iter':1576,1584 'iterativeimput':687 'joint':791 'k':419,472,684,915,991,1302,1311,1327,1342,1545 'k-mean':471,914,1544 'k-nearest':418,683,990 'kbinsdiscret':695 'kernel':405 'key':378,752 'kfold':553 'kmean':1294,1307,1308,1338 'kmeans.fit':1316 'knn':714,1536 'knnimput':682 'label':1315,1325,1346,1380 'labelencod':671 'larg':1625,1632,1638 'lasso':387 'latentdirichletalloc':508 'leak':1410 'leakag':780,1391 'learn':3,5,11,24,33,38,48,73,82,111,130,169,370,455,498,620,641,974,1002,1042 'librari':44 'limit':1685 'linear':380,384,493,730,733,977 'linear/logistic':1540 'lle':504 'load':1102 'logist':382 'logisticregress':1405,1582 'machin':4,32,47,129,168,400,640,984 'mae':588 'maintain':789 'manifold':497 'mape':590 'match':1694 'matplotlib':91 'matrix':584 'max':1221,1575,1583 'mean':473,647,678,916,1546 'meanshift':488 'median':312,679 'memori':1622 'method':407,907,989,1023,1036 'metric':573,626,881,1041,1506 'minibatchkmean':474,1648,1654,1656 'minmaxscal':650 'miss':675,724,1061,1731 'mix':268,810,861,1080 'mixtur':485,920,1021 'ml':20,66,146,750 'mlpclassifi':413 'mlpregressor':414 'model':17,59,118,132,236,237,344,381,486,525,534,541,604,617,734,848,867,978,1022,1026,1049,1100,1195,1229,1251,1337,1404,1496,1555,1569,1581,1601,1605,1617,1642,1645,1655 'model.fit':245,356,1347 'model.predict':254,363 'mse':586 'multipl':769,880,911 'multivari':688 'n':239,1216,1309,1339,1363,1498,1657 'naiv':416,994,1562 'nearest':420,685,992 'need':165 'neighbor':421,686,993 'network':412,716,997,1538 'neural':411,715,996,1537 'never':1423 'nmf':496 'nomin':666 'non':729 'non-linear':728 'none':1225 'normal':644,657,661,1057 'novelti':1018 'np.argmax':1331 'np.vstack':1455 'num':332,1182 'number':903,1287 'numer':295,306,333,335,721,811,864,1175,1184 'numpi':101 'object':606 'occup':302 'offici':1665 'onehot':323 'onehotencod':283,324,665,1187 'optic':480 'optim':608,902,971,1286,1326,1341 
'option':83 'order':669 'ordinalencod':668 'other':415,487 'outlier':656,1016 'output':1703 'outsid':1408 'overfit':611,1592 'overview':25 'panda':100,1108 'parallel':772,1085 'param':1213,1230 'paramet':449,830,1009 'partit':469 'partition-bas':468 'pattern':457,836,952,967 'pca':494,930,1360,1361,1362,1539 'pca.fit':1368 'pd':1110 'pd.read':1112 'perform':119,133,605,818,887,970,1266,1587 'permiss':1724 'pip':78,89,98 'pipelin':21,67,147,266,274,305,308,317,343,345,742,754,805,835,840,1069,1074,1160,1165,1193,1196,1387,1388,1399,1400,1401 'plt.scatter':1372 'polynomialfeatur':692 'poor':1586 'practic':1047,1094,1384 'precis':576,1515 'pred':253,264,362,1255,1265 'predict':355,426,436,440,799,1317,1348 'preprocess':58,124,219,304,632,739,763,823,829,847,1051,1159,1270,1397,1407 'preprocessor':330,346,347,1180,1197,1198 'preserv':1465 'prevent':610,778,1389 'price':439 'print':258,1259 'probabilist':483 'product':64,149,748,807 'production-readi':63,747 'project':931 'provid':28 'python':7,43,175,270,855,896,1106,1128,1161,1194,1207,1244,1272,1290,1336,1356,1395,1428,1468,1495,1580,1602,1634 'qualiti':923 'quick':171,946,956 'r':589 'random':143,216,242,350,394,568,1154,1201,1312,1343,1487,1491,1501,1558 'randomforestclassifi':190,238,1200,1497 'randomizedsearchcv':567 'rang':652,1304,1328 'raw':634 'readi':65,749 'recal':577,1516 'reduc':518 'reduct':57,123,465,492,1014,1355 'refer':932,938,947,1676 'references/model_evaluation.md':623,1029 'references/pipelines_and_composition.md':832,1073 'references/preprocessing.md':736,1053 'references/quick_reference.md':949 'references/supervised_learning.md':444,976 'references/unsupervised_learning.md':529,1004 'regress':15,54,117,376,383,385,435,585,979,1038,1541 'regular':1543,1595,1604 'report':195,260,1249,1261 'reproduc':746,1494 'requir':710,1530,1532,1550,1722 'resourc':1664 'result':928 'review':1715 'rfe':699 'ridg':386,1606 'rmse':587 'robust':540,654 'robustscal':653 'roc':581,1517 'run':841 'safeti':1725 
'sampl':561,569,659 'sample-wis':658 'scale':224,231,248,257,642,711,1055,1280,1319,1324,1350,1371,1413,1434,1441,1452,1527,1534,1551,1578 'scaler':220,313,1277,1402,1430,1448 'scaler.fit':225,1281,1435,1453 'scaler.transform':232,1442 'scikit':2,10,23,37,72,81,110 'scikit-learn':1,9,22,36,71,80,109 'scikit-learn.org':1668,1673,1678,1683 'scikit-learn.org/stable/':1667 'scikit-learn.org/stable/api/index.html':1677 'scikit-learn.org/stable/auto_examples/index.html':1682 'scikit-learn.org/stable/user_guide.html':1672 'scope':1696 'score':580,593,1299,1300,1322,1332,1512,1613,1616 'scorer':1526 'scores.append':1320 'script':838,858,899 'scripts/classification_pipeline.py':856 'scripts/clustering_analysis.py':897 'seaborn':92 'search':144,566,1227 'see':443,528,622,735,831 'segment':513 'select':178,537,698,963,999,1024,1050,1131,1210 'selectfrommodel':701 'selectkbest':700 'sensit':1523 'separ':1179 'sequenti':759 'set':1243,1490,1590 'sgd':1636 'sgdclassifi':1644,1646 'sheet':965 'silhouett':592,908,1298,1321 'simpleimput':287,310,319,677 'simpler':1600 'simplifi':785 'size':212,1150,1483,1661 'skill':27,51,106,112,935,1688 'skill-scikit-learn' 'sklearn.cluster':1292,1652 'sklearn.compose':276,1171 'sklearn.decomposition':1358 'sklearn.ensemble':188,289 'sklearn.impute':285 'sklearn.linear':1641 'sklearn.metrics':192,1246,1296 'sklearn.model':177,1130,1209 'sklearn.pipeline':272,1163 'sklearn.preprocessing':184,280,1167,1274 'sne':501 'solut':1573,1593,1627 'source-sickn33' 'spam':429 'specif':944,1710 'spectralclust':489 'split':182,196,208,1124,1135,1146,1462,1479 'stack':410 'standard':42 'standardscal':186,221,282,314,645,1169,1183,1276,1278,1403,1414,1431,1449 'start':172 'state':217,243,351,1155,1202,1313,1344,1488,1492,1502 'step':824 'stop':1716 'strategi':311,320,552,629,1033 'stratif':1127 'stratifi':214,1152,1461,1485 'stratifiedkfold':554 'structur':160 'substitut':1706 'success':571,1728 'suitabl':638 'supervis':369,973 'support':398,982 'svc':401 
'svm':713,1535 'svr':402 'systemat':726 't-sne':499 'tabular':161 'target':672,775,1117,1122 'task':34,156,377,961,1692 'techniqu':740,1015,1066 'templat':958 'tempor':557 'term':694 'test':181,201,205,207,211,230,234,256,262,365,1134,1139,1143,1145,1149,1242,1258,1263,1426,1440,1444,1459,1472,1476,1478,1482,1589,1712 'text':163 'timeseriessplit':556 'tip':972 'tool':538 'topic':524,945 'topic-agent-skills' 'topic-agentic-skills' 'topic-ai-agent-skills' 'topic-ai-agents' 'topic-ai-coding' 'topic-ai-workflows' 'topic-antigravity' 'topic-antigravity-skills' 'topic-claude-code' 'topic-claude-code-skills' 'topic-codex-cli' 'topic-codex-skills' 'train':180,199,203,206,223,228,235,247,250,358,360,706,797,1133,1137,1141,1144,1236,1238,1420,1433,1438,1457,1470,1474,1477 'transform':126,226,307,316,329,334,339,633,756,770,774,1068,1086,1282,1369,1416,1436,1446,1454 'transformedtargetregressor':773 'treat':1701 'tree':390,393,986,1553,1557 'tree-bas':389,1552 'troubleshoot':1564 'truncatedsvd':495 'truth':926 'tune':138,548,563,628,793,827,851,874,1035,1205 'type':294,863,1082 'umap':502 'understand':616 'unit':648 'unknown':326 'unlabel':459 'unsupervis':454,1001 'usag':451,1077 'use':12,35,49,74,94,104,107,424,511,602,704,802,804,815,869,1011,1386,1460,1594,1609,1628,1635,1686 'user':1670 'uv':75,76,77,87,88,96,97 'val':1615 'valid':137,545,551,615,784,821,872,1032,1044,1598,1612,1711 'valu':438,676,1062 'variabl':664,719,776,1060 'varianc':649 'various':404 'vector':399,983 'viridi':1382 'visual':85,517,894,927,1352 'vote':409 'wise':660 'without':924 'work':157 'workflow':150,751,808,845,957,1096 'x':198,200,209,222,227,229,233,246,255,357,364,1115,1136,1138,1147,1235,1257,1279,1283,1318,1323,1349,1366,1370,1373,1376,1412,1417,1432,1437,1439,1443,1450,1456,1458,1469,1471,1480,1618 'y':202,204,210,215,249,252,261,263,359,361,1120,1140,1142,1148,1153,1237,1254,1262,1264,1473,1475,1481,1486,1619 
'zero':646","prices":[{"id":"f14234fd-fab7-4e23-a983-2538b96bde6f","listingId":"4ff5f775-5bc1-483c-975c-50167982edad","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"sickn33","category":"antigravity-awesome-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T21:43:57.830Z"}],"sources":[{"listingId":"4ff5f775-5bc1-483c-975c-50167982edad","source":"github","sourceId":"sickn33/antigravity-awesome-skills/scikit-learn","sourceUrl":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/scikit-learn","isPrimary":false,"firstSeenAt":"2026-04-18T21:43:57.830Z","lastSeenAt":"2026-04-22T18:52:11.261Z"}],"details":{"listingId":"4ff5f775-5bc1-483c-975c-50167982edad","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"sickn33","slug":"scikit-learn","github":{"repo":"sickn33/antigravity-awesome-skills","stars":34583,"topics":["agent-skills","agentic-skills","ai-agent-skills","ai-agents","ai-coding","ai-workflows","antigravity","antigravity-skills","claude-code","claude-code-skills","codex-cli","codex-skills","cursor","cursor-skills","developer-tools","gemini-cli","gemini-skills","kiro","mcp","skill-library"],"license":"mit","html_url":"https://github.com/sickn33/antigravity-awesome-skills","pushed_at":"2026-04-22T06:40:00Z","description":"Installable GitHub library of 1,400+ agentic skills for Claude Code, Cursor, Codex CLI, Gemini CLI, Antigravity, and more. 
Includes installer CLI, bundles, workflows, and official/community skill collections.","skill_md_sha":"a471656b7fd13c252f5a1bf8248efdf38389d74c","skill_md_path":"skills/scikit-learn/SKILL.md","default_branch":"main","skill_tree_url":"https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/scikit-learn"},"layout":"multi","source":"github","category":"antigravity-awesome-skills","frontmatter":{"name":"scikit-learn","license":"BSD-3-Clause license","description":"Machine learning in Python with scikit-learn. Use for classification, regression, clustering, model evaluation, and ML pipelines."},"skills_sh_url":"https://skills.sh/sickn33/antigravity-awesome-skills/scikit-learn"},"updatedAt":"2026-04-22T18:52:11.261Z"}}