{"id":"fac86b7d-9b85-4e4f-83d1-fe5925b9dbb1","shortId":"HHwCU7","kind":"skill","title":"Coreml","tagline":"Swift Ios Skills skill by Dpearson2699","description":"# Core ML Swift Integration\n\nLoad, configure, and run Core ML models in iOS apps. This skill covers the\nSwift side: model loading, prediction, MLTensor, profiling, and deployment.\nTarget iOS 26+ with Swift 6.3, backward-compatible to iOS 14 unless noted.\n\n> **Scope boundary:** Python-side model conversion, optimization (quantization,\n> palettization, pruning), and framework selection live in the `apple-on-device-ai`\n> skill. This skill owns Swift integration only.\n\nSee [references/coreml-swift-integration.md](references/coreml-swift-integration.md) for complete code patterns including\nactor-based caching, batch inference, image preprocessing, and testing.\n\n## Contents\n\n- [Loading Models](#loading-models)\n- [Model Configuration](#model-configuration)\n- [Making Predictions](#making-predictions)\n- [MLTensor (iOS 18+)](#mltensor-ios-18)\n- [Working with MLMultiArray](#working-with-mlmultiarray)\n- [Image Preprocessing](#image-preprocessing)\n- [Multi-Model Pipelines](#multi-model-pipelines)\n- [Vision Integration](#vision-integration)\n- [Performance Profiling](#performance-profiling)\n- [Model Deployment](#model-deployment)\n- [Memory Management](#memory-management)\n- [Common Mistakes](#common-mistakes)\n- [Review Checklist](#review-checklist)\n- [References](#references)\n\n## Loading Models\n\n### Auto-Generated Classes\n\nWhen you drag a `.mlpackage` or `.mlmodelc` into Xcode, it generates a Swift\nclass with typed input/output. 
Use this whenever possible.\n\n```swift\nimport CoreML\n\nlet config = MLModelConfiguration()\nconfig.computeUnits = .all\n\nlet model = try MyImageClassifier(configuration: config)\n```\n\n### Manual Loading\n\nLoad from a URL when the model is downloaded at runtime or stored outside the\nbundle.\n\n```swift\nlet modelURL = Bundle.main.url(\n    forResource: \"MyModel\", withExtension: \"mlmodelc\"\n)!\nlet model = try MLModel(contentsOf: modelURL, configuration: config)\n```\n\n### Async Loading (iOS 16+)\n\nLoad models without blocking the main thread. Prefer this for large models.\n\n```swift\nlet model = try await MLModel.load(\n    contentsOf: modelURL,\n    configuration: config\n)\n```\n\n### Compile at Runtime\n\nCompile a `.mlpackage` or `.mlmodel` to `.mlmodelc` on device. Useful for\nmodels downloaded from a server.\n\n```swift\nlet compiledURL = try await MLModel.compileModel(at: packageURL)\nlet model = try MLModel(contentsOf: compiledURL, configuration: config)\n```\n\nCache the compiled URL -- recompiling on every launch wastes time. Copy\n`compiledURL` to a persistent location (e.g., Application Support).\n\n## Model Configuration\n\n`MLModelConfiguration` controls compute units, GPU access, and model parameters.\n\n### Compute Units Decision Table\n\n| Value | Uses | When to Choose |\n|---|---|---|\n| `.all` | CPU + GPU + Neural Engine | Default. Let the system decide. |\n| `.cpuOnly` | CPU | Background tasks, audio sessions, or when GPU is busy. |\n| `.cpuAndGPU` | CPU + GPU | Need GPU but model has ops unsupported by ANE. |\n| `.cpuAndNeuralEngine` | CPU + Neural Engine | Best energy efficiency for compatible models. 
|\n\n```swift\nlet config = MLModelConfiguration()\nconfig.computeUnits = .cpuAndNeuralEngine\n\n// Allow low-priority background inference\nconfig.computeUnits = .cpuOnly\n```\n\n### Configuration Properties\n\n```swift\nlet config = MLModelConfiguration()\nconfig.computeUnits = .all\nconfig.allowLowPrecisionAccumulationOnGPU = true // faster, slight precision loss\n```\n\n## Making Predictions\n\n### With Auto-Generated Classes\n\nThe generated class provides typed input/output structs.\n\n```swift\nlet model = try MyImageClassifier(configuration: config)\nlet input = MyImageClassifierInput(image: pixelBuffer)\nlet output = try model.prediction(input: input)\nprint(output.classLabel)        // \"golden_retriever\"\nprint(output.classLabelProbs)   // [\"golden_retriever\": 0.95, ...]\n```\n\n### With MLDictionaryFeatureProvider\n\nUse when inputs are dynamic or not known at compile time.\n\n```swift\nlet inputFeatures = try MLDictionaryFeatureProvider(dictionary: [\n    \"image\": MLFeatureValue(pixelBuffer: pixelBuffer),\n    \"confidence_threshold\": MLFeatureValue(double: 0.5),\n])\nlet output = try model.prediction(from: inputFeatures)\nlet label = output.featureValue(for: \"classLabel\")?.stringValue\n```\n\n### Async Prediction (iOS 17+)\n\n```swift\nlet output = try await model.prediction(from: inputFeatures)\n```\n\n### Batch Prediction\n\nProcess multiple inputs in one call for better throughput.\n\n```swift\nlet batchInputs = try MLArrayBatchProvider(array: inputs.map { input in\n    try MLDictionaryFeatureProvider(dictionary: [\"image\": MLFeatureValue(pixelBuffer: input)])\n})\nlet batchOutput = try model.predictions(from: batchInputs)\nfor i in 0..<batchOutput.count {\n    let result = batchOutput.features(at: i)\n    print(result.featureValue(for: \"classLabel\")?.stringValue ?? \"unknown\")\n}\n```\n\n### Stateful Prediction (iOS 18+)\n\nUse `MLState` for models that maintain state across predictions (sequence models,\nLLMs, audio accumulators). 
Create state once and pass it to each prediction call.\n\n```swift\nlet state = model.makeState()\n\n// Each prediction carries forward the internal model state\nfor frame in audioFrames {\n    let input = try MLDictionaryFeatureProvider(dictionary: [\n        \"audio_features\": MLFeatureValue(multiArray: frame)\n    ])\n    let output = try await model.prediction(from: input, using: state)\n    let classification = output.featureValue(for: \"label\")?.stringValue\n}\n```\n\nState is not `Sendable` -- use it from a single actor or task. Call\n`model.makeState()` to create independent state for concurrent streams.\n\n## MLTensor (iOS 18+)\n\n`MLTensor` is a Swift-native multidimensional array for pre/post-processing.\nOperations run lazily -- `await` `.shapedArray(of:)` to materialize results.\n\n```swift\nimport CoreML\n\n// Creation\nlet tensor = MLTensor([1.0, 2.0, 3.0, 4.0])\nlet zeros = MLTensor(zeros: [3, 224, 224], scalarType: Float.self)\n\n// Reshaping\nlet reshaped = tensor.reshaped(to: [2, 2])\n\n// Math operations\nlet softmaxed = tensor.softmax(alongAxis: 0)\nlet normalized = (tensor - tensor.mean()) / tensor.standardDeviation()\n\n// Interop with MLMultiArray\nlet multiArray = try MLMultiArray([1.0, 2.0, 3.0, 4.0])\nlet fromMultiArray = MLTensor(multiArray)\nlet backToArray = await tensor.shapedArray(of: Float.self)\n```\n\n## Working with MLMultiArray\n\n`MLMultiArray` is the primary data exchange type for non-image model inputs and\noutputs. 
Use it when the auto-generated class expects array-type features.\n\n```swift\n// Create a 3D array: [batch, sequence, features]\nlet array = try MLMultiArray(shape: [1, 128, 768], dataType: .float32)\n\n// Write values\nfor i in 0..<128 {\n    array[[0, i, 0] as [NSNumber]] = NSNumber(value: Float(i))\n}\n\n// Read values\nlet value = array[[0, 0, 0] as [NSNumber]].floatValue\n\n// Create from a data pointer for zero-copy interop. MLMultiArray does not copy\n// the memory, so it must stay valid; the deallocator frees it on release.\nlet count = 3\nlet pointer = UnsafeMutablePointer<Float>.allocate(capacity: count)\npointer.initialize(from: [1.0, 2.0, 3.0], count: count)\nlet fromData = try MLMultiArray(dataPointer: UnsafeMutableRawPointer(pointer),\n                                shape: [3],\n                                dataType: .float32,\n                                strides: [1],\n                                deallocator: { $0.deallocate() })\n```\n\nSee [references/coreml-swift-integration.md](references/coreml-swift-integration.md) for advanced MLMultiArray patterns\nincluding NLP tokenization and audio feature extraction.\n\n## Image Preprocessing\n\nImage models expect `CVPixelBuffer` input. Use `CGImage` conversion for photos\nfrom the camera or photo library. Vision's `VNCoreMLRequest` handles this\nautomatically; manual conversion is needed only for direct `MLModel` prediction.\n\n```swift\nimport CoreVideo\n\nfunc createPixelBuffer(from cgImage: CGImage, width: Int, height: Int) -> CVPixelBuffer? 
{\n    var pixelBuffer: CVPixelBuffer?\n    let attrs: [CFString: Any] = [\n        kCVPixelBufferCGImageCompatibilityKey: true,\n        kCVPixelBufferCGBitmapContextCompatibilityKey: true,\n    ]\n    CVPixelBufferCreate(kCFAllocatorDefault, width, height,\n                        kCVPixelFormatType_32ARGB, attrs as CFDictionary, &pixelBuffer)\n\n    guard let buffer = pixelBuffer else { return nil }\n    CVPixelBufferLockBaseAddress(buffer, [])\n    let context = CGContext(\n        data: CVPixelBufferGetBaseAddress(buffer),\n        width: width, height: height,\n        bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),\n        space: CGColorSpaceCreateDeviceRGB(),\n        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue\n    )\n    context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))\n    CVPixelBufferUnlockBaseAddress(buffer, [])\n    return buffer\n}\n```\n\nFor additional preprocessing patterns (normalization, center-cropping), see\n[references/coreml-swift-integration.md](references/coreml-swift-integration.md).\n\n## Multi-Model Pipelines\n\nChain models when preprocessing or postprocessing requires a separate model.\n\n```swift\n// Sequential inference: preprocessor -> main model -> postprocessor\nlet preprocessed = try preprocessor.prediction(from: rawInput)\nlet mainOutput = try mainModel.prediction(from: preprocessed)\nlet finalOutput = try postprocessor.prediction(from: mainOutput)\n```\n\nFor Xcode-managed pipelines, use the pipeline model type in the `.mlpackage`.\nEach sub-model runs on its optimal compute unit.\n\n## Vision Integration\n\nUse Vision to run Core ML image models with automatic image preprocessing\n(resizing, normalization, color space, orientation).\n\n### Modern: CoreMLRequest (iOS 18+)\n\n```swift\nimport Vision\nimport CoreML\n\nlet model = try MLModel(contentsOf: modelURL, configuration: config)\nlet container = try CoreMLModelContainer(model: model)\nlet request = CoreMLRequest(model: container)\nlet results = try await 
request.perform(on: cgImage)\n\nif let classification = results.first as? ClassificationObservation {\n    print(\"\\(classification.identifier): \\(classification.confidence)\")\n}\n```\n\n### Legacy: VNCoreMLRequest\n\n```swift\nlet vnModel = try VNCoreMLModel(for: model)\nlet request = VNCoreMLRequest(model: vnModel) { request, error in\n    guard let results = request.results as? [VNRecognizedObjectObservation] else { return }\n    for observation in results {\n        let label = observation.labels.first?.identifier ?? \"unknown\"\n        let confidence = observation.labels.first?.confidence ?? 0\n        let boundingBox = observation.boundingBox // normalized coordinates\n        print(\"\\(label): \\(confidence) at \\(boundingBox)\")\n    }\n}\nrequest.imageCropAndScaleOption = .scaleFill\n\nlet handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)\ntry handler.perform([request])\n```\n\n> For complete Vision framework patterns (text recognition, barcode detection,\n> document scanning), see the `vision-framework` skill.\n\n## Performance Profiling\n\n### MLComputePlan (iOS 17.4+)\n\nInspect which compute device each operation will use before running predictions.\n\n```swift\nlet computePlan = try await MLComputePlan.load(\n    contentsOf: modelURL, configuration: config\n)\nguard case let .program(program) = computePlan.modelStructure else { return }\nguard let mainFunction = program.functions[\"main\"] else { return }\n\nfor operation in mainFunction.block.operations {\n    let deviceUsage = computePlan.deviceUsage(for: operation)\n    let estimatedCost = computePlan.estimatedCost(of: operation)\n    let device = deviceUsage.map { String(describing: $0.preferredComputeDevice) } ?? \"unknown\"\n    let cost = estimatedCost.map { String($0.weight) } ?? \"n/a\"\n    print(\"\\(operation.operatorName): \\(device) (cost \\(cost))\")\n}\n```\n\n### Instruments\n\nUse the **Core ML** instrument template in Instruments to profile:\n- Model load time\n- Prediction latency (per-operation breakdown)\n- Compute device dispatch (CPU/GPU/ANE per operation)\n- Memory allocation\n\nRun outside the debugger for accurate results (Xcode: Product > Profile).\n\n## Model Deployment\n\n### Bundle vs On-Demand Resources\n\n| Strategy | Pros | Cons |\n|---|---|---|\n| Bundle in app | Instant availability, works offline | Increases app download size |\n| On-demand resources | Smaller initial download | Requires download before first use |\n| Background Assets (iOS 16+) | Downloads ahead of time | More complex setup |\n| CloudKit / server | Maximum flexibility | Requires network, longer setup |\n\n### Size Considerations\n\n- App Store limit: 4 GB for app bundle\n- Cellular download limit: 200 MB (can request exception)\n- Use ODR tags for models > 50 MB\n- Pre-compile to `.mlmodelc` to skip on-device compilation\n\n```swift\n// On-demand resource loading\nlet request = NSBundleResourceRequest(tags: [\"ml-model-v2\"])\ntry await request.beginAccessingResources()\nlet modelURL = Bundle.main.url(forResource: \"LargeModel\", withExtension: \"mlmodelc\")!\nlet model = try await MLModel.load(contentsOf: modelURL, configuration: config)\n// Call request.endAccessingResources() when done\n```\n\n## Memory Management\n\n- **Unload on background:** Release model references when the app enters background\n  to free GPU/ANE memory. Reload on foreground return.\n- **Use `.cpuOnly` for background tasks:** Background processing cannot use GPU or\n  ANE; setting `.cpuOnly` avoids silent fallback and resource contention.\n- **Share model instances:** Never create multiple `MLModel` instances from the same\n  compiled model. 
Use an actor to provide shared access.\n- **Monitor memory pressure:** Large models (>100 MB) can trigger memory warnings.\n  Register for `UIApplication.didReceiveMemoryWarningNotification` and release\n  cached models when under pressure.\n\nSee [references/coreml-swift-integration.md](references/coreml-swift-integration.md) for an actor-based model manager with\nlifecycle-aware loading and cache eviction.\n\n## Common Mistakes\n\n**DON'T:** Load models on the main thread.\n**DO:** Use `MLModel.load(contentsOf:configuration:)` async API or load on a background actor.\n**Why:** Large models can take seconds to load, freezing the UI.\n\n**DON'T:** Recompile `.mlpackage` to `.mlmodelc` on every app launch.\n**DO:** Compile once with `MLModel.compileModel(at:)` and cache the compiled URL persistently.\n**Why:** Compilation is expensive. Cache the `.mlmodelc` in Application Support.\n\n**DON'T:** Hardcode `.cpuOnly` unless you have a specific reason.\n**DO:** Use `.all` and let the system choose the optimal compute unit.\n**Why:** `.all` enables Neural Engine and GPU, which are faster and more energy-efficient.\n\n**DON'T:** Ignore `MLFeatureValue` type mismatches between input and model expectations.\n**DO:** Match types exactly -- use `MLFeatureValue(pixelBuffer:)` for images, not raw data.\n**Why:** Type mismatches cause cryptic runtime crashes or silent incorrect results.\n\n**DON'T:** Create a new `MLModel` instance for every prediction.\n**DO:** Load once and reuse. 
Use an actor to manage the model lifecycle.\n**Why:** Model loading allocates significant memory and compute resources.\n\n**DON'T:** Skip error handling for model loading and prediction.\n**DO:** Catch errors and provide fallback behavior when the model fails.\n**Why:** Models can fail to load on older devices or when resources are constrained.\n\n**DON'T:** Assume all operations run on the Neural Engine.\n**DO:** Use `MLComputePlan` (iOS 17.4+) to verify device dispatch per operation.\n**Why:** Unsupported operations fall back to CPU, which may bottleneck the pipeline.\n\n**DON'T:** Process images manually before passing to Vision + Core ML.\n**DO:** Use `CoreMLRequest` (iOS 18+) or `VNCoreMLRequest` (legacy) to let Vision handle preprocessing.\n**Why:** Vision handles orientation, scaling, and pixel format conversion correctly.\n\n## Review Checklist\n\n- [ ] Model loaded asynchronously (not blocking main thread)\n- [ ] `MLModelConfiguration.computeUnits` set appropriately for use case\n- [ ] Model instance reused across predictions (not recreated each time)\n- [ ] Auto-generated class used when available (typed inputs/outputs)\n- [ ] Error handling for model loading and prediction failures\n- [ ] Compiled model cached persistently if compiled at runtime\n- [ ] Image inputs use Vision pipeline (`CoreMLRequest` iOS 18+ or `VNCoreMLRequest`) for correct preprocessing\n- [ ] `MLComputePlan` checked to verify compute device dispatch (iOS 17.4+)\n- [ ] Batch predictions used when processing multiple inputs\n- [ ] Model size appropriate for deployment strategy (bundle vs ODR)\n- [ ] Memory tested on target devices (especially older devices with less RAM)\n- [ ] Predictions run outside debugger for accurate performance measurement\n\n## References\n\n- Patterns and code: [references/coreml-swift-integration.md](references/coreml-swift-integration.md)\n- Model conversion and optimization (Python-side): covered in the `apple-on-device-ai` skill\n- Apple docs: [Core 
ML](https://sosumi.ai/documentation/coreml) |\n  [MLModel](https://sosumi.ai/documentation/coreml/mlmodel) |\n  [MLComputePlan](https://sosumi.ai/documentation/coreml/mlcomputeplan-1w21n)","tags":["coreml","swift","ios","skills","dpearson2699"],"capabilities":["skill","source-dpearson2699","category-swift-ios-skills"],"categories":["swift-ios-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/dpearson2699/swift-ios-skills/coreml","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"install_from":"skills.sh"}},"qualityScore":"0.300","qualityRationale":"deterministic score 0.30 from registry signals: · indexed on skills.sh · published under dpearson2699/swift-ios-skills","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T05:40:40.449Z","embedding":null,"createdAt":"2026-04-18T20:34:25.384Z","updatedAt":"2026-04-22T05:40:40.449Z","lastSeenAt":"2026-04-22T05:40:40.449Z","tsv":null,"prices":[{"id":"07692ff8-8219-412f-9115-33fcc68fa8c8","listingId":"fac86b7d-9b85-4e4f-83d1-fe5925b9dbb1","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"dpearson2699","category":"swift-ios-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T20:34:25.384Z"}],"sources":[{"listingId":"fac86b7d-9b85-4e4f-83d1-fe5925b9dbb1","source":"github","sourceId":"dpearson2699/swift-ios-skills/coreml","sourceUrl":"https://github.com/dpearson2699/swift-ios-skills/tree/main/skills/coreml","isPrimary":false,"firstSeenAt":"2026-04-18T22:00:53.668Z","lastSeenAt":"2026-04-22T00:53:42.445Z"},{"listingId":"fac86b7d-9b85-4e4f-83d1-fe5925b9dbb1","source":"skills_sh","sourceId":"dpearson2699/swift-ios-skills/coreml","sourceUrl":"https://skills.sh/dpearson2699/swift-ios-skills/coreml","isPrimary":true,"firstSeenAt":"2026-04-18T20:34:25.384Z","lastSeenAt":"2026-04-22T05:40:40.449Z"}],"details":{"listingId":"fac86b7d-9b85-4e4f-83d1-fe5925b9dbb1","quickS
tartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"dpearson2699","slug":"coreml","source":"skills_sh","category":"swift-ios-skills","skills_sh_url":"https://skills.sh/dpearson2699/swift-ios-skills/coreml"},"updatedAt":"2026-04-22T05:40:40.449Z"}}