{"id":"dffc3578-e656-42de-8836-7c01d6cef3c5","shortId":"cqWVM5","kind":"skill","title":"Speech Recognition","tagline":"Swift Ios Skills skill by Dpearson2699","description":"# Speech Recognition\n\nTranscribe live and pre-recorded audio to text using Apple's Speech framework.\nCovers `SFSpeechRecognizer` (iOS 10+) and the new `SpeechAnalyzer` API (iOS 26+).\n\n## Contents\n\n- [SpeechAnalyzer (iOS 26+)](#speechanalyzer-ios-26)\n- [SFSpeechRecognizer Setup](#sfspeechrecognizer-setup)\n- [Authorization](#authorization)\n- [Live Microphone Transcription](#live-microphone-transcription)\n- [Pre-Recorded Audio File Recognition](#pre-recorded-audio-file-recognition)\n- [On-Device vs Server Recognition](#on-device-vs-server-recognition)\n- [Handling Results](#handling-results)\n- [Common Mistakes](#common-mistakes)\n- [Review Checklist](#review-checklist)\n- [References](#references)\n\n## SpeechAnalyzer (iOS 26+)\n\n`SpeechAnalyzer` is an actor-based API introduced in iOS 26 that replaces\n`SFSpeechRecognizer` for new projects. It uses Swift concurrency, `AsyncSequence`\nfor results, and supports modular analysis via `SpeechTranscriber`.\n\n### Basic transcription with SpeechAnalyzer\n\n```swift\nimport Speech\n\n// 1. Create a transcriber module\nguard let locale = SpeechTranscriber.supportedLocale(\n    equivalentTo: Locale.current\n) else { return }\nlet transcriber = SpeechTranscriber(locale: locale, preset: .offlineTranscription)\n\n// 2. Ensure assets are installed\nif let request = try await AssetInventory.assetInstallationRequest(\n    supporting: [transcriber]\n) {\n    try await request.downloadAndInstall()\n}\n\n// 3. Create input stream and analyzer\nlet (inputSequence, inputBuilder) = AsyncStream.makeStream(of: AnalyzerInput.self)\nlet audioFormat = await SpeechAnalyzer.bestAvailableAudioFormat(\n    compatibleWith: [transcriber]\n)\nlet analyzer = SpeechAnalyzer(modules: [transcriber])\n\n// 4. Feed audio buffers (from AVAudioEngine or file)\nTask {\n    // Append PCM buffers converted to audioFormat\n    let pcmBuffer: AVAudioPCMBuffer = // ... your audio buffer\n    inputBuilder.yield(AnalyzerInput(buffer: pcmBuffer))\n    inputBuilder.finish()\n}\n\n// 5. Consume results\nTask {\n    for try await result in transcriber.results {\n        let text = String(result.text.characters)\n        print(text)\n    }\n}\n\n// 6. Run analysis\nlet lastSampleTime = try await analyzer.analyzeSequence(inputSequence)\n\n// 7. 
Finalize\nif let lastSampleTime {\n    try await analyzer.finalizeAndFinish(through: lastSampleTime)\n} else {\n    try analyzer.cancelAndFinishNow()\n}\n```\n\n### Transcribing an audio file with SpeechAnalyzer\n\n```swift\nlet transcriber = SpeechTranscriber(locale: locale, preset: .offlineTranscription)\nlet audioFile = try AVAudioFile(forReading: fileURL)\nlet analyzer = SpeechAnalyzer(\n    inputAudioFile: audioFile, modules: [transcriber], finishAfterFile: true\n)\nfor try await result in transcriber.results {\n    print(String(result.text.characters))\n}\n```\n\n### Key differences from SFSpeechRecognizer\n\n| Feature | SFSpeechRecognizer | SpeechAnalyzer |\n|---|---|---|\n| Concurrency | Callbacks/delegates | async/await + AsyncSequence |\n| Type | `class` | `actor` |\n| Modules | Monolithic | Composable (`SpeechTranscriber`, `SpeechDetector`) |\n| Audio input | `append(_:)` on request | `AsyncStream<AnalyzerInput>` |\n| Availability | iOS 10+ | iOS 26+ |\n| On-device | `requiresOnDeviceRecognition` | Asset-based via `AssetInventory` |\n\n## SFSpeechRecognizer Setup\n\n### Creating a recognizer with locale\n\n```swift\nimport Speech\n\n// Default locale (user's current language)\nlet recognizer = SFSpeechRecognizer()\n\n// Specific locale\nlet recognizer = SFSpeechRecognizer(locale: Locale(identifier: \"en-US\"))\n\n// Check if recognition is available for this locale\nguard let recognizer, recognizer.isAvailable else {\n    print(\"Speech recognition not available\")\n    return\n}\n```\n\n### Monitoring availability changes\n\n```swift\nfinal class SpeechManager: NSObject, SFSpeechRecognizerDelegate {\n    private let recognizer = SFSpeechRecognizer()!\n\n    override init() {\n        super.init()\n        recognizer.delegate = self\n    }\n\n    func speechRecognizer(\n        _ speechRecognizer: SFSpeechRecognizer,\n        availabilityDidChange available: Bool\n    ) {\n        // Update UI — disable record button when unavailable\n    }\n}\n```\n\n## Authorization\n\nRequest **both** speech recognition and microphone permissions before starting\nlive transcription. 
Add these keys to `Info.plist`:\n\n- `NSSpeechRecognitionUsageDescription`\n- `NSMicrophoneUsageDescription`\n\n```swift\nimport Speech\nimport AVFoundation\n\nfunc requestPermissions() async -> Bool {\n    let speechStatus = await withCheckedContinuation { continuation in\n        SFSpeechRecognizer.requestAuthorization { status in\n            continuation.resume(returning: status)\n        }\n    }\n    guard speechStatus == .authorized else { return false }\n\n    let micStatus: Bool\n    if #available(iOS 17, *) {\n        micStatus = await AVAudioApplication.requestRecordPermission()\n    } else {\n        micStatus = await withCheckedContinuation { continuation in\n            AVAudioSession.sharedInstance().requestRecordPermission { granted in\n                continuation.resume(returning: granted)\n            }\n        }\n    }\n    return micStatus\n}\n```\n\n## Live Microphone Transcription\n\nThe standard pattern: `AVAudioEngine` captures microphone audio → buffers are\nappended to `SFSpeechAudioBufferRecognitionRequest` → results stream in.\n\n```swift\nimport Speech\nimport AVFoundation\n\nfinal class LiveTranscriber {\n    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: \"en-US\"))!\n    private let audioEngine = AVAudioEngine()\n    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?\n    private var recognitionTask: SFSpeechRecognitionTask?\n\n    func startTranscribing() throws {\n        // Cancel any in-progress task\n        recognitionTask?.cancel()\n        recognitionTask = nil\n\n        // Configure audio session\n        let audioSession = AVAudioSession.sharedInstance()\n        try audioSession.setCategory(.record, mode: .measurement, options: .duckOthers)\n        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)\n\n        // Create request\n        let request = SFSpeechAudioBufferRecognitionRequest()\n        request.shouldReportPartialResults = true\n        self.recognitionRequest = request\n\n        // Start recognition task\n        recognitionTask = recognizer.recognitionTask(with: request) { result, error in\n            if let result {\n                let text = result.bestTranscription.formattedString\n                print(\"Transcription: \\(text)\")\n\n                if result.isFinal {\n                    self.stopTranscribing()\n                }\n            }\n            if let error {\n                print(\"Recognition error: \\(error)\")\n                self.stopTranscribing()\n            }\n        }\n\n        // Install audio tap\n        let inputNode = audioEngine.inputNode\n        let recordingFormat = inputNode.outputFormat(forBus: 0)\n        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) {\n            buffer, _ in\n            request.append(buffer)\n        }\n\n        audioEngine.prepare()\n        try audioEngine.start()\n    }\n\n    func stopTranscribing() {\n        audioEngine.stop()\n        audioEngine.inputNode.removeTap(onBus: 0)\n        recognitionRequest?.endAudio()\n        recognitionRequest = nil\n        recognitionTask?.cancel()\n        recognitionTask = nil\n    }\n}\n```\n\n## Pre-Recorded Audio File Recognition\n\nUse `SFSpeechURLRecognitionRequest` for audio files on disk:\n\n```swift\nfunc transcribeFile(at url: URL) async throws -> String {\n    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {\n        throw SpeechError.unavailable\n    }\n    let request = 
SFSpeechURLRecognitionRequest(url: url)\n    request.shouldReportPartialResults = false\n\n    return try await withCheckedThrowingContinuation { continuation in\n        recognizer.recognitionTask(with: request) { result, error in\n            if let error {\n                continuation.resume(throwing: error)\n            } else if let result, result.isFinal {\n                continuation.resume(\n                    returning: result.bestTranscription.formattedString\n                )\n            }\n        }\n    }\n}\n```\n\n## On-Device vs Server Recognition\n\nOn-device recognition (iOS 13+) works offline but supports fewer locales:\n\n```swift\nlet recognizer = SFSpeechRecognizer(locale: Locale(identifier: \"en-US\"))!\n\n// Check if on-device is supported for this locale\nif recognizer.supportsOnDeviceRecognition {\n    let request = SFSpeechAudioBufferRecognitionRequest()\n    request.requiresOnDeviceRecognition = true  // Force on-device\n}\n```\n\n> **Tip:** On-device recognition avoids network latency and the one-minute\n> audio limit imposed by server-based recognition. However, accuracy may be\n> lower and not all locales are supported. Check `supportsOnDeviceRecognition`\n> before forcing on-device mode.\n\n## Handling Results\n\n### Partial vs final results\n\n```swift\nlet request = SFSpeechAudioBufferRecognitionRequest()\nrequest.shouldReportPartialResults = true  // default is true\n\nrecognizer.recognitionTask(with: request) { result, error in\n    guard let result else { return }\n\n    if result.isFinal {\n        // Final transcription — recognition is complete\n        let final = result.bestTranscription.formattedString\n    } else {\n        // Partial result — may change as more audio is processed\n        let partial = result.bestTranscription.formattedString\n    }\n}\n```\n\n### Accessing alternative transcriptions and confidence\n\n```swift\nrecognizer.recognitionTask(with: request) { result, error in\n    guard let result else { return }\n\n    // Best transcription\n    let best = result.bestTranscription\n\n    // All alternatives (sorted by confidence, descending)\n    for transcription in result.transcriptions {\n        for segment in transcription.segments {\n            print(\"\\(segment.substring): \\(segment.confidence)\")\n        }\n    }\n}\n```\n\n### Adding punctuation (iOS 16+)\n\n```swift\nlet request = SFSpeechAudioBufferRecognitionRequest()\nrequest.addsPunctuation = true\n```\n\n### Contextual strings\n\nImprove recognition of domain-specific terms:\n\n```swift\nlet request = SFSpeechAudioBufferRecognitionRequest()\nrequest.contextualStrings = [\"SwiftUI\", \"Xcode\", \"CloudKit\"]\n```\n\n## Common Mistakes\n\n### Not requesting both speech and microphone authorization\n\n```swift\n// ❌ DON'T: Only request speech authorization for live audio\nSFSpeechRecognizer.requestAuthorization { status in\n    // Missing microphone permission — audio engine will fail\n    self.startRecording()\n}\n\n// ✅ DO: Request both permissions before recording\nSFSpeechRecognizer.requestAuthorization { status in\n    guard status == .authorized else { return }\n    AVAudioSession.sharedInstance().requestRecordPermission { granted in\n        guard granted else { return }\n        self.startRecording()\n    }\n}\n```\n\n### Not handling availability changes\n\n```swift\n// ❌ DON'T: Assume recognizer stays available after initial check\nlet recognizer = SFSpeechRecognizer()!\n// Recognition may fail if network drops or locale changes\n\n// ✅ DO: Monitor 
availability via delegate\nrecognizer.delegate = self\nfunc speechRecognizer(\n    _ speechRecognizer: SFSpeechRecognizer,\n    availabilityDidChange available: Bool\n) {\n    recordButton.isEnabled = available\n}\n```\n\n### Not stopping the audio engine when recognition ends\n\n```swift\n// ❌ DON'T: Leave audio engine running after recognition finishes\nrecognizer.recognitionTask(with: request) { result, error in\n    if result?.isFinal == true {\n        // Audio engine still running, wasting resources and battery\n    }\n}\n\n// ✅ DO: Clean up all audio resources\nrecognizer.recognitionTask(with: request) { result, error in\n    if result?.isFinal == true || error != nil {\n        self.audioEngine.stop()\n        self.audioEngine.inputNode.removeTap(onBus: 0)\n        self.recognitionRequest?.endAudio()\n        self.recognitionRequest = nil\n    }\n}\n```\n\n### Assuming on-device recognition is available for all locales\n\n```swift\n// ❌ DON'T: Force on-device without checking support\nlet request = SFSpeechAudioBufferRecognitionRequest()\nrequest.requiresOnDeviceRecognition = true // May silently fail\n\n// ✅ DO: Check support before requiring on-device\nif recognizer.supportsOnDeviceRecognition {\n    request.requiresOnDeviceRecognition = true\n} else {\n    // Fall back to server-based or inform user\n}\n```\n\n### Not handling the one-minute recognition limit\n\n```swift\n// ❌ DON'T: Start one long continuous recognition session\nfunc startRecording() {\n    // This will be cut off after ~60 seconds (server-based)\n}\n\n// ✅ DO: Restart recognition when approaching the limit\nfunc startRecording() {\n    // Use a timer to restart before the limit\n    recognitionTimer = Timer.scheduledTimer(withTimeInterval: 55, repeats: false) {\n        [weak self] _ in\n        self?.restartRecognition()\n    }\n}\n```\n\n### Creating multiple simultaneous recognition tasks\n\n```swift\n// ❌ DON'T: Start a new task without canceling the previous one\nfunc startRecording() {\n    recognitionTask = recognizer.recognitionTask(with: request) { ... }\n    // Previous task is still running — undefined behavior\n}\n\n// ✅ DO: Cancel existing task before creating a new one\nfunc startRecording() {\n    recognitionTask?.cancel()\n    recognitionTask = nil\n    recognitionTask = recognizer.recognitionTask(with: request) { ... 
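}\n}\n\`\`\`\n\nThe one-minute-limit example above calls a \`restartRecognition()\` helper. A minimal sketch of that helper, assuming the \`LiveTranscriber\` class from the live-transcription section (a real app would also need to stitch the partial transcripts together across restarts):\n\n\`\`\`swift\n// Sketch: tear down the current request/task, then immediately start a new\n// session so live transcription continues past the server-side limit.\nfunc restartRecognition() {\n    stopTranscribing()\n    do {\n        try startTranscribing()\n    } catch {\n        print(\"Failed to restart recognition: \\(error)\")\n    }\n}\n\`\`\`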
\n\n## Review Checklist\n\n- [ ] \`NSSpeechRecognitionUsageDescription\` is in Info.plist\n- [ ] \`NSMicrophoneUsageDescription\` is in Info.plist (if using live audio)\n- [ ] Authorization is requested before starting recognition\n- [ ] \`SFSpeechRecognizerDelegate\` is set to handle \`availabilityDidChange\`\n- [ ] Audio engine is stopped and tap removed when recognition ends\n- [ ] \`recognitionRequest.endAudio()\` is called when done recording\n- [ ] Previous \`recognitionTask\` is canceled before starting a new one\n- [ ] \`supportsOnDeviceRecognition\` is checked before requiring on-device mode\n- [ ] Partial results are handled separately from final (\`isFinal\`) results\n- [ ] One-minute limit is accounted for in server-based recognition\n- [ ] For iOS 26+: \`AssetInventory\` assets are installed before using \`SpeechAnalyzer\`\n- [ ] For iOS 26+: \`SpeechTranscriber.supportedLocale(equivalentTo:)\` is checked\n\n## References\n\n- [Speech framework](https://sosumi.ai/documentation/speech)\n- [SpeechAnalyzer](https://sosumi.ai/documentation/speech/speechanalyzer)\n- [SpeechTranscriber](https://sosumi.ai/documentation/speech/speechtranscriber)\n- [SFSpeechRecognizer](https://sosumi.ai/documentation/speech/sfspeechrecognizer)\n- [SFSpeechAudioBufferRecognitionRequest](https://sosumi.ai/documentation/speech/sfspeechaudiobufferrecognitionrequest)\n- [SFSpeechURLRecognitionRequest](https://sosumi.ai/documentation/speech/sfspeechurlrecognitionrequest)\n- [SFSpeechRecognitionResult](https://sosumi.ai/documentation/speech/sfspeechrecognitionresult)\n- [SFSpeechRecognitionRequest](https://sosumi.ai/documentation/speech/sfspeechrecognitionrequest)\n- [AssetInventory](https://sosumi.ai/documentation/speech/assetinventory)\n- [Asking Permission to Use Speech Recognition](https://sosumi.ai/documentation/speech/asking-permission-to-use-speech-recognition)\n- [Recognizing Speech in Live Audio](https://sosumi.ai/documentation/speech/recognizing-speech-in-live-audio)","tags":["speech","recognition","swift","ios","skills","dpearson2699"],"capabilities":["skill","source-dpearson2699","category-swift-ios-skills"],"categories":["swift-ios-skills"],"synonyms":[],"warnings":[],"endpointUrl":"https://skills.sh/dpearson2699/swift-ios-skills/speech-recognition","protocol":"skill","transport":"skills-sh","auth":{"type":"none","details":{"install_from":"skills.sh"}},"qualityScore":"0.300","qualityRationale":"deterministic score 0.30 from registry signals: · indexed on skills.sh · published under dpearson2699/swift-ios-skills","verified":false,"liveness":"unknown","lastLivenessCheck":null,"agentReviews":{"count":0,"score_avg":null,"cost_usd_avg":null,"success_rate":null,"latency_p50_ms":null,"narrative_summary":null,"summary_updated_at":null},"enrichmentModel":"deterministic:skill:v1","enrichmentVersion":1,"enrichedAt":"2026-04-22T05:40:40.310Z","embedding":null,"createdAt":"2026-04-18T20:34:18.821Z","updatedAt":"2026-04-22T05:40:40.310Z","lastSeenAt":"2026-04-22T05:40:40.310Z",
'xcode':913","prices":[{"id":"0bd289dd-08f3-4957-9843-937c5a80b3e8","listingId":"dffc3578-e656-42de-8836-7c01d6cef3c5","amountUsd":"0","unit":"free","nativeCurrency":null,"nativeAmount":null,"chain":null,"payTo":null,"paymentMethod":"skill-free","isPrimary":true,"details":{"org":"dpearson2699","category":"swift-ios-skills","install_from":"skills.sh"},"createdAt":"2026-04-18T20:34:18.821Z"}],"sources":[{"listingId":"dffc3578-e656-42de-8836-7c01d6cef3c5","source":"github","sourceId":"dpearson2699/swift-ios-skills/speech-recognition","sourceUrl":"https://github.com/dpearson2699/swift-ios-skills/tree/main/skills/speech-recognition","isPrimary":false,"firstSeenAt":"2026-04-18T22:01:17.041Z","lastSeenAt":"2026-04-22T00:53:44.633Z"},{"listingId":"dffc3578-e656-42de-8836-7c01d6cef3c5","source":"skills_sh","sourceId":"dpearson2699/swift-ios-skills/speech-recognition","sourceUrl":"https://skills.sh/dpearson2699/swift-ios-skills/speech-recognition","isPrimary":true,"firstSeenAt":"2026-04-18T20:34:18.821Z","lastSeenAt":"2026-04-22T05:40:40.310Z"}],"details":{"listingId":"dffc3578-e656-42de-8836-7c01d6cef3c5","quickStartSnippet":null,"exampleRequest":null,"exampleResponse":null,"schema":null,"openapiUrl":null,"agentsTxtUrl":null,"citations":[],"useCases":[],"bestFor":[],"notFor":[],"kindDetails":{"org":"dpearson2699","slug":"speech-recognition","source":"skills_sh","category":"swift-ios-skills","skills_sh_url":"https://skills.sh/dpearson2699/swift-ios-skills/speech-recognition"},"updatedAt":"2026-04-22T05:40:40.310Z"}}