On-device Speech Recognition for Apple Silicon
WhisperKit is a Swift package that integrates OpenAI’s popular Whisper speech recognition model with Apple’s CoreML framework for efficient, local inference on Apple devices.
Check out the demo app on TestFlight.
[Blog Post] [Python Tools Repo]
WhisperKit can be integrated into your Swift project using the Swift Package Manager.
1. Navigate to File > Add Package Dependencies... in Xcode.
2. Enter the package repository URL: https://github.com/argmaxinc/whisperkit.
3. Click Finish to add WhisperKit to your project.
If you’re using WhisperKit as part of a Swift package, you can include it in your Package.swift dependencies as follows:
dependencies: [
.package(url: "https://github.com/argmaxinc/WhisperKit.git", from: "0.9.0"),
],
Then add WhisperKit as a dependency for your target:
.target(
name: "YourApp",
dependencies: ["WhisperKit"]
),
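For context, here is a minimal complete Package.swift using the two entries above. This is a sketch, not the project’s canonical manifest: the package name is a placeholder, and the platform versions shown are assumptions, so check the WhisperKit repo for the actual minimum supported OS versions.
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "YourApp", // placeholder name
    platforms: [.iOS(.v16), .macOS(.v13)], // assumed minimums; verify against the repo
    dependencies: [
        .package(url: "https://github.com/argmaxinc/WhisperKit.git", from: "0.9.0"),
    ],
    targets: [
        .target(
            name: "YourApp",
            dependencies: ["WhisperKit"]
        ),
    ]
)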
You can install the WhisperKit command line app using Homebrew by running the following command:
brew install whisperkit-cli
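Once installed, the binary can be invoked directly, using the same transcribe subcommand and flags as the swift run examples later in this README, for example:
whisperkit-cli transcribe --model-path "Models/whisperkit-coreml/openai_whisper-large-v3" --audio-path "path/to/your/audio.wav"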
To get started with WhisperKit, you need to initialize it in your project.
This example demonstrates how to transcribe a local audio file:
import WhisperKit
// Initialize WhisperKit with default settings
Task {
    guard let pipe = try? await WhisperKit() else {
        print("Failed to initialize WhisperKit")
        return
    }
    let transcription = try? await pipe.transcribe(audioPath: "path/to/your/audio.{wav,mp3,m4a,flac}")?.text
    print(transcription ?? "No transcription produced")
}
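If you want to see why initialization or transcription failed rather than discarding errors with try?, a do/catch variant of the same calls (a sketch using only the API shown above) looks like this:
import WhisperKit

Task {
    do {
        // Throwing initializer surfaces model download and load errors
        let pipe = try await WhisperKit()
        // transcribe(audioPath:) as above; .text holds the full transcription
        if let text = try await pipe.transcribe(audioPath: "path/to/your/audio.wav")?.text {
            print(text)
        }
    } catch {
        print("WhisperKit error: \(error)")
    }
}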
WhisperKit automatically downloads the recommended model for the device if not specified. You can also select a specific model by passing in the model name:
let pipe = try? await WhisperKit(WhisperKitConfig(model: "large-v3"))
This method also supports glob search, so you can use wildcards to select a model:
let pipe = try? await WhisperKit(WhisperKitConfig(model: "distil*large-v3"))
Note that the model search must return a single model from the source repo, otherwise an error will be thrown.
For a list of available models, see our HuggingFace repo.
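Since an ambiguous glob throws, one pattern (a sketch built only from the config options shown above) is to attempt a specific model first and fall back to the device default:
import WhisperKit

Task {
    // Prefer a distilled large-v3 variant if the glob resolves to exactly one model...
    var pipe = try? await WhisperKit(WhisperKitConfig(model: "distil*large-v3"))
    if pipe == nil {
        // ...otherwise fall back to the recommended model for this device
        pipe = try? await WhisperKit()
    }
}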
WhisperKit also comes with the supporting repo whisperkittools, which lets you create and deploy your own fine-tuned versions of Whisper in CoreML format to HuggingFace. Once generated, they can be loaded by simply changing the repo name to the one used to upload the model:
let config = WhisperKitConfig(model: "large-v3", modelRepo: "username/your-model-repo")
let pipe = try? await WhisperKit(config)
The Swift CLI allows for quick testing and debugging outside of an Xcode project. To install it, run the following:
git clone https://github.com/argmaxinc/whisperkit.git
cd whisperkit
Then, set up the environment and download your desired model.
make setup
make download-model MODEL=large-v3
Note: This will download only the model specified by MODEL (see what’s available in our HuggingFace repo, where we use the prefix openai_whisper-{MODEL}). Before running download-model, make sure git-lfs is installed.
If you would like to download all available models to your local folder, use this command instead:
make download-models
You can then run them via the CLI with:
swift run whisperkit-cli transcribe --model-path "Models/whisperkit-coreml/openai_whisper-large-v3" --audio-path "path/to/your/audio.{wav,mp3,m4a,flac}"
This should print a transcription of the audio file. If you would like to stream the audio directly from a microphone, use:
swift run whisperkit-cli transcribe --model-path "Models/whisperkit-coreml/openai_whisper-large-v3" --stream
Our goal is to make WhisperKit better and better over time and we’d love your help! Just search the code for “TODO” for a variety of features that are yet to be built. Please refer to our contribution guidelines for submitting issues, pull requests, and coding standards, where we also have a public roadmap of features we are looking forward to building in the future.
WhisperKit is released under the MIT License. See LICENSE for more details.
If you use WhisperKit for something cool or just find it useful, please drop us a note at [email protected]!
If you use WhisperKit for academic work, here is the BibTeX:
@misc{whisperkit-argmax,
title = {WhisperKit},
author = {Argmax, Inc.},
year = {2024},
URL = {https://github.com/argmaxinc/WhisperKit}
}