A PyTorch-based Speech Toolkit
| 📘 Tutorials | 🌐 Website | 📚 Documentation | 🤝 Contributing | 🤗 HuggingFace | ▶️ YouTube | 🐦 X |
Please help our community project by starring it on GitHub!
Exciting news (January 2024): discover what's new in SpeechBrain 1.0 here!
SpeechBrain is an open-source PyTorch toolkit that accelerates Conversational AI development, i.e., the technology behind speech assistants, chatbots, and large language models.
It is crafted for fast and easy creation of advanced technologies for Speech and Text Processing.
With the rise of deep learning, once-distant domains like speech processing and NLP now share the same core methods: a well-designed neural network and large datasets are often all you need.
We think it is now time for a holistic toolkit that, mimicking the human brain, jointly supports diverse technologies for complex Conversational AI systems.
This spans speech recognition, speaker recognition, speech enhancement, speech separation, language modeling, dialogue, and beyond.
Aligned with our long-term goal of natural human-machine conversation, including for non-verbal individuals, we have recently added support for the EEG modality.
We share over 200 competitive training recipes on more than 40 datasets supporting 20 speech and text processing tasks (see below).
We support both training from scratch and fine-tuning pretrained models such as Whisper, Wav2Vec2, WavLM, Hubert, GPT2, Llama2, and beyond. The models on HuggingFace can be easily plugged in and fine-tuned.
For any task, you train the model with a single command:
python train.py hparams/train.yaml
The hyperparameters are encapsulated in a YAML file, while the training process is orchestrated through a Python script.
We maintain a consistent code structure across tasks.
For better replicability, training logs and checkpoints are hosted on Dropbox.
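The YAML files go beyond plain key-value pairs: SpeechBrain's HyperPyYAML extension lets tags such as !new: instantiate Python objects (optimizers, models) directly from the hyperparameter file. As a rough, stdlib-only sketch of the idea (instantiate is a hypothetical helper, not the real hyperpyyaml loader):

```python
import importlib

def instantiate(tag, **kwargs):
    """Hypothetical helper mimicking a HyperPyYAML-style `!new:` tag:
    `!new:module.Class` imports `module` and calls `Class(**kwargs)`."""
    module_name, _, cls_name = tag.removeprefix("!new:").rpartition(".")
    cls = getattr(importlib.import_module(module_name), cls_name)
    return cls(**kwargs)

# A YAML line like `counter: !new:collections.Counter` would roughly become:
counter = instantiate("!new:collections.Counter", red=2, blue=1)
```

Because whole objects live in the YAML file, the training script itself stays short and task-agnostic.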
# Load a pretrained ASR model from HuggingFace and transcribe an audio file
from speechbrain.inference import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-conformer-transformerlm-librispeech",
    savedir="pretrained_models/asr-transformer-transformerlm-librispeech",
)
asr_model.transcribe_file("speechbrain/asr-conformer-transformerlm-librispeech/example.wav")
🚀 Research Acceleration: Speeding up academic and industrial research. You can develop and integrate new models effortlessly, comparing their performance against our baselines.
⚡️ Rapid Prototyping: Ideal for quick prototyping in time-sensitive projects.
🎓 Educational Tool: SpeechBrain's simplicity makes it a valuable educational resource. It is used by institutions like Mila, Concordia University, Avignon University, and many others for student training.
To get started with SpeechBrain, follow these simple steps:
Install SpeechBrain using PyPI:
pip install speechbrain
Access SpeechBrain in your Python code:
import speechbrain as sb
This installation is recommended for users who wish to conduct experiments and customize the toolkit according to their needs.
Clone the GitHub repository and install the requirements:
git clone https://github.com/speechbrain/speechbrain.git
cd speechbrain
pip install -r requirements.txt
pip install --editable .
Access SpeechBrain in your Python code:
import speechbrain as sb
Any modifications made to the speechbrain package are automatically reflected, thanks to the --editable flag.
Ensure your installation is correct by running the following commands:
pytest tests
pytest --doctest-modules speechbrain
In SpeechBrain, you can train a model for any task using the following steps:
cd recipes/<dataset>/<task>/
python experiment.py params.yaml
The results will be saved in the output_folder specified in the YAML file.
Website: Explore general information on the official website.
Tutorials: Start with basic tutorials covering fundamental functionalities. Find advanced tutorials and topics in the Tutorial notebooks category in the SpeechBrain documentation.
Documentation: Detailed information on the SpeechBrain API, contribution guidelines, and code is available in the documentation.
Tasks | Datasets | Technologies/Models |
---|---|---|
Language Modeling | CommonVoice, LibriSpeech | n-grams, RNNLM, TransformerLM |
Response Generation | MultiWOZ | GPT2, Llama2 |
Grapheme-to-Phoneme | LibriSpeech | RNN, Transformer, Curriculum Learning, Homograph loss |
Tasks | Datasets | Technologies/Models |
---|---|---|
Motor Imagery | BNCI2014001, BNCI2014004, BNCI2015001, Lee2019_MI, Zhou201 | EEGNet, ShallowConvNet, EEGConformer |
P300 | BNCI2014009, EPFLP300, bi2015a | EEGNet |
SSVEP | Lee2019_SSVEP | EEGNet |
SpeechBrain includes a range of native functionalities that enhance the development of Conversational AI technologies. Here are some examples:
Training Orchestration: The Brain class serves as a fully customizable tool for managing training and evaluation loops over data. It simplifies training loops while providing the flexibility to override any part of the process.
Hyperparameter Management: A YAML-based hyperparameter file specifies all hyperparameters, from individual numbers (e.g., learning rate) to complete objects (e.g., custom models). This elegant solution drastically simplifies the training script.
Dynamic Dataloader: Enables flexible and efficient data reading.
GPU Training: Supports single and multi-GPU training, including distributed training.
Dynamic Batching: On-the-fly dynamic batching enhances the efficient processing of variable-length signals.
Mixed-Precision Training: Accelerates training through mixed-precision techniques.
Efficient Data Reading: Reads large datasets efficiently from a shared Network File System (NFS) via WebDataset.
Hugging Face Integration: Interfaces seamlessly with HuggingFace for popular models such as wav2vec2 and Hubert.
Orion Integration: Interfaces with Orion for hyperparameter tuning.
Speech Augmentation Techniques: Includes SpecAugment, Noise, Reverberation, and more.
Data Preparation Scripts: Includes scripts for preparing data for supported datasets.
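The Brain class idea can be sketched in a few lines: a base class owns the epoch/batch loop, while subclasses override compute_forward and compute_objectives. The following is a hypothetical, framework-free illustration of that design (ToyBrain and LinearBrain are invented names, not SpeechBrain's actual API):

```python
class ToyBrain:
    """Hypothetical stand-in for a Brain-style class: the base class runs
    the loop; subclasses define the model-specific pieces."""

    def fit(self, epochs, data):
        losses = []
        for _ in range(epochs):
            for batch in data:
                predictions = self.compute_forward(batch)
                losses.append(self.compute_objectives(predictions, batch))
                # a real Brain would also backprop and step the optimizer here
        return losses

class LinearBrain(ToyBrain):
    def compute_forward(self, batch):
        x, _ = batch
        return 2.0 * x  # toy "model": multiply input by a fixed weight

    def compute_objectives(self, predictions, batch):
        _, y = batch
        return (predictions - y) ** 2  # squared error

losses = LinearBrain().fit(epochs=1, data=[(1.0, 2.0), (2.0, 5.0)])
```

Because the loop lives in the base class, every recipe only has to define how to make predictions and how to score them, which is what keeps the code structure consistent across tasks.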
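Dynamic batching groups variable-length utterances so that each batch respects a frame budget rather than a fixed item count, reducing wasted padding. A simplified greedy sketch of the idea (dynamic_batches and the max_frames budget are illustrative, not SpeechBrain's actual sampler):

```python
def dynamic_batches(lengths, max_frames):
    """Greedy sketch: sort items by length, then pack consecutive items
    while the padded batch cost (item_count * longest_item) fits the budget."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches, current = [], []
    for idx in order:
        candidate = current + [idx]
        # padded cost: every item is padded to the longest one in the batch
        if len(candidate) * lengths[idx] <= max_frames:
            current = candidate
        else:
            if current:
                batches.append(current)
            current = [idx]
    if current:
        batches.append(current)
    return batches

# Four utterances of 100, 300, 120, and 500 frames, with a 600-frame budget
batches = dynamic_batches([100, 300, 120, 500], max_frames=600)
```

Sorting by length before packing keeps similar-length items together, so short utterances are not padded up to the longest one in the dataset.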
SpeechBrain is rapidly evolving, with ongoing efforts to support a growing array of technologies in the future.
SpeechBrain integrates a variety of technologies, including those that achieve competitive or state-of-the-art performance.
For a comprehensive overview of the achieved performance across different tasks, datasets, and technologies, please visit here.
We have ambitious plans for the future, with a focus on the following priorities:
Scale Up: We aim to provide comprehensive recipes and technologies for training massive models on extensive datasets.
Scale Down: While scaling up delivers unprecedented performance, we recognize the challenges of deploying large models in production scenarios. We are focusing on real-time, streamable, and small-footprint Conversational AI.
Multimodal Large Language Models: We envision a future where a single foundation model can handle a wide range of text, speech, and audio tasks. Our core team is focused on enabling the training of advanced multimodal LLMs.
If you use SpeechBrain in your research or business, please cite it using the following BibTeX entry:
@misc{speechbrainV1,
title={Open-Source Conversational AI with {SpeechBrain} 1.0},
author={Mirco Ravanelli and Titouan Parcollet and Adel Moumen and Sylvain de Langen and Cem Subakan and Peter Plantinga and Yingzhi Wang and Pooneh Mousavi and Luca Della Libera and Artem Ploujnikov and Francesco Paissan and Davide Borra and Salah Zaiem and Zeyu Zhao and Shucong Zhang and Georgios Karakasidis and Sung-Lin Yeh and Pierre Champion and Aku Rouhe and Rudolf Braun and Florian Mai and Juan Zuluaga-Gomez and Seyed Mahed Mousavi and Andreas Nautsch and Xuechen Liu and Sangeet Sagar and Jarod Duret and Salima Mdhaffar and Gaelle Laperriere and Mickael Rouvier and Renato De Mori and Yannick Esteve},
year={2024},
eprint={2407.00463},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.00463},
}
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}