Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™
OpenVINO™ Training Extensions is a low-code transfer learning framework for Computer Vision.
The framework's API and CLI commands allow users to train, infer, optimize, and deploy models easily and quickly, even with limited deep learning expertise.
OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on PyTorch and OpenVINO™ toolkit.
OpenVINO™ Training Extensions provides a “recipe” for every supported task type, which consolidates necessary information to build a model.
Model templates are validated on various datasets and serve as a one-stop shop for obtaining good models in general.
If you are an experienced user, you can configure your own model based on torchvision, mmcv and OpenVINO Model Zoo (OMZ).
Furthermore, OpenVINO™ Training Extensions provides automatic configuration for ease of use.
The framework analyzes your dataset, identifies the most suitable model, and selects the best input size and other hyper-parameters.
The development team is continuously extending this auto-configuration functionality to make training as simple as possible, so that a single CLI command can produce accurate, efficient, and robust models ready to be integrated into your project.
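In practice this means training can start from nothing but a dataset path. Below is a minimal sketch of that behavior; the dataset path is hypothetical, and it assumes the task type can be inferred from the dataset's annotation format:

from otx.engine import Engine

# Auto-configuration sketch: no model or task is given; both are
# assumed to be inferred from the dataset found under data_root.
engine = Engine(data_root="data/my_dataset")
engine.train()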
OpenVINO™ Training Extensions supports a range of computer vision tasks, learning methods, and usability features; see the documentation for the complete lists.
Please refer to the installation guide.
To install the latest release from PyPI:

pip install otx[base]

For zsh users, quote the extras:

pip install 'otx[base]'

If you want to make changes to the library, a local editable installation is recommended:
# Use of a virtual environment is highly recommended
# Using conda
yes | conda create -n otx_env python=3.10
conda activate otx_env
# Or using your favorite virtual environment
# ...
# Clone the repository and install in editable mode
git clone https://github.com/openvinotoolkit/training_extensions.git
cd training_extensions
pip install -e .[base] # for zsh: pip install -e '.[base]'
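To confirm the package is importable after either installation path, a quick sanity check (assuming otx exposes __version__, as most packages do):

# Sanity check: import the package and print its version.
import otx
print(otx.__version__)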
OpenVINO™ Training Extensions supports both API- and CLI-based training. The API is more flexible and allows for more customization, while the CLI might be easier for those who would like to use OpenVINO™ Training Extensions off the shelf.
For the CLI, the commands below show the available subcommands, how to get help for each, and more:
# See available subcommands
otx --help
# Print help messages from the train subcommand
otx train --help
# Print help messages for more details
otx train --help -v # Print required parameters
otx train --help -vv # Print all configurable parameters
You can find details and examples in the CLI Guide and API Quick-Guide.
Below is an example of training with auto-configuration, which only requires a dataset and a task type:
# Training with Auto-Configuration via Engine
from otx.engine import Engine
engine = Engine(data_root="data/wgisd", task="DETECTION")
engine.train()
For more examples, see the API Quick-Guide. The same training can be run from the CLI:

otx train --data_root data/wgisd --task DETECTION

For more examples, see the CLI Guide.
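The same Engine covers the rest of the title's train-evaluate-optimize-deploy loop. A hedged sketch, assuming test(), export(), and optimize() behave as described in the API Quick-Guide (in particular, that export() returns the path of the exported OpenVINO™ IR and optimize() accepts it as a checkpoint):

from otx.engine import Engine

engine = Engine(data_root="data/wgisd", task="DETECTION")
engine.train()                       # fine-tune the auto-selected model
engine.test()                        # evaluate on the test split
ir_path = engine.export()            # export to OpenVINO™ IR
engine.optimize(checkpoint=ir_path)  # post-training quantization (assumed signature)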
In addition to the examples above, please refer to the documentation for tutorials on using custom models, overriding training parameters, per-task tutorials, and more.
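As one hedged illustration of overriding a training parameter, the max_epochs argument below is an assumption about the trainer options exposed by Engine.train():

from otx.engine import Engine

engine = Engine(data_root="data/wgisd", task="DETECTION")
# max_epochs is assumed to be forwarded to the underlying trainer.
engine.train(max_epochs=3)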
Recent updates add the otx benchmark subcommand and simplify ConvModule by removing the conv_cfg, norm_cfg, and act_cfg arguments. Please refer to the CHANGELOG.md for the full list of changes.
OpenVINO™ Toolkit is licensed under Apache License Version 2.0.
By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Please use the Issues tab for bug reports, feature requests, or any questions.
Intel is committed to respecting human rights and avoiding complicity in human rights abuses.
See Intel’s Global Human Rights Principles.
Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.
For those who would like to contribute to the library, see CONTRIBUTING.md for details.
Thank you! We appreciate your support!