We write your reusable computer vision tools. 💜 Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝
Pip install the supervision package in a Python>=3.8 environment.
pip install supervision
Read more about conda, mamba, and installing from source in our guide.
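If you prefer conda, a minimal sketch of the equivalent install (assuming the package is published on the conda-forge channel; confirm against the installation guide):

conda install -c conda-forge supervision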
Supervision was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries like Ultralytics, Transformers, or MMDetection.
ultralytics
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(...)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)
len(detections)
# 5
inference
Running with Inference requires a Roboflow API key.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(...)
model = get_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
len(detections)
# 5
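With detections in hand, you can also count how many of them fall inside a zone, as promised above. A minimal sketch using sv.PolygonZone; the polygon coordinates are placeholders, and the exact constructor arguments may vary between supervision versions:

import numpy as np
import supervision as sv

# polygon vertices in (x, y) pixel coordinates -- placeholder values
polygon = np.array([[100, 100], [500, 100], [500, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)

# returns a boolean mask marking which detections fall inside the zone
mask = zone.trigger(detections=detections)

# zone.current_count now holds the number of detections inside the zone
zone.current_count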
Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.
import cv2
import supervision as sv
image = cv2.imread(...)
detections = sv.Detections(...)
box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
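Annotators compose: each one takes a scene and returns an annotated copy, so you can chain as many as you need. A minimal sketch stacking sv.LabelAnnotator on top of the boxes; it assumes your detections carry class names and confidences (connectors like from_ultralytics populate both):

label_annotator = sv.LabelAnnotator()

# build one label per detection from its class name and confidence
labels = [
    f"{class_name} {confidence:.2f}"
    for class_name, confidence
    in zip(detections["class_name"], detections.confidence)
]

annotated_frame = label_annotator.annotate(
    scene=annotated_frame,
    detections=detections,
    labels=labels
)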
https://github.com/roboflow/supervision/assets/26109316/691e219c-0565-4403-9218-ab5644f39bce
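The same annotators work frame by frame on video, as in the clip above. A minimal sketch built on sv.process_video, which reads frames from source_path, passes each frame and its index through the callback, and writes the returned frames to target_path; the model choice and file paths are placeholders:

import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
box_annotator = sv.BoxAnnotator()

def callback(frame: np.ndarray, index: int) -> np.ndarray:
    # run detection on each frame and draw the boxes
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    return box_annotator.annotate(scene=frame.copy(), detections=detections)

sv.process_video(
    source_path=<SOURCE_VIDEO_PATH>,
    target_path=<TARGET_VIDEO_PATH>,
    callback=callback
)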
Supervision provides a set of utils that allow you to load, split, merge, and save datasets in one of the supported formats.
import supervision as sv
from roboflow import Roboflow
project = Roboflow().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")
ds = sv.DetectionDataset.from_coco(
images_directory_path=f"{dataset.location}/train",
annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)
path, image, annotation = ds[0]
# loads image on demand

for path, image, annotation in ds:
    # loads image on demand
    ...
load
dataset = sv.DetectionDataset.from_yolo(
images_directory_path=...,
annotations_directory_path=...,
data_yaml_path=...
)
dataset = sv.DetectionDataset.from_pascal_voc(
images_directory_path=...,
annotations_directory_path=...
)
dataset = sv.DetectionDataset.from_coco(
images_directory_path=...,
annotations_path=...
)
split
train_dataset, test_dataset = dataset.split(split_ratio=0.7)
test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
len(train_dataset), len(test_dataset), len(valid_dataset)
# (700, 150, 150)
merge
ds_1 = sv.DetectionDataset(...)
len(ds_1)
# 100
ds_1.classes
# ['dog', 'person']
ds_2 = sv.DetectionDataset(...)
len(ds_2)
# 200
ds_2.classes
# ['cat']
ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
len(ds_merged)
# 300
ds_merged.classes
# ['cat', 'dog', 'person']
save
dataset.as_yolo(
images_directory_path=...,
annotations_directory_path=...,
data_yaml_path=...
)
dataset.as_pascal_voc(
images_directory_path=...,
annotations_directory_path=...
)
dataset.as_coco(
images_directory_path=...,
annotations_path=...
)
convert
sv.DetectionDataset.from_yolo(
images_directory_path=...,
annotations_directory_path=...,
data_yaml_path=...
).as_pascal_voc(
images_directory_path=...,
annotations_directory_path=...
)
Want to learn how to use Supervision? Explore our how-to guides, end-to-end examples, cheatsheet, and cookbooks!
Dwell Time Analysis with Computer Vision | Real-Time Stream Processing
Speed Estimation & Vehicle Tracking | Computer Vision | Open Source
Did you build something cool using supervision? Let us know!
https://github.com/roboflow/supervision/assets/26109316/c9436828-9fbf-4c25-ae8c-60e9c81b3900
https://github.com/roboflow/supervision/assets/26109316/3ac6982f-4943-4108-9b7f-51787ef1a69f
Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.
We love your input! Please see our contributing guide to get started. Thank you 🙏 to all our contributors!