Label images and video for Computer Vision applications
Image labeling in multiple annotation formats:
This project was developed for the following paper; please consider citing it:
@INPROCEEDINGS{8594067,
author={J. {Cartucho} and R. {Ventura} and M. {Veloso}},
booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots},
year={2018},
pages={2336-2341},
}
To start using the YOLO Bounding Box Tool you need to download the latest release or clone the repo:
git clone --recurse-submodules [email protected]:Cartucho/OpenLabeling.git
You need to install:
python -m pip install -U pip
python -m pip install -U opencv-python
python -m pip install -U opencv-contrib-python
python -m pip install -U numpy
python -m pip install -U tqdm
python -m pip install -U lxml
Alternatively, you can install everything at once by simply running:
python -m pip install -U pip
python -m pip install -U -r requirements.txt
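For reference, a requirements.txt matching the packages listed above might look like this (illustrative; the repository's actual file may differ or pin specific versions):

```
opencv-python
opencv-contrib-python
numpy
tqdm
lxml
```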
Step by step:
Open the main/ directory
Insert the input images and videos in the folder input/
Insert the classes in the file class_list.txt (one class name per line)
Run the code:
python main.py [-h] [-i] [-o] [-t] [--tracker TRACKER_TYPE] [-n N_FRAMES]
You can find the annotations in the folder output/
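For example, the class_list.txt mentioned in the steps above could contain (the class names here are just an illustration):

```
person
dog
car
```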
optional arguments:
-h, --help Show this help message and exit
-i, --input Path to images and videos input folder | Default: input/
-o, --output Path to output folder (if using the PASCAL VOC format it's important to set this path correctly) | Default: output/
-t, --thickness Bounding box and cross line thickness (int) | Default: -t 1
--tracker TRACKER_TYPE Tracker to use, one of: ['CSRT', 'KCF', 'MOSSE', 'MIL', 'BOOSTING', 'MEDIANFLOW', 'TLD', 'GOTURN', 'DASIAMRPN']
-n N_FRAMES Number of frames to track the object for
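The flags above can be sketched with argparse. This is a minimal illustration of the command-line interface, not main.py's actual code; the defaults for --tracker and -n are assumptions:

```python
# Sketch of main.py's CLI flags using argparse; defaults for --tracker
# and -n are assumed, not taken from the repository.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Open-source image labeling tool")
    parser.add_argument("-i", "--input", default="input/",
                        help="Path to images and videos input folder")
    parser.add_argument("-o", "--output", default="output/",
                        help="Path to output folder")
    parser.add_argument("-t", "--thickness", type=int, default=1,
                        help="Bounding box and cross line thickness")
    parser.add_argument("--tracker", dest="tracker_type", default="KCF",
                        choices=["CSRT", "KCF", "MOSSE", "MIL", "BOOSTING",
                                 "MEDIANFLOW", "TLD", "GOTURN", "DASIAMRPN"],
                        help="Tracker type used when propagating labels")
    parser.add_argument("-n", "--n_frames", type=int, default=200,
                        help="Number of frames to track the object for")
    return parser

args = build_parser().parse_args(["-t", "2", "--tracker", "CSRT"])
print(args.thickness, args.tracker_type, args.input)  # 2 CSRT input/
```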
To use the DASIAMRPN tracker, clone the repository with --recurse-submodules (see the clone command above) so the tracker's submodule is available.

The automatic labeling script loads its pre-trained model from the object_detection/models directory (you need to create the models folder yourself). The outline of object_detection looks like this:

object_detection/
    tf_object_detection.py
    utils.py
    models/ssdlite_mobilenet_v2_coco_2018_05_09

Download the pre-trained model from http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz, extract it, and place the extracted folder inside object_detection/models.
Note: the default model used by main_auto.py is ssdlite_mobilenet_v2_coco_2018_05_09. You can set graph_model_path in main_auto.py to use a different pre-trained model. Run main_auto.py first to automatically label the data.
TODO: explain how the user can
Keyboard, press:

| Key | Description |
| --- | --- |
| a/d | previous/next image |
| s/w | previous/next class |
| e | edges |
| h | help |
| q | quit |
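The a/d and s/w navigation can be illustrated with a small helper (an illustrative sketch, not the tool's code; the wrap-around behavior is an assumption):

```python
# Sketch of the a/d (image) and s/w (class) navigation from the table
# above; in this sketch both lists wrap around at the ends.
def next_index(key, index, count):
    """Return the new index after a navigation key press."""
    if key in ("a", "s"):      # previous image / previous class
        return (index - 1) % count
    if key in ("d", "w"):      # next image / next class
        return (index + 1) % count
    return index               # other keys leave the index unchanged

print(next_index("d", 0, 5))  # 1
print(next_index("a", 0, 5))  # 4 (wraps to the last item)
```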
Video:

| Key | Description |
| --- | --- |
| p | predict the next frames' labels |
Mouse: