Hand Tracking
Tracking hands using SSD with MobileNetV1
Outcome
See it here
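The outcome is a detector that draws boxes around hands in live video. The snippet below is a minimal sketch of how such an exported detector could be run on webcam frames, assuming TensorFlow 1.x and a frozen graph exported by the Object Detection API; the graph path and score threshold are placeholders, not files shipped in this repository.

```python
# Sketch: run an exported SSD hand detector on webcam frames (TF1-style frozen graph).
# The graph path and the 0.5 threshold below are assumptions.
import cv2
import numpy as np
import tensorflow as tf

GRAPH_PATH = "inference_graph/frozen_inference_graph.pb"  # assumed location

# Load the frozen detection graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image_tensor = graph.get_tensor_by_name("image_tensor:0")
    boxes = graph.get_tensor_by_name("detection_boxes:0")
    scores = graph.get_tensor_by_name("detection_scores:0")

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        b, s = sess.run([boxes, scores],
                        feed_dict={image_tensor: np.expand_dims(rgb, 0)})
        h, w = frame.shape[:2]
        for box, score in zip(b[0], s[0]):
            if score < 0.5:  # assumed confidence threshold
                continue
            ymin, xmin, ymax, xmax = box
            cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                          (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)
        cv2.imshow("hand detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```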
What I did here
- Recorded images of my hands in different postures, positions and backgrounds (a minimal capture sketch follows this list).
- Applied random augmentations to those images, namely resizing, colour shifts, added noise, flips and rotations.
- Used TensorFlow's Object Detection API to train SSD with MobileNetV1.
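The capture step can be as simple as grabbing webcam frames and writing them to disk. The sketch below is a rough approximation of what get_camera_images.py does; the key binding and file-naming scheme are assumptions.

```python
# Sketch of a capture loop in the spirit of get_camera_images.py.
# Key bindings and file names here are assumptions, not the repository's exact behaviour.
import os
import cv2

OUT_DIR = "unlabelled_images"
os.makedirs(OUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(0)
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("capture (s = save, q = quit)", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):
        # Save the current frame as an unlabelled training image.
        cv2.imwrite(os.path.join(OUT_DIR, "hand_{:04d}.jpg".format(count)), frame)
        count += 1
    elif key == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```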
Generation and annotation of dataset
- I generated the dataset myself. The model trained by victordibia did not work for me, so I had to collect and annotate my own data.
- Captured images of my hand in different postures, positions and backgrounds from the camera using get_camera_images.py. The pictures are stored in the unlabelled_images/ folder.
- Then I apply some augmentations to those images. The augmentations are selected randomly by augment_images.py, and the augmented images are stored in the augmented/ folder (a rough sketch of this step follows this list).
- Finally, I annotate the augmented images with the labelImg tool by tzutalin.
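As a rough illustration of the augmentation step, the sketch below applies a random subset of the transforms listed above (resize, colour shift, noise, flip, rotate) with OpenCV and NumPy. The exact transforms and parameter ranges in augment_images.py may differ; everything below is an assumption.

```python
# Sketch of random augmentation in the spirit of augment_images.py.
# Transform choices and parameter ranges are assumptions.
import os
import random
import cv2
import numpy as np

SRC_DIR, DST_DIR = "unlabelled_images", "augmented"
os.makedirs(DST_DIR, exist_ok=True)

def random_augment(img):
    if random.random() < 0.5:  # random resize
        f = random.uniform(0.7, 1.3)
        img = cv2.resize(img, None, fx=f, fy=f)
    if random.random() < 0.5:  # colour / brightness shift
        img = cv2.convertScaleAbs(img, alpha=random.uniform(0.8, 1.2),
                                  beta=random.randint(-20, 20))
    if random.random() < 0.5:  # additive Gaussian noise
        noise = np.random.normal(0, 10, img.shape).astype(np.int16)
        img = np.clip(img.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    if random.random() < 0.5:  # horizontal flip
        img = cv2.flip(img, 1)
    if random.random() < 0.5:  # small rotation
        h, w = img.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-15, 15), 1.0)
        img = cv2.warpAffine(img, m, (w, h))
    return img

for name in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, name))
    if img is None:
        continue
    cv2.imwrite(os.path.join(DST_DIR, "aug_" + name), random_augment(img))
```

Annotating after augmentation keeps the bounding boxes consistent with the geometric transforms (flip, rotate, resize), at the cost of labelling every augmented image by hand.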
How to train
Follow this tutorial by sentdex here
What is the intention behind this project
The main motivation behind this project is my sign language project. Many of you reported that skin detection using histogram backprojection does not work well for you, so I decided to switch from skin-colour detection to hand detection. You can therefore expect significant changes to the sign language program within the next couple of months.
Credits
- sentdex for the TensorFlow Object Detection API tutorial here
- datitran for the generate_tfrecords.py and xml_to_csv.py
- victordibia for the original inspiration.
- tzutalin for labelImg