Calling the gesture recognizer model and getting results
Hi, thanks for the MediaPipe library. I have successfully built the library and integrated it into my project, and I am also able to run your example. In my project I can send frames for landmark detection and draw the landmarks on the frame.
I do not have a software background, and I am doing this for personal use out of pure curiosity.
I want to recognize hand gestures. In MediaPipe's sample Python code they provide a way to send frames to a model, "gesture_recognizer.task". Here are the Python code snippets for it:
# Python code for initializing the gesture recognizer
num_hands = 2
model_path = r"C:\Users\vivek\Downloads\gesture_recognizer.task"  # raw string so backslashes are not treated as escapes
GestureRecognizer = mp.tasks.vision.GestureRecognizer
GestureRecognizerOptions = mp.tasks.vision.GestureRecognizerOptions
VisionRunningMode = mp.tasks.vision.RunningMode
self.lock = threading.Lock()
self.current_gestures = []
options = GestureRecognizerOptions(
    base_options=python.BaseOptions(model_asset_path=model_path),
    running_mode=VisionRunningMode.LIVE_STREAM,
    num_hands=num_hands,
    result_callback=self.__result_callback)
recognizer = GestureRecognizer.create_from_options(options)
# Python code for getting the gestures
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=np_array)
    recognizer.recognize_async(mp_image, timestamp)  # this line calls the recognizer
    timestamp = timestamp + 1  # timestamps must be monotonically increasing in LIVE_STREAM mode
    self.put_gestures(frame)
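For context, the snippet above passes self.__result_callback into the options but never shows that callback. Below is a minimal sketch of what such a callback might look like. The class name GestureHandler is hypothetical, and the result attributes (result.gestures as one list of Category objects per detected hand, each with a category_name field) follow the MediaPipe Tasks Python API as I understand it; please verify them against your MediaPipe version.

```python
import threading

class GestureHandler:
    """Collects the latest recognized gesture names in a thread-safe way.

    Hypothetical helper class; only the callback signature
    (result, output_image, timestamp_ms) is fixed by the
    LIVE_STREAM result_callback contract.
    """

    def __init__(self):
        self.lock = threading.Lock()
        self.current_gestures = []

    def __result_callback(self, result, output_image, timestamp_ms):
        # In LIVE_STREAM mode the recognizer invokes this on its own
        # thread, so shared state is guarded by the lock.
        with self.lock:
            self.current_gestures = [
                hand[0].category_name   # top-scoring category for each hand
                for hand in result.gestures
            ]
```

The main loop can then read self.current_gestures under the same lock to overlay the labels on the frame.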
Can you please tell me how to implement this part in your library? I have been trying, but I am stuck because I do not fully understand your code and what is happening in it.
Thanks for the help.