Wenhao Li
Hi @hassony2, thanks for answering. It's OK to run PyTorch code in my virtual environment. I found that the exact line is `hand_verts, hand_joints = mano_layer(random_pose, random_shape)`, which causes the...
This method outputs coordinates relative to the root joint. You can refer to [1] to train a trajectory model and output the coordinates of the root joint to...
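As a rough sketch of how the two outputs could be combined (the `pose_net` and `traj_net` names here are hypothetical, not this repo's actual API):

```python
import torch

def predict_global(pose_net, traj_net, keypoints_2d):
    """Combine a root-relative pose model with a trajectory model.

    pose_net: predicts root-relative 3D joints, shape (B, T, J, 3)
    traj_net: predicts the global root position, shape (B, T, 1, 3)
    """
    rel_joints = pose_net(keypoints_2d)  # root-relative coordinates
    root_xyz = traj_net(keypoints_2d)    # global root trajectory
    return rel_joints + root_xyz         # broadcast to global coordinates
```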
Single-view 3D human pose estimation methods have difficulty estimating global coordinates. You can refer to some multi-view methods.
Maybe it can; you can try it by running our demo code.
We only use flip data augmentation in the training and testing phases, following previous work such as [VideoPose3D](https://github.com/facebookresearch/VideoPose3D).
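For reference, a minimal sketch of this VideoPose3D-style flip augmentation at test time; the left/right joint index lists below are assumptions and must match your skeleton definition:

```python
import torch

LEFT = [4, 5, 6, 11, 12, 13]   # hypothetical left-side joint indices
RIGHT = [1, 2, 3, 14, 15, 16]  # hypothetical right-side joint indices

def flip_pose(pose):
    """Mirror a (B, T, J, C) pose about the x-axis and swap left/right joints."""
    flipped = pose.clone()
    flipped[..., 0] *= -1  # negate the x coordinate
    flipped[..., LEFT + RIGHT, :] = flipped[..., RIGHT + LEFT, :]
    return flipped

def predict_with_flip(model, input_2d):
    """Average the predictions on the original and the flipped input."""
    out = model(input_2d)
    out_flip = flip_pose(model(flip_pose(input_2d)))
    return (out + out_flip) / 2
```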
`input_2D` contains the 2D poses detected by HRNet. The demo code follows https://github.com/fabro66/GAST-Net-3DPoseEstimation, which uses YOLOv3 for person detection and HRNet for 2D pose estimation.
We ran the model on Ubuntu and have not tried it on Windows 10.
Our repo is built on top of [ST-GCN](https://github.com/vanoracai/Exploiting-Spatial-temporal-Relationships-for-3D-Pose-Estimation-via-Graph-Convolutional-Networks); we train the first stage for 20 epochs.
In our demo code, we plot the joints, which is time-consuming. Without visualization, our model is real-time.
You can use `model['trans'] = nn.DataParallel(model['trans'])`.
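A minimal sketch, assuming `model` is a dict of sub-networks as the snippet suggests (the placeholder network below is not the repo's real module):

```python
import torch
import torch.nn as nn

# Placeholder network; the real 'trans' module comes from the repo.
model = {'trans': nn.Linear(17 * 2, 17 * 3)}

# Wrap with DataParallel so each batch is split across the visible GPUs.
if torch.cuda.device_count() > 1:
    model['trans'] = nn.DataParallel(model['trans'])
if torch.cuda.is_available():
    model['trans'] = model['trans'].cuda()
```

Note that `nn.DataParallel` stores the wrapped network under `.module`, so when saving a checkpoint you may want `model['trans'].module.state_dict()` to keep the weight keys loadable without the wrapper.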