Hi, has the problem been solved?
I am working on reproducing the numbers reported in the paper. Train datasets: H36M, MuCo, COCO. Test dataset: 3DPW. I am using PyTorch 1.6, Python 3.7, and CUDA 10. Here is the...
@mks0601 Thanks, I found that you have merged the root pose and the camera rotation:

```python
# merge root pose and camera rotation
root_pose = smpl_pose[self.root_joint_idx,:].numpy()
root_pose, _ = cv2.Rodrigues(root_pose)
root_pose, _ = cv2.Rodrigues(np.dot(R, root_pose))
smpl_pose[self.root_joint_idx]...
```
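For readers following along, here is a self-contained sketch of that merge step as I understand it (the function wrapper and the completion of the truncated last line are my assumptions; `R` is the 3x3 camera rotation matrix):

```python
import cv2
import numpy as np
import torch

def merge_root_pose_with_cam_rot(smpl_pose, root_joint_idx, R):
    """Fold the camera rotation R (3x3) into the axis-angle root pose.

    smpl_pose: (24, 3) tensor of per-joint axis-angle rotations.
    """
    # axis-angle root pose -> 3x3 rotation matrix
    root_pose, _ = cv2.Rodrigues(smpl_pose[root_joint_idx, :].numpy())
    # pre-multiply by the camera rotation, then convert back to axis-angle
    root_pose, _ = cv2.Rodrigues(np.dot(R, root_pose))
    smpl_pose[root_joint_idx] = torch.from_numpy(root_pose).view(3)
    return smpl_pose
```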
Yes, the camera extrinsic parameters include R and t. I think fit_mesh_coord_cam has already applied the camera extrinsics through the "merge root pose and camera rotation" step, but the...
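If the extrinsics were instead applied to the mesh coordinates directly, rather than folded into the root pose, it would look like this (a minimal sketch under my assumptions about the shapes; `R` is 3x3 and `t` is a 3-vector):

```python
import numpy as np

def world_to_cam(mesh_world, R, t):
    """Map (N, 3) world-coordinate vertices to camera coordinates:
    x_cam = R @ x_world + t, applied row-wise."""
    return np.dot(mesh_world, R.T) + t.reshape(1, 3)
```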
The code for the side view is:

```python
pose, shape, trans = smpl_param['pose'], smpl_param['shape'], smpl_param['trans']
smpl_pose = torch.FloatTensor(pose).view(-1,3); smpl_shape = torch.FloatTensor(shape).view(1,-1); # smpl parameters (pose: 72 dimension, shape: 10 dimension)
R, t...
```
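One common way to get a side view (a sketch of what I assume is intended, not the repo's exact rendering code) is to rotate the camera-coordinate mesh about the vertical axis, pivoting around the root joint, before projecting:

```python
import numpy as np

def rotate_for_side_view(mesh_cam, root_joint, deg=90.0):
    """Rotate (N, 3) camera-coordinate vertices about the y axis,
    pivoting around the root joint, to view the mesh from the side."""
    rad = np.deg2rad(deg)
    R_y = np.array([[ np.cos(rad), 0.0, np.sin(rad)],
                    [ 0.0,         1.0, 0.0        ],
                    [-np.sin(rad), 0.0, np.cos(rad)]])
    return np.dot(mesh_cam - root_joint, R_y.T) + root_joint
```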
Yes, I followed your code in Human36M/Human36M.py. I can get the right result for the front view, which applies the extrinsic parameters (R, t) and the intrinsic parameters (cam_param['focal'], cam_param['princpt']). The original coordinate system is x, y, z...
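For reference, projecting camera-coordinate points to the image plane with those intrinsics is a standard pinhole projection; a minimal sketch assuming focal = (fx, fy) and princpt = (cx, cy):

```python
import numpy as np

def cam_to_pixel(cam_coord, focal, princpt):
    """Project (N, 3) camera-coordinate points to (N, 2) pixel coordinates."""
    x = cam_coord[:, 0] / cam_coord[:, 2] * focal[0] + princpt[0]
    y = cam_coord[:, 1] / cam_coord[:, 2] * focal[1] + princpt[1]
    return np.stack([x, y], axis=1)
```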
Thanks. The "intrinsic parameters" are cam_param['focal'] and cam_param['princpt'], and there is just one set of extrinsics for the front view. Now I want to visualize the orientation of the whole body from the side view. My unclear description...
Thanks for your patient reply, I will try it.
@mks0601 Can you provide the benchmark code for the 3DPW challenge? How can I reproduce the competition performance?
Thank you for your reply. Your I2L-MeshNet won first and second place at the 3DPW challenge on the unknown association track, which does not allow using ground-truth data in any...
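While waiting for the official benchmark code, one metric you can reproduce yourself is PA-MPJPE (mean per-joint position error after Procrustes alignment), which 3DPW-style evaluations commonly report; below is my own sketch, not the official challenge implementation:

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """PA-MPJPE: rigidly align pred (J, 3) to gt (J, 3) with a
    similarity transform (Umeyama), then average the joint errors."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p.T @ g)           # SVD of the 3x3 covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # correct for reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                          # optimal rotation
    scale = (S * np.diag(D)).sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()
```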