YHDang

Results 13 comments of YHDang

> This is excellent work! Thanks for your detailed sharing. I have studied your code for RootNet and PoseNet and I want to make some improvements based on...

> @95xueqian Hello, have you solved the problem of the Inception-V4 pretrained model? How did you solve it? Thank you. Hi, were you able to find the pre-trained models?

> As mentioned in the paper, you use PoseNet's H36M dataset, and the training process is supervised on joint rotations. Are you using the rotations annotated by PoseNet? >...

> > Hi @vgonzalez88 , > > Actually, we do not process the images from the original videos ourselves. We use the images processed by [PoseNet](https://github.com/mks0601/3DMPPE_POSENET_RELEASE). I think you can...

Hello, I ran UniPoseLSTM directly on PennAction, but it has not run successfully yet. One problem is that the ground-truth heatmaps provided by the author are 368×368, while the model output is 46×46, which is strange. ------- Original message ------- Date: Monday, June 14, 2021, 1:56 PM Subject: Re: [bmartacho/UniPose] Questions about the augmentation on PennAction dataset (#25) Hello, have you managed to get the author's data-processing script penn_action_data.py working? I don't quite understand how the author splits the data or builds the dataloader. Could you share your data split and code? —...

> With stride = 8, the labels (ground truth) become 46×46. So it is indeed 46×46, but what does the source data look like? Could you share it? I used the dataloader provided by the author directly, without any other changes.
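For context, the 368-to-46 relationship comes from the network stride: 368 / 8 = 46, so the ground-truth heatmap must be rendered at the output resolution, not the input resolution. A minimal sketch of that idea (the function and its parameters are hypothetical illustrations, not taken from the UniPose code):

```python
import numpy as np

def gaussian_heatmap(joint_xy, input_size=368, stride=8, sigma=3.0):
    """Render a ground-truth Gaussian heatmap directly at the network's
    output resolution: input_size // stride, e.g. 368 // 8 = 46."""
    out = input_size // stride  # 46
    # Map the joint from input-image coordinates to heatmap coordinates.
    cx, cy = joint_xy[0] / stride, joint_xy[1] / stride
    xs = np.arange(out, dtype=np.float64)
    gx = np.exp(-((xs - cx) ** 2) / (2 * sigma ** 2))  # 1-D Gaussian in x
    gy = np.exp(-((xs - cy) ** 2) / (2 * sigma ** 2))  # 1-D Gaussian in y
    return np.outer(gy, gx)  # separable 2-D Gaussian, shape (46, 46)

hm = gaussian_heatmap((184.0, 200.0))
print(hm.shape)  # (46, 46)
```

If the provided ground truth is stored at 368×368, the equivalent fix is to downsample (or regenerate) it at 46×46 before computing the loss, so that target and prediction shapes match.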

> Thanks for your great work. I tried to evaluate on PoseTrack 2018 with `jsonformat_std_to_posetrack18.py -e 0.4 -d lighttrack -m track -f 17 -r 0.80`, but why is the option "-f"...

> Could you show more details of the error information? Thanks for your reply. There is no error; the printed information is as follows. ![image](https://user-images.githubusercontent.com/34653678/126485399-2338291d-fe3d-4ffd-ae84-a80723210ed6.png) And then the program doesn't...

> Which version of PyTorch did you use? It is better to use PyTorch 1.0. Oh, I am using PyTorch 1.5. I'll try changing it to 1.0. Thanks very much.