12 comments by Xitong Yang

Thanks for your interest in our work! In STEP, the initial proposals (anchors) are simply sampled from spatial grids. Please check the code here: https://github.com/NVlabs/STEP/blob/master/data/ava.py#L342.
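The grid-sampling idea can be sketched as follows. This is a minimal illustration, not the STEP code itself: the function name `grid_anchors` and the normalized-coordinate, fixed-box-size scheme are assumptions for the example; the actual sampling lives at the URL above.

```python
import numpy as np

def grid_anchors(grid_size=3, box_size=0.5):
    """Sample one anchor box per cell of a grid_size x grid_size spatial grid.

    Coordinates are normalized to [0, 1]; each anchor is (x1, y1, x2, y2)
    centered on a grid cell and clipped to the image boundary.
    Defaults here are illustrative only.
    """
    # Cell centers: midpoints of each grid cell along one axis.
    centers = (np.arange(grid_size) + 0.5) / grid_size
    half = box_size / 2
    anchors = []
    for cy in centers:
        for cx in centers:
            anchors.append((
                max(cx - half, 0.0), max(cy - half, 0.0),
                min(cx + half, 1.0), min(cy + half, 1.0),
            ))
    return np.array(anchors)

# A 3x3 grid yields 9 initial proposals covering the frame.
print(grid_anchors(3).shape)
```

In STEP these grid anchors are only a starting point; they are progressively refined over the subsequent steps rather than being exhaustive like typical detector anchors.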

Thanks for your interest in our work! It seems that the dataloader cannot find the frames you provided. Could you please double-check the data folder you've created? Make sure...

Thanks for your interest in our work! How many GPUs are you using for your job? Have you tried using 1 GPU?

"num" refers to the number of frames you loaded as input of the model.

Hi Ran, have you had a chance to fix the saved-model release so that we can download it? Thanks!

I tried `git clone` (and also `git lfs clone`) on the repo, and the following error popped up: > batch response: This repository is over its data quota. Account responsible for LFS...

Thanks @lavenderrz. Could you also share the train/test splits on COIN and CrossTask so that we can try to compare with the reported results? As described in the supplementary...

Hi @lavenderrz, could you please kindly share the train/test splits on COIN/CrossTask? Do they simply follow the standard train/test splits of the two datasets, or is there a customized...

Hi, thanks for your interest in our work! Currently, 3DB only supports model inference on single images.

Please find more details about the joint mapping [here](https://github.com/facebookresearch/sam-3d-body/issues/34#issuecomment-3573816204). We will update the README and example code to make it clearer. Thanks!