Same. Have you solved it?
I ran into the same problem, but have no idea why. Have you solved it? @YZsZY
> You can get the position of pruned parameters by replacing [Line 150](https://github.com/horseee/LLM-Pruner/blob/1455fa9646bc8b87ccbc613cf1b97e5729e06152/hf_prune.py#L150) in hf_prune.py with:
>
> ```
> for group in pruner.step(interactive=True):
>     print(group.details())
>     group.prune()
> ```

...
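For reference, here is a minimal sketch (my own illustration, not code from the repo) of how that interactive mode can be used to record which channel indices get pruned in each layer. It assumes `pruner` is the Torch-Pruning `MetaPruner` already constructed earlier in hf_prune.py, and the `pruned_positions` dict is purely illustrative:

```python
# Illustrative sketch (not from LLM-Pruner itself): record pruned channel
# indices per layer before applying the pruning. Assumes `pruner` is the
# Torch-Pruning MetaPruner already built in hf_prune.py.
pruned_positions = {}
for group in pruner.step(interactive=True):
    # Iterating a Torch-Pruning Group yields (dependency, indices) pairs.
    for dep, idxs in group:
        layer = dep.target.module      # the nn.Module being pruned
        handler = dep.handler          # e.g. prune_out_channels / prune_in_channels
        pruned_positions.setdefault(id(layer), []).append((str(handler), list(idxs)))
    group.prune()                      # apply the pruning for this group
```

Each `group` bundles the layers that must be pruned together to keep tensor shapes consistent, so the indices are recorded before `group.prune()` applies the change.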
Hi! Thanks for your timely reply! But I'm still a little confused. If I make the bbox controlled by scene_scale bigger, shouldn't part of the background also be inside the bbox...
Sure. Here are the rendered RGB, depth, and normal images at 185k iterations: [images] ...
So that means no matter how big the bbox might be, the background still cannot be reconstructed, right?
But I set the [scene_scale](https://github.com/autonomousvision/sdfstudio/blob/370902a10dbef08cb3fe4391bd3ed1e227b5c165/nerfstudio/data/dataparsers/mipnerf360_dataparser.py#L54) in the mipnerf360 dataparser to make the bbox bigger (`scene_scale=4`), but it still cannot reconstruct the background. All of the results above were produced with this setting.
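For reference, a minimal sketch (my own illustration; the config class name `Mipnerf360DataParserConfig` and the nerfstudio-style `setup()` / `get_dataparser_outputs()` calls are assumptions based on the linked file) of setting that field in Python instead of editing the source:

```python
# Hedged sketch: enlarge the scene bbox via the dataparser config.
# Assumes sdfstudio's mipnerf360_dataparser.py exposes a dataclass config
# named Mipnerf360DataParserConfig with a `scene_scale` field (as linked above).
from nerfstudio.data.dataparsers.mipnerf360_dataparser import Mipnerf360DataParserConfig

config = Mipnerf360DataParserConfig(scene_scale=4.0)   # larger value -> larger bbox
dataparser = config.setup()                            # instantiate the dataparser from its config
outputs = dataparser.get_dataparser_outputs(split="train")
print(outputs.scene_box)                               # inspect the resulting scene bounding box
```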
From the code that follows it, I assumed it is used to scale the poses, so I didn't dare to modify it. I will try it later. Thanks for your help!
Thank you for the reminder! I'll try your advice. Thank you for your reply!
@niujinshuchong Sorry to bother you again. I tried bakedangelo as you advised, and I set the [scene_scale](https://github.com/autonomousvision/sdfstudio/blob/370902a10dbef08cb3fe4391bd3ed1e227b5c165/nerfstudio/data/dataparsers/mipnerf360_dataparser.py#L54) in the mipnerf360 dataparser so it would learn the background mesh. And it only trained...