
How to train the model?

lishu2006ll opened this issue 2 years ago · 6 comments

I only found inference code, nothing for training. Where are the training code and the raw dataset?

lishu2006ll avatar Jun 14 '23 02:06 lishu2006ll

I would like to know the same.

pjrenzov avatar Aug 01 '23 11:08 pjrenzov

I would like to know the same, too.

jianjiabailu avatar Dec 07 '23 07:12 jianjiabailu

Bumping this, would also like to see the training code and dataset

Pearces1997 avatar Mar 02 '24 04:03 Pearces1997

For anyone wondering about fine-tuning Shap-E or Point-E: there is another project, Cap3D, whose developers have provided fine-tuning code.

cap3D/text-to-3D/finetune_shapE.py at main · crockwell/Cap3D

HaiderSaleem avatar May 02 '24 22:05 HaiderSaleem
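As a rough illustration of what such a fine-tuning script does, here is a generic denoising-diffusion training step. This is not the actual Cap3D code: the tiny MLP, the dimensions, and the noising step are placeholders (a real schedule scales the latent and noise by timestep-dependent coefficients), but the structure — load a cached latent, add noise, train the model to predict that noise — is the same.

```python
import torch
import torch.nn as nn

# Toy stand-in for the Shap-E diffusion transformer; shapes are illustrative.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(latent_code):
    """One simplified diffusion fine-tuning step on a batch of latent codes."""
    noise = torch.randn_like(latent_code)
    noisy = latent_code + noise           # real schedules scale both terms
    pred = model(noisy)                   # model predicts the added noise
    loss = nn.functional.mse_loss(pred, noise)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# latent codes would come from a pre-computed dataset, not randn
loss = train_step(torch.randn(4, 16))
```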

for anyone wondering for fine tuning Shap-E or Point-E, here is another project Cap3D. The devs have provided code for fine tuning.

cap3D/text-to-3D/finetune_shapE.py at main · crockwell/Cap3D

Does finetune_shapE.py train the whole Shap-E model? It seems to me that the code only trains the transformer and diffusion parts, omitting the first cross-attention and patch-embedding layers. Am I understanding this correctly? I also see that the data loaded during training is latent_code; how is this latent_code obtained?

chenyg59 avatar Jun 27 '24 15:06 chenyg59

It seems to me that the code only trains the transformer and diffusion parts, omitting the first cross-attention and patch-embedding layers.

That is the point of "fine-tuning": only the weights of the qkv projections are updated.
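The usual way to restrict training to the qkv projections is to turn off gradients for every other parameter. Here is a minimal sketch with a toy attention module; the real Shap-E parameter names differ, so matching on "qkv" in the name is an assumption about the naming convention, not the actual fine-tuning code.

```python
import torch.nn as nn

# Toy stand-in for one transformer attention block.
class ToyAttention(nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)   # qkv projection: fine-tuned
        self.proj = nn.Linear(dim, dim)      # output projection: frozen

def freeze_all_but_qkv(model: nn.Module) -> list[str]:
    """Freeze every parameter whose name does not contain 'qkv';
    return the names that remain trainable."""
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = "qkv" in name
        if param.requires_grad:
            trainable.append(name)
    return trainable

model = ToyAttention()
trainable = freeze_all_but_qkv(model)
```

An optimizer built afterwards would then only receive the parameters with `requires_grad=True`.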

I see that the data loaded during training is latent_code; how is this latent_code obtained?

Through the "3D Encoder", as described in Fig. 2 of the paper.
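In other words, the latent codes are computed offline: each 3D asset is run once through the pre-trained 3D encoder and the resulting latent is cached, so the fine-tuning script only ever sees latents, never raw geometry. A conceptual sketch of that caching step, with a toy encoder standing in for the real one (the actual shap-e encoder API and latent shape are not reproduced here):

```python
import torch
import torch.nn as nn

# Toy stand-in for Shap-E's 3D encoder (Fig. 2 of the paper): it maps a
# point cloud to a fixed-size latent code. The real encoder is a
# pre-trained transformer; all shapes here are illustrative.
class ToyPointEncoder(nn.Module):
    def __init__(self, point_dim=6, latent_dim=16):
        super().__init__()
        self.proj = nn.Linear(point_dim, latent_dim)

    def forward(self, points):            # points: (N, point_dim)
        return self.proj(points).mean(0)  # pool points into one latent vector

encoder = ToyPointEncoder()
point_cloud = torch.randn(1024, 6)        # e.g. xyz + rgb per point
with torch.no_grad():
    latent_code = encoder(point_cloud)
# latent_code would be saved to disk (e.g. torch.save) and later loaded
# as the training data for the fine-tuning script.
```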

Aut0matas avatar Aug 01 '24 09:08 Aut0matas