stone.wlg
> You should give a proposal, such as #3909.

done
> If you're going to include the binary, why not include the model as well?
>
> And this would only work for a linux x86 image. Would be good...
After several tests, I have a few conclusions to share with you:

1. Use reader_0 for training and reader_1 for validation (as predict).
2. To save the model, I added save_to_local_dir=True in TrainerParam, but...
maybe: It does, however, require the model to fit on a single GPU. https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/