Jacob Dineen
@hosseinkalbasi If you call `to_parquet()` after `fit_transform` on the `nvtabular.workflow` object, it will auto-generate a schema file stored in `output_dir`, e.g.:

```python
dataset = nvt.Dataset(self.df)
workflow.fit_transform(dataset).to_parquet(output_dir)
```
@rnyak - poster of #405 here. I think this issue (I had the same question, but closed it) stems from trying to use the overloaded `trainer` class for binary...
Thank you, @gabrielspmoreira! You can close this issue. My team and I will try to write a custom train loop for the keras-style PyTorch API.
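A custom loop of the kind mentioned above might look like this minimal sketch. The model, data, and hyperparameters are illustrative placeholders, not the team's actual setup; it only shows the shape of a plain PyTorch binary-classification loop that sidesteps the `Trainer` class:

```python
import torch
from torch import nn

# Toy stand-in for the real model (illustrative only).
model = nn.Sequential(nn.Linear(4, 1))
loss_fn = nn.BCEWithLogitsLoss()  # expects raw logits, not probabilities
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

# Synthetic data: 64 samples, 4 features, labels derived from feature 0.
torch.manual_seed(0)
X = torch.randn(64, 4)
y = (X[:, 0] > 0).float().unsqueeze(1)

losses = []
for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

In a real setup the synthetic tensors would be replaced by batches from the NVTabular/Transformers4Rec data loader, but the loop structure stays the same.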
@gabrielspmoreira Is there any chance that this will be added as a feature in later releases?
Apologies for not being clear. I'll send over some additional context tomorrow.
@gabrielspmoreira For additional context: `Model.fit()` is currently the preferred way to use the package for tasks involving BinaryClassification. When using the `Trainer` class for BinaryClassification, the package uses some...
@rnyak Correct; ideally the simpler training loop would support multi-GPU distributed training.
@rnyak A member of the team is going to test out the solutions from [this issue](https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/456) in the next day or so. As for what we have tried - assume...
[This](https://discuss.pytorch.org/t/dataparallel-doesnt-work-when-calling-model-module-some-attribute/47556/6) is a pretty good example of the issue
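The gist of that thread is that `nn.DataParallel` wraps the original model in a `.module` attribute, so custom attributes defined on the model are no longer reachable on the wrapper itself. This torch-free sketch (class names are illustrative, not the real `nn.DataParallel` implementation) shows the delegation problem:

```python
class Wrapper:
    """Mimics how nn.DataParallel stores the wrapped model:
    the original object lives in `self.module`, and attribute
    lookups are NOT forwarded to it."""
    def __init__(self, module):
        self.module = module


class Model:
    def __init__(self):
        self.custom_attr = 42  # attribute defined on the underlying model


wrapped = Wrapper(Model())

# wrapped.custom_attr would raise AttributeError;
# the attribute must be reached through the wrapped module instead:
value = wrapped.module.custom_attr
```

The usual workarounds are accessing `model.module.some_attribute` directly, or forwarding lookups with a custom `__getattr__` on the wrapper.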
I would like to work on CLIP for PyTorch.