vinson2233
So the solution is not to use autoarima if the series is difficult to forecast?
PyTorch is really memory-sensitive; make sure your CUDA memory is not already occupied. Use the `nvidia-smi` command in the terminal to see GPU memory usage.
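If you'd rather check from inside Python, here is a minimal sketch (the function name is illustrative; it relies on `torch.cuda.mem_get_info`, available in recent PyTorch versions, and degrades gracefully on a CPU-only machine):

```python
import torch


def gpu_memory_report() -> str:
    # Falls back to a plain message when no GPU is visible (CPU-only box).
    if not torch.cuda.is_available():
        return "no CUDA device available"
    free, total = torch.cuda.mem_get_info()  # bytes, for the current device
    used = total - free
    return f"{used / 1e9:.2f} GB used of {total / 1e9:.2f} GB"


print(gpu_memory_report())
```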
What prevents you from doing so? You can simply calculate it using scikit-learn; just put it inside the training loop. But I wonder what you mean by precision/recall here...
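For instance, assuming you have a classification head and collect labels and predictions per batch, the epoch-level computation could look like this (variable names are illustrative, not from the code above):

```python
from sklearn.metrics import precision_score, recall_score

# In a real training loop you would extend these lists with each
# batch's ground-truth labels and argmax predictions.
all_labels = [1, 0, 1, 1, 0]
all_preds = [1, 0, 0, 1, 1]

precision = precision_score(all_labels, all_preds)
recall = recall_score(all_labels, all_preds)
print(f"precision={precision:.2f} recall={recall:.2f}")  # both 0.67 here
```

For multi-class labels you would also pass an `average` argument (e.g. `average="macro"`), since the default is binary.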
@vkmavani sure. The `preprocess` object from CLIP takes care of all of the preprocessing steps for the image part, so you don't need to worry about `image_size` or `transform` (see https://github.com/openai/CLIP/blob/main/clip/clip.py...
@lonngxiang For more information, read https://github.com/openai/CLIP/issues/57. `clip.model.convert_weights` basically converts the CLIP model weights into float16, which helps accelerate training and reduce memory usage. The definition of `clip.model.convert_weights` can...
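As a rough sketch of the idea (not the actual `convert_weights` implementation, which is more selective about which layer types it touches, leaving e.g. LayerNorm in float32 for numerical stability), casting a model's parameters to half precision looks like:

```python
import torch
import torch.nn as nn


def convert_to_fp16(model: nn.Module) -> None:
    # Simplified: cast every parameter in place. Half precision
    # halves the memory footprint of each weight tensor.
    for p in model.parameters():
        p.data = p.data.half()


model = nn.Linear(4, 2)
convert_to_fp16(model)
print(model.weight.dtype)  # torch.float16
```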
@lonngxiang Oh, you are correct. Pardon me, I have edited my code above. The dataset should return something that can be put into a PyTorch tensor.
Yeah, if you are already using `preprocess` inside the Dataset class, the result from the batch can be fed directly to CLIP, so that line can be changed into this: `images...
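A sketch of what such a Dataset might look like with `preprocess` applied inside `__getitem__` (the class and parameter names are illustrative; `preprocess`/`tokenize` stand in for the transform returned by `clip.load()` and `clip.tokenize`, injected here so the sketch stays dependency-free):

```python
import torch
from torch.utils.data import Dataset


class ImageTitleDataset(Dataset):
    """Yields (preprocessed image tensor, tokenized caption) pairs."""

    def __init__(self, image_paths, titles, preprocess, tokenize, image_loader):
        # `image_loader` would typically be PIL.Image.open.
        self.image_paths = image_paths
        self.titles = tokenize(titles)   # tokenize every caption up front
        self.preprocess = preprocess
        self.image_loader = image_loader

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # Preprocess here, so each DataLoader batch is already model-ready.
        image = self.preprocess(self.image_loader(self.image_paths[idx]))
        return image, self.titles[idx]


# Smoke test with stand-ins for the CLIP callables:
ds = ImageTitleDataset(
    image_paths=["a.jpg", "b.jpg"],
    titles=["a photo of a cat", "a photo of a dog"],
    preprocess=lambda img: torch.zeros(3, 224, 224),
    tokenize=lambda ts: torch.zeros(len(ts), 77, dtype=torch.long),
    image_loader=lambda path: path,
)
image, title = ds[0]
print(image.shape, title.shape)  # torch.Size([3, 224, 224]) torch.Size([77])
```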
Hmmmm, that error is new to me. Does the error occur when calculating the loss?
Are you using a CPU by any chance? Mixed-precision training usually doesn't work on a CPU.
@lonngxiang I have updated the code again. Basically, remove all code related to mixed-precision training when using a CPU instead of a GPU.
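An alternative to deleting those lines is to keep a single code path and simply disable the scaler when no GPU is present; with `enabled=False`, `torch.cuda.amp.GradScaler` becomes a no-op. A minimal sketch (model and data are placeholders):

```python
import torch
import torch.nn as nn

use_amp = torch.cuda.is_available()  # False on a CPU-only machine
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(4, 8), torch.randn(4, 1)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward()  # no-op scaling when enabled=False
scaler.step(optimizer)         # plain optimizer.step() when disabled
scaler.update()
```

This way the same training loop runs on both CPU and GPU without branching.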