train-CLIP
A PyTorch Lightning solution to training OpenAI's CLIP from scratch.
I am trying to train a model using the following command **python train.py --model_name RN50 --folder ArchDaily --batch_size 512 --accelerator cuda** and I get the following error: _File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/hooks.py", line...
I am having trouble figuring out how to load a checkpoint with CustomCLIPWrapper. Could you give me a code example? I would be very grateful.
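In case it helps, here is a minimal sketch of one way to do it; the import path, constructor arguments, and encoder choices below are assumptions and should mirror whatever was used at training time:

```python
# Sketch: restore a trained CustomCLIPWrapper from a Lightning checkpoint.
# Import path, constructor arguments, and encoders are assumptions.
import torch
from torchvision.models import resnet50
from transformers import AutoModel

from models import CustomCLIPWrapper  # assumed import path

# Rebuild the same encoders that were used for training.
img_encoder = resnet50(pretrained=False)
img_encoder.fc = torch.nn.Linear(2048, 768)   # assumed projection size
txt_encoder = AutoModel.from_pretrained("johngiorgi/declutr-sci-base")

model = CustomCLIPWrapper(img_encoder, txt_encoder, minibatch_size=64)

# Copy the trained weights out of the Lightning checkpoint.
ckpt = torch.load("path/to/checkpoint.ckpt", map_location="cpu")
model.load_state_dict(ckpt["state_dict"], strict=False)
model.eval()
```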
I want to use a custom tokenizer and encoder trained with the Hugging Face tokenizers library. After training the tokenizer, I got a JSON file containing the vocabulary. However, I don't know how to feed...
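One way to turn that JSON file into a regular `transformers` tokenizer is to wrap it with `PreTrainedTokenizerFast`; a small sketch, where the file name and special tokens are assumptions:

```python
# Sketch: wrap a tokenizers-library JSON file as a transformers tokenizer.
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="my_tokenizer.json",  # the JSON produced by the tokenizers library
    unk_token="[UNK]",                   # assumed special tokens; match your training setup
    pad_token="[PAD]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    mask_token="[MASK]",
)

batch = tokenizer(["a caption", "another caption"],
                  padding=True, truncation=True, return_tensors="pt")
```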
File "/home/rishabh/Rishabclip/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2430, in __call__ ` "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "` `ValueError: text input must of type `str` (single...
Hi, I borrowed some snippets from your codebase for distributed-GPU and minibatch-within-batch training in my own project. However, I found that training with `manual_backward()` + FP16 does not...
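In case it is useful to anyone else, the pattern Lightning expects for manual optimization with native AMP looks roughly like the sketch below; the loss computation is a placeholder:

```python
# Sketch: manual optimization that stays compatible with precision=16.
import torch
import pytorch_lightning as pl

class LitCLIP(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.automatic_optimization = False   # enables manual_backward()

    def configure_optimizers(self):
        return torch.optim.AdamW(self.model.parameters(), lr=1e-4)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        loss = self.model(batch)              # placeholder for your CLIP loss
        # manual_backward() routes through Lightning's AMP grad scaler when the
        # Trainer is created with precision=16; calling loss.backward() directly
        # bypasses the scaling and can produce zero or NaN gradients.
        self.manual_backward(loss)
        opt.step()
```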
How can I use CLIP on a Chinese dataset? Should I replace the txt_encoder pretrained model with a Chinese version?
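If you do swap the text encoder, a hedged sketch of wiring in a Chinese model (the wrapper arguments and import path are assumptions and should match how you instantiate it for training):

```python
# Sketch: use a Chinese text encoder with the custom wrapper.
import torch
from torchvision.models import resnet50
from transformers import AutoTokenizer, AutoModel

from models import CustomCLIPWrapper  # assumed import path

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
txt_encoder = AutoModel.from_pretrained("bert-base-chinese")

img_encoder = resnet50(pretrained=True)
img_encoder.fc = torch.nn.Linear(2048, 768)   # match the text hidden size

model = CustomCLIPWrapper(img_encoder, txt_encoder, minibatch_size=64)
```

The captions in the data pipeline then need to go through the Chinese tokenizer as well.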
Hi, I'm having a little trouble understanding the dataset structure I should follow in order to be able to train with this package. Is it one parent folder, one...
Hi, thank you very much for sharing your code. I have trained the RN50 model with CustomCLIPWrapper and I would like to know how to write the inference code...
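For reference, a minimal sketch of CLIP-style inference with the two encoders; the preprocessing, pooling, and model names are assumptions and must match what was used during training, and the trained weights should be loaded as in the checkpoint sketch above:

```python
# Sketch: CLIP-style zero-shot inference with trained image and text encoders.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from torchvision.models import resnet50
from transformers import AutoTokenizer, AutoModel

img_encoder = resnet50(pretrained=False)
img_encoder.fc = torch.nn.Linear(2048, 768)   # assumed projection size
txt_encoder = AutoModel.from_pretrained("johngiorgi/declutr-sci-base")
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-sci-base")
img_encoder.eval(); txt_encoder.eval()

preprocess = transforms.Compose([
    transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor()])

texts = ["a rendering of a house", "a photo of a dog"]
with torch.no_grad():
    image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
    img_emb = F.normalize(img_encoder(image), dim=-1)
    tok = tokenizer(texts, padding=True, return_tensors="pt")
    # Mean-pooled token embeddings; use the same pooling you trained with.
    txt_emb = F.normalize(txt_encoder(**tok).last_hidden_state.mean(dim=1), dim=-1)
    probs = (img_emb @ txt_emb.t()).softmax(dim=-1)   # image-to-text similarities
```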
Hi, can somebody please help me figure out why this error occurs? Using native 16bit precision. GPU available: True, used: True. TPU available: False, using: 0 TPU cores. IPU...
I think it could be pretty useful to add a webdataset loader to this, so that webdataset-format datasets can be used here. This is relevant as large webdatasets are starting to...
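A rough sketch of what such a loader could look like with the `webdataset` package; the shard pattern, key names, and transforms are placeholders:

```python
# Sketch: an image-text WebDataset loader; shard pattern and keys are placeholders.
import webdataset as wds
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor()])

def identity(text):
    return text  # replace with your tokenizer / caption transform

def make_loader(shards="path/to/shards-{000000..000099}.tar",
                batch_size=256, num_workers=4):
    dataset = (
        wds.WebDataset(shards)
        .shuffle(1000)                  # shuffle within a sample buffer
        .decode("pil")                  # decode images to PIL
        .to_tuple("jpg;png", "txt")     # (image, caption) pairs by extension
        .map_tuple(preprocess, identity)
        .batched(batch_size)
    )
    return wds.WebLoader(dataset, batch_size=None, num_workers=num_workers)
```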