Sagar1094
Hi, I used around 8,000,000 text sentences while fine-tuning the language model, but the newly added vocabulary size is only 50,000. My data has at least around 1,000,000-2,000,000 tokens to...
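One hedged note on the gap above: assuming the tokenizer trainer caps its output at a fixed `vocab_size`, millions of raw tokens in the corpus can still yield only tens of thousands of new vocabulary entries, because only the most frequent tokens survive the cut. A minimal standard-library sketch of that frequency cap (the function `build_vocab` is illustrative, not part of any library):

```python
from collections import Counter

def build_vocab(sentences, vocab_size):
    """Keep only the vocab_size most frequent whitespace tokens,
    mirroring the frequency cap a subword-tokenizer trainer applies."""
    counts = Counter(tok for s in sentences for tok in s.split())
    return [tok for tok, _ in counts.most_common(vocab_size)]

# Toy corpus: six unique tokens, but a cap of 3 keeps only the frequent ones.
corpus = ["the cat sat", "the dog sat", "a cat ran"]
vocab = build_vocab(corpus, vocab_size=3)
```

Under this assumption, raising the trainer's vocabulary-size setting (rather than adding more sentences) is what would grow the number of newly added tokens.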
# ❓ Questions & Help

While training the model we are taking multiple features, which include 'item_id-list_seq' and 'category-list_seq' as categorical features, and 'product_recency_days_log_norm-list_seq' and 'et_dayofweek_sin-list_seq' as continuous variables. As per...
Hi Ethan, first of all, amazing project. I have been trying my hand at it and just wanted to understand: from JSON data of 9.4 million rows, after running the...
Hi @ZeroRin, I want to train the model on a custom dataset. Could you please help me with the dataset format for the 2nd and 3rd scripts? For the...
Hi, I am using Google Colab for training on my custom dataset. I am running into a memory error. I have 1800 images, of which 1300 are training images, 300 are validation, and...
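A hedged aside on the memory error above: assuming it comes from loading all 1800 images into RAM at once, streaming the file paths in small batches (so only one batch of images is decoded at a time) usually avoids it. A minimal standard-library sketch; the helper name `batched` and the file names are illustrative:

```python
def batched(paths, batch_size):
    """Yield fixed-size batches of paths so only one batch of images
    needs to be decoded and held in memory at a time."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]

# Hypothetical file names standing in for the 1800-image dataset.
image_paths = [f"img_{n}.jpg" for n in range(1800)]
batches = list(batched(image_paths, batch_size=32))
# Inside a training loop, each batch would be loaded, used, and discarded.
```

Framework data loaders (e.g. a PyTorch `DataLoader` over a dataset that opens each image in `__getitem__`) apply the same idea; reducing the batch size is the usual first knob to turn.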