gamercoder153
 I am facing this error with your Colab notebook during inference: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
@mxtsai which model?
@danielhanchen in colab after finetuning
@KillerShoaib Man, thanks a lot for fixing that, I really appreciate it. Can you explain where I should add it? I am using a Google Colab T4 GPU: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
Great! Thanks a lot, guys @KillerShoaib @danielhanchen
@danielhanchen @KillerShoaib I checked it once again; it's literally the same
@KillerShoaib I am using this Colab notebook from their GitHub: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing and the llama-3-8b-Instruct model: https://huggingface.co/unsloth/llama-3-8b-Instruct
@KillerShoaib No, I am not using any older finetuned model. Let me try it once again
@KillerShoaib It's the same stuff, dude!! It generates for a bit, then goes into a loop like this until the 128 max new tokens are used up, and if text streaming is true it does the...
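One common cause of this symptom (an assumption on my part, not confirmed in this thread) is that the finetuned model never emits the EOS token the generation loop is watching for, e.g. because of a chat-template or `eos_token_id` mismatch, so decoding only stops when the `max_new_tokens` budget runs out. A toy sketch of that behavior (plain Python, not Unsloth/transformers code; `next_token_fn`, `looping_model`, and `fixed_model` are hypothetical names for illustration):

```python
def generate(next_token_fn, eos_token_id, max_new_tokens=128):
    """Greedy decode loop: stop on EOS or when max_new_tokens is reached."""
    tokens = []
    for _ in range(max_new_tokens):
        tok = next_token_fn(tokens)
        tokens.append(tok)
        if tok == eos_token_id:
            break
    return tokens

# A "model" that cycles over the same few tokens and never produces EOS (id 2):
looping_model = lambda ctx: [10, 11, 12][len(ctx) % 3]
print(len(generate(looping_model, eos_token_id=2)))   # -> 128: runs the full budget

# The same model, but it emits the correct EOS id, so decoding stops early:
fixed_model = lambda ctx: 2 if len(ctx) == 5 else [10, 11, 12][len(ctx) % 3]
print(len(generate(fixed_model, eos_token_id=2)))     # -> 6
```

If this is the cause, checking that the tokenizer's EOS token matches what the notebook passes to generation would be the first thing to verify.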
@hiyouga there is no other process