NickyDark1
model_id = "h2oai/h2o-danube-1.8b-chat"
same title
Is it possible to train on multiple GPUs?
Is it possible to add new tokens and special tokens to be trained? What would the code look like?
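A minimal sketch of one common way to do this with the Hugging Face `transformers` API (the model id is taken from the snippet above; the token strings `<custom_tok>` and `<sep_x>` are hypothetical placeholders, not tokens from the original question):

```python
# Sketch: add new tokens and special tokens to a Hugging Face tokenizer,
# then resize the model's embedding matrix so the new rows can be trained.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "h2oai/h2o-danube-1.8b-chat"  # model id from the snippet above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Regular new tokens (treated like ordinary vocabulary entries)
tokenizer.add_tokens(["<custom_tok>"])  # hypothetical token

# Special tokens (never split by the tokenizer)
tokenizer.add_special_tokens({"additional_special_tokens": ["<sep_x>"]})  # hypothetical

# Grow the embedding matrix to cover the enlarged vocabulary;
# the new embedding rows are randomly initialized and trained as usual.
model.resize_token_embeddings(len(tokenizer))
```

After this, fine-tuning proceeds normally; the newly added rows of the embedding (and output) matrices learn representations for the new tokens during training.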
### Feature request
How to fine-tune LoRA with HQQ?
### Motivation
How to fine-tune LoRA with HQQ?
### Your contribution
How to fine-tune LoRA with HQQ?
model: gemma-2b-it, function responses. My idea: basically you could have a mini agent that makes the calls to the functions, and within them you could have a more qualified LLM...
Training error: the data does not download, etc. https://github.com/yl4579/StyleTTS2/blob/main/Colab/StyleTTS2_Finetune_Demo.ipynb
First of all, thank you very much for the excellent work; I thought it was great and I congratulate you. I have some problems with the code and I was...
Hello, I have built a MicroPython port of the Waveshare ESP32-S3-Touch-LCD-1.46 drivers. I really don't know which drivers other than these, originally written for Arduino, have been adapted to...