saivineetha
In 05_dataloader.ipynb in the examples folder (https://github.com/NVIDIA/GenerativeAIExamples/blob/v0.4.0/notebooks/05_dataloader.ipynb), is there a way to use the Llama 2 7B model instead of the default Llama 2 13B model?
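In case it helps frame the question: in examples like this the model is usually picked by a single checkpoint identifier, so the swap should mostly be a matter of pointing at the 7B weights. A minimal, hypothetical sketch (this is not the notebook's actual cell; the model id and loading call here are my assumptions, not the notebook's API):

```python
# Hypothetical illustration: wherever the example is configured with the 13B
# checkpoint, the same call can point at the 7B checkpoint instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # swapped in place of the 13B id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # load in the checkpoint's native precision
    device_map="auto",       # place layers on available GPUs automatically
)
```

If the notebook serves the model through a separate inference backend rather than loading it in-process, the matching 7B engine/checkpoint would also need to be built and deployed there; changing the identifier string alone would not be enough.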
I'm working on generating deepfakes for the Bangla language. During inference, I provide the source audio of one speaker. When using this to generate deepfakes of the remaining speakers, the output is...
I have fine-tuned the Llama 2 7B HF model using PEFT, merged the PEFT adapter with the base model, and tried converting it to GGUF using llama.cpp. It is giving the following error: ``` python...
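For reference, the merge step looks roughly like the sketch below (paths and model ids are placeholders, not my exact ones):

```python
# Hedged sketch of the merge step: fold the LoRA adapter into the base
# weights and save a plain Hugging Face checkpoint that llama.cpp can convert.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/peft-adapter")
merged = model.merge_and_unload()          # merges the adapter deltas into the base weights
merged.save_pretrained("llama2-7b-merged", safe_serialization=True)

# The tokenizer files need to sit next to the merged weights; without them the
# llama.cpp converter cannot build the GGUF vocabulary.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.save_pretrained("llama2-7b-merged")
```

The conversion step is then the llama.cpp convert script pointed at the merged directory (named convert.py in older checkouts, convert_hf_to_gguf.py in newer ones, depending on the llama.cpp version).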