Mustangs007
### Help: where do I have to put this code?
```python
mx.load(str(to_load_path))
```

Where do I have to put the above code relative to the snippet below?

```python
input_text = [
    # 'What is the capital of United States?',
    'I like',
]
MAX_LENGTH = 128
input_tokens = model.tokenizer(input_text, return_tensors="np", return_attention_mask=False, ...
```
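For reference, a minimal sketch of one ordering that works, assuming the model was converted with `mlx_lm`: the weight loading happens first (inside `load()`, which performs the `mx.load()` step internally), and the tokenize-then-generate code comes after it. The path and prompt here are placeholders:

```python
from mlx_lm import load, generate

# Step 1: load config, weights, and tokenizer from the converted folder.
# load() calls mx.load() on the weight files internally, so this replaces
# the bare mx.load(str(to_load_path)) call.
model, tokenizer = load("phir3_model_f")  # placeholder path

# Step 2: tokenization and generation go after the weights are loaded.
# generate() tokenizes the prompt itself, so the manual tokenizer call
# from the question is not needed with this helper.
print(generate(model, tokenizer, prompt="I like", max_tokens=128))
```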
Can you give a sample example of `expected_output` to put here? Thanks.
Thanks for replying, but I am attaching the LLM to the agent only. Can you please elaborate more? Hoping for a faster reply, thanks.
The reason I put the config is that it is required; otherwise, by default it will use GPT. Can you please provide a sample working example? Thanks.
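For reference, a minimal sketch of one way to wire a local Ollama model into a LangChain agent so it does not fall back to the default GPT backend. This assumes LangChain is the agent framework in question, and the tool list is a placeholder:

```python
from langchain.llms import Ollama
from langchain.agents import initialize_agent, load_tools, AgentType

# Local model instead of the default OpenAI GPT backend
ollama_llm = Ollama(model="llama2", temperature=0)

# "llm-math" is just a placeholder tool for this sketch
tools = load_tools(["llm-math"], llm=ollama_llm)

# Passing llm= explicitly is what keeps the agent from defaulting to GPT
agent = initialize_agent(
    tools,
    llm=ollama_llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What is 7 * 6?")
```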
> ```python
> from langchain.llms import Ollama
> ollama_llm = Ollama(model="llama2", temperature=0)
> ```

Is the above way of using the Ollama model the same as what you mentioned, or different?
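In case the difference being asked about is the completion-style wrapper versus the chat-style wrapper, here is a sketch of both (this is an assumption about what was meant; both require a local Ollama server with `llama2` pulled, and `.invoke` assumes a LangChain version that supports it):

```python
from langchain.llms import Ollama
from langchain.chat_models import ChatOllama
from langchain.schema import HumanMessage

# Completion-style wrapper: takes a string, returns a string
ollama_llm = Ollama(model="llama2", temperature=0)
print(ollama_llm.invoke("Say hello in one sentence."))

# Chat-style wrapper: takes a list of messages, returns a message
ollama_chat = ChatOllama(model="llama2", temperature=0)
print(ollama_chat.invoke([HumanMessage(content="Say hello in one sentence.")]).content)
```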
I also need help with how to use CSV RAG with a local Llama model; it is not working.
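For reference, a minimal sketch of one way CSV RAG could be wired up with a local Llama model via Ollama, using the same legacy LangChain imports as above (`data.csv` is a placeholder file):

```python
from langchain.llms import Ollama
from langchain.embeddings import OllamaEmbeddings
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Load each CSV row as a document ("data.csv" is a placeholder path)
docs = CSVLoader(file_path="data.csv").load()

# Embed the rows locally and index them for retrieval
vectorstore = FAISS.from_documents(docs, OllamaEmbeddings(model="llama2"))

# Retrieval-augmented QA over the CSV with the local model
qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama2", temperature=0),
    retriever=vectorstore.as_retriever(),
)
print(qa.run("Summarize what the CSV contains."))
```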
I don't know why the text is spilling across lines. Any help please, thanks.
```python
convert(
    hf_path="microsoft/Phi-3-mini-4k-instruct",
    mlx_path="phir3_model_f",
    q_bits=8,
    q_group_size=32,
    quantize=True,
)
phi3_path = "/Train_custom_LLM/LLMX/phir3_model_f/"
```

Thanks for the quick reply and for editing my question.
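For reference, a sketch of loading that converted folder and generating with it, assuming the `mlx_lm` helpers. Since Phi-3-mini-4k-instruct is an instruct model, the prompt goes through its chat template (the question text is a placeholder):

```python
from mlx_lm import load, generate

# Load the quantized model produced by the convert() call above
model, tokenizer = load("phir3_model_f")

# Wrap the prompt in Phi-3's chat template before generating
messages = [{"role": "user", "content": "What is the capital of the United States?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```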
Thanks for the reply. Yes, I have seen the doc above and am following the same code, but I am still getting the error above.