carloposo

Results: 6 comments by carloposo

> I did install `llama cpp` by the readme docs.
>
> i have cuda GPU so i installed the cublas version.
>
> ```
> # Example: cuBLAS
> ...
> ```
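For context, a cuBLAS-enabled build of `llama-cpp-python` is usually produced by passing CMake flags at install time. A minimal sketch, assuming the older `LLAMA_CUBLAS` flag used in instructions from that era (newer llama.cpp releases expose a different CUDA option):

```shell
# Build llama-cpp-python against cuBLAS (requires the CUDA toolkit).
# The flag name comes from older llama-cpp-python docs; newer releases may
# expect a different CMake option, so treat this as an illustrative example.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```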

> @carloposo @KerenK-EXRM
>
> my understanding is that the instruct model (8b) has an extra set of tokens or a different prompt template.
>
> try 7b models?

No 7B...

@toomy0toons, found the answer here: https://youtu.be/S6PdFPoteBU?si=pSsxCNFJsz_dxn8b&t=551

Would this fix https://github.com/PromtEngineer/localGPT/issues/786 ?

Thank you @gy850222 @jexp, will try this week, keep you posted!

> Hi @carloposo we updated the readme on how to use ollama in our application

Awesome @kartikpersistent, thanks!