Akshat Shrivastava
Hello @juntao @hydai, I am Akshat Shrivastava, a sophomore at IIT BHU (Varanasi). Over the past few days I have understood the workflow of fine-tuning and running LLMs locally, and I was wondering whether...
 So far I have made a small dataset and successfully converted it to the required prompt template, for example: ['You are a reviewer of Rust source...
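The conversion step above can be sketched as follows. This is a minimal illustration assuming the Llama 3 Instruct chat template; the record field names and the system prompt wording are hypothetical placeholders, not the actual dataset schema.

```python
# Sketch: wrap one dataset record in Llama 3 Instruct special tokens.
# The "system"/"user" keys and the example content are assumptions.

def to_llama3_prompt(system: str, user: str) -> str:
    """Build a single training prompt in the Llama 3 Instruct format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

record = {
    "system": "You are a reviewer of Rust source code.",
    "user": 'fn main() { println!("hello"); }',
}
prompt = to_llama3_prompt(record["system"], record["user"])
print(prompt)
```

Each record would then carry the assistant's target answer after the final header, so the model learns to complete the review.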
 I successfully fine-tuned the llama3-8b-Instruct model with my small dataset; the attached image shows the training loss decreasing steadily, which is a good sign,...
 These are direct comparisons between the vanilla llama3-8b-Instruct and my fine-tuned model for code explanation. My model appears to be much more direct and on point, which...
 I have created my dataset for task 2 and did some experimentation for objective 2, and found that, as asked in the objective, if I use a 262k context...
 Got LlamaEdge up and running; inference via the chat endpoint for my fine-tuned model is working fine.
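Querying the fine-tuned model through LlamaEdge's OpenAI-compatible chat endpoint could look like the sketch below. The port (8080) and the model name are assumptions and may differ from the actual server setup.

```python
# Sketch: build a chat-completions request for a local LlamaEdge API server.
# Model name and endpoint URL below are assumed, not confirmed.
import json
import urllib.request

def build_chat_request(model: str, user_msg: str) -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a reviewer of Rust source code."},
            {"role": "user", "content": user_msg},
        ],
    }

payload = build_chat_request("llama3-8b-finetuned", "Explain this Rust snippet: fn main() {}")

# Uncomment to send once the server is running locally:
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())

print(json.dumps(payload, indent=2))
```

Because the endpoint mirrors the OpenAI API shape, the same payload should work with any OpenAI-compatible client as well.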
 These are my results so far for the fine-tuned models for objective 1 and objective 2; both models are running on the LlamaEdge API server, for...
@juntao Thanks for selecting me as a mentee for this issue. How would you like me to get started, and how should I structure my tasks? Should...
I have created an [issue](https://github.com/WasmEdge/WasmEdge/issues/3495) and drafted a rough timeline for myself, which I will refine based on mentor reviews.
@juntao @hydai This is the rough timeline I have created for myself; please share your input on it and review my current dataset.