Akshat Shrivastava
@juntao @hydai Can you please take a look at my datasets and suggest what improvements should be made and how many fields are required?
The current datasets are created from [Rust by Example](https://doc.rust-lang.org/rust-by-example/), and I have created a [Colab notebook](https://colab.research.google.com/drive/1wlCOvklww1YvACuIRrhkdFFH_vU7Hgbn?usp=sharing) where I use the [Llama-3-8b-Instruct 4-bit quantized version](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit) as the base model for fine-tuning...
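For reference, this is roughly what the unsloth setup in the notebook looks like; the hyperparameters below (sequence length, LoRA rank, target modules) are illustrative defaults rather than the exact values in my notebook:

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model (values here are illustrative defaults)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,
    dtype=None,          # auto-detects bf16/fp16 depending on the GPU
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
)
```

Training itself then goes through trl's `SFTTrainer` on the formatted dataset inside the same notebook.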
I created a few example questions myself and then used ChatGPT with a prompt along the lines of ''' Generate pairs of code explanation / question-answer from the text above in the form of '''...
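To make the question about fields concrete, here is a minimal sketch of how one record could look if we settle on a common instruction/input/output schema; the field names are illustrative and are exactly what I'd like you to confirm or change:

```python
import json

# Illustrative schema only: the actual field names and how many are needed
# is the question above.
records = [
    {
        "instruction": "Explain what the following Rust code does.",
        "input": 'fn main() { println!("Hello, world!"); }',
        "output": 'It defines the program entry point and prints "Hello, world!" to stdout.',
    },
    {
        "instruction": "What does the `mut` keyword do in Rust?",
        "input": "",
        "output": "It marks a binding as mutable so its value can be changed after initialization.",
    },
]

with open("rust_by_example_qa.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```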
I have updated my milestones accordingly; do let me know if anything else should be added or changed. Also, could I get more info on model 2 and how are...
Yes, the code explanations in dataset 1 are based on the same book, and I generated them in a similar manner with the prompt stated above. So, according to [flows learn...
Sure, I will look into it and test both methods for all use cases. We can also try RAG on the fine-tuned model, as you suggested...
Hey, is there a way I can get access to Google Colab Pro? While fine-tuning, my credits run out really fast and then I have to wait for 24...
# Update
- Understood the basic functionality of GaiaNet
- Implemented the RAG functionality of GaiaNet
- Tested with different variations of snapshots on Llama-3-8b-Instruct
  1. Converted current dataset to...
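For context, a rough sketch of how I turn the dataset chunks into a Qdrant snapshot that the GaiaNet node can load as its knowledge base; the collection name, vector size, and the embedding endpoint below are assumptions and depend on the embedding model configured on the node:

```python
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

NODE_API = "http://localhost:8080/v1"   # assumed OpenAI-compatible node endpoint
EMBED_MODEL = "nomic-embed-text-v1.5"   # placeholder: use the node's embedding model
VECTOR_SIZE = 768                       # must match that embedding model's dimension

def embed(texts):
    # Assumes the node exposes an OpenAI-compatible /embeddings route
    resp = requests.post(f"{NODE_API}/embeddings",
                         json={"model": EMBED_MODEL, "input": texts})
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

chunks = [
    "Rust's `match` expression compares a value against a series of patterns.",
    "Each value in Rust has a single owner; when the owner goes out of scope, the value is dropped.",
]

client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="default",
    vectors_config=VectorParams(size=VECTOR_SIZE, distance=Distance.COSINE),
)
client.upsert(
    collection_name="default",
    points=[PointStruct(id=i, vector=vec, payload={"source": text})
            for i, (text, vec) in enumerate(zip(chunks, embed(chunks)))],
)

# Snapshot the collection; the node's config can then point at this snapshot for RAG.
client.create_snapshot(collection_name="default")
```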
# Update
I have started working on local LLM integration with the GitHub PR bot and was facing some issues after setting all the parameters right, which included llm_model_name, llm_ctx_size,...
and it seems to work only in the terminal, as in the GaiaNet UI the answers don't show up. If I use the requests package in a code editor and send a curl request...
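The check I'm running from code looks roughly like the following; the host/port and model name are placeholders that have to match whatever the node is configured with:

```python
import requests

# Assumed default address of the local GaiaNet node's OpenAI-compatible API;
# adjust host/port and model name to match the node's config.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "Llama-3-8b-Instruct",
    "messages": [
        {"role": "system", "content": "You are a Rust programming assistant."},
        {"role": "user", "content": "Explain how pattern matching works in Rust."},
    ],
}

resp = requests.post(URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

This is the same request the terminal curl command makes, which is why I suspect the problem is on the UI side rather than in the node itself.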