almugabo
This tool has the potential to encourage sharing more data on QA from various domains. One idea I think is worth looking into is to combine it with semi-automatically...
Great package! It would be nice to add support for proxies. The requests.get method accepts proxies (in the form of a dictionary): dict_proxies = {'https': 'https://username:password@HOST:PORT', 'http': 'http://username:password@HOST:PORT', }...
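A minimal sketch of what such support could look like, assuming the package fetches pages with the `requests` library (the `fetch` helper, URL scheme, and credentials below are placeholders, not the package's actual API):

```python
import requests

# Placeholder credentials and host; replace with real values.
dict_proxies = {
    "https": "https://username:password@HOST:PORT",
    "http": "http://username:password@HOST:PORT",
}

def fetch(url, proxies=None):
    # requests.get accepts the mapping via its `proxies` keyword argument,
    # so an optional parameter could simply be passed through.
    return requests.get(url, proxies=proxies)
```

Defaulting to `proxies=None` would keep the current behaviour, so the change stays backwards compatible.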
While processing references of some reports (using **processFulltextDocument**), I noticed that Grobid seems to skip some pages. For example, when the following file is processed, the references extracted...
I propose to add the possibility to fine-tune only parts of the models (parameter-efficient fine-tuning). Rationale: the increasing size of language models has also intensified research on how...
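As a toy illustration of the rationale (plain Python, made-up parameter counts): with adapter-style methods such as LoRA, only a small set of added weights is updated while the base model stays frozen, so the trainable fraction is tiny.

```python
# Toy parameter counts (hypothetical): a frozen base model plus small
# trainable adapters, as in LoRA-style parameter-efficient fine-tuning.
base_params = {f"layer{i}.weight": 100_000 for i in range(24)}  # frozen
adapter_params = {f"layer{i}.lora": 512 for i in range(24)}     # trainable

trainable = sum(adapter_params.values())
total = sum(base_params.values()) + trainable
fraction = trainable / total
print(f"trainable fraction: {fraction:.3%}")  # well under 1% of all weights
```

With these numbers only about 0.5% of the parameters receive gradient updates, which is what makes fine-tuning large models feasible on modest hardware.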
Hi, I am new to happy-transformer and impressed by how much easier it makes things. I have a - perhaps naive - question. How would one go about training...
I am not able to get sciwing to run properly on Google Colab. I suspect that it may be due to unclear dependencies during installation. Here is what I did: !...
This is more a question than an issue. Can one extend the context size of the model? I am asking because I would like to test fine-tuning it to...
This is amazing work you have done! Congratulations. (I came here after seeing the paper.) I noticed that you are using two different frameworks (Llama factory for continuous pre-training...
Could you please add fine-tuning support for gemma-2? It has good multilingual capabilities and is a good candidate for fine-tuning for languages other than English. Its different sizes...
I am trying to fully fine-tune Llama3.2-1b to "teach" it another language (via continuous pretraining). The idea is to have a model which, given a prompt in a language...