TeoR95
@vors Did you succeed in solving the issue? If yes, how did you do it?
@pseudotensor Thank you for your reply. Awesome, glad to hear that someone succeeded in doing RAG with such a huge amount of data. Can you explain a bit in detail...
@pseudotensor Okay, thank you for your reply :) Can someone else kindly give me a detailed answer to my questions above (if there is an answer), please? I do not...
@kategeorge007 Unfortunately, sometimes the articles are not present in Scihub; somehow it finds the title, but it can't download the paper. Finally, I found out that if you use Google...
Hello @emrgnt-cmplxty! First of all, thank you so much for your fast reply. I really appreciate it! Secondly, yes, sorry, I should have given more details. I am currently working on an...
@emrgnt-cmplxty Thank you so much for your kind reply! I am using a Tesla V100-SXM2-32GB GPU. Yes, OpenAI could be the easiest and the most efficient solution, but I wanted...
Hello everyone, also top.clusters
@dosu-bot Unfortunately, that was not the right answer. Anyway, I uninstalled llama-index and pip installed it again. This time, it gives me back this error: AttributeError Traceback (most...
@dosu-bot This could be due to the fact that I am using "documents" as a list. In fact, I tried to parse only one PDF and it worked (even if...
@dosu-bot Okay, I changed the script like this, in order to have a quantized version of the model:
from llama_index.legacy.llms.huggingface import HuggingFaceInferenceAPI
model_id = "anakin87/zephyr-7b-alpha-sharded"  # model repo id
######...