sadaf0714
I have checked and all of this is fine. Can you please suggest why it is not extracting the Context Relevance and Groundedness metrics, while Answer Relevance is working fine?
Any updates on this?
Sure, here's the code:

```python
tru = Tru()
tru.reset_database()

os.environ["HUGGINGFACE_API_KEY"] = hf_token
provider = LiteLLM(model_engine="huggingface/mistralai/Mistral-7B-Instruct-v0.2")

f_qa_relevance = Feedback(
    provider.relevance_with_cot_reasons, name="Answer Relevance"
).on_input_output()

f_context_relevance = (
    Feedback(
        provider.qs_relevance_with_cot_reasons,
        name="Context Relevance",
    )
    .on(Select.RecordCalls.retrieve.args.query)...
```
Please find the full notebook attached: https://github.com/sadaf0714/trulens/blob/main/TruLens_langchain.ipynb
https://github.com/sadaf0714/trulens @joshreini1 please try with this
This is the error snapshot (screenshot attached).
This is how I created the feedback metrics:

```python
f_qa_relevance = Feedback(
    provider.relevance_with_cot_reasons, name="Answer Relevance"
).on_input_output()

query = Select.Record.app.retriever._get_relevant_documents.args.query
context = Select.Record.app.retriever.get_relevant_documents.rets[:].page_content

f_context_relevance = (
    Feedback(
        provider.qs_relevance_with_cot_reasons,
        name="Context Relevance",
    )
    .on(query)
    .on(context)...
```
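For anyone puzzling over what a selector path like `rets[:].page_content` resolves to, here is a minimal, dependency-free sketch. The record layout and the `select` helper below are illustrative assumptions to show the idea of walking a recorded call trace, not the actual trulens_eval record schema or API:

```python
# Hypothetical stand-in for a recorded retriever call; NOT the real
# trulens_eval record structure.
record = {
    "app": {
        "retriever": {
            "get_relevant_documents": {
                "args": {"query": "What is TruLens?"},
                "rets": [
                    {"page_content": "TruLens evaluates LLM apps."},
                    {"page_content": "It provides feedback functions."},
                ],
            }
        }
    }
}

def select(record, *path):
    """Walk a record along a selector-like path; '*' maps over a list,
    mimicking the [:] in a selector such as rets[:].page_content."""
    nodes = [record]
    for step in path:
        if step == "*":
            nodes = [item for node in nodes for item in node]
        else:
            nodes = [node[step] for node in nodes]
    return nodes

# Analogue of ...get_relevant_documents.args.query -> a single value.
query = select(record, "app", "retriever", "get_relevant_documents",
               "args", "query")[0]

# Analogue of ...get_relevant_documents.rets[:].page_content -> one
# value per retrieved chunk.
contexts = select(record, "app", "retriever", "get_relevant_documents",
                  "rets", "*", "page_content")

print(query)     # -> What is TruLens?
print(contexts)  # -> list with two page_content strings
```

The point of the sketch: the query selector yields a single argument, while the context selector fans out over every retrieved document, which is why Context Relevance runs once per retrieved chunk.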
> @sadaf0714 could you please tell me how can I fix it.
>
> also plz check

Import it like this: `from trulens_eval import Feedback, LiteLLM, TruLlama, Select,`...
Hi, for now can you please tell me which retrieval queries should be used in place of hc_retrievalquery and vc_retrievalquery so that it generates proper results. The...