
AttributeError when I try to use ContextRelevance, LLMContextPrecisionWithoutReference to evaluate the test dataset

Open Haoyuxiaohan opened this issue 10 months ago • 3 comments

Describe the bug I got an AttributeError: 'property' object has no attribute 'get' when I try to use the evaluate() function to evaluate a dataset with the ContextRelevance and LLMContextPrecisionWithoutReference metrics. It works when I only use faithfulness, and it works when I evaluate a SingleTurnSample.

Ragas version: newest version
Python version: newest version

Code to Reproduce

result = evaluate(eval_dataset, metrics=[faithfulness, AnswerRelevancy], llm=evaluator_llm)

Error trace

File , line 27
     22 eval_dataset = EvaluationDataset.from_pandas(df_bge_pd)
     23 print(eval_dataset)
---> 27 result = evaluate(eval_dataset, metrics=[faithfulness, AnswerRelevancy], llm = evaluator_llm)

File /local_disk0/.ephemeral_nfs/envs/pythonEnv-f55819bb-501c-45c7-93b3-dc3c0125c4ca/lib/python3.11/site-packages/ragas/validation.py:60, in validate_required_columns(ds, metrics)
     58 metric_type = get_supported_metric_type(ds)
     59 for m in metrics:
---> 60 required_columns = set(m.required_columns.get(metric_type, []))
     61 available_columns = set(ds.features())
     62 if not required_columns.issubset(available_columns):
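The trace points at the likely root cause. required_columns is defined as a @property on the metric classes, so when a metric class (rather than an instance) ends up in the metrics list, m.required_columns resolves to the raw property descriptor instead of the dict it would return on an instance, and the descriptor has no .get method. A minimal illustration with a dummy class (DummyMetric is a stand-in, not the actual ragas class):

```python
# A @property accessed on the class itself returns the property
# descriptor, not the computed value.
class DummyMetric:
    @property
    def required_columns(self):
        return {"single_turn": ["user_input", "response"]}

instance = DummyMetric()
print(type(instance.required_columns))     # <class 'dict'> -> .get works
print(type(DummyMetric.required_columns))  # <class 'property'> -> no .get

try:
    DummyMetric.required_columns.get("single_turn", [])
except AttributeError as e:
    print(e)  # 'property' object has no attribute 'get'
```

This matches the two observations in the report: passing an instance such as faithfulness works, while passing a bare class fails during column validation.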

Haoyuxiaohan avatar Apr 02 '25 14:04 Haoyuxiaohan

I created the evaluation dataset with eval_dataset = EvaluationDataset.from_pandas(df), because the data source is a pandas dataframe.

Haoyuxiaohan avatar Apr 02 '25 14:04 Haoyuxiaohan

I ran into the same problem with BleuScore, but it was because I wasn't passing it properly.

results = evaluate(eval_dataset, metrics=[BleuScore])
> 'property' object has no attribute 'get'
results = evaluate(eval_dataset, metrics=[BleuScore()])
> {'bleu_score': ...}

Perhaps a more detailed error message would help here! Or, even better, if the user passes a class instead of an object, perhaps initialise it for them?
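A sketch of what that auto-initialisation could look like (normalize_metrics is a hypothetical helper, not part of ragas; a real version would also have to handle metrics whose constructors require arguments such as an llm):

```python
import inspect

def normalize_metrics(metrics):
    # Instantiate any metric that was passed as a class
    # instead of an instance; pass instances through unchanged.
    return [m() if inspect.isclass(m) else m for m in metrics]

class BleuScore:  # stand-in for the ragas metric class
    pass

normalized = normalize_metrics([BleuScore, BleuScore()])
print([type(m).__name__ for m in normalized])  # ['BleuScore', 'BleuScore']
```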

ziggycross avatar Apr 08 '25 16:04 ziggycross

Hi @Haoyuxiaohan,

Were you able to resolve it? Thanks @ziggycross for helping us out. Yes, metrics need to be passed as instances (objects), not classes, in the metrics list when evaluating.

Take the example below as a reference.

import pandas as pd
from ragas import evaluate
from ragas.dataset_schema import EvaluationDataset
from ragas.metrics import ContextRelevance

sample_dict = {'user_input': {0: 'When was Einstein born?'},
 'response': {0: 'Albert Einstein was born in 1879.'},
 'reference': {0: 'Albert Einstein was born in 1879.'},
 'retrieved_contexts': {0: []}}

df = pd.DataFrame(data=sample_dict)

dataset = EvaluationDataset.from_pandas(dataframe=df)

# Note: the metric is passed as an instance, not a class.
result = evaluate(dataset, metrics=[ContextRelevance(llm=evaluator_llm)])

sahusiddharth avatar Apr 13 '25 10:04 sahusiddharth