Enhance Serving Evaluation endpoints
Description
This PR involves two changes to the serving functions:
- Add `metric_user_results` to evaluation results as `user_result`
- Add an `/evaluation-json` endpoint that accepts evaluation data as JSON
- Remove `query` from the response
Sample request for `/evaluation-json`:

```json
{
  "metrics": ["RMSE()", "NDCG(k=10)"],
  "data": [
    ["123", "1539", 1],
    ["123", "2", 1],
    ["124", "1", 1]
  ]
}
```
Response:

```json
{
  "result": {
    "NDCG@10": 0.3175294778309396,
    "RMSE": 2.781925109617526
  },
  "user_result": {
    "NDCG@10": {
      "62": 0.20438239758848611,
      "63": 0.43067655807339306
    },
    "RMSE": {
      "62": 2.244862849697699,
      "63": 3.3189873695373535
    }
  }
}
```
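The new endpoint can be exercised with a plain HTTP POST. A minimal sketch, assuming the serving app is running locally on port 8080 (the host and port here are illustrative, not part of this PR):

```python
import json
import urllib.request

# Same payload as the sample request above.
payload = {
    "metrics": ["RMSE()", "NDCG(k=10)"],
    "data": [
        ["123", "1539", 1],
        ["123", "2", 1],
        ["124", "1", 1],
    ],
}

req = urllib.request.Request(
    "http://localhost:8080/evaluation-json",  # hypothetical local deployment
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once a server is actually running:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)  # {"result": {...}, "user_result": {...}}
```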
Related Issues
Checklist:
- [ ] I have added tests.
- [ ] I have updated the documentation accordingly.
- [ ] I have updated `README.md` (if you are adding a new model).
- [ ] I have updated `examples/README.md` (if you are adding a new example).
- [ ] I have updated `datasets/README.md` (if you are adding a new dataset).
I think we can remove `query` from the response to save bandwidth, since `data` can be quite large.
Also, the returned `user_result` contains mapped user indices. Shall we map them back to the original user IDs, using the mapping in `train_set`, so they are consistent with the data in the request?
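The remapping suggested above amounts to inverting the user-ID map and rewriting the keys of each per-user metric dict. A minimal sketch, assuming `train_set.uid_map` is a dict from raw user ID to internal index (the values below are illustrative, not taken from the actual model):

```python
# Hypothetical uid_map as kept on train_set: raw user ID -> mapped index.
uid_map = {"123": 62, "124": 63}

# Invert it to go from internal index back to the raw ID.
index_to_raw = {idx: raw for raw, idx in uid_map.items()}

# user_result as currently returned, keyed by mapped indices.
user_result = {
    "NDCG@10": {62: 0.204, 63: 0.431},
    "RMSE": {62: 2.245, 63: 3.319},
}

# Rewrite the inner keys so the response uses the same IDs as the request.
remapped = {
    metric: {index_to_raw[idx]: score for idx, score in per_user.items()}
    for metric, per_user in user_result.items()
}
```

After remapping, `remapped["NDCG@10"]` is keyed by `"123"` and `"124"`, matching the IDs sent in `data`.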
Now using mapped user indices; the new response has been updated in the main post.