Support multilabel confusion matrix using sklearn's multilabel_confusion_matrix.
sample usage:

import numpy as np
import evaluate

confusion_metric = evaluate.load("confusion_matrix", config_name="multilabel")
y_true = np.array([[0, 0, 0, 0, 1], [1, 0, 1, 0, 0], [0, 0, 1, 0, 1], [1, 0, 0, 0, 0]])
y_pred = np.array([[0, 1, 0, 0, 1], [0, 0, 1, 0, 0], [0, 0, 1, 1, 1], [1, 0, 1, 0, 1]])
confusion_metric.compute(references=y_true, predictions=y_pred)
output:

{'confusion_matrix': array([[[2, 0],
        [1, 1]],

       [[3, 1],
        [0, 0]],

       [[1, 1],
        [0, 2]],

       [[3, 1],
        [0, 0]],

       [[1, 1],
        [0, 2]]])}
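For reference, the same array comes straight from sklearn, so the metric is presumably a thin wrapper around it (a sketch, assuming only the direct sklearn call):

from sklearn.metrics import multilabel_confusion_matrix

# One 2x2 matrix per label (column), each laid out as [[TN, FP], [FN, TP]]:
# e.g. label 0 above has 2 true negatives, 0 false positives, 1 false negative, 1 true positive.
multilabel_confusion_matrix(y_true, y_pred)  # shape (n_labels, 2, 2), identical to the output above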
@lvwerra
There was a problem with the file formatting; I reformatted it and committed the new code.
There's an error in the tests that I can't understand; it seems to be unrelated to the code. Sorry for the inconvenience.
Overall LGTM, but I wonder if it should be a separate metric to keep a 1:1 mapping to sklearn.
I'm not 100% sure what the optimal way is, but I think the main idea behind evaluate is to be an abstraction over implementations like sklearn's.
Anyway, if you think it's better to put it in a separate module, I can do that.
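In case we go that route, here's a minimal sketch of what a standalone module could look like (hypothetical file and class names; I'm assuming the usual evaluate.Metric pattern with _info/_compute):

# multilabel_confusion_matrix.py -- a sketch, not the code in this PR
import datasets
import evaluate
from sklearn.metrics import multilabel_confusion_matrix


class MultilabelConfusionMatrix(evaluate.Metric):
    def _info(self):
        return evaluate.MetricInfo(
            description="Multilabel confusion matrix via sklearn.",
            citation="",
            inputs_description="Binary indicator arrays of shape (n_samples, n_labels).",
            features=datasets.Features(
                {
                    "predictions": datasets.Sequence(datasets.Value("int32")),
                    "references": datasets.Sequence(datasets.Value("int32")),
                }
            ),
        )

    def _compute(self, predictions, references, sample_weight=None, labels=None):
        # Delegate directly to sklearn to keep the 1:1 mapping.
        return {
            "confusion_matrix": multilabel_confusion_matrix(
                references, predictions, sample_weight=sample_weight, labels=labels
            )
        }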
thank you ☺️
@osanseviero will you accept the pull request? I've made the suggested edits.
Thanks for rerunning the workflow, but I don't understand what the problem is.
I fixed the problem in the docstring example that caused the unit test failure. Can you rerun the tests and merge, please? @lvwerra Thank you!