Compute the Ranking loss measure.
Compute the average number of label pairs that are incorrectly ordered given y_score, weighted by the size of the label set and the number of labels not in the label set.
This is similar to the error set size, but weighted by the number of relevant and irrelevant labels. The best performance is achieved with a ranking loss of zero.
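As a sketch of the prose definition above (the notation is introduced here for illustration: y_i is the binary indicator row for sample i, \hat{f}_{ik} the score assigned to label k of sample i, and \lVert y_i \rVert_0 the size of its label set), the loss can be written as

\text{ranking\_loss}(y, \hat{f}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}} - 1} \frac{1}{\lVert y_i \rVert_0 \, (n_{\text{labels}} - \lVert y_i \rVert_0)} \left| \left\{ (k, l) : \hat{f}_{ik} \le \hat{f}_{il},\ y_{ik} = 1,\ y_{il} = 0 \right\} \right|

The set counts relevant/irrelevant label pairs whose scores are ordered the wrong way (ties included), and the denominator is the total number of such pairs for that sample.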
Read more in the User Guide.
Added in version 0.17: A function label_ranking_loss
Parameters
y_true : {array-like, sparse matrix} of shape (n_samples, n_labels)
    True binary labels in binary indicator format.
y_score : array-like of shape (n_samples, n_labels)
    Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by "decision_function" on some classifiers). For decision_function scores, values greater than or equal to zero should indicate the positive class.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.
Returns
loss : float
    Average number of label pairs that are incorrectly ordered given y_score, weighted by the size of the label set and the number of labels not in the label set.
References
[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
Examples
>>> from sklearn.metrics import label_ranking_loss
>>> y_true = [[1, 0, 0], [0, 0, 1]]
>>> y_score = [[0.75, 0.5, 1], [1, 0.2, 0.1]]
>>> label_ranking_loss(y_true, y_score)
0.75
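To make the weighting described above concrete, the following is a minimal NumPy sketch of the same computation for dense binary-indicator input. The helper name ranking_loss_sketch is hypothetical and not part of scikit-learn, and samples whose label set is empty or complete are assumed here to contribute zero loss.

import numpy as np

def ranking_loss_sketch(y_true, y_score):
    # Illustrative re-implementation of the definition above; not the library code.
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    per_sample = []
    for truth, scores in zip(y_true, y_score):
        relevant = scores[truth]      # scores of labels in the label set
        irrelevant = scores[~truth]   # scores of labels not in the label set
        n_pairs = relevant.size * irrelevant.size
        if n_pairs == 0:              # empty or complete label set: assumed zero loss
            per_sample.append(0.0)
            continue
        # A pair is wrongly ordered when a relevant label does not outrank an irrelevant one.
        wrong = np.sum(relevant[:, None] <= irrelevant[None, :])
        per_sample.append(wrong / n_pairs)
    return float(np.mean(per_sample))

print(ranking_loss_sketch([[1, 0, 0], [0, 0, 1]],
                          [[0.75, 0.5, 1], [1, 0.2, 0.1]]))  # 0.75

On the doctest input above this reproduces 0.75: the first sample mis-orders one of its two relevant/irrelevant pairs (loss 0.5), the second mis-orders both (loss 1.0), and the average is 0.75.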