PrecisionRecallDisplay
Precision Recall visualization.
It is recommended to use from_estimator or from_predictions to create a PrecisionRecallDisplay. All parameters are stored as attributes.
For general information regarding scikit-learn visualization tools, see the Visualization Guide. For guidance on interpreting these plots, refer to the Model Evaluation Guide.
Parameters
precision : Precision values.
recall : Recall values.
average_precision : Average precision. If None, the average precision is not shown.
estimator_name : Name of the estimator. If None, the estimator name is not shown.
pos_label : The class considered as the positive class. If None, the class will not be shown in the legend. Added in version 0.24.
prevalence_pos_label : The prevalence of the positive label, used for plotting the chance level line. If None, the chance level line will not be plotted even if plot_chance_level is set to True when plotting. Added in version 1.3.
Attributes
line_ : Precision recall curve.
chance_level_ : The chance level line. It is None if the chance level is not plotted. Added in version 1.3.
ax_ : Axes with precision recall curve.
figure_ : Figure containing the curve.
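As a minimal sketch (not part of the original examples), the display can be constructed directly so that the average precision and estimator name appear in the legend; the variable names below are illustrative and assume a fitted binary classifier.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (PrecisionRecallDisplay, average_precision_score,
                             precision_recall_curve)
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# Compute the curve and the average precision from decision scores.
y_score = clf.decision_function(X_test)
precision, recall, _ = precision_recall_curve(y_test, y_score)
ap = average_precision_score(y_test, y_score)

disp = PrecisionRecallDisplay(
    precision=precision,
    recall=recall,
    average_precision=ap,                 # shown in the legend when not None
    estimator_name="LogisticRegression",  # shown in the legend when not None
    pos_label=1,                          # the class treated as positive
)
disp.plot()
plt.show()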
Notes
The average precision (cf. average_precision_score) in scikit-learn is computed without any interpolation. To be consistent with this metric, the precision-recall curve is plotted without any interpolation as well (step-wise style).
You can change this style by passing the keyword argument drawstyle="default" to plot, from_estimator, or from_predictions. However, the curve will not be strictly consistent with the reported average precision.
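As a hedged sketch of the note above (the data and classifier are illustrative, matching the examples below), the step-wise style can be overridden like this:

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import PrecisionRecallDisplay
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# drawstyle is forwarded to matplotlib's plot; "default" joins the points
# with straight lines instead of the step-wise style.
PrecisionRecallDisplay.from_estimator(clf, X_test, y_test, drawstyle="default")
plt.show()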
Examples
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import (precision_recall_curve,
...                              PrecisionRecallDisplay)
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
...                                                     random_state=0)
>>> clf = SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> predictions = clf.predict(X_test)
>>> precision, recall, _ = precision_recall_curve(y_test, predictions)
>>> disp = PrecisionRecallDisplay(precision=precision, recall=recall)
>>> disp.plot()
<...>
>>> plt.show()
from_estimator
Plot precision-recall curve given an estimator and some data.
For general information regarding scikit-learn visualization tools, see the Visualization Guide. For guidance on interpreting these plots, refer to the Model Evaluation Guide.
Parameters
estimator : Fitted classifier or a fitted Pipeline in which the last estimator is a classifier.
X : Input values.
y : Target values.
sample_weight : Sample weights.
drop_intermediate : Whether to drop some suboptimal thresholds which would not appear on a plotted precision-recall curve. This is useful in order to create lighter precision-recall curves. Added in version 1.3.
response_method : Specifies whether to use predict_proba or decision_function as the target response. If set to 'auto', predict_proba is tried first and, if it does not exist, decision_function is tried next.
pos_label : The class considered as the positive class when computing the precision and recall metrics. By default, estimator.classes_[1] is considered as the positive class.
name : Name for labeling the curve. If None, no name is used.
ax : Axes object to plot on. If None, a new figure and axes is created.
plot_chance_level : Whether to plot the chance level. The chance level is the prevalence of the positive label computed from the data passed during the from_estimator or from_predictions call. Added in version 1.3.
chance_level_kw : Keyword arguments to be passed to matplotlib's plot for rendering the chance level line. Added in version 1.3.
despine : Whether to remove the top and right spines from the plot. Added in version 1.6.
**kwargs : Keyword arguments to be passed to matplotlib's plot.
Returns
display : PrecisionRecallDisplay.
Notes
The average precision (cf. average_precision_score) in scikit-learn is computed without any interpolation. To be consistent with this metric, the precision-recall curve is plotted without any interpolation as well (step-wise style).
You can change this style by passing the keyword argument drawstyle="default". However, the curve will not be strictly consistent with the reported average precision.
Examples
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import PrecisionRecallDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X, y, random_state=0)
>>> clf = LogisticRegression()
>>> clf.fit(X_train, y_train)
LogisticRegression()
>>> PrecisionRecallDisplay.from_estimator(
...     clf, X_test, y_test)
<...>
>>> plt.show()
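A further sketch (same setup as the example above) showing the chance level line and its styling; plot_chance_level and chance_level_kw require scikit-learn >= 1.3 as noted above.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import PrecisionRecallDisplay
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# The chance level is the prevalence of the positive label in y_test;
# chance_level_kw is forwarded to matplotlib for the chance level line.
PrecisionRecallDisplay.from_estimator(
    clf,
    X_test,
    y_test,
    name="LogisticRegression",
    plot_chance_level=True,
    chance_level_kw={"linestyle": "--", "color": "grey"},
)
plt.show()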
from_predictions
Plot precision-recall curve given binary class predictions.
For general information regarding scikit-learn visualization tools, see the Visualization Guide. For guidance on interpreting these plots, refer to the Model Evaluation Guide.
Parameters
y_true : True binary labels.
y_pred : Estimated probabilities or output of decision function.
sample_weight : Sample weights.
drop_intermediate : Whether to drop some suboptimal thresholds which would not appear on a plotted precision-recall curve. This is useful in order to create lighter precision-recall curves. Added in version 1.3.
pos_label : The class considered as the positive class when computing the precision and recall metrics.
name : Name for labeling the curve. If None, the name will be set to "Classifier".
ax : Axes object to plot on. If None, a new figure and axes is created.
plot_chance_level : Whether to plot the chance level. The chance level is the prevalence of the positive label computed from the data passed during the from_estimator or from_predictions call. Added in version 1.3.
chance_level_kw : Keyword arguments to be passed to matplotlib's plot for rendering the chance level line. Added in version 1.3.
despine : Whether to remove the top and right spines from the plot. Added in version 1.6.
**kwargs : Keyword arguments to be passed to matplotlib's plot.
Returns
display : PrecisionRecallDisplay.
Notes
The average precision (cf. average_precision_score) in scikit-learn is computed without any interpolation. To be consistent with this metric, the precision-recall curve is plotted without any interpolation as well (step-wise style).
You can change this style by passing the keyword argument drawstyle="default". However, the curve will not be strictly consistent with the reported average precision.
Examples
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import PrecisionRecallDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X, y, random_state=0)
>>> clf = LogisticRegression()
>>> clf.fit(X_train, y_train)
LogisticRegression()
>>> y_pred = clf.predict_proba(X_test)[:, 1]
>>> PrecisionRecallDisplay.from_predictions(
...     y_test, y_pred)
<...>
>>> plt.show()
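Another sketch (with illustrative data, mirroring the earlier examples) passes decision-function scores instead of probabilities; pos_label and name are optional and shown only for illustration.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import PrecisionRecallDisplay
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(random_state=0).fit(X_train, y_train)

# Decision-function scores can be passed instead of probabilities;
# pos_label states which class the scores refer to.
y_score = clf.decision_function(X_test)
PrecisionRecallDisplay.from_predictions(y_test, y_score, pos_label=1, name="SVC")
plt.show()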
plot
Plot visualization.
Extra keyword arguments will be passed to matplotlib's plot.
Parameters
ax : Axes object to plot on. If None, a new figure and axes is created.
name : Name of the precision recall curve for labeling. If None, use estimator_name if it is not None; otherwise no labeling is shown.
plot_chance_level : Whether to plot the chance level. The chance level is the prevalence of the positive label computed from the data passed during the from_estimator or from_predictions call. Added in version 1.3.
chance_level_kw : Keyword arguments to be passed to matplotlib's plot for rendering the chance level line. Added in version 1.3.
despine : Whether to remove the top and right spines from the plot. Added in version 1.6.
**kwargs : Keyword arguments to be passed to matplotlib's plot.
Returns
display : PrecisionRecallDisplay object that stores computed values.
Notes
The average precision (cf. average_precision_score) in scikit-learn is computed without any interpolation. To be consistent with this metric, the precision-recall curve is plotted without any interpolation as well (step-wise style).
You can change this style by passing the keyword argument drawstyle="default". However, the curve will not be strictly consistent with the reported average precision.
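As a minimal sketch of plot (reusing the setup of the earlier examples), a stored display can be redrawn on an existing Axes with a different label and line style; despine requires scikit-learn >= 1.6 as noted above.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import PrecisionRecallDisplay
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# Compute the curve once, then redraw the stored values on another Axes.
disp = PrecisionRecallDisplay.from_estimator(clf, X_test, y_test)
fig, ax = plt.subplots()
disp.plot(ax=ax, name="LogisticRegression (replotted)", linestyle="--", despine=True)
plt.show()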