Build a text report showing the rules of a decision tree.
Note that backwards compatibility may not be supported.
Parameters

decision_tree : object
    The decision tree estimator to be exported. It can be an instance of DecisionTreeClassifier or DecisionTreeRegressor.
feature_names : array-like of str
    An array containing the feature names. If None, generic names will be used (“feature_0”, “feature_1”, …).
class_names : array-like of str
    Names of each of the target classes in ascending numerical order. Only relevant for classification; not supported for multi-output. If None, the class names are delegated to decision_tree.classes_; otherwise, class_names will be used in place of decision_tree.classes_. The length of class_names must match the length of decision_tree.classes_.
Added in version 1.3.
max_depth : int
    Only the first max_depth levels of the tree are exported. Truncated branches will be marked with “…”.
spacing : int
    Number of spaces between edges. The higher it is, the wider the result.
decimals : int
    Number of decimal digits to display.
show_weights : bool
    If True, the classification weights will be exported on each leaf. The classification weights are the number of samples of each class.
Returns

report : str
    Text summary of all the rules in the decision tree.
Examples
>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.tree import export_text
>>> iris = load_iris()
>>> X = iris['data']
>>> y = iris['target']
>>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
>>> decision_tree = decision_tree.fit(X, y)
>>> r = export_text(decision_tree, feature_names=iris['feature_names'])
>>> print(r)
|--- petal width (cm) <= 0.80
|   |--- class: 0
|--- petal width (cm) >  0.80
|   |--- petal width (cm) <= 1.75
|   |   |--- class: 1
|   |--- petal width (cm) >  1.75
|   |   |--- class: 2
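As noted above, the estimator can also be a DecisionTreeRegressor; in that case the leaves report predicted values instead of classes. A minimal sketch using the diabetes dataset (no feature_names passed, so the generic “feature_0”, “feature_1”, … names are used):

```python
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor, export_text

# Fit a shallow regression tree.
X, y = load_diabetes(return_X_y=True)
reg = DecisionTreeRegressor(random_state=0, max_depth=2).fit(X, y)

# For regression there are no classes, so each leaf shows a predicted value.
report = export_text(reg, decimals=2)
print(report)
```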