Random Forest Classifier using Scikit-learn

Last Updated : 15 Jul, 2025

Random Forest is a method that combines the predictions of multiple decision trees to produce a more accurate and stable result. It can be used for both classification and regression tasks.

In classification tasks, Random Forest Classification predicts categorical outcomes based on the input data. It uses multiple decision trees and outputs the label that has the maximum votes among all the individual tree predictions.

Working of Random Forest Classifier
  1. Bootstrap Sampling: Random rows are picked (with replacement) to train each tree.
  2. Random Feature Selection: Each tree uses a random set of features (not all features).
  3. Build Decision Trees: Trees split the data using the best feature from their random set. Splitting continues until a stopping rule is met (like max depth).
  4. Make Predictions: Each tree gives its own prediction.
  5. Majority Voting: The final prediction is the label that most trees agree on (see the sketch after this list).
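
To make these steps concrete, here is a minimal hand-rolled sketch of bootstrap sampling and majority voting built from plain decision trees. It illustrates the idea only and is not scikit-learn's internal implementation; the tree count of 25 is an arbitrary choice.

python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(42)

# Steps 1-3: train each tree on a bootstrap sample of the rows, with a
# random feature subset considered at every split (max_features='sqrt').
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))  # sample rows with replacement
    trees.append(DecisionTreeClassifier(max_features='sqrt').fit(X[idx], y[idx]))

# Steps 4-5: collect every tree's prediction, then take the majority vote.
all_preds = np.array([t.predict(X) for t in trees]).astype(int)  # shape: (n_trees, n_samples)
votes = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, all_preds)
print('Ensemble training accuracy:', (votes == y).mean())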
Implementing Random Forest Classification in Python

Before implementing the Random Forest classifier in Python, let's first understand its parameters.
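
The most commonly tuned parameters are annotated in this minimal sketch; the values shown are scikit-learn's defaults or common choices, not recommendations for any particular dataset.

python
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(
    n_estimators=100,     # number of trees in the forest
    criterion='gini',     # impurity measure used to choose splits ('gini' or 'entropy')
    max_depth=None,       # stopping rule: None grows trees until leaves are pure
    min_samples_split=2,  # minimum samples required to split an internal node
    max_features='sqrt',  # size of the random feature subset tried at each split
    bootstrap=True,       # train each tree on a bootstrap sample of the rows
    random_state=42,      # fixes the randomness for reproducible results
)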

Now that we know its parameters, we can start building the model in Python.

1. Import Required Libraries

We will import pandas, Matplotlib, Seaborn and scikit-learn to build the model and visualize its results.

python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
2. Import Dataset

For this we'll use the Iris dataset, which ships with scikit-learn. It contains measurements for three species of Iris flowers: sepal length, sepal width, petal length and petal width.

python
iris = load_iris()
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['target'] = iris.target

df

Output:

Iris Dataset

3. Data Preparation

Here we will separate the features (X) and the target variable (y).

python
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
4. Splitting the Dataset

We'll split the dataset into training and testing sets so we can train the model on one part and evaluate it on another.

python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
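
On a dataset this small it can also help to preserve the class balance in both splits; train_test_split supports this through its stratify argument (an optional refinement, not part of the original walkthrough).

python
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42,
    stratify=y,  # keep the three species in equal proportions in both splits
)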
5. Feature Scaling

Feature scaling puts all features on a similar scale, which is important for many machine learning models. Random Forest itself is largely insensitive to scaling, since tree splits depend only on the ordering of feature values, but scaling is good practice when the model is later combined with scale-sensitive ones.

python
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
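
Note that the scaler is fit on the training data only and then applied to the test data, which prevents information from the test set leaking into training. One common way to enforce this automatically (a sketch beyond the original steps) is scikit-learn's Pipeline:

python
from sklearn.pipeline import make_pipeline

# Assumes X_train/X_test are the raw, unscaled splits from step 4.
# The pipeline fits the scaler on the training data only, then reuses
# the same scaling parameters on anything passed to predict/score.
pipe = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=100, random_state=42))
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))  # mean accuracy on the test split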
6. Building Random Forest Classifier

We will create the Random Forest Classifier model, train it on the training data and make predictions on the test data.

python
classifier = RandomForestClassifier(n_estimators=100, random_state=42)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
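
Since the test split here contains only 30 samples, a single split can give an optimistic estimate. Cross-validation averages over several splits and gives a steadier check (an optional addition, not in the original tutorial).

python
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation on the full dataset; each fold trains a fresh forest.
scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=42), X, y, cv=5)
print(f'CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}')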
7. Evaluation of the Model

We will evaluate the model using the accuracy score and confusion matrix.

python
accuracy = accuracy_score(y_test, y_pred)
print(f'Accuracy: {accuracy * 100:.2f}%')

conf_matrix = confusion_matrix(y_test, y_pred)

plt.figure(figsize=(8, 6))
sns.heatmap(conf_matrix, annot=True, fmt='g', cmap='Blues', cbar=False, 
            xticklabels=iris.target_names, yticklabels=iris.target_names)

plt.title('Confusion Matrix Heatmap')
plt.xlabel('Predicted Labels')
plt.ylabel('True Labels')
plt.show()

Output:

Accuracy: 100.00%
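
A perfect score on 30 test samples is plausible for Iris but worth inspecting per class; scikit-learn's classification_report breaks the result down by species (an addition to the original steps).

python
from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for the three Iris species.
print(classification_report(y_test, y_pred, target_names=iris.target_names))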

Confusion Matrix

8. Feature Importance

Random Forest Classifiers also provide insight into which features were the most important in making predictions. We can plot the feature importance.

python
feature_importances = classifier.feature_importances_

plt.barh(iris.feature_names, feature_importances)
plt.xlabel('Feature Importance')
plt.title('Feature Importance in Random Forest Classifier')
plt.show()

Output:

Feature Importance in Random Forest Classifier

From the graph we can see that petal width (cm) is the most important feature followed closely by petal length (cm). The sepal width (cm) and sepal length (cm) have lower importance in determining the model’s predictions. This indicates that the classifier relies more on the petal measurements to make predictions about the flower species.
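
Impurity-based importances like these can overstate features with many distinct values. scikit-learn's permutation importance, which shuffles one feature at a time and measures the resulting drop in test accuracy, is a common cross-check (a sketch beyond the original tutorial).

python
from sklearn.inspection import permutation_importance

result = permutation_importance(classifier, X_test, y_test, n_repeats=10, random_state=42)
for name, imp in zip(iris.feature_names, result.importances_mean):
    print(f'{name}: {imp:.3f}')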

Random Forest can also be used for regression problems: see Random Forest Regression in Python.


