Python OpenCV - BFMatcher() Function

Last Updated : 23 Jul, 2025

In this article, we will implement feature matching in Python OpenCV using the BFMatcher() function.

Prerequisites: OpenCV, matplotlib

What is BFMatcher() Function?

The BFMatcher() function is used for feature matching: it matches the features detected in one image with the features detected in another. BFMatcher stands for Brute-Force matcher. For each descriptor in the first set, it computes the distance to every descriptor in the second set by trying each one, and returns the closest match. The matcher also supports masking permissible matches between descriptor sets. So, to implement the function, our aim is to find, for each feature descriptor of one image, the closest descriptor from the set of features of the other image.
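
As a concrete illustration, here is a minimal, self-contained sketch of brute-force matching. The file names image1.jpg and image2.jpg stand for your own images, and this sketch uses a simpler alternative to the knnMatch()-plus-ratio-test approach shown later: ORB descriptors with NORM_HAMMING (the distance commonly recommended for binary descriptors) and crossCheck=True so that only mutual best matches are kept.

Python
# Minimal brute-force matching sketch (file names are placeholders)
import cv2

img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# detect keypoints and compute binary ORB descriptors
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# NORM_HAMMING suits binary descriptors; crossCheck keeps only mutual best matches
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

# sort so the nearest (smallest distance) matches come first
matches = sorted(matches, key=lambda m: m.distance)
print("number of matches:", len(matches))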

Sample input images:

image1    image2

Installation of required modules:

To install the prerequisite modules, run the following commands in a command prompt or terminal.

pip install opencv-python==3.4.2.16
pip install opencv-contrib-python==3.4.2.16
pip install matplotlib

Code Implementation:

Python
# Importing required modules
import cv2
import matplotlib.pyplot as plt

# reading images
img1 = cv2.imread("image1.jpg")
img2 = cv2.imread("image2.jpg")

# function for feature matching
def BFMatching(img1, img2):
    # Initiate ORB detector (limited to 5 features for this demo)
    feat = cv2.ORB_create(5)

    # find the keypoints and descriptors with ORB
    kpnt1, des1 = feat.detectAndCompute(img1, None)
    kpnt2, des2 = feat.detectAndCompute(img2, None)

    # BFMatcher with default parameters (NORM_L2; for binary
    # descriptors such as ORB, NORM_HAMMING is usually preferred)
    bf = cv2.BFMatcher()
    # finding matches from BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2) 
    # Apply ratio test
    good = []
    matched_image = cv2.drawMatchesKnn(img1, 
           kpnt1, img2, kpnt2, matches, None,
           matchColor=(0, 255, 0), matchesMask=None,
           singlePointColor=(255, 0, 0), flags=0)
    # creating a criteria for the good matches
    # and appending good matches to good[]
    for m, n in matches:
        # print("m.distance is <", m.distance, ">",
        #       "0.98*n.distance is <", 0.98 * n.distance, ">")
        if m.distance < 0.98 * n.distance:
            good.append([m])
    # in a Jupyter notebook, use this to display the output image:
    # plt.imshow(matched_image)

    # if you are running a plain Python script, use this instead:
    cv2.imshow("matches", matched_image)
    cv2.waitKey(0)

    # the prints below show the key points and matches
    # that are used by the program above
    print("key points of first image- ")
    print(kpnt1)
    print("\nkey points of second image-")
    print(kpnt2)
    print("\noverall features that matched by BFMatcher()-")
    print(matches)
    return ("good features", good)  # returning good features


BFMatching(img1, img2)
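
If you would rather visualize only the matches that passed the ratio test instead of all of them, you can pass the good list to cv2.drawMatchesKnn() inside BFMatching(), just before the return. This is a small optional variation on the code above, reusing the same variable names (img1, kpnt1, img2, kpnt2, good).

Python
# optional variation: draw only the matches that passed the ratio test
good_image = cv2.drawMatchesKnn(img1, kpnt1, img2, kpnt2, good, None,
                                matchColor=(0, 255, 0),
                                singlePointColor=(255, 0, 0), flags=0)
cv2.imshow("good matches", good_image)
cv2.waitKey(0)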

Explanation:

You can see the output below: the overall number of matches found by BFMatcher() is 5, whereas the number of good features kept by our ratio test is 4.
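
To verify these counts yourself, you can print the sizes of the two lists inside BFMatching(), just before the return. This is an optional addition that uses the variable names defined above.

Python
# optional: report how many matches passed the ratio test
print("overall matches:", len(matches))
print("good matches:", len(good))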

Output:

The output image plotted or shown by the program looks like this:

output: matched key points drawn between image1 and image2
